Dive Brief:
- Apple has reportedly restricted internal use of ChatGPT for some employees, citing concerns over the exposure of confidential corporate data, according to a Wall Street Journal article published Friday. Representatives for Apple and Microsoft did not immediately respond to requests for comment.
- Citing internal sources and documents, the report states Apple also told employees not to use GitHub's Copilot, the AI-based assisted coding tool.
- The company joins other large organizations that have reportedly moved to block internal use of generative AI. JPMorgan and Samsung have reportedly blocked or limited ChatGPT use for workers.
Dive Insight:
Concerns over data privacy represent a key stumbling block for enterprise plans to adopt generative AI, as quality outputs require detailed prompts that can expose sensitive information.
Providers have moved to address privacy concerns through product updates. Microsoft-backed OpenAI, maker of ChatGPT, added a new feature to the platform last month that lets users remove chat history.
Despite these worries, most executives say the potential benefits outweigh the risks, according to a Gartner survey. Nearly 1 in 5 companies have advanced pilots in place to adopt generative AI, or are in the production stage.
A broader concern is the impact of AI tools on society, especially as the technology develops faster than regulation. While voluntary guidance exists, such as the White House's Blueprint for an AI Bill of Rights, OpenAI CEO Sam Altman joined other industry leaders at a Senate Judiciary subcommittee hearing earlier this week to ask legislators to regulate generative AI use.
Executives must create and clearly communicate company policy around generative AI adoption to reduce the risks of misuse, analysts and experts previously told CIO Dive.