Dive Brief:
- There was “a significant issue” in OpenAI’s ChatGPT due to a bug in an open source library, OpenAI CEO Sam Altman said in a tweet Wednesday. Altman said a “small percentage” of users were able to see the titles of other users’ conversation history.
- Some ChatGPT users began flagging unusual chat history issues Monday in OpenAI's Discord server. OpenAI did not respond to requests for comment.
- While OpenAI said a fix has been released and validated, users’ past conversation history is not fully restored, Altman said in a tweet. The ChatGPT web interface outage lasted nearly four hours, and the issue was marked resolved Monday evening, according to the company’s status website.
Dive Insight:
Although it has quickly dominated news cycles, attention spans and popular culture, ChatGPT is less than four months old. The recent bug highlights the need for businesses and employees to proceed with caution when using the tool, especially if sensitive company data becomes an input.
“When we see new technologies evolve quickly, there are almost always bugs and unintended behaviors,” Steve Grobman, SVP and CTO at McAfee, said in an email. “While we identify ways to harness the power [of] ChatGPT and similar technology, we also need to temper the pace of using it in applications and situations that deal with sensitive data, or the output of the model could have negative consequences.”
The problem for a lot of businesses, though, is that employees are using AI tools without IT approval or notice. Nearly 7 in 10 employees said they were using AI tools without first informing their bosses, according to Fishbowl data.
While data submitted by users via the ChatGPT API is not used to improve and train OpenAI's models, the situation is different for non-API customers using the web-based ChatGPT.
For non-API users accessing products like ChatGPT and Dall-E, prompts, responses, uploaded images and generated images are used to improve and train the model, creating an issue for companies where shadow IT is prevalent.
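The distinction matters in practice: routing prompts through the API rather than the web interface keeps them out of the training pipeline under OpenAI's stated policy. A minimal sketch of such an API call is below, assuming the public chat completions endpoint and model name as documented at the time; the prompt content and environment-variable name are placeholders for illustration.

```python
import json
import os

# Hypothetical sketch: send a prompt via the ChatGPT API rather than the
# web interface. Per OpenAI's stated policy, API submissions are not used
# to train models, unlike web-UI prompts.
API_URL = "https://api.openai.com/v1/chat/completions"

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        # Placeholder prompt; real use would carry the sensitive input.
        {"role": "user", "content": "Summarize this internal memo."},
    ],
}

api_key = os.environ.get("OPENAI_API_KEY")  # assumed to be set by the caller
if api_key:
    import urllib.request

    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
else:
    # No key configured: show the request that would be sent.
    print("OPENAI_API_KEY not set; payload:", json.dumps(payload))
```

Note this only addresses the training-data question; as the incident itself shows, data sent through either channel is still subject to bugs on the provider's side.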
“Consider that any data that is shared will be harvested by the large language model and made available to any and all future requests,” Bill Wong, principal research director at Info-Tech Research Group, said in an email. “Given that risk, many companies have advised their employees to limit their use of ChatGPT to totally banning the use of it.”
ChatGPT bans have made headlines as more executives worry about employees putting sensitive company information into the tool.
“This is a huge reminder that ChatGPT is in [the] early ‘beta’ stages while attracting huge usage and attention but regardless should not be depended on for maintaining any level of confidentiality or security,” Anurag Lal, CEO and president of enterprise communication platform Netsfere, said in an email. “If you share your information, you do so at your own risk. Based on the current usage trajectory, these kinds of issues will continue to come to light.”
When developing a policy on the use of generative AI applications, Wong suggests companies:
- Align an AI strategy with the organization’s mission (this is driven by the business stakeholders, not IT).
- Develop a set of responsible AI guiding principles that govern the development and use of generative AI.
- Assess the viability of adopting any generative AI application by evaluating it against business criteria that are aligned with the organization's mission and values, responsible AI principles, and the complexity of the project.
- Document and communicate policies that govern the use of generative AI applications for the use cases being assessed.
Shadow IT isn’t just a tech problem; HR leaders are starting to think about where guidance plays a role in curbing usage. Nearly half of HR leaders said they are in the process of formulating guidance on employees’ use of OpenAI’s ChatGPT, according to Gartner data.