ChatGPT has made a name for itself as a productivity and efficiency booster in the workplace. But if managers and leaders in an enterprise do not communicate how ChatGPT and similar tools on the market should be used, unguided adoption can create security and data privacy risks.
While OpenAI offers a ChatGPT API through which data entered into the model is not used for training purposes, the API has yet to see broad adoption. Instead, it is more common for individual employees to use the consumer ChatGPT tool without explicit permission.
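For teams that do take the API route, the call itself is simple. Below is a minimal sketch using OpenAI's official Python package (version 1 or later); the model name and prompts are illustrative, and the client assumes an OPENAI_API_KEY environment variable is set.

```python
# Minimal sketch: routing usage through OpenAI's API instead of the consumer
# ChatGPT interface. Assumes the `openai` package (v1+) is installed and an
# OPENAI_API_KEY environment variable is set. Model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; pick a model per your organization's policy
    messages=[
        {"role": "system", "content": "You are an internal drafting assistant."},
        {"role": "user", "content": "Summarize this meeting agenda in three bullets."},
    ],
)
print(response.choices[0].message.content)
```

Per OpenAI's stated policy, data submitted through the API is not used for model training by default, which is what separates this route from pasting text into the consumer tool.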
While OpenAI did add more data privacy guardrails, appeasing regulators in Italy along the way, protections within the tools are not a replacement for corporate guidance.
It’s critical for businesses to craft and communicate a policy to employees about using ChatGPT and other similar tools because many are already using them. Without guidance, employees could unknowingly put sensitive corporate information into the model or repurpose generated content and present it as their own.
“As long as you have team members using these public models, you have to give them a framework for usage right away,” Maya Mikhailov, co-founder of SAVVI AI, said. “That should be your first priority.”
To create a policy that holds up, here are three key steps to consider.
Connect with key stakeholders
Before an organization can formulate a policy, CIOs and tech leaders need to connect with other business unit leaders to gauge interest levels, possible use cases and risks.
Simply banning ChatGPT may work as a temporary policy for now, Mikhailov said, but if a company is a Microsoft or Google customer, these models are becoming embedded in the software they’ve already purchased.
“[Organizations] need to think about an information security policy yesterday, because the convenience that these tools offer is so tremendous,” Mikhailov said.
More than two-thirds of workers said they were using AI tools without first informing their bosses, according to a Fishbowl survey of nearly 11,800 users.
CIOs should meet with legal, compliance, IT, risk management, privacy, data analytics, security and line of business teams to ensure policies represent the needs and requirements of the organization, according to Gartner research.
“Companies seeking to implement generative AI must exercise strong governance practices and adhere to well-defined guidelines,” Suma Nallapati, CIO at Insight Enterprises, said in an email. “Privacy, data security and algorithmic transparency within AI models must all be top of mind to mitigate any risks related to ethical and legal compliance.”
CIOs and technologists need to communicate to colleagues outside the tech team what the tool can and cannot do, to set expectations. The risks of using out-of-the-box ChatGPT include fabrications, factual errors, biased or off-base responses, potential copyright violations and sensitive data exposure, according to Gartner research.
Everything generated by an AI model should be treated as a first draft, Avivah Litan, analyst at Gartner, said.
“You should have domain experts review the quality and accuracy of the information before it’s sent to anyone, whether it’s customers, partners or even other employees,” Litan said.
Ask, assess, adapt
When assessing whether a use case is acceptable in an enterprise, there is a framework businesses can follow based on already established goals and risk tolerance, according to Bill Wong, principal research director at Info-Tech Research Group.
Businesses that are not constrained by tight budgets or resource allocation have more room to experiment, but enterprises with other priorities may choose to be more cautious when assessing use cases. If budgets and resources are a factor, companies should first pursue high customer impact, low complexity use cases, Wong said.
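To make that triage concrete, a team could score candidate use cases and rank them before committing resources. The sketch below is illustrative only; the example use cases, 1-to-5 scores and ranking rule are assumptions, not part of Wong's framework.

```python
# Illustrative triage: pursue high customer impact, low complexity use cases
# first. The entries and 1-5 scores are hypothetical examples.
use_cases = [
    {"name": "FAQ chatbot",       "impact": 4, "complexity": 2},
    {"name": "Contract analysis", "impact": 5, "complexity": 5},
    {"name": "Marketing drafts",  "impact": 3, "complexity": 1},
]

# Rank by impact (descending), breaking ties with complexity (ascending).
ranked = sorted(use_cases, key=lambda u: (-u["impact"], u["complexity"]))
for u in ranked:
    print(f'{u["name"]}: impact={u["impact"]}, complexity={u["complexity"]}')
```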
Leaders should ask:
- Does this use case align with the business?
- Does this use case follow my organization’s responsible and ethical AI guidelines?
- Is this use case feasible for my organization?
Executives need to assess whether the use case aligns with the value proposition communicated to customers, complies with regulatory and legal requirements, and carries risks the organization can take on, according to Wong.
“Companies should put safeguards on their brand identity, provide education for their employees about all the risks and use AI technology in an ethical and responsible manner,” Yulia Groza, VP of e-commerce technology at Levi Strauss, said in an SAP webinar in April.
When deciding if a use case is feasible, leaders should evaluate the likelihood of success, implementation complexity and time sensitivity, according to Wong.
If the use case falls outside the framework, businesses should stop and reevaluate, Wong said.
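One lightweight way to operationalize that gate is as a checklist in code, as in the hypothetical sketch below. It models the three questions plus two of the feasibility factors Wong cites (likelihood of success and implementation complexity; time sensitivity could be added the same way). The field names and thresholds are assumptions for illustration, not Info-Tech's methodology.

```python
# Hypothetical go/no-go gate for a proposed generative AI use case.
# Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    aligns_with_business: bool      # question 1: business alignment
    meets_ai_guidelines: bool       # question 2: responsible/ethical AI
    likelihood_of_success: float    # feasibility: 0.0-1.0, team estimate
    implementation_complexity: int  # feasibility: 1 (low) to 5 (high)

def is_acceptable(uc: UseCase) -> bool:
    # Fail fast on alignment and ethics, then check feasibility.
    if not (uc.aligns_with_business and uc.meets_ai_guidelines):
        return False
    return uc.likelihood_of_success >= 0.6 and uc.implementation_complexity <= 3

print(is_acceptable(UseCase("FAQ chatbot", True, True, 0.8, 2)))  # True
```

A use case that returns False here maps to Wong's stop-and-reevaluate step.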
Every organization will have a different comfort level as it relates to employee use of generative AI at work.
“We are working on a roadmap because every company should decide what’s right for itself,” Lea Sonderegger, chief digital officer at Swarovski, said during an April SAP webinar. “For us, it’s important that AI also fits with the luxurious experiences our customers expect.”
Humans should get the final word in decision-making, and Swarovski wants to ensure the technology doesn’t overwhelm the customer.
Ultimately, it’s about using AI where it fits and not where it doesn’t, Sonderegger said.
“It’s worth repeating that technology should never be adopted just for the sake of it,” Nallapati said.
Language for the policy
Businesses can create a policy that simply restricts employees from reproducing AI-generated content, or one that outlines specific acceptable use cases, Wong said.
A guideline restricting employee use could include language such as, “You are free to use Generative AI in your research, but there are no circumstances under which you are to reproduce Generative AI (text, image, audio, video, etc.) output in our research,” according to Wong’s research.
Alternatively, organizations can define how ChatGPT should be used by including specific guidance, such as:
- ChatGPT should augment, not replace, research.
- If you use ChatGPT, assess the accuracy of its responses, check for bias and judge its relevance.
- Be transparent about how ChatGPT is being used.
After a policy is established and communicated, businesses should work to educate employees on what can happen if they misuse the technology.
“Historically, when you tell people you cannot use it in any shape or form, people find workarounds,” Wong said. “One way to govern it is through education and saying, ‘Listen, I know it’s productive here, but do you really want our competitors to understand our supply chain algorithms?’”