Editor’s note: The following is a guest post from Caroline O’Brien, chief data officer and head of product at Afiniti, and Kristin Johnston, associate general counsel, AI, privacy and security at Afiniti.
Generative AI seems to promise swifter innovation, improved efficiency and more productive employees.
Amid macroeconomic concerns in the US and Europe, tools like ChatGPT, Google’s Bard, Microsoft’s Bing and Salesforce’s Einstein GPT offer a tempting way forward for businesses under pressure to do more with less.
As leaders of responsible AI at a global contact center AI company, we see the upside of businesses using generative AI for rapid process improvements and content creation. But we also see the potential for harm.
Here are four things companies can do to ensure they’re using generative AI responsibly.
1. Develop a responsible AI governance framework and policy
While more than a third of companies globally report using AI within their organization, according to IBM data, most haven’t implemented processes to guarantee their AI is trustworthy and responsible.
When it comes to responsibly integrating generative AI into your business, a recommended starting point is creating a governance framework to ensure you’re using AI ethically and legally.
A responsible AI governance framework outlines the functions, processes and groups involved in developing and managing an effective responsible AI program. This framework requires a cross-functional team to ensure key stakeholders have a seat at the table: data governance, privacy, legal and information security, among others.
This governance effort requires establishing and overseeing your company’s responsible AI policy.
A responsible AI policy is a formal document that outlines your company’s ethical principles used to guide the development, procurement and deployment of an AI service.
In addition to applicable regulatory requirements, your policy should commit to at least the following three fundamental principles: fairness, transparency, and explainability.
- Fairness means that you proactively identify areas where an AI model may exhibit potential bias and discrimination. If you find it, you must correct it.
- Transparency is about being able to see enough of the data and processes to show how an AI model makes a decision.
- Explainability means an AI model’s decisions shouldn’t just be visible but also understandable and backed by evidence.
Transparency and explainability do not necessarily require exposing the provider’s intellectual property to the world. These principles are about providing stakeholders with a reasonable amount of information to understand the inputs and outputs of an AI model, including its impacts. This is central to building trust.
2. Understand both inputs and outputs
Responsible AI principles can touch every stage of your company’s data supply chain — starting with the data your generative AI models are drawing on.
One key risk is that AI-generated content, trained on historical data, may infringe on others’ intellectual property.
While generative AI can be highly efficient at automating time-intensive processes like creating marketing copy or images for your website or product packaging, it’s critical to understand what inputs are being used to generate such content.
It’s also vital to put steps in place to ensure proper oversight of the inputs and outputs of such content, to avoid any potential violation of copyright law.
Using chatbots to respond to customer queries is another function requiring responsible control of inputs and outputs. Because generative AI creates new content, the outputs of its models can have issues with accuracy and truthfulness.
Each time a new model is put into production, the data it uses should be documented, classified and assigned a risk category. It should also be continually monitored for discriminatory output.
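As a minimal sketch of what that documentation step could look like in practice, a simple model record might capture the data sources, their classification and a risk tier before deployment. The field names and risk tiers below are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class ModelRecord:
    """Illustrative record documenting a generative model before production."""
    model_name: str
    owner: str
    deployed_on: date
    data_sources: list[str]      # where training / grounding data comes from
    data_classification: str     # e.g. "public", "internal", "personal data"
    risk_tier: RiskTier
    monitoring_checks: list[str] = field(default_factory=list)


record = ModelRecord(
    model_name="marketing-copy-generator",
    owner="data-governance@yourcompany.example",
    deployed_on=date.today(),
    data_sources=["licensed stock imagery", "in-house product descriptions"],
    data_classification="internal",
    risk_tier=RiskTier.MEDIUM,
    monitoring_checks=["weekly bias review", "copyright spot checks"],
)
```

However the record is stored, the point is that every production model has a named owner, a documented data lineage and a risk tier that determines how closely its outputs are monitored.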
3. Measure the outputs
It’s not enough to track the data used to train your generative AI models. You also have to regularly evaluate their outputs to ensure they are functioning as intended and that any potential risks – like bias – are being managed.
One way of addressing this is to build measurability into your products and services from the outset. Develop well-defined metrics for bias, along with controls that let you spot when a particular AI algorithm might be producing biased results.
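As a minimal, hedged illustration of one such metric, the sketch below compares selection rates across groups and flags a large gap. The sample data and the 0.8 threshold are assumptions for demonstration, not a legal standard:

```python
from collections import defaultdict


def selection_rates(outcomes):
    """Share of positive outcomes per group.

    `outcomes` is a list of (group, got_positive_outcome) pairs.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in outcomes:
        totals[group] += 1
        positives[group] += int(positive)
    return {g: positives[g] / totals[g] for g in totals}


def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())


# Illustrative data: which customers a hypothetical AI routing model prioritised.
sample = [("group_a", True), ("group_a", False), ("group_b", True),
          ("group_b", True), ("group_a", True), ("group_b", False)]

rates = selection_rates(sample)
if disparate_impact_ratio(rates) < 0.8:  # illustrative threshold for your own review
    print(f"Potential bias detected: {rates}")
```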
For instance, at Afiniti we track how our own AI algorithms are serving our customers by intermittently switching our technology on and off.
This benchmarking ability doesn’t just allow us to demonstrate how our AI is performing for them. By using a randomized control group, it also allows us to detect whether the AI is, for example, exhibiting any racial or gender bias when it is switched on, so we can mitigate it.
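A simplified sketch of that kind of randomized on/off comparison might look like the following. The random assignment, the synthetic customers and the outcome functions are all illustrative assumptions, not our production system:

```python
import random
import statistics


def run_benchmark(customers, ai_outcome, baseline_outcome, on_share=0.5, seed=42):
    """Randomly assign customers to an AI-on or AI-off arm and report the
    average outcome per demographic group in each arm."""
    rng = random.Random(seed)
    results = {"on": [], "off": []}
    for customer in customers:
        arm = "on" if rng.random() < on_share else "off"
        outcome = ai_outcome(customer) if arm == "on" else baseline_outcome(customer)
        results[arm].append((customer["group"], outcome))

    summary = {}
    for arm, rows in results.items():
        by_group = {}
        for group, outcome in rows:
            by_group.setdefault(group, []).append(outcome)
        summary[arm] = {g: round(statistics.mean(v), 2) for g, v in by_group.items()}
    return summary


# Illustrative usage with synthetic customers and hypothetical outcome scores.
customers = [{"id": i, "group": "a" if i % 2 else "b"} for i in range(1000)]
print(run_benchmark(
    customers,
    ai_outcome=lambda c: 0.7 + random.random() * 0.3,
    baseline_outcome=lambda c: 0.5 + random.random() * 0.3,
))
```

Comparing the per-group lift between the two arms is what reveals whether the AI is helping some groups more than others when it is switched on.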
4. Keep human beings in the loop
The last step may sound obvious, but it is arguably the most important: a human being must be involved in overseeing generative AI. Develop a comprehensive review process for any AI-produced output, and empower employees to speak up if something’s not right.
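A minimal sketch of one such review gate is shown below; the confidence score, the threshold and the review queue are assumptions for illustration, and in practice the queue would be a ticketing or moderation tool rather than a list:

```python
REVIEW_QUEUE = []  # stand-in for a real ticketing or moderation system


def publish_with_review(draft, confidence, reviewer_threshold=0.9):
    """Only auto-publish AI output above a confidence threshold; otherwise
    route it to a human reviewer before it reaches a customer."""
    if confidence >= reviewer_threshold:
        return {"status": "published", "content": draft}
    REVIEW_QUEUE.append(draft)
    return {"status": "pending_human_review", "content": draft}


result = publish_with_review("Draft reply to customer query...", confidence=0.72)
print(result["status"])  # -> pending_human_review
```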
Generative AI systems are still new, and have already shown they are not always perfectly accurate or unbiased in their outputs. A human eye is vital to ensuring the technology is being responsibly deployed.
If a human pays close attention to the data inputs, and applies a critical eye to the outputs, then the technology you’re using is more likely to work in concert with your principles, instead of subverting them.