Enterprises have a lot to consider before launching AI projects, from which use case to pursue to which guardrails to implement. Behind every good idea lies a battery of legal implications, requiring CIOs to tread carefully.
Compliance with the various laws and regulations on AI across countries, states and jurisdictions is no easy feat. Trying to future-proof projects only adds to the complexity.
“You’ve suddenly got to this point where all these different nuances need to be captured within governance programs, which is really hard,” said Martha Hirst, an associate in Debevoise & Plimpton’s data strategy and security team as well as its white-collar and regulatory defense group, during a Thursday panel at IAPP’s Global Privacy Summit.
Businesses should aim to create processes and governance that rein in AI’s risks while enabling innovation, panelists said during the event last week.
Here are four questions with legal implications that CIOs should consider when planning and moving forward with AI projects:
1. Is the AI use case high risk?
AI use cases are typically considered high risk if they involve critical infrastructure, employment opportunity or biometrics. Businesses may already have these types of use cases without realizing it, such as within HR functions.
“Companies are struggling to be able to use their traditional recruiting processes,” Cynthia Bilbrough, global head of HR privacy, digital and risk at GSK, said during a Thursday panel, referring to a spike in applications across open roles.
Enterprises could be breaking laws if leaders use AI to filter through applications. AI-produced results can contain bias, which could make businesses noncompliant with existing anti-discrimination laws, for example.
Enterprises will need to take extra precautions should leaders plan to move forward with high-risk use cases. Decision-makers should also become familiar with laws that require guardrails on automated decision-making to avoid fines and other repercussions.
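For teams vetting an AI screening tool, one common starting point is a disparate-impact check such as the four-fifths rule used in U.S. employment guidance: if any group’s selection rate falls below 80% of the highest group’s rate, the tool warrants closer review. The sketch below is a minimal illustration with made-up counts, not a compliance test on its own.

```python
# Hedged sketch: a four-fifths (80%) rule check on AI screening outcomes.
# Group labels and counts are hypothetical; a real audit needs legal and statistical review.
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())
    # A group "passes" only if its rate is at least 80% of the highest group's rate.
    return {group: rate >= 0.8 * benchmark for group, rate in rates.items()}

sample = {"group_a": (50, 200), "group_b": (30, 200)}  # made-up numbers
print(four_fifths_check(sample))
# group_b's rate (0.15) is 60% of group_a's (0.25), so it would be flagged for review.
```

A check like this only surfaces a red flag; whether the underlying tool can be used at all still depends on the jurisdiction’s rules on automated decision-making.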
2. In what jurisdiction will this tool be used?
Ensuring compliance with existing rules and regulations requires businesses to know exactly where an AI tool is being used. Laws vary across jurisdictions.
“There's already 140 privacy laws,” Christina Montgomery, VP and chief privacy and responsible technology officer at IBM, said during a panel. “We’ve got the EU AI Act. We’ve got every jurisdiction pretty much out there regulating AI at this point.”
Some organizations are navigating the diverging regulatory approaches by aiming for compliance with the highest standards.
“A siloed approach… just doesn’t cut it anymore,” Stephanie Westfield, senior privacy counsel at Ford Motor Company, said during a panel. “Now is the time to think about how you align with a global approach and find the highest watermark.”
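One way to operationalize that “highest watermark” idea is to encode each jurisdiction’s obligations and apply the strictest requirement everywhere. The jurisdictions, field names and thresholds below are hypothetical placeholders, not anything cited on the panel.

```python
# Hedged sketch: combine per-jurisdiction rules into a single, strictest global policy.
# Jurisdiction names and requirements are illustrative placeholders, not legal advice.
from dataclasses import dataclass

@dataclass
class Obligation:
    jurisdiction: str
    requires_human_review: bool   # e.g., for automated decision-making
    max_retention_days: int       # personal data retention limit

OBLIGATIONS = [
    Obligation("EU", requires_human_review=True, max_retention_days=365),
    Obligation("US-CA", requires_human_review=False, max_retention_days=730),
    Obligation("UK", requires_human_review=True, max_retention_days=540),
]

def highest_watermark(obligations: list[Obligation]) -> dict:
    """Take the strictest value for each requirement across all jurisdictions."""
    return {
        "requires_human_review": any(o.requires_human_review for o in obligations),
        "max_retention_days": min(o.max_retention_days for o in obligations),
    }

print(highest_watermark(OBLIGATIONS))
# -> {'requires_human_review': True, 'max_retention_days': 365}
```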
3. How is data used — and where does it come from?
Quality data makes AI better. But organizations have varying levels of comfort about what they want their AI systems trained on and what employees are allowed to put into them.
Brigette Guyer, global data protection, privacy and risk officer at The Wendy’s Company, said every project goes through a risk review process before kicking off. As part of the initial assessment, teams consider using real data versus using synthetic data, according to Guyer.
Synthetic data doesn’t contain actual personally identifiable information, putting organizations closer to compliance with privacy laws such as the European Union’s General Data Protection Regulation. If done correctly, using synthetic data could also reduce historical biases found in real-world data sets.
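As a simple illustration of that trade-off, a team might stand in synthetic records for real customer data during early experimentation. The field names below and the use of the third-party Faker library are assumptions about how such a step could look, not any company’s actual process.

```python
# Hedged sketch: generating synthetic stand-in records instead of real customer data.
# Field names are illustrative; assumes the Faker package is installed (pip install faker).
import random
from faker import Faker

fake = Faker()

def synthetic_customers(n: int) -> list[dict]:
    """Create records that look realistic but map to no actual person."""
    return [
        {
            "name": fake.name(),
            "email": fake.email(),
            "city": fake.city(),
            "avg_order_value": round(random.uniform(5, 40), 2),
        }
        for _ in range(n)
    ]

if __name__ == "__main__":
    for row in synthetic_customers(3):
        print(row)
```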
Businesses can also benefit from data-mapping exercises.
“Be organized,” Amber Cordova, AI, privacy and cybersecurity counsel at Vistra Corp., said in a panel discussion. “Data mapping is something nobody wants to do, and yet is so important and worth putting resources into.”
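In practice, even a lightweight, machine-readable data map helps answer the “how is data used and where does it come from” question. The entry structure below is a hypothetical sketch of what such a map could capture; the field names are assumptions, not a standard.

```python
# Hedged sketch: a minimal, machine-readable data map entry for an AI project.
# Field names and values are illustrative placeholders.
from dataclasses import dataclass, asdict
import json

@dataclass
class DataMapEntry:
    system: str                  # where the data lives
    data_categories: list[str]   # e.g., contact details, purchase history
    contains_pii: bool
    source: str                  # where the data came from
    jurisdictions: list[str]     # where it is collected or processed
    used_for_ai_training: bool
    retention_days: int

inventory = [
    DataMapEntry(
        system="crm_prod",
        data_categories=["contact details", "purchase history"],
        contains_pii=True,
        source="customer signup forms",
        jurisdictions=["EU", "US"],
        used_for_ai_training=False,
        retention_days=365,
    ),
]

print(json.dumps([asdict(entry) for entry in inventory], indent=2))
```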
4. Build or buy?
The need to manage third-party risk cannot be overstated, according to several experts speaking at IAPP’s event last week.
Technology leaders should understand how third-party organizations plan to use information entered into their systems and what guardrails are in place to limit bias, reduce risks and promote responsible practices.
Organizations often want to take risk management a step beyond what boilerplate contracts already cover, such as the copyright indemnification that leading providers offer to some extent.
Panelists talked about contractually requiring additional information from vendors and protection against potential AI mishaps. CIOs can lean on their privacy counterparts during negotiations to suss out avoidable risks.