Europe’s AI Act is setting the stage for a tone shift in how enterprises experiment with, deploy and monitor generative AI.
The EU AI Act, which is expected to go into effect in May or June, will “force companies to be adults about developing and deploying this technology,” Rayid Ghani, distinguished career professor in the Machine Learning Department and the Heinz College of Information Systems and Public Policy at Carnegie Mellon University, told CIO Dive. “There’s very little evidence that the majority of companies are doing any of this responsibly, and if they are, they can now prove it.”
Passed earlier this month, the AI Act emphasizes the use cases and corresponding risks of AI systems, separating unacceptable use cases from those that are high, medium or low risk.
Unacceptable use cases, including biometric categorization systems, emotion recognition in the workplace and AI that manipulates human behavior, are banned.
The international regulatory changes will require businesses exploring generative AI adoption to rigorously evaluate use cases through a risk lens. Organizations already deploying AI will need thorough audits to ensure compliance and avoid potential penalties.
The AI Act will go into effect 20 days after its publication in the EU's official journal. Businesses will have to comply with most rules by 2026, though bans on prohibited practices take effect after six months and obligations for providers of general-purpose AI models after 12 months.
Organizations that introduce or operate AI systems in the EU market, regardless of their home base, are subject to the rules.
It’s not the first time U.S. organizations have been disrupted by overseas rules on tech. The EU's General Data Protection Regulation, which took effect in 2018, set the standard for how businesses treat personal data, even if they operate outside of Europe.
Experts expect the AI Act to have a similar international influence. “We know that a lot of regulators in the U.S., Brazil and other countries have looked at the GDPR and made their own versions,” said Enza Iannopollo, principal analyst at Forrester.
There was an appetite for regulation then, and there’s an arguably larger appetite for regulating AI now, according to Iannopollo.
If U.S. organizations work to meet the EU AI Act’s requirements, tech leaders will likely find themselves on par with, or close to, compliance with regulatory mandates down the line, Iannopollo said.
Hunger for regulation
While GDPR is a model, it's not a perfect comparison.
“Regulating AI will be much more difficult than it has been regulating personal data, which also is very difficult, by the way,” Iannopollo said.
The rollout of GDPR wasn’t necessarily pretty, either, and many companies struggled to meet compliance targets before audits began.
Even companies that met the standards felt unprepared. More than two-thirds of businesses feared they couldn’t sustain compliance, according to a 2020 Tanium report, which was released nearly two years after GDPR took effect.
Some critics have also pointed out that fines don’t always curb the behavior of wealthy organizations.
“Europeans are very good at writing the rules, but they are not very good at enforcing the rules and I think that is true, especially when it comes to the size of the fines,” Iannopollo said.
The AI Act will levy fines ranging from 7.5 million to 35 million euros ($8.2 million to $38 million) for non-compliant organizations. To adequately enforce the rules, the EU will need to create different entities at the EU and member-state level, Iannopollo said.
“There is a real challenge in how long it will take for these networks to appear and be able to work,” Iannopollo said.
EU compliance officers will need time to cultivate AI skills, a snag that could slow down implementation.
The AI talent gap is widespread, and while generative AI jobs aren’t common, related postings are proliferating.
U.S. regulators have already acknowledged a need for upskilling to craft rules for the new technology. Congressional leaders held insight forums and subcommittee hearings to learn about the potential impact of AI and explore policy recommendations from stakeholders and tech CEOs.
What does it all mean for CIOs?
Though the U.S. has tried to manage the rapid onset of AI with a targeted executive order, regulatory frameworks and efforts to deter businesses from overpromising and underdelivering on AI, it will fall to Europe to kick-start AI enforcement.
CIOs can start prepping their organizations for the AI Act’s enforcement by getting a clear sense of which tools employees are using and for what purposes, Ghani said.
Amassing a tool inventory and identifying specific use cases will help organizations classify risk.
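For illustration only, here is a minimal sketch of what such an inventory might look like in code. The tier names loosely follow the AI Act's risk categories described above; the tool names, fields and review rule are assumptions, not a prescribed compliance method.

```python
from dataclasses import dataclass
from enum import Enum

# Risk tiers loosely modeled on the AI Act's classification scheme.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright, e.g. workplace emotion recognition
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

@dataclass
class AIUseCase:
    tool: str                 # hypothetical tool name, e.g. a vendor chatbot
    purpose: str              # what employees actually use it for
    tier: RiskTier
    uses_personal_data: bool  # personal data can raise a use case's effective risk

def flag_for_review(inventory: list[AIUseCase]) -> list[AIUseCase]:
    """Return use cases needing immediate attention: banned practices,
    high-risk systems, or anything touching personal data."""
    return [
        uc for uc in inventory
        if uc.tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH) or uc.uses_personal_data
    ]

if __name__ == "__main__":
    inventory = [
        AIUseCase("vendor-chatbot", "customer support drafts", RiskTier.LOW, False),
        AIUseCase("hr-screening-model", "resume ranking", RiskTier.HIGH, True),
    ]
    for uc in flag_for_review(inventory):
        print(f"Review needed: {uc.tool} ({uc.tier.value})")
```

Even a simple structure like this surfaces the point Iannopollo makes next: a use case that looks low-risk on its face can be flagged once personal data exposure is taken into account.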
In some cases, seemingly low-risk use cases can actually be much more complex, Iannopollo said, especially when tech leaders start to consider how much personal data is actually being exposed.
Some organizations are already thinking about AI and generative AI deployment with a use-case-focused lens rather than governing with a one-size-fits-all strategy.
Venkat Achanta, EVP, CTO and chief data and analytics officer at TransUnion, is bullish about what AI can do for the business.
TransUnion unveiled its AI and ML platform OneTru earlier this month. Sitting at the core of the business, the platform provides a single environment for employees to solve complex data issues, improve identity resolution and enhance collaboration among teams.
But that doesn’t mean the company is embedding generative AI into every business function, department or solution. Careful consideration for the use case is vital.
"We don't use the same techniques in credit because it has to be very explainable," Achanta said. "We use some explainable ML techniques, but we're not comfortable with a black box or generative AI ... whereas, in fraud, you can go after an objective function, even if it is less explainable.”