Dive Brief:
- The European Parliament voted Wednesday to approve the AI Act, the world’s first major set of regulatory ground rules for AI. The regulation sorts use cases by risk, separating unacceptable applications from those deemed high, medium or low risk.
- The set of rules bans AI applications that pose risks to citizens’ rights, such as biometric categorization systems, emotion recognition in the workplace and AI that manipulates human behavior. General-purpose AI systems and models will need to comply with EU copyright law and publish detailed summaries of the content used for training.
- The rules are still subject to a final check, as well as formal endorsement by the Council. Once the act is published in the Official Journal, its provisions will take effect in phases over the following six to 36 months.
Dive Insight:
As the regulatory framework takes shape, CIOs at global organizations operating in Europe should get comfortable with their compliance counterparts — if they haven’t already.
“There is a lot to do and little time to do it,” said Enza Iannopollo, principal analyst at Forrester, in an email. “The extraterritorial effect of the rules, the hefty fines, and the pervasiveness of the requirements across the AI value chain mean that most global organizations using AI must – and will – comply with the act.”
Meeting the requirements effectively will take strong collaboration among teams, from IT and data science to legal and risk management, along with close support from the C-suite, Iannopollo said.
Unlike other regulatory actions, such as the Biden administration’s executive order or voluntary commitments, the EU’s AI Act carries real consequences for organizations that break the rules. Noncompliant organizations face fines ranging from 7.5 million euros to 35 million euros ($8.2 million to $38 million), depending on the infringement and the size of the company.
The EU’s efforts, which predate the current wave of interest in AI, have accelerated in recent months.
“The fact that the EU brought this vote forward by a month also demonstrates that they recognize that the technology and its adoption is moving so fast that there is no time to waste, especially when there isn't an alternative framework available,” Iannopollo said.
Industry watchers and experts anticipate U.S. lawmakers will use the AI Act to shape forthcoming domestic policies on the technology, making it even more vital for organizations to assess how their processes line up against the new rules.
CIOs could also point to the enactment of the General Data Protection Regulation as both a precedent and a cautionary tale for businesses that weren’t ready for the change.
“The time for simply establishing sound risk governance and risk management AI programs is quickly passing – the time for implementing, operationalizing, demonstrating and sustaining effective risk practices is now,” said Amy Matsuo, principal and national leader, regulatory insights, at KPMG U.S.
In the U.S., federal agencies are getting to work on the AI executive order, launching requests for comment related to the technology, among other actions. Congress established a 24-member bipartisan task force in February to study guardrails and explore how lawmakers can support AI innovation.