Dive Brief:
- European Parliament and Council negotiators have established obligations for AI systems based on potential risks and levels of impact as part of a provisional agreement tied to the European Union’s AI Act, the group said Friday.
- Developers of general purpose AI models must draw up technical documentation, comply with EU copyright law and disseminate detailed summaries about training content, according to the initial agreement.
- Noncompliant developers will face fines ranging from 7 million to 35 million euros, depending on the infringement and the size of the company.
Dive Insight:
The EU's AI Act aims to enable the safe development and use of AI without stifling innovation, a balance other governing bodies are also trying to strike as the technology evolves.
However, there are some differences between the AI Act and other regulatory efforts.
“The EU AI Act has a product-based approach, and this means that each AI product needs to be classified based on the risks it brings, with more stringent controls wrapped around high-risk AI,” said Ivana Bartoletti, global chief privacy officer at Wipro and cofounder of think tank Women Leading in AI, in an email. “The requirement for conformity assessments as fundamental rights assessment stems from here: this is a product-based legislation, not context-specific, like the U.S. approach.”
The EU also clearly states penalties for noncompliance, while the Biden administration’s executive order does not.
“As such, it is possible that the U.S. Congress and the U.S. federal government will use the EU AI Act as a template to pass similarly granular rules and regulations, and enforcement mechanisms,” Avivah Litan, distinguished VP analyst at Gartner, said in an email.
“There are still issues with the EU AI Act – for example, human rights organizations argue that there are too many exemptions for law enforcement agencies, and those arguments and rules need to be ironed out in the coming years,” Litan said.
The EU’s proposed regulations prohibit certain applications of AI, including biometric categorization systems that use sensitive characteristics, untargeted scraping of facial images, emotion recognition in the workplace, social scoring based on behavior or characteristics, AI that circumvents human free will and AI used to exploit people's vulnerabilities, according to the agreement.
However, there are several law enforcement exemptions, including use of the technology for targeted searches, the prevention of terrorist threats, and the location and identification of suspects.
AI systems that are classified as high-risk will undergo a mandatory rights impact assessment, which is also applicable to the insurance and banking sectors, according to the agreement.
“The new legislation is mostly of concern for AI vendors, however, organizations using AI systems may find they need to respond to new obligations if they are classified as a ‘high-risk system,’ and the financial industry is specifically called out here,” said Brian Jackson, principal research director at Info-Tech Research Group.
The provisional agreement and proposed regulations were the result of lengthy negotiations that lasted until late Friday. The guardrails and obligations also emerged from repeated iterations, starting with the first draft of the act, released in 2021, before the recent wave of generative AI adoption.
“The deal reached on Friday night includes significant requirements for GPAI, and brings GPAI providers under the direct supervision of a new AI Office that will be created within the European Commission,” said Laura De Boel, partner at Wilson Sonsini Goodrich & Rosati, in an email. “Similar to the [General Data Protection Regulation], I expect the AI Act to have a significant impact on non-EU companies that have a nexus with the EU.”
The EU's next step is to formally adopt the regulations as EU law; Parliament's Internal Market and Civil Liberties committees will vote on the agreement in a forthcoming meeting, according to the announcement.
“We're still a few steps away from enforcement of AI regulations, but … the time is now for organizations to develop a responsible AI framework and be ready to put in place effective governance as they deploy use cases,” Jackson said in an email.