The European Union has raised the bar for AI governance, security and risk management as the provisions in its AI Act enter into force Aug. 1.
The AI Act’s rules and guardrails assign AI applications to one of four risk categories:
- Unacceptable-risk uses are banned outright, including social scoring, voice-assisted toys that encourage dangerous behavior, facial recognition databases built through untargeted internet scraping and emotion inference in workplaces and schools.
- High-risk use cases carry extensive obligations, including AI used in non-banned biometrics, critical infrastructure, and recruitment and employment.
- Limited-risk AI systems, such as chatbots, are subject to lighter transparency obligations.
- Minimal-risk systems, such as spam filters and AI-enabled video games, make up the rest and are largely left unregulated.
Non-compliance can carry fines of up to $37.8 million (35 million euros) or 7% of global annual turnover, whichever is higher. Regulators will take the nature, gravity and duration of the infringement into account. Supplying enforcers with incomplete or misleading information, for example, is subject to fines of up to $8.1 million (7.5 million euros) or 1% of global annual turnover.
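For rough exposure planning, those ceilings work as a maximum of a fixed amount and a share of worldwide annual turnover. Below is a minimal Python sketch of that arithmetic, limited to the two tiers cited above; the tier keys and function name are illustrative, not drawn from any official tooling.

```python
# Illustrative sketch of the AI Act fine ceilings cited above.
# Tier names and the helper function are hypothetical; the euro
# amounts and turnover percentages follow the Act's penalty tiers.
FINE_TIERS_EUR = {
    # banned (unacceptable-risk) practices: EUR 35M or 7% of turnover
    "prohibited_practice": (35_000_000, 0.07),
    # incomplete or misleading information to enforcers: EUR 7.5M or 1%
    "misleading_information": (7_500_000, 0.01),
}

def max_fine_eur(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the fine ceiling: the fixed amount or the turnover
    percentage, whichever is higher."""
    fixed_cap, turnover_pct = FINE_TIERS_EUR[tier]
    return max(fixed_cap, turnover_pct * worldwide_turnover_eur)

# A company with EUR 2B in turnover: 7% (EUR 140M) exceeds the EUR 35M floor.
print(max_fine_eur("prohibited_practice", 2_000_000_000))  # 140000000.0
```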
The clock is ticking for providers and enterprises deploying AI systems to become compliant as they face monetary consequences and investigations by regulators. Analysts anticipate the EU rules will set the global standard for how businesses treat AI systems, comparing the impact to that of the General Data Protection Regulation.
In response, enterprise tech leaders have grown closer to their compliance, privacy and legal C-suite counterparts as priorities shifted to AI adoption. The partnership has become critical as enterprises try to find the best way forward.
Unilever emphasized the need for full visibility of its AI estate to ensure the organization kept pace with recent frameworks, including the EU AI Act and the White House’s AI executive order, according to a Tuesday blog post.
Vigilance and scalable processes that can adapt to developing regulations are key, Chief Data Officer Andy Hill and Chief Privacy Officer Christine Lee said in the post.
Ahead of the EU AI Act’s enforcement, CIO Dive broke down what to expect in the coming months and years:
When do the rules take effect?
While the AI Act takes effect Aug. 1, its provisions become enforceable in phases, beginning with the prohibitions on unacceptable-risk systems in February 2025.
When will the EU begin enforcing the rules?
The European Commission’s European AI Office will enforce the rules around general-purpose AI models, while oversight of other AI systems will fall to national enforcers, according to a Wilson Sonsini report. Each EU country has one year from the AI Act’s entry into force to designate its enforcement authorities.
Enforcers will take a phased approach over a two-year transitional period.
Key months to watch:
- August 2024: The AI Act takes effect.
- February 2025: Prohibitions on unacceptable AI use cases apply.
- May 2025: The AI Office and Board will share the codes of practice, including the level of detail needed for training-content summaries.
- August 2025: Providers of general-purpose AI models in the EU must publish a summary of the content used to train their models; EU officials are expected to release templates for the disclosure before the deadline. Technical documentation addressing testing processes and evaluations is also required, and providers must put a policy in place to comply with copyright and other related rights. Additional obligations apply to providers of general-purpose AI models that fall under the systemic risk category.
- February 2026: The Commission will provide guidelines specifying practical implementation, with a comprehensive list of examples of high-risk and non-high-risk use cases.
- August 2026: The remainder of the AI Act will apply.
- August 2027: General-purpose models already on the market by August 2025 must become compliant by this date.
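Compliance teams tracking this schedule sometimes encode it as data so upcoming deadlines can be surfaced automatically. A minimal sketch, assuming the milestones above; the structure, names and day-of-month values are illustrative placeholders rather than official dates.

```python
from datetime import date

# Milestones from the timeline above. Day-of-month values are
# placeholders; the dictionary and function names are illustrative.
AI_ACT_MILESTONES = {
    date(2024, 8, 1): "AI Act takes effect",
    date(2025, 2, 1): "Prohibitions on unacceptable AI use cases apply",
    date(2025, 5, 1): "AI Office and Board share codes of practice",
    date(2025, 8, 1): "General-purpose AI transparency, documentation and copyright duties apply",
    date(2026, 2, 1): "Commission guidelines on high-risk examples due",
    date(2026, 8, 1): "Remainder of the AI Act applies",
    date(2027, 8, 1): "Deadline for general-purpose models already on the market",
}

def upcoming_milestones(today: date) -> list[str]:
    """Return milestones on or after `today`, soonest first."""
    return [
        f"{day.isoformat()}: {label}"
        for day, label in sorted(AI_ACT_MILESTONES.items())
        if day >= today
    ]

print("\n".join(upcoming_milestones(date.today())))
```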