For enterprises operating in the European Union, there are new ground rules for AI development and integration, and the countdown to enforcement is underway.
Tech leaders and their organizations are working to understand where existing practices align with the provisions of the EU AI Act and how to bridge the remaining gaps.
U.K.-based Unilever kept the evolving regulatory landscape in mind as it pursued its growth goals, driven in part by the company’s “go wide and go deep” AI strategy. The consumer goods company has more than 400 brands, including Ben & Jerry’s ice cream, Axe deodorant and Dove personal care, and operates in 190 countries.
“'Go wide' is all about how we raise the fuel on productivity across our total employee base by bringing in and developing AI capabilities that are going to enable them to do their job better or easier,” Chief Data Officer Andy Hill told CIO Dive. This part of the strategy encompasses tooling, foundational capabilities, training and policies, he added.
The key to compliance with the EU AI Act — and other AI regulations on the horizon — lies in deploying and scaling responsible AI practices, C-suite collaboration and continuous risk mitigation, according to Unilever executives.
There’s not a one-size-fits-all approach or a single maneuver to master compliance. Instead, it’s a multistakeholder process that is always improving, Chief Privacy Officer Christine Lee said.
“We’ve used our privacy team, our legal team, our data and analytics teams, our business teams, our operation teams,” Lee said. “Everybody leans in to support our EU AI Act compliance program.”
Unilever has trained around 16,000 employees on generative AI so far this year. The training is tailored to how employees will use generative AI in their day-to-day work, ranging from introductory courses to ones focused on prompt-writing skills.
The initiative aligns with Article 4 of the EU AI Act, which requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff.
“The 'go deep' part is where we are identifying really big areas of opportunity that AI can help enable that will transform the organization and that we can scale to have a big impact on our sales, our productivity or in the way that we connect with consumers around the world,” Hill said.
The company has more than 500 AI projects globally, more than 330 of which are currently active.
Compliance in practice
Most enterprises grapple with modernization needs, pulling focus toward closing gaps in infrastructure, data ecosystems and security. Tech leaders anticipate tighter AI regulations will amplify those challenges.
More than half of executives expect compliance to increase costs and nearly two-thirds predict requirements will become more complex, according to a KPMG survey published last month.
Tech leaders have a role to play in AI initiatives, but successful organizations will encourage shared responsibility, Unilever executives recommended. Enterprises that dabbled in the technology before the rush of generative AI tend to have the upper hand.
“We have been running an AI deployment strategy for the last five years and it’s always integrated and embedded as part of the overall business strategy,” Hill said.
Unilever created an AI assurance process in 2019 to evaluate systems for ethical and performance-related risks and ensure alignment with the company’s responsible AI principles.
The EU AI Act calls for risk management programs associated with high-risk AI systems. AI applications go through Unilever’s assurance process after a risk has been identified.
“It’s a longstanding program that has very much evolved,” Lee said. When the program started, it focused on singular use cases, but has now matured to analyze alignment across different functions to deliver efficient data for reports and business plans, she added.
So far, around 150 prioritized projects have made it through the process. Around half of those AI applications needed some level of remediation.
The needed improvements ranged from fixing performance issues to mitigating unwanted bias and closing transparency gaps. “That requires collaboration, too,” Hill said.
High-risk AI systems must meet data governance criteria under Article 10 of the EU AI Act: the data sets used to develop them must be examined for possible biases, and appropriate measures must be in place to detect, prevent and mitigate those biases.
In addition to the assurance program, a cross-functional team at Unilever assesses the risks of potential use cases with new AI systems. Training has also gone a long way in underlining the importance of responsible AI use.
“Our training and education very much mirror our ‘go wide, go deep’ approach,” Lee said.
The company’s training ranges from basic best practices to university-level data science courses. An AI Lab is used to train smaller groups on AI literacy and responsible deployments.
The sharp focus on compliance and responsible use is guiding future initiatives as well. Unilever expanded its December partnership with Accenture this month to simplify its digital foundation and scale use cases.
Compliance requirements will likely evolve, but the work of adapting existing practices and assessing for gaps is already underway.
“We are in a really good place because we started early and we had that foundation of understanding,” Lee said.