WASHINGTON — AI was top of mind for many of the 5,000-plus privacy and risk management professionals gathered this week for IAPP’s Global Privacy Summit. The technology's adoption is quickly reshaping dynamics between privacy and IT professionals.
“This is a moment where we need new ways to think about privacy, given the shifts in the technology landscape,” OpenAI CEO Sam Altman said during the conference’s closing keynote Thursday.
From panels and passing conversations alike, it became clear that privacy pros want, and need, better collaboration with technology leaders and IT teams if businesses are to avoid the outsized risks of broader AI adoption.
Privacy teams have already grown closer to and more familiar with technologists due to the European Union’s seven-year-old General Data Protection Regulation and similar data privacy laws around the world. AI, however, calls for a new era of partnership between the two departments.
“You really need this collaboration of different experts to make compliance work,” Jenny Le, director at EY in Munich, said during a Wednesday panel. Businesses will not reach AI compliance because they simply have a good lawyer or IT pro; they need both at the table, Le added.

Creating and sustaining AI governance is a difficult task. Nearly half of organizations cite a lack of understanding of AI, its underlying technologies and compliance obligations as a top challenge, followed by unclear organizational expectations, according to IAPP’s 2025 AI governance report published this month.
Most global organizations are aligning AI compliance efforts with the EU’s AI Act, which analysts say could serve as a bellwether. Slightly more than half of businesses are somewhat or totally confident in their organization’s ability to comply with the AI Act, according to the IAPP report.
Federally, the United States is taking a hands-off approach to AI regulation. In that absence, individual states are stepping in with their own proposed rules. The diverging approaches have increased complexity, underlining the need for close collaboration between legal and technical experts.
“Almost every client I’ve ever talked to has more AI than they think they have, or at least [more than] their privacy teams know about,” Le said.
Teamwork makes the AI dream work
Technology providers have added countless AI features to existing solutions, and sometimes new capabilities are enabled by default. Organizations also have a host of experiments ongoing, making AI use even harder to track.
Speakers at the conference urged companies to add touchpoints between privacy and IT during AI development and procurement processes to improve accountability and track the technology within the business.
But CIOs and privacy chiefs don’t have to reinvent the wheel.
Randstad, for example, has added a mini AI assessment that employees complete before starting a project, according to Martin Woodward, the company’s director of global legal and global responsible AI officer. Historically, workers would answer a few questions related to security and privacy risk.
“Make it simple for your business partners,” Woodward said during a Thursday panel.
The stakes are high for business leaders to cultivate a collaborative culture and ensure employees are familiar with regulatory requirements so that well-thought-out strategies are executed as intended.
After all, AI mishaps come with consequences.
Depending on the nature, gravity and duration of the infringement, non-compliance with the EU AI Act can bring up to $39.8 million (35 million euros) in fines. Public perception should also be top of mind. Nearly three-quarters of shoppers surveyed by Everseen said they wanted grocers and retailers to increase transparency of AI use.
Speakers recommended leaders conduct tabletop exercises and create preliminary checklists for potential projects, in addition to building better relationships. CIOs and privacy chiefs who can engage their teams and underline the importance of responsible practices will see better results.
“By design exhaustion is a real thing: You’ve got security by design, safety by design, privacy by design,” Brandon Kerstens, VP, associate general counsel and chief privacy officer at Match Group, said during a Thursday panel.

Leaders have big ambitions about AI's impact on their business.
Most enterprises have identified more potential generative AI use cases than they could possibly fund, according to a Snowflake survey.
Enterprises’ latest target is agentic AI. Nearly all IT leaders are expanding AI agent use in the coming months, with half prepping for widespread implementation, according to a Cloudera survey. Financial services institutions plan to use the technology to improve fraud detection, risk assessment and investment allocation, while retailers plan to enhance customer service and demand forecasting with the tools.
For these aspirations to become a reality, governance is key.
“If you’re trying to go super fast, but with no clear intention, you’re basically creating tech debt,” Idriss Kechida, global data protection officer at OpenAI, said during a Thursday panel.
Without considering potential legal, security or regulatory ramifications during the AI build-or-buy process, tech teams set themselves up for duplicative work and exacerbate risks to the success and reputation of the business.
“The companies that will win [with AI] are also those that actually maintain trust,” Kechida said.