Dive Brief:
- As enterprise AI adoption advances, C-suite leaders are torn on whether innovation, responsibility or a balanced approach should be the priority, according to NTT Data. The IT services firm surveyed 2,300 global C-suite leaders and decision-makers for a report published Wednesday.
- Nearly one-third of leaders surveyed said they prioritize innovation over responsibility, while a similar proportion said responsibility should be the priority. The remaining group prized responsibility and innovation equally, according to the report.
- Despite differences in their approach, 3 in 5 leaders agree there's a significant gap between innovation and responsibility (a category spanning safety, sustainability, ethics and inclusivity) in AI deployment.
Dive Insight:
While boardroom leaders disagree on how to lead their organizations through AI adoption, a similar rift is playing out in the global political landscape.
U.S. Vice President JD Vance called for a sharper push toward AI deregulation, speaking Tuesday in Paris before a gathering of government leaders and industry organizations.
“America wants to partner with all of you,” Vance said during a keynote speech at the AI Action Summit. “But to create that kind of trust, we need international regulatory regimes that foster the creation of AI technology rather than strangle it, and we need our European friends, in particular, to look to this new frontier with optimism.”
The Trump administration undid several Biden-era initiatives on AI oversight through two executive orders signed in January.
Europe, however, continues its tighter approach to AI regulation. The EU AI Act, which went into effect last year, shapes how businesses can use the technology and how model developers should operate.
Unclear government regulation hurts enterprises' ability to innovate, according to most respondents in the NTT Data survey. More than 4 in 5 respondents said government regulations on generative AI specifically are unclear, a lack of clarity that deters further investment in the technology.
As the Trump administration continues to hone its approach to tech oversight, questions remain over the future of the voluntary generative AI guardrails put forth by the National Institute of Standards and Technology.
In 2024, NIST issued a set of about 200 risk-mitigating actions for organizations, which addressed common risks associated with generative AI, including information integrity, security, bias and environmental impacts.