Dive Brief:
- Eight additional software and technology companies have agreed to participate in a White House initiative on AI risk management, the Biden administration announced Tuesday.
- Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI and Stability AI committed to test AI products before releasing them to the public, share information on AI risk management and prioritize research on the public-facing risks of AI.
- The new group joins Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI, which made a similar voluntary pledge in July to curb AI risk.
Dive Insight:
As AI development accelerates, business leaders and government stakeholders have voiced concern over the ethical implications of the technology and its potential risks to the public.
The White House called the addition of the eight businesses to its pledge a "bridge to government action."
"These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI — safety, security, and trust — and mark a critical step toward developing responsible AI," the White House said in an announcement.
The White House is preparing further AI oversight, including an executive order currently in development, and is pursuing bipartisan legislation on AI.
However, the White House is not alone in its push toward AI oversight. Senate Majority Leader Chuck Schumer, D-NY, convened a bipartisan congressional summit Wednesday with top AI leaders, including Alphabet CEO Sundar Pichai and Meta CEO Mark Zuckerberg.
In June, Schumer also unveiled a legislative framework for AI, urging his congressional colleagues to move forward on regulation as the technology rapidly evolves.
At the state level, too, California Gov. Gavin Newsom directed state agencies to study AI use and propose a framework for how the technology is deployed by state institutions.
Businesses haven't stopped experimenting with the technology despite concerns. More than half of leaders cite inaccuracies and cybersecurity as areas of risk associated with generative AI, according to a report published by QuantumBlack, AI by McKinsey.