Dive Brief:
- More than 200 organizations across big tech and academia joined a newly formed AI safety alliance, the U.S. Department of Commerce announced Thursday.
- The U.S. AI Safety Institute Consortium (AISIC) will be housed under the National Institute of Standards and Technology's U.S. AI Safety Institute. Google, Microsoft, NVIDIA and OpenAI are among the inaugural member cohort.
- Alongside local government and nonprofit members, the consortium will help shape guidelines for red-teaming, capability evaluations, risk management, safety and security, and other AI guardrails.
Dive Insight:
Executives pushed forward on AI adoption last year, but questions remain over the ethical and risk implications of AI systems. Oversight efforts underway from global governments could trickle down, shaping the offerings that vendors can provide and influencing adoption.
In October, the Biden administration issued an executive order on AI, aimed at placing guardrails around the next generation of AI systems. The order required developers of foundation models that pose serious risks to national security to share their safety test results with the federal government.
The priority actions laid out in the executive order will help guide the consortium's work, U.S. Secretary of Commerce Gina Raimondo said in the announcement.
"By working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly," Raimondo said.
In an emailed statement, Google said it was looking forward to participating in the consortium and sharing its expertise.
"Working together, we can align responsible AI practices globally and ensure this pivotal technology benefits everyone," said Kent Walker, president of global affairs at Google and Alphabet.
Enterprise concerns over AI adoption are wide-ranging. Companies are looking closely at the security implications of AI-generated code, as well as the legal and cybersecurity risks of deploying these systems.
"As adoption of AI systems increases across different industry domains, it is vital that appropriate attention is given to individual data privacy, systemic safety and security, and the interoperability of data, models, and infrastructure," said Richard Searle, VP of confidential computing at consortium member company Fortanix, in an emailed statement.
Prior to the executive order, the White House announced in July that it had secured voluntary commitments from seven top AI companies — Google, Microsoft, OpenAI and Amazon among them — as part of a pledge to secure AI development. In September, eight additional providers joined the list.