Dive Brief:
- Microsoft, Google, Amazon and 13 other top software providers agreed to the Frontier AI Safety Commitments Tuesday, a set of voluntary agreements aimed at responsible development of the technology.
- The U.K. and South Korean governments announced the commitments during the AI Seoul Summit, which brought together policymakers and representatives from major tech organizations. IBM, Meta and OpenAI are also among the signatories.
- The providers agreed to create and publish safety frameworks focused on severe AI risks, including risk thresholds beyond which they would stop developing or deploying a model. Companies aim to publish those thresholds ahead of the AI Action Summit in France in early 2025.
Dive Insight:
AI oversight has emerged as a global challenge given the risks posed by a relatively new technology fueled by data. Governments, concerned about bias, misinformation and cybersecurity, have added urgency to regulatory efforts.
In addition to the voluntary vendor commitments, the EU and 10 countries including the U.S. committed to creating a global network of publicly supported AI Safety Institutes, the countries announced Tuesday. The network will follow the roadmap established by the U.K., which created its own institute in 2023.
AI oversight is top of mind for U.S. legislators as well. The Bipartisan Senate AI Working Group last week released a long-awaited roadmap for AI regulation, a 31-page document outlining policy recommendations on oversight, workforce training and developing standards for AI use.
In the U.S., top software providers signed a White House-led voluntary pledge aimed at secure AI development in July 2023.
Tech giants, including Google and Microsoft, were among the inaugural cohort of a 200-member alliance formed in February to shape AI guardrails. The group, called the U.S. AI Safety Institute Consortium (AISIC), is part of the National Institute of Standards and Technology’s U.S. AI Safety Institute.
The initiatives to curb risk have not slowed the pace of innovation, as key providers continue to iterate on existing models and systems. But the safeguards built into five major LLMs available today are ineffective, according to an assessment by the U.K. AI Safety Institute published Monday.
Despite regulatory efforts, enterprise leaders are carefully monitoring risks associated with AI, including those related to generative AI. Gartner expects risk concerns over generative AI use, including legal action and copyright infringement, to considerably slow adoption efforts by 2026.