Dive Brief:
- The National Institute of Standards and Technology released four draft guidance and planning documents Monday to improve the safety, security and trustworthiness of AI systems.
- The guidelines, which focus on mitigating generative AI risks, offer developers 400 ways to manage specific threats, such as dangerous, violent, toxic or biased model recommendations. NIST’s generative AI public working group of 2,500 members provided input for the guidance.
- NIST is also piloting an AI evaluation program with goals that include distinguishing between human- and AI-generated content. The evaluations will also help identify best practices.
Dive Insight:
The NIST resources arrived six months after the Biden administration issued an executive order on AI oversight.
“The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time,” U.S. Secretary of Commerce Gina Raimondo said in a statement. “With these resources and the previous work on AI from the Department, we are continuing to support responsible innovation in AI and America’s technological leadership.”
Around the world, governments are stepping up to rein in AI’s potential risks.
Earlier this month, the U.S. and the U.K. signed a nonbinding agreement to jointly develop tests and build a standardized approach to AI safety testing, accelerating the work of both nations' AI safety institutes.
The Department of Commerce formed the U.S. AI Safety Institute Consortium in February, bringing together more than 200 organizations across big tech and academia. The group’s mission is to shape red-teaming best practices and guardrail guidance.
To support these efforts, the federal government is bolstering its AI expertise. The White House plans to hire 100 AI professionals by the summer, and agencies are upskilling existing employees to bridge gaps.
Enforcement of the European Union’s AI Act is also on the horizon.
Businesses that release or operate AI systems in the EU market, regardless of their home base, are subject to the rules, which are expected to go into effect in May or June. The act bans unacceptable use cases, such as emotion recognition in the workplace.