Dive Brief:
- The OpenAI board of directors formed a safety and security committee Tuesday, which will evaluate and provide recommendations for the AI provider’s operations, according to a blog post.
- The committee is led by board members Bret Taylor, former Salesforce co-CEO; Adam D’Angelo, Quora CEO; Nicole Seligman, former EVP, global general counsel and president at Sony; and CEO Sam Altman. OpenAI’s head of preparedness, head of alignment science, head of security and chief scientist will also serve on the committee.
- Over the next 90 days, the committee will offer guidance on the company’s safeguards and security processes. Following a board review, OpenAI will publicly share details regarding the adopted recommendations, the company said.
Dive Insight:
OpenAI is working to solidify its security and safety ahead of releasing its next model. It's a push that could help earn enterprise customer buy-in, too.
“OpenAI’s initiative to form this committee underscores the growing recognition within the industry of the importance of safety and ethical responsibility in AI development,” Alon Yamin, co-founder and CEO of AI-based text analysis platform Copyleaks, said in an email. “This approach is crucial for maintaining public trust and ensuring that AI advancements benefit society while minimizing potential risks.”
In a blog post Tuesday, OpenAI said it recently started training its “next frontier model.” “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the company said.
The company said it will lean on safety recommendations from the new committee to inform critical security decisions across all OpenAI projects. OpenAI also plans to consult with other safety, security and technical experts to support the effort.
The safety push follows multiple key departures, including Ilya Sutskever, co-founder and chief scientist, and Jan Leike, senior researcher and leader of OpenAI’s superalignment team, which focused on long-term risks and preventing rogue AI. OpenAI dedicated 20% of the compute it had secured to the effort in July and launched a $10 million grant program.
In a post on X, formerly Twitter, earlier this month, Leike said he believed much more of the company’s bandwidth should have been spent getting ready for next-generation models, including work on security, monitoring, adversarial robustness and safety.
“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” Leike said in the post. The researcher announced Tuesday that he had joined rival AI startup Anthropic.
In an email to CIO Dive Tuesday, an OpenAI spokesperson said co-founder John Schulman has taken on an expanded role as head of alignment science.
“This work will be coordinated within research instead of as a separate effort, and we believe this structure will be more effective for long-term safety research,” the spokesperson said. “This is a priority for OpenAI and the company expects its investment to increase over time.”