California is preparing to erect AI guardrails across the state as part of its Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which was passed by the state Assembly Wednesday. The final text awaits one last confirmation vote and Governor Gavin Newsom’s signature.
The bill mostly targets the development of AI models, requiring providers to comply with several protocols, including requirements to:
- Implement a capability that would allow a complete shutdown of the model
- Retain an unredacted copy of safety and security protocols
- Submit a statement of compliance to the state's attorney general
- Provide whistleblower protections
Organizations that spend more than $10 million fine-tuning an existing model are held liable as developers under the rules. Businesses not complying with the new rules could face a lawsuit from the state’s attorney general.
The act also sets the scene for California to establish a board separate from the Department of Technology that will update definitions, issue regulations and develop a framework that advances responsible AI deployment.
Technology leaders are keeping a watchful eye on the evolving patchwork of regulation focused on AI. Large enterprises operating globally are assessing existing practices and adapting protocols to stay on track as leaders anticipate stricter guidelines on the horizon.
But most C-level executives admit their organization is falling behind on responsible AI practices at scale, according to an IBM Institute for Business Value study published earlier this month. More than 2 in 5 execs cite growing concerns over regulation and its potential to delay adoption.
While most technology providers have championed AI regulation in theory, some have also sounded the alarm on what they see as an overreach by California lawmakers. OpenAI, Meta and more than 130 startup founders have questioned whether the bill will deter AI innovation in California, home to many of the leading providers.
“Bottom line: SB 1047 is a highly reasonable bill that asks large AI labs to do what they’ve already committed to doing, namely, test their large models for catastrophic safety risk,” Senator Scott Wiener, who introduced the bill, said in a statement last week. “We’ve worked hard all year with open source advocates, Anthropic, and others, to refine and improve the bill.”
Ahead of SB 1047’s enactment, CIO Dive broke down what technology leaders should know about enforcement:
When will the act become law?
After a final confirmation vote, Governor Newsom will receive the bill and has until Sept. 30 to sign it into law or veto the act.
Key dates for enforcement
Jan. 1, 2026
Developers are required to retain a third-party auditor to perform an independent review of compliance.
Operators will also need to implement written policies and procedures for cases in which a customer uses compute resources sufficient to train a covered model.
The board will submit a report to the state legislature that outlines a responsible framework for developing and deploying AI models.
A first violation that occurs on or after this date and results in imminent risk to public safety carries a civil penalty of up to 10% of the cost of the computing power used to train the model. A second violation can carry a penalty of up to 30% of that cost.
Jan. 1, 2027
The Government Operations Agency will update the definition of a covered model and issue regulations no later than this date.
The agency will establish binding auditing requirements and issue guidance for mitigating “unreasonable” risks.