Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB-1047)
SB-1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would have established new requirements in California for developers of powerful artificial intelligence (AI) models, aiming to address risks of advanced AI while supporting innovation.
Purpose and Scope
The act targeted "frontier," or highly advanced, AI models, defined by the computing power and cost needed to train them. It aimed to mitigate catastrophic misuse, such as the creation of weapons, large-scale cyberattacks, or other severe harms to public safety.
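The threshold logic can be illustrated with a short sketch. The figures below (more than 10^26 operations of training compute and more than $100 million in training cost) reflect how the amended bill's "covered model" definition is commonly summarized; treat them as illustrative assumptions rather than the statutory text, and note the function name and structure are hypothetical.

```python
# Illustrative thresholds commonly cited for SB-1047's "covered model"
# definition (assumptions for illustration, not the statutory text).
COMPUTE_THRESHOLD_OPS = 1e26        # training operations (FLOPs)
COST_THRESHOLD_USD = 100_000_000    # training compute cost in dollars

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Return True if a model would cross both illustrative thresholds."""
    return (training_ops > COMPUTE_THRESHOLD_OPS
            and training_cost_usd > COST_THRESHOLD_USD)

# A model trained with 2e26 operations at a $150M compute cost would qualify:
print(is_covered_model(2e26, 150_000_000))   # True
# A model below the compute threshold would not, regardless of cost:
print(is_covered_model(5e25, 150_000_000))   # False
```

Because both conditions must hold, a very expensive but computationally small training run, or a large run on unusually cheap compute, would fall outside this sketch's definition.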
Key Requirements for Developers
Developers would have been required to implement and document safety and security protocols before training, including a rapid-shutdown capability, testing for critical harms, and procedures for updating the protocols. Redacted versions of the safety protocols would have to be published, with full versions available to the Attorney General. Annual third-party audits would be required, with results submitted and published in redacted form. Developers would also have to submit annual compliance statements and report safety incidents within 72 hours. Safeguard obligations would extend five years after a model's public release, with required record retention, and a model posing an unreasonable risk could not be deployed.
Oversight, Whistleblower Protections, and Enforcement
The Attorney General could have enforced compliance through civil penalties, damages, and injunctions, with penalties tied to computing costs. Strong whistleblower protections would have barred retaliation and encouraged reporting, including by contractors.
Computing Cluster Operators
Operators of large-scale computing clusters would have been required to adopt policies for identifying customers training covered models, retain records for seven years, and maintain shutdown capability.
Board of Frontier Models and CalCompute
A Board of Frontier Models within the Government Operations Agency would have updated the thresholds and definitions with stakeholder input. CalCompute, a proposed public computing cloud, would have supported safe AI research and broadened access for academia and startups.
Legislative Findings and Intent
The bill's findings recognized both the potential and the risks of advanced AI, aiming to balance state oversight with innovation, provide access for smaller researchers, and protect public safety.
Had it been enacted, the act would have been a landmark in the regulation of high-risk AI development, combining oversight, safety, transparency, access, and enforcement.
