Implications of California Governor Newsom’s Veto of AI Safety Bill SB 1047


Governor Gavin Newsom has vetoed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, marking a significant development in California’s approach to AI regulation. Both the tech industry and policymakers had closely watched the bill due to its potential impact on the development and deployment of frontier AI models.

Overview of SB 1047

SB 1047 aimed to regulate “covered models”: AI models that cost more than $100 million to develop and require substantial computational power. It also extended to models fine-tuned from a covered model with an additional $10 million investment and significant computational resources, as well as to “covered model derivatives,” defined as certain types of copies of covered models. While no current models meet these thresholds, experts expect several to do so soon, meaning the bill would have governed the development of frontier AI models and their modifications. This cost-and-compute method of identifying covered models was one aspect of the bill that critics found objectionable.

For models falling within its scope, the bill proposed a range of safeguards, including “kill switch” mechanisms, rigorous testing, auditing protocols, enhanced cybersecurity protections, and the creation of a new state authority known as the Board of Frontier Models. SB 1047 stirred debate among tech leaders and a bipartisan group of influential members of California’s U.S. Congressional delegation, who highlighted the tension between fostering innovation and protecting consumers. Indeed, while the bill was awaiting Governor Newsom’s signature, Speaker Emerita Nancy Pelosi argued the bill would trigger “significant unintended consequences that would stifle innovation and will harm the U.S. AI ecosystem.”

Governor Newsom’s Statement

In his statement vetoing SB 1047, Governor Newsom acknowledged both California’s leading role in AI innovation and the importance of regulating the technology. Regarding the bill itself, he raised concerns that its focus on high-cost, large-scale models could provide a false sense of security. He emphasized that smaller, specialized models could pose equally significant risks and that AI regulation must adapt to the technology’s rapid evolution. Governor Newsom also stated that SB 1047 applied stringent standards even to basic functions, so long as a large system deployed them, without considering the specific risks of how an AI system is deployed, including whether it involves critical decision-making or sensitive data.

While Governor Newsom acknowledged that California must not wait for a catastrophe before implementing AI safeguards, he argued that regulation must be grounded in empirical evidence and keep pace with technological advancements. He pointed to ongoing efforts in his administration, including executive orders and the signing of more than a dozen AI-related bills in recent weeks, to address immediate concerns like AI-generated misinformation and threats to critical infrastructure. 

Future of AI Regulation in California

Governor Newsom’s veto raises questions about the future of sweeping AI regulation targeted solely at large models, at least in California. Although SB 1047 may not advance in its current form, Governor Newsom’s comments suggest an openness to a more flexible AI regulatory framework in the foreseeable future. He indicated that future regulations should prioritize empirical risk assessments over broad, cost-based thresholds.

California’s influence in the AI space is considerable. With 32 of the world’s 50 leading AI companies headquartered in the state, any regulatory developments in California are likely to influence national and international standards. Governor Newsom’s veto of SB 1047, along with the signing of 17 other AI-related bills in the last 30 days, signals that while California may not adopt sweeping legislation like the EU’s AI Act, it remains committed to balancing innovation and consumer protection through targeted regulation.  

Takeaways

The veto of SB 1047 means that California still has no regulatory framework governing the safe development and deployment of large AI models. But as Governor Newsom’s comments in his veto statement suggest, this regulatory void may not last.

As with privacy law, the United States has so far regulated AI through a patchwork of varying state laws and limited executive orders rather than comprehensive federal legislation. The current Congress appears unable to reach a bipartisan consensus on a federal approach amid a presidential election year, and the future composition of the House, Senate, and White House remains uncertain. In the meantime, engaging with state and federal lawmakers and contributing to the development of AI safety and governance standards will be essential for shaping a regulatory environment that balances innovation and safeguards.
