Gov. Gavin Newsom said that he had asked technology and legal scholars to work with legislators in writing a new bill.

California Governor Vetoes Sweeping A.I. Legislation

The bill would have been the first in the nation to place strict guardrails on the new technology, but Gov. Gavin Newsom said the bill was flawed.

By The New York Times

Gov. Gavin Newsom on Sunday vetoed a California artificial intelligence safety bill, blocking the most ambitious proposal in the nation aimed at curtailing the growth of the new technology.

The first-of-its-kind bill, S.B. 1047, required safety testing of large A.I. systems, or models, before their release to the public. It also gave the state’s attorney general the right to sue companies over serious harm caused by their technologies, like death or property damage. And it mandated a kill switch to turn off A.I. systems in case of potential biowarfare, mass casualties or property damage.

Mr. Newsom said that the bill was flawed because it focused too much on regulating the biggest A.I. systems, known as frontier models, without considering whether those systems were actually deployed in high-risk settings. He said that legislators should go back and rewrite it for the next session.

“I do not believe this is the best approach to protecting the public from real threats posed by the technology,” Mr. Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it.”

The decision to kill the bill is expected to set off fierce criticism from some tech experts and academics who have pushed for the legislation. Governor Newsom, a Democrat, had faced strong pressure to veto the bill, which became embroiled in a heated national debate over how to regulate A.I. A flurry of lobbyists descended on his office in recent weeks, some promoting the technology's potential for great benefits. Others warned of its potential to cause irreparable harm to humanity.

California was poised to become a standard-bearer for regulating a technology that has exploded into public consciousness with the release of chatbots and realistic image and video generators in recent years. In the absence of federal legislation, California’s Legislature took an aggressive approach to reining in the technology with its proposal, which both houses passed nearly unanimously.

While lawmakers and regulators globally have sounded the alarm over the technology, few have taken action. Congress has held hearings, but no legislation has made meaningful progress. The European Union passed the A.I. Act, which restricts the use of riskier technology like facial recognition software.

In the absence of federal legislation, Colorado, Maryland, Illinois and other states have enacted laws to require disclosures of A.I.-generated “deepfake” videos in political ads, ban the use of facial recognition and other A.I. tools in hiring and protect consumers from discrimination in A.I. models.

But California’s A.I. bill garnered the most attention, because it focused on regulating the most powerful and ambitious A.I. models, which can cost more than $100 million to develop.

“States and local governments are trying to step in and address the obvious harms of A.I. technology, and it’s sad the federal government is stumped in regulating it,” said Patrick Hall, an assistant professor of information systems at Georgetown University. “The American public has become a giant experimental population for the largest and richest companies in the world.”

California has led the nation on privacy, emissions and child safety regulations, which frequently affect the way companies do business nationwide because they prefer to avoid the challenge of complying with a state-by-state patchwork of laws.

State Senator Scott Wiener of San Francisco said he had introduced California’s A.I. bill after talking to local technologists and academics who warned about potential dangers of the technology and the lack of action by Congress. Last week, 120 Hollywood actors and celebrities, including Joseph Gordon-Levitt, Mark Ruffalo, Jane Fonda and Shonda Rhimes, signed a letter to Mr. Newsom, asking him to sign the bill.

Mr. Newsom said the bill needed more input from A.I. experts in academia and business leaders to develop a deeper, science-backed analysis of frontier models’ capabilities and risks.

The California governor said that the bill was “well-intentioned” but left out key ways of measuring risk and other consumer harms. He said that the bill “does not take into account whether an A.I. system is deployed in high-risk environments, involves critical decision making or the use of sensitive data.”

Mr. Newsom said he had asked several technology and legal scholars to help come up with regulatory guardrails for generative A.I., including Fei-Fei Li, a professor of computer science at Stanford; Mariano-Florentino Cuéllar, a member of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research; and Jennifer Tour Chayes, dean of the College of Computing, Data Science, and Society at University of California, Berkeley.

Ms. Li of Stanford, whom Mr. Newsom referred to as the “godmother of A.I.,” wrote in an opinion piece last month that the bill would “harm our budding AI ecosystem” and give the biggest A.I. companies an advantage by penalizing smaller developers and academic researchers who would have to meet its testing standards.

OpenAI, Google, Meta and Microsoft opposed the legislation, saying it could stifle innovation and set back the United States in the global race to dominate A.I. Venture capital investors, including Andreessen Horowitz, said the measure would hurt A.I. start-ups that didn’t have the resources required to test their systems.

Several California representatives in Congress wrote to Mr. Newsom warning that the bill targeted hypothetical risks and unnecessarily imposed safety standards on a nascent technology. Representative Nancy Pelosi, the former House speaker, also asked her fellow Democrat to veto the bill.

“While we want California to lead in A.I. in a way that protects consumers, data, intellectual property and more, S.B. 1047 is more harmful than helpful in that pursuit,” Ms. Pelosi wrote in an open letter last month.

Other technologists and some business leaders, including Elon Musk, took the opposite position, saying the potential harms of A.I. are too great to postpone regulations. They warned that A.I. could be used to disrupt elections with widespread disinformation, facilitate biowarfare and create other catastrophic situations.

Mr. Musk posted last month on X, his social media site, that it was a “tough call” but that “all things considered,” he supported the bill because of the technology’s potential risks to the public. Last year, Mr. Musk founded the A.I. company xAI, and he is the chief executive of Tesla, an electric vehicle manufacturer that uses A.I. for self-driving.

This month, 50 academics sent a letter to Mr. Newsom describing the bill as “reasonable” and an important deterrent for the fast deployment of unsafe models.

“Decisions about whether to release future powerful A.I. models should not be taken lightly, and they should not be made purely by companies that don’t face any accountability for their actions,” wrote the academics, including Geoffrey Hinton, a University of Toronto professor known as the “godfather” of A.I.

Amba Kak, president of the AI Now think tank and a former adviser on A.I. to the Federal Trade Commission, said, “When debates about regulating A.I. get reduced to Silicon Valley infighting, we lose sight of the broader stakes for the public.”

