Yoshua Bengio: California’s AI Safety Law Will Protect Consumers and Innovation



As a fellow AI researcher, I have enormous respect for the scientific contributions of Dr. Fei-Fei Li to our field. However, I disagree with her recently published position on California’s SB 1047. I believe this bill represents a critical, light-touch, and measured first step in ensuring the safe development of groundbreaking AI systems and protecting the public.

Many experts in this field, including myself, agree that SB 1047 outlines a bare minimum for effective regulation of cutting-edge AI models against foreseeable risks, and that its compliance requirements are light and intentionally not prescriptive. Instead, it relies on model developers to make their own risk assessments and implement basic safety testing requirements. It also focuses only on the largest AI models – those costing more than $100 million to train – ensuring it won’t hinder innovation at startups or smaller companies. Its requirements are closely aligned with voluntary commitments that many leading AI companies have already made (notably with the White House and at the AI Summit in Seoul).

We can’t let companies grade their own homework and simply offer nice-sounding guarantees. We do not accept this in other technologies, such as pharmaceuticals, aerospace, and food safety. Why should AI be treated differently? It is important to move from voluntary commitments to legal obligations to level the playing field among companies. I expect this bill will strengthen public confidence in the development of AI at a time when many wonder whether companies are acting responsibly.

Critics of SB 1047 have claimed that this bill will punish developers in a way that stifles innovation. This claim does not hold up to scrutiny. It makes sense that any industry building potentially dangerous products is subject to regulations that ensure safety. This is what we do across everyday industries and products, from cars to electrical appliances to residential building codes. While it is important to hear the industry’s perspectives, the solution cannot be to scrap a bill as purposeful and well-considered as SB 1047. Instead, I am hopeful that, with additional key amendments, some of the industry’s main concerns can be addressed while staying true to the spirit of the bill: protecting innovation and citizens.


Another specific area of concern for critics was SB 1047’s potential impact on the open-source development of advanced AI. I have been a lifelong supporter of open source, but I don’t view it as an end in itself that is always good regardless of the circumstances. Consider, for example, the recent case of an open-source model widely used to generate child pornography. This illegal activity violates the developer’s terms of use, but now that the model has been released, we can never take it back. With much more capable models being developed, we can’t wait until after their open release to take action. For open-source models far more sophisticated than those that exist today, compliance with SB 1047 cannot be a trivial exercise, such as merely prohibiting “illegal activities” in the terms of service.

I also welcome the fact that the bill requires developers to retain the ability to quickly shut down their AI models, but only for models that remain under their control. This exemption is explicitly intended to enable compliance for open-source developers. Overall, finding policy solutions for highly capable open-source AI is a complex issue, but the threshold between risks and benefits should be determined through a democratic process, not by the whims of whichever AI company is most reckless or overconfident.

Dr. Li calls for a “moonshot mentality” in AI development. I deeply agree with this point. I also believe that this AI moonshot requires strict safety protocols. We simply cannot hope that companies will prioritize safety when the incentives to prioritize profits are so enormous. Like Dr. Li, I would prefer to see robust AI safety rules at the federal level. But Congress is deadlocked and federal agencies are limited, making state action indispensable. In the past, California has led the way on green energy and consumer privacy, and it has a great opportunity to once again take the lead on AI. The choices we make now in this area will have profound consequences for current and future generations.


SB 1047 is a positive and reasonable step toward promoting both safety and long-term innovation in the AI ecosystem, especially by encouraging AI safety research and development. This technically sound legislation, developed in collaboration with leading AI and legal experts, is desperately needed, and I hope California Governor Gavin Newsom and the Legislature will support it.

The opinions expressed in Fortune.com commentary pieces are solely the opinions of their authors and do not necessarily reflect the opinions or beliefs of Fortune.
