Analysis of the California AI Transparency Act (SB 942) — Signed into Law by Governor Newsom
With California Governor Gavin Newsom’s signature on September 19, 2024, California Senate Bill 942 (SB 942) — the California AI Transparency Act (CAITA) — is now law. It has three major components. First, starting in 2026, SB 942 requires large Generative Artificial Intelligence (GenAI) providers (those with over 1 million monthly users) to give users the option to place a “manifest disclosure” (i.e., one that is visible and easily perceived) on any image, video, or audio content they generate with the provider’s GenAI system. Second, it requires (with no opt-out) that large GenAI providers label the same types of GenAI-generated content with a “latent disclosure” (i.e., one that is imperceptible to the human eye). Finally, SB 942 requires large GenAI providers to make an AI detection tool available at no cost that takes advantage of the manifest and/or latent disclosures on their generated content, thereby enabling users to determine whether content was created or altered by the provider’s GenAI system.
Genesis of SB 942
I am pleased to have proposed this bill to State Senator Josh Becker and co-drafted the initial version based, in part, on U.S. Senators Schatz and Kennedy’s federal proposal that is known as the AI Labeling Act. The purpose of that federal bill, as stated by Senator Schatz, was to let people know “whether or not the videos, photos, and content they see and read online is real or not.”
SB 942 is certainly not an original idea by any stretch. While it shares the same overarching goal as the AI Labeling Act, one big difference is that I added to SB 942 the concept of a content verification mechanism, referred to in the bill as the AI Detection Tool. The thinking was this: AI labeling (visible or invisible) is good for detecting synthetic content, but why not also let a consumer simply upload content and directly ask a GenAI provider, such as OpenAI, the simple question, “Did you create this?” In other words, go to a possible source of the content rather than deciphering labels that may be hard to read.
Coupled with the bill’s requirement of an API to remotely execute the AI Detection Tool, any third party could conceivably build a website or system that lets consumers simultaneously query dozens of large GenAI providers to help figure out whether a video, image, or audio file was created by any of them. In other words, this could be the beginning of a one-stop portal for consumers to detect synthetic content, on which the third party could layer its own AI detection capabilities as additional checks. This is in line with how I envisioned SB 362, aka the California Delete Act, offering a one-stop portal for the deletion of personal data held by data brokers. As you may tell, I am a big fan of making privacy easy and AI transparent, and I like this consumer-facing portal approach because it empowers consumers; the last two bills (now laws!) I proposed to Senator Becker both give consumers a portal.
The bill also requires the GenAI provider to support a feedback mechanism for the AI Detection Tool. So if you go to OpenAI, generate an image, upload it to their detection tool (either directly or via the API), and the tool fails to say “yep, I created it” for content literally generated by them seconds earlier, you can tell OpenAI its tool ain’t working. Hopefully, the API and feedback mechanism will help GenAI providers improve their detection solutions over time. Furthermore, GenAI providers don’t have to build their own AI Detection Tool; the bill allows them to use a third-party or open-source solution.
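To make the one-stop-portal idea concrete, here is a minimal sketch of what a third-party aggregator might do once such APIs exist. Everything here is hypothetical: no real detection endpoints exist yet, so each provider is represented by a stand-in stub function rather than an actual API call.

```python
from concurrent.futures import ThreadPoolExecutor

def query_all_providers(detectors: dict, content: bytes) -> dict:
    """Ask every provider's detection tool "did you create this?" in parallel."""
    def ask(item):
        name, detect = item
        try:
            return name, detect(content)
        except Exception:
            return name, None  # unreachable or broken tool: no answer
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(ask, detectors.items()))

# Stub detection tools standing in for hypothetical provider APIs; each
# just checks for a fake marker in the bytes, mimicking a latent disclosure.
detectors = {
    "ProviderA": lambda content: b"made-by-A" in content,
    "ProviderB": lambda content: b"made-by-B" in content,
}
print(query_all_providers(detectors, b"...made-by-A..."))
# -> {'ProviderA': True, 'ProviderB': False}
```

A real aggregator would replace the stubs with HTTP calls to each provider’s published endpoint; the fan-out-and-collect shape, and the need to tolerate individual providers being down, would stay the same.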
I should point out that the bill evolved greatly from my initial draft, which is typical of the legislative sausage-making process. Credit also goes to the staff of the various Assembly and Senate committees who shaped the bill (I testified three times before committees on its behalf), as well as to the pragmatic collaboration between Senator Becker’s team and industry. In the end, there was no formal registered opposition to SB 942, as industry concerns regarding technical feasibility were largely mitigated. Thanks should also go to industry groups for recognizing that this “is it real or synthetic?” question is a big issue and for providing constructive feedback on what can and cannot work. To be clear, the final text of the bill represents the work of many hands and much give-and-take; the bulk of what I initially drafted has long since been modified and replaced, but the original concepts are there. And at the end of the day, the person who really made this happen was Senator Becker, who took on this challenge and worked with key constituents to get it into law!
A Good First Step, with More Steps Needed
I am certainly not saying the bill or the concepts in it are perfect (the same can be said of any bill!). Requiring GenAI providers to give consumers the option to add manifest disclosures to the content they generate is useful, and requiring latent disclosures has great value. Why? Because we are right now facing a situation where synthetic content is becoming indistinguishable from genuine content, and this has opened the door to significant risks. Having AI labeling and detection tools can be very helpful.
So, SB 942 is a good first step. In fact, I believe SB 942 represents the first US regulatory foray into helping consumers determine whether content is human-generated or machine-generated, and I am proud that California is the first to release a Version 1.
A bill such as SB 942 should and will evolve to keep up with changes in technology. A similar bill, Assemblymember Buffy Wicks’ AB 3211, took a much more prescriptive approach, drew substantial opposition, and failed to make it through the State Senate at the end of the legislative process. But with SB 942 in place, it is conceivable that elements of AB 3211 could be folded into SB 942 in subsequent sessions. It is also conceivable that in the years to come the law could mandate support for text, or require the APIs to be enhanced to enable real-time detection of video or audio streams, etc. — all contingent, obviously, on technical feasibility and political will. This bill is an initial Lego block; much can be added, but it is a start.
And, of course, there are many other areas of AI transparency that need addressing. For example, my friends at the Transparency Coalition are sponsors of AB 2013, which would, per the Future of Privacy Forum:
“require developers of generative AI systems to post documentation regarding the data used to train the system such as the number of data points in the datasets, a description of the types of data points within the datasets, whether the datasets include any data protected by copyright, trademark or patent, and whether the datasets include any personal information or aggregate consumer information.”
One should also view this bill in the context of other AI legislation signed by the Governor this session (e.g., bills addressing AI-related election deepfakes, bills involving sexually explicit deepfakes, etc.). If you step back and look at the totality of what California is doing with respect to AI, with SB 942 as one of the key puzzle pieces, it is clear that California is taking the lead in this area in the US, especially compared with Congress’s inability to enact comprehensive legislation on privacy, AI transparency, and more.
What SB 942 is Not
Finally, let me clarify a few things about SB 942.
First, if you have not figured it out by now and/or have not read the bill, content in the form of text is not covered. Nearly everyone now has a story about their kids’ homework being flagged as AI-generated (I have a few myself), and we can certainly blame existing tools for barfing out false positives, but when it comes to detecting AI-generated text … text is simply not covered by this law.
Second, the AI Detection Tool that each large GenAI provider must offer (and the bill says they can use a third party for this) applies only to content generated by that provider. That is, OpenAI only has to say whether or not it created a given image, leveraging the latent disclosure it will be required to add; it does not have to say, “This image was created by some AI product.” So don’t read “AI Detection Tool” as meaning detection of any and all AI-generated content from any and all AI providers. That said, the API requirement in the bill will make it possible to query the AI Detection Tools of multiple large GenAI providers, so you can check in bulk.
Third, as discussed above, the API and feedback mechanism should improve AI detection capabilities, as should the technological advancements that will arrive by the time the law takes effect in 2026. My point is that one can’t declare victory or failure with this bill until mid-2026, and as mentioned above, the bill could evolve further next legislative session. And while a given person’s opinion of AI detection tools may not be positive circa 2024, this bill won’t kick in for two years, leaving plenty of time for the technology to improve. Don’t forget that detection will be predicated in part on the latent disclosures this bill requires to be added to generated content; that is, the labeling and AI detection go hand in hand as a package deal.
Fourth, the bill applies only to the very largest GenAI providers, those with over 1 million monthly users. The reality is that it does not apply to the vast majority of startups, and the GenAI companies it does cover either supported this bill (based on letters written to Governor Newsom), supported a more prescriptive bill (AB 3211), and/or dropped formal opposition after subsequent amendments. If this were burdensome or not technically feasible for industry, they would have complained and lodged opposition with the Governor, and Lord knows these vendors have enough lobbyists to do so. My only other comment on this point: when the bill was first introduced in January 2024, industry said certain things were not technically feasible, but over the last nine months new and updated technology, such as Google’s SynthID, has made all of this very feasible and has in fact exceeded the requirements of SB 942 (e.g., SynthID supports text). The 2026 deadline should give enough buffer to enable implementation.
And fifth, the bill is not designed to address every risk associated with GenAI. For example, it does not specifically address the harms of sexually explicit deepfakes, such as what we are seeing in classrooms; two other bills do that and have been signed into law by Governor Newsom. The same goes for three bills involving AI-related election deepfakes. I could go on, but when SB 942 was drafted, state legislators knew there were other focused bills addressing specific GenAI risks, so SB 942 was designed to complement them; it is not one-size-fits-all. The point is that one needs to look at SB 942 in the context of the “forest” of other complementary AI bills floating around Sacramento, each focused on specific harms. I think that is how the Governor viewed them when his press releases grouped multiple AI bills together as a package deal, i.e., multiple Lego pieces snapping into each other.
Summary
While more work remains to fully address AI transparency, SB 942 is a significant and technically viable initial foray. The measures in SB 942 are both reasonable and necessary to protect Californians from some (and definitely not all!) of the potential misuse of GenAI while fostering a responsible innovation environment. A big thanks to State Senator Becker and his team for pushing this through, the Committee staff who worked on this, sponsors such as the Transparency Coalition and others such as CITED who wrote letters of support, and Governor Newsom for signing.