New law helps consumers identify AI-generated content
SACRAMENTO, CA – California Governor Gavin Newsom signed the California AI Transparency Act, AB 853, into law today. The new law will make it easier for consumers to distinguish between authentic and AI-generated content online.
“The California AI Transparency Act comes at a pivotal moment as AI-generated content becomes increasingly ubiquitous, and scammers find new ways to rip consumers off with fraudulent AI deepfakes,” said Grace Gedye, policy analyst for AI issues at Consumer Reports. “While artificial intelligence offers exciting new possibilities, it also introduces risks—from scams to misinformation—that demand greater protections. The AI Transparency Act builds on a 2024 measure that required generative AI systems to label the content they create. The new law ensures online platforms make it clear when content is AI-generated, altered, or authentic, helping people know what’s real and what’s not.”
What AB 853 does
Last year, SB 942 was enacted to ensure that provenance information is embedded into AI-generated content, allowing users to identify its origins. AB 853 complements this effort by:
- Requiring that large online platforms, such as social media sites, mass messaging platforms, and search engines, provide consumers with an easy, conspicuous way to discover whether any provenance information is available that reliably indicates whether the content was generated with (or substantially altered by) a generative AI system or an authentic content capture device. If that information is available, the large online platform must clearly display the name of the generative AI system, or the name of the device, among other information.
- Prohibiting platforms and websites that make source code or model weights available for download from knowingly making available a GenAI system that doesn’t provide the disclosures required under SB 942. That law requires providers of certain GenAI systems to include latent disclosures in the content their system generates, including the name of the company, the name and version of the GenAI system that created or altered the content, and more.
- Enabling provenance markings on authentic, human-generated content at the point of creation by requiring that recording devices sold in California, such as cameras and video cameras, include the option to embed such information.
Together with the foundation laid by SB 942, AB 853 empowers consumers to distinguish between AI-generated and human-created content, helping to slow the tide of misinformation and AI-powered fraud. It equips individuals with the tools they need to make informed decisions about the trustworthiness of the media they encounter. It should also accelerate the adoption of the voluntary provenance standards that major tech companies are currently developing, such as those proposed by the Coalition for Content Provenance and Authenticity (C2PA).
AI voice and likeness cloning tools have made it easy for scammers to generate deepfake videos falsely depicting celebrities and political figures endorsing products, recommending investments, and urging citizens to take action. Research suggests that consumers struggle to recognize deepfake videos as false, and also overestimate their own ability to detect deepfakes.
Consumer Reports recently released a study on how AI voice cloning tools can facilitate fraud and impersonation. CR assessed six products available for free or at low cost online, and found that most lacked meaningful safeguards against fraud or misuse.
Contact: cyrus.rassool@consumer.org