The House Subcommittee on Commerce, Manufacturing, and Trade will hold a hearing tomorrow titled “AI Regulation and the Future of US Leadership.” This hearing follows last week’s introduction of a budget reconciliation bill that includes a sweeping ban on state laws or regulations related to AI and automated decision systems. The bill’s language would block the enforcement of existing state laws and prevent states from adopting future AI protections.
Consumer Reports is opposing the measure, warning that it would block states with existing AI-related rules from addressing harms to consumers. “We appreciate Congressman Brett Guthrie and Congressman Gus Bilirakis for organizing tomorrow’s hearing to have a thoughtful discussion about the impact of AI legislation, though we wish this opportunity had occurred before the committee voted on language that would prohibit the enforcement of any state law regarding artificial intelligence. We urge Congress to recognize the important role states have already played in protecting their residents and to avoid advancing legislation that would override those protections,” said Grace Gedye, policy analyst for AI issues at Consumer Reports.
Amid ongoing gridlock in Congress, states have taken the lead on tech policy, especially in the areas of AI and consumer privacy. Last year, Colorado made history by enacting Senate Bill 205, becoming the first state in the country to establish baseline accountability and transparency for the use of AI in high-stakes decisions affecting consumers and workers, such as decisions about access to housing, lending, medical care, insurance, and employment. California passed SB 942, which will help consumers identify which content is generated by AI. Tennessee passed the ELVIS Act, protecting performing artists from the unauthorized use of their voice and likeness.
Twenty states have also passed comprehensive privacy laws, most of which include protections around the use of automated decision-making systems.
Gedye continued, “It’s unusual to see Congress intervene to block states from protecting their residents—especially without offering meaningful alternatives. States have advanced legislation to bring transparency to flawed AI systems and protect consumers. So far, Congress isn’t advancing real replacements for many of these safeguards—only removing them.”
Justin Brookman, director of technology policy at Consumer Reports, will testify tomorrow at 2:30 PM ET at a hearing held by the United States Senate Judiciary Subcommittee on Privacy, Technology, and the Law. The hearing is titled “The Good, the Bad, and the Ugly: AI-Generated Deepfakes in 2025.” It will focus on how deepfake technologies are used to create non-consensual intimate images, to co-opt the identity of performers and recording artists, and to spread election misinformation. This is another area where the states have taken action to protect their citizens: A majority of states have passed legislation relating to intimate AI deepfakes, and more than 20 have passed legislation relating to election deepfakes.
In May 2024, CR’s survey research team conducted a nationally representative multi-mode survey of 2,022 US adults on several topics, including AI and algorithmic decision-making. The full report on the AI and algorithmic decision-making survey results is available here.
We asked Americans how comfortable they felt with the use of AI and algorithms in a variety of situations, such as banks using algorithms to determine if they qualified for a personal loan, landlords using AI to screen potential tenants, hospitals using AI to help make diagnoses and develop treatment plans, and potential employers using AI to analyze applicants’ video job interviews. We found that a majority of Americans are uncomfortable with the use of AI in each of these high-stakes decisions about their lives.

Consumer Reports also recently released a study on how AI voice cloning tools can facilitate fraud and impersonation. CR assessed six products available online for free or at low cost and found that most of them lacked meaningful safeguards to prevent fraud or misuse.
Contact: cyrus.rassool@consumer.org