Consumer Reports backs signing of high-risk AI bill, calls on Colorado General Assembly to strengthen it before it goes into effect

Denver, Colorado – Consumer Reports commends Colorado Gov. Jared Polis for signing SB 205, a bill that establishes baseline accountability and transparency for the use of AI in high-stakes decisions affecting consumers and workers, such as decisions about access to housing, lending, medical care, insurance, and employment.

SB 205 is the first comprehensive AI bias law in the nation. 

“We applaud Governor Polis for signing this bill into law and the Colorado General Assembly for working to advance this key piece of legislation. Colorado is the first state in the country to extend baseline protections to its citizens when it comes to high-risk AI-decision technology,” said Grace Gedye, policy analyst for Consumer Reports.

“Colorado stood firm against pushback from tech industry lobbyists. Consumers shouldn’t be guinea pigs for tech companies’ unbridled experimentation. This new law will establish a sorely needed floor of protections for Coloradans. Right now, consumers are totally in the dark about the AI software companies use to help decide which Coloradans get a rental apartment, insurance, or a spot in a top school, and which get screened out of a job. We know that AI-decision technology makes mistakes and can be biased. If strengthened, this law should shed a bit of light on how AI helps make high-stakes decisions that shape our lives,” said Gedye.

SB 205 goes into effect in 2026, giving the Colorado General Assembly a chance to close loopholes that tech companies could exploit and to strengthen the law.

“In order for this law to live up to its aim of addressing bias in high-risk AI, it’s clear that loopholes need to be closed and the law needs to be strengthened. Consumer Reports looks forward to working with Colorado legislators and the Governor throughout the next year to ensure that protections for consumers and workers work as intended, and that tech industry lobbyists do not undermine this foundational law,” said Gedye.

SB 205 must be strengthened in order to protect consumers

Consumer Reports believes this law marks an important step forward, but it does not go far enough to protect consumers from biased AI systems. Several loopholes ought to be closed and several provisions updated over the course of Colorado’s next legislative session. For example, the bill exempts AI technology that performs “narrow procedural task[s]” from its definition of high-risk AI. This term is undefined, and companies may argue that all manner of high-stakes decisions, such as screening out resumes or scoring college applicants, are “narrow procedural tasks.” The bill’s trade secret protections are also overbroad: companies should not be able to unilaterally withhold crucial information or hide evidence of discrimination by claiming that such information is a trade secret. And the enforcement provisions must be strengthened.

This is not a comprehensive list. Consumer Reports calls on policymakers and enforcers to consult with consumer advocates, civil rights groups, labor unions, and other civil society representatives in the year before SB 205 takes effect to ensure that the bill fulfills its promise of bringing transparency and accountability to the shadowy world of AI-driven decisions.

What SB 205 does

Key provisions of the bill include:

  • Companies that make and use AI to help make high-stakes decisions (high-risk AI) about consumers must use reasonable care to avoid algorithmic discrimination
  • Companies developing high-risk AI must provide basic information on their websites. They also must provide information to companies using their tools, and alert them and the state Attorney General about any known or reasonably foreseeable risks of algorithmic discrimination
  • Companies using AI to make high-stakes decisions must assess their use of AI for risks and manage those risks. When they use AI to make a high-stakes decision about a consumer, they must notify the consumer, explain the decision, provide an opportunity to correct any incorrect personal information the decision was based on, and provide an opportunity to appeal the decision. Companies using AI to make high-stakes decisions must also display some information on their websites. There is an exemption for small businesses.
  • Companies that make AI systems intended to interact with consumers must disclose to consumers that they are interacting with AI

CR recently published an AI policy guide that outlines our key positions and recommendations for policymakers. 

Contact: Cyrus Rassool, cyrus.rassool@consumer.org