Welcome to Consumer Reports Advocacy

For 85 years, CR has worked for laws and policies that put consumers first. Learn more about CR’s work with policymakers, companies, and consumers to help build a fair and just marketplace at TrustCR.org.

White House executive order on government use of AI can pave the way for rules to protect consumers

Consumer Reports urges policymakers to establish clear standards and require independent testing of AI tools used to make significant decisions about Americans

WASHINGTON, D.C. – Today, the White House released an executive order on AI that addresses critical issues such as the detection of AI-generated content, safety, privacy, and the need for greater AI expertise in the government workforce.

CR applauded the White House for issuing the executive order as artificial intelligence becomes more integrated into society, and highlighted two key areas: the call for new standards to detect and mark AI-generated content in government communications with Americans, and the rapid hiring of AI professionals into government to design smart policy and keep up with the pace of innovation.

Grace Gedye, a policy analyst focused on artificial intelligence at Consumer Reports, said, “Today’s executive order is an important step forward for AI safety and transparency. In addition to marking government communication, consumers deserve to know when they are interacting with an AI chatbot, or looking at images generated by algorithms. Consumer Reports supports expanding these transparency standards to industry to protect consumers in the future. While work still remains to mitigate the unintended consequences of AI, we are encouraged by the White House’s leadership on this issue.” 

The order outlines new pre-deployment testing standards to protect national security in certain infrastructure sectors.

Artificial intelligence is also used to make important decisions about consumers, such as who gets access to credit, housing, certain job opportunities, and more. When tools are used for these high-stakes decisions, they should undergo independent testing for bias and safety before release.

Algorithmic tools used in lending can perpetuate bias and result in discriminatory outcomes. Generative AI-powered virtual assistants can lead to consumers receiving wrong information; bad actors are also using the technology to impersonate loved ones and defraud vulnerable consumers.

Consumer Reports advocates for legislation at the state and federal levels to protect consumers from algorithmic discrimination and other AI-related harms. Clearer standards are needed for the responsible use of AI across multiple industries, as well as tools for conducting risk assessments.

“As powerful AI and machine learning continue to be integrated and used in various financial services, testing is critical to ensure these technologies do not further entrench systemic discrimination and other harm to consumers,” said Jennifer Chien, senior policy counsel for financial fairness at Consumer Reports. “We need clear standards and testing of AI tools across all industries to ensure consumers reap the potential benefits of AI without the serious risks that can come with these innovative technologies.”

Consumer Reports has analyzed different approaches to public-interest auditing and the current legal hurdles they face, and has generated a set of policy recommendations.

Contact: cyrus.rassool@consumer.org, michael.mccauley@consumer.org