Consumer Reports was invited to testify before the New York Assembly Committee on Consumer Affairs and Protection and the Committee on Science and Technology for a hearing on artificial intelligence.
CR submitted written testimony addressing transparency, fairness and bias, implications for consumers’ privacy, validity issues and ‘snake oil’ products, and implications for third-party testing, among other subjects. CR also provided policy recommendations, including:
- Require clear disclosure when an algorithmic tool is being used to help make a consequential decision about a consumer, like whether they qualify for a loan, are selected for a rental apartment, get a promotion, or see their insurance rates or utility bills go up.
- Require companies to explain why a consumer received an adverse decision when an algorithmic tool was used to help make a consequential decision about them. Explanations must be clear enough that, at a minimum, a consumer could tell whether the decision was based on inaccurate information, and should include actionable steps consumers can take to improve their outcome. If a tool is so complex that the company using it cannot provide specific, accurate, clear, and actionable explanations for the outputs it generates, it should not be used in consequential decisions.
- Require disclosure of AI-generated content when it might cause confusion or deception, including text, video, audio, and images. For example, chatbots that a reasonable person might misunderstand as a human-to-human interaction should be labeled.
- Prohibit algorithmic discrimination. Existing civil rights laws prohibit many forms of discrimination, but it is worth identifying any gaps and clarifying how these laws apply to companies developing and deploying AI and algorithmic tools, how those companies are expected to comply, and how the laws will be enforced for particular uses of AI.
- Establish an affirmative duty for companies to search for and implement less discriminatory algorithms for use in consequential decisions. Often, alternative algorithms exist that are similarly effective and less discriminatory. Companies should be required to treat fairness as a performance metric and search for less discriminatory alternatives throughout the model development process (a minimal sketch of what such a search can look like appears after this list).
- Require companies to have appropriate data governance frameworks in place to address bias. This includes ensuring that data is appropriate and suitable for its intended use, and addressing any issues with the accuracy, completeness, or representativeness of the data, as well as historical bias in the data (an illustrative representativeness check also follows this list).
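To make the less-discriminatory-alternative search concrete, here is a minimal, hypothetical sketch in Python. The synthetic data, the demographic-parity gap as the fairness metric, and the 2-point accuracy tolerance for "similarly effective" are all assumptions for illustration; none of them come from CR's testimony.

```python
# Hypothetical sketch of a less-discriminatory-alternative search:
# train several candidate models, score each on accuracy AND a fairness
# gap, then pick the least discriminatory model among those that are
# similarly effective. Data and thresholds below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: features X, labels y, and a protected attribute `group`.
n = 5000
X = rng.normal(size=(n, 5))
group = rng.integers(0, 2, size=n)  # two demographic groups, for illustration
y = (X[:, 0] + 0.5 * group + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

def demographic_parity_gap(y_pred, g):
    """Absolute difference in positive-outcome rates between the two groups."""
    return abs(y_pred[g == 0].mean() - y_pred[g == 1].mean())

# Candidate models: here, just different regularization strengths. A real
# search would also vary features, architectures, thresholds, etc.
candidates = [LogisticRegression(C=c, max_iter=1000) for c in (0.01, 0.1, 1.0, 10.0)]

results = []
for model in candidates:
    model.fit(X_tr, y_tr)
    preds = model.predict(X_te)
    acc = (preds == y_te).mean()
    gap = demographic_parity_gap(preds, g_te)
    results.append((model, acc, gap))

best_acc = max(acc for _, acc, _ in results)
TOLERANCE = 0.02  # "similarly effective": within 2 points of the best accuracy

# Among similarly effective candidates, choose the least discriminatory one.
viable = [r for r in results if r[1] >= best_acc - TOLERANCE]
chosen = min(viable, key=lambda r: r[2])
print(f"chosen accuracy={chosen[1]:.3f}, parity gap={chosen[2]:.3f}")
```

The key design point is that fairness enters the selection step alongside accuracy, rather than being checked once at the end, which is one way to operationalize "fairness as a performance metric."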
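Similarly, one small piece of the data governance CR describes, checking whether training data is representative of a reference population, might look like the following sketch. The `representativeness_report` helper, the reference shares, and the 5-point threshold are all hypothetical.

```python
# Hypothetical data-governance check: compare the demographic make-up of a
# dataset against reference population shares and flag groups whose share
# differs by more than a chosen threshold.
from collections import Counter

def representativeness_report(records, key, reference_shares, max_gap=0.05):
    """Flag groups whose observed share differs from the reference
    population's share by more than `max_gap` (absolute difference)."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    flags = {}
    for group, ref_share in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - ref_share) > max_gap:
            flags[group] = {"observed": round(observed, 3), "expected": ref_share}
    return flags

# Example: a dataset that skews heavily toward one region gets flagged.
data = [{"region": "urban"}] * 900 + [{"region": "rural"}] * 100
print(representativeness_report(data, "region", {"urban": 0.7, "rural": 0.3}))
# -> {'urban': {'observed': 0.9, 'expected': 0.7},
#     'rural': {'observed': 0.1, 'expected': 0.3}}
```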
For more, read the testimony that CR submitted to the Assembly committees.