Welcome to Consumer Reports Advocacy

For 85 years, CR has worked for laws and policies that put consumers first. Learn more about CR’s work with policymakers, companies, and consumers to help build a fair and just marketplace at TrustCR.org.

Consumer Reports highlights potential benefits and risks of artificial intelligence used by financial institutions for consumers

CR details safeguards needed to ensure consumers are protected from risks 

WASHINGTON – In a comment letter submitted to the Treasury Department, Consumer Reports highlighted some of the potential benefits that artificial intelligence used by financial institutions can provide to consumers, but also warned of the risks the technology can pose depending on how it is deployed. CR’s letter, submitted in response to a request for information by the Department, details a number of safeguards that policymakers should require institutions to adopt so that consumers can enjoy the benefits of AI while the risks are mitigated.

CR’s letter points out that financial institutions are using AI in a number of ways, ranging from powering digital chatbots and virtual assistants, to augmenting or even automating credit underwriting, to digital marketing and fraud monitoring. AI and machine learning (ML) offer many potential benefits for consumers, including increasing access to credit for traditionally underserved consumers with limited credit histories, expanding the availability of new and innovative products potentially at lower cost, and providing faster customer service.

“Recent advances in AI and machine learning are game changing for both industry and consumers, but these new technologies are a double-edged sword,” said Jennifer Chien, senior policy counsel for Consumer Reports. “AI can be used by banks and other financial institutions to improve customer service and expand access to credit, but it can also reinforce bias and be used to target vulnerable consumers with predatory products.”

Chien continued, “We need clear and strong safeguards to mitigate the risks posed by artificial intelligence so it is used in a safe and responsible manner and consumers can reap the benefits and avoid the harm these advanced technologies can cause.”

AI/ML models can be used to increase access to finance, but they can also perpetuate and exacerbate bias against some consumers. Digital targeted marketing can be used to expand access to services, but also for aggressive marketing of predatory products to vulnerable consumers that exploits behavioral biases. Generative AI may allow for quicker responses to customer service queries, but it may also produce inaccurate responses or prevent consumers from reaching live agents to resolve urgent matters.

CR’s letter to the Treasury Department details a number of policy recommendations for regulators to ensure the safe and responsible use of AI/ML:

  • Provide greater clarity and consistency on requirements and expectations for financial institutions regarding internal transparency into AI/ML models, including use cases where inherent interpretability may be required;
  • Require financial institutions to clearly inform consumers when an AI tool is being used to help make a consequential decision about them;
  • Require financial institutions to provide consumers with a clear, specific explanation when they receive a consequential adverse decision made by AI;
  • Provide consumers with the right to appeal an AI-driven decision for human review;
  • Establish good practices for digital chatbots and virtual assistants, including clear disclaimers to consumers when interacting with such tools, offering an easily accessible means to escalate to human assistance, and providing clear information regarding the capabilities and limitations of such tools;
  • Ensure financial institutions utilizing consumer-facing AI systems meet high consumer protection standards, including regarding accuracy;
  • Provide clearer guidance on acceptable and prohibited practices in digital targeted marketing;
  • Provide greater clarity on the full range of measures financial institutions can and should be employing throughout each stage of the model development pipeline to directly address potential sources of algorithmic discrimination;
  • Clearly establish the expectation that financial institutions should conduct robust searches for less discriminatory alternative (LDA) models for credit products, as well as in other high-risk contexts, and provide guidance on what a robust LDA search entails;
  • Establish strong and more consistent safeguards to ensure transparency, accountability, and fairness regarding the use of AI/ML in the insurance sector; and
  • Provide greater clarity on the appropriateness of optimized pricing models.

For a more detailed explanation of some of the risks of AI/ML and the safeguards needed to mitigate them, see CR’s letter to the Treasury Department.

Michael McCauley, michael.mccauley@consumer.org
