
Consumer Reports supports New York Department of Financial Services’ efforts to protect consumers from algorithmic discrimination in insurance

CR offers recommendations for further strengthening the Department's efforts to ensure consumers are treated fairly by AI systems used by insurers

YONKERS, NY – Consumer Reports applauded the New York Department of Financial Services’ proposed circular letter outlining the steps it expects insurance companies to take to protect consumers from algorithmic discrimination in underwriting and pricing. In a comment letter submitted to the Department, CR highlighted its strong support for the proposal while also suggesting further steps that could be taken to provide fuller protection for consumers.

“Financial institutions rely on artificial intelligence and machine learning for everything from marketing, pricing, and underwriting to claims management, fraud monitoring, and customer service,” said Jennifer Chien, senior financial fairness policy counsel for Consumer Reports. “While these technologies can provide benefits for both insurers and consumers, they also pose some serious risks, including the potential for discriminatory pricing and underwriting decisions that harm consumers.”

Chien continued, “As insurers continue to use AI to make decisions about consumers, we need to make sure these systems are designed to minimize bias so that everyone is treated fairly. We commend the Department’s proactive efforts to ensure transparency, accountability, and fairness in insurance underwriting and pricing and encourage it to take some additional steps to protect consumers.”

The risk of algorithmic discrimination is well established; it occurs when an automated decision system repeatedly produces unfair or inaccurate outcomes for a particular group. While the risk of discrimination exists with traditional models, it is exacerbated by machine learning techniques for automated decision-making that process vast amounts of data using often opaque models.

Biased results can arise from a number of sources, including the underlying data and the model design. Unrepresentative, incorrect, or incomplete training data, as well as biased data collection methods, can lead to poor algorithmic outcomes for certain groups. Data may also be tainted by past discriminatory practices. Bias can likewise be embedded in models through the design process, for example through the improper use of protected characteristics, whether directly or via proxies.
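To make the proxy problem concrete, here is a minimal, hypothetical Python sketch of a first-pass screen for whether a candidate rating variable statistically encodes a protected characteristic. It is not drawn from the CR letter or the Department's circular; the data, variable names, and threshold are illustrative assumptions.

```python
# Hypothetical illustration only: a crude first-pass "proxy screen" that
# checks whether a candidate rating variable is statistically associated
# with a protected characteristic. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: a 0/1 protected attribute and a candidate input feature
# (say, a ZIP-code-derived score) that partially tracks it.
protected = rng.integers(0, 2, size=10_000)
candidate = 0.8 * protected + rng.normal(0.0, 1.0, size=10_000)

# Pearson correlation as a rough signal; a real assessment would pair
# formal statistical tests with domain review, not a single number.
r = np.corrcoef(candidate, protected)[0, 1]
print(f"correlation with protected attribute: {r:.2f}")
if abs(r) > 0.3:  # arbitrary threshold, for illustration only
    print("variable may act as a proxy; flag for closer review")
```

Even a variable that never references a protected characteristic directly can, as in this sketch, carry enough of its signal to reproduce discriminatory outcomes downstream.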

CR’s letter points to a number of real-world examples that illustrate these risks. A fraud monitoring algorithm may systematically flag consumers on the basis of race or proxies for race, as alleged in the recent lawsuit claiming that State Farm’s fraud detection software has a disparate impact on Black consumers. Telematics programs that collect consumer-generated driving data for insurance pricing may produce unintended bias and disparate impacts. And a joint investigation by CR and The Markup found that an advanced algorithm proposed by Allstate appeared to set prices based on a consumer’s willingness to pay rather than on actual risk.

The Department’s proposed circular makes clear that insurers should not use external consumer data and information sources or artificial intelligence systems unless they can establish, through a comprehensive assessment, that their underwriting or pricing guidelines are not unfairly or unlawfully discriminatory. If that assessment finds a disproportionate adverse effect, the insurer must seek out a “less discriminatory alternative” variable or methodology that reasonably meets its legitimate business needs.
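One common way such a "disproportionate adverse effect" can be quantified is an adverse impact ratio comparing outcome rates across groups. The Python sketch below is a hypothetical illustration; the 0.8 cutoff echoes the EEOC four-fifths rule, and nothing in the circular mandates this particular test, data, or threshold.

```python
# Hypothetical illustration only: measuring disparity in underwriting
# outcomes via an adverse impact ratio (minimum group approval rate
# divided by maximum). Synthetic data with a built-in disparity.
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=5_000, p=[0.7, 0.3])

# Assumed approval outcomes: group A approved ~80% of the time, B ~60%.
approved = np.where(group == "A",
                    rng.random(5_000) < 0.80,
                    rng.random(5_000) < 0.60)

rates = {g: approved[group == g].mean() for g in ("A", "B")}
ratio = min(rates.values()) / max(rates.values())
print(f"approval rates: {rates}")
print(f"adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths-style cutoff, illustrative only
    print("disparity flagged; insurer would need to seek a less "
          "discriminatory alternative")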

In its letter to the Department, CR recommended that insurers be required to proactively search for and implement less discriminatory alternatives on an integrated, ongoing basis rather than as part of a one-off assessment after an AI model is developed. CR called on the Department to provide further guidance to insurers on how to conduct such a search and how to select among alternatives so as to reduce disparate impact as much as possible.

CR highlighted its support for the Department’s proposed requirement that insurers be more transparent with consumers about their reasoning and the data they relied upon when policies are refused or canceled. CR urged the Department to ensure that such notices be written in plain language and include specific steps consumers can take to achieve a better result, and recommended that consumers be given the right to request human review of automated decisions.

The Department’s draft circular currently applies to AI used for insurance pricing and underwriting and explicitly excludes algorithms and machine learning used for marketing, claims settlement, and fraud monitoring. CR urged the Department to take further action to address algorithmic discrimination that may arise across all stages of the insurance lifecycle.
