The Insurance Industry’s Equality Problem

October 4, 2021

The insurance industry has a problem.

To determine how to price coverage for customers, insurers must find ways to put people into buckets based on their risk type and expected losses. Historically, the industry has relied on stand-in variables like credit score, education level, occupation and ZIP code to classify risk and determine pricing.

Now, consumer advocates, regulators and customers alike want the industry to examine to what extent this approach has unfairly disadvantaged communities of color, immigrants and poor communities.

Washington Insurance Commissioner Mike Kreidler recently told Bloomberg that the use of credit scores in setting casualty insurance rates is “a surrogate for redlining,” referring to the discriminatory practice of denying services (typically financial) to residents of certain areas based on their race or ethnicity. Redlining has contributed to systemic racial inequality, displacement and exclusion throughout society.

Meanwhile, the National Association of Insurance Commissioners has made diversity and inclusion a strategic regulatory priority for 2021. And consumers, too, are questioning why carriers use certain kinds of information to determine premiums for insurance policies that are required by law, such as auto insurance.

Many insurers are struggling with how to address these issues. Insurers do not want to unfairly discriminate against their customers, but how can we balance the need to segment risk against the goal of addressing systemic inequality?

Have the Hard Discussions

Insurance executives must be able to have frank and open discussions about the impact their underwriting decisions have on racial equity, social disparities and financial exclusion. For example, what implications does using a variable like education have on affected groups? These are hard questions, but they must be asked and answered if we want to make progress on this issue as an industry.

The insurance industry needs to acknowledge that systemic inequality continues to impact consumers today and that this inequality is built into their businesses.

Consider the practice of using points on a driver’s license to decide whether to charge someone more for car insurance. This might be fair if we lived in a society that policed everyone equally.

However, when we think about this from the context of over-policing certain communities, we quickly realize that using a variable like this disproportionately harms people of color. An open discussion among leaders in the insurance industry can pave the way for us to consider alternatives.

Telematics, for example, allows carriers to directly track erratic and reckless driving behaviors like running stop signs at 2 a.m., hard cornering and weaving across lanes. These data are a better predictor of risk, regardless of whether someone gets pulled over or not.
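To make the idea concrete, here is a minimal sketch of behavior-based scoring. Everything in it is hypothetical for illustration: the event names, the weights and the per-mile normalization are invented, not any carrier's actual model.

```python
# Hypothetical sketch: score a driver purely on observed driving events,
# with no demographic or proxy variables involved.
from collections import Counter

# Illustrative weights reflecting the relative riskiness of each behavior.
EVENT_WEIGHTS = {
    "ran_stop_sign": 5.0,
    "hard_brake": 1.5,
    "lane_weave": 3.0,
    "late_night_trip": 1.0,
}

def behavior_score(events, miles_driven):
    """Return risk points per 100 miles, computed only from driving events."""
    counts = Counter(events)
    points = sum(EVENT_WEIGHTS.get(event, 0.0) * n for event, n in counts.items())
    return 100.0 * points / miles_driven

# Example: a telematics feed for one driver over 500 miles.
events = ["hard_brake", "hard_brake", "lane_weave", "late_night_trip"]
print(round(behavior_score(events, miles_driven=500), 2))  # 1.4
```

The point of the sketch is the input, not the math: every feature the score consumes is something the driver actually did on the road, observed directly rather than inferred from who they are or where they live.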

The Role of Data

This does not mean we should remove every proxy variable out there and start to price insurance based only on individual behaviors. This is not realistic for a number of reasons. With a few exceptions, the industry cannot entirely link pricing to individual behaviors. Legacy technology systems can also limit what is feasible and economical for a traditional insurer to undertake immediately.

Still, we can strive to remove as many of these suspect discriminatory proxy variables as realistically possible from the underwriting and pricing process.

More data means more opportunity to de-bias segmentation. Insurance, after all, is a numbers business. The industry can direct investment toward the parts of the value chain that produce data enabling more insightful scoring.

In some cases, this may mean investing in telematics and IoT devices to support underwriting. In other cases, it might make more sense to partner with a provider to get scoring or devices.

Newer insurers may have an advantage here.

At Root Insurance, underwriting is built on telematics and data infrastructure. Older, larger companies will have to overcome decades of reliance on infrastructure built on analyzing demographic data to price policies.

These are strategic decisions and can involve large investments, so insurers have to be thoughtful about how to approach these questions without negatively impacting their businesses. We must recognize that while there will be clear answers for some of these questions, others will be more complex.

Driving Forces

Identifying and focusing on business opportunities can help drive the industry toward better outcomes.

Consider the case of Infinity Insurance. For years, insurance underwriters have avoided non-standard auto insurance policies. This type of coverage is reserved for drivers with certain risk factors, including new or young drivers, drivers with low or no credit scores, and drivers with an unusual driver’s license status.

But this classification negatively and unfairly impacted first-generation Latino communities, who were not necessarily high-risk drivers but instead had some of these unusual circumstances. Infinity Insurance understood the nuances of this underserved community and reached out to these drivers with affordable non-standard policies offering bilingual customer support and sales agents. Infinity has since grown to become the second-largest writer of non-standard auto insurance in the U.S.

By rethinking segmentation and using variables more closely aligned with actual behavior, insurers can find good risks worth underwriting.

Thoughtful regulators can also focus on reducing reliance on pricing factors that impose a higher tax on under-represented communities. But insurers will have to work to find the right balance to maintain a healthy insurance market. No one wants a situation where insurers are pulling out because it is no longer profitable for them to do business, and no one wants their carrier to go into receivership either.

It’s worth noting that as we increasingly apply advanced technologies like AI to insurance, we are moving into uncharted territory. We must beware of automating bias by baking it into AI models. Feeding biased proxy data into an AI system does not solve the problem; it simply scales historically biased and unequal outcomes through an unapologetic, unemotional model.
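One common way to surface this kind of baked-in bias is a disparate-impact check, such as the "four-fifths rule" used in U.S. employment-selection guidance. The sketch below is a hypothetical illustration, with made-up group names and outcome data, of how a carrier might audit whether a model's favorable outcomes fall disproportionately on one group:

```python
# Hypothetical audit sketch: compare favorable-outcome rates across groups
# and flag when the lowest rate falls below 80% of the highest (four-fifths rule).

def disparate_impact_ratio(outcomes_by_group):
    """Ratio of the lowest group's favorable-outcome rate to the highest's."""
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
    return min(rates.values()) / max(rates.values())

# 1 = offered the standard (cheaper) rate, 0 = surcharged. Data is illustrative.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1],  # 87.5% favorable
    "group_b": [1, 0, 1, 0, 1, 0, 1, 1],  # 62.5% favorable
}

ratio = disparate_impact_ratio(outcomes)
print(round(ratio, 3), "flag for review" if ratio < 0.8 else "ok")
```

A check like this does not fix a biased model, but it makes the inequality visible so the hard discussions described above can happen over evidence rather than intuition.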

It is important that the industry has these conversations, so it can move toward greater fairness and equality in risk classification and pricing — for the good of the industry and its customers.