Analyst Warns Regulatory Battle Over AI Bias to Grow; Lemonade Argues It’s Fair
As insurers introduce artificial intelligence into pricing and claims handling activities, regulatory focus on disparate impact will grow into “one of the biggest topics of the next 10 years,” a well-known insurance analyst predicted at an industry conference.
V.J. Dowling, managing partner of Dowling & Partners Securities LLC, made the forecast during a session of the 2020 Joint Industry Forum last month titled, “Insurance Vision: Seeing Beyond 2020,” responding to a question from moderator David Sampson, president and chief executive officer of the American Property Casualty Insurance Association.
While U.S. regulators talk about the importance of innovation and enabling technology on the one hand, efforts to restrict “even existing actuarial factors that can be used in underwriting” continue, Sampson said, asking Dowling to comment on what will play out as carriers add data analytics to pricing toolkits.
“What’s fascinating today is how the whole AI comes into play,” said Dowling. Offering a personal example, Dowling imagined image recognition software capturing his facial characteristics as part of a pricing algorithm.
“So, I’m an Irish guy, and maybe I drink too much. All of a sudden, if Irish is a protected class and I am buying life insurance and they look at my face, they say, he’s a little chubbier, he drinks too much as do all those other Irish guys, and you have to pay more for insurance. That’s illegal because there’s a disparate impact on that group,” he said.
“And this is just beginning. It’s not just on underwriting. It’s on claims…What is going to happen when one particular group disproportionately gets sent to the special investigation unit?” he asked.
Dowling said that 30 years ago, personal lines insurers might have put insureds into broad risk buckets to set a price for each. “Technology and data has allowed the number of buckets to increase until arguably, you get to a point where each individual person has their own price based on their specific characteristics. And what that means is, you get a much bigger dispersion of rates from high to low. The subsidization starts going away,” he said.
“Then, on top of that, AI goes in and looks at data and comes up with a price. It’s not saying because you lived here. It’s doing stuff, you don’t even know what it’s doing. But in the end what happens is, you get the infamous disparate impact—that certain protected groups end up paying more.”
Dowling recommended that executives in the audience read a recent blog item written by Daniel Schreiber, the CEO of renters insurer Lemonade, which described the same history of pricing—from large buckets to tiny ones made possible by AI. Unlike Dowling, however, Schreiber argues that “algorithms we can’t understand can make insurance fairer” in the blog post titled, “AI Can Vanquish Bias.”
Following the logic of the article, Schreiber would argue that with AI, each Irishman would not be treated as a stereotypical drinking Irishman. Instead, an AI algorithm would identify someone’s proclivity to drink and “charge them more for the risk that this penchant actually represents.” In the article, Schreiber actually uses his own religious background to make the point, observing that Jewish people engage in the practice of Shabbat candle-lighting every Friday to usher in the Sabbath and “burn through about two hundred candles over the eight nights of Hanukkah.”
Writes Schreiber: “The fact that such a fondness for candles is unevenly distributed in the population, and more highly concentrated among Jews, means that, on average, Jews will pay more. It does not mean that people are charged more for being Jewish.”
He underscored his point with the observation that while all cows have four legs, not all things with four legs (chairs, for example) are cows. Then, he invoked the words of Dr. Martin Luther King Jr.: “We dream of living in a world where we are judged by the content of our character.”
Schreiber said everyone wants to be assessed as an individual, not by reference to racial, gender, or religious markers. “If the AI is treating us all this way, as humans, then it is being fair. If I’m charged more for my candle-lighting habit, that’s as it should be….,” he wrote.
Schreiber’s blog also offers regulators a way to recognize unfair pricing; he proposes a “uniform loss ratio test” for pricing outcomes. According to Schreiber, a pricing system “is fair—by law—if each of us is paying in direct proportion to the risk we represent.”
Regulators can tell whether this is the case because loss ratios will be constant across the customer base when an insurance company charges all customers a rate proportionate to the risks they pose, according to the Lemonade executive. “We’d expect to see fluctuations among individuals, sure, but once we aggregate people into sizable groupings—say by gender, ethnicity or religion—the law of large numbers should kick in, and we should see a consistent loss ratio across such cohorts. If that’s the case, that would suggest that even if certain groups—on average—are paying more, these higher rates are fair, because they represent commensurately higher claim payouts,” he suggests.
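The mechanics of such a test are straightforward. The following is a minimal sketch of how cohort-level loss ratios could be computed and compared; the cohort labels, premium figures, and claims figures are hypothetical, and the article does not specify how Schreiber would implement the test or what tolerance would count as “consistent.”

```python
from collections import defaultdict

# Hypothetical policy records: (cohort_label, premiums_collected, claims_paid).
# Real cohorts would be aggregated by attributes such as gender, ethnicity
# or religion, per the proposal; these numbers are purely illustrative.
policies = [
    ("group_a", 1200.0, 900.0),
    ("group_a", 1000.0, 700.0),
    ("group_b", 1500.0, 1150.0),
    ("group_b", 1300.0, 1000.0),
]

def cohort_loss_ratios(records):
    """Return the loss ratio (claims paid / premiums collected) per cohort."""
    premiums = defaultdict(float)
    claims = defaultdict(float)
    for cohort, premium, paid in records:
        premiums[cohort] += premium
        claims[cohort] += paid
    return {c: claims[c] / premiums[c] for c in premiums}

ratios = cohort_loss_ratios(policies)
# Under the proposed test, roughly equal ratios across cohorts would suggest
# each group's rates are proportionate to its claims, even if one cohort's
# average premium is higher than another's.
spread = max(ratios.values()) - min(ratios.values())
```

A regulator applying the test would presumably flag a pricing system when `spread` exceeds what random fluctuation alone could explain; the article does not say how that threshold would be set.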
No Easy Answers
While Schreiber promotes the “uniform loss ratio test” as being “simple, objective and easily administered,” Dowling suggests that easy answers aren’t coming in the near term. “What’s going to happen when all of a sudden the underwriting and the claims start coming up with disparate impact? Watch this. It’s going to be one of the biggest topics of the next 10 years,” he said.
“We are only beginning to see what is going to happen.” While West Coast technology companies “think they can use this technology, we’re about to have a huge battle” that has already started in New York, he said, referring to a letter that the New York Department of Financial Services wrote to life insurers last year. “If you haven’t read it, you should. It basically says, you can do this, but if it has a disparate impact on the end result, you can’t do it. To me, there was a double negative. You effectively can’t do it,” he concluded.
New York Regulator
The actual language of the letter Dowling seemed to be referencing, Insurance Circular Letter No. 1 (2019), issued on Jan. 18, 2019, begins by stating that the N.Y. department “fully supports” innovation and the use of technology to improve access to financial services.
“Indeed, insurers’ use of external data sources has the potential to benefit insurers and consumers alike by simplifying and expediting life insurance sales and underwriting processes,” the letter says. “External data sources also have the potential to result in more accurate underwriting and pricing of life insurance….”
But, the circular letter continues, “an insurer should not use an external data source, algorithm or predictive model for underwriting or rating purposes unless the insurer can establish that the data source does not use and is not based in any way on race, color, creed, national origin, status as a victim of domestic violence, past lawful travel, or sexual orientation in any manner, or any other protected class.”
It continues: “An insurer may not simply rely on a vendor’s claim of non-discrimination or the proprietary nature of a third-party process as a justification for a failure to independently determine compliance with anti-discrimination laws. The burden remains with the insurer at all times.”
Lawyers commenting on the letter note that such rules have more typically been applied to homeowners insurers.
The circular letter adds that an insurer should not use any of those innovations “unless the insurer can establish that the underwriting or rating guidelines are not unfairly discriminatory.” The letter outlines rules about transparency in explanations of pricing results from insurers using predictive models and external data to customers. “The reason or reasons provided to the insured or potential insured must include details about all information upon which the insurer based any declination, limitation, rate differential or other adverse underwriting decision, including the specific source of the information upon which the insurer based its adverse underwriting decision,” and insurers can’t hide behind the “proprietary nature of a third-party vendor’s algorithmic processes to justify the lack of specificity.”
At the forum, APCIA’s Sampson asked a second panelist, Hayley Spink, head of global operations at Lloyd’s, to describe the headaches that insurers—and all companies—faced in coming into compliance with the General Data Protection Regulation in 2018. GDPR is a regulatory and legal framework that covers how companies handle, collect and process personal data, levying monetary fines on companies that are not in compliance.
“Especially in our industry, we deal with personal data all the time and we share that personal data between ourselves and third parties”—MGAs, coverholders, brokers, Lloyd’s, insurers and regulators. “So this has had a big, big impact across the EU,” Spink said. GDPR affords individuals protection over how their personal data is used, but “it starts to create a bit of a tension between how we [in the insurance industry] make best use of our customer data to ensure we’re getting them products [they] need, [while] making sure we’re using that appropriately as well.”
This is an edited version of an article originally published by Carrier Management, Wells Media’s publication for the P/C insurance C-suite.