Viewpoint: How the Insurance Industry Can Use AI Safely and Ethically

September 18, 2024

Executive Summary: Artificial intelligence (AI) is on the brink of transforming most aspects of business, including insurance, but it needs to be used responsibly, according to Zywave Chief Technology Officer Doug Marquis, who discusses some of the practical steps insurance companies can take to use AI safely and ethically.

Several types of artificial intelligence are already being adopted across the insurance industry, and they have the potential to deliver extraordinary efficiency savings, opening the door to greater profitability, innovation, and complex problem solving.

While use cases in the insurance industry for AI-based large language models, such as those behind ChatGPT, are still evolving, current examples include summarizing and generating documents, performing data analytics, and acquiring data for risk assessment and underwriting. As an insurtech company, we are also looking at how AI can help us write software in an automated way and exchange data between entities across the insurance ecosystem.

AI Risks

There are, however, multiple risks that can arise when using AI, primarily because it can easily generate errors. For example, AI can ingest statute information from one U.S. state and posit that it applies to all states, which is not necessarily the case. AI can also hallucinate (make up facts) by taking a factual piece of information and extrapolating the wrong answer.

AI can also be biased if it is trained on inherently prejudiced data, producing algorithms that discriminate against groups of people based on, for example, ethnicity or gender. This could result in the AI recognizing that one racial or ethnic group has higher mortality rates, and then inferring that members of that group should be charged more for life coverage.

AI bias also presents a danger when it comes to recruitment, potentially discriminating against people who are from certain regions or socio-economic backgrounds. For these reasons, there is still a critical need for human oversight of AI decisions to ensure inclusivity, fairness and equal opportunity.
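Human oversight of AI decisions can be supported by simple quantitative screens. As an illustration (not a method described in this article), the sketch below applies the "four-fifths rule" disparate-impact check, a common fairness heuristic drawn from U.S. employment-selection guidelines, to hypothetical approval decisions; all function names and data here are illustrative assumptions.

```python
# Minimal sketch of a disparate-impact check (the "four-fifths rule").
# Input: a list of (group, approved) records; names and data are hypothetical.

def disparate_impact_ratio(records, protected, reference):
    """Ratio of approval rates: protected group vs. reference group.

    A ratio below 0.8 is the conventional threshold for flagging
    potential adverse impact for human review.
    """
    def approval_rate(group):
        decisions = [approved for g, approved in records if g == group]
        return sum(decisions) / len(decisions)
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical underwriting decisions for two applicant groups.
applications = [
    ("A", True), ("A", True), ("A", False), ("A", False),
    ("B", True), ("B", True), ("B", True), ("B", False),
]

ratio = disparate_impact_ratio(applications, protected="A", reference="B")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact -- flag for human review")
```

A screen like this does not prove or disprove discrimination; it simply routes borderline outcomes to a person, which is the kind of oversight discussed above.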

New AI Regulations

AI technology has been moving so quickly over the last two years that regulation has been trailing far behind. Legislators are trying to catch up with the breakneck development of AI and the potential risks it might pose, which means insurers must be prepared for a raft of new regulation.

Earlier this year, Colorado became the first state to pass comprehensive legislation regulating developers and deployers of high-risk AI in order to protect consumers. High-risk AI systems are those that are a substantial factor in consequential decisions relating to education, employment, financial or lending services, essential government services, healthcare, housing, insurance, or legal services.

Avoiding Algorithmic Bias

The Colorado AI Act, which goes into effect on Feb. 1, 2026, requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination or bias.

This means that developers have to share certain information with deployers, including harmful or inappropriate uses of the high-risk AI system, the types of data used to train the system, and risk mitigation measures taken. Developers must also publish information such as the types of high-risk AI systems they have released and how they manage risks of algorithmic discrimination.

In turn, deployers must adopt a risk management policy and program overseeing their use of high-risk AI systems, and must complete an impact assessment of those systems and of any modifications they make to them.

Transparency Required

The Colorado legislation also has a basic transparency requirement, similar to the recent EU AI Act, the Utah Artificial Intelligence Policy Act, and chatbot laws in California and New Jersey. Consumers must be told when they are interacting with an AI system such as a chatbot, unless the interaction with the system is obvious. Deployers are also required to state on their website that they are using AI systems to inform consequential decisions concerning a customer.

Moving forward, it’s likely other states will begin adopting similar AI regulations to those in Colorado. However, it’s important to note that many governance measures, such as risk-ranking AI, control testing data, and data monitoring and auditing, are already covered by other laws and regulatory frameworks not only in the U.S., but around the world. Given the expanding layers of legislation at every level, we can expect the AI landscape to only become more complex in the near future. For the time being, there are several actions companies can take to help ensure they are protected.

Five Practical Steps for Insurers

Given the increased usage and advancement of AI over the past few years, the technology is likely here to stay. And although the extra administrative and oversight work required to ensure AI is used safely and ethically may seem daunting, the technology offers tremendous business value, with the potential for automation to drastically improve efficiency and profitability.

There’s no doubt the benefits outweigh the additional work of developing a robust AI protocol. By putting in place stringent guardrails, the insurance industry will reap the rewards of AI while remaining compliant within a quickly evolving regulatory landscape.