Generative AI: A Double Agent Serving Good and Evil in the World of Claims
Claims professionals need to adapt to the AI evolution at a fast pace as the same technologies used to fight claims fraud are also being used to perpetuate it, according to experts.
“It’s truly become an arms race,” said Shane Riedman, president of anti-fraud analytics at Verisk. “It’s an arms race of good versus evil.”
He explained that while insurers are continuously working to identify and defend against the risk, criminals are working just as hard to leverage rapidly evolving AI technology to increase it.
Attestiv CEO Nicos Vekiarides said that while the insurance industry should be proud of how many claims are now settled on photos alone, the efficiencies created in the space also heighten the risk that those photos have been manipulated or altered. Attestiv provides AI-powered digital content analysis and forensics, working to identify deepfakes, fraud and cyber threats in photos, videos and documents.
“We are seeing fraudulent images that are being used as part of insurance claims—and a lot of them,” he said.
Detecting AI-Related Claims Fraud
Insurance companies must ensure they are continuously updating deepfake detection tools, said Sam Krishnamurthy, chief technology officer at Turvi, a new SaaS technology provider for property/casualty claims, initially developed in-house at Crawford & Company.
“Deepfake technology is evolving at such a rapid pace that the major challenge for insurance companies will be adopting the technology themselves, rather than relying on traditional, outdated methods, which will no longer be sufficient to detect anomalies indicative of falsified information,” he said.
Riedman said expecting an adjuster to carefully examine every photo or document for what are now only trace indicators of tampering or AI generation is not always feasible and can create an unrealistic workload for adjusters.
“We think the best defense is an application of a detection analytics program that is looking across all of the images and all of the documents, all digital media—a sort of dragnet approach to monitor for indicators of fraud,” he said.
However, these programs take time to implement and operationalize, so while insurers are on that journey, he said it’s important to double down on the basics.
“We find claims that have digital media fraud also have other suspect elements—the more traditional red flags,” Riedman said.
The Responsibility Is on Insurers
Darcy Rittinger, chief risk officer at insurtech CoverGenius, said insurance providers have a significant responsibility to detect and prevent fraud in the claims process, which stems from fiduciary duties to policyholders and shareholders, and specific regulatory requirements set by government authorities. Now, detecting deepfakes will be part of those obligations to prevent fraud.
“There will likely be more access to deepfake detection technologies, which will become an essential part of claims platforms,” said Rittinger. “That said, these advancements are not always accessible to the insurance industry, and insurers have historically lacked the resources to develop or acquire cutting-edge tech tools independently.”
Rittinger said regulatory incentives for the insurance sector to invest in detection technologies would be beneficial, but she sees legislation first addressing individuals, the public and the public interest before it takes aim at insurers and the unique challenges deepfakes pose to the industry.