Viewpoint: Deepfake Fraud Is On the Rise. Here’s How Insurers Can Respond
It is interesting, and telling, that public concern around generative artificial intelligence has so far centered largely on copyright infringement. The most high-profile lawsuits involving GenAI rest on the idea that this technology will absorb the work of artists and writers without compensation and churn out passable replicas for pennies on the dollar.
But none of this would be a concern if there were no consensus that the technology genuinely is powerful: that it really can manufacture persuasively human-seeming text and images. And while the copyright implications deserve attention, this technology carries far more sinister implications that we need to reckon with, particularly for the insurance industry.
Put simply: insurance professionals cannot do their jobs if they cannot distinguish fact from fiction, and the rise of generative AI tools has all but guaranteed that those lines will blur. The term “deepfake” entered popular consciousness long before the average person had heard of OpenAI, but it is only in recent years, with the arrival of consumer GenAI technology, that deepfakes have begun to pose a real threat.
Today, anyone can easily manufacture fraudulent imagery through text-to-photo or text-to-video generative AI platforms. Most people won’t—but if there is a way to commit fraud, you can be sure some percentage of people will take advantage of it.
The implications here are profound and far-reaching. For insurance professionals, deepfakes have the potential to wreak havoc on daily operations and cause billions in losses. Fighting back requires understanding the nature of the threat and the proactive steps that can prevent it.
Why deepfakes are so dangerous for the insurance industry
An estimated $308.6 billion is lost to insurance fraud annually, a sum that amounts to a quarter of the entire industry’s value. Clearly, the industry struggled to prevent fraud even before the rise of hyper-realistic, easily generated synthetic media. And with the continued spread of back-end automation, things are poised to get a lot worse.
The emerging paradigm in insurance is self-service on the front end and AI-facilitated automation on the back end; by one industry projection, 70% of standard claims will be touchless by 2025. This paradigm has definite advantages for insurers, who can outsource repetitive work to machines while focusing human ingenuity on more complex tasks. But the sad reality is that automation can easily be turned against itself. We are verging on a situation in which images manipulated by AI tools are waved through the system by other AI tools, leading to incalculable losses along the way.
While I wrote about this very topic in 2022, before generative AI frameworks were widely accessible, the threat is no longer hypothetical: fraudsters are already photoshopping registration numbers onto “total loss” vehicles and reaping the insurance payouts. GenAI has also opened the door to fabricated paperwork: in a matter of seconds, bad actors can draw up fake invoices or underwriting appraisals, complete with realistic signatures and letterhead.
It’s true that some degree of fraud is likely inevitable in any industry, but we are not talking about misbehavior on the margins. What we are confronted with is a total epistemological collapse, a helplessness on the part of insurers to assess the truth of any given situation. It’s an untenable situation—but there is a solution.
Turning AI against itself: how AI can help detect fraud
As it happens, this very same technology can be deployed to combat fraudsters—and restore a much-needed sense of certainty to the industry at large.
As we all now know, AI is nothing more or less than its underlying models. Accordingly, the very same mechanisms that allow AI to create fraudulent imagery allow it to detect fraudulent imagery. With the right AI models, insurers can automatically assess whether a given photograph or video is suspicious. Crucially, these processes can run automatically, in the background, meaning insurers can continue to reap the benefits of advanced automation technology—without opening the door to fraud.
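To make this concrete, here is a minimal sketch in Python of one classic image-forensics heuristic that can run automatically in the background: Error Level Analysis (ELA), which recompresses a JPEG and diffs it against the original so that spliced or locally edited regions stand out. This is an illustrative example rather than the specific method described here; the file name and threshold are hypothetical, and a production system would rely on far more sophisticated, purpose-trained models.

```python
# A minimal sketch of Error Level Analysis (ELA). Recompressing a
# JPEG and diffing it against the original highlights regions whose
# compression history differs, a common artifact of splicing or local
# edits. An illustrative heuristic, not a production deepfake detector.
import io
from PIL import Image, ImageChops

def ela_score(path: str, quality: int = 90) -> int:
    """Return the peak per-channel error level for a JPEG image.

    Higher peaks suggest regions that respond to recompression
    differently from the rest of the image, which warrants a
    closer look by a human adjuster.
    """
    original = Image.open(path).convert("RGB")

    # Re-save at a known quality and reload.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise absolute difference between original and resaved,
    # then take the largest maximum across the three color bands.
    diff = ImageChops.difference(original, resaved)
    return max(band_max for _, band_max in diff.getextrema())

# Hypothetical threshold; in practice it would be calibrated against
# known-genuine claim photos for each intake channel.
SUSPICION_THRESHOLD = 40

if __name__ == "__main__":
    score = ela_score("claim_photo.jpg")  # hypothetical file name
    if score > SUSPICION_THRESHOLD:
        print(f"Flag for human review (ELA peak = {score})")
    else:
        print(f"No ELA anomaly detected (ELA peak = {score})")
```

Because a check like this runs in milliseconds, it can sit inside an automated claims pipeline without slowing the touchless flow described above.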
As with other AI innovations, this kind of fraud detection involves close collaboration between systems and employees. If and when a claim is flagged as potentially fraudulent, human employees can evaluate it directly, aided in their decision-making by the information the AI provides. In effect, the AI lays out its case for why it suspects the image or document in question: for instance, by drawing attention to identical images elsewhere on the internet, or to the subtle but distinctive irregularities found in synthetically generated media. In this way, a reasonable determination can be reached quickly and efficiently. A sketch of how such a workflow might be organized follows below.
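One way to organize this human-in-the-loop triage, where automated checks assemble evidence and hand it to a human adjuster rather than issuing a verdict on their own, is sketched below. The detector functions (`reverse_image_match`, `synthetic_artifact_score`) are hypothetical stand-ins for real services, such as a reverse image search API or a trained synthetic-media classifier.

```python
# A sketch of human-in-the-loop fraud triage: automated checks attach
# their evidence to a claim review, and flagged claims are routed to a
# human adjuster rather than triggering an automatic denial.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    check: str         # which detector raised the concern
    detail: str        # human-readable explanation for the adjuster
    confidence: float  # detector's own confidence, 0.0 to 1.0

@dataclass
class ClaimReview:
    claim_id: str
    evidence: list[Evidence] = field(default_factory=list)

    @property
    def needs_human_review(self) -> bool:
        return bool(self.evidence)

def screen_claim(claim_id: str, image_path: str) -> ClaimReview:
    review = ClaimReview(claim_id)

    # Hypothetical detector 1: reverse image search finds the same
    # photo already published elsewhere on the internet.
    if reverse_image_match(image_path):
        review.evidence.append(Evidence(
            check="reverse_image_search",
            detail="Identical image found at a public URL predating the claim.",
            confidence=0.95,
        ))

    # Hypothetical detector 2: a classifier trained to spot the subtle
    # statistical irregularities left behind by generative models.
    score = synthetic_artifact_score(image_path)
    if score > 0.8:
        review.evidence.append(Evidence(
            check="synthetic_artifact_model",
            detail=f"Generation-artifact score of {score:.2f} exceeds threshold.",
            confidence=score,
        ))
    return review

# Stubs so the sketch runs end to end; a real system would call out
# to actual detection services here.
def reverse_image_match(image_path: str) -> bool:
    return False

def synthetic_artifact_score(image_path: str) -> float:
    return 0.85

if __name__ == "__main__":
    result = screen_claim("CLM-1042", "claim_photo.jpg")  # hypothetical IDs
    if result.needs_human_review:
        for e in result.evidence:
            print(f"[{e.check}] {e.detail} (confidence {e.confidence:.2f})")
```

The key design choice is that the system never decides; it only explains. Each `Evidence` record carries the detector's reasoning, so the adjuster reviews an argument rather than a bare score.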
Given the damage deepfakes have already caused, it is sobering to remember that this technology is in its relative infancy. And there is little doubt that, in the months and years to come, bad actors will attempt to wring every advantage they can out of each new development in GenAI’s evolution. Preventing them from doing so requires fighting fire with fire, because only cutting-edge tools can hope to combat cutting-edge fraud.