Emerging Risks to Watch: AI, Data Centers, and Autonomous Vehicles

May 4, 2026

Artificial intelligence has moved rapidly from promise to operational reality. For insurers, it’s already influencing underwriting, claims, risk selection, and customer service. As with every major technological inflection point, the conversation around AI often swings between optimism and apprehension.

We examine how the expanding use of AI is reshaping the risk landscape for insurance–not only within the industry but across the broader economy and physical world. From cybersecurity and data privacy concerns to the infrastructure required to support AI, to its growing role in autonomous systems, understanding where AI-related risks may emerge is essential to managing them effectively.

Increased Adoption, Increased Risk?

Artificial intelligence is having a seismic impact across numerous industries. The wide-ranging–and growing–list of AI capabilities and applications makes it difficult to determine how the transformative technology may impact potential losses across specific insurance lines of business.

But a closer examination of available data on the businesses using AI provides a starting point for understanding how large a footprint AI may have, and where. Public reports of AI incidents can offer insight into the types of things that can go wrong with AI and machine learning systems, while ongoing litigation can show us the types of allegations and causes of action that might be brought against AI developers and users as the technology evolves.

In addition to understanding which industries are leveraging AI, public reports of AI incidents show the types of issues businesses may face when adopting this technology. According to an analysis by the Verisk Core Lines Emerging Issues team, approximately 77% of AI incidents reported in 2025 had some form of potential insurance impact. Such incidents can be initiated by malicious actors–such as cybercriminals leveraging AI agents to carry out phishing attacks–or through simple human error.

‘[U]nderstanding where AI-related risks may emerge is essential to managing them effectively.’

A recently reported security incident involving data access from a popular enterprise generative AI tool highlights how the mass adoption of large language models (LLMs) and the concentration of the office software market may increase the potential for systemic liability exposure across cyber, professional, and management liability lines. Earlier this year, a company that is both a leading provider of office software and one of the world's leading GenAI developers revealed that a flaw in its GenAI code enabled the tool to access, read, and summarize information that had been labeled confidential. The bug reportedly enabled the GenAI tool to bypass data security protocols designed to ring-fence that information from it.

The company in question has since announced a fix for the issue; however, this reported incident is only the latest in a series of data privacy issues that have dogged LLMs since the public release of the first GenAI chatbot in late 2022.

The Data Center Boom

Underpinning much of this technological adoption across numerous industries is a vast and growing network of brick-and-mortar data centers. Though these computing facilities have existed for decades, today's iterations–requiring, in some cases, thousands of servers and miles of cabling and connection equipment–have quickly developed into a critical component of the advancing digital economy.

Accompanying this boom in computing power is a massive increase in the demand for data center construction across the U.S. According to one projection, data center construction spending is poised to increase 23% this year over last.

The rise in data center construction may present a potential growth market for brokers and reinsurers. One estimate suggests that these facilities could generate up to $10 billion in new premium in 2026.

This rapid growth is not without its share of potential risks and challenges. For insurers, these may include power failures that result in business interruption, and adjacent exposures such as fire and supply chain risks. Data center development may also increase exposure across general liability, as construction firms and contractors line up to serve this growing market.

The question of power is a primary concern. According to one growth projection, U.S. data center electricity needs could reach as high as 106 GW by 2035. (For a sense of perspective, the U.S. had about 25 GW of operating data center capacity just two years ago.) Even if that forecast proves too ambitious, data centers could still account for up to 12% of peak U.S. electricity demand by 2028.

The large demand for power from new and larger data centers may, according to the North American Electric Reliability Corporation (NERC), add strain to the U.S. power grid over the next several years, potentially putting much of the country at elevated or high risk of energy shortfalls, brownouts, or blackouts.

Outages–whether caused by grid strain or by failures within the facilities themselves–can also increase the risk of business interruption losses. For example, a 2025 cloud region outage in Northern Virginia led to a preliminary insured loss estimate of $581 million.

Another lingering question regarding data centers is water consumption. Estimates suggest that the average data center uses around 300,000 gallons of water per day, while larger “hyperscale” facilities can use up to 5 million gallons each day to cool servers and related IT equipment. These figures have raised public concerns over potential water shortages, especially in those communities where water scarcity is already acute.

AI Behind the Wheel

Many of today's technological innovations depend on AI technology to operate and scale, and autonomous vehicles (AVs) are no exception. In cities and municipalities across the U.S., self-driving taxis–or "robotaxis"–are driving alongside human motorists. At the same time, autonomous semi-trucks are being tested on many of the nation's lengthy highways. This technology depends on an intricate orchestra of sensors, cameras, and radar systems, and much of that orchestration is conducted by AI.

According to a recent report, some industry experts also believe that advances in end-to-end and hybrid AI learning models could lead to more human-like driving behavior by self-driving vehicles.

The continued advance of AI driving capabilities underscores an emerging reality for auto risk: AVs may drive in a more human-like way, but they could also fail in ways that a human driver would not. A recent study by researchers at UC Santa Cruz found that embodied AI systems increasingly integrated into the physical world–such as self-driving cars, drones, and autonomous robots–could be vulnerable to hijacking by malicious actors through the injection of misleading text into real-world environments, such as road signs.

These issues raise questions about the future of accident liability, potentially shifting responsibility from drivers to fleet operators or manufacturers. This could, some argue, cause auto insurance products to pivot away from third-party liability and eventually resemble product liability.

As insurers evaluate the next phase of AI adoption, the challenge is less about any single risk and more about how these exposures intersect. AI systems concentrate digital, physical-infrastructure, and liability risks in new ways–whether through reliance on shared platforms, the growing footprint of data centers, or autonomous technologies. Navigating this shift will require insurers to continuously reassess exposures, refine coverage frameworks, and evaluate how these technologies reshape loss potential.

Shavel is president and chief executive officer of Verisk, a data analytics and technology partner to the global insurance industry. He has nearly 30 years of experience advising and leading publicly traded companies.