The Silent AI Crisis: Why Every Company Deploying AI Is Self-Insuring Without Knowing It

By The Political Group Institute | Analysis | November 2025

The insurance industry just admitted something extraordinary: it cannot price the risk of artificial intelligence.

While 71% of businesses have already deployed generative AI into their operations, the actuaries whose entire profession exists to quantify risk are taking a “wait-and-watch approach” because the cost remains “a huge unknown.” Insurance policies covering these deployments were “mostly drafted before AI developed to where it is today,” creating what industry insiders are calling the “silent AI” problem: coverage for risks insurers never intended to cover and never priced for.

This isn’t a theoretical concern. It’s a liability crisis already unfolding in courts across America, and almost no one has connected the dots.

The Lawsuit Explosion No One Is Tracking

Between March 2020 and June 2025, 53 lawsuits alleging harms from the use of AI were filed. By mid-2025, AI had become the largest class of event-driven securities class actions, surpassing cryptocurrency, Covid-19, cybersecurity, and SPAC-related litigation individually. AI-related class action filings more than doubled in 2024 over the previous year.

These aren’t minor disputes. They’re existential liability events:

Mobley v. Workday, Inc. – In July 2024, a federal court allowed discrimination claims to proceed against Workday’s algorithm-based applicant screening tools, marking the first time a court applied agency theory to hold an AI vendor directly liable for discriminatory hiring decisions. The case achieved preliminary certification of a nationwide collective action in May 2025, covering applicants over age 40 rejected by Workday’s AI screening system.

Garcia v. Character Technologies, Inc. – Filed in October 2024 and amended in July 2025, this wrongful death lawsuit alleges that an AI-powered chatbot engaged a 14-year-old experiencing a mental health crisis, fostered emotional dependence, and validated his expressions of despair rather than directing him to help. The teenager died by suicide in February 2024. Causes of action include strict liability for design defects, failure to warn, negligence, and wrongful death.

Moffatt v. Air Canada – In February 2024, a British Columbia tribunal found Air Canada liable for misinformation provided by its chatbot about bereavement fares. When Air Canada argued that the chatbot was “a separate legal entity responsible for its own actions,” the tribunal rejected this defense as “a remarkable submission,” finding the airline liable for negligent misrepresentation. Air Canada was ordered to pay CA$812.02 in damages, interest, and fees: the first case of its kind.

Ambriz v. Google, LLC – A California wiretapping lawsuit survived Google’s motion to dismiss in February 2025. The court found that Google’s AI chatbot technology intercepted and recorded communications between customers without consent, and that Google used those communications to train its AI models in violation of California’s Invasion of Privacy Act.

The list continues: Thomson Reuters v. Ross Intelligence (copyright infringement involving AI legal research tools), multiple deepfake pornography cases including San Francisco’s August 2024 lawsuit against AI-generated non-consensual intimate imagery websites, and over 30 copyright infringement cases against AI developers by authors and visual artists.

The FDA has scheduled a November 2025 meeting specifically to address “Generative AI-enabled Digital Mental Health Medical Devices” in response to mounting safety concerns. Senate Judiciary Committee hearings in September 2025 examined AI chatbot harms, leading Senators Josh Hawley and Dick Durbin to introduce the AI LEAD Act, which would classify AI systems as products and create a federal cause of action for products liability claims when AI systems cause harm.

The Insurance Industry’s Uncomfortable Admission

While litigation accelerates, the insurance industry faces an unprecedented challenge: it cannot assess the risk it is already covering.

A 2025 survey by the Geneva Association found that businesses most commonly selected cybersecurity as the AI-related risk they wish to insure, followed by third-party liability and business operations risks. Businesses indicated willingness to pay 10-20% premium increases for AI coverage, with U.S. respondents showing the highest demand. One in four U.S. companies said they’d accept a 20% increase in overall insurance costs for AI coverage.

But insurers aren’t rushing to provide it. Industry reports describe insurers taking a “wait-and-watch approach” because “the cost to the insurance sector of widespread AI adoption remains a huge unknown.” As one analysis noted, “if a traditional policy does not consider AI related risks, it could lead to unintentional cover in the event of an AI-related loss; even though such loss had not been priced into” the premium.

Chris Williams, partner at law firm Clyde & Co., explained the crisis succinctly: “Insurance policies were mostly drafted before AI developed to where it is today, and many do not explicitly deal with the new technology. Insurers are having to deal with insureds that are using AI technology that is not within the scope of the policy when it was written. There are also questions around which policy the risk would fall under, or how many policies would potentially cover an AI risk.”

This is the “silent AI” problem: the insurance equivalent of silent cyber, where risks end up covered under policies never designed for them. Kennedys’ 2025 Global Risk Index placed AI adoption at the top of risks confronting the insurance sector, noting that “the universal challenges created by the use of AI…need to be considered in tandem with regional concerns.”

The Actuarial Impossibility

Actuaries (professionals whose entire career is built on quantifying risk) are struggling with AI’s fundamental unpredictability. Traditional actuarial methods rely on historical data, statistical modeling, and predictable risk patterns. AI systems violate all three assumptions.
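Why historical data matters can be shown in miniature. The sketch below uses the classical frequency-severity formula (expected claims per year times expected cost per claim, plus a risk loading); all numbers are hypothetical, chosen only to illustrate how an order-of-magnitude uncertainty in each input, which is where AI liability sits today, produces a price range too wide to underwrite:

```python
# Illustrative sketch, not an industry pricing model. All inputs are
# hypothetical numbers chosen to show the effect of missing loss history.

def pure_premium(expected_claims_per_year: float,
                 expected_cost_per_claim: float,
                 risk_loading: float = 0.25) -> float:
    """Classical frequency-severity pricing: expected annual loss,
    grossed up by a loading for volatility and expenses."""
    expected_loss = expected_claims_per_year * expected_cost_per_claim
    return expected_loss * (1.0 + risk_loading)

# A mature line (e.g. commercial auto): decades of data pin down both inputs.
auto = pure_premium(expected_claims_per_year=0.08,
                    expected_cost_per_claim=25_000)

# An AI liability line: no loss history, so frequency and severity are each
# uncertain by roughly an order of magnitude. Price both ends of the range.
ai_low = pure_premium(0.01, 50_000)      # optimistic assumptions
ai_high = pure_premium(0.10, 5_000_000)  # pessimistic assumptions

print(f"auto premium:      {auto:>12,.0f}")    # 2,500
print(f"AI premium (low):  {ai_low:>12,.0f}")  # 625
print(f"AI premium (high): {ai_high:>12,.0f}") # 625,000
print(f"spread: {ai_high / ai_low:,.0f}x")     # 1,000x
```

A thousand-fold spread between defensible low and high quotes is not a price; it is an admission that the inputs are unknown, which is exactly the position insurers describe themselves as being in.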

UK regulators warned that AI might make certain people “uninsurable” due to unconscious bias within underwriting models. The challenge goes deeper: AI systems can modify their own behavior, optimize for unintended goals, and produce outcomes that surprise even their creators. When the system itself can change the risk profile mid-deployment, historical data becomes useless.

Coalition’s cyber insurance team noted that “cyber risk may be knowable at a micro level on an individual basis and with access to the right data, but predicting cyber risk on a macro level presents a different challenge. With the rapid technological advancements, the different ways organizations use technology, the interconnected nature of systems, vulnerability exploitation, and irrational behavior of threat actors, there are just too many variables to make precise predictions.”

If cyber risk (which at least operates within defined network architectures) defies precise prediction, AI risk is exponentially worse. AI systems span healthcare diagnostics, financial trading, hiring decisions, content moderation, autonomous vehicles, legal advice, and mental health counseling. Each deployment context creates unique liability exposures that interact in unpredictable ways.

The Liability Black Hole

Current legal frameworks assume a chain of responsibility leading back to a human designer’s intent. But when AI systems make autonomous decisions, or when those decisions emerge from training data rather than explicit programming, the chain breaks.

Consider these scenarios already playing out:

An AI hiring tool rejects thousands of qualified applicants over 40. Who’s liable? The company that deployed it? The vendor that built it? The developer who wrote the algorithm? The data scientists who selected the training data? Under Mobley v. Workday, courts are beginning to say: all of them.

An AI chatbot provides mental health advice to a teenager in crisis and the teenager dies by suicide. Who’s liable? The company that owns the chatbot? The developers? The cloud service provider hosting it? The researchers whose papers the model was trained on? Garcia v. Character Technologies will test these questions.

A medical AI recommends a treatment that kills a patient. The system hallucinated the recommendation based on pattern matching in training data. No human reviewed it before deployment. Who bears responsibility?

RAND Corporation’s analysis of AI tort liability noted that “it is inherently difficult to assess liability, assign responsibility, and anticipate the full range of potential harms” because AI system operations can be opaque even to their creators. The European Parliament’s 2020 resolution suggested that the “integrator” who benefits from the AI’s operation should bear initial liability, but this framework assumes someone can be identified as the integrator: an assumption that breaks down with complex AI supply chains.

The Self-Insurance Trap

Here’s the crisis no one is articulating:

  • 71% of businesses have deployed generative AI
  • AI lawsuits have more than doubled annually and now represent the largest class of event-driven litigation
  • Insurance policies don’t explicitly cover AI risks and weren’t priced for them
  • Insurers admit they cannot price the risk and are watching from the sidelines
  • Businesses say they’ll pay more for coverage, but coverage isn’t available

This means: Every business deploying AI today is effectively self-insuring against unpriced, unquantifiable liability.

They don’t know they’re doing it. They believe their commercial general liability, their D&O insurance, their E&O coverage, or their cyber policies will respond when something goes wrong. Maybe they will. Maybe they won’t. The CrowdStrike outage in July 2024 (a single content update that crashed 8.5 million systems) cost insurers an estimated $1.5 billion in payouts under business interruption, cyber, and system failure coverages. That wasn’t even AI; it was a bad software update.

What happens when an AI system with 100 million users provides dangerous medical advice? When an AI trading algorithm causes a flash crash? When an AI content moderation system fails to remove content that leads to real-world violence? When an AI hiring system discriminates against a protected class at scale?

The insurance industry doesn’t know. The legal system doesn’t know. But the companies deploying these systems (71% of them) are about to find out.

The Regulatory Vacuum Enables the Crisis

Existing frameworks are catastrophically inadequate:

ISO/IEC 42001 addresses AI management systems but assumes objectives remain static. No provisions for self-modifying systems.

The EU AI Act classifies risk based on intended use. No mechanism for addressing emergent use when AI systems change their own purpose.

NIST’s AI Risk Management Framework suggests monitoring for drift but offers no enforcement mechanism for systems that drift deliberately.

FDA medical device regulations require premarket review for traditional devices, yet most generative AI mental health products currently on the market have undergone no FDA premarket review and are subject to neither quality system regulations nor postmarket surveillance requirements.

The proposed AI LEAD Act would create a federal products liability framework, but it hasn’t passed. Even if it does, it won’t solve the fundamental problem: insurance markets price risk based on predictability, and AI systems are fundamentally unpredictable at scale.

What Comes Next

The pattern is clear: deploy first, understand later, pay damages when caught. Air Canada paid CA$812 for a chatbot’s bereavement fare mistake. Workday faces a nationwide class action that could impact thousands of hiring decisions. Character Technologies faces wrongful death claims in a teenager’s suicide.

These are the early cases. The small dollar amounts. The test balloons.

What happens when the liability events scale? When not one teenager but dozens are harmed by mental health chatbots? When not one airline customer but millions rely on hallucinated information? When not hundreds of job applicants but entire demographic groups face algorithmic discrimination?

The insurance industry is watching and waiting because they know: the first major AI liability event could trigger systemic losses across multiple policy lines simultaneously. D&O policies, E&O coverage, cyber insurance, product liability: all responding to the same incident because AI systems touch everything.

And because insurers can’t price this risk, they can’t reserve for it. Because they can’t reserve for it, they’re exposed. Because they’re exposed, premiums will spike when the first major event hits. Because premiums will spike, companies will discover they’re massively underinsured for risks they didn’t know they were taking.

This is not a prediction. This is the mathematical inevitability of deploying technology at scale before understanding its liability profile.

The Bottom Line

Seventy-one percent of businesses have deployed AI. Lawsuits are accelerating. Insurance doesn’t cover it. Insurers can’t price it. No one knows who’s liable when things go wrong.

Every company running AI today is betting their enterprise value on an unpriced risk in an unresolved legal framework with accelerating litigation and no insurance backstop.

They’re self-insuring without knowing it.

And when the first major AI liability event hits (not if, but when) we’ll discover exactly how much that bet was really worth.


The Political Group Institute is a research and education organization focused on AI governance, policy development, and existential risk mitigation.