Insurers begin covering AI mistakes as adoption spreads

Growing risk: Mosaic offers insurance against underperformance by a new wave of AI tools (Photograph by Chinatopix/AP)

A Bermuda-based insurer says a new form of cover designed to protect companies from artificial intelligence systems that fail to perform as promised reflects a growing category of risk as businesses rely more heavily on automated decision tools.

Mosaic Insurance recently partnered with Munich Re’s aiSure to offer up to €15 million (about $16 million) in coverage for financial losses linked to defined failures in AI model performance. The product pays out when a system misses a pre-agreed accuracy target, rather than after a cyberattack or technology outage.

Dennis Bertram, head of AI underwriting at Mosaic, told The Royal Gazette that the trigger is based on measurable performance guarantees agreed between an AI provider and its customer.

“The performance threshold is not a one-size-fits-all number,” Mr Bertram said. “It is defined individually for each risk, based on the specific AI model and the guarantee the insured has issued to their customer.”

Examples might include a property valuation model guaranteeing accuracy within 10 per cent of the final sale price, a quality-control system promising to detect 95 per cent of defects, or a fraud detection tool committing to identify at least 98 per cent of fraudulent transactions, he said.

Those performance thresholds are written directly into the policy and monitored using existing data that compares the model’s predictions with real-world outcomes.
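The trigger mechanism described above can be sketched in a few lines of code. This is a hypothetical illustration only: the function name, the 98 per cent fraud-detection threshold and the sample figures are assumptions drawn from the examples in the article, not Mosaic's or Munich Re's actual monitoring methodology.

```python
# Hypothetical sketch of a policy performance trigger: compare a model's
# predictions with real-world outcomes and check whether the hit rate
# fell below the contractually agreed threshold. Illustrative only.

def guarantee_breached(predictions, outcomes, threshold=0.98):
    """Return True if the model's accuracy over the period fell
    below the pre-agreed performance guarantee."""
    hits = sum(1 for p, o in zip(predictions, outcomes) if p == o)
    accuracy = hits / len(outcomes)
    return accuracy < threshold

# Example: a fraud tool flags 96 of 100 genuinely fraudulent
# transactions, missing its 98 per cent commitment.
flags = [1] * 96 + [0] * 4   # model's predictions (1 = flagged as fraud)
actual = [1] * 100           # all 100 were in fact fraudulent
print(guarantee_breached(flags, actual))  # True: guarantee missed
```

In a real policy the comparison would run continuously against live outcome data, but the core test, measured accuracy against a written-in threshold, is as simple as this.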

Mr Bertram said the concept of insuring model underperformance was not entirely new.

Dennis Bertram, head of AI underwriting at Mosaic (Photograph supplied)

“This type of contractual liability coverage for model underperformance has existed since 2018 through Munich Re, though back then it was not branded as AI,” he said, adding that the product concept already has claims experience behind it.

What has changed, he said, is the rapid expansion of AI across sectors like fraud detection, lending decisions and automated valuations, which has increased demand for protection if systems fail to meet promised performance levels.

The development comes as the Bermuda Monetary Authority is consulting on the responsible use of artificial intelligence across the island’s financial services sector.

In a recent discussion paper, the regulator warned that poorly designed AI systems could amplify bias, create cybersecurity risks and lead to “automation bias”, where companies rely too heavily on machine outputs without sufficient human oversight.

The BMA has called for stronger governance, transparency and board-level understanding of AI risks as adoption grows.

Meanwhile, the Mosaic product shows how insurers are beginning to treat AI reliability itself as a financial risk that can be transferred to the insurance market.

Underwriting requires historical data comparing a model’s predictions with actual outcomes. This allows insurers to assess whether the accuracy levels a model promises are realistic before offering coverage.
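That underwriting check amounts to a backtest. The sketch below, using the article's property-valuation example, is an assumption-laden illustration: the sale prices, the function name and the 10 per cent band are invented for demonstration, not taken from any insurer's actual process.

```python
# Illustrative backtest: measure how often a valuation model's past
# predictions landed within +/-10% of the final sale price, to judge
# whether a "within 10 per cent" guarantee is realistic to insure.

def within_band_rate(predicted, actual, band=0.10):
    """Share of valuations falling within +/-band of the sale price."""
    ok = sum(1 for p, a in zip(predicted, actual) if abs(p - a) / a <= band)
    return ok / len(actual)

# Hypothetical historical valuations vs final sale prices
predicted = [500_000, 310_000, 745_000, 420_000]
actual = [520_000, 350_000, 760_000, 430_000]

rate = within_band_rate(predicted, actual)
print(f"{rate:.0%} of valuations within 10% of sale price")  # 75%
```

A model that historically hits the band only 75 per cent of the time would make a blanket 10 per cent guarantee look unrealistic, which is exactly the judgment an underwriter needs the historical data for.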

Mr Bertram said regulators are comfortable with the product because the structure operates within established regulatory frameworks. Ultimately, it indemnifies contractual payment obligations — a concept already widely used in insurance.

“At its core, AI is software, and software errors have been insurable for a long time,” he said. “What is new is the specific application to AI performance risk.”


Published March 06, 2026 at 7:49 am (Updated March 06, 2026 at 7:48 am)
