Contracts to manage AI risk
This is the first of a two-part article on how artificial intelligence contracts can be used to manage the risks associated with developing and using that transformative technology.
All transformative technology is fraught with risk. However, when it comes to AI, Elon Musk warned that “we are summoning the demon”.
As a commercial lawyer, I like to think that fables about contracts with the devil exist because contracts are the ultimate risk management tool.
Regulators around the world, including the Bermuda Monetary Authority, fully appreciate the importance of using contracts to mitigate, if not avoid, the risks associated with the development and use of transformative technologies.
The BMA’s 2025 Business Plan signalled future policies concerning the risk-managed use of AI by its registrants.
That plan stated that the BMA was “undertaking a review of the Insurance Code of Conduct and the Operational Cybersecurity Code of Conduct to consider the merits of integrating specific guidelines on the use of AI and machine-learning systems”.
As with the BMA’s regulatory requirements for outsourcing and cybersecurity governance, I fully expect that those regulations will evolve to include additional risk management guidance on the AI contracts relied upon by Bermuda’s financial services sector.
In that regard, the recent emergence of model contracts for all sectors to manage the many risks of AI has been striking.
Among the AI model contracts that I have consulted, two stand out.
In 2023, Britain’s Society for Computers and Law published a 59-page White Paper titled Artificial Intelligence Contractual Clauses, and recently the Digital Transformation Agency of the Australian Government published [AI] Model Clauses, Version 2.0. Both are excellent.
Both organisations take a pragmatic approach to crafting contractual provisions that specifically address the commercial and legal risks of AI development, commercialisation and use. There is nothing abstractly academic about that guidance.
The commercial risks associated with any transformative technology, including AI, include the following:
• The technology does not perform the way the vendor promised it would
• Due diligence is difficult to undertake on products and vendors that are new to the market
• The solution’s operation may not be compatible, interoperable or easily integrated with legacy systems
• The solution’s performance reliability is yet to be proven
To address those “new-to-market” risks, both the SCL and DTA recommend that AI contracts include terms that:
• Define the operational and functional specifications of the solution in precise and empirically verifiable terms
• Require either a vendor-led AI demonstration or an operational demonstration within the customer’s infrastructure
• Require acceptance testing as a precondition to contract effectiveness and any licence fee payments
• Stipulate a warranty (of reasonable duration) that the solution will operate “on spec” and that requires expedited remediation of any defects
Where AI is offered as a service rather than as licensed software, the contract should also address the usual risks that are associated with:
• The different variations of cloud or distributed computing
• Any jurisdictional export control restrictions
• Compliance with all privacy laws, including any restrictions on the export of personal data
• The service provider’s compliance with all applicable law, including outsourcing and cybersecurity regulations
• Subcontracting restrictions
• A prohibition on the re-export of data to other jurisdictions
Many AI solutions are powerful search agents that function as scrapers and “crawler bots”. Two of the most prominent and serious AI risks to address contractually are, therefore, the misappropriation of personal (and often confidential) information that the AI solution accesses, views, copies or uses, and the unlicensed reproduction and misappropriation of third-party intellectual property.
As intelligent as AI may appear, it may be unable to identify data and content that is the property of others.
Based on the AI copyright infringement cases that are now before the courts in the US and Britain, AI contracts should include broadly drafted third-party non-infringement covenants as well as indemnities to protect users from such third-party liability. That approach to managing the risk of intellectual property infringement is required for all content or data that AI finds, fetches and brings back to the doorstep.
More specifically, the SCL and DTA suggest that AI contracts include covenants to:
• Ensure that the AI provides only original work
• Ensure that AI does not merely customise, enhance or create derivative works of someone else’s property
• Address whether the service vendor owns the AI or the AI otherwise relies on “open source” software
• Provide that neither the use nor operation of the AI will breach any third-party rights, including any contractual, privacy, intellectual property or statutory rights
Next week, in part two, I will identify additional development and use risks that AI brings, and the contractual terms that are necessary to address those risks.
• Duncan Card is a partner at Appleby who specialises in information technology and outsourcing contracts, privacy law and cybersecurity compliance in Bermuda. A copy of this column can be obtained on the Appleby website at www.applebyglobal.com. This column should not be used as a substitute for professional legal advice. Before proceeding with any matters discussed here, consult a lawyer.