Contracts to manage AI risk, part two

The future is now: a student is taught how to ask ChatGPT for help during a summer class at Roosevelt Middle School on June 25 in River Forest, Illinois (Photograph by Nam Huh/AP)

In part one of this two-part series about artificial intelligence contracts, I discussed the ways that contracts can mitigate, if not avoid, many of the risks associated with the development and use of transformative technology like AI.

In addition to the intellectual property infringement risks I have described, AI’s current use is raising concerns about the veracity, reliability and completeness of its output.

Therefore, in addition to acceptance testing provisions, AI contracts should include covenants that address the quality of the AI’s output, usually expressed as service level specifications.
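To illustrate how such an output-quality service level might be verified mechanically, here is a minimal Python sketch. The metric names, thresholds and scores are hypothetical assumptions for illustration, not terms drawn from any model clause.

```python
# Hypothetical output-quality service levels, expressed as minimum
# scores that a test batch of AI outputs must achieve. The metric
# names and thresholds below are invented for illustration.
ACCEPTANCE_THRESHOLDS = {
    "factual_accuracy": 0.98,   # share of outputs verified against source records
    "completeness": 0.95,       # share of required fields present in each output
    "citation_validity": 0.99,  # share of cited sources that actually exist
}

def acceptance_test(scores: dict[str, float]) -> list[str]:
    """Return the service levels that the AI output failed to meet."""
    return [
        metric for metric, threshold in ACCEPTANCE_THRESHOLDS.items()
        if scores.get(metric, 0.0) < threshold
    ]

# Example: scores from a (hypothetical) evaluation of a test batch
batch_scores = {"factual_accuracy": 0.97, "completeness": 0.96, "citation_validity": 1.0}
failures = acceptance_test(batch_scores)
if failures:
    # In contract terms, a failure here would trigger the agreed remedies
    print("Acceptance test failed on: " + ", ".join(failures))
```

The value of expressing quality covenants this way is that acceptance becomes measurable rather than a matter of impression.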

As well, many AI contracts now include “adult supervision” clauses that require ongoing human oversight, verification and quality assurance of the AI solution’s output.

The model AI contract published by the Digital Transformation Agency of the Australian Government, “Artificial Intelligence (AI) Model Clauses, v. 2.0”, recommends more than a dozen human oversight provisions for consideration.

A dimension of risk that AI and cybersecurity governance share is that both are subject to a fast-moving legal and regulatory landscape.

Since evolving AI laws and regulations may directly and materially affect how AI is developed and used, AI contracts should include change management provisions that allow the parties to revisit the contract’s terms and conditions in response to any such law reform, including how the contract may have to be amended to address those expected, but as yet unknown, legal developments.

As risk managers know, AI operations are not highly transparent.

Therefore, for reasons related to potential litigation, service performance monitoring and regulatory compliance, the UK’s Society for Computers and Law, in a white paper titled “Artificial Intelligence Contractual Clauses”, devotes considerable attention to recommending that all AI contracts require the AI solution to produce a transparent, reliable, complete and accurate record of its operations and activities.

Such AI operational record transparency is often referred to as “logging by design”, and AI contracts often stipulate the precise types of AI operations that must be tracked and recorded, including when the AI fails to operate in compliance with the governing contract.
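As a rough sketch of what “logging by design” can look like in code, the example below wraps each AI call in a structured audit record. The fields recorded here are assumptions chosen for illustration; in practice, the contract itself would dictate exactly which operations and details must be captured.

```python
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def logged_call(model_fn, prompt: str, model_version: str):
    """Invoke an AI function and record a structured audit entry,
    including failures, so the operational record stays complete."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt": prompt,
    }
    try:
        entry["output"] = model_fn(prompt)
        entry["status"] = "ok"
    except Exception as exc:
        # A failed or non-compliant operation must also be recorded
        entry["status"] = "failure"
        entry["error"] = repr(exc)
        raise
    finally:
        audit_log.info(json.dumps(entry))
    return entry["output"]

# Example usage with a stand-in model function:
result = logged_call(lambda p: p.upper(), "summarise clause 4.2", "demo-1.0")
```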

Another potential risk for enterprises that use AI to gain important competitive advantages is that they may not own the results of what the AI has created, learnt or compiled.

Given the creative and self-improvement abilities of AI, unless the enterprise owns the AI that it is using, the contract needs to address who owns the AI-created works, including any advanced data analytics or software improvements that AI may create for itself.

For the most part, AI product or service vendors insist on owning those “sweat of the software brow” labour results.

However, where an AI solution or application has been created or customised to a customer’s bespoke operational specifications and contains important competitive commercial advantages, the ownership of those works may be negotiated otherwise.

Even where the customer does not contractually own the results of the AI solution’s endeavours — for example, the advanced data analytics that the AI created — the customer should contractually stipulate that:

• Such works constitute the commercially confidential information of the customer despite the vendor’s ownership of same

• The customer shall have the sole and exclusive, perpetual, royalty-free, personal, non-transferable and non-sublicensable right to use same for the purposes of its business without any territorial or other restriction

Customers of service providers that rely on AI to perform their services should note that most of the model contract provisions recommended by the SCL and the DTA are equally applicable to those AI service agreements.

The supply-chain use of AI presents almost as many risks to customers as the direct use of AI does, except that in the latter case, the customer arguably has more control over the terms and conditions of the governing AI solution agreement.

Given the fast-moving regulation of AI applications worldwide, there is a growing risk that some of the features and functions of the AI that customers are using have been banned or otherwise prohibited in parts of the world.

Consequently, the DTA recommends that all AI contracts include a representation and warranty that no part or aspect of the AI solution involves any practices, AI products, applications, software code or web services that have been banned, prohibited or otherwise restricted in a way that would have a detrimental impact on the user.

A simple schedule to the relevant contract can disclose any exceptions that are acceptable to the parties.

One of the fastest-developing imperatives for companies to critically review their AI contracts arises where AI is being used for job application automation.

Numerous human rights cases have alleged that some AI solutions have been programmed with inherent discriminatory biases that skew their applicant evaluations, candidate scoring and ranking decisions, and other qualitative judgments in contravention of certain candidates’ human rights protections.
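One widely cited screening heuristic for this kind of bias is the “four-fifths rule”, under which a group’s selection rate below 80 per cent of the highest group’s rate is flagged for adverse-impact review. The sketch below applies that rule to invented selection counts; the figures are purely illustrative.

```python
# Hypothetical selection outcomes from an AI screening tool:
# group -> (candidates selected, candidates evaluated)
selections = {
    "group_a": (40, 100),
    "group_b": (25, 100),
}

rates = {g: sel / total for g, (sel, total) in selections.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "review for adverse impact" if ratio < 0.8 else "within threshold"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```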

Hopefully, the prescriptions offered in this two-part series will help organisations to manage, if not avoid, such material risks during their adoption and reliance upon transformative technology like AI.

Duncan Card is a partner at Appleby who specialises in IT and outsourcing contracts, privacy law and cybersecurity compliance in Bermuda. A copy of this column can be obtained on the Appleby website at www.applebyglobal.com. This column should not be used as a substitute for professional legal advice. Before proceeding with any matters discussed here, consult a lawyer.

Published July 24, 2025 at 7:57 am (Updated July 24, 2025 at 7:27 am)
