
AI: Avoiding and managing disputes

By Douglas Skilton, Partner Dispute Resolution Team, Thomson Snell & Passmore

Whilst AI has the potential to enhance productivity for organisations of all sizes, acquiring and using a system is not without risk.

Disputes with suppliers of AI

What recourse will you have if an AI system you buy does not perform as expected? Most suppliers will try to reduce their risk, particularly the uncertainty around the terms that might be implied by law (such as under the Sale of Goods Act 1979), by imposing their own written terms and conditions. However, there may still be scope for disagreement about how the terms should be interpreted.

Is the specification clear? What are the outcomes and KPIs for measuring whether the AI meets the requirements? There may also be questions over whether defects lie in the AI system itself, in the data used to train it or generate its output, or in the applications to which it is connected. This has the potential to create a blame game.

Disputes with service providers who use AI

Liability disputes might arise for businesses using AI to deliver their services. Unless properly excluded, the Supply of Goods and Services Act 1982 implies an obligation to perform a service with reasonable care and skill. The question of whether it was reasonable to use AI when providing the service might be contentious.

A duty to avoid negligence can apply in addition to contractual obligations, or even where there is no contract in place. Simply relying on AI, without human oversight to review the results, may be risky. However, even oversight might not provide a complete answer - what if the human supervisor fails to spot an error? What is it reasonable to expect them to be able to spot?

Reducing and managing the risks

Some of the steps that might help mitigate the risks include:

• Ensuring the specification or scope of work is clear, sufficiently detailed and tailored, together with the method of delivery and KPIs.

• Anticipating the types of liability that may arise and allocating risks, or at least understanding the allocation. The same applies for warranties on performance and outputs, and the imposition (or acceptance) of exclusions or limitations of liability.

• Will training be required so users fully understand how the system works and what its limitations are? Aligned to that, ensuring processes and procedures are in place for checking and monitoring the system will be important.

• Is a dispute resolution process required that imposes an obligation on the parties to try to resolve disagreements through prescribed channels before litigation?

Obviously there are many other factors that need to be considered, and it is important for those involved in the acquisition or use of AI to be alive to the potential risks, as well as the intended upsides. It is essential for all stakeholders to have an understanding of the technology they are dealing with, and of the legal landscape that surrounds it.

Contact info@ts-p.co.uk
