As Consultant General Counsel for an AI and data science company, I’ve seen how AI supports decision-making across sectors from telecoms to finance. In this blog I share insights on the legal aspects you should consider if AI plays a role in business.
A quick note on terminology: I'm using "AI" here to refer to technologies, applications and systems that involve a degree of machine learning and are autonomous and/or self-training. This includes Generative AI, such as Large Language Models (LLMs) like OpenAI's ChatGPT and Google's BERT, and image generators like OpenAI's DALL-E, Google's Deep Dream and Midjourney.
There are two main areas where AI is pushing the boundaries of IP rights and laws, especially copyright:
These questions are challenging because copyright laws are typically based on original content created by a human, not by a computer or a robot, however clever it might be (or become). It's no surprise, then, that intellectual property offices and courts in various countries are already grappling with these questions and that this area of law is evolving quickly.
Two key risks here are:
Because AI systems make predictions based on existing data, there is a risk that they could perpetuate bias and discriminatory decision-making.
AI-specific laws are being introduced in certain countries and regions (including the EU and the UK), and these are at different legislative stages (consultation in the UK, draft form in the EU). They already indicate divergent directions, shaped by each jurisdiction's political and cultural climate. This makes business planning and strategy difficult, and also raises challenges around applicable law and jurisdiction for cross-border business activities.
Courts are already struggling to apply existing laws to AI. This is because our laws are based on human interests, motives, endeavours and responsibilities. Even under corporate law, companies are treated as legal persons so that they can be shoe-horned into this existing legal framework (albeit often uncomfortably). AI presents an existential challenge to some long-held legal principles. For example, should an AI-powered robot with decision-making autonomy be treated more like a person "employed" by a company (for whom the company/owner may be vicariously liable), or like a computer program supplied by it (where the company/owner can try to limit or exclude liability)?
Good contracts can help provide some clarity and certainty. In an uncertain legal landscape, it's more important than ever to set out and apportion risks and responsibilities in a contract. Whether a contract is appropriate, and what form it should take, will depend on the circumstances. It could be negotiated between two business parties; set out in business-to-consumer T&Cs; or take the form of rules and restrictions in a content provider's Terms of Service addressed to the world at large.
So, how to deal with the legal challenges in such a contract? Here are some thoughts:
Using that same distinction between inputs and outputs:
Contractual confidentiality clauses can help protect and govern inputs and outputs that are commercially sensitive, especially where existing IP rights and data protection regulation may not be certain or flexible enough to cover the use of AI. NDAs and confidentiality clauses are often considered "standard" and reviewed only lightly, but you may want to give them more thought to ensure they fit with and support your IP and data clauses.
If a contract's governing law – or applicable law – changes during the life of the contract (and it inevitably will, if AI is involved), who takes responsibility for any compliance impacts on contractual performance and for any increase in costs? It's almost impossible to predict how the law will evolve in every country or region, so try to discuss and include in your contract a relevant and robust change control procedure.
Even if the main purpose of your contract is not AI, what happens if the provider decides to delegate some elements of the service provision to an AI system? Does your assignment and subcontracting clause cover this? Is your contract flexible enough to cover this, or is it too flexible (for example, an outcomes-based SLA that permits the replacement of human elements of the service with AI)?
On the other hand, both clients and providers may want to think actively about how careful, controlled use of AI could improve the accuracy, quality and efficiency of certain business processes or other activities, freeing human resources to be better used elsewhere, and helping maintain or improve a competitive edge.
Using AI can have unintended consequences, so review your "standard" templates, including limitation and exclusion of liability clauses, with this in mind.
Outside of (but supporting) your contract, the arrival of mainstream AI could also be a good prompt to review your other risk-management measures and controls. For example, review relevant internal and external policies (IP, Data Protection, IT Security, HR), your insurance policies (PI and cyber) and your risk register.
We can't hope to draft a contract that predicts every way the use of AI might develop. When we think about what we can control now, however, the most important thing to get right is the contract. Spend a bit longer on it than usual, and recognise that usually reliable standard clauses (in particular the IP and data sections) might need a bit more thought and care. Focus on the here and now, but keep an eye on the horizon too.
Iain Simmons is Consultant General Counsel at ExploreAI, an Artificial Intelligence and Data Science consulting and applications development business. If you'd like to discuss anything in this article or need help with the legal aspects of using AI in your business, contact Iain, Senior Consultant at Arbor Law, on iain.simmons@arbor.law