Navigating the Intersection of AI and Business Law: What You Need to Know

As Consultant General Counsel for an AI and data science company, I’ve seen how AI supports decision-making across sectors from telecoms to finance. In this blog I share insights on the legal aspects you should consider if AI plays a role in your business.

Terminology 101

A quick note on terminology: I’m using “AI” here to refer to technologies, applications and systems that involve a degree of machine learning and are autonomous and/or self-training. This includes Generative AI, such as Large Language Models (LLMs) like OpenAI’s ChatGPT and Google’s BERT, and image generators like OpenAI’s DALL-E, Google’s Deep Dream and Midjourney.

The Legal Challenges You’ll Encounter

Intellectual Property:

There are two main areas where AI is pushing the boundaries of IP rights and laws, especially copyright:

  • Inputs: when AI trains itself on source data and content (in the case of Generative AI, from data, websites, articles and other copyright works on the open internet), is this infringing IP rights in the source material?
  • Outputs: when the AI generates new content (in the case of Generative AI, this could be pictures, music or text), does this itself create a new copyright work?

These questions are challenging because copyright laws are typically based on original content created by a human, not by a computer or a robot, however clever it might be (or become). It’s no surprise, then, that intellectual property offices and courts in various countries are already grappling with these questions, and that this area of law is evolving quickly.

Data Protection and Privacy:

Two key risks here are:

  • When LLMs trawl through huge amounts of data, this might constitute “processing” of personal data (aka PII), over which individual data subjects in some countries/regions will have enforceable legal rights. The potential sanctions and fines for getting this wrong can be severe, and AI is definitely in the sights of regulators.
  • AI systems can be used to efficiently assess risks and autonomously make decisions. This presents its own data protection compliance risk, particularly where such decisions could have an adverse impact on living individuals, even if unintentionally.


By using predictive analysis based on existing data, there is a risk that AI systems could perpetuate bias and discriminatory decision-making.

Regulatory Uncertainties

Certain countries and regions (including the EU and the UK) are introducing their own AI-specific laws, which are at different legislative stages (consultation in the UK, draft form in the EU). These indicate likely divergent directions, depending on the particular political and cultural climate. This makes business planning and strategy difficult and also raises challenges around applicable law and jurisdiction for cross-border business activities.

Litigation and liability

Law courts are already struggling to apply existing laws to AI. This is because our laws are based on human interests, motives, endeavours and responsibilities. Even under corporate law, companies are treated as individuals, so that they may be shoe-horned into this existing legal framework (albeit often uncomfortably). AI presents an existential challenge to some long-held legal principles. For example, should an AI-powered robot with decision-making autonomy be treated more like a person “employed” by the company (for whose actions the company/owner may be vicariously liable), or like a computer program supplied by the company (where the company/owner can try to limit or exclude liability)?

If these are some of the legal challenges, what are some potential solutions?

Contract clarity

Good contracts can help provide some clarity and certainty. In an uncertain legal landscape, it’s more important than ever to set out and apportion risks and responsibilities in the form of a contract. Whether a contract is appropriate, and what form it takes, will depend on the circumstances. It could be one negotiated between two business parties; or set out in some business-to-consumer T&Cs; or take the form of rules and restrictions in a Terms of Service from a content provider to the world at large.

So, how to deal with the legal challenges in such a contract? Here are some thoughts:

IP Considerations

Using that same distinction between inputs and outputs:

  • Inputs: The owner of the source data (let’s call them the “client”) will want to retain full ownership of, and rights to, their pre-existing IP, including any data sets fed into the AI system. The provider or owner of the AI system (let’s call them the “provider”) will want to retain all proprietary and pre-existing IP rights in and around the AI system and how it is deployed. This is relatively clear for the AI system itself where it is a pre-existing product or application. But it becomes somewhat trickier where the AI system and the provider’s methodologies, algorithms, know-how (etc.) are being improved by, and trained on, the client’s data sets. Also, the more efficient and autonomous the AI, potentially the less control the provider will have over it. The best course is to set down some red lines and key principles over the use of the data, and even for the client to filter out from the start any data they are not comfortable inputting into the AI.
  • Outputs: The client owns the data so will also want to own the modified data, which seems reasonable. Until the provider points out that some of the modified data also has a value to the AI system itself, in terms of continuing to learn and improve. It can also get complicated if provider and client teams will be working collaboratively.  The best advice and approach here is to remain focused on what’s important to each party now and in the near future and then ensure the contract has the right cross-licences, rights and restrictions (and clear definitions!) in place.  It will help if the client and provider already have an IP policy or strategy which encompasses AI.

Data Protection and Privacy

  • A key factor in who owns what (including data outputs) will be the responsibility to comply with data privacy laws and to manage reputational issues. If the data being fed into the AI system is not personal data/PII (or if it can be anonymised and aggregated), you’re in luck. The most important aspects will be controlling the rights and restrictions around the input and output data, in parallel with the IP clauses.
  • If it’s personal data/PII, you’ll need to consider whose data it is and whether you have the requisite rights and consents for the proposed use, in addition to applicable law and legal requirements, data processing and transfer agreements (etc.). At a minimum, you should do a Privacy Impact Assessment (or equivalent) to manage and control any data privacy impacts on the data subjects.
  • A rights-holder who wishes to prevent LLMs such as ChatGPT from using bot crawlers to scrape and trawl for content (for self-training and/or content generation) should make it clear in their Terms of Service what is and is not permissible, and then use available technology and functionality to block any unauthorised access.
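As a practical illustration of the “available technology” point above, one widely used measure is a robots.txt file asking AI crawlers not to scrape a site. The user-agent tokens below (GPTBot for OpenAI, Google-Extended for Google’s AI training, CCBot for Common Crawl) are those publicly documented by the crawler operators at the time of writing and may change; note that robots.txt is advisory only, so clear Terms of Service wording and server-side blocking remain important.

```text
# robots.txt — ask documented AI-training crawlers not to scrape this site.
# User-agent tokens are published by the crawler operators and may change.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

A compliant crawler reading this file should not fetch any pages from the site; non-compliant crawlers must be handled through technical blocking and, ultimately, the contractual and IP protections discussed above.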


Contractual confidentiality clauses can help protect and govern inputs and outputs which are commercially sensitive, especially where existing IP rights and data protection regulation may not be certain or flexible enough to cover the use of AI.  NDAs and confidentiality clauses are often considered “standard” and reviewed only lightly, but you may want to give them more thought to ensure they fit with and support your IP and data clauses.

Legal Adaptability

If a contract’s governing law – or applicable law – changes during the life of the contract (and it inevitably will, if AI is involved), who takes responsibility for any compliance impacts to contractual performance and for any increase in costs?  It’s almost impossible to predict how the law will evolve, in every country or region, so try to discuss and include in your contract a relevant and robust change control procedure.


Even if the main purpose of your contract is not AI, what happens if the provider decides to delegate some elements of the service provision to an AI system? Does your assignment and subcontracting clause cover this? Is your contract flexible enough to cover this, or is it too flexible (for example, an outcomes-based SLA that permits the replacement of human elements of the service with AI)?

Continuous improvement

On the other hand, both clients and providers may want to actively be thinking about how a careful and controlled use of AI could improve the accuracy, quality and efficiency of certain business processes or other activities, allowing human resources to be better used elsewhere, and to maintain or improve a competitive edge.


As noted above, using AI could mean unintended consequences, so review your “standard” templates, including limitation and exclusion of liability clauses, with this in mind.


Outside of (but supporting) your contract, the arrival of mainstream AI could also be a good prompt to review your other risk-management measures and controls.  For example, reviewing relevant internal and external policies (IP, Data Protection, IT Security, HR), your insurance policies (PI and cyber) and your risk register.

We can’t hope to predict everything in our contract to cover how the use of AI might develop. When we think about what we can control now, however, the most important thing to get right is the contract. Maybe spend a bit longer on it than usual, and recognise that usually reliable standard clauses (in particular the IP and data sections) might need a bit more thought and care. Focus on the here and now, but keep an eye on the horizon too.

Iain Simmons is Consultant General Counsel at ExploreAI, an Artificial Intelligence and Data Science consulting and applications development business.  If you’d like to discuss anything in this article or need help with the legal aspects of using AI in your business, contact Iain, Senior Consultant at Abor Law on