What is AI regulation in the UK?

In August 2024, the Artificial Intelligence (AI) Act came into force across the European Union (EU).

As the UK now sits outside the EU, it is free to set its own legislation on AI. In this follow-up to our blog on the EU AI Act, Arbor Law’s Founding Director Ed Rea looks at the regulatory landscape for AI in the UK.

On 17th July 2024, the King’s Speech set out the priorities of the new UK Government, led by Sir Keir Starmer. The Speech included plans relating to the development and regulation of AI, as King Charles announced that the new Government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”
Labour v Conservative approaches to AI

The Labour Government’s plan and timescale for AI are still unclear, beyond a focus on regulating the models used for generative AI. Even that vagueness, however, marks a significant policy change from the previous Conservative-led administration, which took a lighter-touch approach to legislating for AI, relying on existing sector-specific regulators to identify and address gaps in regulation.

Responding to the King’s Speech, former prime minister, Rishi Sunak said: “We are third only to the US and China in the size of our fast-growing technology sector, and we lead the world when it comes to AI safety. We should all in this House be careful not to endanger this country’s leading position in this field, which will drive growth and prosperity for decades to come.”

This approach was supported by Microsoft, which reported in May that delaying AI’s rollout in the UK could reduce the technology’s economic impact in 2034 by more than £150 billion.

A UK AI bill

Whilst no specific AI bill was announced as part of the King’s Speech, media reports since then have suggested that one is still on the horizon. According to the Financial Times, senior Labour ministers have recently met leading technology companies and indicated that an AI bill is being prepared and would focus “exclusively on two things: making existing voluntary agreements between companies and the government legally binding, and turning the UK’s new AI Safety Institute into an arm’s length government body.”

Building on past foundations

The AI Safety Institute (AISI) was launched by Rishi Sunak’s previous government in 2023 as a directorate of the UK Department for Science, Innovation, and Technology. Its aim is to rigorously research and test AI models for risks and vulnerabilities.

In November 2023 the UK hosted the AI Safety Summit at Bletchley Park, where tech businesses including OpenAI, Google DeepMind, Amazon, Microsoft and Meta signed a non-legally-binding agreement with governments including the UK, US and Singapore. The agreement enabled signatory governments to risk-test new models before they were released to the market.

In May 2024 the UK (with the Republic of Korea) co-hosted the AI Seoul Summit, where tech companies made a number of voluntary commitments, including to:

  • Effectively identify, assess and manage risks when developing and deploying their frontier AI models and systems;
  • Be accountable for safely developing and deploying their frontier AI models and systems; and
  • Ensure their approaches to frontier AI safety are appropriately transparent to external actors, including governments.

According to the Financial Times, government officials in the UK want to turn these voluntary agreements into law to “ensure that companies already signed up to the agreements cannot renege on their obligations if it becomes commercially expedient to do so”.

The FT’s sources believed that the substance of a new AI bill would be unveiled within weeks, followed by a consultation lasting approximately two months. The paper also reported that the AISI could help set global standards for AI, but that some issues will be addressed outside the bill, such as the use of intellectual property to train AI models without payment or permission.

Next steps

Until an official announcement is made or a consultation launched, there will be an element of uncertainty about the UK’s intentions for the regulation of AI. The announcement in July of an “AI Opportunities Action Plan”, which will identify how the new technology can drive economic growth, has started the process, and we can expect further announcements soon.

How Arbor Law Can Help

Once the UK’s AI bill is published, companies should proactively assess its applicability to their operations and implement the changes necessary to ensure compliance. Arbor Law offers expert legal advice to help you navigate the complexities of AI regulation, manage AI use in your organisation, and develop effective compliance strategies.

Contact us for more information

About Ed Rea

Ed Rea is a distinguished commercial technology lawyer recognised for his extensive expertise in technology-related transactions and relationships. Specialising in major IT and Telecoms infrastructure and construction projects, as well as digital distribution and connectivity matters, Ed’s profound understanding of these sectors enables him to offer invaluable advice to telecoms and technology businesses.

Ed is also a co-founder of Arbor Law.