EU Sets Out World’s First AI Regulation Framework

On August 1, 2024, the European Union’s Artificial Intelligence (AI) Act came into force, marking the world’s first comprehensive AI law.

Arbor Law’s founding Director, Ed Rea, examines the implications of the Act for businesses operating within the European Union that use AI in their products and services.

Initially proposed in April 2021 and endorsed by the European Parliament and the Council in December 2023, the AI Act aims to mitigate potential risks associated with AI by establishing a regulatory and legal framework for AI systems in Europe.

Scope and Risk Categories

The AI Act applies to any company that supplies AI systems within the EU, as well as to companies whose AI system output is used in the EU. Systems and practices are categorised into four levels of risk, with stricter rules applying as the risk level increases:

  1. Minimal Risk: AI systems like spam filters and AI-enabled video games face no mandatory obligations under the AI Act. Companies may voluntarily adopt additional codes of conduct.
  2. Limited Risk: Systems like chatbots must comply with transparency requirements, tailored to the nature of the AI system.
  3. High Risk: AI systems such as AI-based medical software and recruitment tools must meet stringent requirements, including robust risk mitigation systems, high-quality datasets, clear user information, and human oversight.
  4. Unacceptable Risk: The AI Act bans certain AI practices outright, including subliminal techniques, exploitation of vulnerabilities, untargeted scraping of facial images to build facial recognition databases, and emotion inference systems in the workplace or educational institutions.

Implications for Existing AI Usage

The EU estimates that about 85% of current AI systems used within the EU will fall into the minimal risk category.

Under the AI Act, all providers of general-purpose AI (GPAI) will be required to:

  • Maintain required technical documentation and information.
  • Have a policy in place to respect EU copyright law.
  • Make public a detailed summary of their training data to enable copyright holders to see how their work has been used.

Providers of GPAI established outside the EU must also appoint an authorised representative in the EU.

The AI Act introduces transparency requirements for AI systems that are intended to interact directly with humans or to generate content viewed by humans, including GPAI systems and high-risk or limited-risk AI systems. This wide-reaching obligation requires providers to ensure that individuals are informed when they are interacting with an AI system and that AI-generated output is labelled as such.

Given the potential penalties for violations, the AI Act may lead technology companies to offer different or limited versions of their products in the EU. Meta, for instance, has already restricted the availability of its AI model in Europe over regulatory concerns, although it has said this was not necessarily because of the AI Act.

Timelines and Penalties

The new requirements take effect in a staggered manner:

  • Prohibited AI systems: February 2, 2025
  • GPAI requirements and penalties: August 2, 2025
  • High-risk AI systems: August 2, 2026 (August 2, 2027 for high-risk systems embedded in products covered by existing EU product safety legislation)
  • Transparency requirements: August 2, 2026
  • Codes of practice: May 2, 2025 (nine months after the Act’s entry into force)

Non-compliance with the AI Act can result in significant fines:

  • up to €35 million or 7% of global annual turnover (whichever is higher) for breaches related to prohibited AI practices; or
  • up to €7.5 million or 1% of global annual turnover (whichever is higher) for supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities.

How Arbor Law Can Help

Companies should proactively assess the AI Act’s applicability to their operations and implement necessary changes to ensure compliance. Arbor Law offers expert legal advice to help navigate the complexities of the AI Act, manage AI use in your organisation, and develop effective compliance strategies.

Contact us for more information

About Ed Rea

Ed Rea is a distinguished commercial technology lawyer recognised for his extensive expertise in technology-related transactions and relationships. He specialises in major IT and telecoms infrastructure and construction projects, as well as digital distribution and connectivity matters, and his deep understanding of these sectors enables him to offer invaluable advice to telecoms and technology businesses.

Ed is also a co-founder of Arbor Law.