In a decisive move on 1st July 2025, the US Senate voted overwhelmingly (99-1) to remove a controversial provision from the One Big Beautiful Bill Act, which would have imposed a 10-year moratorium on state-level AI regulation. Major technology companies had supported this provision, arguing that fragmented regulation could stifle AI innovation and create compliance burdens. Whilst a federal approach would have offered businesses greater legal certainty and simplified compliance, particularly for international organisations navigating multiple jurisdictions, Senate opponents contended it would weaken protections for consumer rights, privacy, and children’s safety.
Meanwhile, the European Commission firmly maintained its regulatory timeline despite mounting pressure from large technology companies seeking delays. The EU AI Act’s general-purpose AI model obligations and penalty provisions will take effect as scheduled on 2nd August 2025. You can read about the EU AI Act in my earlier blog, here.
The European Commission also published the final General-Purpose AI Code of Practice on 10th July 2025 to support compliance with EU AI Act provisions taking effect on 2nd August 2025. Although voluntary, the Commission encourages general-purpose AI model providers to adopt the Code, offering signatories “reduced administrative burden” and “more legal certainty” compared with those that choose to demonstrate compliance in other ways.
The GPAI Code of Practice has three chapters: “Transparency” and “Copyright”, both of which apply to all providers of general-purpose AI models, and “Safety and Security”, which is relevant only to a small number of providers of the most advanced models. The “Transparency” chapter includes a model documentation form that providers can use to document compliance with the AI Act’s transparency requirements.
However, the Code leaves some issues unresolved, including model contractual clauses, clarification of downstream liability, and training data licensing mandates. These omissions may limit the Code’s usefulness as a standalone compliance tool for smaller developers and open-source communities. Further Commission guidelines on key concepts relating to general-purpose AI models are expected shortly, with guidance on complying with the provisions on high-risk AI systems, including practical examples of high-risk versus non-high-risk systems, expected in early 2026.
In contrast to the US and EU, the UK remains legislatively inactive following the prolonged passage of the Data (Use and Access) Bill, in which AI and copyright debates featured prominently. With Parliament due to go into recess on 31st July, Peter Kyle, the Secretary of State for Science, Innovation and Technology, has confirmed that comprehensive AI regulation legislation will be postponed until the next parliamentary session.
With the King’s Speech expected in May 2026, significant UK AI legislation remains distant, despite a March 2025 national survey from the Ada Lovelace Institute showing that almost three quarters (72%) of the UK public believe that laws and regulation would increase their comfort with AI. According to reports in the Financial Times this week, the government has shelved plans to introduce the bill this session, with officials suggesting it may be replaced by a broader AI-focused bill.
For more background on the UK’s potential approach to regulating AI, you can read my earlier blog, here.
Arbor Law offers expert legal advice to help you navigate the complexities of AI regulation, manage AI use in your organisation, and develop effective compliance strategies. Contact us for more information.
Ed is a senior commercial technology lawyer and co-founder of Arbor Law. Widely recognised for his deep expertise in technology-driven transactions and strategic partnerships, Ed specialises in complex IT and telecoms infrastructure projects, digital distribution, and connectivity. His in-depth sector knowledge makes him a trusted adviser to telecoms and technology businesses navigating high-value commercial arrangements.