Tech Law Update

Technology Law Round-up

Commercial Technology and Data lawyer and Arbor Law co-founder Ed Rea summarises recent developments, trends and hot topics in technology law in the following round-up.

Proposed EU Framework regulating AI – Setting the standards or strangling a new industry at birth?

The European Commission has announced a proposed legal framework for regulating artificial intelligence (AI) systems (the “AI Regulation”), aimed at striking a balance between ensuring safety and encouraging innovation within the EU.

The draft framework proposes a risk-based approach under which AI activity is categorised as unacceptable, high, limited or minimal risk, based on the risk of harm to the health and safety of individuals and the extent to which the activity adversely affects the rights and protections enshrined in the EU Charter of Fundamental Rights.

Failure to comply with the proposed AI Regulation will result in significant fines of up to €30,000,000 or, for companies, up to 6% of total worldwide annual turnover, whichever is higher. The Commission additionally proposes to establish a European AI Board comprising representatives from both the Commission and Member States.

The European Commission has stated that the proposed AI regulation “is spearheading the development of new global norms to make sure AI can be trusted.” Critics of the proposed regulation argue that the EU is “strangling an emerging industry of AI at birth”, that “more attention is being paid to the risks of machine learning than its opportunities” and that Europe is regulating itself into AI oblivion.

What is clear is that AI, as it currently exists, is almost entirely built around the harvesting of vast amounts of personal data. By analysing this information, machines are able to make decisions on their own, raising multiple issues around privacy and morality.

UK Jurisdiction Taskforce launches Digital Dispute Resolution Rules to enable rapid resolution of blockchain and crypto legal disputes

LawtechUK’s UK Jurisdiction Taskforce has published the final version of its new Digital Dispute Resolution Rules. The rules, which were drafted following public and private consultation with technical experts, commercial parties, financial services firms and lawyers, are designed to enable faster and more cost-effective resolution of legal disputes involving digital technologies such as blockchain, cryptoassets and smart contracts, and to foster confidence among businesses adopting these technologies.

The rules encourage digital technology disputes to be handled by arbitrators with technical expertise, rather than litigated in court. Until now, there has been little consistency in how legal disputes relating to these types of technologies are resolved, leading to lengthier and more costly processes. The rules will apply only if parties incorporate them into their contracts, and are available for download at https://technation.io/lawtech-uk-resources/

Impact of Google v. Oracle: Google’s recent big win in the US Supreme Court

Can an organisation use copyright law to prevent others from using its application programming interfaces (APIs), and if so, what might this mean for future software innovation and firms’ rights to those innovations? In May 2012, in Oracle v. Google, the U.S. District Court for the Northern District of California was asked to determine whether Oracle could claim copyright in the Java APIs it owned and whether Google had infringed those rights. The court held that APIs are not subject to copyright protection. Oracle appealed to the U.S. Court of Appeals for the Federal Circuit, which reversed the denial of copyright protection and remanded the matter for trial.
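The distinction that drove the litigation is between “declaring code” (the method headers that tell programmers how to call a function, roughly 11,500 lines of which Google copied from the Java SE APIs) and “implementing code” (the method bodies, which Google wrote itself). The following minimal Java sketch is purely illustrative; the MathUtils class is hypothetical rather than Oracle’s actual code, though the Supreme Court itself used the Math.max method as its example:

    // Hypothetical illustration of the distinction at issue in Google v. Oracle.
    public final class MathUtils {

        // Declaring code: the method's name, parameters and return type.
        // This is the part existing programmers memorise and rely upon, and
        // is what Google copied so Java developers could work on Android.
        public static int max(int a, int b) {
            // Implementing code: the logic that does the actual work.
            // Google wrote its own versions of method bodies like this one.
            return (a >= b) ? a : b;
        }
    }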

Years of legal battles led to the U.S. Supreme Court’s decision in Google v. Oracle in April 2021, which held that Google’s copying of the APIs amounted to fair use as a matter of law. The much-watched journey has divided programmers, developers, computer scientists, tech firms and academics worldwide. On the one hand, some see Java code as a necessary public good to be shared by all. On the other, some have predicted that the ruling “will have a chilling effect on incentives for future software innovation” by denying developers the right to protect the copyright in their code.

The Court’s decision in favour of Google is likely to be relied upon by other companies using Java SE code. For those customers using Java SE in a similar fashion to Google, the US fair use doctrine is likely to protect them against claims of “unlicensed” use of Java and demands for licence fees. The question now is whether the ruling will make waves in other areas where fair use and copyright are in play, including music and art cases.

Deepfakes

AI-generated fake videos are becoming more common and convincing. Have you seen Mark Zuckerberg brag about having “total control of billions of people’s stolen data”, or witnessed Jon Snow’s moving apology for the dismal ending to Game of Thrones? If so, you are likely to have seen a deepfake. The 21st century’s answer to Photoshopping, deepfakes use a form of AI technology called deep learning to make images of fake events, hence the name deepfake.

Deepfakes can cause a variety of harms to a range of stakeholders. Broadly speaking, the risks faced by individuals are likely to be those surrounding sexual offences, harassment, defamation, reputational harm and fraud. The risks faced by a business or a specific class of individuals are likely to include reputational and brand damage, as well as fraud or harm to commercial interests. The risks posed to societies and communities more generally include the erosion of trust in institutions, including governments and the press, as well as potential threats to elections and other aspects of the political infrastructure.

UK legislation has not directly established an ‘image right’ (also known as personality rights or rights of publicity), nor have the English courts recognised one at common law. Anyone seeking to remove or challenge an unwanted deepfake must instead rely upon one or more separate but overlapping causes of action to protect their likeness, which may include the laws of privacy and data protection, publicity and brand protection, intellectual property, and reputation and dignity (including harassment and defamation).

Most major ‘big tech’ companies have officially banned deepfakes from appearing on their platforms. Platforms which have banned the posting of user-generated or shared deepfakes, or certain types and categories of deepfakes, include Facebook, Instagram, Reddit, Twitter, YouTube and TikTok. A wide range of potential technological controls are also under development.

Given that deepfakes risk the creation of a ‘zero-trust society’, in which people cannot distinguish truth from falsehood, there is a very strong case for new legislation to be enacted, or for existing laws to be amended, to cover deepfakes. That said, any regulatory controls will inevitably have to contend with conflicting rights of free speech, the difficulty of enforcement against anonymous creators and the problems associated with the cross-border nature of the internet.

Ed Rea is a Commercial Technology and Data lawyer and co-founder of Arbor Law. Contact Ed on ed.rea@arbor.law