Deepfakes and Generative AI Legal Issues

Advances in generative AI technology are forcing lawmakers around the world to act quickly to keep up.

In this blog, Ed Rea, founding partner at Arbor Law, looks at new and emerging regulation being introduced to address growing concerns around the use of highly sophisticated digital fabrications – from deepfakes used to influence elections throughout 2024 to deepfakes used to trick employees into paying out millions of dollars to criminal gangs.

Police in Hong Kong recently announced that a finance worker at an unnamed multinational business had been tricked into paying out more than $20m to fraudsters. The criminals used deepfake technology to appear as the company’s chief financial officer in a video conference call with the finance employee. Several other employees, some of whom the finance worker recognised, also appeared on the call, but all of them were digital fakes.

The Hong Kong case is one of several examples where criminals are believed to have used deepfake technology to manipulate publicly available video and other footage to scam people, usually out of money. Earlier in 2024, a set of AI-generated pornographic images of singer Taylor Swift circulated on social media; they were viewed millions of times before being removed from social platforms. Such examples underline rising concern about the use of sophisticated deepfake technology for illegal ends: a 2023 YouGov poll found that 85 percent of Americans were “very concerned” or “somewhat concerned” about the spread of misleading video and audio deepfakes.

At the Munich Security Conference in February 2024, several tech companies signed an agreement to adopt a common framework for responding to deepfakes generated with the intention of misleading voters. However, the measures are voluntary, making the agreement effectively toothless. Researchers are working on other technical solutions, including digital watermarks, embedded codes and detection algorithms, but none is yet mature. For now, the emphasis is on using legislation to counter the threat of deepfakes.

My Arbor Law colleague Clara Westbrook has already examined the legal, compliance and security issues to be aware of in relation to ChatGPT. Here, I look at recent legislation relating to the rising use of deepfakes.

Deepfake law: US

While there is currently no federal law in the United States that bans deepfakes, 10 states have independently passed laws criminalising them.

In Georgia, Hawaii, Texas, and Virginia, legal provisions classify non-consensual deepfake pornography as a criminal offense. Meanwhile, California and Illinois empower victims to take legal action against individuals who generate images using their identities. Minnesota and New York have both enacted legislation covering both aspects, and Minnesota’s law also extends to combating the use of deepfakes in political contexts.

Deepfake law: UK

The UK’s Online Safety Act 2023 prohibits the dissemination of explicit images or videos that have been digitally altered in a manner that intentionally or recklessly inflicts distress upon an individual. However, the Act does not criminalise the creation of other forms of AI-generated content without the subject’s consent. In such cases, only those whose likeness has been exploited to cause harm can pursue legal recourse, relying on defamation, privacy, harassment, data protection, intellectual property, or other applicable criminal laws. Establishing liability in these instances can be complex and challenging.

Deepfake law: EU

In the EU, deepfakes will be regulated by the AI Act – the world’s first comprehensive AI law, which was finalised and endorsed by all 27 EU Member States on 2 February 2024.

Under the AI Act, deepfakes are defined as “AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful” (Art. 3(44bI) EU AI Act). The maximum penalty for non-compliance with the prohibitions set out in Art. 5 of the Act is an administrative fine of up to €35 million or 7% of worldwide annual turnover, whichever is higher.

How Arbor Law can help you navigate the use of AI technology

AI can provide some clear and tangible benefits, but with those benefits can also come significant challenges.

You can address these challenges with expert support and an effective data strategy, together with clear accountabilities and processes that detail how data is managed within your business.

At Arbor Law, our senior lawyers can help you navigate the complex and dynamic world of data protection. Receive legal advice on how best to manage the use of AI in your organisation and build effective strategies to ensure you are compliant. Contact us to find out more information.