If you have not yet heard of ChatGPT…
Where have you been hiding?
With its generative capabilities, ChatGPT allows users to generate responses to prompts by drawing on vast amounts of data gathered from the internet.
As exciting as this technology may be, it also brings along a host of legal, compliance, and security challenges that cannot be ignored.
In this article, Arbor Law’s Data Protection specialist Clara Westbrook discusses the legal, compliance and security issues you need to be aware of as you harness the ever-evolving power of ChatGPT.
ChatGPT is a Generative Artificial Intelligence (GAI) tool. It generates output in response to user prompts, drawing on a huge body of internet data known as training data. Some of this data may be publicly available or private, and it may be inaccurate, out of date and/or no longer relevant. As a result, the outputs may be false, irrelevant or biased.
Relying on such incorrect outputs may result in lawsuits and/or other action against those who use them.
There is a recent example of a case in the US where a lawyer used ChatGPT to conduct research for a case he was working on. After he relied on the ChatGPT output, it transpired that the case references and information were non-existent, resulting in the court imposing sanctions on the lawyer.
It is likely that we will see more incidents of this type as companies and/or their employees increasingly use ChatGPT and other GAI tools.
The digital landscape is extremely dynamic, with technology and legal developments evolving at pace. The EU AI Act is currently proceeding through the EU legislative process and is expected to come into force in 2024.
The Italian Data Protection Authority temporarily banned the use of ChatGPT in Italy, citing concerns that the tool's processing of personal data did not comply with data protection law. The authority also questioned whether data subjects could exercise their rights under data protection law, such as the rights of access, deletion and correction, and noted that there were no measures in place to prevent children from using the tool.
However, even though there is a new European law on the horizon, a myriad of existing legislation already applies to the use of GAI, including copyright, data protection, confidentiality, contract and product liability law.
Both the GAI developer and the user could breach copyright laws if training datasets contain copyrighted data.
Another point to note is whether the output generated by a GAI tool can itself be copyrighted. The answer is likely to be no, given that such protection generally applies only to an original work produced by a human. The consequence is that something new developed using GAI may not benefit from copyright protection, and could therefore be replicated by others without any copyright liability for the copier.
It is also likely that new law in this area, similar to user terms and conditions of some existing GAI tools, will require the output to include a clear statement that it was generated by AI.
The developer, if using personal data in the training datasets, needs to ensure that the personal data is collected and used in accordance with applicable data protection law. This includes having a lawful basis for the collection and use of such data.
Moreover, a user who is a data controller in respect of output data containing personal data will be required to comply with such laws. As a first step in data protection compliance, risks should be assessed using data protection impact assessments (DPIAs). Consideration must also be given to the lawful basis that can be relied upon, and to whether individual rights, such as the rights to data deletion, access and rectification, can be adequately met.
Training data may include confidential data. Both the developer and the user, to the extent that outputs include confidential information, may therefore be in breach of contract and/or the law of confidentiality. However, if the data has been made public, it may have lost its confidential nature, and the right to confidentiality may cease to exist. This could occur, for example, where confidential information is included in a prompt entered into ChatGPT.
The large-scale collection of data from the internet, including websites, may also mean that data has been collected and/or used in contravention of website terms and conditions. This may give rise to a breach of contract by the developer. Additionally, the user could face liability if they use output data in contravention of such terms and conditions.
If a GAI tool has been built with inaccurate or out-of-date data, the developer could face product liability claims if harm is caused by the tool's outputs. Take, for example, a tool used to diagnose ailments and offer advice regarding medication. If the information used to train the tool was inaccurate or out of date, the user could suffer harm, such as receiving an inaccurate diagnosis and inappropriate treatment. The developer of such an AI tool could face a liability claim for a defective product.
Given the above, we suggest a careful assessment of the risks and compliance obligations before ChatGPT is used within your organisation.
AI certainly comes with great benefits, but with those also come significant challenges.
You can address them with expert support and an effective data strategy, together with clear accountabilities and processes that detail how data is managed within your business. At Arbor Law, our senior lawyers can help you navigate the complex and dynamic world of data protection, advise you on how best to manage the use of AI in your organisation, and help you build effective strategies to ensure you are compliant. Contact us to find out more.