ChatGPT: Legal, Compliance, and Security Issues for your organisation
Part 2

In part one of this mini blog series, Arbor Law’s Data Protection specialist Clara Westbrook discussed the legal, compliance and security issues you need to be aware of as you harness the ever-evolving power of ChatGPT.

Let’s turn now to the laws surrounding the use of AI and its regulation.

AI Laws

At present there is no dedicated AI regulation, but various existing laws, as discussed in part one, apply to both the development and use of GAI.

European AI Regulation

An EU AI Regulation is currently going through the European legislative process and is likely to come into force in early 2024.

Because it will take the form of a regulation, it will be directly applicable in Member States without the need for national implementing legislation.

The draft framework proposes a risk-based approach. AI activity is categorised as unacceptable, high, limited, or minimal risk, based on the risk of harm to the health and safety of individuals and on the extent to which the activity adversely affects the rights and protections enshrined in the EU Charter of Fundamental Rights.

At one end of the scale, the AI regulation prohibits AI systems that pose an unacceptable risk to fundamental rights under the charter. AI systems identified as high risk will be subject to numerous onerous obligations before they can be placed on the market or used in the EU.

These obligations include implementing a risk management system that must be maintained throughout the lifecycle of the AI system, putting data governance measures and human oversight in place, and meeting requirements regarding accuracy, robustness and cybersecurity.

AI technology falling into the minimal-risk category will not be subject to restrictions beyond any relevant existing legislation.

Failure to comply with the proposed regulation will attract significant fines of up to €30,000,000 or, for companies, up to 6% of total worldwide annual turnover, whichever is higher. The EU Commission additionally proposes to establish a European AI Board comprising representatives from both the Commission and Member States.

UK

In March 2023, after some delay, the UK government published its White Paper – ‘A pro-innovation approach to AI regulation’ – which sets out a framework for the UK’s approach to regulating AI.

The government decided not to legislate to create a single function to govern the regulation of AI. Instead, it elected to support existing regulators in developing a sector-focused, principles-based approach. Regulators including the ICO, the CMA, the FCA, Ofcom, the Health and Safety Executive, and the Equality and Human Rights Commission will be required to consider the following five principles to build trust and provide clarity for innovation:

  • Safety, Security and Robustness
  • Transparency and Explainability
  • Fairness
  • Accountability and Governance
  • Contestability and Redress

UK regulators will publish non-statutory guidance over the next year, which will include practical tools such as risk assessment templates and standards.

The guidance will need to be pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative, and underpinned by the following four core elements of the government’s AI framework:

  • Defining AI based on its unique characteristics to support regulator coordination.
  • Adopting a context-specific approach.
  • Providing a set of cross-sectoral principles to guide regulator responses to AI risks and opportunities. The government expects to introduce a statutory duty on regulators to have due regard to the five AI principles, following an initial period.
  • Delivering new central government functions to support regulators in delivering the AI regulatory framework, including by horizon scanning and supporting an iterative regulatory approach.

The government also supports the findings of the Vallance Review published earlier in March, which looked at the approach to regulating emerging and digital technologies.

With regard to AI, Sir Patrick Vallance recommended:

  • The government works with regulators to develop a multi-regulator sandbox. This is to be operational within six months, supported by the Digital Regulatory Cooperation Forum or DRCF (comprising the ICO, CMA, Ofcom and the FCA).
  • The government should announce a clear policy position on the relationship between intellectual property law and generative AI to provide confidence to innovators and investors.

While providing for a regulatory sandbox, the AI White Paper does not set out further policy on the relationship between IP and generative AI. However, we understand the Intellectual Property Office is working on a code of practice, expected to be ready by Summer 2023.

Next Steps

It’s important to review the possible use of ChatGPT within your organisation.

To help you do this, we’ve added the next steps from part one of ChatGPT: Legal, Compliance, and Security Issues, together with additional information regarding regulations.

  • Monitoring the development of the upcoming EU AI law, together with guidance published by the UK government and the Information Commissioner’s Office. This area is evolving at pace, so it may be a good idea to nominate an individual within the company to act as the key point person.
  • Conducting a scoping exercise to understand whether you are currently using GAI. If so, documenting the tools and carrying out a data privacy impact assessment for each one.
  • Understanding whether your employees are using ChatGPT or other GAI tools to conduct company business.
  • If you permit the use of GAI for company business, developing a GAI policy for employees that outlines the circumstances in which it can be used, how, and for what types of work.
  • Checking your supply chain for GAI use – for example, whether any of your suppliers are using your company data in GAI tools to perform services for you.
  • If you are considering using any GAI tools, conducting a review of each tool and, if personal data may be involved, carrying out a data privacy impact assessment.

How Arbor Law can help you navigate ChatGPT use

While AI undeniably offers substantial advantages, it also presents hurdles.

It’s important to have expert support and an effective data strategy in place to maintain clear accountabilities and processes that cover how your business manages data.

At Arbor Law, our senior lawyers can help you navigate the complex and dynamic world of data protection. Receive legal advice on how best to manage the use of AI in your organisation and build effective strategies to ensure you are compliant.

Contact us to find out more information.