How do I develop an AI acceptable use policy for my business?

This week the world’s second AI Safety Summit takes place in Seoul. The summit will discuss the opportunities and capabilities presented by advanced AI systems, but also their potential harms and risks (1). The Seoul summit follows on from last year’s AI Safety Summit held in the UK and the resulting Bletchley Declaration, which was signed by the US, UK, EU and China (amongst others).

This second summit is very timely, following recent releases of more powerful AI systems and models, such as OpenAI’s GPT-4o, Google’s Project Astra and Meta’s Llama 3, all of which have the potential to bring general-purpose AI into our daily private lives as well as our working lives.

With more and more AI tools and applications coming to market, many organisations are already looking at the official use of AI within their business operations (2), as well as facing the challenge of “shadow use” of AI (the use of AI by employees, to boost productivity or streamline processes, outside of governance and without formal approval or oversight from management or IT departments).

In this environment, businesses need to move quickly to ensure they have clear and appropriate policies, procedures and guidelines in place to benefit from this new technology whilst minimising any risks to their businesses.

Now seems like the ideal time for companies and institutions to review their AI Use Policies (or to put one in place, if they don’t already have one). But what is best practice for your AI Use Policy and what should it cover?

To help answer these questions, Arbor Law’s Iain Simmons outlines what business leaders and in-house counsel need to consider when developing an AI Use Policy.

What is an AI Use Policy?

An AI Use Policy is designed to ensure that any AI technology used by your business is used in a safe, reliable and appropriate manner that minimises risk. It should be developed to inform and guide your employees on how AI can be used within your business.

How do I develop an AI Use Policy?

This checklist and guidance are designed to accompany and/or support the drafting, deployment and operation of an AI Use Policy by an organisation. It can be used as the basis of an AI Acceptable Use Policy Template.

Introduction

In the AI Use Policy’s introduction and purpose section(s), it is always helpful to set the scene:

  • What is the overall context and purpose of the Policy? For example:
    • recognise that Artificial Intelligence (AI) tools can be transformative, for example by automating straightforward tasks, increasing people’s productivity, enabling faster processing and analysis of large amounts of data, and supporting more effective and efficient decision-making;
    • however, AI tools also present risks and challenges, such as to intellectual property, data security and data protection.
  • If possible and appropriate, consider relating the (new) use of AI to the culture of the organisation and its people, for example:
    • The company intends to use AI in a human-centric way, respectful of confidentiality, privacy and third party rights;
    • AI use can and should support (rather than replace) personnel in their work and enable people to do more efficient, creative, rewarding and valuable work; and
    • People (our employees and our customers) should remain at the heart of everything the company does.
  • What is the scope of the policy, for example that it:
    • Applies to all staff, including employees, consultants and contractors;
    • Applies to all work-related tasks (including data analysis, content generation, research and development, coding, and producing any materials and documentation for use within the company’s operations) and/or to use of AI tools on company-owned and provided equipment and devices;
    • Applies to AI tools and any software which contain AI functionality.
  • Are there any other applicable and directly relevant company policies? If so, consider listing them, such as:
    • IT use / Acceptable Use Policies;
    • Data / IT Security Policies;
    • Data Protection & Records Retention.
  • How often your Policy will be reviewed and refreshed, with clear version control and ownership. Because AI technology and the applicable laws and regulations are developing so rapidly, the policy should be reviewed frequently (at least annually).
  • Are personnel required to specifically read, acknowledge and sign the policy before they can use AI tools within the company, or does the policy fit into the company’s wider compliance framework? Does the company have a zero-tolerance view of “shadow AI” which could pose significant risks? Is it necessary to state the consequences of any non-compliance?
  • Will the policy be supported and enforced by other measures, such as training or user coaching, or Data Loss Prevention (DLP) controls to detect or prevent unapproved AI tool usage?
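To make the DLP point concrete, the short sketch below illustrates, in Python, the kind of automated screening a DLP-style control might apply to text before it is sent to an external AI tool. It is a minimal illustration under stated assumptions, not a production control: the patterns shown are simplistic examples, and a real deployment would use a dedicated DLP product configured by your IT/security team.

    import re

    # Illustrative patterns only - real DLP products ship far more
    # sophisticated detectors (and should be configured by IT/security).
    SENSITIVE_PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "UK National Insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
        "credential": re.compile(r"(?i)\b(password|api[_ ]?key|secret)\b"),
    }

    def screen_prompt(prompt: str) -> list[str]:
        """Return the names of any sensitive patterns found in the prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]

    prompt = "Summarise this complaint from jane.doe@example.com (password: hunter2)"
    findings = screen_prompt(prompt)
    if findings:
        # A real control might block, redact, or route the request for approval.
        print("Blocked - prompt appears to contain:", ", ".join(findings))
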
Approving AI tools
  • List any pre-approved AI tools (e.g. OpenAI’s ChatGPT, Google’s Gemini (formerly Bard)). Note also that some common AI tools might be based on a pre-approved tool, so consider also listing those for clarity (such as a browser powered by a pre-approved tool, an example being Microsoft Edge with Copilot, powered by ChatGPT). Conversely, a commonly used browser might have an extension which is not pre-approved by your organisation (e.g. the Sider extension to Google Chrome). For each pre-approved AI tool, you may want or need to set out any restrictions around who (which teams or functions) within the Company is authorised to use the tool, for what business purpose(s) and any other limitations, guidance or cautions (for example, what it must not be used for and where further guidance/approvals are required) – an illustrative, machine-readable register of this kind is sketched after this list.
  • Be clear if this is guidance or a rule: if a rule, state definitively that if an AI tool has not been approved then it cannot be used for the company’s business or operations.
  • What is the process for approving other or new AI tools? For example, an email address or intranet template form providing some helpful guidance and structure to requesters.
  • Consider setting out the relevant evaluation criteria in the policy: a high-level minimum standard, such as “the AI tool should be legally compliant, transparent/accountable, trustworthy, safe, secure and ethical”, followed, if appropriate, by other more granular criteria (otherwise, deal with these in a separate SOP/checklist for the approver team/organisation).
  • The criteria should be appropriate to the type of organisation and the most likely usage, for example:
    • Intended use – because this can drive the level of risk and the relative importance of the evaluation criteria, below (for example, compare a sales team using an AI tool to create some content for a pitch document with software engineers using an AI tool for code generation and optimisation);
    • Type of AI tool (for example, is it generative, predictive or extractive; is it a Narrow AI tool trained to perform a limited task on a limited body of data and knowledge; or is it a more open, general AI tool with a wider community level of access; does it have any autonomy or agency?);
    • Data security and confidentiality – reference the company’s own policies and standards and also consider if the particular AI tool is capable of re-using/re-generating the company’s commercially sensitive information in a recognisable form;
    • Intellectual property – consider both inputs and outputs and note, in particular, that any assertion by the AI tool/vendor that it owns IP rights should raise a red flag;
    • Data protection and privacy – again, reference the company’s own data privacy policies and standards and ensure the tool is legally compliant; it is best practice here to conduct a DPIA (Data Protection Impact Assessment) if any personal data/PII is (or could be) processed in the AI tool;
    • Human-centric – does the AI tool have built-in human safety factors and ethical guidelines? (It is perhaps helpful to note here that many of the major AI developers have committed to sign up to AI-specific codes of practice and testing regimes (3));
    • Other legal & regulatory compliance factors – this will vary depending on the nature of the company’s business, and there may be AI-specific regulatory rules or guidance that need to be reviewed and followed;
    • Other compliance policies – are any other compliance/governance policies & procedures applicable to the AI tool and proposed use;
    • Vendor evaluation – assess whether the vendor/tool is reputable and evaluate all provider(s) of the AI tool, not only the vendor but also the original developer (if the vendor is a reseller or managed service provider);
    • Contract terms – review the relevant T&Cs/terms of service and privacy policy;
    • Risk-Benefit analysis – balance potential benefits and opportunities against the potential risks and impacts of any misuse, including malicious use, malfunctions, harmful bias and underrepresentation, loss of control, misinformation or deep fakes, security and privacy breaches;
    • RAG rating and risk management – consider risk-rating the tool (Red/Amber/Green) to drive an appropriate level of ongoing monitoring, mitigations and controls, and review AI tool usage against industry trends, legal changes and regulatory oversight.
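To make the pre-approved tool list and RAG rating concrete, the sketch below shows one way an approver team might keep a simple, machine-readable register of evaluated AI tools. The tool entry, field names and ratings are hypothetical examples for illustration, not recommendations; in practice such a register might live in a GRC system or on an intranet page rather than in code.

    from dataclasses import dataclass, field

    @dataclass
    class ApprovedAITool:
        name: str
        approved_uses: list[str]        # permitted business purposes
        authorised_teams: list[str]     # who may use the tool
        rag_rating: str                 # "Red" / "Amber" / "Green"
        restrictions: list[str] = field(default_factory=list)

    # Hypothetical register entry, for illustration only.
    REGISTER = [
        ApprovedAITool(
            name="ChatGPT (Enterprise)",
            approved_uses=["drafting internal content", "summarising public material"],
            authorised_teams=["Marketing", "Sales"],
            rag_rating="Amber",
            restrictions=["no personal data", "no client-confidential material"],
        ),
    ]

    def is_authorised(tool_name: str, team: str) -> bool:
        """Check whether a team may use a named tool under the register."""
        return any(t.name == tool_name and team in t.authorised_teams
                   for t in REGISTER)

    print(is_authorised("ChatGPT (Enterprise)", "Marketing"))    # True
    print(is_authorised("ChatGPT (Enterprise)", "Engineering"))  # False
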
Rules of use (some suggested do’s and don’ts)
  • For inputs:
    • Don’t input any company confidential and commercially sensitive information without a careful assessment and/or specific guidance from the approver (or Legal team);
    • Don’t input any of your company’s valuable and/or proprietary intellectual property (such as proprietary source code that is not open-source);
    • Don’t input any third party owned intellectual property content (such as images or copyrighted text) if you do not have the right to use that content for that purpose – if in doubt, seek guidance from the approver (or Legal team);
    • Don’t input any personal data/PII (of your customer or co-worker, for example);
    • Don’t use AI tools in a way that could inadvertently perpetuate or reinforce bias, discrimination or prejudice – be aware that AI tools such as Large Language Models (LLMs) are trained on large amounts of pre-existing data and generate outputs by predicting the most likely outcomes, so they can reproduce any bias in that data;
    • Do only use AI tools in compliance with the company’s data security policies (so no sharing of login credentials or passwords, for example);
    • Do consider that the prompts and content which you input into an external AI tool could re-surface someday, in some form, as a future output of that AI tool in a way that you cannot control – this is particularly the case for more open Generative AI tools and if this raises a doubt or concern, seek further guidance.
  • For AI outputs:
    • Do make sure there is a “human-in-the-loop” when using any AI tool – at each material stage of its use and with overall human oversight;
    • Do check and verify the accuracy of all AI outputs before you use them more widely, share or publish them – AI generated content can be prone to “hallucinations” so use your critical thinking and, wherever possible, cross-check against an alternative reliable source;
    • Don’t just assume that AI generated content will be perfect or appropriate in every case – always use your (human) judgement and make sure that you (or a colleague) reviews and, if necessary, carefully edits any AI generated content before sharing or publishing it;
    • Do consider if further assessment is needed for AI use relating to people and their personal data, whether directly or indirectly – make sure you seek additional approvals and legal advice and note that a Data Protection Impact Assessment (DPIA) may be needed before such use;
    • Do also make sure that you (or a colleague) have the final decision when using AI to help make a decision which could impact any living person (for example, employees/applicants, or customers) – check the applicable Privacy Policy/fair processing notices and make sure the AI usage complies with the GDPR / Data Protection Act and is not discriminatory;
    • Don’t assume that your company will own intellectual property in, or be able to control and prevent others from using, any content produced by a Generative AI tool (such as ChatGPT, Midjourney or Sora) – if that is going to be an issue, don’t use the AI tool to generate the content. As of early 2024, in most jurisdictions, no new copyright protection arises in AI generated outputs (because they are not created by a human), and there is also active litigation on whether the use of third party owned IP by AI tools to train their models is itself legitimate;
    • Do clearly identify AI generated content (images etc.) as such, even for internal use – note that some (but not all) AI tools will automatically apply a watermark identifying AI generated content.
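On the final point about labelling, one lightweight approach is to give staff a standard disclosure line to apply themselves, rather than relying on each tool’s own watermarking. The snippet below is a minimal sketch of that idea; the form of words and the label_ai_content helper are illustrative assumptions, and your communications or legal team should settle the actual wording.

    from datetime import date

    # Hypothetical standard disclosure wording - agree the actual form
    # of words with your communications/legal team.
    AI_DISCLOSURE = "[AI-generated content - reviewed by {reviewer} on {reviewed}]"

    def label_ai_content(text: str, reviewer: str) -> str:
        """Append a standard AI-disclosure line to AI-generated text."""
        notice = AI_DISCLOSURE.format(reviewer=reviewer,
                                      reviewed=date.today().isoformat())
        return f"{text}\n\n{notice}"

    draft = "Q3 market summary produced with an approved AI tool..."
    print(label_ai_content(draft, reviewer="A. Editor"))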

The above checklist is intended as a guide to help establish rules and guidelines for employees regarding the permissible use of AI in the workplace. A comprehensive and unambiguous policy is essential to safeguard your business against a range of risks, from intellectual property infringement and the unauthorised disclosure of confidential information to the publication of potentially damaging or embarrassing errors. This approach is designed to ensure your business can benefit from the rich rewards of AI whilst remaining suitably protected and operating within legal and regulatory boundaries.

Guidelines for AI System Developers (if applicable)

For completeness, additional and more onerous guidelines may apply to developers of advanced AI tools or systems, particularly generative AI and/or general purpose or “high risk” AI systems. Some high-level examples include (4):

  • mandatory testing of AI tools which could pose any threat to national security, economic security or health & safety;
  • full documentation to support transparency (and potential disclosure to relevant regulatory bodies);
  • risk assessments and risk management policies (including privacy policies);
  • robust security controls;
  • for outputs, use of content authentication watermarks;
  • for inputs, jurisdiction variations around copyright/IP relevant to the use of materials for training (for example, exceptions relating to fair use in the USA, fair dealing in the UK and text and data mining (TDM) in the EU).

Further help

As with any new policy development, it is recommended that you take legal advice to ensure that your business does not inadvertently fail to comply with its regulatory duties. For advice and support on drafting and implementing AI Use Policies in your business, you can contact me to discuss how we can assist further.

About Iain Simmons

Iain is a senior commercial lawyer, with experience across multiple sectors and jurisdictions, specialising in technology, media and telecoms (TMT). Iain has held senior in-house, Head of Legal and General Counsel positions at a number of platform, technology and data businesses, from start-ups/scale-ups to multi-nationals. He has gained a breadth of legal experience, including contract/consumer law, corporate law and governance, M&A, IP, AI/technology, Data Privacy/GDPR and dispute management & resolution.

iain.simmons@arbor.law

References & Useful Links
(1) See the International Scientific Report on the Safety of Advanced AI (Interim Report), May 2024 (released in advance of the AI Seoul Summit, 21-22 May 2024)
(2) Almost three quarters (74%) of companies have started testing generative AI technologies and the majority (65%) are already using them internally. See State of Ethics and Trust in Technology, Deloitte, 2023 
(3) As announced at the UK’s Bletchley Park 2023 AI Safety Summit
(4) Sources include: President Biden’s Executive Order on AI Development, 30 October 2023; the G7’s 2023 Hiroshima Summit’s Guiding Principles and Code of Conduct for Organisations Developing Advanced AI Systems; the EU’s 2024 AI Act; and the UK Government’s 2023 White Paper “A pro-innovation approach to AI regulation”.