When AI Breaks Privilege: Lessons from a US Federal Court

In the recent US case of United States v. Heppner (SDNY), a federal judge ruled that dozens of documents generated by a criminal defendant using a non-enterprise consumer version of Anthropic’s Claude were not protected by attorney-client privilege. Although this is a US decision, there are clear lessons for UK practitioners and in-house teams.

What happened?

A financial services executive charged with securities and wire fraud used Claude to research legal issues. Prior to his arrest, he input information obtained from his defence counsel into the AI tool, generated various documents, and later shared those documents with his lawyers. When the FBI seized the materials during a search of his home, his legal team asserted attorney-client privilege. The court rejected his claim.

Why privilege failed

The judge held that privilege protects communications that are (i) between a client and a lawyer, (ii) intended to be confidential, and (iii) made for the purpose of obtaining or giving legal advice. The AI-generated documents failed the first two limbs: Claude is not a lawyer, and the communications were not confidential. Importantly, the Claude privacy policy permitted use of inputs and outputs for model training and reserved rights of disclosure, including to regulators or authorities. The court also observed that “Non-privileged communications are not somehow alchemically transformed into privileged ones simply because they are later shared with counsel.”

Key takeaways

  • Privilege attaches to lawyer-client communications, not AI interactions. Using an AI tool to analyse legal issues does not create privilege.
  • Privilege may be waived. Inputting privileged information into a public AI tool may amount to disclosure to a third party, undermining confidentiality.
  • Do not assume AI tools are confidential environments. Most consumer AI tools permit broad data use rights under their terms of service.
  • AI-generated documents may be discoverable. Organisations should assume that content created via public AI tools could later be disclosable in litigation or regulatory investigations.
  • Policy and training are critical. Staff should not input confidential or legally sensitive information into public AI tools without clear guidance and appropriate safeguards.

What about the UK?

While privilege doctrine differs in formulation between US and English law, the core requirement of confidentiality is fundamental in both jurisdictions. If confidential information is voluntarily shared with a third party, privilege risks being lost. The outcome in a UK court is therefore likely to be similar in principle. For employers and professional services firms, this reinforces the need for:

  • Clear AI usage policies
  • Controls around consumer vs enterprise tools
  • Staff training on privilege and confidentiality risks
  • Technical safeguards where AI tools are deployed

AI tools can deliver remarkable productivity gains, but they are not your lawyer, and communications with AI are not automatically confidential.

Further help

Please get in touch for practical guidance on managing privilege in the age of AI.

About Ed Rea

Ed is a co-founder and director of Arbor Law, and a senior commercial technology and digital infrastructure lawyer. He has significant experience advising on the contracts and relationships that govern how technology and digital infrastructure are built, bought, sold and delivered – particularly where the detail is complex and the commercial risks are significant.