What Your Engagement Letter Should Say About AI (and What It Shouldn't)
Courts are stripping privilege from AI-assisted legal work. Major firms are rewriting engagement letters in response. Here's what belongs in yours and what creates more liability than it prevents.
In February 2026, a federal judge in Manhattan ruled that a defendant’s conversations with consumer AI tools weren’t protected by attorney-client privilege. The reasoning: the platforms’ terms of service allowed data retention, training, and third-party disclosure. No privilege. No work product protection.
That was the Heppner case. Two months later, more than a dozen major U.S. law firms have added AI-specific language to their client engagement agreements. Reuters reported that firms are explicitly telling clients to “proceed with caution” with any AI tool that touches legal matters. Some agreements now state that sharing a lawyer’s advice with a chatbot could erase privilege entirely.
This is moving faster than most firms expected. If your firm's engagement letter doesn't address AI yet, the large firms you compete with for clients have already updated theirs.
Why engagement letters need AI language now
Three things changed in early 2026 that make this urgent.
The privilege question got answered. Before Heppner, it was theoretical. Now there’s a federal ruling that says consumer AI use can destroy privilege. The court drew a clear line between consumer platforms (where the vendor’s terms allow data retention and disclosure) and enterprise platforms (where contractual protections prohibit it). That distinction matters for both your firm and your clients.
Clients are using AI without telling you. Attorneys aren’t the only ones putting case information into ChatGPT. Clients are doing it too. They’re summarizing legal advice, running what-if scenarios on their disputes, or just asking a chatbot to explain what their lawyer told them. Every one of those prompts is potentially discoverable. The Heppner court ordered the defendant to turn over his AI conversations to prosecutors.
Sanctions are escalating. An Oregon federal judge imposed $110,000 in fines on two attorneys who filed briefs containing 15 fake cases and 8 fabricated quotations generated by AI. A 6th Circuit case produced $30,000 in sanctions. A 5th Circuit court fined an attorney who used enterprise tools (vLex and CoCounsel) but didn't verify the output. The numbers keep going up.
What belongs in your engagement letter
There are two sides to this: what your firm discloses about its own AI use, and what you tell clients about theirs.
Your firm’s AI disclosure
ABA Formal Opinion 512 (July 2024) says that client consent for AI use must be “informed and specific.” Boilerplate language about “technology tools” isn’t enough. If your firm uses AI for research, drafting, or document review, the engagement letter should say so plainly.
A reasonable disclosure covers three things:
- What you use. Name the category of tools (AI-assisted legal research, document drafting). You don’t need to list every product, but the client should understand that AI is part of your workflow.
- How you use it. State that AI-generated output is reviewed by a licensed attorney before it's relied on or communicated to the client. This isn't a new obligation; it's what you should already be doing.
- How the data is protected. Confirm that the tools operate under enterprise agreements that prohibit the vendor from retaining, training on, or disclosing client data. This is the line Heppner drew: contractual protections are what separate enterprise AI from consumer AI in the court's analysis.
Client AI warnings
This is the part most firms are adding now. Your clients need to know that putting case information into consumer AI tools creates real risk.
At minimum, your engagement letter should:
- Warn that AI conversations may be discoverable. The Heppner ruling established this. Information entered into AI tools can be ordered disclosed in litigation.
- Explain the privilege risk. Sharing your legal advice with a chatbot may waive attorney-client privilege, because consumer platforms’ terms of service don’t protect confidentiality the way an attorney-client relationship does.
- Recommend against entering case-specific details. Client names, facts of the matter, legal strategy, anything that could identify the case. If it would be a problem on the front page of the newspaper, it’s a problem in a ChatGPT prompt.
What your engagement letter shouldn’t say
Some firms are going too far. There’s a difference between helpful disclosure and creating new liability.
Don’t promise to disclose every use of AI. Courts haven’t settled on uniform disclosure requirements. Some judges require it in filings; most don’t. Promising blanket disclosure in your engagement letter creates an obligation that may be impossible to track and could expose the firm if any use goes unlogged. Disclose your general approach. Don’t promise to itemize every instance.
Don’t guarantee AI accuracy. Your engagement letter should make clear that AI output is reviewed by attorneys, not that AI tools are accurate. These tools hallucinate. The Oregon case is proof of what happens when verification fails. The commitment is to human review, not to AI reliability.
Don't prohibit client AI use outright. You can't control what your clients do outside your representation. Warning them about risk is appropriate; banning them from using AI tools is overreach and unenforceable. Frame it as guidance, not a prohibition.
The enterprise vs. consumer distinction matters
This is the thread that runs through every recent case. Heppner turned on the fact that consumer Claude’s terms of service allowed data retention and disclosure. Morgan v. V2X (D. Colo., March 2026) went further, holding that AI-assisted work product can be protected if the platform has contractual safeguards: no training on client data, no third-party sharing, deletion on request.
Your engagement letter should reflect this distinction. Your firm uses enterprise tools with data protections. Your clients should know that consumer tools don’t have those protections.
This is also where your AI acceptable use policy and your engagement letter work together. The AUP governs what your firm does internally. The engagement letter tells clients what to expect and what to avoid.
What to do this week
If your firm hasn’t touched its engagement letter since before February 2026, it’s time.
- Review your current language. If there’s no mention of AI, you’re behind 12+ major firms that have already updated theirs.
- Add a firm-side disclosure. Brief, factual, covering what you use and how data is protected.
- Add a client-side warning. Cover discoverability, privilege risk, and the recommendation to avoid entering case details into consumer AI tools.
- Review your AI acceptable use policy. If you don’t have one, that’s the first step. The engagement letter is the client-facing piece; the AUP is the internal piece.
The firms reading the ABA Journal and Reuters coverage are the same firms whose clients are wondering whether their lawyers have this figured out. Having a clear, reasonable answer is better than having to come up with one on the spot.
Artech Solutions works with law firms and professional services companies across the Des Moines metro on IT security, compliance, and AI governance. If your firm needs help evaluating AI tools or building an acceptable use policy, reach out.