AI acceptable use policies: why every law firm needs one now
Law firms need AI acceptable use policies to protect client data and meet ethical obligations. Learn what to include and why it matters.
Here’s a question worth asking at your next partners’ meeting: do you know which AI tools your attorneys and staff are using right now?
If the answer is “not exactly,” you’re not alone. Most law firms we talk to are in the same position. Associates are experimenting with ChatGPT. Paralegals are using AI summarization tools. Someone in marketing tried an AI writing assistant. None of it is governed by any formal policy.
That gap between adoption and governance is where the risk lives. For law firms, where confidentiality isn’t just good practice but an ethical obligation, that risk is higher than in most industries.
An AI acceptable use policy closes that gap. It gives your firm clear rules for how AI tools can and can’t be used, protects your clients’ data, and helps you meet your professional obligations.
Why this matters more for law firms
Every business should think about AI governance, but law firms face pressures that other industries don’t.
Ethical obligations are already here
ABA Formal Opinion 512, issued in July 2024, made it clear that lawyers’ duty of competence under Model Rule 1.1 extends to AI tools. If you’re using AI in your practice, you have an obligation to understand how it works, where the data goes, and what the limitations are. The duty of supervision under Rules 5.1 and 5.3 means partners are responsible for how associates and staff use these tools, too.
About half of U.S. states have now issued some form of formal AI ethics guidance for attorneys. Iowa hasn’t published its own opinion yet, but the Iowa State Bar Association’s 2026 AI Training Series, particularly Session 6 on “AI Policy & Vendor Selection for Law Firms” on May 15, signals this is an active area of focus.
There’s now a federal ruling on AI and privilege
In February 2026, we got the first federal ruling directly addressing what happens when AI tools and privilege collide.
In United States v. Heppner (S.D.N.Y., Feb. 17, 2026), Judge Jed Rakoff held that documents a defendant prepared using the consumer version of Anthropic’s Claude chatbot were protected by neither attorney-client privilege nor the work product doctrine. The reasoning: Heppner used a publicly available AI platform whose terms of service allowed data retention, model training, and disclosure to third parties. He did so without attorney direction or supervision. The court found no reasonable expectation of confidentiality, no attorney-client relationship with the chatbot, and no basis for work product protection. To be clear, the court’s reasoning applies to paid personal tiers like ChatGPT Plus or Claude Pro, not just free accounts.
The part that matters most for law firms: it’s not just about attorneys using AI. Heppner was a client who independently dumped privileged materials into a consumer AI tool. That means your engagement letters and fee agreements need to address this risk as well: not just your firm’s internal AI usage, but the possibility that a client could inadvertently waive privilege by pasting case documents into ChatGPT or Claude on their own.
The Heppner court was careful to note that enterprise AI platforms with contractual no-training commitments and data isolation, like Microsoft 365 Copilot, ChatGPT Enterprise, or Claude Team, present a “materially different” analysis. The critical factors are confidentiality protections, attorney direction, and vendor structure. A policy that distinguishes between consumer-grade tools and true enterprise deployments isn’t just cautious. It’s the line between protected and discoverable.
Courts are paying attention beyond privilege
Heppner isn’t the only AI-related courtroom development. In March 2026, the 6th U.S. Circuit Court of Appeals sanctioned two Tennessee attorneys $30,000 for filing briefs containing more than two dozen fabricated or misrepresented AI-generated case citations. That’s an appellate court, not a trial judge issuing a warning. Courts at every level have now imposed sanctions, required disclosure of AI usage, and issued standing orders about AI-generated filings. A policy that requires verification of AI-generated legal research protects your firm from sanctions and the kind of professional embarrassment that’s hard to come back from.
What should be in your policy
An effective AI acceptable use policy doesn’t need to be a 30-page document. It needs to be clear, practical, and enforceable.
Approved tools
Specify which AI tools are authorized for use at the firm. This should include tools provided and managed by the firm (e.g., Microsoft Copilot, legal-specific AI platforms), tools conditionally approved with restrictions (e.g., ChatGPT only through a paid enterprise account and never with client data), and tools explicitly prohibited (any consumer AI tool for work involving client data, whether free or paid).
The goal isn’t to ban AI. It’s to channel usage toward tools the firm has vetted for security, confidentiality, and reliability.
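If your firm wants the list to be more than a PDF nobody opens, it can also live in a machine-readable form that IT can enforce and anyone can query. Here’s a minimal sketch in Python; the tool names, statuses, and notes are hypothetical placeholders for illustration, not vendor assessments:

```python
# A minimal, hypothetical AI tool allowlist for a firm.
# Names and restrictions are illustrative only, not recommendations.
from enum import Enum

class Status(Enum):
    APPROVED = "approved"        # firm-provided and managed
    CONDITIONAL = "conditional"  # allowed only with restrictions
    PROHIBITED = "prohibited"    # never for firm work

AI_TOOL_POLICY = {
    "microsoft-365-copilot": {"status": Status.APPROVED,
                              "notes": "Firm enterprise tenant only."},
    "chatgpt-enterprise":    {"status": Status.CONDITIONAL,
                              "notes": "Firm account only; no client-identifiable data."},
    "chatgpt-consumer":      {"status": Status.PROHIBITED,
                              "notes": "Free and paid personal tiers alike."},
}

def check_tool(name: str) -> str:
    """Return the policy entry for a tool, defaulting to prohibited."""
    entry = AI_TOOL_POLICY.get(name)
    if entry is None:
        return "Not on the approved list: treat as prohibited until vetted."
    return f"{entry['status'].value}: {entry['notes']}"

print(check_tool("chatgpt-consumer"))
```

Defaulting unknown tools to prohibited mirrors the policy logic above: anything the firm hasn’t vetted is off the table until someone vets it.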
Prohibited uses
Be explicit about what’s not allowed. Entering client-identifiable information into non-approved tools. Using AI to generate legal citations without independent verification. Submitting AI-generated work product as final without attorney review. Using AI for tasks involving privileged communications without understanding the tool’s data handling.
Data handling rules
This is where confidentiality gets concrete. Define what types of data can be used with each approved tool. Require that client-identifiable information be anonymized or redacted before use in general-purpose AI. Specify data residency requirements: where does the data go, is it stored, is it used for training? Require that AI tool vendors have appropriate data processing agreements in place.
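To make “anonymize or redact before use” concrete, here’s a minimal Python sketch that masks a few identifier patterns before text leaves the firm. The client names and matter-number format are invented for the example; a real workflow needs a vetted redaction tool plus human review, not a regex:

```python
import re

# Hypothetical patterns for illustration only. A real workflow needs a
# vetted redaction tool and attorney review before anything is sent to
# a general-purpose AI.
CLIENT_NAMES = ["Acme Holdings", "Jane Doe"]    # illustrative only
MATTER_NO = re.compile(r"\b\d{4}-\d{5}\b")      # e.g., 2026-00123
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(text: str) -> str:
    """Mask client-identifiable strings before external AI use."""
    for name in CLIENT_NAMES:
        text = text.replace(name, "[CLIENT]")
    text = MATTER_NO.sub("[MATTER]", text)
    text = EMAIL.sub("[EMAIL]", text)
    return text

print(redact("Re: Acme Holdings, matter 2026-00123, contact jdoe@example.com"))
# -> Re: [CLIENT], matter [MATTER], contact [EMAIL]
```

The point of the sketch is the workflow shape: nothing goes to a general-purpose model until it has passed a redaction step and a human glance.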
Disclosure requirements
Your policy should address when AI usage must be disclosed: to clients when AI is used in work product on their matters, to courts per applicable local rules or standing orders, and internally for quality control and supervision purposes. Disclosure norms are still evolving, but getting ahead of them protects your firm from being caught off-guard when a court or client asks.
Review and verification
Require that all AI-generated legal research be independently verified against primary sources, and that AI-drafted documents be reviewed by a licensed attorney before finalization. Establish a process for flagging and reporting AI errors, and periodically review AI tool outputs for quality and accuracy.
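One way to make verification systematic rather than ad hoc is to pull every citation out of an AI draft into a checklist before sign-off. A rough Python sketch follows; the reporter pattern is deliberately simplified and the sample draft is invented, and a production workflow would use a purpose-built citation parser (the open-source eyecite library is one example) rather than a regex:

```python
import re

# Deliberately simple pattern: volume + reporter + page, e.g.
# "598 U.S. 594" or "43 F.4th 1011". This sketch only builds the
# checklist that a human verifier works through against primary sources.
CITATION = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)|F\. Supp\.(?: 2d| 3d)?)\s+\d{1,5}\b"
)

def verification_checklist(draft: str) -> list[str]:
    """Extract every reporter citation so each can be checked before
    the draft is finalized."""
    return sorted(set(CITATION.findall(draft)))

# Invented sample draft for illustration.
draft = "See Smith v. Jones, 43 F.4th 1011 (6th Cir. 2022); accord 598 U.S. 594."
for cite in verification_checklist(draft):
    print(f"[ ] Verify: {cite}")
```

Each unchecked box is a citation someone must pull up in the primary source before the draft goes out the door.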
Training and acknowledgment
A policy only works if people know about it. Require all attorneys and staff to review the policy and acknowledge it in writing. Include AI usage guidelines in new employee onboarding. Provide periodic training; the ISBA’s AI Training Series is a good external resource for this in 2026. Designate someone responsible for keeping the policy current as tools and guidance evolve.
Common mistakes
Making it too restrictive. If your policy bans all AI usage, people will use it anyway and just hide it. A policy that acknowledges the value of AI while setting guardrails is far more likely to be followed.
Making it too vague. “Use AI responsibly” isn’t a policy. Name specific tools, specific data types, and specific workflows. Attorneys need to know exactly what they can and can’t do.
Writing it once and forgetting it. AI tools and capabilities are changing fast. Your policy should include a review cycle, quarterly or semiannually, to stay current with new tools, new risks, and new guidance from the ABA and state bars.
Ignoring staff. Attorneys aren’t the only ones using AI. Paralegals, legal assistants, marketing teams, and administrative staff may all be using AI tools. Your policy should cover everyone.
Getting started
You don’t need to solve everything at once. Start by finding out what’s already being used. Do an informal survey of attorneys and staff; you’ll almost certainly discover tools and uses you didn’t know about. Identify your biggest risks by looking at where client data is most likely to end up in an AI tool. Draft a simple policy covering approved tools, prohibited uses, and the verification requirement for AI-generated legal work. Get buy-in from the managing partner, not IT. This is a professional responsibility issue. And plan to revisit at least twice a year as the landscape evolves.
If you’re looking for structured guidance, the ISBA’s Session 6 on May 15, “AI Policy & Vendor Selection for Law Firms,” is specifically designed to help Iowa firms work through these questions.
Get it on paper
The tools are going to keep changing. The ethical obligations aren’t. An AI acceptable use policy gives your firm a consistent way to handle both, and it tells your clients you’re taking this seriously.
Need help developing an AI acceptable use policy for your firm? Artech Solutions works with Iowa law firms to build practical AI governance frameworks that protect client data and align with professional obligations. Get in touch to start the conversation.