When Your Meeting AI Becomes a Witness: AI Notetakers and Attorney-Client Privilege

AI meeting notetakers can put attorney-client privilege at risk. Learn how law firms should evaluate AI transcription tools and protect confidential communications.

You’re on a Teams call with a client, working through case strategy. Fifteen minutes in, you notice a small purple icon at the top of the screen. It reads: “Transcription has started.”

Someone on the call turned on AI transcription. Maybe it was you, by accident. Maybe it was your associate. Maybe it was enabled by default in your Microsoft 365 environment and nobody ever turned it off. Either way, everything said in this meeting is now being recorded, processed, and stored somewhere.

The questions nobody asked before hitting “record”: stored by whom, under what terms, and with what consequences for privilege?

How AI meeting tools end up in your firm

If you haven’t deliberately chosen and deployed an AI meeting tool, there’s a good chance one is already running. Teams transcription is often enabled by default in M365 environments. Many attorneys don’t realize their meetings are being transcribed and stored until someone points it out.

Beyond Teams, tools like Otter.ai, Fireflies, Fathom, and Zoom AI Companion are easy for individual users to install on their own. An associate signs up for a free trial. A paralegal starts using one to take notes during depositions. A client’s in-house team joins a call with their own AI assistant already recording. None of this goes through IT review. Nobody checks it against the firm’s confidentiality obligations.

This is the pattern we see over and over at Iowa firms: AI tools arrive through the side door, one user at a time, with no policy and no oversight.

The privilege problem: what the terms of service actually say

Most general-purpose AI notetakers are built for corporate sales teams, product managers, and startup founders. They weren’t designed with attorney-client privilege in mind, and their terms of service reflect that.

Read the fine print on a typical consumer AI notetaker and you’ll find some combination of the following: the vendor retains meeting data beyond the session itself, content may be used to train or improve AI models, data may be processed by third-party subprocessors, and storage may occur in jurisdictions with different data protection standards.

Any one of those provisions creates a potential privilege problem. Combined, they create a situation where confidential client communications are being shared with unknown third parties under terms no attorney would knowingly accept.

Bob Ambrogi highlighted this gap in a March 2026 LawNext piece comparing general AI notetakers with purpose-built legal tools like Querious. Courts are starting to notice the disconnect between what these tools promise in their marketing and what their contracts actually permit. That disconnect lands on the law firm.

What the courts are saying about AI and privilege

If the terms-of-service problem sounds abstract, recent federal rulings make it concrete.

In United States v. Heppner (S.D.N.Y., Feb. 2026), Judge Rakoff issued the first federal ruling on AI and privilege. A defendant used the consumer version of Anthropic’s Claude for legal research, and the court found that the work was protected by neither attorney-client privilege nor the work-product doctrine. The reasoning centered on the platform’s terms of service: because the consumer tool’s terms allowed data retention, use for model training, and disclosure to third parties, there was no reasonable expectation of confidentiality.

Judge Rakoff did note that enterprise AI platforms with contractual no-training commitments and data isolation present a “materially different” analysis. That distinction matters.

A month later, Morgan v. V2X, Inc. (D. Colo., Mar. 30, 2026) tested Heppner’s boundaries. The court extended strong work-product protection to a pro se plaintiff’s AI-related litigation materials. It distinguished Heppner on the civil vs. criminal context and the fact that a pro se litigant is both party and advocate. But here’s the part that matters for law firms: the court imposed a modified protective order requiring that any AI platform used must contractually bar training on confidential data, prohibit third-party sharing, and allow deletion on request.

Now apply that logic to an AI notetaker recording a privileged attorney-client conversation. If the notetaker’s terms permit data retention, training, or third-party sharing, the Heppner framework suggests no reasonable expectation of confidentiality, and no privilege. Even in the more favorable Morgan framework, the court demanded contractual AI safeguards as a condition of protection.

Both rulings point to the same thing: the vendor’s contract terms determine whether your AI use is defensible. Consumer-grade vs. enterprise-grade isn’t an abstract distinction. It’s the line between privileged and discoverable.

This matters more than most firms realize. Legal-specific meeting intelligence tools (Querious is one example) are built with privilege protection in mind. They commit contractually to not training on client data, provide data residency controls, and include explicit privilege protections in their agreements.

That’s a very different data handling model than a consumer AI notetaker that treats your client strategy session the same way it treats a product standup.

The “Vibe Lawyering Creates Privilege and Confidentiality Concerns” piece in Law.com (March 30, 2026) puts it well: attorneys who adopt AI tools without understanding the data handling implications are creating privilege risk they may not recognize until opposing counsel raises it in discovery.

The trust data backs this up. A March 2026 Paragon Legal study found that two-thirds of legal professionals have had to override or correct AI-generated work. Only one in five place high trust in AI output. The tools are useful, but they need informed, deliberate adoption.

What your firm should do now

ABA Formal Opinion 512 (July 2024) already established that the duty of competence extends to AI tools and the duty of supervision covers how staff use them. Meeting AI falls within that obligation.

Here’s a practical starting point.

Start by auditing what’s already running. Find out which AI meeting tools are active in your environment. Check Teams transcription settings. Ask attorneys and staff what they’ve installed. You’ll almost certainly find tools you didn’t know about.

Then review each tool’s terms of service against your privilege requirements. Read the vendor’s data handling terms. Look for provisions on data retention, model training, third-party processing, and data residency. If the terms don’t protect confidentiality, the tool shouldn’t be used for privileged communications.
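A first-pass review like this can even be partially automated. The sketch below flags a vendor’s terms-of-service text against the four risk categories above; the phrase lists are illustrative assumptions, a triage aid, not a substitute for an attorney actually reading the contract.

```python
# Illustrative triage sketch: flag risky provisions in a vendor's terms of
# service. The phrase lists are assumptions (a starting checklist), not
# legal analysis -- a hit means "read this section closely," nothing more.

RISK_PATTERNS = {
    "data retention": ["retain", "retention period", "stored after"],
    "model training": ["train", "improve our models", "improve the service"],
    "third-party processing": ["subprocessor", "third party", "affiliates"],
    "data residency": ["jurisdiction", "transfer", "outside your country"],
}

def flag_risks(terms_text: str) -> list[str]:
    """Return the risk categories whose trigger phrases appear in the text."""
    text = terms_text.lower()
    return [
        category
        for category, phrases in RISK_PATTERNS.items()
        if any(phrase in text for phrase in phrases)
    ]

# Hypothetical excerpt from a consumer notetaker's terms:
sample = (
    "We may retain meeting content to improve our models, "
    "and may share data with subprocessors."
)
print(flag_risks(sample))
# -> ['data retention', 'model training', 'third-party processing']
```

Three flags on a two-sentence excerpt is the point: consumer tools tend to trip several of these categories at once, and any single flag should disqualify the tool for privileged communications until counsel reviews the underlying clause.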

This also means verifying AI outputs, even from enterprise tools. Enterprise AI platforms reduce privilege risk, but they don’t eliminate the duty to verify what they produce. In early April 2026, the 5th Circuit sanctioned an attorney who used Thomson Reuters’ CoCounsel, an enterprise legal AI tool, because she didn’t verify the output. The 6th Circuit imposed $30,000 in sanctions for AI-generated fake citations. Two New Orleans city attorneys resigned after filing unverified ChatGPT output. Courts are losing patience, and “I used an enterprise tool” is not a defense for skipping verification.

You’ll also need a firm-wide policy. Your AI acceptable use policy should cover meeting transcription and recording tools specifically. Which tools are approved? Which are prohibited? What types of meetings can be recorded? Client-facing meetings need different rules than internal ones.
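One way to make such a policy enforceable rather than aspirational is to express it as a simple lookup that IT can check tools against. The meeting types, tool names, and rules below are hypothetical examples, not recommendations.

```python
# Hypothetical sketch of a firm AI-recording policy as a lookup table.
# Meeting types, tool names, and rules are illustrative examples only.

POLICY = {
    "client_privileged": {"recording_allowed": False, "approved_tools": []},
    "internal": {
        "recording_allowed": True,
        "approved_tools": ["Teams transcription"],
    },
    "training": {
        "recording_allowed": True,
        "approved_tools": ["Teams transcription", "Zoom AI Companion"],
    },
}

def is_allowed(meeting_type: str, tool: str) -> bool:
    """True only if this meeting type may be recorded with this tool."""
    rule = POLICY.get(meeting_type)
    return bool(
        rule and rule["recording_allowed"] and tool in rule["approved_tools"]
    )

print(is_allowed("internal", "Teams transcription"))    # -> True
print(is_allowed("client_privileged", "Otter.ai"))      # -> False
```

The design choice worth noting: unknown meeting types and unknown tools both default to “not allowed,” so anything that hasn’t been explicitly reviewed is blocked rather than quietly permitted.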

Get your IT team or MSP involved. This isn’t just a policy question. Someone needs to review your M365 tenant settings, check what’s enabled by default, and lock down the tools that don’t meet your standards. If your firm doesn’t have internal IT capacity for this, your managed services provider should be part of the conversation.

Where meeting transcription adds genuine value, look at tools built for legal use cases with appropriate contractual protections. The cost difference is usually modest compared to the privilege risk of using a consumer tool.

And think beyond meeting tools. Practice management platforms are starting to embed autonomous AI agents. Clio Work launched agentic capabilities in April 2026, moving from simple AI assistance to autonomous task execution within the practice management system. Other platforms will follow. The privilege and governance questions raised by meeting AI will soon apply to every AI-enabled system touching confidential client data. Getting this right now, starting with meeting tools, puts you in a better position for what’s coming.

The recording is already running

Nobody set out to create this problem. Meeting AI is quietly recording privileged conversations in law firms across the country, and in most cases, nobody made a conscious decision to allow it.

The fix isn’t complicated. Audit, review, set policy, enforce it. But it does require someone to take the first step.


Concerned about AI meeting tools in your firm? Artech Solutions helps Iowa law firms audit their M365 configurations and build AI governance policies that hold up. Let’s talk about it.