The Bot in the Room
- Holly Hartman
- Mar 19
Updated: Mar 28
What Your AI Notetaker Is Doing With Your Most Sensitive Conversations

FWS Ethical AI Series — Shadow AI in Your Tech Stack, Part 1
There was a time when sensitive conversations had a natural life cycle. The meeting ended, the notes were shredded, and the room held its secrets. Strategy discussions, HR concerns, legal deliberations, early M&A conversations — they lived and died in the room.
That era is over. And most leaders don't know it yet.
Today, an uninvited participant joins your most sensitive meetings — one that takes perfect notes, generates a summary, and instantly syncs that content to Salesforce, Slack, Notion, and email. Nobody hired them through procurement. Nobody ran them through legal. And in most organizations, nobody even knows they're there.
That's your AI notetaker. And everything you would have shredded after that meeting is now captured, stored, and possibly training someone else's AI model.
This is Part 1 of the FWS Shadow AI in Your Tech Stack series — a practical guide for enterprise leaders on the AI tools already operating inside your organization, outside your governance. We're starting here because notetakers are the most pervasive, most overlooked, and most immediately actionable form of shadow AI in the enterprise today.
They're Already Everywhere

According to a 2026 survey by Fellow, 75% of professionals now use an AI notetaker in their work meetings. Not a fringe tool. Not an experiment. Core workplace infrastructure — adopted faster than most organizations have had a policy conversation about it.
A Metrigy study from 2025–26 found that 40% of companies have formally deployed AI meeting assistants, with another 42% planning to within the year. Together, that points to near-total market penetration within 24 months.
The tools driving this adoption include:
Standalone tools employees bring in themselves: Otter.ai, Fireflies.ai, Fathom, Read.ai, tl;dv, Grain, Notta, MeetGeek
Platform-embedded tools that activate inside already-approved software: Zoom AI Companion, Microsoft Copilot meeting summaries, Notion AI, Slack AI, Google Meet transcription
That second category is where the real governance gap lives. These tools didn't require a new purchase or a new login. They activated inside platforms your team already uses — often via an admin toggle, a user setting, or a vendor update — without triggering any security or procurement review.
And the standalone tools? Most are connecting through personal Google or Microsoft calendar accounts, meaning bots auto-join corporate meetings, client calls, and HR conversations with zero IT visibility and zero policy coverage.
Let's Be Fair — They're Genuinely Useful

Before we go further, let's acknowledge what's real: AI notetakers solve a genuine problem.
A 2026 analysis by Sonix found that 62% of users save more than four hours per week from AI transcription — roughly a full month of working hours recaptured per knowledge worker, per year. Companies using AI meeting transcription report a 25% reduction in meeting time and a 30% increase in meeting productivity.
The qualitative gains are real too: participants engage more actively when they're not taking notes. Action items are captured accurately. Long-running projects maintain continuity. Onboarding improves when new team members can search through meeting history rather than relying on tribal knowledge.
A well-governed AI notetaker can return 3–5 hours per week to mid-level managers and create a high-fidelity institutional memory. The goal of this post is not to argue for banning them. It's to argue that you need to know what they're actually doing — because right now, most leaders don't.
What's Actually Happening to Your Data
Here is the data journey most leaders have never seen mapped out:
1. Audio is captured, either by a bot joining as a meeting participant or by local recording on the user's device
2. The audio is uploaded to the vendor's cloud for processing
3. An LLM generates a transcript, summary, and action items
4. Outputs are synced automatically to connected platforms: Salesforce, HubSpot, Slack, Notion, email
5. All of it is stored in the vendor's cloud, often indefinitely
That's potentially five or six copies of your most sensitive conversations living in systems outside your walls — and that's before anyone has forwarded the summary email or pasted the transcript into another AI tool.
The personal account problem makes this significantly worse. When employees use personal notetaker accounts for work meetings, there is no central admin console, no SSO, no ability to enforce retention policies, and no off-boarding controls. When that employee leaves your organization, their personal account — and every transcript from every meeting they ever recorded — goes with them. Or stays in the vendor's cloud. Either way, you have no visibility and no recourse.
There's also a downstream data gap nobody talks about: even enterprise contracts that include data deletion provisions almost never result in organizations auditing whether the synced copies in Slack, in Salesforce, and in email threads were also removed. That data lives on.
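To make that downstream gap concrete: a complete deletion audit needs one line item per copy from the data journey above, not just the vendor's primary store. Here is a minimal sketch of that checklist logic, with purely illustrative names (it models the copy trail; it does not call any vendor's API):

```python
# Minimal sketch of a deletion-audit checklist for one recorded meeting.
# All names (MeetingRecord, deletion_checklist, ...) are illustrative --
# this models the copy trail, it does not call any vendor's API.

from dataclasses import dataclass, field

@dataclass
class MeetingRecord:
    title: str
    vendor: str                                        # where audio/transcript live
    sync_targets: list = field(default_factory=list)   # e.g. Slack, Salesforce
    forwarded_to: list = field(default_factory=list)   # summary emails, pastes

def deletion_checklist(rec: MeetingRecord) -> list:
    """Every location that must be audited for a deletion to be complete."""
    checklist = [
        f"{rec.vendor}: raw audio",
        f"{rec.vendor}: transcript + summary",
    ]
    checklist += [f"{t}: synced summary" for t in rec.sync_targets]
    checklist += [f"{dest}: forwarded copy" for dest in rec.forwarded_to]
    return checklist

rec = MeetingRecord(
    title="Q3 M&A pre-read",
    vendor="notetaker cloud",
    sync_targets=["Salesforce", "Slack", "Notion"],
    forwarded_to=["email thread"],
)
for item in deletion_checklist(rec):
    print(item)
```

Six locations for one meeting, and the vendor's contractual deletion covers only the first two. That asymmetry is the audit gap.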
The stakes climb sharply in regulated settings: healthcare conversations can contain protected health information, legal discussions risk waiving privilege, and HR and M&A meetings carry disclosure exposure of their own.
The Training Data Default Nobody Is Talking About
Here is the most important thing in this post, and the thing almost no one in the C-suite knows: most standalone AI notetakers default to using your conversation data to train their models. Your board meeting, your acquisition discussions, your performance review conversations — they may already be part of an external model's training set.
The 2025 Otter.ai class-action lawsuit filed in California federal court alleged exactly this: that Otter records users without proper consent in two-party consent states and uses those recordings to train its AI models without adequate disclosure. The complaint argues that even 'de-identified' voice and transcript data can be re-identified — and that using meeting data for model training constitutes non-consensual surveillance. This is active litigation, not a hypothetical.
Harvard University moved in February 2025 to restrict AI meeting assistant use in university business, specifically citing training data and consent concerns. Universities and enterprises across regulated industries are following suit.
Here is where the major platforms stand as of 2026:
| Platform | Trains on your data? | Notes |
| --- | --- | --- |
| Otter.ai | 🚩 Yes / review | Default opt-in for free/basic tiers. 2025 class action over undisclosed training. Enterprise opt-out possible but requires negotiation. |
| Read.ai | 🚩 Yes / review | Previously opted users in by default. Policies updated post-backlash but criticized as vague. University bans followed. |
| Notta (free tier) | 🚩 Yes / review | Free tier defaults to training on transcripts. Paid/enterprise versions offer opt-out. |
| Fathom | ✅ No | No training on user data. Positioned as privacy-first. |
| Fireflies.ai | ✅ No | Explicit commitment: meeting content "never used to train any AI models." Third-party vendors have zero data retention. |
| Zoom AI Companion | ✅ No | Zoom states no training on customer data. Admin-controlled with enterprise visibility. |
| Microsoft Copilot | ✅ No | Microsoft policy: customer data not used to train public models. Enterprise admin center controls. |
| tl;dv | ✅ No | Trust Center commits to no training. Isolated per-customer. SOC 2 enforced. |
| Grain | ✅ No | No training on data. Enterprise-focused with data isolation. |
The best practice — right now, before you finish reading this — is to assume every notetaker tool in your organization is training on your data until you verify otherwise. Look for toggles labeled "data privacy," "model improvement," or "product improvement" in account or admin settings. Then mandate enterprise accounts with contractual "no training" clauses for any tool you approve going forward.
The Legal Reality Most Organizations Are Ignoring
In the United States, the federal Wiretap Act sets a one-party consent baseline: a conversation can lawfully be recorded as long as one participant consents. But multiple states require all-party consent, including California, Florida, Illinois, Maryland, Massachusetts, Pennsylvania, and Washington. If even one participant is calling in from an all-party consent state, you may need consent from everyone on the call — including the client or job candidate who had no idea an AI bot joined the meeting.
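The rule composes simply: the strictest jurisdiction on the call wins. A minimal sketch of that logic, mirroring the state list above (an illustration of how the rule composes, not legal advice, and the state set is not exhaustive):

```python
# Illustrative only -- not legal advice. The strictest jurisdiction
# represented on the call determines the consent requirement.

ALL_PARTY_CONSENT_STATES = {
    "CA", "FL", "IL", "MD", "MA", "PA", "WA",  # per the list above; not exhaustive
}

def consent_required_from(participant_states):
    """Return 'all parties' if any participant is in an all-party consent
    state, otherwise 'one party' under the federal baseline."""
    if any(s in ALL_PARTY_CONSENT_STATES for s in participant_states):
        return "all parties"
    return "one party"

print(consent_required_from(["NY", "TX"]))  # federal one-party baseline applies
print(consent_required_from(["NY", "CA"]))  # one Californian: everyone must consent
```

Note the asymmetry: a single remote participant can flip the requirement for the entire meeting, which is why per-meeting announcements are safer than per-state policies.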
Under GDPR, recording constitutes processing of personal data. Routing that data to a U.S.-based vendor's cloud without a valid legal basis or adequate safeguards can trigger enforcement. European privacy experts have begun framing AI notetakers not just as recording tools but as continuous workplace surveillance infrastructure — because a tool configured to join every virtual interaction can analyze participation, sentiment, and behavior patterns over time.
The Otter.ai lawsuit is the first major litigation in this space, but it won't be the last. The legal infrastructure is catching up to the technology faster than most enterprise legal teams realize.
Three Sets of Questions You Should Be Asking Right Now
Governance starts with asking the right questions. Use these with your leadership team, your staff, and your vendors.
Questions to Ask Yourself as a Leader
Do I know which AI notetakers are joining my organization's meetings right now — including ones employees connected through personal calendars?
Have we reviewed where those transcripts are stored and who owns them contractually?
Do we treat board meetings, HR conversations, legal matters, and M&A discussions differently from general team meetings?
Are we using personal accounts or enterprise-licensed tools — and do we actually know the difference in risk?
Have legal, HR, and compliance been looped in on this conversation?
Questions to Ask Your Teams
Are you using a personal notetaker account or a company-sanctioned tool?
Do you announce to all participants — including external guests — that AI is recording before the meeting starts?
Do you know where your meeting summaries are being sent after the call ends?
Have you ever pasted a meeting transcript into another AI tool for follow-up analysis or drafting?
Do you know whether the notetaker tool you're using is training on your conversations?
Questions to Ask Your Vendors
Where is our data stored, in what country, and on whose infrastructure?
Is our conversation data used to train your models — by default, or at all?
Where exactly is that setting located, and what is the out-of-the-box default?
What happens to our data if we cancel our subscription or an employee leaves?
Do you have a BAA (business associate agreement), DPA (data processing agreement), or SOC 2 Type II report available — and does it cover all your AI subprocessors?
Who are your AI subprocessors, and what are their data retention policies?
Governance, Not Prohibition
Banning AI notetakers is the wrong answer. Research consistently shows that bans drive AI adoption underground — employees simply migrate to personal accounts and unapproved tools, leaving you with the same risk and zero visibility.
A 2026 analysis of 22 million enterprise AI prompts found that prohibition doesn't eliminate shadow AI — it just makes it invisible.
The right answer is to treat AI notetakers for what they actually are: recording infrastructure. Not a productivity app. Not a browser extension. Recording infrastructure — with all the governance, legal review, and policy rigor that implies.
Here are three things any leader can do this week:
Inventory: Ask IT to identify which AI notetakers are currently connected to your organization's calendar systems. This is your starting point — you cannot govern what you cannot see.
Audit settings: For every notetaker tool currently in use, locate the training data settings and verify the default. Assume opt-in until you confirm otherwise.
Define your no-record zones: Decide by policy which meetings — board, HR, legal, M&A, union — are notetaker-free. Document it. Communicate it. Enforce it.
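For the inventory step, a practical starting point is an OAuth-grant export from your identity provider; Google Workspace token reports and Microsoft Graph's oauth2PermissionGrants both expose app names and granted scopes. The sketch below filters made-up sample records for calendar access and known notetaker names. The record shape and helper names are assumptions for illustration, not any vendor's schema:

```python
# Hypothetical sketch of the inventory step. Real input would be an
# OAuth-grant export from your IdP; the records below are made-up samples
# and the field names are assumptions, not any vendor's schema.

KNOWN_NOTETAKERS = {"otter", "fireflies", "fathom", "read.ai", "tl;dv",
                    "grain", "notta", "meetgeek"}
CALENDAR_SCOPE_HINTS = ("calendar",)  # substring match on granted scopes

def flag_notetaker_grants(grants):
    """Flag third-party apps holding calendar access; mark known notetakers."""
    flagged = []
    for g in grants:
        has_calendar = any(
            hint in scope.lower()
            for scope in g["scopes"]
            for hint in CALENDAR_SCOPE_HINTS
        )
        if not has_calendar:
            continue
        known = any(name in g["app_name"].lower() for name in KNOWN_NOTETAKERS)
        flagged.append({**g, "known_notetaker": known})
    return flagged

sample = [
    {"app_name": "Otter.ai", "user": "a@corp.example",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
    {"app_name": "Figma", "user": "b@corp.example",
     "scopes": ["files.read"]},
]
for hit in flag_notetaker_grants(sample):
    print(hit["app_name"], "- known notetaker:", hit["known_notetaker"])
```

Any calendar-scoped app you don't recognize deserves a second look even if it isn't on the known-notetaker list; the list only accelerates triage, it doesn't close the inventory.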
Being AI-ready doesn't mean using every tool available. It means knowing which tools you're actually using — and governing them with the same intentionality you'd apply to any other system that records, stores, and processes your organization's most sensitive information.
The shred pile used to protect you. Now you need governance to do that job.
This is Part 1 of the Shadow AI in Your Tech Stack series.
Next up: the AI features hiding inside your CRM and how your sales data is moving in ways your team never authorized.
Holly Hartman is the founder of Future Workforce Systems (FWS), an AI governance and workforce readiness consultancy. FWS helps mid-to-large enterprises move from AI-anxious to AI-ready through ethical AI frameworks, governance strategy, and practical implementation support. Learn more at futureworkforcesystems.com.


