
The AI Already Inside Your CRM Knows More Than Your Sales Team Does

FWS Ethical AI Series — Shadow AI in Your Tech Stack, Part 2


By Holly Hartman  |  Future Workforce Systems


Your sales rep closes their laptop after a big client call. They didn't log the meeting. They didn't update the deal stage. They didn't send the follow-up email.

They didn't have to.


Something already did it for them. It read their emails, updated the opportunity record, contacted the prospect with a personalized message, and notified their manager — all without a single explicit approval.


That’s not a future scenario. That’s Salesforce Agentforce running inside the CRM your company bought three years ago — configured to act first and notify later, not ask for approval on every step.


And if your organization uses Microsoft Dynamics 365 Copilot, there’s a reasonable chance your customer data — deals, contacts, emails, pipeline details — started routing through Anthropic’s AI infrastructure in December 2025. The change was communicated primarily through release notes and admin documentation rather than direct, per-tenant alerts — which means many admins may not have realized their data was now flowing through Anthropic by default.


The AI isn't coming for your CRM. It's already in it. This is Part 2 of the FWS Shadow AI in Your Tech Stack series — and this one is personal to your revenue, your client relationships, and your contracts.



Your CRM Isn't What You Approved Anymore


When most organizations purchased their CRM, they were buying a system of record — a structured place for sales teams to log activities, track deals, and manage contacts. That tool required manual input. It didn't read your emails. It didn't make decisions. It waited for humans to tell it what to do.


That tool no longer exists. The platform still has the same logo, the same contract line item, and the same login screen. But what's running inside it today is categorically different.


Here is what the major platforms are now doing inside organizations that never explicitly approved these capabilities:

  • Salesforce Einstein/Agentforce: Predictive lead and opportunity scoring, generative email drafts, autonomous deal management, refund processing, prospect outreach, and multi-agent task execution — all via the Atlas reasoning engine


  • Microsoft Dynamics 365 Copilot: Natural language CRM queries, AI-generated work order updates, suggested email replies, and since December 2025, default processing through Anthropic Claude Sonnet


  • HubSpot Breeze: Copilot chat across CRM data, predictive lead scoring, dynamic segmentation, and AI content generation — available to all licensed users without per-feature opt-in


  • Zoho Zia: Revenue and churn predictions, data enrichment, anomaly detection, and email and call insights — embedded since 2024 via toggles


  • Pipedrive Pulse: AI engagement scoring, deal prioritization feeds, and deal summaries — included in standard pricing with no separate opt-in required


None of these required a new purchase. They arrived via platform updates, license activations, and admin toggles — bypassing the procurement and security review processes that would normally govern a new technology deployment.


According to 2025 CRM industry data, 81% of organizations now use AI-powered CRMs. Of those, 70% have integrated AI features and 65% are running generative AI specifically.


Most approved a CRM. Almost none approved an AI agent with access to every deal, email, contact, and customer communication in the system.





Let's Be Fair — The Business Case Is Real


Before we go further, let's acknowledge what's driving this adoption: CRM AI works.

According to HubSpot's 2025 research, users of AI-powered CRM features close deals 48% faster and are 83% more likely to exceed their sales goals. Sales productivity increases by 34% and sales cycles shorten by 30% in well-deployed environments. The $8.71 ROI per $1 of CRM spend is real — but only when the tools are properly configured and governed.


Here is the number that should give every C-suite leader pause: a 2026 PwC analysis found that 56% of CEOs report zero ROI from AI despite 81% adoption.


That gap is not a technology failure. It's a governance failure. Leaders who deploy CRM AI without knowing what it's doing — what data it's accessing, what actions it's taking, what it's sending to whom — are not getting the value. They're getting the liability.


The goal of this post is not to argue against CRM AI. It's to argue that you need to know exactly what yours is doing before your next client call, your next audit, or your next data incident makes you find out the hard way.



Act First, Notify Later — The Agentforce Reality


Most leaders, when they hear "AI agent," picture something that makes suggestions. A prompt. A recommendation. A draft that a human reviews before anything happens.


That is not how Agentforce works.


Agentforce uses what Salesforce calls the Atlas reasoning engine's "ReAct loop" — an autonomous reasoning system that breaks down complex goals into sequenced actions and executes them without per-step human approval. In most configurations, the effective posture is act-then-notify, not ask-then-act.


Here is what that looks like in practice — without a human approving each step:

  • A sales agent detects a stalled deal → emails the prospect with personalized nurture content → logs the activity in the CRM → updates the deal stage to Marketing Qualified → schedules a follow-up task for the rep


  • A service agent receives a support ticket → queries the customer's full history in Data Cloud → executes a Flow to issue a refund → updates the case status → notifies the billing system via MuleSoft


  • An operations agent identifies resource gaps → creates a project plan in Salesforce Projects → assigns team members → tracks progress autonomously



Every one of those actions happened without a human reviewing and approving that specific step. The rep gets a notification after the fact. The client gets an email that no human read before it went out.


The access footprint compounds this risk. Agentforce can reach full CRM objects (contacts, opportunities, activities), plus Data Cloud, connected applications via MuleSoft and APIs, and in some configurations system-level metadata and logs that most sales reps themselves never see.


And through Model Context Protocol integration, Agentforce supports multi-agent interoperability: a sales agent delegates to a pricing agent, which delegates to a contract agent, creating invisible data flows across organizational boundaries that no single human fully understands or oversees.


In September 2025, the ForcedLeak vulnerability (CVSS 9.4) demonstrated exactly why this matters from a security standpoint: attackers used prompt injection via Web-to-Lead forms to hijack Agentforce agents and exfiltrate CRM data autonomously. The attack surface existed and expanded precisely because agents chain actions without human review at each step.


The governance question this raises is not technical. It's organizational: if an agent sends the wrong email to a client, issues an unauthorized refund, or updates a deal record incorrectly — who approved it? Who is accountable? What is the audit trail? And does your client know their interaction was handled by an autonomous system?



The Feature Nobody Knows Is Running — Einstein Activity Capture


If you use Salesforce with an Einstein license and your sales team has connected their Outlook or Gmail accounts, there is a high probability that a feature called Einstein Activity Capture is running in your organization right now. Most sales reps don't know it's on. Most sales leaders don't know what it captures. And most IT teams haven't reviewed it since it activated.



Here is what it does:

Einstein Activity Capture reads emails from a connected work account and automatically logs them into Salesforce as an Activity, Task, or Event — without any manual entry by the rep. It captures the full email body, sender and recipient information, timestamps, calendar events, meeting attendees, and metadata about attachments.


Once an admin has enabled Einstein Activity Capture for the org and a rep connects their email account, it runs continuously in the background; reps don't flip a separate switch per email. There is no per-message opt-in.


And here is what almost no one in leadership realizes: it captures all emails from the connected account — not just sales emails. There is no content filter. No category exclusion. No way for an employee to mark a thread as private.

 

What Einstein Activity Capture Is Logging Right Now


  • HR correspondence sent from or received on a work email address

  • Legal discussions and attorney communications flowing through a work account

  • Early-stage M&A conversations involving external parties

  • Sensitive negotiation threads with clients or vendors

  • Personal replies sent from a work email address

  • Performance and compensation discussions conducted over email

  • Any email a sales rep assumed was private because they didn't log it manually

 

The data flows from Gmail or Outlook into Einstein's cloud processing infrastructure, where it is processed by Einstein LLMs for relationship mapping and deduplication before being stored in Salesforce Activities. This means your email content passes through AI processing before it ever reaches your CRM record.


When you disconnect Einstein Activity Capture, the historical data does not automatically delete. Every email that was captured during the active period remains in Salesforce and in Einstein's data store until explicitly removed — a gap that most organizations never close because nobody knew to look for it.


Trailblazer community forums contain numerous user reports of "surprise data sync" — employees discovering that sensitive email threads they never intended to log in Salesforce had been captured and were now visible to their managers and colleagues. Salesforce's 2025 privacy reviews flagged the lack of granular controls as a concern, but the feature remains auto-enabled by default.



The Default Settings Your IT Team Probably Hasn't Reviewed


Here is where the four major CRM platforms stand on defaults, settings, and data handling as of 2026:

 

Salesforce Agentforce — 🚩 Review

Key setting: Setup → Einstein/Agentforce pages

What leaders need to know: Act-then-notify default. Einstein Trust Layer PII masking is configurable but OFF by default. Agents start with zero permissions but expand via Setup — most orgs don't audit what's been enabled.


Microsoft Dynamics 365 Copilot — 🚩 Review

Key setting: Sales Admin → Copilot AI → Model Selection

What leaders need to know: ON by default for new tenants. Dec 2025: quietly switched to Anthropic Claude Sonnet as default LLM — customer CRM data now routes through Anthropic unless an admin changed it back. No tenant notification was sent.


HubSpot Breeze — ✅ Safe

Key setting: Trust Center → AI Privacy

What leaders need to know: Explicit commitment: "We don't use your HubSpot data to train our models." Applies across all tiers. Zero-retention agreements with OpenAI and Anthropic subprocessors. This is what good looks like.


Zoho Zia — ⚠️ Verify

Key setting: CRM Settings → toggles

What leaders need to know: Toggle-activated, with no explicit public training policy. Requires direct vendor inquiry before deployment in regulated environments.

 

The Microsoft Dynamics situation deserves a specific callout. In December 2025, Microsoft silently switched the default language model for Dynamics 365 Copilot to Anthropic Claude Sonnet for North American and EU tenants.


This means any Copilot interaction — a natural language search, a summary request, a suggested reply — now sends your customer CRM data to Anthropic's infrastructure for processing. Anthropic maintains a zero-retention policy, meaning data is deleted post-response. But the default change happened without per-tenant notification, and GDPR Schrems II concerns apply for EU organizations sending data to US-based infrastructure without explicit SCCs in place.


To switch back: Admin Center → Sales Settings → Copilot AI → Model Selection. It takes approximately 30 minutes to propagate.



The Risk Nobody Is Talking About — Your Customer's Data


Almost every conversation about shadow AI in CRM platforms focuses on organizational risk — your data, your employees, your security posture. That framing misses the most significant exposure.


CRM systems hold predominantly your customers' data. Their contact information. Their deal history. Their communications with your team. Their pipeline details. Their PII. When AI features inside your CRM process that data, you don't just have an internal governance problem. You have a customer trust problem, a contractual problem, and potentially a regulatory problem.


Most enterprise Master Service Agreements — the contracts you signed with your clients — include language prohibiting "subprocessing without consent" or "sharing data with third parties without approval." AI features running inside your CRM may be out of step with what those agreements permit — especially if no one has revisited them since AI capabilities were added to the platform.


The legal framework compounds this:

  • Under GDPR, AI processing of customer PII requires an explicit legal basis — consent, contract, or legitimate interest. Cross-border transfer to US-based AI vendors requires Standard Contractual Clauses. Processing without either is potentially unlawful, regardless of whether your CRM vendor technically permits it in their terms.

  • Under CCPA, if AI vendors receive customer PII for any form of processing or analysis, opt-out rights may apply — and your customers may have rights you haven't disclosed to them.

  • The 2025 European Data Protection Board guidance specifically flagged "AI subprocessors without a completed Data Protection Impact Assessment" as high-risk, which describes the situation at most organizations running CRM AI today.


The disclosure gap is the most immediate liability. Most companies do not tell their customers that their data is being processed by AI inside the CRM. B2B contracts are increasingly including explicit "AI disclosure" clauses post-2025, and companies that haven't updated their agreements are exposed.


The reputational frame matters too. If your enterprise clients discovered that their pipeline data, email communications, and contact information were being processed by Agentforce or routed through Anthropic without their knowledge — would that violate their trust? In many relationships, yes. Would it create contractual questions? In a significant number of cases, very likely.

 

The Question Your Legal Team Needs to Answer This Week


  • Pull three of your most significant enterprise client MSAs

  • Search for language around 'subprocessing,' 'third-party AI,' 'data processing,' and 'consent'

  • Compare that language against the AI features currently active in your CRM

  • If there is a gap — and there almost certainly is — that is a conversation you want to have proactively, not reactively
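If your MSAs live as exported text, the keyword search in the checklist above can be scripted as a first-pass triage. A minimal sketch — the keyword list and sample text are illustrative, and a keyword hit (or the absence of one) is a flag for counsel to review, not a legal conclusion:

```python
import re

# Clause keywords suggested by the legal checklist above (illustrative list).
CLAUSE_KEYWORDS = [
    "subprocess",       # catches "subprocessing" and "subprocessor"
    "third-party ai",
    "third party",
    "data processing",
    "consent",
]

def find_clause_hits(contract_text, keywords=CLAUSE_KEYWORDS):
    """Return {keyword: [matching sentences]} for a quick triage pass.

    This is a keyword scan, not legal analysis — every hit, and every
    absence of a hit, still needs review by counsel.
    """
    # Naive sentence split on ./; boundaries; good enough for triage.
    sentences = re.split(r"(?<=[.;])\s+", contract_text)
    hits = {}
    for kw in keywords:
        matched = [s.strip() for s in sentences if kw.lower() in s.lower()]
        if matched:
            hits[kw] = matched
    return hits

if __name__ == "__main__":
    sample = (
        "Vendor shall not engage any subprocessor without Client's prior "
        "written consent. Data processing is limited to the purposes set "
        "out in Exhibit B."
    )
    for kw, matched in find_clause_hits(sample).items():
        print(f"[{kw}] -> {len(matched)} clause(s)")
```

Run it across all three MSAs and compare the hit list against the AI features your admin audit turns up; the gaps between the two lists are the conversations to have proactively.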


 

Three Sets of Questions to Ask Right Now


Questions to Ask Yourself as a Leader

  • Do I know which AI features are currently active inside our CRM — not just what we purchased, but what activated via updates or license changes in the last 24 months?

  • Has anyone reviewed our CRM contract and data processing agreement for AI processing clauses since 2023?

  • Does our Salesforce instance have Einstein Trust Layer PII masking configured — or is it still at default?

  • Are we running Agentforce agents? If so, do we have human review checkpoints before any client-facing autonomous action?

  • Do our customer MSAs permit AI subprocessing of their data — and have we checked recently?

  • If we're on Microsoft Dynamics, has someone verified which LLM is processing our data today?


Questions to Ask Your Sales and Operations Teams

  • Do you know that Einstein Activity Capture may be logging every email from your connected work account into Salesforce — including emails you never intended to be there?

  • Has an Agentforce agent ever sent a communication to a client or prospect without you explicitly approving that specific message before it was sent?

  • Have you ever found a deal update, contact change, or activity log in the CRM that you didn't create yourself?

  • Do you know which AI features are currently active in your CRM dashboard — and which ones are making decisions on your behalf?

  • Are you aware of any client who has asked about AI use in how their account is managed?


Questions to Ask Your CRM Vendor

  • Which AI features are currently active in our instance — by default and by configuration — and what changed in the last 12 months?

  • Is Einstein Activity Capture running in our org? If so, what email accounts are connected and what is captured?

  • Is Einstein Trust Layer PII masking enabled in our org, and what is the factory default?

  • What LLM is currently processing our data — and did that change at any point without our explicit approval?

  • Does our data — or our customers' data — get used to train any models, yours or your subprocessors'?

  • Who are your AI subprocessors, what data do they receive, and what are their retention policies?

  • What notification process do you follow when default settings or data processing arrangements change?



Governance Starts With Knowing What's Already Running


The most important shift in this post is not technical. It's a mindset shift.

The CRM governance conversation used to be about access controls — who can see what data, who can edit which records. That conversation is still necessary. But it is no longer sufficient.


The new governance conversation is about autonomous action: what is the AI allowed to do without a human approving each step, on behalf of your organization, touching your customers' data, under your contractual obligations.


From guardrails to governance means not waiting for the audit, the client complaint, or the breach notification to understand what your CRM AI is actually doing.


Here are three things any leader can do this week:

  1. Ask your Salesforce admin to pull a full list of active Einstein and Agentforce features in your org — including anything that has been enabled in the last 24 months

  2. Check whether Einstein Activity Capture is running and review which email accounts are connected. If it is running, find out what data has been captured and whether it includes any threads that should not be in your CRM

  3. If you are on Microsoft Dynamics 365, ask your admin to verify which LLM is currently processing your Copilot queries and confirm whether the December 2025 Anthropic default was changed back
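The three checks above produce facts that need a home. One lightweight option is a feature register that flags anything default-enabled or overdue for review — a sketch under illustrative assumptions (the feature entries and the 365-day review window are examples, not pulled from any actual org):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIFeature:
    name: str
    platform: str
    enabled_by_default: bool        # arrived via update/toggle, not procurement
    last_reviewed: Optional[date]   # None = never reviewed

def needs_review(f, today=date(2026, 3, 1), max_age_days=365):
    """Flag features that were default-enabled or are overdue for review."""
    if f.last_reviewed is None:
        return True
    overdue = (today - f.last_reviewed).days > max_age_days
    return f.enabled_by_default or overdue

# Illustrative entries only — populate from your own admin audit.
register = [
    AIFeature("Einstein Activity Capture", "Salesforce", True, None),
    AIFeature("Copilot Model Selection", "Dynamics 365", True, date(2026, 1, 15)),
    AIFeature("Breeze Copilot", "HubSpot", False, date(2025, 9, 2)),
]

flagged = [f.name for f in register if needs_review(f)]
print(flagged)  # default-enabled or never-reviewed features surface first
```

The point of the register is not the code — a spreadsheet works too — but the discipline: every AI feature gets a row, a default-on flag, and a review date, so "what is running in our CRM" has a single answer.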

 

Being AI-ready doesn't mean having the most features activated. It means knowing which features are running, what they're doing with your data and your customers' data, and having the governance infrastructure to manage what happens when they act without asking first.


The shred pile used to protect you. Governance is the only thing that does that job now.


This is Part 2 of the Shadow AI in Your Tech Stack series. Part 1 covered AI notetakers. Part 3 coming soon: The AI features hiding inside your Microsoft 365 suite — and why 'we have Copilot' is not the same as 'we have governance.'

 

Holly Hartman is the founder of Future Workforce Systems (FWS), an AI governance and workforce readiness consultancy. FWS helps mid-to-large enterprises move from AI-anxious to AI-ready — and from guardrails to governance — always through an Ethical AI lens. Because how we adopt AI matters as much as whether we adopt it. Learn more at futureworkforcesystems.com.




© 2026 Future Workforce Systems · Holly Hartman. All rights reserved. 
These tools are for personal use and professional development only.
Reproduction, redistribution, or use in paid offerings without written consent is not permitted.


To license or adapt tools for your team or program, contact us: contact@futureworkforcesystems.com
