
The Shadow AI Hiding in Your Browser

What Your Team Installed in 30 Seconds Is Reading Everything on Their Screen


FWS Ethical AI Series — Shadow AI in Your Tech Stack, Part 4

By Holly Hartman | Future Workforce Systems



There is a category of AI tool that almost never comes up in governance conversations. It doesn’t require a vendor contract. It doesn’t appear in your SaaS inventory. It doesn’t trigger a procurement review. And it can see everything your employee sees in their browser — every email they open, every document they draft, every client record they pull up, every form they fill out.


It takes about 30 seconds to install. Most employees don’t think twice about it.

That’s your browser extension problem. And in 2026, it has become one of the most urgent and least-governed AI surfaces in the enterprise.


This is Part 4 of the FWS Shadow AI in Your Tech Stack series. Parts 1, 2, and 3 covered AI notetakers, CRM AI features, and Microsoft 365 Copilot. This one is different. Those tools required at least some organizational decision to deploy.


Browser extensions often don’t. They live at the edge of your governance — installed by individuals, invisible to IT, and carrying permissions that would alarm most legal and compliance teams if they ever read them.



The Scale of the Problem Nobody Is Watching


The numbers from the LayerX 2025 Enterprise Browser Extension Security Report should be required reading for every CISO, general counsel, and risk officer at an enterprise organization:

  • 99% of employees have browser extensions installed

  • 52% have more than 10 extensions

  • 53% of enterprise users’ extensions can access sensitive data — cookies, passwords, page contents, and browsing information

  • More than 20% of enterprise users have at least one GenAI extension

  • 51% of extensions have not been updated in over a year

  • 26% are sideloaded — meaning they bypassed the official store review entirely


Let that land. More than half of your employees’ browser extensions can access sensitive data right now. And more than one in five employees already has an AI tool running in their browser that IT almost certainly didn’t approve.


This is not a fringe risk. It is baseline enterprise exposure — and most organizations have no visibility into it at all.



What Browser Extensions Can Actually See


To understand why this matters, you need to understand how browser extensions work at a permissions level. When an employee installs an extension, it declares permissions in its manifest — a list of what it is allowed to access. Chrome’s permission model includes grants such as:


  • tabs — exposes privileged tab data including URLs, titles, and navigation

  • cookies — provides access to browser cookies, including session tokens

  • history — exposes the full browsing history

  • clipboardRead — can read anything the user has copied to their clipboard

  • clipboardWrite — can write to the clipboard

  • webRequest — can observe or alter network requests

  • scripting — can inject JavaScript into web pages

  • desktopCapture — can capture screen content

  • tabCapture — can capture tab audio and video


The critical distinction is between what an extension requests and what it actually uses. A well-designed extension may request broad access but only use a small subset of it. The risk case is the reverse: an extension that requests broad permissions and then uses them to read sensitive page content, session cookies, form data, or clipboard content — whether by design or because a bad actor has compromised it.
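A minimal audit of that requested surface can be sketched in a few lines. The snippet below is illustrative: it checks a parsed manifest against the high-risk permissions listed above, plus broad host access (the Manifest V3 `host_permissions` key), which lets an extension's content scripts read any page. The example manifest is hypothetical, not a real extension.

```python
# Permissions from the list above that expose sensitive data or enable
# capture/injection. Illustrative, not exhaustive.
HIGH_RISK = {
    "tabs", "cookies", "history", "clipboardRead",
    "webRequest", "scripting", "desktopCapture", "tabCapture",
}

def audit_manifest(manifest: dict) -> list[str]:
    """Return the high-risk grants an extension's manifest declares."""
    declared = set(manifest.get("permissions", []))
    # Manifest V3 moves host access into its own key; flag broad patterns
    # too, since they let content scripts read any page the user visits.
    hosts = manifest.get("host_permissions", [])
    flags = sorted(declared & HIGH_RISK)
    if any(h in ("<all_urls>", "*://*/*") for h in hosts):
        flags.append("broad host access (<all_urls>)")
    return flags

# Hypothetical AI-sidebar extension manifest for illustration.
example = {
    "name": "Example AI Sidebar",
    "manifest_version": 3,
    "permissions": ["tabs", "scripting", "storage"],
    "host_permissions": ["<all_urls>"],
}
print(audit_manifest(example))
# ['scripting', 'tabs', 'broad host access (<all_urls>)']
```

Note that this only shows what an extension *can* do, which is exactly the point of the requested-versus-used distinction: a manifest audit finds the risk surface, not the behavior.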


And here is the part that should stop every leader in their tracks: AI browser extensions are valuable precisely because they can observe what the user is doing. An AI writing assistant embedded in your browser helps because it can see what you’re drafting. An AI summarizer works because it can read the page. An AI sidebar assistant responds to your questions because it has context about what’s on your screen. The capability that makes these tools useful is the same capability that creates the governance risk.



The Tools Your Team Is Already Using


The most widely discussed AI browser extensions in enterprise environments in 2025 and 2026 include tools your employees are almost certainly already using. Here is a practical risk overview:


  • Grammarly (writing, email) — Accesses user content, usage data, and account info. Privacy policy updated March 2026: uses information to train AI models unless controls are adjusted. Review enterprise settings carefully.

  • Microsoft Copilot in Edge (summarize, draft, search) — Processes browsing context and page info when permission is granted. Enterprise data protection applies when signed in with Entra ID. More governed than third-party extensions — but still processes browser context and is increasingly agentic in capability.

  • Sider (AI sidebar, page Q&A) — Broad permission patterns are common in this category. Page content and browser context are accessed for summarization and chat. Frequently marketed as a browser-side AI copilot.

  • Monica (AI sidebar, browser operator) — Accesses browser context and page content for assistant features. The ‘Browser Operator’ framing signals action-oriented use: it can take actions, not just read content.

  • Compose AI (autocomplete, drafting) — Embedded in drafting workflows; accesses typed text and page/form context. Sensitive text exposure is more likely because it operates inside the writing process.

  • QuillBot (rewrite, paraphrase) — Text is submitted for rewriting, and page content is accessed when used inline. Flagged in current privacy-risk reporting alongside Grammarly.

  • ChatGPT helpers (summarize, rewrite, chat) — Access to page content, selected text, prompts, and browser context depends on the specific extension and its permissions. Capabilities vary widely by vendor, so policy should be extension-specific, not brand-specific.


The pattern across all of these tools is consistent: the more useful they are, the more they need to see. The governance question is not whether these tools are good or bad. It is whether your organization has made a conscious, documented decision about which ones are operating in your environment and under what conditions.


The Incidents That Prove This Is Not Theoretical

In early 2026, security researchers uncovered a coordinated campaign of more than 30 malicious Chrome extensions impersonating AI assistants — ChatGPT-style sidebars, Google Gemini-branded helpers, and AI writing tools — distributed through the official Chrome Web Store. Combined install counts reached hundreds of thousands of users. These extensions stole Gmail credentials, chat histories, and sensitive data, then transmitted it to attacker-controlled infrastructure. Google confirmed removal of all extensions after researchers exposed them. (Fox News, Dark Reading, Paubox)


A separate 2026 report from Astrix Security identified two Chrome Web Store extensions that appeared to be legitimate AI sidebar tools but were secretly harvesting users’ ChatGPT and DeepSeek conversation histories — capturing data every 30 minutes and exfiltrating it to external infrastructure. One carried Google’s ‘Featured’ badge. Combined installs: approximately 900,000 users. Google confirmed both extensions were removed following the report.


Microsoft Defender research published in March 2026 described a family of malicious AI-assistant extensions collecting LLM chat histories and browsing data across ChatGPT and DeepSeek. These extensions reached approximately 900,000 users and were active across more than 20,000 enterprise tenants — because those enterprises allowed employees to install AI helper tools without IT review or approval. Google removed the extensions following Microsoft’s disclosure.


A coordinated campaign called AiFrame involved 32 browser extensions advertised as AI assistants for summarization, chat, and Gmail assistance. Over 260,000 installs. The extensions collected credentials, personal data, and email content using remote iframes to hide their data-harvesting logic from store review. Google confirmed removal after security researchers correlated domains, certificates, and code across all 32 listings.


The thread connecting all of these incidents is the same: AI-branded extensions carry implicit trust. Employees install them without scrutiny. Organizations don’t inventory them. And by the time the exposure is discovered, the data has already left the building.



Why This Is Harder to Govern Than Other Shadow AI

The governance gap in browser extensions is structurally different from the gaps in notetakers, CRM AI, or Microsoft Copilot. Those tools required at least some organizational touchpoint — a vendor relationship, a license purchase, an admin toggle. Browser extensions bypass all of that.


A SaaS application is usually visible in SSO logs, vendor management systems, and security reviews. A browser extension can be added in seconds from the Chrome Web Store, appear only in the individual’s browser profile, and inherit the user’s active session and permissions — with zero footprint in the systems IT uses to monitor enterprise software.


The governance gap widens significantly when extensions have agentic features. Reading page content is a passive risk. But extensions that can click, fill, submit, or modify workflows — the ‘browser operator’ category growing rapidly in 2026, including first-party tools like Microsoft Copilot in Edge — shift the risk from data exfiltration to unauthorized action. That is exponentially harder to detect through ordinary DLP or CASB tooling.


And the sideloading problem makes inventory nearly impossible without active controls. An extension installed outside the official store — which 26% of enterprise extensions are, according to LayerX — bypasses even the minimal vetting the store provides. It exists entirely outside any governance framework until IT actively looks for it.



What Regulated Industries Need to Know Right Now

For finance, healthcare, legal, and HR teams, the browser extension risk is not abstract. These teams handle the most sensitive data in your organization through browser-based systems — and browser extensions can see all of it.

  • Finance — Account numbers, customer records, payment data, advisory communications, and internal reports are all potentially visible to an extension with page-content or clipboard access. Session cookies create hijacking risk. Any extension active during financial workflow sessions should be treated as having access to everything on screen.

  • Healthcare — Patient-facing portals, EHR web interfaces, claims systems, and clinical documentation workflows are extension-sensitive zones. An extension that reads page content or clipboard data may capture protected health information. HIPAA analysis should treat browser extensions like any other software with PHI access.

  • Legal — Contracts, privileged communications, case files, and investigation materials often live in web apps and cloud documents. An extension that reads page content or clipboard data can route privileged material into a third-party AI service, creating privilege, confidentiality, and retention concerns that most legal teams have not yet analyzed.

  • HR — Personnel files, compensation data, performance reviews, and investigation records handled through browser-based HR systems are all potentially visible to extensions with broad permissions. EEOC and state-level employment-law implications of AI involvement in HR workflows make this a particular area of exposure that governance frameworks rarely address.



The Governance Framework: Detect, Block, Govern

The fastest way to reduce AI extension risk combines three layers: continuous inventory, permission-based blocking, and policy governance.


Detect

Start with a full extension inventory across managed browsers. Flag anything with broad permissions. Chrome’s admin extension permissions controls give IT visibility into what extensions can access across your fleet. Prioritize extensions that are:

  • Sideloaded outside the official store

  • Recently installed by a user rather than pushed by IT

  • Requesting ‘read and change data on all websites’ host access

  • AI-branded or positioned as a copilot, assistant, rewrite, summarize, or agent tool

  • Infrequently updated or published by an unverifiable publisher


Treat every extension as a software supply-chain component with a data path. A text assistant that rewrites prose in a fixed field is much lower risk than a browser operator that reads page content, accesses cookies, and can submit actions.
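For managed devices, the raw material for this inventory already sits on disk. The sketch below assumes Chrome's documented on-disk layout, where each installed extension keeps an unpacked copy under `<profile>/Extensions/<id>/<version>/manifest.json`; profile paths vary by OS, so the profile directory is passed in rather than guessed. A real fleet tool would run this per user profile and aggregate the results through your MDM or endpoint agent.

```python
import json
from pathlib import Path

def inventory(profile_dir: str) -> dict[str, dict]:
    """Map extension ID -> name/version/permissions for one Chrome profile.

    Assumes Chrome's on-disk layout:
    <profile>/Extensions/<id>/<version>/manifest.json
    """
    found = {}
    for manifest_path in Path(profile_dir).glob("Extensions/*/*/manifest.json"):
        ext_id = manifest_path.parent.parent.name
        data = json.loads(manifest_path.read_text(encoding="utf-8"))
        found[ext_id] = {
            # Names may appear as localized "__MSG_...__" placeholders;
            # resolving them needs the extension's _locales files.
            "name": data.get("name", "?"),
            "version": data.get("version", "?"),
            "permissions": data.get("permissions", []),
        }
    return found
```

A scan like this answers the first governance question — what is installed — but not who installed it or when; install provenance needs browser management telemetry, not the filesystem.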


Block

Chrome Enterprise supports administrative control of extensions based on permissions. As of Chrome 147/148, Google introduced the ability to block extensions by third-party risk score in the Admin Console — which helps catch newly popular AI extensions before a manual review completes. Recommended blocking rules:

  • Block all sideloaded extensions unless there is a documented security exception

  • Block any extension requesting cookies, history, clipboardRead, webRequest, tabCapture, or desktopCapture unless security approves it

  • Block unvetted AI extensions in departments handling regulated data

  • Require publisher verification, recent updates, and documented business justification for any GenAI extension


For higher-risk departments, consider separate managed browser profiles with tighter extension policies. The goal is to prevent consumer-installed AI tools from sharing the same browsing session as regulated business workflows.
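The permission-based rules above map fairly directly onto Chrome Enterprise's ExtensionSettings policy. The sketch below generates one such policy as JSON; the field names follow Google's documented ExtensionSettings schema (verify against current Chrome Enterprise docs before deploying), and the allowlisted extension ID is a placeholder, not a real tool.

```python
import json

# Sketch of an ExtensionSettings policy implementing the rules above:
# block risky permissions fleet-wide, restrict install sources to the
# official Web Store, and carve out documented exceptions.
policy = {
    "*": {
        # Any extension requesting one of these permissions is blocked.
        "blocked_permissions": [
            "cookies", "history", "clipboardRead",
            "webRequest", "tabCapture", "desktopCapture",
        ],
        # Permit installs only from the official Web Store (no sideloading).
        "install_sources": ["https://chrome.google.com/webstore/*"],
    },
    # A vetted, documented exception — placeholder ID, not a real extension.
    "aaaabbbbccccddddeeeeffffgggghhhh": {
        "installation_mode": "allowed",
    },
}

# This JSON is what lands in the ExtensionSettings policy value, whether
# delivered via the Admin Console, GPO, or a managed-preferences file.
print(json.dumps(policy, indent=2))
```

The design choice worth noting: blocking by permission rather than by extension name means a newly published AI sidebar that requests cookie or capture access is stopped by default, with no race against your review queue.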


Govern

Make extension review part of your SaaS intake and AI governance process — not a separate ‘browser’ problem. Extensions can be more invasive than many SaaS applications, so the same review logic applies: purpose, data access, storage location, vendor trust, and offboarding controls.


A strong governance model includes:

  • A pre-approved extension catalog maintained by IT and security

  • Monthly or quarterly recertification of installed extensions

  • Automatic removal of unapproved extensions from managed devices

  • Logging of install events, version changes, and permission changes

  • Separate policies for IT-managed devices versus BYOD browsers


One critical caution: static allowlists and point-in-time reviews are increasingly ineffective because extension ecosystems change quickly. An extension that was benign when approved can receive a silent update that adds permissions or changes behavior. Continuous monitoring — not annual audits — is the appropriate posture.




The Agentic Escalation: From Reading to Acting


Everything above describes the current risk landscape. Here is where it is heading.

The browser extension market is moving rapidly from passive AI tools — tools that observe and suggest — to agentic tools that take action. Monica’s ‘Browser Operator,’ Sider’s evolving capabilities, Microsoft Copilot in Edge’s increasingly agentic features, and a growing category of AI agents that can navigate, click, fill, and submit on behalf of users represent a fundamentally different risk category.


When an extension can only read page content, the risk is data exposure. When an extension can take actions in the browser, the risk expands to unauthorized transactions, unauthorized form submissions, and unauthorized system interactions — all potentially triggered by AI without human review of each individual action.


Most enterprise governance frameworks have not caught up to this distinction. They may have acceptable use policies for AI tools and data handling policies for SaaS applications. Almost none have policies governing what actions an AI agent is permitted to take in a browser session on behalf of an employee.


If you haven’t governed AI browser extensions as reading tools yet, you are not ready for AI browser agents. The governance gap that exists today will be significantly harder to close once agentic browser tools are embedded across your workforce.


Three Questions and Three Actions for This Week

Questions to Ask Your Leadership Team

  • Do we know which browser extensions are currently installed across our managed devices — and do we have any visibility into extensions on unmanaged or BYOD devices used for work?

  • Have we defined which departments or workflows are ‘extension-sensitive zones’ where AI browser tools should require explicit approval?

  • Has our legal team reviewed whether our current browser extension exposure creates compliance risk under HIPAA, GDPR, state privacy laws, or our client confidentiality obligations?


Questions to Ask Your IT and Security Teams

  • What extension inventory and monitoring capabilities do we currently have across managed browsers?

  • Are we using Chrome Enterprise policies to control extension permissions — and have we updated those policies to address the AI extension category specifically?

  • Do we have visibility into sideloaded extensions — and are we blocking sideloading by default?


Three Things Any Leader Can Do This Week

  • Run an extension inventory. Ask IT to pull a full list of extensions installed across managed devices. If you don’t have that capability, that itself is the finding you need to act on.

  • Define your high-risk zones. Identify which teams — finance, legal, HR, healthcare, executive — are handling sensitive data in browser-based systems and should have the most restrictive extension policies.

  • Add browser extensions to your AI governance scope. Your AI acceptable use policy, your governance committee charter, and your vendor review process should all explicitly address browser extensions. They are AI tools. They should be governed like AI tools.


Governance Is the Organization’s Job

The browser extension problem is not going to be solved by the Chrome Web Store, by AI vendors, or by Google’s increasingly active enforcement against malicious extensions. Those are important guardrails. They are not governance.


Governance is knowing what is installed in your environment, understanding what permissions those tools have, making conscious decisions about which ones are appropriate for which workflows, and enforcing those decisions consistently.


The organizations getting AI governance right aren’t the ones with the most restrictive policies. They’re the ones where leadership asked the right questions before the incident — not after the audit finding, the data breach, or the discovery that a ‘Featured’ extension had been quietly harvesting conversations across 20,000 enterprise tenants for months.


Your browser is not a governance-free zone. It is where your employees spend most of their working day — and in 2026, it is where AI has the most direct access to your most sensitive information.


From guardrails to governance means owning that surface. Not eventually. Now.


This is Part 4 of the Shadow AI in Your Tech Stack series. Part 1 covered AI notetakers. Part 2 covered CRM AI features. Part 3 covered Microsoft 365 Copilot. Part 5 coming soon: the shadow AI hiding in your HR and recruiting platforms.



Holly Hartman is the founder of Future Workforce Systems (FWS), an AI governance and workforce readiness consultancy. FWS helps mid-to-large enterprises move from AI-anxious to AI-ready — and from guardrails to governance — always through an Ethical AI lens. Because how we adopt AI matters as much as whether we adopt it. Learn more at futureworkforcesystems.com.


© 2026 Future Workforce Systems · Holly Hartman. All rights reserved. 
These tools are for personal use and professional development only.
Reproduction, redistribution, or use in paid offerings without written consent is not permitted.


To license or adapt tools for your team or program, contact us: contact@futureworkforcesystems.com
