The 2026 AI wave made extension privacy materially worse, then a little better. A flood of new AI sidebars hit the Chrome Web Store with permission requests that would have looked alarming two years ago; a smaller group of serious tools responded by shipping with narrower permissions and clearer retention policies. This post gives you the four-vector audit checklist we use before installing an AI extension, applies it to a few named examples, and runs the same check on Clicky so you can tell what it accepts and what it refuses.
Why 2026 made extension privacy harder
Three things converged. First, consumer-grade browser AI became genuinely useful — which is why so many new extensions shipped. Second, the path of least resistance for shipping one is to request broad host permissions (<all_urls>) so the content-script injection works everywhere without friction. Third, integrating a language model usually means routing data through a cloud provider, adding one or more hops outside the user’s direct control.
None of those things are inherently bad. But they compound. A typical 2026 AI sidebar extension has: broad host access, continuous content-script injection, a cloud model vendor, a telemetry pipeline, and a persistent memory store. Each layer is a data-flow decision; any one of them can be the problem.
The four data vectors to audit
Before installing any AI Chrome extension, audit it along these four axes. Most privacy issues surface in one of them.
- Permissions. What the extension can read, and on which pages.
- Model routing. Where your page data goes for inference and under what terms.
- Telemetry. What usage data is collected independently of the AI call.
- Memory and retention. What is stored, where, and for how long.
Each has its own audit method, below.
Vector 1 — permissions in the manifest
Every Chrome extension ships a manifest.json that declares what it can do. After installing, you can inspect what it was granted at chrome://extensions → Details, under “Permissions” and “Site access.” Before installing, the Chrome Web Store listing’s “Permissions” section translates the manifest into English.
The key distinction is broad vs. narrow:
- Broad — <all_urls>, https://*/*, or tabs. Chrome Store phrases this as “Read and change all your data on all websites.” The extension can silently read every page you visit, whether or not you invoked it. Often legitimate (ad blockers need this); often abused (AI sidebars that capture your browsing proactively). Requires a specific justification in the privacy policy.
- Narrow — activeTab. Chrome Store phrases this as “Read data on the active tab, only when the user invokes the extension.” The extension gets temporary access to the current tab after an explicit gesture, and loses it on navigation.
A well-behaved AI extension almost never needs <all_urls>. If one requests it, read the justification twice.
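The broad-vs-narrow triage can be mechanised. A minimal sketch, assuming a parsed manifest object (the two example manifests below are hypothetical, not any real extension’s):

```javascript
// Triage a manifest's declared permissions into broad vs narrow.
const BROAD = new Set(["<all_urls>", "tabs"]);

function broadPermissions(manifest) {
  const declared = [
    ...(manifest.permissions || []),
    ...(manifest.host_permissions || []),
  ];
  return declared.filter(
    (p) => BROAD.has(p) || /^(https?|\*):\/\/\*\//.test(p) // wildcard host patterns
  );
}

// A sidebar that asks for everything vs one that sticks to activeTab:
const sloppy = { permissions: ["tabs", "storage"], host_permissions: ["https://*/*"] };
const careful = { permissions: ["activeTab", "storage"], host_permissions: [] };

console.log(broadPermissions(sloppy));  // ["tabs", "https://*/*"]
console.log(broadPermissions(careful)); // []
```

Anything the filter returns is what you read the privacy-policy justification for.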
Vector 2 — where inference is routed
A privacy-conscious extension names the cloud vendors it talks to. A privacy-sloppy one calls them “third-party AI services.” The difference tells you how much you can audit after the fact.
Things to check:
- Named vendors. “We send the page to Anthropic Claude” is auditable; Anthropic’s commercial terms spell out retention and training policies explicitly. “Third-party AI” is not auditable.
- API tier. Vendors usually offer a consumer API tier (may use data for training) and an enterprise tier (zero retention, no training). An extension that routes through the enterprise tier is materially more private than one that hits the consumer API.
- Region. For European users, whether the inference hits EU or US data centres can matter for GDPR purposes. Some vendors offer EU-pinned endpoints.
- Proxying. Many extensions proxy inference through their own backend instead of calling the model vendor directly. This can be good (consolidates keys, enforces retention policy) or bad (extra hop, extra party that sees your data). The decisive question is what the proxy does with the data on its side.
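The “named vendors vs vague phrases” test is crude enough to sketch in code. The vendor list and vague-phrase list below are illustrative assumptions, not an exhaustive standard:

```javascript
// Crude auditability check for a routing disclosure in a privacy policy.
const NAMED_VENDORS = ["anthropic", "openai", "google", "mistral", "elevenlabs"];
const VAGUE_PHRASES = ["third-party ai", "trusted partners", "service providers"];

function routingDisclosure(policyText) {
  const text = policyText.toLowerCase();
  const named = NAMED_VENDORS.filter((v) => text.includes(v));
  const vague = VAGUE_PHRASES.some((p) => text.includes(p));
  if (named.length > 0) return { verdict: "auditable", vendors: named };
  return { verdict: vague ? "vague" : "undisclosed", vendors: [] };
}

console.log(routingDisclosure("We send page text to Anthropic for inference.").verdict);
// "auditable"
console.log(routingDisclosure("We may share data with third-party AI services.").verdict);
// "vague"
```

An “auditable” verdict only means you can go read the named vendor’s terms; it is the start of the check, not the end.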
Vector 3 — telemetry and analytics
Separate from the AI call, most extensions collect product analytics — which features are used, crash traces, usage frequency. This is normal and often innocuous, but it adds up.
What to verify:
- What endpoints the extension calls, unprompted. Background traffic will not show up in a normal page’s DevTools; open the extension’s own DevTools instead (chrome://extensions → Details → “Inspect views: service worker”), switch to the Network tab, and watch. An extension that makes regular beacons outside your active interaction is running telemetry; whether it is aggressive depends on the payload.
- Third-party analytics SDKs. Google Analytics, Mixpanel, Segment, Amplitude, Sentry. Listed in the privacy policy if the vendor is careful; discovered in the bundled code and the manifest’s host_permissions list if they are not.
- Opt-out paths. A serious vendor lets you turn telemetry off. If the only answer is “uninstall the extension,” telemetry is load-bearing to the product and you should factor it in.
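The “unprompted beacons” test reduces to a timestamp comparison. A sketch, assuming you noted when you invoked the extension and exported a request log (the 5-second window and the example URLs are arbitrary choices for illustration):

```javascript
// Flag requests not close in time to any explicit user invocation.
function backgroundBeacons(requests, invocationTimes, windowSec = 5) {
  return requests.filter(
    (req) => !invocationTimes.some((t) => Math.abs(req.time - t) <= windowSec)
  );
}

// Hypothetical captured log; times are seconds since the capture started.
const log = [
  { url: "https://api.example-ai.test/answer", time: 12 },
  { url: "https://telemetry.example-ai.test/beacon", time: 60 },
  { url: "https://telemetry.example-ai.test/beacon", time: 120 },
];
const invokedAt = [10]; // you clicked the extension once, at t = 10s

console.log(backgroundBeacons(log, invokedAt).map((r) => r.url));
// both beacon URLs, neither tied to your invocation
```

Regular, evenly spaced hits to the same endpoint are the telemetry signature; one-off requests right after your clicks are the product working.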
Vector 4 — memory and retention
This is the newest axis, and it matters more than it used to. Modern AI extensions can keep context across requests — either for continuity (“remember we were looking at the shipping tab”) or for personalisation (“learn my preferences over time”). How that memory is implemented is a privacy choice.
- Session-only. Memory lives in Chrome session storage, cleared when the browser session ends. Nothing persists server-side. Lowest privacy impact; a small UX cost (you start fresh in the next session).
- Local-only, persistent. Memory lives in local or sync storage, persists across sessions, but never leaves the browser. Reasonable trade-off.
- Server-side, persistent. Memory is stored on the vendor’s backend, associated with an account. Good for continuity; the biggest privacy cost. You are now trusting the vendor’s server security as well as their policies.
A 2025 LayerX disclosure called “Tainted Memories” demonstrated that persistent server-side memory in ChatGPT Atlas could be poisoned via CSRF — an attacker could plant false instructions in a user’s memory that the assistant would then execute later. This is not a rare, theoretical attack; it is exactly why session-only memory is the conservative default.
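The three tiers differ in exactly one behaviour: what survives the end of a browser session. A minimal sketch of that difference, using plain Maps as stand-ins (in a real MV3 extension the session tier maps to chrome.storage.session and the local tier to chrome.storage.local; this class is an illustration, not an extension API):

```javascript
// Model the memory tiers behind one interface so the trade-off is visible.
class ExtensionMemory {
  constructor(tier) {
    this.tier = tier; // "session" | "local" | "server"
    this.store = new Map();
  }
  remember(key, value) { this.store.set(key, value); }
  recall(key) { return this.store.get(key); }
  endBrowserSession() {
    // Only session-scoped memory is wiped when the browser closes.
    if (this.tier === "session") this.store.clear();
  }
}

const session = new ExtensionMemory("session");
const local = new ExtensionMemory("local");
session.remember("topic", "shipping tab");
local.remember("topic", "shipping tab");

session.endBrowserSession();
local.endBrowserSession();
console.log(session.recall("topic")); // undefined: nothing survives the session
console.log(local.recall("topic"));   // "shipping tab": persists, but stays on-device
```

The server tier has no endBrowserSession at all: clearing it is a request to the vendor, which is the whole point of the distinction.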
The audit — step by step
Put into practice, in the order you should run it:
- Open the Chrome Web Store listing. Scroll to the “Permissions” section. Flag anything broader than activeTab for justification.
- Read the privacy policy. Search for named model vendors, named analytics providers, and a retention statement. If any of the three are vague, mark as unclear.
- Install in a test profile. Use a fresh Chrome profile specifically for the audit. Keeps your main cookies/extensions clean.
- Watch DevTools for 15 minutes. Open Network tab with “Preserve log” enabled, filter to the extension’s origins, browse normally. Count requests that are not tied to your invocations.
- Load a known-benign test page with sensitive-looking content. Visit it without invoking the extension. If you see any outbound request that includes page content, the extension is capturing proactively.
- Check memory persistence. Restart the browser. Re-invoke the extension. Ask something that references a prior session. If it remembers, memory is persistent — decide if that is what you want.
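The six steps reduce to a verdict on the four vectors. A sketch of collapsing them into one call; the labels and the “any flag means review” rule are this post’s convention, not a standard:

```javascript
// Combine the four vector verdicts from the audit into a single decision.
function auditVerdict({ permissions, routing, telemetry, memory }) {
  const flags = [];
  if (permissions !== "narrow") flags.push("broad permissions");
  if (routing !== "named") flags.push("unnamed routing");
  if (telemetry === "no-opt-out") flags.push("mandatory telemetry");
  if (memory === "server") flags.push("server-side memory");
  return flags.length === 0 ? "install" : `review: ${flags.join(", ")}`;
}

console.log(auditVerdict({
  permissions: "narrow", routing: "named", telemetry: "opt-out", memory: "session",
}));
// "install"
console.log(auditVerdict({
  permissions: "broad", routing: "vague", telemetry: "opt-out", memory: "local",
}));
// "review: broad permissions, unnamed routing"
```

“Review” is not “reject”: an ad blocker legitimately flags on permissions, which is why the flags name the vector rather than scoring the extension.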
Clicky, run through the same audit
Here is Clicky run through the same four vectors, by the book.
- Permissions. activeTab, storage, offscreen. No <all_urls>, no tabs, no broad host permissions. The extension has no ambient access to any page you are not actively using it on.
- Model routing. Named vendors, disclosed in the privacy policy: Anthropic for answer generation, Mistral Voxtral for transcription, ElevenLabs for text-to-speech. Requests are proxied through a Fleece AI Cloudflare Worker that does not retain request bodies. Enterprise-tier API terms — no training on user data.
- Telemetry. Usage metering only, for quota enforcement on paid plans. No third-party analytics SDK, no pixel, no Mixpanel, no Segment. Opt-out by uninstalling; there is nothing else to opt out of.
- Memory and retention. Session-only. Short conversation history lives in Chrome session storage and clears when the browser session ends. Nothing persists server-side after the inference response.
The full specification is in the “Privacy by default” section of the homepage and the linked privacy page. For a deeper look at exactly what the extension captures on each invocation, see our explainer on how AI Chrome extensions see your screen.
Frequently asked questions
Is it even possible to use AI in the browser without sending data to the cloud?
Technically, yes — local language models have improved enough that some use cases can run entirely on-device. In practice, for generalised page understanding at acceptable quality and latency, the cloud is still the norm in 2026. Transparent cloud routing is better than a murky on-device claim.
Can I tell what data an extension has already sent somewhere?
Partially. Chrome DevTools shows you current traffic. For historical data that has already left your machine, you are relying on the vendor’s retention policy and, in some jurisdictions, your data-access rights under GDPR or similar regulations.
If an extension says it uses “end-to-end encryption,” is my data safe?
Encryption protects the transport. It does not protect what the vendor does with your data at the endpoint. A vendor that processes your page content with a language model has, by definition, decrypted it on their side. The relevant question is what happens next, not what happens in transit.
Does Clicky claim to be the most private AI extension?
No — the field is too large for that claim to be honest. What Clicky does is run through the four vectors above with the most conservative answer on each: narrow permission, named vendors, no third-party analytics, session-only memory. If another extension can match all four, it is in the same ballpark. If it cannot match even one, you now know what to trade off.
Where do I file a privacy concern about an extension?
The Chrome Web Store has a report mechanism on each listing. For GDPR-specific concerns in Europe, the extension’s vendor should publish a data-protection contact. Clicky’s is contact@fleeceai.agency.
Next in our series: push-to-talk vs always-listening AI — the privacy difference that most voice assistant reviews gloss over.