In 2026, almost every browser AI product calls itself a “copilot,” an “agent,” or an “assistant” — often all three in the same paragraph. The words are not interchangeable. They describe meaningfully different products, with different autonomy, different failure modes, and different trust requirements. This post pins each term down with current examples, then tells you which one actually fits which job.
Three words, three products
A browser copilot suggests and waits for confirmation. A browser agent executes a multi-step task autonomously. A browser assistant perceives the page, answers a question, and takes one narrow action per request. That is the whole distinction. Everything else is implementation detail.
The three products trade off along two axes: how much the user has to confirm, and how much the system is trusted to do on its own. Copilots are high-confirmation, low-autonomy. Agents are low-confirmation, high-autonomy. Assistants sit in the middle.
Browser copilot — suggests, then waits
A copilot offers. The user confirms. Each step is a proposal the user has to accept.
Canonical example: Microsoft Copilot in Edge. It summarises a page, drafts a reply, rewrites a paragraph — and nothing it does takes effect until the user clicks. The copilot lives in a sidebar; the page stays under the user’s control. GitHub Copilot — the product that gave the category its name — works the same way in a code editor: suggest, wait, accept or ignore.
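The suggest-then-confirm shape is worth seeing in code. This is a minimal sketch, not any vendor’s actual API — `CopilotSession`, `suggest`, `accept`, and `reject` are illustrative names. The one property that defines the category: the model can only add to a pending queue, and nothing touches the document until the user explicitly accepts.

```typescript
type Suggestion = { id: number; text: string };

class CopilotSession {
  private nextId = 1;
  private pending: Suggestion[] = [];
  public document = "";

  // The model's only power: add a proposal to the pending queue.
  suggest(text: string): Suggestion {
    const s = { id: this.nextId++, text };
    this.pending.push(s);
    return s;
  }

  // Only an explicit user acceptance changes the document.
  accept(id: number): boolean {
    const i = this.pending.findIndex((s) => s.id === id);
    if (i === -1) return false;
    this.document += this.pending[i].text;
    this.pending.splice(i, 1);
    return true;
  }

  // Rejecting is free: the failure mode of a bad suggestion is "nothing happens".
  reject(id: number): void {
    this.pending = this.pending.filter((s) => s.id !== id);
  }
}

const session = new CopilotSession();
const good = session.suggest("Dear team, ");
const bad = session.suggest("DELETE EVERYTHING");
session.reject(bad.id); // the bad suggestion is simply never applied
session.accept(good.id);
```

The confirmation gate is also why the attack surface is small: a prompt-injected suggestion still lands in the pending queue, where the user can decline it.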
Strengths:
- Predictable failure mode. A bad suggestion is just not accepted. Nothing happens without explicit confirmation.
- Low attack surface. Because the system does not execute on its own, prompt injection from page content cannot cause an autonomous action.
- Enterprise-friendly by default. IT teams can deploy a copilot without redesigning approval workflows.
Weaknesses: slow for anything multi-step. A user who needs the AI to “book this flight and add it to my calendar” has to accept six or seven suggestions in sequence, and a copilot cannot chain the steps.
Browser agent — executes on its own
An agent takes a goal and runs. It chooses the next action, executes it in the browser, observes the result, and repeats — dozens of times if necessary — until the goal is complete or it hits a stop condition.
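The choose-execute-observe loop described above can be sketched as follows. This is a toy, not Comet’s or Atlas’s implementation: `execute` is a stub standing in for real browser control, `choose` is a hard-coded stand-in for the model’s next-action policy, and all names are illustrative. What matters is the structure — the loop runs without per-step confirmation until a stop condition trips.

```typescript
type Observation = { page: string; done: boolean };
type Action = { kind: "navigate" | "click" | "finish"; target: string };

// Stub environment standing in for real browser control.
function execute(action: Action, trace: string[]): Observation {
  trace.push(`${action.kind}:${action.target}`);
  return { page: action.target, done: action.kind === "finish" };
}

// Stub policy standing in for the model's next-action choice.
function choose(goal: string, step: number): Action {
  const plan: Action[] = [
    { kind: "navigate", target: "flights.example" },
    { kind: "click", target: "cheapest-result" },
    { kind: "finish", target: goal },
  ];
  return plan[Math.min(step, plan.length - 1)];
}

function runAgent(goal: string, maxSteps = 10): string[] {
  const trace: string[] = [];
  for (let step = 0; step < maxSteps; step++) { // stop condition: step budget
    const obs = execute(choose(goal, step), trace);
    if (obs.done) break;                        // stop condition: goal reached
  }
  return trace; // the action trace is what you inspect when a plan fails midway
}

const trace = runAgent("book Paris to Tokyo");
```

Note where the trust lives: the user confirms the goal once, and every subsequent action is the policy’s call. That is also where the injection risk enters — if page content can influence `choose`, it can steer the whole loop.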
Canonical 2026 examples: Perplexity Comet and ChatGPT Atlas. Both are dedicated browsers with built-in autonomous modes. Comet is pitched as an AI research and task-execution engine; Atlas integrates OpenAI’s agent stack natively. In both, you can ask for something like “find the cheapest flight from Paris to Tokyo in October and book it” and the agent will chain the steps on its own.
Strengths:
- Leverage. A task that would take you twenty clicks takes one sentence.
- Parallelism. A full agent can open multiple tabs, compare options, and summarise — in the time it would take you to open the first one.
Weaknesses, which are genuine and documented:
- Prompt injection from page content. In 2025, Brave researchers disclosed indirect prompt injection in Perplexity Comet, and OpenAI published guidance acknowledging prompt injection may never be fully solved for agents with broad action rights. A malicious page can embed hidden instructions that hijack the agent.
- Opaque failure. When a ten-step plan fails at step seven, it is often unclear what happened. Users have to inspect the sequence after the fact.
- Browser migration cost. Comet and Atlas are full browsers. Adopting one means migrating bookmarks, password managers, extensions, developer profiles.
Browser assistant — the middle
An assistant perceives the page, answers a question, and — if the user asks — takes one narrow, targeted action per request. No multi-step execution without explicit re-prompting. No background work.
Canonical 2026 example: Clicky. Hold Alt, ask where the export button is, and the halo lands on it. Ask the follow-up question — “what does the dropdown next to it do?” — and the halo moves to the dropdown. Each action is a single turn; there is no multi-step plan running behind the scenes.
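The one-turn shape can be sketched like this. It is an illustrative model, not Clicky’s actual API — `UiElement`, `ask`, and the selector-matching logic are all assumptions for the sake of the example. The defining property: each request is a self-contained turn whose only action is returning one selector to highlight, with no plan carried between calls.

```typescript
type UiElement = { selector: string; label: string };

// One request = one turn. The return value is the single action taken:
// an answer plus at most one selector to draw a halo on. Read-only.
function ask(
  page: UiElement[],
  question: string
): { answer: string; halo: string | null } {
  const q = question.toLowerCase();
  const hit = page.find((el) => q.includes(el.label.toLowerCase()));
  if (!hit) return { answer: "Not found on this page.", halo: null };
  return { answer: `The ${hit.label} is highlighted.`, halo: hit.selector };
}

const page: UiElement[] = [
  { selector: "#export-btn", label: "export button" },
  { selector: "#export-menu", label: "dropdown" },
];

const first = ask(page, "Where is the export button?");
const followUp = ask(page, "What does the dropdown next to it do?"); // a fresh turn
```

Compare this to the agent loop: there is no loop. The follow-up question is a new call with no hidden state, which is exactly why there is no multi-step plan for injected page content to hijack.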
Strengths:
- Narrow attack surface. A read-only assistant that does not click or submit cannot be hijacked into an unwanted action. Overlay drawing on a selector is not a vector for prompt injection against the user.
- Works inside an existing browser. No migration cost. Install, hold Alt, go.
- Fast turn-around. One perception pass, one answer, one halo. Typically under two seconds, end to end.
Weaknesses: cannot do the “book the flight” class of task. For anything that requires chaining browser actions, a full agent wins on leverage.
Side-by-side comparison
| Dimension | Copilot | Assistant | Agent |
|---|---|---|---|
| Autonomy | Low — suggests only | Medium — one action per ask | High — multi-step plans |
| Confirmation | Every step | Every request | Goal, then trust |
| Prompt injection risk | Minimal | Narrow — no clicking | Broad — known attacks 2025 |
| Browser commitment | Extension in existing browser | Extension in existing browser | Often a new browser |
| Best-at task | Writing, summarising, rewriting | Finding elements, page Q&A | Multi-step workflows |
| Example in 2026 | Microsoft Copilot in Edge | Clicky | Comet, Atlas |
Which one should you actually install?
Pick by the job, not by the marketing.
- You write a lot in the browser. A copilot earns its keep. Rewrites, drafts, suggestions — all without giving up control. Microsoft Copilot in Edge is the frictionless default; several sidebar extensions cover the same territory.
- You spend all day in complex SaaS tools and keep hunting for the right button. An assistant is the right shape. Hold a key, ask, get pointed at the thing. The narrow action surface is the point — read-only help at the moment you need it. Clicky is built for this specifically. See how it works.
- You want the browser itself to run multi-step workflows. A full agent wins on leverage — and the 2025 prompt-injection disclosures tell you what you are signing up for. Comet and Atlas are the current generation; read their security posture documents before committing. For many users, the agent is overkill for the day-to-day and useful only for specific heavy tasks; a natural split is an assistant for the 95% and an agent for the other 5%.
For a deeper look at how the three product types actually perceive the page and why that shapes their failure modes, see our explainer on how AI Chrome extensions see your screen.
Frequently asked questions
Is an agent always better than a copilot?
No. An agent is better at multi-step workflows; a copilot is better at single-step tasks where you want control over each result. They are optimised for different classes of work.
Can a copilot be upgraded to an agent by giving it more permissions?
Functionally yes, but the failure modes change. A copilot that gains autonomous action also gains the prompt-injection attack surface. Vendors that let users opt into higher autonomy usually separate the two modes explicitly and log the autonomous actions.
Why does Clicky call itself an assistant rather than an agent?
Because it takes one action per user request — point at the element, read the answer — and stops. It does not run a multi-step plan. Calling it an agent would be misleading; the word has a specific meaning in the 2026 browser-AI vocabulary and we want to use it carefully.
Do any of these products learn from my use over time?
Some do, some do not. Atlas has browser-memory features that persist context across sessions. Clicky keeps conversation history strictly in session storage and clears it when the browser session ends. Read the privacy page of any tool you are evaluating.
Next in our series: the Chrome extensions that don’t track you — a practical guide to auditing the AI tools you already have installed.