SLICC Security Model
Last updated: 2026-05-06
What SLICC is, in security terms
SLICC is an experimental browser-native AI agent. It is closer in spirit to running Claude Code with --dangerously-skip-permissions than to an enterprise agent platform. SLICC is built for curious early adopters who want to push the envelope and who understand that a tool with this much reach inside a logged-in browser session can also do harm if it is pointed at the wrong page or invoked carelessly.
If you are looking for a tool with permission flows for every action, audit logs, role-based access, isolated credentials, allow-lists, and a vendor security review you can hand to your CISO, SLICC is not that tool today. It may become that tool later. If it does, the messaging on this site will change. Until then, treat SLICC the way you would treat any open-source developer power tool that runs in your browser session.
The choice to give a language model real reach inside a browser is a tradeoff. We would rather make it an explicit one, with the surface area, the hedges, and the failure modes named in plain terms, than leave these things unnamed.
Trust model
SLICC is a single-operator tool. There is one trusted operator boundary per running SLICC instance — the human at the keyboard. SLICC is not a hostile multi-tenant security boundary; it is not a system where adversarial users share one agent.
This puts SLICC in the same broad category as OpenClaw's personal-assistant model: one trusted operator, potentially many agents under that operator, no per-user isolation inside one running tool. Where the two diverge: SLICC runs inside a browser tab with no host-level shell, no host filesystem, and no inbound messaging surface (no Slack bot, no WhatsApp listener, no always-on public messaging endpoint). Nobody but the operator can put a prompt into the cone. The trust posture is similar to OpenClaw's; the blast radius is narrower in some directions and similar in others.
What SLICC can do, and therefore what an attacker who hijacks SLICC could do
SLICC runs in your browser. It can:
- Read pages you have open and act on them through DOM events and the Chrome DevTools Protocol.
- Make HTTP requests from the browser via the Fetch API. The WASM bash shell's `curl` is sugar over Fetch.
- Read and write your clipboard, in the same way any JavaScript on a page can.
- Read and write files in its in-browser virtual filesystem (IndexedDB and OPFS — the user's local browser storage, not the host filesystem).
- Take screenshots of tabs it has been granted access to. Browser tab screenshots are taken via CDP without additional approval. Screenshots of the host operating system (via `screencapture`) require explicit user permission, as they do not go through CDP.
- Mint OAuth tokens for providers you have configured. Each token request requires explicit user approval.
- Run sub-agents ("scoops") on tabs it opened. Each scoop is constrained to the tabs it owns.
- Schedule background work via `crontask` and receive inbound HTTP via `webhook`. These are `window.setInterval` and a registered fetch handler — they live and die with the tab.
That list is what could go wrong. If a malicious page successfully prompt-injects SLICC, the upper bound on damage is the union of the bullets above, applied to whatever sessions you are currently logged into in the browser SLICC is attached to.
Layered controls
"Close the tab" is one control in the model. It is not the whole model. The model has several layers, each of which addresses a different failure mode.
1. Fresh-profile execution (npm and macOS app)
When you run SLICC via npx sliccy or the macOS app, SLICC launches a fresh Chrome profile with no shared cookies and no shared sessions. From the perspective of that browser, you are not logged in to Gmail, Slack, Jira, your CRM, or your bank. Until you actively sign in to a site inside that profile, SLICC has no session access to it. This is the default mode for those distributions and the one we recommend for anyone who has not specifically decided otherwise.
2. Extension-mode tradeoff
The Chrome extension runs in your existing browser profile and has access to whatever you are already logged into there. Treat that browser profile as sensitive state: it is the union of your live sessions, the things SLICC could act on with one prompt-injected misstep. This is a deliberate convenience tradeoff. Some users consciously choose it; some users should not. If you would not be comfortable handing the keyboard to an AI, even one where you can read the full system prompt, while your tabs are open, do not run the extension in your everyday profile. Use the npm or macOS app instead, or install the extension into a Chrome profile dedicated to agent work, with no personal email signed in, no password manager, no banking session.
3. Scoop-to-tab ownership
Each scoop (sub-agent) can only control tabs it opened itself. Scoops cannot reach across to a tab another scoop owns. The cone (the orchestrator) sees more, by design, because the cone is the surface the user is talking to. This bounds the blast radius of a compromised scoop to its own tabs.
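The ownership rule is easy to picture as a map from tab to owning scoop, checked on every tab operation. A minimal sketch under assumed names — this is illustrative, not SLICC's actual internals:

```typescript
// Illustrative only — not SLICC's source. One map records which scoop opened
// each tab; every tab operation a scoop attempts is checked against it.
const tabOwner = new Map<number, string>(); // tabId -> scoopId

function recordOpen(scoopId: string, tabId: number): void {
  tabOwner.set(tabId, scoopId);
}

function assertOwns(scoopId: string, tabId: number): void {
  if (tabOwner.get(tabId) !== scoopId) {
    throw new Error(`scoop ${scoopId} does not own tab ${tabId}`);
  }
}
```

A compromised scoop that tries to drive a sibling's tab fails this check; the cone, as orchestrator, is the one surface not bound by it.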
4. Tab-bound lifecycle
The browser tab is the process boundary. Close it and the agent loop stops, the WebAssembly shell goes away, scheduled crontask ticks stop firing, registered webhooks stop responding, the agent's working memory is gone. There is no daemon, no service worker that keeps acting on your behalf after you walk away. This is a real property of the architecture, not a marketing line, and it is why the close-the-tab phrasing exists.
5. Secrets that stay out of the model context
API keys, OAuth tokens, and other sensitive values you give SLICC are managed through a secrets layer. The values themselves are not placed into the LLM's context window. The model sees a reference; the runtime substitutes the real value when it makes the actual outbound call. A model that is being prompt-injected cannot reveal a value it never saw in its prompt.
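One way to picture the substitution — the reference syntax and vault shape below are invented for illustration, not SLICC's actual format:

```typescript
// Illustrative only. The model emits an opaque reference like
// "{{secret:github_token}}"; the runtime resolves it against a vault the
// model cannot read, immediately before the outbound call.
const vault = new Map<string, string>([["github_token", "ghp_realvalue"]]);

function resolveSecrets(text: string): string {
  return text.replace(/\{\{secret:(\w+)\}\}/g, (_match, name: string) => {
    const value = vault.get(name);
    if (value === undefined) throw new Error(`unknown secret: ${name}`);
    return value;
  });
}

// What the model saw in its context:  "Bearer {{secret:github_token}}"
// What actually leaves the browser:   "Bearer ghp_realvalue"
const header = resolveSecrets("Bearer {{secret:github_token}}");
```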
6. Explicit approval for the most sensitive shell verbs
Two shell verbs in particular require explicit, in-product user approval each time they run:
- `screencapture` — taking a screenshot of the host operating system's screen.
- `oauth-token` — minting an OAuth token.
These are the verbs that most directly leak data outward. They do not run unattended.
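The gating can be understood as a wrapper that asks the human before every invocation, with consent never cached — a sketch, not SLICC's implementation:

```typescript
// Illustrative only: a per-invocation approval gate. Consent is requested on
// every call and never remembered, so these verbs cannot run unattended.
type Approver = (action: string) => Promise<boolean>;

function gated<T>(
  action: string,
  askUser: Approver,
  run: () => Promise<T>
): () => Promise<T> {
  return async () => {
    if (!(await askUser(action))) {
      throw new Error(`${action} denied by user`);
    }
    return run();
  };
}

// e.g. const screencapture = gated("screencapture", promptDialog, captureScreen);
```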
7. nuke is gated
nuke is the factory reset. It wipes SLICC's virtual filesystem, scoops, and stored state. It is gated by a security code so that a model going off-script cannot trigger a reset on its own. It is meant for developers debugging from a clean slate.
Prompt injection
Prompt injection is the load-bearing risk for any agentic browser tool, and we treat it as such.
System-prompt guardrails are soft guidance. Hard enforcement comes from elsewhere in the architecture: the tab as process boundary, per-invocation user approval on sensitive shell verbs, secrets kept out of the model's context, scoop-to-tab ownership, and model selection. Nobody has solved prompt injection. We have designed so that successful injection has a bounded blast radius.
A note on the threat surface. SLICC can receive inbound HTTP via webhook, but webhook URLs are treated as secrets — they are not publicly discoverable, and the content they deliver is treated as untrusted input. The user is the only party who tells the cone what to do, which removes one common injection vector (a stranger messaging your agent) that haunts other agentic tools. It does not, however, remove the more important vector: any page the agent reads is a potential injection vector. A Jira ticket comment, a Notion doc, a Gmail thread, a webpage you ask the agent to scrape: each of these can carry adversarial instructions that the agent will execute with whatever capabilities you have given it. Treat the list of pages you fully trust as shorter than your instinct says.
A few specifics about how we hedge:
- We are familiar with the "lethal trifecta" framing (untrusted content meeting sensitive tool access meeting outbound reach), and we are familiar with the counter-arguments. Both are correct in different regimes, and both inform the design.
- Model strength matters. Smaller and older models are measurably more susceptible to instruction hijacking and tool misuse than current frontier models, and the gap is large enough to change the practical safety story. The optional Adobe LLM Provider does not allow models weaker than Claude Sonnet 4.6. We are deliberately constraining users toward more injection-resistant models, even at the cost of slower and more expensive tokens. This is opinionated by design.
- The empirical injection rate against Claude Sonnet 4.6 and comparable frontier models, in our own deliberate red-team work and in the user reports we monitor, is low. It is not zero, and probably never will be.
- The "days since the last prompt injection incident" counter on the homepage is a live counter, not a joke. We commit to resetting it to zero when we receive a credible user report of a successful injection. We have not had to reset it. We expect to one day, and when that happens you will see the counter move.
If you have a credible injection report against SLICC, your own or one you have observed, please file it. See the reporting section at the end of this page.
The shell, in less alarming terms
The bash shell SLICC presents to the model is JavaScript-in-a-tab dressed up as a CLI. We chose that shape because frontier models already speak bash fluently and reach for shell idioms naturally; making the in-browser runtime look like bash produces measurably better tool-use behavior than inventing a bespoke API the model has never seen. None of the verbs in the shell give SLICC capabilities that the browser does not already grant any JavaScript running in a page.
Concretely:
- `curl` and `wget` make HTTP requests from the browser via Fetch. They share the browser's CORS posture; they do not bypass any permission the browser would not already grant a page. CORS-loosening browser extensions are among the most popular developer tools in the Chrome Web Store, for the practical reason that the browser's default CORS rules are stricter than many developer workflows need; SLICC is more conservative than that category.
- `pbcopy` / `pbpaste` are the clipboard API. Any JavaScript on any page can read and write the clipboard subject to the same browser permissions.
- `screencapture` is gated on per-invocation user approval. So is `oauth-token`.
- `webhook` registers an inbound HTTP handler in the tab. It receives data; it does not, on its own, exfiltrate anything.
- `crontask` is `window.setInterval` with a friendlier name. It dies with the tab.
- `nuke` is gated by a security code.
- `node`, `python3`, `git`, `playwright-cli`, `sqlite3`, and the rest of the alarming-looking list run inside the browser's WebAssembly sandbox. They cannot read files on your host filesystem outside of an explicit user-initiated upload. They cannot write files to your host filesystem outside of an explicit user-initiated download.
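To make "sugar over Fetch" concrete, here is roughly the shape such a verb can take — a sketch, not SLICC's source, with flags and behavior simplified:

```typescript
// Illustrative only: a curl-shaped wrapper over the browser's Fetch API.
// It inherits the page's CORS posture and cookie policy wholesale — there
// is no extra privilege hiding underneath the CLI costume.
async function curl(
  url: string,
  opts: { method?: string; data?: string } = {}
): Promise<string> {
  const res = await fetch(url, {
    method: opts.method ?? (opts.data !== undefined ? "POST" : "GET"),
    body: opts.data,
  });
  return res.text();
}
```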
The shell is not a UNIX system. The first time Claude saw it, Claude said "I can't believe it's not a UNIX system." Claude was being polite. It is a clever browser sandbox with a bash-shaped interface.
Domains, downloads, and supply chain
- `www.sliccy.com` is the website. `www.sliccy.ai` is the API surface.
- The macOS `.dmg` link redirects to a GitHub release artifact in the public, Apache-2.0 `ai-ecoverse/slicc` repository. You can verify the artifact by checksum against the release page.
- The Chrome extension is published on the Chrome Web Store and reviewed by Google.
- The source is open under Apache-2.0. "Open source" is not the same as "audited," but it does mean you or your security team can read, fork, and audit any line of it.
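Checksum verification needs nothing exotic: hash the downloaded bytes and compare against the hex string on the release page. A sketch (the filename and published value are whatever the release page gives you; the digest in the test below is SHA-256's standard "abc" test vector, not a real release checksum):

```typescript
import { createHash } from "node:crypto";

// Illustrative only: verify downloaded release bytes against the SHA-256
// checksum published on the GitHub release page.
function sha256Hex(bytes: Uint8Array): string {
  return createHash("sha256").update(bytes).digest("hex");
}

function checksumMatches(bytes: Uint8Array, publishedHex: string): boolean {
  return sha256Hex(bytes) === publishedHex.trim().toLowerCase();
}
```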
The ai-ecoverse GitHub organization describes itself, accurately, as the world's trailing AI lab. SLICC is not enterprise software.
API keys and bring-your-own-tokens
SLICC delegates LLM calls to whichever provider you configure: Anthropic, AWS Bedrock, Azure AI Foundry, Cerebras, Google, Groq, OpenAI, OpenRouter, xAI, the Adobe LLM Provider, and others. The keys you enter are stored in your browser's local storage and sent only to the endpoint you configured.
- Provider APIs generally separate inference scopes from billing scopes. Even if a key were exfiltrated, the practical blast radius depends on the scope under which you minted it. We recommend minting the narrowest scope each provider allows and capping the spending limit.
- The underlying agent runtime is `pi-agent` by Mario Zechner, which is in production use across a large independent-developer community. We did not invent the provider wire-protocol layer ourselves.
If you are not comfortable handing API keys to an in-browser tool, do not configure providers you cannot afford to lose access to. Use a key with a low spending cap.
What SLICC does not do
- It does not run a SLICC-operated server that holds your conversations, your files, your prompts, or your model outputs. There is no SLICC user account, because there is no SLICC backend to log in to.
- It does not auto-update the agent runtime behind your back. Updates ship through GitHub releases, npm, and the Chrome Web Store on the normal channels for those distribution surfaces.
- It does not sell, share, or aggregate user data. The privacy policy at /privacy covers this in full.
How we recommend running SLICC
Based on how we run SLICC ourselves:
- Prefer the fresh-profile distributions (npm or macOS app) over the extension if you are unsure which to pick.
- If you do install the extension, install it into a Chrome profile dedicated to agent work, not the profile where your bank, your password manager, and your personal email are signed in.
- For untrusted content (a webpage you have not vetted, a long PDF, an email thread from outside your organization), delegate the read to a scoop rather than letting the cone read it directly. The scoop's tab boundary, its sandboxed filesystem, and its narrower context all reduce blast radius if the content turns out to carry adversarial instructions. Pull only the summary back to the cone. (OpenClaw recommends an analogous reader-agent pattern for the same reason.)
- Treat skills as code. A skill is a markdown file plus optional shell scripts that the agent will execute on your behalf. Install skills only from sources you trust, the same way you would install any other open-source dependency. Read the `SKILL.md` before adding it to the agent's reach.
- Use API keys with the narrowest provider scope and the lowest spending cap that still let you get your work done.
- Treat any page you ask SLICC to read as a potential injection vector. The list of pages you should fully trust is shorter than you think.
- If something goes wrong, reset with `nuke` and file a report.
Mapped against OWASP LLM Top 10 (2025)
The OWASP Top 10 for LLM Applications 2025 is the most widely cited public framework for risks specific to LLM-based systems. SLICC is an agentic browser tool, not a server-side LLM application, so several items in the list do not apply to its architecture, but that is itself worth saying. Better to make non-coverage explicit than to imply coverage that does not exist.
LLM01 Prompt Injection. The load-bearing risk for SLICC. Covered above in detail. SLICC's primary defenses are not at the prompt layer (where guardrails are soft) but at the architecture layer: the tab as process boundary, per-invocation approval on screencapture and oauth-token, secrets kept out of the model's context, scoop-to-tab ownership, and the model-strength constraint on the optional Adobe LLM Provider.
LLM02 Sensitive Information Disclosure. SLICC does not operate a server, so there is no central store of user conversations, files, or prompts to leak. Sensitive values you give SLICC live in your browser's local storage and are not placed into the LLM's context window. A model that is being prompt-injected cannot reveal a value it never saw in its prompt. The remaining vector is whatever data is in tabs the agent can read, which is the operator's responsibility (see the extension-mode tradeoff above).
LLM03 Supply Chain. SLICC is distributed through three channels, each with verifiable provenance: the Chrome Web Store (Google review), GitHub releases (signed artifacts and checksums in the public Apache-2.0 ai-ecoverse/slicc repository), and npx sliccy from npm. The agent runtime layer is pi-agent, an independent project in production use across a large developer community. Skills installed from third-party registries (ClawHub, Tessl) execute with the agent's full reach, so treat them as trusted code: install only from sources you trust.
LLM04 Data and Model Poisoning. Out of scope for SLICC. SLICC does not train, fine-tune, or curate training data for any of the language models it consumes. Model poisoning risk lives upstream with the model providers (Anthropic, OpenAI, Microsoft / Azure, AWS, Google, and others) and is governed by their respective security postures.
LLM05 Improper Output Handling. When the agent's output drives further action (a tool call, a navigation, a script execution), the runtime treats the output as data to be parsed, not as authoritative truth. Per-invocation approvals on the most sensitive verbs (screencapture, oauth-token) exist precisely to interrupt the model-output-to-side-effect path on the actions that matter most. The reader-scoop pattern described under Recommendations is also a deliberate output-handling control: a scoop summarizing untrusted content cannot extend its conclusions into the cone's tool surface unless the cone explicitly acts on the summary.
LLM06 Excessive Agency. The OWASP item that maps most directly to what SLICC is, by design, doing. SLICC gives a language model meaningful agency inside a browser. The mitigations are listed in full under Layered controls above: scoop-to-tab ownership bounds blast radius, the tab is the process boundary, sensitive verbs require user approval, secrets are out of context, and the operator can interrupt at any point by closing the tab. Excessive agency is the risk SLICC most consciously trades against.
LLM07 System Prompt Leakage. SLICC's system prompts are open source. They are visible in the ai-ecoverse/slicc repository. There are no API keys, secrets, or proprietary instructions embedded in them; treating them as public is correct.
LLM08 Vector and Embedding Weaknesses. Out of scope for the current SLICC release. SLICC does not maintain a vector store or embeddings-based retrieval layer. If that changes, this section will too.
LLM09 Misinformation. SLICC inherits whatever factual reliability and hallucination behavior the chosen language model has. SLICC's architecture does not, on its own, reduce model-level misinformation risk. The mitigation is on the operator: verify model outputs that drive consequential decisions, prefer frontier models with stronger factuality benchmarks, and use nuke to reset if a session has gone off the rails.
LLM10 Unbounded Consumption. SLICC users consume tokens against their own provider keys. The mitigation is operational: mint keys with the narrowest scope your work allows, set spending caps at the provider, and be aware that crontask and webhook can keep running while a tab is open and continue to consume tokens until the tab is closed. Provider-side rate limits are the hard backstop; SLICC does not currently enforce a token-budget cap of its own.
Reporting a security issue
For prompt injection reports, agent misbehavior, supply-chain concerns, or any other security issue, email info@sliccy.com or open a private security advisory at https://github.com/ai-ecoverse/slicc/security/advisories/new. We commit to:
- Acknowledging credible reports promptly.
- Resetting the homepage prompt-injection counter to zero on confirmed injection incidents.
- Crediting reporters who would like public credit, on request.
We are a small project. We respond personally. We would rather hear about a problem from you than read about it on Twitter.