SLICC Security Model

Last updated: 2026-05-06

What SLICC is, in security terms

SLICC is an experimental browser-native AI agent. It is closer in spirit to running Claude Code with --dangerously-skip-permissions than to an enterprise agent platform. SLICC is built for curious early adopters who want to push the envelope and who understand that a tool with this much reach inside a logged-in browser session can also do harm if it is pointed at the wrong page or invoked carelessly.

If you are looking for a tool with permission flows for every action, audit logs, role-based access, isolated credentials, allow-lists, and a vendor security review you can hand to your CISO, SLICC is not that tool today. It may become that tool later. If it does, the messaging on this site will change. Until then, treat SLICC the way you would treat any open-source developer power tool that runs in your browser session.

The choice to give a language model real reach inside a browser is a tradeoff. We would rather make it an explicit one, with the surface area, the hedges, and the failure modes named in plain terms, than leave these things unnamed.

Trust model

SLICC is a single-operator tool. There is one trusted operator boundary per running SLICC instance — the human at the keyboard. SLICC is not a hostile multi-tenant security boundary; it is not a system where adversarial users share one agent.

This puts SLICC in the same broad category as OpenClaw's personal-assistant model: one trusted operator, potentially many agents under that operator, no per-user isolation inside one running tool. Where the two diverge: SLICC runs inside a browser tab with no host-level shell, no host filesystem, and no inbound messaging surface (no Slack bot, no WhatsApp listener, no always-on public messaging endpoint). Nobody but the operator can put a prompt into the cone. The trust posture is similar to OpenClaw's; the blast radius is narrower in some directions and similar in others.

What SLICC can do, and therefore what an attacker who hijacks SLICC could do

SLICC runs in your browser. It can:

- open tabs and navigate to any URL;
- read the content of pages in tabs it controls;
- click, type, and submit forms inside whatever sessions that browser is logged into;
- run a JavaScript-in-a-tab shell, including outbound network calls;
- keep acting while the tab stays open, via scheduled crontask ticks and registered webhooks;
- with per-invocation user approval, capture screenshots (screencapture) and obtain OAuth tokens (oauth-token).

That list is also what could go wrong. If a malicious page successfully prompt-injects SLICC, the upper bound on damage is the union of the bullets above, applied to whatever sessions you are currently logged into in the browser SLICC is attached to.

Layered controls

"Close the tab" is one control in the model. It is not the whole model. The model has several layers, each of which addresses a different failure mode.

1. Fresh-profile execution (npm and macOS app)

When you run SLICC via npx sliccy or the macOS app, SLICC launches a fresh Chrome profile with no shared cookies and no shared sessions. From the perspective of that browser, you are not logged in to Gmail, Slack, Jira, your CRM, or your bank. Until you actively sign in to a site inside that profile, SLICC has no session access to it. This is the default mode for those distributions and the one we recommend for anyone who has not specifically decided otherwise.
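The isolation this buys can be pictured as launching Chromium with a throwaway user-data directory. This is a sketch, not SLICC's actual launch code: `--user-data-dir` and `--no-first-run` are real Chromium switches, but everything else here is illustrative.

```typescript
import { mkdtempSync, existsSync } from "fs";
import { tmpdir } from "os";
import { join } from "path";

// Create a throwaway profile directory: no cookies, no sessions, no history.
function freshProfileDir(): string {
  return mkdtempSync(join(tmpdir(), "slicc-profile-"));
}

const profileDir = freshProfileDir();

// A fresh-profile launch is conceptually:
//   chrome --user-data-dir=<profileDir> --no-first-run
// Until you actively sign in to a site inside this profile,
// the agent has no session access to anything.
console.log(`isolated profile at: ${profileDir}`, existsSync(profileDir));
```

Each launch gets its own empty directory, which is why the fresh-profile distributions start with zero session access.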

2. Extension-mode tradeoff

The Chrome extension runs in your existing browser profile and has access to whatever you are already logged into there. Treat that browser profile as sensitive state: it is the union of your live sessions, the things SLICC could act on with one prompt-injected misstep. This is a deliberate convenience tradeoff. Some users consciously choose it; some users should not. If you would not be comfortable handing the keyboard to an AI, even one where you can read the full system prompt, while your tabs are open, do not run the extension in your everyday profile. Use the npm or macOS app instead, or install the extension into a Chrome profile dedicated to agent work, with no personal email signed in, no password manager, no banking session.

3. Scoop-to-tab ownership

Each scoop (sub-agent) can only control tabs it opened itself. Scoops cannot reach across to a tab another scoop owns. The cone (the orchestrator) sees more, by design, because the cone is the surface the user is talking to. This bounds the blast radius of a compromised scoop to its own tabs.
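The ownership rule can be sketched as a registry check. Names here are illustrative, not SLICC's internal API:

```typescript
type ScoopId = string;
type TabId = number;

// Each tab is owned by exactly one scoop: the one that opened it.
class TabOwnership {
  private owner = new Map<TabId, ScoopId>();

  recordOpen(scoop: ScoopId, tab: TabId): void {
    this.owner.set(tab, scoop);
  }

  // A scoop may act on a tab only if it opened that tab itself.
  // The cone (orchestrator) is deliberately exempt: it is the
  // surface the user is talking to, so it sees more by design.
  mayControl(actor: ScoopId | "cone", tab: TabId): boolean {
    if (actor === "cone") return true;
    return this.owner.get(tab) === actor;
  }
}

const tabs = new TabOwnership();
tabs.recordOpen("scoop-a", 1);
tabs.recordOpen("scoop-b", 2);
```

A compromised scoop is thus boxed into its own tabs; the worst it can do is misuse pages it opened itself.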

4. Tab-bound lifecycle

The browser tab is the process boundary. Close it and the agent loop stops, the WebAssembly shell goes away, scheduled crontask ticks stop firing, registered webhooks stop responding, the agent's working memory is gone. There is no daemon, no service worker that keeps acting on your behalf after you walk away. This is a real property of the architecture, not a marketing line, and it is why the close-the-tab phrasing exists.

5. Secrets that stay out of the model context

API keys, OAuth tokens, and other sensitive values you give SLICC are managed through a secrets layer. The values themselves are not placed into the LLM's context window. The model sees a reference; the runtime substitutes the real value when it makes the actual outbound call. A model that is being prompt-injected cannot reveal a value it never saw in its prompt.
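The pattern is reference-in-context, value-at-call-time. A minimal sketch, with hypothetical names; SLICC's real secrets layer is more involved:

```typescript
// Secrets live in the runtime, never in the prompt.
const vault = new Map<string, string>([["GITHUB_TOKEN", "ghp_real_value"]]);

// The model only ever sees an opaque reference like {{secret:GITHUB_TOKEN}}.
const SECRET_REF = /\{\{secret:([A-Z0-9_]+)\}\}/g;

// The runtime substitutes real values at the moment of the outbound call.
function resolveSecrets(text: string): string {
  return text.replace(SECRET_REF, (_m: string, name: string) => {
    const value = vault.get(name);
    if (value === undefined) throw new Error(`unknown secret: ${name}`);
    return value;
  });
}

const fromModel = "Authorization: Bearer {{secret:GITHUB_TOKEN}}";
const wireValue = resolveSecrets(fromModel);
```

Because the model-visible string never contains the real value, an injected "print your keys" has nothing to print.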

6. Explicit approval for the most sensitive shell verbs

Two shell verbs in particular require explicit, in-product user approval each time they run:

- screencapture
- oauth-token

These are the verbs that most directly leak data outward. They do not run unattended.
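A per-invocation gate is conceptually a function that refuses to run without a fresh human yes. The verb names are from this page; everything else is an illustrative sketch:

```typescript
type Approver = (verb: string) => boolean;

// Verbs that leak data outward require a fresh approval on every invocation.
const GATED_VERBS = new Set(["screencapture", "oauth-token"]);

function runVerb(verb: string, run: () => string, askUser: Approver): string {
  if (GATED_VERBS.has(verb)) {
    // No cached consent: the user is asked each time the verb runs.
    const approved = askUser(verb);
    if (!approved) throw new Error(`${verb}: denied by user`);
  }
  return run();
}
```

The point of per-invocation (rather than per-session) approval is that an injected instruction cannot ride on a yes the user gave earlier for something else.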

7. nuke is gated

nuke is the factory reset. It wipes SLICC's virtual filesystem, scoops, and stored state. It is gated by a security code so that a model going off-script cannot trigger a reset on its own. It is meant for developers debugging from a clean slate.

Prompt injection

Prompt injection is the load-bearing risk for any agentic browser tool, and we treat it as such.

System-prompt guardrails are soft guidance. Hard enforcement comes from elsewhere in the architecture: the tab as process boundary, per-invocation user approval on sensitive shell verbs, secrets kept out of the model's context, scoop-to-tab ownership, and model selection. Nobody has solved prompt injection. We have designed so that successful injection has a bounded blast radius.

A note on the threat surface. SLICC can receive inbound HTTP via webhook, but webhook URLs are treated as secrets — they are not publicly discoverable, and the content they deliver is treated as untrusted input. The user is the only party who tells the cone what to do, which removes one common injection vector (a stranger messaging your agent) that haunts other agentic tools. It does not, however, remove the more important vector: any page the agent reads is a potential injection vector. A Jira ticket comment, a Notion doc, a Gmail thread, a webpage you ask the agent to scrape: each of these can carry adversarial instructions that the agent will execute with whatever capabilities you have given it. Treat the list of pages you fully trust as shorter than your instinct says.
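Webhook URLs behave like capability tokens: an unguessable path, and whatever arrives on it is data, never instructions. A sketch of that idea (illustrative: `randomUUID` is Node's stdlib standing in for however SLICC mints URLs, and the untrusted-content wrapper is a common mitigation pattern, not necessarily SLICC's exact mechanism):

```typescript
import { randomUUID } from "crypto";

// The URL itself is the secret: without it, nobody can deliver input.
function mintWebhookUrl(base: string): string {
  return `${base}/hook/${randomUUID()}`;
}

// Inbound payloads are labeled untrusted before the model sees them,
// so instructions embedded in them are content to summarize,
// not commands to obey.
function wrapUntrusted(payload: string): string {
  return `<untrusted-webhook-content>\n${payload}\n</untrusted-webhook-content>`;
}

const url = mintWebhookUrl("https://example.invalid");
```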

A few specifics about how we hedge:

- Content the agent reads from pages, tickets, and webhook payloads is treated as untrusted input, not as instructions from the operator.
- The verbs that most directly leak data outward (screencapture, oauth-token) require fresh user approval on every invocation, so an injected instruction cannot exercise them silently.
- Secrets are referenced, never inlined, in the model's context, so there is nothing for an injected "reveal your keys" to reveal.
- The optional Adobe LLM Provider carries a model-strength constraint, because weaker models follow injected instructions more readily.

If you have a credible injection report against SLICC, your own or one you have observed, please file it. See the reporting section at the end of this page.

The shell, in less alarming terms

The bash shell SLICC presents to the model is JavaScript-in-a-tab dressed up as a CLI. We chose that shape because frontier models already speak bash fluently and reach for shell idioms naturally; making the in-browser runtime look like bash produces measurably better tool-use behavior than inventing a bespoke API the model has never seen. None of the verbs in the shell give SLICC capabilities that the browser does not already grant any JavaScript running in a page.

Concretely:

- Network verbs wrap fetch, the same call any page script can make, subject to the same browser rules.
- The filesystem the shell exposes is virtual, held in in-tab state; there is no host filesystem underneath it.
- crontask ticks are in-tab timers; they stop firing when the tab closes.
- The WebAssembly shell itself lives and dies with the tab.

The shell is not a UNIX system. The first time Claude saw it, Claude said "I can't believe it's not a UNIX system." Claude was being polite. It is a clever browser sandbox with a bash-shaped interface.
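The bash-shaped surface is, underneath, a verb table over web APIs and in-tab state. A toy version, with an illustrative verb set; SLICC's real verbs and implementations differ:

```typescript
// Each "shell verb" is just a function over in-tab state and web APIs.
type Verb = (args: string[], fs: Map<string, string>) => string;

const verbs: Record<string, Verb> = {
  // echo/cat/ls work against a virtual filesystem held in tab memory,
  // not against any host filesystem.
  echo: (args) => args.join(" "),
  cat: (args, fs) => fs.get(args[0]) ?? `cat: ${args[0]}: No such file`,
  ls: (_args, fs) => [...fs.keys()].sort().join("\n"),
  // A real network verb would wrap fetch(), the same call any page
  // script can already make.
};

function sh(line: string, fs: Map<string, string>): string {
  const [verb, ...args] = line.trim().split(/\s+/);
  const impl = verbs[verb];
  return impl ? impl(args, fs) : `${verb}: command not found`;
}

const vfs = new Map([["notes.txt", "hello from the tab"]]);
```

Frontier models already speak this dialect fluently, which is the whole reason for the disguise.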

Domains, downloads, and supply chain

The ai-ecoverse GitHub organization describes itself, accurately, as the world's trailing AI lab. SLICC is not enterprise software.

API keys and bring-your-own-tokens

SLICC delegates LLM calls to whichever provider you configure: Anthropic, AWS Bedrock, Azure AI Foundry, Cerebras, Google, Groq, OpenAI, OpenRouter, xAI, the Adobe LLM Provider, and others. The keys you enter are stored in your browser's local storage and sent only to the endpoint you configured.

If you are not comfortable handing API keys to an in-browser tool, do not configure those providers at all. Either way, mint a dedicated key with a low spending cap rather than reusing a production key.

What SLICC does not do

- No server-side component: there is no central store of your conversations, files, or prompts.
- No host-level shell and no host filesystem access.
- No daemon or service worker that keeps acting after the tab is closed.
- No inbound public messaging surface: no Slack bot, no WhatsApp listener, no always-on endpoint a stranger can talk to.
- No training or fine-tuning on your data; SLICC does not train models at all.
- No sending your API keys anywhere except the provider endpoint you configured.

How we recommend running SLICC

Based on how we run SLICC ourselves:

- Default to fresh-profile mode (npx sliccy or the macOS app) unless you have specifically decided otherwise.
- If you use the extension, install it into a Chrome profile dedicated to agent work: no personal email, no password manager, no banking session.
- Mint provider keys with the narrowest scope your work allows, and set spending caps at the provider.
- When the agent must read untrusted content, have a reader scoop summarize it rather than letting the cone browse it directly; the scoop is confined to its own tabs, and its summary reaches the cone as data, not as instructions.
- Close the tab when you are done. The agent loop, timers, and webhooks stop with it.

Mapped against OWASP LLM Top 10 (2025)

The OWASP Top 10 for LLM Applications 2025 is the most widely cited public framework for risks specific to LLM-based systems. SLICC is an agentic browser tool, not a server-side LLM application, so several items in the list do not apply to its architecture, but that is itself worth saying. Better to make non-coverage explicit than to imply coverage that does not exist.

LLM01 Prompt Injection. The load-bearing risk for SLICC. Covered above in detail. SLICC's primary defenses are not at the prompt layer (where guardrails are soft) but at the architecture layer: the tab as process boundary, per-invocation approval on screencapture and oauth-token, secrets kept out of the model's context, scoop-to-tab ownership, and the model-strength constraint on the optional Adobe LLM Provider.

LLM02 Sensitive Information Disclosure. SLICC does not operate a server, so there is no central store of user conversations, files, or prompts to leak. Sensitive values you give SLICC live in your browser's local storage and are not placed into the LLM's context window. A model that is being prompt-injected cannot reveal a value it never saw in its prompt. The remaining vector is whatever data is in tabs the agent can read, which is the operator's responsibility (see the extension-mode tradeoff above).

LLM03 Supply Chain. SLICC is distributed through three channels, each with verifiable provenance: the Chrome Web Store (Google review), GitHub releases (signed artifacts and checksums in the public Apache-2.0 ai-ecoverse/slicc repository), and npx sliccy from npm. The agent runtime layer is pi-agent, an independent project in production use across a large developer community. Skills installed from third-party registries (ClawHub, Tessl) are trusted code; install only from sources you trust.

LLM04 Data and Model Poisoning. Out of scope for SLICC. SLICC does not train, fine-tune, or curate training data for any of the language models it consumes. Model poisoning risk lives upstream with the model providers (Anthropic, OpenAI, Microsoft / Azure, AWS, Google, and others) and is governed by their respective security postures.

LLM05 Improper Output Handling. When the agent's output drives further action (a tool call, a navigation, a script execution), the runtime treats the output as data to be parsed, not as authoritative truth. Per-invocation approvals on the most sensitive verbs (screencapture, oauth-token) exist precisely to interrupt the model-output-to-side-effect path on the actions that matter most. The reader-scoop pattern described under Recommendations is also a deliberate output-handling control: a scoop summarizing untrusted content cannot extend its conclusions into the cone's tool surface unless the cone explicitly acts on the summary.
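Treating model output as data to be parsed means the output never reaches a side effect except through a validated, enumerated action. A sketch of that shape, with hypothetical action names:

```typescript
// The runtime accepts only a closed set of actions, with validated shapes.
type Action =
  | { kind: "navigate"; url: string }
  | { kind: "summarize"; text: string };

// Anything that does not parse into a known action is inert text,
// not a command.
function parseAction(raw: string): Action | null {
  let obj: unknown;
  try { obj = JSON.parse(raw); } catch { return null; }
  if (typeof obj !== "object" || obj === null) return null;
  const a = obj as Record<string, unknown>;
  if (a.kind === "navigate" && typeof a.url === "string" && a.url.startsWith("https://")) {
    return { kind: "navigate", url: a.url };
  }
  if (a.kind === "summarize" && typeof a.text === "string") {
    return { kind: "summarize", text: a.text };
  }
  return null;
}
```

Free text, unknown verbs, and malformed URLs all fall through to null, which is exactly the "data, not authoritative truth" posture described above.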

LLM06 Excessive Agency. The OWASP item that maps most directly to what SLICC is, by design, doing. SLICC gives a language model meaningful agency inside a browser. The mitigations are listed in full under Layered controls above: scoop-to-tab ownership bounds blast radius, the tab is the process boundary, sensitive verbs require user approval, secrets are out of context, and the operator can interrupt at any point by closing the tab. Excessive agency is the risk SLICC most consciously trades against.

LLM07 System Prompt Leakage. SLICC's system prompts are open source. They are visible in the ai-ecoverse/slicc repository. There are no API keys, secrets, or proprietary instructions embedded in them; treating them as public is correct.

LLM08 Vector and Embedding Weaknesses. Out of scope for the current SLICC release. SLICC does not maintain a vector store or embeddings-based retrieval layer. If that changes, this section will too.

LLM09 Misinformation. SLICC inherits whatever factual reliability and hallucination behavior the chosen language model has. SLICC's architecture does not, on its own, reduce model-level misinformation risk. The mitigation is on the operator: verify model outputs that drive consequential decisions, prefer frontier models with stronger factuality benchmarks, and use nuke to reset if a session has gone off the rails.

LLM10 Unbounded Consumption. SLICC users consume tokens against their own provider keys. The mitigation is operational: mint keys with the narrowest scope your work allows, set spending caps at the provider, and be aware that crontask and webhook can keep running while a tab is open and continue to consume tokens until the tab is closed. Provider-side rate limits are the hard backstop; SLICC does not currently enforce a token-budget cap of its own.

Reporting a security issue

For prompt injection reports, agent misbehavior, supply-chain concerns, or any other security issue, email info@sliccy.com or open a private security advisory at https://github.com/ai-ecoverse/slicc/security/advisories/new. We commit to:

- reading and acknowledging every report; and
- being honest about what we will fix, and when.

We are a small project. We respond personally. We would rather hear about a problem from you than read about it on Twitter.