Your assistant can read files, run commands, browse the web, and control your screen. The permissions model controls which of those actions happen automatically and which ones need your approval.
Every permission check is deterministic — enforced by traditional software, not judged by the AI. The approval buttons you see are hard-coded responses, not natural language interpreted by the model. This means a prompt injection can't talk its way past a permission boundary: no model output is ever interpreted at that boundary.
Every tool your assistant uses has a risk level:
When a tool needs your approval, you see:
When prompted, you're not limited to a binary yes/no. You can choose:
| Decision | What it does |
|---|---|
| Allow | One-time approval for this specific action |
| Allow for 10 minutes | Auto-approve similar actions for the next 10 minutes |
| Allow for this conversation | Auto-approve similar actions until this conversation ends |
| Always Allow | Create a persistent rule — never ask again for this pattern |
| Don't Allow | Block this specific action |
| Always Deny | Create a persistent rule — always block this pattern |
For “Always Allow” and “Always Deny,” you also choose the scope: this specific file, anything in this directory, this project, or everywhere. These decisions are saved as trust rules and accumulate over time. The more you use your assistant, the fewer prompts you see for actions you've already approved.
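The scope choice determines how broad the saved pattern is. A minimal sketch of that translation — the function name, scope labels, and pattern shapes here are illustrative, not the exact on-disk format:

```python
import os

def scope_to_pattern(scope: str, path: str, project_root: str) -> str:
    """Translate a chosen scope into the glob pattern saved in a trust rule.
    Scope labels and pattern shapes are hypothetical, for illustration only."""
    return {
        "file": path,                           # this specific file
        "directory": os.path.dirname(path) + "/*",  # anything in this directory
        "project": project_root + "/**",        # this project
        "everywhere": "**",
    }[scope]

print(scope_to_pattern("directory", "~/projects/app/src/main.py", "~/projects/app"))
# ~/projects/app/src/*
```

Once a decision is widened this way, any future action whose path matches the saved pattern is resolved without a prompt.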
Think of your assistant's workspace as a separate computer inside your computer. It's a self-contained environment where the assistant can run freely — creating files, modifying data, running commands — without needing your approval. Anything that happens inside this inner computer stays contained.
Inside the workspace (~/.vellum/workspace/):
Outside the workspace (your host machine):
When the assistant needs to do something outside its workspace, it doesn't reach out directly. Instead, it tells a separate process — one that lives outside the sandbox — to perform the action and report back. That external process is deterministic, traditional software with no AI involved. The AI stays inside the sandbox at all times.
The sandbox is enforced at the OS level (sandbox-exec on macOS, bubblewrap on Linux). Path traversal attacks (using ../ to escape the workspace) and symlink escapes are blocked.
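The actual enforcement happens at the OS level, but the shape of the path check is worth seeing. A minimal sketch, assuming Python: resolving symlinks and ../ segments before comparing against the workspace root defeats both escape tricks.

```python
import os

# Resolve the workspace root itself so symlinks in the home path can't confuse
# the comparison. (Path is from the docs; the check logic is illustrative.)
WORKSPACE = os.path.realpath(os.path.expanduser("~/.vellum/workspace"))

def is_inside_workspace(path: str) -> bool:
    """Canonicalize first — ../ segments and symlinks are resolved away —
    then test containment against the workspace root."""
    resolved = os.path.realpath(os.path.expanduser(path))
    return resolved == WORKSPACE or resolved.startswith(WORKSPACE + os.sep)

print(is_inside_workspace("~/.vellum/workspace/notes.txt"))   # True
print(is_inside_workspace("~/.vellum/workspace/../secrets"))  # False
```

The key design point is ordering: canonicalize first, compare second. Checking the raw string for ../ would miss symlink escapes entirely.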
Not all shell commands are equal. Your assistant parses commands using a tree-sitter parser and classifies them based on what programs they invoke:
Low risk — read-only programs: ls, cat, grep, find, git status, git log, git diff, node, python, jq, tree, du, df, ping, dig, and similar.
Medium risk — programs that modify state: sed, awk, chmod, chown, curl, wget, non-read-only git subcommands (like git commit, git push), and any program not in the known-safe list.
High risk — dangerous programs: sudo, rm, dd, mkfs, reboot, shutdown, kill, iptables, and other system administration tools.
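The classification above can be sketched as a lookup over the invoked program — a simplified sketch, since the real classifier parses full command syntax with tree-sitter rather than splitting on whitespace, and the sets below are subsets of the documented lists:

```python
READ_ONLY = {"ls", "cat", "grep", "find", "jq", "tree", "du", "df", "ping", "dig"}
DANGEROUS = {"sudo", "rm", "dd", "mkfs", "reboot", "shutdown", "kill", "iptables"}
GIT_READ_ONLY = {"status", "log", "diff"}  # read-only git subcommands

def classify(command: str) -> str:
    """Return 'low', 'medium', or 'high' based on the program invoked."""
    tokens = command.split()
    program = tokens[0]
    if program in DANGEROUS:
        return "high"
    if program == "git":
        # git is low risk only for its read-only subcommands
        return "low" if len(tokens) > 1 and tokens[1] in GIT_READ_ONLY else "medium"
    if program in READ_ONLY:
        return "low"
    return "medium"  # anything not in the known-safe list defaults to medium

print(classify("git diff"))      # low
print(classify("git push"))      # medium
print(classify("sudo rm -rf /")) # high
```

Note the default: unrecognized programs fall to medium, so a tool the classifier has never seen still triggers a prompt rather than running silently.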
This parsing also generates “action keys” for pattern matching. When you approve git push, the system creates a rule that matches future git push commands without also matching git reset --hard.
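An action key is essentially the program plus its subcommand, with arguments stripped. A sketch of the idea — the function name and the set of subcommand-taking programs are assumptions, and the real system derives this from the tree-sitter parse rather than whitespace splitting:

```python
# Hypothetical set of programs whose second token is a subcommand.
SUBCOMMAND_PROGRAMS = {"git", "npm", "docker", "cargo"}

def action_key(command: str) -> str:
    """Collapse a command to 'program' or 'program subcommand', dropping
    arguments, so approvals generalize over flags but not subcommands."""
    tokens = command.split()
    if tokens[0] in SUBCOMMAND_PROGRAMS and len(tokens) > 1:
        return f"{tokens[0]} {tokens[1]}"
    return tokens[0]

print(action_key("git push origin main"))  # git push
print(action_key("git reset --hard"))      # git reset
```

Because "git push origin main" and "git push" collapse to the same key while "git reset --hard" collapses to a different one, a rule saved for the first never fires for the second.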
Your approval decisions are stored as trust rules in ~/.vellum/protected/trust.json. Each rule has:
Rules are matched using minimatch glob patterns. You can have a broad “allow git everywhere” rule and a narrow “deny git push --force everywhere” rule, and the deny will win because deny beats allow at equal priority.
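That resolution order can be sketched in a few lines. This uses Python's fnmatch as a stand-in for minimatch, and the rule shapes are illustrative, not the actual trust.json schema:

```python
from fnmatch import fnmatch

# Illustrative rules; real ones live in ~/.vellum/protected/trust.json.
rules = [
    {"pattern": "git *",             "decision": "allow"},  # broad allow
    {"pattern": "git push --force*", "decision": "deny"},   # narrow deny
]

def resolve(action_key: str) -> str:
    """Collect every matching rule's decision; deny beats allow at
    equal priority, and no match at all means 'ask the user'."""
    matches = [r["decision"] for r in rules if fnmatch(action_key, r["pattern"])]
    if "deny" in matches:
        return "deny"
    if "allow" in matches:
        return "allow"
    return "ask"

print(resolve("git commit"))        # allow
print(resolve("git push --force"))  # deny  (both rules match; deny wins)
print(resolve("npm install"))       # ask   (no rule matches)
```

The force-push case is the interesting one: both rules match, and the deny wins even though the allow is broader.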
You can inspect and edit your trust rules directly in the file, or manage them through the Settings > Trust tab.
Tools provided by third-party skills (ones you've installed, not the ones bundled with Vellum) are always prompted by default, regardless of risk level. This prevents a malicious or buggy skill from executing actions without your knowledge.
Bundled skill tools (Browser, Gmail, Calendar, etc.) follow the normal risk-based rules.
Computer use actions — clicking, typing, scrolling, opening apps — have ask rules at the highest priority level. Each action is prompted individually. This means your assistant can't silently control your screen; you approve each step.
You can create “Always Allow” rules for specific computer use patterns if you want the assistant to work more autonomously during screen control sessions.
On top of the assistant's own permission system, macOS has its own layer:
| Permission | What it unlocks | Where to grant it |
|---|---|---|
| Accessibility | Controlling mouse and keyboard | System Settings > Privacy & Security > Accessibility |
| Screen Recording | Seeing your screen content | System Settings > Privacy & Security > Screen Recording |
| Microphone | Voice input | System Settings > Privacy & Security > Microphone |
These are the “can it access this at all” layer. The assistant's Allow/Don't Allow prompts are the “should it access this right now” layer. Both must pass for an action to execute.
When someone messages your assistant through Telegram or Slack and the assistant needs to do something that requires permission, it routes the approval request to you (the guardian) through your active channel.
Approval grants are:
Guardian approvals are always downgraded to one-time grants, even if you select “Always Allow.” This prevents a single cross-channel approval from creating a persistent rule — you'd need to create that rule directly from the desktop app.
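The downgrade is a simple cap on the grant before it's stored. A minimal sketch, assuming hypothetical channel and grant labels:

```python
def effective_grant(requested: str, channel: str) -> str:
    """Approvals arriving over a remote channel (Telegram, Slack) are
    capped at one-time; persistent rules require the desktop app.
    Labels here are illustrative, not the real identifiers."""
    if channel != "desktop":
        return "one_time"
    return requested

print(effective_grant("always_allow", "telegram"))  # one_time
print(effective_grant("always_allow", "desktop"))   # always_allow
```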
When you deny an action:
If you choose “Always Deny,” future attempts to use that tool with a matching pattern are blocked silently — the assistant won't even ask.