Use @file references, allowed-tools, subagents and MCP tools to upgrade slash commands from shortcuts into a team workflow library.
The first two articles covered what a slash command is (a Markdown file) and how the ! prefix pipes shell output into the context. This one goes heavier: how to make a single command orchestrate Claude Code's most powerful capabilities — file references, subagents, MCP tools — while keeping permissions under control.
At this level, a command stops being a "prompt shortcut" and becomes a small, reusable workflow.
## @file References: a Cleaner Static Injection Than !cat

!`cat file.md` gets file content into the context, but it goes through the shell, and that has costs: a subprocess every time, extra escaping when paths contain spaces or special characters, and Claude Code treating the result as plain text rather than as a file.
The @ reference is native:
```markdown
---
description: Review changes against project standards
---
Reference materials:

@.claude/context/coding-standards.md
@.claude/context/security-checklist.md

Review the changes in the diff below and call out every violation of these standards.

!`git diff HEAD`
```
`@path` tells Claude Code to attach the file to the conversation. The difference:
| Scenario | Use `!cat` | Use `@` |
|---|---|---|
| Path comes from an argument (`$ARGUMENTS`) | ✅ Required | ❌ `@` doesn't expand variables |
| Fixed standards, templates, conventions | ⚠️ Works but heavy | ✅ Native, no shell |
| Dynamic output (diff, logs, test results) | ✅ Required | ❌ Not possible |
Rule of thumb: `@` for static files, `!` for dynamic output, `!cat $ARGUMENTS` for parameterized paths.
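All three styles can coexist in one command file. A minimal sketch along those lines — the command name, context file, and section headings here are illustrative, not prescribed:

```markdown
---
description: Review one file against project conventions
---
## Standards (static, attached via @)
@.claude/context/coding-standards.md

## File under review (parameterized, so it goes through the shell)
!`cat $ARGUMENTS`

## Recent history of that file (dynamic output)
!`git log --oneline -5 -- $ARGUMENTS`

Review the file against the standards above and list every violation with a line reference.
```

Each mechanism carries exactly the job it's best at: the standards load natively, the target path expands from the argument, and the log reflects live repo state.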
## allowed-tools: Permission Boundaries for Commands

By default, a command can reach every tool available in the session. That isn't always what you want — /review is a read-only task, and you don't want it "helpfully" editing a line of code along the way.
Add allowed-tools to the frontmatter and only the listed tools are available while the command runs:
```markdown
---
description: Read-only code review
allowed-tools: Read, Grep, Glob, Bash(git diff:*), Bash(git log:*)
---
Review the difference between this branch and master. Read only, no edits.

!`git diff master...HEAD`
```
A few things worth noting:
- `Bash(git diff:*)` is fine-grained authorization — only bash invocations starting with `git diff` are allowed; `git push` gets blocked
- `Read` / `Grep` / `Glob` explicitly grants the read-only tools
- `Edit` / `Write` aren't listed, so the model can't modify code even if it wanted to

The reverse works too — a high-privilege command like /deploy:
```markdown
---
description: Deploy the current branch
allowed-tools: Bash(kamal deploy:*), Bash(git push:*), Read
---
```
Hardcoding what a command can do is far more reliable than relying on manual bash-prompt approvals during every run.
## Subagents: Keep Heavy Research Out of the Main Context

Some tasks will blow out your main context — walking dozens of files to find every call site of a function, running codebase-wide stats, scraping a big log for analysis. Done inline, thousands of lines of output stay in the context, and every later turn gets slower and dumber.
The right move is to hand the work to a subagent. It runs in its own context and only brings the conclusion back. A command just needs to say so explicitly:
```markdown
---
description: Deep research on all usages of a symbol
allowed-tools: Task
---
Use an Explore subagent to deeply research all usages of: $ARGUMENTS

The subagent should cover:
- Every call site (including test files)
- The business scenarios covered
- Whether any equivalent alternative implementations exist

When the subagent returns, give me a summary under 300 words — do not paste raw code snippets.
```
Trigger `/trace SomeClass#some_method` and Claude Code spins up an Explore subagent to sweep the codebase in parallel. The main conversation receives only the distilled conclusion. No grep output, no file fragments. Context stays clean.
Pushed further:
```markdown
---
description: Research three candidate approaches in parallel
allowed-tools: Task
---
Spin up 3 subagents in parallel, each investigating one of these implementation paths:

1. Existing ActiveJob + Sidekiq
2. Solid Queue
3. A lightweight in-house queue

Each agent reports: effort, risk, and how invasive the change is to existing code. Once all three return, I'll compare.
```
Three agents run concurrently, the main conversation waits once. This is one of the biggest leverage points a command can offer: turning "research tasks that cost a lot of tokens to resolve" into a one-shot trigger.
## MCP Tools: Reaching External Systems

If the session has MCP servers connected (Linear, GitHub, Sentry, an in-house database proxy, etc.), a command can direct the model to use them:
```markdown
---
description: Turn a Linear issue into an implementation plan
allowed-tools: mcp__linear__*, Read, Grep
---
Pull the full description and comments for Linear issue $ARGUMENTS.

Combined with the current state of the codebase (use Grep/Read to find the relevant files yourself), produce an implementation todo:

- Which files need changing
- Whether each step should be its own commit or bundled
- Any ambiguities that need a PM conversation first

Don't start writing code. Just the plan.
```
`mcp__linear__*` authorizes every Linear MCP tool — the model can fetch issue details, comments, status. The whole command becomes the entry point for a "ticket to implementation plan" workflow.

The catch: MCP tool names in `allowed-tools` need the full prefix (`mcp__<server>__<tool>`), or the authorization doesn't apply.
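If granting the whole server feels too broad, the same prefix scheme lets you list individual tools. A sketch — the exact tool names depend on the server's manifest, so `get_issue` and `list_comments` here are hypothetical:

```markdown
---
description: Fetch a ticket, read-only
allowed-tools: mcp__linear__get_issue, mcp__linear__list_comments, Read
---
Fetch Linear issue $ARGUMENTS and summarize its status, blockers, and open questions in three bullets.
```

The command can now read that one ticket but can't update issues or touch any other Linear tool, which is the right posture for a summarize-only workflow.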
## Why a Command Can't Call Another Command

A common misconception: putting /test inside /review's file and expecting it to trigger the test command. It doesn't. Slash commands expand exactly once, at the top-level user input. A /xxx inside a command body is just text — the model reads it, but Claude Code won't execute it.
If you want to compose commands, the right approaches are:
Option A: extract shared logic into context files and reference them from each command via `@` or `!cat`

```markdown
@.claude/context/review-checklist.md
@.claude/context/security-checklist.md
```
Option B: say "do it the same way /review does" and repeat the key instructions
Not elegant but effective. As long as the prompt is clear, the model follows the same playbook.
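A sketch of Option B — the wording is illustrative, and /review plus the severity labels are assumed to exist in your own command set:

```markdown
---
description: Security-focused review
---
Review the diff below following the same playbook as /review: same checklist,
same severity labels. Additionally, treat anything touching authentication or
input handling as high priority.

!`git diff HEAD`
```

The model never executes /review; the prose simply tells it to apply the same conventions, which works because those conventions are stated (or referenced) clearly enough to follow.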
Option C: have one command use the Task tool to spawn a subagent, and reuse the same context files inside the subagent's prompt
Real workflow orchestration lives here. The parent command dispatches and summarizes; the subagent does the actual step.
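A sketch of Option C, reusing a shared checklist inside the subagent's prompt — the context file path is hypothetical:

```markdown
---
description: Full review via a subagent
allowed-tools: Task
---
Spin up a subagent to review the current diff. Include the contents of
@.claude/context/review-checklist.md in the subagent's prompt so it applies
the same standards as /review.

When the subagent returns, give me only the findings, grouped by severity.
Do not paste raw diff hunks into this conversation.
```

The parent stays small: it dispatches, names the shared standard, and constrains what comes back into the main context.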
The anti-pattern to avoid: writing a single command hundreds of lines long, trying to do everything in one shot. Maintenance cost explodes, and a single run burns your token budget.
## Choosing the Mechanism

With `@`, `!`, subagents, and MCP tools in play, the mechanisms for injecting "external capability" into a command are all on the table. How to choose:
| Need | Pick |
|---|---|
| Fixed standards, templates, context docs | `@file` reference |
| Live state (diff, logs, tests, DB queries) | `!` shell injection |
| Parameterized file content | `!cat $ARGUMENTS` |
| Context-heavy research, cross-file search | subagent via Task |
| External systems (issue tracker, monitoring, prod data) | MCP tools + `allowed-tools` |
| Multi-step sequential or parallel workflows | parent command dispatching subagents |
Declare permission boundaries explicitly with allowed-tools, especially for commands the whole team shares. Hardcoding what a command can do beats trusting manual approval on every run.
## Putting It All Together

A command that turns a Linear ticket into an implementation plan:
```markdown
---
description: Produce an implementation plan from a Linear ticket
allowed-tools: mcp__linear__*, Task, Read, Grep, Glob, Bash(git log:*)
---
## Context
@.claude/context/architecture.md
@.claude/context/coding-standards.md

## Current repo state
!`git log --oneline -10`

## Task
Fetch Linear ticket $ARGUMENTS — description, comments, and linked tickets.

Then spin up two subagents in parallel:
1. First agent: scan the codebase for relevant existing implementations and reusable modules
2. Second agent: assess risk — which hot code paths does this change touch, where are the test coverage gaps

Once both return, output:
- Implementation steps (ordered by dependency)
- Risk list
- Suggested commit granularity

Don't start writing code.
```
Run `/plan ENG-4213` and a single command walks the whole flow: fetch ticket → parallel codebase research → risk assessment → synthesized plan. All the user has to do is read the output and decide whether to start.
That closes the arc for the first three articles in the series: define reusable prompts (intro) → inject dynamic context (context) → orchestrate tools and subagents (this one). At this level, `.claude/commands/` isn't a shortcut directory anymore. It's a small workflow library your team shares.