Most AI coding tools require you to be at your desk, watching the terminal, babysitting the process. This is different. Your assistant spins up a Claude Code session in tmux, monitors the output, reviews the resulting PR for correctness, and drives CI to green — while you are doing something else entirely. You delegate a task and get back a merged pull request.
The prompt
Send this in Slack from wherever you are. Replace the task description with whatever needs doing.
I need you to run a Claude Code session for this: [describe the task]
Spin up a tmux session, launch Claude Code with the task, and manage the whole loop:
- Monitor it every few minutes
- When it opens a PR, read the diff and approve if it looks right — flag anything off
- Watch CI. If a check fails, fix it or feed the error back to Claude Code
- When it's merged, send me the PR link and a quick summary
Handle it end to end. Let me know here if you get stuck on something that needs a decision.
How it works
Your assistant uses shell access to start a named tmux session and launch Claude Code inside it with your task as the opening prompt. The session name is predictable so it can reattach and read output at any point. It polls the session every few minutes, watching for Claude Code to complete its work, encounter blockers, or open a pull request.
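The launch-and-poll loop can be sketched in a few shell functions. This is a minimal sketch, not the assistant's actual implementation: the session-name scheme, the task text, and the `claude` invocation are illustrative assumptions.

```shell
# Predictable session name so the assistant can reattach at any point
session_name() {
  printf 'cc-%s' "$1"
}

# Launch Claude Code detached in tmux, with the task as the opening prompt
# (assumes "claude" is the Claude Code CLI on PATH)
start_session() {
  tmux new-session -d -s "$(session_name "$1")" "claude '$2'"
}

# Read the last 40 lines of session output without attaching
peek() {
  tmux capture-pane -p -t "$(session_name "$1")" | tail -n 40
}

# Example poll step (not run here): check whether a PR link has appeared
# start_session upload-retry "Add retry logic to the upload client"
# peek upload-retry | grep -q 'github.com/.*/pull/' && echo "PR opened"
```

Because the session is detached and named, the polling step is just `peek` plus a pattern match, repeated on a timer; nothing needs to stay attached to the terminal.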
When a PR appears, your assistant reads the diff via the GitHub API. It checks that the changes make sense for the task, that nothing obviously wrong was introduced, and approves if it looks good. Then it watches the CI pipeline. If a check fails it reads the log, decides whether to fix it directly or feed the error back into the Claude Code session as a follow-up prompt, and repeats until green.
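The review-and-CI half of the loop maps onto the GitHub CLI. A hedged sketch, assuming `gh` is authenticated and `$SESSION` is the tmux session from the launch step; the PR argument and the summary wording are illustrative:

```shell
# Read the full diff for review
pr_diff() {
  gh pr diff "$1"
}

# Approve once the changes check out against the task
approve() {
  gh pr review "$1" --approve --body "Reviewed against the task; changes look correct."
}

# Block until all checks complete; gh exits nonzero if any check failed
watch_ci() {
  gh pr checks "$1" --watch
}

# Pull the failing-step logs from the most recent workflow run for triage
failed_log() {
  run_id=$(gh run list --limit 1 --json databaseId --jq '.[0].databaseId')
  gh run view "$run_id" --log-failed
}

# Format the final Slack message (hypothetical helper)
summary_msg() {
  printf 'PR merged: %s (%s)' "$1" "$2"
}

# On a red check, the error can be fed back into the session, e.g.:
# tmux send-keys -t "$SESSION" "CI failed with: <error summary>" Enter
```

The choice between fixing a failure directly and feeding it back via `tmux send-keys` is a judgment call the assistant makes from the log contents; the loop simply repeats `watch_ci` until it exits clean.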
The whole loop runs in the background. You get a Slack message when the PR is merged, with a link and a plain-English summary of what was done. If it gets stuck somewhere it cannot resolve, it messages you with exactly what it needs and waits.
Why this pattern matters
Claude Code is a powerful coding agent, but it still needs a human in the loop to review output, handle CI failures, and decide when to merge. Your assistant fills that role. It does not write the code — it manages the agent that does. This is the orchestrator pattern: one assistant coordinates another AI tool, turning a multi-hour interactive session into a fully delegated background job. You stop being the operator and start being the stakeholder.
The outcome
Complex coding tasks that would normally take 2 to 3 hours of active, interrupted attention can be handed off end-to-end. You get a merged PR, not a half-finished suggestion in a chat window. The shift is real: your assistant stops being a tool you use and becomes a manager you delegate to. You come back to a done thing, not a half-done thing waiting for you.