# persistproc

A shared process layer for multi-agent development workflows

## What is persistproc?
Persistproc is an MCP server and command-line tool that lets agents and humans see and control long-running processes like web servers. The goal is to reduce the amount of copying and pasting you need to do while coding with AI, make it easier for you to use multiple agents, and stay tool-agnostic.
There is no config file. Processes are managed entirely at runtime. This is not a replacement for supervisord.
## Example use case: basic web development
Suppose you're working on a todo list app, and it has a dev server you normally start with `npm run dev`. This server watches your code for changes, typechecks it, lints it, and hot-reloads the page. When there's an error, it prints the error to your terminal.
If you're working with an LLM agent such as Cursor or Claude Code and you see an error, you might copy/paste it from your terminal into the agent and ask how to fix it. Then the agent makes some changes, maybe you hit another error, so you copy/paste again, the agent makes another change, and so on.
If the agent could see those errors directly, you wouldn't need to do anything! With persistproc, that's possible. Instead of running `npm run dev`, run `persistproc npm run dev`, and the agent can instantly read its output or even restart it. Meanwhile, you can still see its output in your original terminal and kill it with Ctrl+C, like you normally do.
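For example:

```sh
# Before: only this terminal sees the dev server's output
npm run dev

# After: persistproc also captures the output so agents can read it
persistproc npm run dev
```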
```mermaid
graph TB
    User[User] -->|"persistproc npm run dev"| PP[persistproc server]
    PP <-->|"manages / logs"| NPM["npm run dev<br/>(web server)"]
    PP -.->|"streams output"| User
    Agent[Cursor] -.->|"output()<br/>restart()"| PP

    style PP fill:#e1f5fe,stroke:#01579b,stroke-width:2px
    style NPM fill:#fff3e0,stroke:#e65100,stroke-width:2px
    style User fill:#f3e5f5,stroke:#4a148c,stroke-width:2px
    style Agent fill:#e8f5e9,stroke:#1b5e20,stroke-width:2px
```
> **Note:** Why not just use Cursor and let the agent open a terminal?
>
> - Not everyone likes using the terminal in Cursor/VSCode. Engineers have many different workflows.
> - Only Cursor's agents can see the process, not Claude Code, Gemini CLI, etc.
## Example use case: complex web development
Suppose you need to run four processes to get your web app working locally. Maybe an API, frontend server, SCSS builder, and Postgres. Each service emits its own logs.
If you run into an error while testing locally, you can go read all four log files to find out what happened.
But if you started those processes with persistproc, then the agent can read everything at once and possibly give you a quicker diagnosis.
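Each process is started the same way, by prefixing its usual command with `persistproc`. The commands below are placeholders rather than real project scripts; substitute whatever you normally run for each service.

```sh
# Hypothetical commands for the four services
persistproc npm run api          # API server
persistproc npm run dev          # frontend server
persistproc npm run scss:watch   # SCSS builder
persistproc postgres -D ./pgdata # Postgres
```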
```mermaid
graph TB
    User[User] -->|"starts processes"| PP[persistproc server]

    subgraph processes["Managed Processes"]
        API[API Server]
        FE[Frontend Server]
        SCSS[SCSS Builder]
        DB[Postgres]
    end

    PP <-->|"manages / logs"| processes
    Agent1[Claude Code] -.->|"read logs<br/>diagnose issues"| PP
    Agent2[Cursor] -.->|"read logs<br/>diagnose issues"| PP

    style PP fill:#e1f5fe,stroke:#01579b,stroke-width:2px
    style API fill:#fff3e0,stroke:#e65100,stroke-width:2px
    style FE fill:#fff3e0,stroke:#e65100,stroke-width:2px
    style SCSS fill:#fff3e0,stroke:#e65100,stroke-width:2px
    style DB fill:#fff3e0,stroke:#e65100,stroke-width:2px
    style User fill:#f3e5f5,stroke:#4a148c,stroke-width:2px
    style Agent1 fill:#e8f5e9,stroke:#1b5e20,stroke-width:2px
    style Agent2 fill:#e8f5e9,stroke:#1b5e20,stroke-width:2px
    style processes fill:#f5f5f5,stroke:#999,stroke-width:1px,stroke-dasharray: 5 5
```
> **Note:** What if my processes are started by another system?
>
> Then your best bet is to write log files to a predictable location and tell the agent where they are, rather than trying to use persistproc. Or, you can follow this issue on GitHub.
>
> Currently, persistproc is most useful when you want your agent to be able to manage processes, not just see their output.
## Available Tools
`persistproc` exposes a standard Model Context Protocol (MCP) server on `http://127.0.0.1:8947`. You can use any MCP-compatible client to interact with it programmatically.
The server exposes the following tools:
| Tool | Description |
| --- | --- |
| `ctrl` | Unified process control: start, stop, or restart processes. |
| `list` | List all managed processes and their status. Can optionally filter by `pid`, command, or working directory, and provides log paths. |
| `output` | Retrieve captured output from a process. |
This list is intentionally short because LLMs perform worse with too many tools, to the point where some IDEs (such as Cursor) cap the number of tools you can enable.
## Getting started
### 1. Install persistproc
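persistproc is a Python tool; assuming it is published to PyPI under the same name, installation looks something like:

```sh
# Assumes the package is published to PyPI as "persistproc"
pip install persistproc
```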
### 2. Start the server and configure your agent

Run this in a dedicated terminal and leave it running:
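```sh
persistproc serve
```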
The first thing `persistproc serve` outputs is configuration instructions for various agents, so follow those instructions if you haven't already.
### 3. Start a process

In another terminal, `cd` to your project's directory and run your command via `persistproc`.
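For example, with the dev server from earlier:

```sh
persistproc npm run dev
```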
The command is sent to the server, and its output is streamed to your terminal. You can safely close this terminal, and the process will continue to run.
> **Tip:** Or just ask your agent to "run your dev server using persistproc," and it will probably find the right command by looking at your `package.json` file and run it using `persistproc`.
With this, your agent can now use the available tools to manage your development environment.
## Example Agent Interaction

Once your agent is connected, you can ask it to manage your processes. Assuming you have started a web server with `persistproc npm run dev` (PID 12345), you can now interact with it.
- **You:** "List the running processes."
    - **Agent:** Calls `list()` and shows you the running `npm run dev` process.
- **You:** "The web server seems stuck. Can you restart it?"
    - **Agent:** Identifies the correct process and calls `ctrl(action="restart", pid=12345)`.
- **You:** "Show me any errors from the web server."
    - **Agent:** Calls `output(pid=12345, stream="stderr")` to retrieve the latest error logs.
## Development

Run persistproc in a fully configured virtualenv with `./pp`. Run other commands, such as `pytest`, in a virtualenv with `uv run`.
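For example (assuming `./pp` simply forwards its arguments to the `persistproc` CLI):

```sh
# Start the server from the repo's virtualenv
./pp serve

# Run the test suite through uv
uv run pytest
```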
## License
This project is licensed under the MIT License.