CLAUDE.md is read-only. ctlsurf is read-write.

A notebook your coding agent reads AND writes to.

CLAUDE.md is a static file you maintain — the agent reads it but never writes back. ctlsurf is a shared notebook the agent updates as it works. Next session, it picks up where it left off.

See ctlsurf in Action

The Workflow: In our interactive demo, you'll see how ctlsurf transforms AI coding workflows:

  1. Highlight requirements in your documentation and instantly create tasks for your AI agent
  2. Attach Skills (playbooks) to guide agents through complex workflows with guardrails
  3. Watch agents work as they check out tasks, update progress, and document their decisions
  4. Review structured completions showing exactly what was done, assumed, and skipped

Example: An agent implementing "user authentication" documents: "Added JWT-based auth" (summary), "Used existing User model" (assumption), and "Skipped refresh token implementation" (simplified). No more mystery about what your AI actually did.

You already tried CLAUDE.md.

You write a CLAUDE.md with your architecture decisions, coding standards, context. The agent reads it. Good. But then it finishes a task and... nothing comes back. What did it decide? What did it skip? What should the next session know?

CLAUDE.md is read-only. The agent consumes it but never writes back. So every session starts from scratch. You re-explain. You re-discover. You lose momentum.

You need a notebook the agent reads AND writes to.

📊 CLAUDE.md vs ctlsurf
                       CLAUDE.md               ctlsurf
Direction              You → Agent             You ↔ Agent
Who writes             You, manually           Both, via MCP
Survives sessions      Your instructions do    Everything does
Agent accountability   None                    What it built, decided, skipped
Task handoff           Copy-paste into chat    Agent checks for tasks on start

How the notebook works

The agent reads it. The agent writes to it. You can see everything.

🧠 Agent reads what was built yesterday

Architecture decisions, coding standards, past attempts — all in the notebook. The agent reads it at session start. No more re-explaining.

🎯 See something wrong? Turn it into a task

Highlight it, annotate it, turn it into an instruction. The agent picks it up immediately. You're steering, not starting over.

Works with Claude Code, Cursor, Windsurf

ctlsurf connects via MCP — the open protocol for AI tools. Any MCP-compatible agent can read and write to your notebook.

👁️ See every decision the agent made

What was built, what was tried, what was skipped and why. The notebook is human-readable. No git-diffing. No guessing.

The agent tells you what it skipped

When an agent completes a task, it documents what it built, what it assumed, and what it quietly dropped.

Structured Task Completion

No more guessing what the AI did. Every completed task includes:

  • Summary - What was actually done
  • Assumptions - What the agent assumed (required)
  • Attempted but failed - What was tried but didn't work
  • Simplified or skipped - What was quietly dropped

The "simplified or skipped" field is the most important - it catches when agents give up on parts of tasks without telling you.

✅ Implement user authentication
Completed by AI Agent
Summary: Added JWT-based auth with login/logout endpoints
Assumptions: Used existing User model, assumed bcrypt for hashing
⚠️ Simplified: Skipped refresh token implementation, used simple JWT expiry instead
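
Expressed as structured data, a completion like the one above might look like this. The field names here are illustrative assumptions, not ctlsurf's documented schema:

```json
{
  "task": "Implement user authentication",
  "status": "completed",
  "summary": "Added JWT-based auth with login/logout endpoints",
  "assumptions": [
    "Used existing User model",
    "Assumed bcrypt for password hashing"
  ],
  "attempted_but_failed": [],
  "simplified_or_skipped": [
    "Skipped refresh token implementation, used simple JWT expiry instead"
  ]
}
```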

🔄 Task Reopen Workflow

When you spot something the agent simplified or skipped that shouldn't have been, you can reopen the task with feedback:

  1. Click Reopen on any completed task
  2. Provide feedback explaining what needs to be addressed (e.g., "Actually implement refresh tokens - this is a security requirement")
  3. Agent receives context about why the task was reopened and what was missing
  4. Task completes properly with the full implementation this time

This creates an accountability loop - agents can't silently cut corners because you'll see exactly what they skipped and can push back.

How it works

Three steps to persistent AI context

1. Connect via MCP

Add ctlsurf to your MCP config. Works with Claude Code, Cursor, Windsurf, and any MCP-compatible tool.
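An MCP configuration entry typically looks like the sketch below. The command, package name, and environment variable shown are illustrative assumptions, not ctlsurf's documented values — check the setup docs for the exact entry:

```json
{
  "mcpServers": {
    "ctlsurf": {
      "command": "npx",
      "args": ["-y", "ctlsurf-mcp"],
      "env": {
        "CTLSURF_API_KEY": "<your-api-key>"
      }
    }
  }
}
```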

2. Agent reads the notebook

On session start, the agent reads your architecture decisions, past work, and pending tasks. No re-explaining.

3. Agent writes to the notebook

As it works, the agent documents what it built, what it decided, and what it skipped. Next session picks up where this one left off.

What is MCP (Model Context Protocol)?

MCP is an open standard created by Anthropic that allows AI assistants to connect to external tools and data sources. ctlsurf is built as an MCP server, meaning any MCP-compatible AI coding assistant can connect to it seamlessly.

Setup is simple: Add a few lines to your MCP configuration file, and your AI agent gains access to 50+ ctlsurf tools for managing pages, tasks, skills, and documentation.

No code changes required. Your existing AI coding workflow stays the same - ctlsurf just gives your agent a persistent memory and knowledge base to work with.

Works with your existing tools

Claude Code · Cursor · Windsurf · VS Code · Any MCP Client

Built for how you work

From solo developers to engineering teams

👥 Engineering Teams

Maintain shared context across sprints, agents, and tools. Everyone stays aligned on decisions and progress.

🎯 Founders & Tech Leads

Understand why features shipped a certain way, with a traceable history of decisions and trade-offs.

🤖 AI-Driven Workflows

Coordinate long-running tasks with evolving state instead of isolated prompts. Context persists across sessions.

Skills: Reusable Agent Playbooks

Define workflows with guardrails that guide AI agents through complex tasks consistently.

What Are Skills?

Skills are structured workflow templates that guide AI agents through complex, multi-step tasks. Think of them as playbooks or runbooks that ensure consistency and quality across your team's AI-assisted work.

Each skill contains:

  • Inputs - Variables the agent needs to collect before starting (e.g., endpoint URL, error message)
  • Workflow Steps - Sequential actions to follow, with optional checkpoints for human review
  • Guardrails - Safety rules the agent must never violate (e.g., "Never modify production database directly")

Example Use Cases: API debugging workflows, code review checklists, deployment procedures, security audit processes, feature implementation patterns.

🔧 API Debug Workflow

Systematic approach to debugging

1. Reproduce the issue
2. Check logs for errors
3. Identify root cause
4. Implement and test fix
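
A skill like this API Debug Workflow could be written down as a structured definition along these lines. The schema is a hypothetical sketch to show how inputs, steps, checkpoints, and guardrails fit together, not ctlsurf's exact format:

```json
{
  "name": "api-debug-workflow",
  "inputs": [
    { "name": "endpoint_url", "description": "The failing endpoint" },
    { "name": "error_message", "description": "The observed error" }
  ],
  "steps": [
    { "action": "Reproduce the issue" },
    { "action": "Check logs for errors" },
    { "action": "Identify root cause", "checkpoint": true },
    { "action": "Implement and test fix" }
  ],
  "guardrails": [
    "Never modify the production database directly"
  ]
}
```

A checkpoint step pauses the agent for human review before it continues, which is where the "guardrails" framing earns its keep on riskier workflows like deployments.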

Why Skills Matter

  • ✓ Consistent workflows across team members
  • ✓ Built-in guardrails prevent mistakes
  • ✓ Reusable across projects
  • ✓ Fork and customize from marketplace

🏪 Skill Marketplace

Browse and fork skills from the community marketplace. Find battle-tested workflows for common development tasks and customize them for your team's needs.

  • Personal Skills - Private workflows for your own use
  • Project Skills - Shared within a specific codebase/team
  • Public Skills - Published to the marketplace for anyone to use

Simple pricing

Start free, upgrade when you need more

Free

For individual developers

$0/month
  • 5 projects
  • 100 requests/min rate limit
  • 500 MB storage
  • 10 project skills
  • MCP access
Get Started

What ctlsurf is NOT

Not another CLAUDE.md — that's read-only. Not a memory layer like claude-mem — that's invisible. Not an observability dashboard like claude-devtools — that's passive. ctlsurf is a notebook both you and the agent write to.

Get Started Free →