How we moved 16 automated jobs, months of context, and an entire AI operations stack from a third-party harness to native tools. No downtime. No lost context. Here's exactly how.
// situation report

Anthropic is ending subscription-based access for third-party harnesses. If you've been running an AI agent through OpenClaw, Cursor, or another harness on your Claude subscription, that path is closing. Everyone is moving to either the API (pay-per-token) or Anthropic's native tools. This guide is for people choosing the native path: specifically, migrating operational AI agent workflows into Cowork, Dispatch, and Claude's scheduled task system.
The Migration
What a real transition looks like
This isn't theoretical. We ran this migration on a live operation and documented every step. Here's the scope:
16 Automated Jobs
43 KT Documents
6h Total Prep Time
0 Downtime
The outgoing agent had been running for 20 days with persistent file-based memory, cron jobs, health monitors, deployment scripts, and multi-project management across dozens of active systems. The incoming native agent needed to absorb all of that and be ready to operate autonomously.
transition-verify.sh
$ check-health-states
stbcps-health: 0 (healthy)
agentgate-health: 0 (healthy)
remotebb-health: 0 (healthy)
orale-health: 0 (healthy)
tauntaun-health: 0 (healthy)
sites-health: 0 (healthy)
$ verify-scheduled-tasks
morning-ops-briefing   daily 7:00 AM     ready
intel-scan             mon/wed/fri 6 AM  ready
trading-daily-summary  weekdays 1:05 PM  ready
build-system-check     daily 9:30 AM     ready
evening-ops-wrap       daily 8:00 PM     ready
$ echo "all systems verified"
all systems verified
The Process
Five phases. That's it.
Every agent migration follows the same structure, regardless of what you're migrating from or to. The phases scale with complexity — a simple setup might take an hour, a complex one might take a day — but the steps don't change.
01. The outgoing agent documents everything
The hardest step. Your current agent needs to write down everything it knows — not just what it thinks is important. This takes multiple passes. The first pass catches maybe 60% of what matters. The critical discoveries come later.
02. The incoming agent reads and verifies
The new agent reads the documentation, then tests every claim against live systems. URLs get hit. State files get read. Logs get checked. TDD applied to knowledge transfer — don't trust it, verify it.
03. Map the capability gaps
Different platforms have different strengths. The old agent might have native SSH; the new one controls the terminal through a scripting bridge. The old one messages on Telegram; the new one pushes notifications to your phone. Map every difference honestly.
04. Build replacements disabled
Create every scheduled task, every monitoring check, every automated workflow in the new system, but don't turn any of them on. Test each one manually. Two agents monitoring the same thing means duplicate alerts or, worse, conflicting actions.
05. The flip
Disable overlapping old crons. Enable new tasks. Run each once to grant permissions. Monitor for 48 hours. Keep the old documentation as reference. Done.
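The "disable overlapping old crons" step is worth scripting so it can be reviewed before it touches anything live. A sketch that works on a saved copy of the crontab rather than the live one; the job names and the `# MIGRATED` marker are illustrative, not a prescribed convention:

```shell
#!/bin/sh
# flip-crons.sh: comment out old cron entries that the new scheduled
# tasks replace, leaving independent monitors untouched. Operates on a
# dump of the crontab so the diff can be reviewed before installing.

MIGRATED="morning-briefing|intel-scan|evening-wrap"   # illustrative job names

crontab_dump=old.cron
printf '%s\n' \
  '0 7 * * * /opt/jobs/morning-briefing.sh' \
  '0 6 * * 1,3,5 /opt/jobs/intel-scan.sh' \
  '*/5 * * * * /opt/monitors/health-check.sh' \
  > "$crontab_dump"   # demo dump; in practice: crontab -l > old.cron

# Prefix replaced jobs with a marker comment; keep everything else live.
awk -v pat="$MIGRATED" '$0 ~ pat { print "# MIGRATED " $0; next } { print }' \
  "$crontab_dump" > new.cron

cat new.cron   # review the diff, then install with: crontab new.cron
```

Note the health-check line survives untouched: infrastructure that runs independently of the agent stays on the old schedule, which is exactly the "don't replace what works" rule from the lessons below.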
Phase 1 Deep Dive
Why one pass isn't enough
The outgoing agent will think it's done after the first documentation pass. It's not. In our migration, the most critical discoveries came in passes 4 through 6. Each time the human pushed back with "what else?", the agent found entire systems it hadn't mentioned.
Passes 1-3: The obvious stuff
Project directories, configs, deploy scripts, API keys, server details. What any reasonable person would document.
✓ 33 files produced
Pass 4: The stuff you forgot existed
The human pushed: "What else?" The agent found a completely separate AI agent running on the same machine, with its own workspace and messaging bindings, plus two databases nobody had mentioned.
⚠ Critical systems discovered
Pass 5: The stuff that wasn't in any project folder
Pushed again. Found a video production pipeline, client work, and 37 undocumented projects scattered across the filesystem.
⚠ 37 projects not in any index
Pass 6: The full filesystem sweep
Creative work buried in Downloads. A CRM with hundreds of contacts. Multi-agent coordination protocols. Custom skills. Files that would have been lost forever.
⚠ Found critical data in unexpected locations
The rule: Keep asking "what else?" until the answer is genuinely nothing. Budget for at least 4 passes. The first pass gets 60%. The last pass finds the things that would have burned you.
Documentation Structure
Six questions per system
Every project, every service, every automated job should answer these six questions. If the documentation doesn't cover all six, the incoming agent will hit gaps.
What is it? One paragraph, no jargon.
Where does it live? Exact file paths, URLs, servers.
How does it run? Commands to start, stop, check status.
Who cares about it? People, roles, relationships.
What's broken? Known bugs, workarounds, gotchas.
What's next? Pending work, deadlines, decisions.
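One way to make sure no document skips a question is to stamp every KT doc out of the same skeleton. A sketch; the script name and heading format are ours, not a prescribed layout:

```shell
#!/bin/sh
# new-kt-doc.sh: create a knowledge-transfer doc that forces an answer
# to all six questions. Usage: ./new-kt-doc.sh project-alpha
name="${1:-project-alpha}"
cat > "kt-$name.md" <<EOF
# $name

## What is it?
(one paragraph, no jargon)

## Where does it live?
(exact file paths, URLs, servers)

## How does it run?
(commands to start, stop, check status)

## Who cares about it?
(people, roles, relationships)

## What's broken?
(known bugs, workarounds, gotchas)

## What's next?
(pending work, deadlines, decisions)
EOF
echo "wrote kt-$name.md"
```

An empty placeholder left in a finished doc is then a visible gap, which is easier to catch in review than a question that was never asked.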
The file structure that scales
knowledge-transfer/
  MASTER.md             # read this first: index of everything
  kt-project-alpha.md   # one per project
  kt-project-beta.md
  kt-infrastructure.md  # servers, deploys, DNS
  kt-cron-monitors.md   # every automated job
  kt-api-keys.md        # key locations (never values)
  kt-contacts.md        # people + context
  kt-workflows.md       # recurring processes
  GUIDE-playbook.md     # reusable procedures
The master index is the single most valuable artifact. A new agent with a good index can find anything. A new agent with 43 unindexed documents is barely better off than starting from scratch.
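The index doesn't have to be maintained by hand. A sketch that rebuilds MASTER.md from the first heading of each `kt-*.md` file on disk; the demo fixture files and the one-liner are ours:

```shell
#!/bin/sh
# build-index.sh: regenerate MASTER.md from the kt-*.md files,
# using each file's first level-1 heading as its description.

# Demo fixture: two minimal KT docs (in practice these already exist).
printf '# Project Alpha\n\nDetails...\n' > kt-project-alpha.md
printf '# Cron + Monitors\n\nDetails...\n' > kt-cron-monitors.md

{
  echo '# MASTER index'
  echo
  for f in kt-*.md; do
    title=$(sed -n 's/^# //p' "$f" | head -n 1)   # first "# " heading
    echo "- [$title]($f)"
  done
} > MASTER.md

cat MASTER.md
```

Regenerating the index after every documentation pass also doubles as a completeness check: a file with no heading, or a heading with no file, shows up immediately.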
Phase 3 Deep Dive
Mapping the gaps honestly
Third-party harnesses and native tools have different strengths. Here's the actual comparison from our migration:
| Capability | Third-Party Harness | Native Tools |
| --- | --- | --- |
| Scheduled tasks | Platform cron (LLM sessions) | Native scheduled tasks with cron |
| Memory | File-based, loads on startup | Reads same files via computer access |
| Terminal access | Direct shell | Via scripting bridge (osascript) |
| Messaging | Native Telegram integration | App notifications (different channel) |
| Document creation | Manual (CLI tools, templates) | Dedicated skills (docx, pptx, xlsx, pdf) |
| Browser | Limited | Full Chrome control + computer use |
| Deploys | Direct rsync/scp | Via terminal commands through scripting |
Key insight: don't replace what already works. Our health monitors were pure bash scripts on launchd — they ran regardless of which AI agent was active. Same for project daemons. Only replace the parts that actually require the agent's involvement.
Phase 4 Deep Dive
Test-driven migration
For each automated task being replaced, follow this exact pattern:
1. Read the original implementation. Understand not just what it does, but why. A health monitor that alerts on failures 1, 3, and 10, then every 10th thereafter, exists because someone got spammed during an outage. Keep that logic.
2. Write the replacement. Include complete context in the task prompt — file paths, URLs, expected formats, success criteria. It needs to run autonomously.
3. Run every check manually first. Before scheduling anything, execute each check by hand. Verify the output matches expectations.
4. Create the tasks disabled. Both agents monitoring the same thing creates chaos. Stage everything dark.
5. Document the flip checklist. Exact steps: what to disable, what to enable, what to verify after.
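The alert-spacing rule from step 1 is small enough to carry into any replacement verbatim. A sketch of the "alert on failures 1, 3, 10, then every 10th" logic, written as a standalone function:

```shell
#!/bin/sh
# should_alert: given the consecutive-failure count, decide whether to
# notify. Fires on failures 1, 3 and 10, then every 10th after that,
# so an outage pages immediately but doesn't spam all night.
should_alert() {
  n="$1"
  case "$n" in
    1|3|10) return 0 ;;
  esac
  [ "$n" -gt 10 ] && [ $((n % 10)) -eq 0 ]
}

# Walk a simulated outage to show the cadence.
for n in 1 2 3 9 10 11 20 25 30; do
  if should_alert "$n"; then echo "failure $n: ALERT"; else echo "failure $n: quiet"; fi
done
```

Re-reading logic like this before rewriting it is the whole point of step 1: the thresholds look arbitrary until you know the outage story behind them.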
manual-verification.sh
$ verify-replacement morning-ops-briefing
health state files (6): all readable ✓
site spot checks (6): all HTTP 200 ✓
pipeline freshness: data fresh ✓
daemon logs: readable ✓
coordination msgs: readable ✓
notification delivery: working ✓
$ verify-replacement intel-scan
web search: functional ✓
dedup tracking: readable ✓
notification delivery: working ✓
All replacements verified. Staged dark. Ready to flip.
Lessons
What we learned the hard way
Push for more passes
The outgoing agent will think it's done after 2-3 passes. The most critical discoveries came in passes 4-6. The human's job is to keep asking "what else?" until there's genuinely nothing left.
The master index is everything
43 unindexed documents are barely better than starting from scratch. One good index file that links to everything else is the single most valuable artifact in the whole process.
Don't replace what works
Bash scripts, system daemons, and infrastructure that runs independently of the AI agent should be left alone. Only replace the parts that require the agent's judgment.
Test before you trust
Documentation drifts from reality the moment it's written. Every documented fact should be verified against the live system before you depend on it.
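That verification can be a loop rather than a judgment call. A minimal sketch, assuming documented facts reduce to two kinds of claim, readable files and 200-status URLs; the function names and demo paths are ours:

```shell
#!/bin/sh
# verify-claims.sh: test documented facts against the live system
# before depending on them. File claims are checked for readability,
# URL claims for HTTP 200.

check_file() {  # usage: check_file <path>
  if [ -r "$1" ]; then echo "ok   file $1"; else echo "FAIL file $1"; return 1; fi
}

check_url() {   # usage: check_url <address>
  code=$(curl -s -o /dev/null -w '%{http_code}' "$1")
  if [ "$code" = "200" ]; then echo "ok   url $1"; else echo "FAIL url $1 (HTTP $code)"; return 1; fi
}

# Demo: the paths stand in for state files named in the KT docs.
check_file /etc/hosts
check_file /nonexistent/state.json || echo "claim rejected: docs have drifted"
```

Every `FAIL` is a documentation bug to fix before the flip, not after.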
Verify your delivery channels
Whatever notification system you're counting on might not work. We discovered Apple Notes had permission issues from the scheduled task context. Always have a verified primary channel and a fallback.
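"A verified primary channel and a fallback" can be one small wrapper. A sketch: try the primary notifier if it exists on this machine, otherwise append to a log the human actually reviews. The `notify-primary` command name is hypothetical; on macOS it might be an `osascript` call:

```shell
#!/bin/sh
# notify.sh: deliver a message through the primary channel, falling
# back to an append-only log so nothing is silently dropped.
FALLBACK_LOG="notify-fallback.log"

notify() {  # usage: notify "message"
  msg="$1"
  if command -v notify-primary >/dev/null 2>&1; then
    notify-primary "$msg"            # hypothetical primary channel
  else
    # Fallback: timestamped line in a log reviewed daily.
    echo "$(date '+%Y-%m-%d %H:%M:%S') $msg" >> "$FALLBACK_LOG"
    echo "primary channel unavailable, logged to $FALLBACK_LOG"
  fi
}

notify "build-system-check: all green"
```

The key property is that delivery failure is itself observable: the fallback log tells you the primary channel broke, which is how we caught the permission issue above.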
Memory is a file organization problem
Both agents read files. If the files are well-organized, the transition is smooth. No amount of clever prompting compensates for messy file structure.
Write to the same system
The incoming agent should not create a parallel memory structure. Read from and write to the existing daily logs, project files, and coordination docs. One source of truth.
Your Turn
The migration checklist