Why Your AI Wakes Up Every Morning With No Memory (And How to Fix It)
I was two weeks into a gnarly refactor when it happened.
Claude and I had been pair programming on an authentication system—tracking down race conditions, filing away “fix this later” issues, building up this rich context about why we made certain decisions. RS256 instead of HS256 for key rotation. Session middleware patterns. The whole architecture was in our shared understanding.
Then I hit compaction.
I came back the next day, opened a new Claude session, and asked: “Where did we leave off?”
Claude: “I don’t have information about previous sessions in my context.”
All of it. Gone.
The discovered bugs. The architectural decisions. The “by the way, we should fix this” notes. Everything we’d built up over dozens of hours—evaporated.
I spent 30 minutes re-explaining what we’d been working on. And even then, I couldn’t remember all the issues Claude had surfaced. How many edge cases had we found? Which ones were critical? What was blocking what?
This is what I call the amnesia problem. And it’s not just annoying—it’s a fundamental limitation of how we work with AI agents.
The TODO.md Trap
So I did what everyone does: I created a TODO.md file.
## TODO
- [ ] Add rate limiting to login endpoint
- [ ] Improve password hashing
- [ ] Fix email validation
- [ ] Build dashboard (depends on auth being done)

Seemed reasonable. Every project has a TODO list, right?
Three days later, it was already a graveyard.
Half the items were done but still unchecked. A quarter were outdated. New issues Claude discovered during implementation? Lost in chat history. Dependencies? I had “(depends on auth being done)” in a parenthetical. Good luck having Claude parse that reliably after compaction.
Steve Yegge calls these “swamps of rotten half-implemented plans.” He’s right.
Here’s why markdown TODOs fail with AI agents:
They become stale instantly - You finish a task, forget to update the markdown. The agent reads it, doesn’t know what’s actually done.
No dependency tracking - Can I start the dashboard? Is auth done? The agent has to guess.
Context evaporates - “Fix email validation” tells you nothing. Which email? Where? What’s broken? Why does it matter? After compaction, this line is worthless.
Agents can’t use them reliably - Claude reads the whole list, can’t tell what’s ready to work on, and often just... ignores it.
That TODO.md file? After compaction, it’s all you have. And it’s not enough.
Enter Beads: A Memory System Built for Agents
That’s when I found beads.
Steve Yegge built it specifically to solve the amnesia problem. It’s an issue tracker, but not like Jira or Linear. It’s built for AI agents, not humans.
Here’s the breakthrough: You don’t manage beads. Claude does.
I ran bd init in my project. That’s it. One command.
Then I just... kept working with Claude. And something wild happened.
Claude started filing issues automatically. Not when I asked—just when it discovered something worth remembering. While implementing the auth system, Claude would say:
“I notice there’s a potential race condition in the session handler. Filing it:

bd create "Fix race condition in user session" --deps discovered-from:myapp-a4f2 -p 0”
It wasn’t asking permission. It wasn’t waiting for me to maintain a TODO list. It was building its own memory.
At the end of each session, Claude would update the notes on active issues:
bd update myapp-a4f2 --notes "COMPLETED: JWT login with RS256
KEY DECISION: Chose RS256 over HS256 for key rotation capability
IN PROGRESS: Session middleware
BLOCKERS: Need decision on token expiry (1hr vs 24hr trade-off)
NEXT: Implement refresh endpoint after expiry decision"

This wasn’t for me. This was for future-Claude.
The Compaction Survival Test
The next day, I hit compaction again. New session, empty context.
Me: “What should we work on?”
Claude, without my asking, automatically ran these commands:

bd ready --json
bd list --status in_progress --json
bd show myapp-a4f2

Then it replied:
“I can see we’re working on authentication (myapp-a4f2). Last session we completed the JWT login flow, decided to use RS256 for key rotation capability, and got blocked waiting for your decision on token expiry time.
The trade-off is: 1-hour tokens (better security, worse UX) vs 24-hour tokens (better UX, but wider attack window if tokens are compromised). What’s your preference?
Also, I’ve filed 3 issues we discovered along the way:
Rate limiting for login endpoint (P0)
Bcrypt rounds for password hashing (P1)
Race condition in session handler (P0)
Should we resolve the token expiry question first, or tackle one of the P0 issues?”
The conversation history was gone. But Claude had full context.
Every decision. Every discovered bug. Every blocker. Every trade-off. Right there.
No re-explaining. No “wait, what were we doing?” No hunting through old conversations.
This is what beads does.
TodoWrite vs Beads: Two Memory Systems
Here’s where people get confused. Claude actually has two memory systems, and they serve different purposes.
TodoWrite: Working Memory (This Hour)
TodoWrite is Claude’s scratch pad for the current session:
✓ [completed] Implement login endpoint
→ [in_progress] Add password hashing
[pending] Create session middleware

It shows you real-time progress. Gets marked complete as work happens. Disappears when the session ends.
Perfect for: “What’s Claude doing right now?”
Beads: Long-Term Memory (This Week/Month)
Beads is Claude’s episodic memory across sessions:
bd show myapp-a4f2
Notes: "COMPLETED: Login with bcrypt (12 rounds)
KEY DECISION: JWT (not sessions) for stateless auth
IN PROGRESS: Session middleware
NEXT: Need input on token expiry (1hr vs 24hr)"

Survives compaction. Captures meaning, not just tasks. Persists across all sessions.
Perfect for: “What happened last week? What decisions were made?”
The Handoff Pattern
Session start: Claude reads bead notes → creates TodoWrite items for immediate work
During work: TodoWrite gets marked complete
Reach milestone: Claude updates bead notes with outcomes + context
Session end: TodoWrite disappears, bead survives with enriched notes
After compaction: TodoWrite is gone forever. Bead notes reconstruct everything.
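The bead-notes convention Claude writes (COMPLETED / KEY DECISION / IN PROGRESS / BLOCKERS / NEXT) is just labeled lines, which is exactly why future-Claude can reconstruct state from it. A minimal sketch of parsing that convention — the parser itself is my own illustration, not part of the beads tool:

```python
# Parse the "LABEL: value" notes convention shown above into a dict.
# The format comes from the article; this parser is an illustrative
# sketch, not something beads ships.

def parse_bead_notes(notes: str) -> dict[str, list[str]]:
    fields: dict[str, list[str]] = {}
    current = None
    for line in notes.splitlines():
        line = line.strip()
        if not line:
            continue
        label, sep, rest = line.partition(":")
        if sep and label.isupper():      # a new "LABEL:" section starts
            current = label
            fields.setdefault(current, [])
            if rest.strip():
                fields[current].append(rest.strip())
        elif current:                    # continuation of the last section
            fields[current].append(line)
    return fields


notes = (
    "COMPLETED: JWT login with RS256\n"
    "KEY DECISION: Chose RS256 over HS256 for key rotation capability\n"
    "IN PROGRESS: Session middleware\n"
    "BLOCKERS: Need decision on token expiry (1hr vs 24hr trade-off)\n"
    "NEXT: Implement refresh endpoint after expiry decision"
)
state = parse_bead_notes(notes)
print(state["BLOCKERS"][0])  # → Need decision on token expiry (1hr vs 24hr trade-off)
```

Loosely structured as the convention is, it round-trips cleanly — which is all a fresh session needs.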
The Magic: Dependencies That Prevent Mistakes
This is where beads gets brilliant. It supports four relationship types:
1. blocks - Hard Blocker
bd create "Build user dashboard" -p 1
# Created myapp-e3f7
bd create "Implement authentication" -p 0
# Created myapp-g2h9
bd dep add myapp-e3f7 myapp-g2h9
# → "myapp-g2h9 blocks myapp-e3f7"

Now the dashboard won’t show up in bd ready until auth is closed. Claude can’t accidentally start building the dashboard before auth exists.
2. discovered-from - The Audit Trail
This is the agent’s secret weapon:
# Claude finds bug B while implementing feature A
bd create "Fix memory leak in session handler" \
--deps discovered-from:myapp-a4f2 -p 0
Creates an audit trail of how work was found. Those “oh by the way” issues Claude mentions? They now get filed permanently, linked to context.
After a week of work, you have an automatically maintained discovery backlog. Prioritized. Linked. Ready to tackle.
3. parent-child - Hierarchy
bd create "Epic: Authentication system" -t epic
# Created myapp-j4k2
bd create "Add OAuth" --parent myapp-j4k2
# Created myapp-l8m1 (auto-linked)

Good for breaking down large features.
4. related - Soft Connection
bd dep add myapp-b7c3 myapp-d1e8 -t related
# "These touch the same code but don't block each other"

What You Actually Do (Almost Nothing)
Your workflow:
One-time setup:
cd your-project
bd init

Done. That’s it.
Work with Claude normally:
“Let’s build user authentication”
Claude automatically:
Creates issues as work emerges
Tracks dependencies
Updates notes at milestones
Files discovered work with proper links
Checks ready work at session start
You just work. The memory management happens in the background.
When you DO interact with beads (rarely):
# Weekly review
bd stats
# Check what’s blocked
bd blocked
# Context restore after time away
bd show myapp-a4f2

The agent does the rest.
Why Claude Loves It
The most interesting thing about beads isn’t the technology. It’s how Claude uses it.
Claude’s behavior changes:
1. Proactive filing: Claude files issues without being asked. "I notice X could be improved. Filing: bd create..."
2. Better planning: Claude uses dependencies to think through work order before starting.
3. Context awareness: Claude references past decisions from bead notes. “Last session we decided to use RS256 because...”
4. Discovery tracking: Claude treats discovered work as first-class, not throwaways.
Why? Because beads is built for how Claude actually works:
Structured data (JSON)
Clear state (open/in_progress/closed)
Explicit relationships (dependencies)
Queryable memory (show me what’s ready)
It’s not forcing Claude into a human workflow. It’s giving Claude the database it naturally wants.
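To make "queryable memory" concrete, here's a sketch of how an agent-side script might pick the next task from `bd ready --json` output. The JSON field names (`id`, `title`, `priority`) are my assumption for illustration, not the confirmed beads schema:

```python
import json

# Hypothetical output of `bd ready --json`; field names are assumed
# for illustration — check the beads docs for the real schema.
ready_json = """
[
  {"id": "myapp-b7c3", "title": "Rate limiting for login endpoint", "priority": 0},
  {"id": "myapp-d1e8", "title": "Bcrypt rounds for password hashing", "priority": 1},
  {"id": "myapp-f2a9", "title": "Race condition in session handler", "priority": 0}
]
"""

def next_issue(ready: list[dict]) -> dict:
    # Lowest priority number wins (P0 before P1); ties break on id so
    # the choice is deterministic across sessions.
    return min(ready, key=lambda i: (i["priority"], i["id"]))

issue = next_issue(json.loads(ready_json))
print(f"Start with {issue['id']}: {issue['title']}")  # → Start with myapp-b7c3: Rate limiting for login endpoint
```

Structured state plus a deterministic query is the whole trick: no prose parsing, no guessing what's unblocked.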
When Beads is Overkill
Not every task needs beads. Use this test:
Use Beads when:
Work spans multiple sessions
You might hit compaction before finishing
There are dependencies or blockers
You’re discovering related work along the way
You need to resume after time away
Example: “Build authentication system” (multi-day, many parts)
Use TodoWrite when:
Work completes in this session
It’s a simple linear checklist
All context is in the conversation
No dependencies or discovery
Example: “Refactor this 200-line file” (done in an hour)
The test: “Will I need this context in 2 weeks?”
Yes → Beads
No → TodoWrite
The Git Sync: How It Works Across Machines
Beads stores everything in two places:
.beads/beads.db - Local SQLite (fast queries)
.beads/issues.jsonl - Git-versioned JSONL (syncs across machines)
On your desktop:
bd create "New issue"
# → SQLite write (instant)
# → After 5 seconds, exports to JSONL
# → Git commit with your code changes

On your laptop:
git pull
# → JSONL updates
# → bd auto-imports (newer than local DB)
# → SQLite now has the issue

You get:
Fast local operations (SQLite, <100ms)
Git versioning (full audit trail)
Multi-machine sync (JSONL)
Offline support (no server)
It’s a distributed database... that’s just files in git.
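Because the git-synced layer is plain JSONL (one JSON object per line), any tool can read it without beads installed. A sketch, assuming a minimal per-line schema with `id` and `status` fields (the real beads schema may differ):

```python
import io
import json

# Sample .beads/issues.jsonl content — one issue per line.
# Field names here are assumed for illustration.
sample = io.StringIO(
    '{"id": "myapp-a4f2", "title": "Implement authentication", "status": "in_progress"}\n'
    '{"id": "myapp-e3f7", "title": "Build user dashboard", "status": "open"}\n'
)

def load_issues(f) -> list[dict]:
    # JSONL: each non-empty line is a standalone JSON document, which
    # is what makes line-based git diffs and merges workable.
    return [json.loads(line) for line in f if line.strip()]

issues = load_issues(sample)
open_count = sum(1 for i in issues if i["status"] == "open")
print(f"{len(issues)} issues, {open_count} open")  # → 2 issues, 1 open
```

One-object-per-line is the design choice doing the work here: git merges operate on lines, so independent issues created on two machines rarely conflict.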
Memory as Infrastructure
We’re at this weird moment where AI coding agents are incredibly capable but also incredibly forgetful.
We expect them to remember complex multi-week projects, track dozens of discovered issues, maintain perfect context across compaction—but we give them... markdown files.
Beads doesn’t make agents smarter. It makes them less forgetful.
And honestly? That might be more important.
Because the hardest part of any project isn’t writing code. It’s not losing track of what needs to be written.
Beads gives your agent:
Memory that survives compaction
A discovery backlog that doesn’t evaporate
A dependency graph that prevents mistakes
And you barely have to do anything. Install it, initialize it, let Claude manage it.
The agent handles the rest.
Get started:
GitHub: steveyegge/beads
Quick start:
Run bd init in your project
Let Claude do the rest
Key commands (mostly for reference—Claude uses these automatically):
bd init # One-time setup
bd ready # What’s ready? (Claude checks this)
bd show <id> # Issue details (Claude reads notes)
bd stats # Weekly review (you use this)
bd blocked # What’s stuck?
Give your agent a memory. See what happens.

