777-1: Why Cassandra Hayes Finally Gets to Do Her Job Right
Published: December 9, 2025 • 10 min read
In my previous blog post, I explained why running all 7 subagents through a single Claude session would have been a disaster: attention dilution and the "lost in the middle" phenomenon would have compounded to degrade the work of my middle subagents (Micaela Santos, Lindsay Stewart, and Eesha Desai).
I also explained how this would affect Cassandra Hayes' work. She is the last subagent and her entire job is to verify that everything works together by catching cross-feature issues. However, if I stuck to the disastrous workflow shown in my previous blog post, she would have been working with impaired attention to the very changes she needed to verify.
So what's the solution? Well, that is what we'll address in this blog post.
One conversation per subagent.
I know, I know. It sounds almost too simple, but here's the thing: past Prisca, the one who did not understand tokens or how context windows worked, always tried to keep every conversation about a given project in one single Claude session when working with Claude Code in the terminal.
However, simply starting a new session for every single subagent fundamentally changes the attention dynamics, and I will show you how.
The Core Insight: Position Resets
When you start a fresh Claude session, the context window resets. This means every subagent gets to work in an environment where:
- Their specification loads at the BEGINNING - strong positional attention
- The codebase loads in the MIDDLE - but with minimal dilution since context is small
- Their analysis and fixes happen at the END - strongest positional attention
If you compare this to the single-session approach from the previous blog post, you'll immediately see why this works.
To better illustrate it, let's compare how Lindsay Stewart, The Accessibility Advocate (who sits at the very middle of the workflow), would perform in both setups:
SINGLE SESSION (Problematic):
When Lindsay starts working, this is what's already in the context window:
+-- System Prompt ---------------------- [Beginning]
+-- Amber Williams' spec + work -------- [Early]
+-- Kristy Rodriguez' spec + work ------ [Early-mid]
+-- Micaela Santos' spec + work -------- [Middle]
+-- Lindsay Stewart's spec ------------- <- Lindsay loads here
+-- Codebase state
+-- Lindsay Stewart's work ------------- <- Lindsay works here
The problem with the workflow above is that:
- Context is already ~60-70% full before Lindsay even starts
- Lindsay's specification file lands in the MIDDLE, that is, the weak attention zone
- The codebase is buried under 3 subagents' worth of conversation
- Lindsay's output quality is compromised
Now here's a visual representation of the solution:
SEPARATE SESSION (Solution):
When Lindsay starts working, this is what's in the context window:
+-- System Prompt ---------------------- [Beginning - STRONG]
+-- Lindsay Stewart's spec ------------- [Beginning - STRONG]
+-- Current codebase ------------------- [Middle - small context, minimal penalty]
+-- Lindsay Stewart's work ------------- [End - STRONGEST POSITION]
Can you see why the above is different, and why it works better? Let me highlight the main differences:
- Context is only ~15-25% full
- Lindsay's specification file is at the BEGINNING - strong attention
- The codebase has her full, undivided attention
- Lindsay's output quality is optimal
With this solution in place, every subagent now operates with:
- Full attention capacity (no dilution from previous subagents' conversations)
- End position for their actual work (strongest attention zone)
- Clear view of the codebase (not buried under layers of previous analysis)
Why This Works: The Attention Math
Let me show you the attention dynamics for the same scenario (Lindsay Stewart checking accessibility) under both approaches.
Single Session (When Lindsay Is Subagent 4 of 7):
Total context consumed before Lindsay works: ~70,000 tokens
+-- System prompt: ~10,000
+-- Amber Williams' spec + conversation: ~15,000
+-- Kristy Rodriguez' spec + conversation: ~15,000
+-- Micaela Santos' spec + conversation: ~15,000
+-- Lindsay Stewart's spec: ~1,200
+-- Current codebase: ~15,000
Lindsay's available attention: Spread across 70,000 tokens
Lindsay's position: Middle of a bloated context
Codebase clarity: Competing with 3 subagents' worth of noise
Separate Session (Lindsay Gets Her Own):
Total context when Lindsay works: ~26,000 tokens
+-- System prompt: ~10,000
+-- Lindsay Stewart's spec: ~1,200
+-- Current codebase: ~15,000
Lindsay's available attention: Focused on 26,000 tokens
Lindsay's position: Her work happens at the END
Codebase clarity: Only her specification file + the code she's reviewing
The difference is dramatic. Lindsay goes from fighting for attention in a 70,000-token context to having focused attention in a 26,000-token context. That's roughly 2.7x more attention density on the actual codebase.
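That ratio is just the division of the two rough token estimates above (my estimates from this post, not measured values), which you can sanity-check in one line:

```shell
# Ratio of the two context sizes estimated above (illustrative numbers, not measurements)
single=70000     # Lindsay's context in the single-session approach
separate=26000   # Lindsay's context in her own session
awk -v a="$single" -v b="$separate" 'BEGIN { printf "%.1fx more attention density\n", a / b }'
# prints: 2.7x more attention density
```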
The Tradeoff: What We Lose
I want to be honest about what this approach sacrifices.
Loss: Conversational Continuity
In a single session, later subagents can reference earlier subagents' reasoning. For example, when Cassandra Hayes runs her final review, she could ask:
"Why did Micaela Santos implement the theme toggle using Context instead of localStorage?"
And Claude would have access to that earlier conversation that shows Micaela's reasoning, the alternatives she considered, why she made that choice.
In separate sessions, each subagent starts fresh. They don't "remember" what previous subagents discussed; they only see the current state of the code. So Cassandra knows WHAT Micaela implemented, but not WHY. If she questions the approach, she might refactor something that Micaela had good reasons for doing a specific way.
Why this is acceptable: The code itself should be self-documenting. If Micaela's implementation choice isn't obvious from the code, that's a code quality issue that I expect Daniella Anderson, The Code Quality Specialist, to catch. And ultimately, each subagent is the expert in their domain. I don't expect Cassandra to second-guess Micaela's design system decisions anyway. She should be checking if features integrate properly, not disputing implementation choices made by the earlier subagents.
Loss: Cross-Subagent Discussion
In a single session, I could theoretically ask "Micaela Santos, what do you think about what Amber Williams did to the navigation?"
In separate sessions, Micaela has no context about Amber's reasoning. She only sees the navigation as it currently exists.
Why this is acceptable: These subagents are specialists. Micaela shouldn't be second-guessing Amber's responsive decisions. Her work is to evaluate the design system. If Amber's navigation does not align with Micaela's design consistency expectations, Micaela will fix it based on HER criteria. That's the point.
Loss: Efficiency
Seven separate sessions means seven times the startup overhead. Loading each specification in a separate Claude session is acceptable, but re-orienting Claude to the codebase every single time definitely eats up a good amount of time.
Why this is acceptable: Quality of this experiment matters more than speed. A fast workflow that produces broken code is worse than a slower workflow that produces solid code. The whole point of the 777-1 experiment is to demonstrate QUALITY improvements from systematic subagent application.
The Unexpected Benefit: Cleaner Documentation
Here's something I didn't anticipate: having separate sessions will make documentation dramatically easier.
In a single session, everything sort of blurs together. Which fix was Amber's? Which was Kristy's? I'd have to scroll through a massive conversation log to reconstruct the timeline.
With separate sessions plus well-defined Git commits (like the example below), it becomes much easier to create the documentation for the "7 Case Studies" part of this experiment. I can also use the `claude --resume` command to sift through the multiple Claude sessions where each subagent was called, especially with verbose output turned on, to better understand each change that was made:
git log --oneline
a2f8c1 After Cassandra Hayes: cross-feature integration verified
b9e3d2 After Daniella Anderson: TypeScript interfaces added
c4a1f5 After Eesha Desai: localStorage persistence implemented
d7b2e8 After Lindsay Stewart: WCAG AA compliance achieved
e1c9a3 After Micaela Santos: design system unified
f5d4b7 After Kristy Rodriguez: all buttons functional
08e2c1 After Amber Williams: responsive issues fixed
13f6a9 Initial generation from general-purpose subagent
Each commit is a clean checkpoint. I can:
- `git diff after-amber..after-kristy` to see exactly what Kristy Rodriguez changed
- `git checkout after-micaela` to see the app's state at that point
- Write case studies with precise "before and after" comparisons
This is exactly the kind of documentation I need for the case studies and for building the failure prediction algorithm.
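The checkpoint mechanics are easy to try in a throwaway repo. Here's a minimal sketch; the file name, commit messages, and email are made-up stand-ins for illustration:

```shell
# Toy repo demonstrating tag-based subagent checkpoints (all names are illustrative)
cd "$(mktemp -d)" && git init -q .
git config user.email "prisca@example.com" && git config user.name "Prisca"

echo "initial app" > app.txt
git add -A && git commit -qm "Initial generation from general-purpose subagent"
git tag initial-generation

echo "responsive fixes" >> app.txt
git add -A && git commit -qm "After Amber Williams: responsive issues fixed"
git tag after-amber

echo "button handlers" >> app.txt
git add -A && git commit -qm "After Kristy Rodriguez: all buttons functional"
git tag after-kristy

# The diff between two tags isolates exactly one subagent's changes
git diff after-amber..after-kristy
```

Because each tag marks the repo state after exactly one subagent, the diff between adjacent tags contains only that subagent's work.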
The Refined Workflow Diagram
Now, here's the complete picture:
+------------------------------------------------------------------+
| 777-1 PROJECT WORKFLOW |
+------------------------------------------------------------------+
PHASE 1: GENERATION
+----------------------+
| Claude Session |
| (General) |-------> git commit -------> tag: initial-generation
| |
| Build app from |
| enhanced prompt |
+----------------------+
PHASE 2: SUBAGENT PASSES (Each in fresh session)
+----------------------+
| Claude Session |
| + Amber spec |-------> git commit -------> tag: after-amber
| |
| Responsive/ |
| Mobile fixes |
+----------------------+
|
v (code carries forward via Git)
+----------------------+
| Claude Session |
| + Kristy spec |-------> git commit -------> tag: after-kristy
| |
| Functionality |
| completion |
+----------------------+
|
v
[... Micaela, Lindsay, Eesha, Daniella ...]
|
v
+----------------------+
| Claude Session |
| + Cassandra spec |-------> git commit -------> tag: after-cassandra
| |
| Cross-feature |
| integration |
+----------------------+
PHASE 3: DOCUMENTATION
+-------------------------------------------------------------+
| For each tag transition: |
| +-- git diff [previous-tag]..[current-tag] |
| +-- Document: What issues were found? |
| +-- Document: What patterns does this reveal? |
| +-- Update: Cross-project pattern database |
+-------------------------------------------------------------+
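The Phase 3 loop above can be sketched as a small script. This assumes the checkpoint tags from Phase 2 already exist; here I build a toy repo with stand-in tags and contents just so the loop has something to walk:

```shell
# Sketch of Phase 3: walk each tag transition and save its diff for a case study.
# The repo, file contents, and commit messages below are toy stand-ins.
cd "$(mktemp -d)" && git init -q .
git config user.email "prisca@example.com" && git config user.name "Prisca"

tags="initial-generation after-amber after-kristy after-micaela after-lindsay after-eesha after-daniella after-cassandra"

# Build one commit + tag per checkpoint (stand-in for the real subagent passes)
for tag in $tags; do
  echo "checkpoint: $tag" >> app.txt
  git add -A && git commit -qm "$tag checkpoint" && git tag "$tag"
done

# Walk each adjacent pair of tags and capture the diff for documentation
mkdir -p case-studies
prev=""
for tag in $tags; do
  if [ -n "$prev" ]; then
    git diff "$prev..$tag" > "case-studies/$prev-to-$tag.patch"
  fi
  prev="$tag"
done

ls case-studies   # 8 tags -> 7 transition patches
```

Each `.patch` file then becomes the raw "before and after" material for one case study.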
Cassandra Finally Gets to Do Her Job
Remember the whole problem from the previous post? Cassandra Hayes, The Feature Detective, was supposed to catch cross-feature issues but was handicapped by attention dilution and the "lost in the middle" phenomenon.
With this new workflow:
Cassandra's Session:
+-- System Prompt ---------------------- [Beginning - STRONG]
+-- Cassandra Hayes' spec -------------- [Beginning - STRONG]
+-- Final codebase --------------------- [Middle - minimal dilution]
| (includes ALL fixes from subagents 1-6)
+-- Cassandra's analysis --------------- [End - STRONGEST]
Attention to codebase: FULL
Attention to her own work: MAXIMUM
Cross-feature detection: OPTIMAL
Cassandra can now properly ask:
- "Does the theme toggle affect the accessibility modal?" (Micaela Santos + Lindsay Stewart interaction)
- "Does the form clear after submission?" (Eesha Desai's persistence logic)
- "Do all the TypeScript interfaces connect properly?" (Daniella Anderson's work)
She sees the COMPLETE codebase with FULL attention. No more degraded awareness of the changes made by the middle subagents.
The Experiment Begins
This has been refreshing to learn about. From the moment I declared the 777-1 Experiment, I felt a lot of fear, if I'm being completely honest. But now, even if it all fails, just learning everything I have explained in this blog post and the previous one will have made it all worth it.
Now that I have this blog post out, I think it is safe to say that the experiment is about to begin. I plan to write more blog posts today introducing the 7 projects I have chosen to use for this experiment and why I chose each one of them. Yes, I am going to be doing a lot of writing today to ensure that the building can start tomorrow.
I've got only, what, 4 more days till the end of my SDR Era, so you can imagine the pressure is building. Time to get to work!
As always, thanks for reading!