777-1: I'm Sick But My Brain Won't Stop Plotting
Published: December 19, 2025 • 4 min read
The past few days have been hectic. It's been a combination of falling sick (I'm feeling a lot better today, though still on medication), having to think about moving soon, posting on LinkedIn and engaging there (still very new territory for me), and now my latest challenge: creating a product and selling it.
Here's the thing: I may not be able to start the 777-1 experiment until the new year, but that does not mean I will stop thinking about it. I'm essentially using this period to plot and scheme before I execute.
The Workflow Question That Kept Nagging Me
Now, here are a few things that I have been thinking about regarding this experiment's workflow.
First, when I begin a project, I will have to pass the prompt to Claude. Normally, my starting prompt for any session (usually a file with the implementation broken down into phases) is passed in Plan mode. This lets me start by evaluating Claude's execution plan to be sure I agree with it.
However, the 777-1 experiment is supposed to test what the general-purpose subagent creates with minimal intervention, then see how well my subagents (Context Engineers) can improve the work of the general-purpose subagent.
If I turn on Plan mode, that adds an extra layer of "Claude intelligence" which will ultimately obscure the baseline.
What Do I Mean by Baseline?
The baseline is the "Goldilocks prompt" I mentioned when I introduced the 777-1 experiment: not too lazy and not too robust. Usually in Plan mode, the plan agent asks questions and provides options for the user to choose from. This means I also have to consider how the options I choose affect the result generated by the agent, which is an extra layer that makes the baseline inconsistent.
So I have decided to use Edit mode as it provides a more direct baseline that is consistent across all projects.
I'll be telling Claude, "here's a prompt, execute it." Simple as that.
The Classification Framework
After my subagents go through their rounds, I'll do a final round of fixes while classifying each fix by checking the following (I've sketched how I might record these classifications right after the list):
- Which subagent should have fixed it? This tells me if a subagent missed something in its domain.
- Was it fixed by a subagent in a previous round and then reverted by another subagent? If so, why do they contradict each other?
- Were there cascading fixes where a subagent was fixing the work of another subagent? This reveals dependencies I might not have anticipated.
- Are there fixes that none of the subagents could have identified? If so, I may need to create a new subagent.
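To make that framework a little more concrete, here's a rough sketch (in Python, purely for illustration) of how each fix could be logged. The category names, fields, and subagent names below are placeholders I made up for this post, not my actual subagent definitions.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class FixCategory(Enum):
    MISSED_BY_SUBAGENT = auto()  # a subagent should have caught this in its own domain
    REVERTED = auto()            # fixed in an earlier round, then undone by another subagent
    CASCADING = auto()           # one subagent fixing another subagent's work
    NO_OWNER = auto()            # no existing subagent could have identified it


@dataclass
class FixRecord:
    description: str
    category: FixCategory
    responsible_subagent: str | None = None          # which subagent should have owned it, if any
    conflicting_subagents: list[str] = field(default_factory=list)
    notes: str = ""


# Hypothetical example entry (the subagent names are made up):
fixes = [
    FixRecord(
        description="Restored input validation that a later round removed",
        category=FixCategory.REVERTED,
        responsible_subagent="security-context-engineer",
        conflicting_subagents=["refactoring-context-engineer"],
        notes="These two seem to disagree about where validation belongs.",
    ),
]
```

The exact shape of the record matters less than the habit: every fix in the final round gets tagged with an owner, so the per-category totals point me at which subagent definitions need work.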
The Possible Outcome I'm Bracing For
At the end of this experiment, I hope to improve the definitions for all my subagent files. However, there's a real chance I'll find that I need multiple flavors of each subagent.
Oh well, I guess we'll find out once the experiment is over and all relevant data has been collected.
As always, thanks for reading!