Claude God Tip #13: Why Claude Gets Dumber the Longer You Talk to It
Published: December 7, 2025 • 6 min read
Welcome to another episode of Claude God Tips. The tip itself is small, but the post runs a bit longer than usual because I need to set the stage properly.
The timing for this post is great. Over the past few days, I have written extensively about tokens, context windows, and how AI actually processes the messages we send it. If you haven't read any of those posts, I suggest checking out at least the first one.
Quick Recap: What You Need to Know
Here's a brief (and partly simplified) summary of what you absolutely must know for this blog post to make sense:
- When you send a message to a Claude instance, that text is converted into units called tokens
- LLMs have a fixed vocabulary, typically tens of thousands to a couple hundred thousand tokens. They reference this vocabulary to make sense of the message you send them
- LLMs do not read your prompt, come up with a full response, and then hand it to you. Every response is a token-by-token prediction: the model predicts the first token, then the next, then the next
- Every new conversation you start with an LLM opens a context window. Think of the context window as the total amount of "memory" or "space" you have within a chat. The more prompts (including files) you send as input and the more responses Claude returns as output, the more that context window fills up
- When you use up your context window, you get the dreadful message: "Claude hit the maximum length for this conversation. Please start a new conversation to continue chatting with Claude"
- You can think of tokens as the unit of measurement for your context window. The sum total of inputs and outputs in a conversation is measured in tokens, and that total determines how quickly you fill up your context window. For most Claude users, at the time of writing, each chat has a context window of 200k tokens.
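If you want to see token counts for yourself rather than guessing, here's a minimal sketch using the anthropic Python SDK's token-counting endpoint. It assumes a recent SDK version and an ANTHROPIC_API_KEY in your environment; the model ID is an assumption, so swap in whichever model you actually use:

```python
# Minimal sketch: counting the tokens a prompt consumes.
# Assumes ANTHROPIC_API_KEY is set and a recent anthropic SDK version.
import anthropic

client = anthropic.Anthropic()

count = client.messages.count_tokens(
    model="claude-3-5-sonnet-20241022",  # assumed model ID, use your own
    messages=[{"role": "user", "content": "I'm on a strict no-carb diet"}],
)
print(count.input_tokens)  # prints how many of your 200k this prompt costs
```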
How Context Actually Accumulates
Now on to the good stuff. When you start a conversation with Claude and send your first prompt, this prompt is processed by Claude and you get an output.
When you send a second prompt within that same context window, the first prompt you sent, the response Claude provided to that prompt, AND the second prompt you just sent all get sent to Claude again. Here's what that looks like:
MESSAGE 1:
+---------------------------------------------------------+
| You send: "I'm on a strict no-carb diet" |
+---------------------------------------------------------+
|
v
+---------------------------------------------------------+
| Claude receives: |
| +-----------------------------------------------------+ |
| | "I'm on a strict no-carb diet" | |
| +-----------------------------------------------------+ |
| |
| Claude responds: "That's great! Let me know if you |
| need meal ideas..." |
+---------------------------------------------------------+
MESSAGE 2:
+---------------------------------------------------------+
| You send: "What should I make for dinner?" |
+---------------------------------------------------------+
|
v
+---------------------------------------------------------+
| Claude receives: |
| +-----------------------------------------------------+ |
| | "I'm on a strict no-carb diet" | |
| | "That's great! Let me know if you need meal ideas" | |
| | "What should I make for dinner?" <-- YOUR NEW MSG | |
| +-----------------------------------------------------+ |
| |
| Claude responds: "How about grilled salmon with |
| roasted asparagus..." (no carbs suggested!) |
+---------------------------------------------------------+
MESSAGE 3:
+---------------------------------------------------------+
| You send: "Do I need to season it?" |
+---------------------------------------------------------+
|
v
+---------------------------------------------------------+
| Claude receives: |
| +-----------------------------------------------------+ |
| | "I'm on a strict no-carb diet" | |
| | "That's great! Let me know if you need meal ideas" | |
| | "What should I make for dinner?" | |
| | "How about grilled salmon with roasted asparagus" | |
| | "Do I need to season it?" <-- YOUR NEW MESSAGE | |
| +-----------------------------------------------------+ |
| |
| Context window filling up: ########.................... |
+---------------------------------------------------------+
Each message = Claude re-reads EVERYTHING above it
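If you talk to the API directly instead of through the app, this accumulation is something you do by hand: the client keeps a list of every prior turn and re-sends the whole list on each request. Here's a minimal sketch with the anthropic Python SDK (model ID assumed, ANTHROPIC_API_KEY required):

```python
# Sketch: the full history is re-sent on every single turn.
import anthropic

client = anthropic.Anthropic()
history = []  # grows with every exchange

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    reply = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model ID
        max_tokens=512,
        messages=history,  # <-- EVERYTHING so far, not just the new prompt
    )
    text = reply.content[0].text
    history.append({"role": "assistant", "content": text})
    return text

ask("I'm on a strict no-carb diet")
ask("What should I make for dinner?")  # this request carries the first turn too
```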
This is why Claude (and LLMs in general) can infer your preferences based on messages you sent previously. Here are two examples of what this might look like in everyday conversations:
Example 1: The Diet Betrayal
Message 1: "I'm on a strict no-carb diet and I'm really committed this time"
Response 1: "That's great! Low-carb can be really effective. Let me know if you need meal ideas or tips for staying on track."
Message 2: "What should I make for dinner tonight?"
Response 2: "How about grilled salmon with roasted asparagus and a side salad with olive oil dressing? Quick, filling, and zero carbs."
Insight: Without Message 1, Claude would likely suggest pasta, rice bowls, or sandwiches. The diet context completely filtered out carb-heavy options.
Example 2: The Accidental Cat Parent
Message 1: "My cat knocked my coffee off the desk again this morning"
Response 1: "Classic cat behavior! They really do think everything on flat surfaces is their personal enemy."
Message 2: "What's a good desk setup for productivity?"
Response 2: "Get a monitor arm to free up space, a good keyboard tray, and - given your situation - maybe a spill-proof mug with a lid and a weighted base. Your cat will hate it."
Insight: The desk setup question had nothing to do with cats. But Claude "remembered" the coffee incident and snuck in cat-proofing advice unprompted.
The Bottom Line on Context
Claude does not exactly "remember" anything. It receives all the messages you sent previously because they exist in the context window, and those messages shape future responses. I believe models are increasingly being trained to ignore unrelated context, but that's an aside.
Why This Matters When Coding
Now what implications does this have for programming? One point I did not mention earlier, though you may already know it if you have read my previous blog posts about context, is that LLMs suffer from a phenomenon called "lost in the middle".
To explain briefly: say you send 200 messages to an LLM instance. As the conversation grows, the model's responses become disproportionately influenced by the first few messages and the most recent ones. The context in the middle gets lost.
When working with Claude Code in the terminal, this becomes a real issue if you tend to keep working in one long session without ever starting a new one:
- Polluted responses - Your outputs can be "polluted" by heavy previous context
- Token waste - Every new prompt re-sends the entire conversation, so the more accumulated context you carry, the more input tokens each turn burns; not great if you're trying to be cost-efficient (see the sketch after this list)
- Weird responses - Because of the "lost in the middle" effect, responses may be heavily influenced by messages from the very beginning of the session, which produces strange outputs when the current issue has nothing to do with those early ones
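To see why the token waste point bites, here's a back-of-the-envelope sketch. The ~500 tokens per exchange is an assumed figure for illustration, and it ignores prompt caching, which can soften the cost considerably:

```python
# Back-of-the-envelope: input tokens processed across a long session.
# ASSUMPTION: ~500 new tokens per exchange, no prompt caching.
TOKENS_PER_EXCHANGE = 500

def input_tokens_for_turn(n: int) -> int:
    # Turn n re-sends all n-1 prior exchanges plus the new one.
    return TOKENS_PER_EXCHANGE * n

total = sum(input_tokens_for_turn(n) for n in range(1, 201))
print(input_tokens_for_turn(200))  # 100,000 tokens processed on turn 200 alone
print(total)                       # 10,050,000 tokens across all 200 turns
```

The growth is quadratic: by turn 200 you're re-processing the entire history on every single request, which is exactly the waste that clearing your context avoids.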
Here's a diagram to make sense of the "lost in the middle" effect:
ATTENTION DISTRIBUTION ACROSS 200 MESSAGES:
Message Position: 1 25 50 75 100 125 150 175 200
| | | | | | | | |
Attention Level:
# #
# #
# ##
## ###
## ###
### ####
####.................................. #####
^ ^ ^
| | |
HIGH ATTENTION LOW ATTENTION HIGH ATTENTION
(Beginning) ("Lost" Zone) (Recent)
THE PROBLEM IN PRACTICE:
Message 1-20: "Build me a React dashboard with authentication"
#################### <-- Strong influence on responses
Message 21-150: [Debugging, styling questions, refactoring discussions]
.................... <-- Fuzzy, diluted influence
Message 151-200: "Now add a calendar feature"
#################### <-- Strong influence on responses
WHAT THIS MEANS:
Your new calendar feature might accidentally inherit patterns
from the authentication code (Message 1-20) while ignoring
relevant styling decisions you made in Message 75-90.
The Solution: Use /clear
Now how do you remedy this?
When working with Claude on the web: You simply have to keep your conversations organized as much as possible. Start new conversations for different topics or talking points.
When working with Claude Code in the terminal: Use the /clear slash command to clear your context. This way, you don't have to press Ctrl+C twice to exit the current session and then start a brand new one. If you're done with the previous session's work, simply clear your context window.
The Discovery: What /clear Actually Does
Now as I was writing this blog post, I wondered: what happens if I start a session in Claude Code, ask a couple of questions, clear the context, then exit? If I run the claude --resume command, would I see that session listed?
Well, time to test it out.
...A few moments later (read in SpongeBob SquarePants narrator's voice)
I tested it and discovered something really interesting. The /clear slash command is essentially a shortcut for ending the current session and starting a new one. The session does not actually get deleted: the moment you run claude --resume, you still see it listed in the session history, and you can return to it.
This is genuinely reassuring. It should make you more comfortable running /clear, since what gets cleared is the active context window, not the stored conversation log.
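If you want to poke at this yourself, here's a small sketch that lists stored transcripts. The storage path is an assumption on my part (sessions appear to be saved as .jsonl files under ~/.claude/projects/) and may differ across Claude Code versions and platforms:

```python
# Sketch: list stored Claude Code session transcripts.
# ASSUMPTION: sessions live as .jsonl files under ~/.claude/projects/;
# the exact location may vary by Claude Code version and platform.
from pathlib import Path

sessions_root = Path.home() / ".claude" / "projects"
for transcript in sorted(sessions_root.rglob("*.jsonl")):
    print(transcript)  # each file is one session, cleared or not
```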
Wrapping Up
This was fun to write about, and I hope you learned something new. The key takeaway: don't be afraid to use /clear liberally. Your sessions are safe. Your context window gets a fresh start. And your responses get better.
As always, thanks for reading!