How to Capture and Replicate Your AI's Personality
Published: November 8, 2025 • 7 min read
I had the perfect Claude Instance. It knew exactly how to refine my blog posts: preserving my voice, organizing for readability, adding hyperlinks, creating the blog object structure I needed, and more. And then I lost it.
Well, not exactly 'lost it.' I just... reached the conversation limit and had to start a new chat. And suddenly, I was back to square one, trying to remember all the instructions I'd painstakingly refined over dozens of prompts. That's when I realized: I needed to extract and clone my Claude Instance's 'personality.' Here's the full story:
The Beginning: "Improve It" (Famous Last Words)
So I wrote the draft of a blog post, read it, and loved what I read. Then I thought, hmm, this could probably use a little polishing.
Perhaps I'm unnecessarily verbose in some areas; perhaps in others, my thoughts could be expressed more clearly so as not to confuse the reader. I may have also made some spelling or grammar errors that my eyes aren't catching right now due to writer's bias, or maybe a better term for it is 'blind spot bias': you know, when you're so close to your own work you can't see the mistakes anymore.
So I did what I believed was the most effective way to deal with this! I handed my draft to Claude with a simple prompt phrase: "Improve it".
That prompt right there was my first mistake. The output I received was writing I did not recognize, packed with words even I had never seen before. Classic LLM behavior, as if to remind me of my English-language limitations. I had to try again.
Training My Claude Instance (Without Realizing It)
I used a sequence of prompts, I guess you could call it prompt chaining, to get the model to format my draft exactly how I wanted. I asked it to identify any spelling or grammatical errors in one prompt and fix them. Then I asked it to organize the writing for easier readability, breaking it down using bullet points and lists where necessary. Then I asked it to break down sections that were too long and extract appropriate subheaders for them. Then I specified the blog post object structure for my code and asked it to create one for the new blog post. Eventually, I asked it to identify areas where it could add appropriate hyperlinks to previously written blog posts and other sections of my portfolio (projects or case studies that I mention in the post).
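If it helps to see the shape of it, that sequence is just a loop: each prompt handles one concern, and each output becomes the next prompt's input. This is my own illustration, not code from the post: `chain_prompts` and the stand-in `fake_llm` are hypothetical, and in practice `llm` would wrap a real chat API call.

```python
def chain_prompts(llm, draft, steps):
    """Run a draft through a sequence of single-purpose prompts.

    `llm` is any callable that takes a prompt string and returns the
    model's reply; each step's output becomes the next step's input.
    """
    text = draft
    for instruction in steps:
        text = llm(f"{instruction}\n\n---\n\n{text}")
    return text

# The refinement sequence from the post, one concern per prompt:
STEPS = [
    "Identify and fix any spelling or grammatical errors.",
    "Organize the writing for readability, using bullet points where helpful.",
    "Break down overly long sections and extract appropriate subheaders.",
    "Create the blog post object for my site's structure.",
    "Add hyperlinks to related posts, projects, and case studies.",
]

# Stand-in model so the sketch runs without an API key; it just tags the
# text with whichever instruction it received.
def fake_llm(prompt):
    instruction, _, text = prompt.partition("\n\n---\n\n")
    return text + f"\n[applied: {instruction}]"

refined = chain_prompts(fake_llm, "My rough draft...", STEPS)
```

The point of the chain is exactly what tripped me up with "Improve it": one vague prompt invites the model to rewrite everything, while one narrow prompt per pass keeps each change inspectable.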
Well, many, many prompts later, I finally got Claude to do exactly what I wanted. Or the more correct thing to say is that I got that particular Claude Instance to do exactly what I wanted.
I had trained my Claude Instance into a perfect blog draft refiner for my specific purpose. For the rest of this blog post, when I say Claude Instance, I'm referring to a single chat conversation where Claude has learned my preferences and requirements through our back-and-forth interactions.
The Problem: I Ran Out of Space
You see, at this time, I was still not great at organizing my chats with Claude, so I ended up asking that particular Claude Instance a lot of other questions (unrelated to blog refinement), and eventually, I reached the limit for that specific chat.
So, when I had a new blog post, I just thought, hmm, I can just start a new chat.
But wait, do I now have to specify all the instructions I gave to the previous Claude Instance? I don't even remember all of them. Hmm, nope. I'm not starting a new chat. I'll simply scroll up as far back as I can, to a point where I diverged from asking about refining my blog post, and edit one of the messages to essentially create a new branch within that chat.
The Branching Nightmare
I kept doing this for a while. But there was no way I could keep up with all the branches I had created, and sometimes I needed to revisit one of the questions I had edited over to create a branch, and gosh, it was so hard to keep track of them all. There was also the prompt I sent to Claude about adding relevant hyperlinks to previous blog posts, but hey, because of my multiple branches, it could no longer see some of those posts.
To solve this, I created one more branch, but not to write a blog post this time. Instead, I was going to capture the essence of my already trained Claude Instance so I could replicate the same behavior across multiple Claude Instances, and perhaps even other LLMs like ChatGPT. But how do you 'export' an AI's personality?
Introducing Prompt Extraction
Remember how, in this blog post, I mentioned that I'd be exploring and writing about other prompting strategies I was just learning about? Well, this post is about "Prompt Extraction": the process of asking an LLM to articulate its own methodology so you can recreate it elsewhere. It's like asking a chef to write down not just their recipe, but their entire approach to cooking a specific dish, including all the little-bitty tricks they've learned along the way.
So I asked Claude to help me understand its own process. First, I needed to know what it was actually doing:
"You have helped me to organize every blog post I have on this portfolio website (see them attached to this project context files). I like how you have retained my voice in each one, you always output them in an artifact and you also always provide a blog post object. I am sure you have your own in-built mechanism for helping me organize my blogs whenever I paste them to you to achieve all the above. What is your mechanism?"
Once I understood its mechanism, I needed to make it portable. So I asked the next question:
"Awesome, well I have used up most of the space in this chat. I would like to start a different chat where I would continue writing these blog posts and expecting you to refine them for me. Using the information you just provided, can you write an effective prompt that I would use to start a new conversation so that whenever I provide blog posts there, it follows the exact mechanism you follow as well. Feel free to mention in the prompt where relevant that example blog posts are attached to the context files of this project and the output should always be in an artifact, the same way you always provide them for me. Place this prompt in a downloadable artifact."
And just like that, I had an artifact capturing the very essence or personality of my Claude Instance whose role was to be my personal blog draft refiner.
From Prompt Extraction to LLM Instance Cloning
This technique I used to export my 'AI's personality' was primarily Prompt Extraction, but honestly, I believe this method uses a combination of other prompt engineering strategies as well. For instance, the artifact created was essentially a reusable instruction set (System Prompt Engineering), and by creating this instruction set with Claude, I was also using Meta-prompting.
So I decided to group these prompting strategies together and give the combination its own name. It was hard to choose between 'Prompt Extraction and Replication', 'LLM Personality Capturing', and 'LLM Instance Cloning'. But I settled on LLM Instance Cloning.
My Official Definition:
LLM Instance Cloning is the process of asking an AI to document its own behavior patterns, preferences, and decision-making processes within a specific conversation, then packaging those insights into a reusable prompt that recreates the same "personality" in a new conversation. It's basically teaching your AI to write its own instruction manual, so you can spin up identical copies whenever you need them.
Think of it like this: instead of training a new assistant from scratch every time, you're capturing the essence of your perfectly trained assistant and using that blueprint to create clones that already know exactly how you like things done.
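The whole technique boils down to two prompts in sequence, so here's a minimal sketch of that flow. Everything here is illustrative: `clone_instance`, the prompt wording, and the stand-in `fake_llm` are my simplifications, not the exact prompts from my chat, and in practice `llm` would send messages to the already-trained conversation.

```python
def clone_instance(llm, task_description):
    """Extract a trained instance's 'personality' as a portable prompt."""
    # Phase 1: ask the trained instance to articulate its own mechanism.
    mechanism = llm(
        f"You have been helping me with {task_description}. "
        "I am sure you have your own in-built mechanism for this. "
        "What is your mechanism?"
    )
    # Phase 2: ask it to package that mechanism as a reusable prompt
    # that can bootstrap a fresh conversation (or another LLM entirely).
    return llm(
        "Using the mechanism you just described, write an effective "
        "prompt I can use to start a new conversation so that it "
        "follows the exact same mechanism:\n\n" + mechanism
    )

# Stand-in model so the sketch runs without an API key: it replays
# canned answers for the two phases.
_replies = iter([
    "I fix errors, reorganize for readability, then add hyperlinks.",
    "You are my blog refiner. Always: fix errors, reorganize for "
    "readability, then add hyperlinks. Output in an artifact.",
])

def fake_llm(prompt):
    return next(_replies)

portable = clone_instance(fake_llm, "refining my blog drafts")
```

The returned prompt is the "instruction manual" from my definition above: save it, and phase-one context never has to be rebuilt from memory again.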
Introducing My Second Case Study
Now, you might be wondering if I'll share the exact document Claude created. Unfortunately, it contains some mildly sensitive data about me and my portfolio structure that I'd need to redact, and doing so would strip away the intricate details that make this technique so powerful.
Now don't get mad, I didn't make you read all this just to leave you empty-handed. So as you probably guessed, I'm working on a new case study to show you the power of this technique. When it's ready, you'll be able to access it here.
P.S. If you're thinking "couldn't you just use Claude Projects?", here's the thing: Projects provide context (your files, examples, etc.), but they don't capture the trained behavior you develop through back-and-forth refinement. LLM Instance Cloning lets you document that evolved understanding and replicate it anywhere. In fact, the best approach is to use both: apply this technique in a chat to train the instance, then add the resulting artifact to your Project's context files. Plus, this technique works across ANY LLM, not just Claude, so you can clone your perfectly trained instances to ChatGPT, Gemini, or whatever model you're using.
As always, thanks for reading!