How a financial blogger captured 3 months of AI training and made it work across Claude, ChatGPT, Gemini, and Grok
If you haven't read the blog post where I introduced the concept of LLM Instance Cloning, I suggest you do. There, I shared my own experience: I had the perfect Claude instance for refining my blog posts, lost it when I hit the message limit, and learned to extract its "personality" so I could recreate it anywhere.
But here's what I didn't share: the actual extracted prompt. It contained too many sensitive details, particularly about my portfolio structure, and there was no way I was going to publish it without redacting so heavily that it would lose its instructional value.
So instead, I'm showing you the process itself through Nancy Evans, a fictional financial literacy blogger who faced the same problem. Nancy spent 3 months training a specific Claude instance to refine her blog posts perfectly. (For the rest of this case study, "Claude instance" means a single chat conversation in which Claude learns your preferences and requirements through back-and-forth interactions.) She then extracted that trained behavior into a reusable prompt that works across Claude, ChatGPT, Gemini, and Grok.
This case study demonstrates:
Nancy Evans is a 32-year-old financial literacy advocate who left corporate accounting to teach young professionals, aged 25 to 40, how to manage money without shame or confusion. She writes about budgeting, debt payoff, investing basics, and money mindset across three platforms:
Each platform serves a different purpose and requires different optimization. Her blog needs depth and SEO, her Substack needs engagement and vulnerability, and her LinkedIn needs credibility and polish.
Nancy's voice is distinct: she opens with "Let's be real..." about 60% of the time, uses "Here's the thing..." as her primary transition phrase, always includes at least one personal money story per post, and never uses shame-based language about financial decisions. She balances empathy with accuracy to encourage readers while ensuring all financial advice is factually correct with appropriate disclaimers.
Over 3 months of back-and-forth conversations, Nancy trained a single Claude instance to refine her rough drafts while preserving exactly these characteristics. The AI learned her patterns, her voice, her platform requirements, and her subject-matter needs. It became her perfect blog refiner.
Then she hit the conversation limit and faced losing all that training.
Nancy's trained Claude instance was irreplaceable. It knew things that would take months to teach a new AI:
Starting a new conversation meant starting from scratch. She'd have to:
But there was another problem: Nancy wanted to experiment with other LLMs. What if ChatGPT was better at structure? What if Gemini caught more accuracy issues? What if Grok captured casual tone more naturally? She was locked into Claude, not necessarily because it was the best for every task, but because that's where her training lived.
The challenge became: How do you capture months of training in a format that's portable across any LLM? How do you extract what an AI has learned about YOU and package it into instructions that any AI can follow?
This is the core problem that LLM Instance Cloning solves.
I cannot share my actual blog refinement prompt, as it contains personal portfolio references and details about my writing style. Therefore, I created Nancy Evans, a financial literacy blogger, as a relatable fictional character instead.
Unlike me, who only creates blog posts for this portfolio website, Nancy writes on three different platforms, each requiring a different output: detailed blog guides for teaching, personal Substack stories for connection, and professional LinkedIn articles for credibility. Each platform serves a different audience goal.
Unlike creative writing, financial advice requires fact-checking, up-to-date tax information, and ethical disclaimers. All of her posts must therefore ensure accuracy while maintaining an encouraging tone.
Financial literacy affects everyone regardless of career or background. This case study needed to demonstrate LLM Instance Cloning with a universally relatable scenario.
These constraints weren't arbitrary; each reflects a real challenge of extraction. Privacy protection forced demonstration through adaptation rather than direct sharing. Multi-platform requirements tested true portability beyond single-use cases. Financial accuracy added subject-matter complexity that generic extraction can't ignore. And general audience appeal ensured the technique's value extends beyond niche technical scenarios.
The extraction happened in three distinct phases, each building on the previous one.
First, I needed Nancy's trained Claude instance to document what it had learned. Not generic statements like "I keep your voice," but specific, actionable patterns like "You use 'Let's be real' in 60% of post openings."
I asked the trained instance to analyze:
Critical insight: The first response was too vague. I had to ask "be MORE specific" twice before getting usable detail. Generic descriptions don't transfer well. Specific examples do.
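To make Phase 1 concrete, here is a minimal sketch of the kind of self-documentation prompt you would paste into the trained conversation, followed by the "be MORE specific" nudge. The variable names (`SELF_DOC_PROMPT`, `FOLLOW_UP`), the category list, and the exact wording are illustrative assumptions, not Nancy's actual prompt.

```python
# Sketch of a Phase 1 self-documentation prompt, pasted into the trained
# conversation. Categories and wording are illustrative, not Nancy's exact text.

SELF_DOC_PROMPT = """You have refined my blog drafts for months. Document
everything you have learned about my preferences, so that a fresh AI
instance with NO history could replicate your behavior.

For each area below, give concrete examples and measured frequencies,
not generic summaries:
1. Voice patterns (signature openers, transition phrases, how often I use them)
2. Structural preferences (post length, section order, lists vs. prose)
3. Platform rules (blog vs. Substack vs. LinkedIn differences)
4. Subject-matter requirements (accuracy checks, disclaimers, tone limits)
5. Things I always reject or always ask you to add

Format the answer as a numbered specification, one rule per line."""

# The follow-up that turned vague output into usable detail.
FOLLOW_UP = ("That is too generic. Be MORE specific: replace every vague "
             "statement with a concrete example or a measured frequency.")
```

In practice the follow-up mattered as much as the first prompt: generic self-descriptions don't transfer, so push until every rule includes an example or a number.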
Once the AI had documented its learned behavior, I needed to convert that analysis into a system prompt: a comprehensive set of instructions that any fresh AI instance could follow to replicate the trained behavior.
This meant translating:
The goal: A prompt detailed enough that a brand new AI with NO history could read it and immediately understand how to refine Nancy's content exactly as the trained instance did.
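A skeleton of what that conversion can produce is sketched below: extracted rules grouped into the same sections the full prompt uses (role definition, voice rules, platform optimization, quality checks). The `build_system_prompt` helper and every rule string are hypothetical illustrations, not lines from Nancy's actual 800-line prompt.

```python
# Illustrative Phase 2 skeleton: assemble extracted rules into a portable
# system prompt. Section names mirror the case study; rule text is hypothetical.

def build_system_prompt(rules: dict) -> str:
    """Join extracted rule sections into one self-contained system prompt."""
    sections = [
        ("ROLE", "You are Nancy Evans' blog refiner. Preserve her voice "
                 "exactly; refine her drafts, never rewrite her ideas."),
        ("VOICE RULES", "\n".join(rules["voice"])),
        ("PLATFORM OPTIMIZATION", "\n".join(rules["platform"])),
        ("QUALITY CHECKS", "\n".join(rules["quality"])),
    ]
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections)

extracted = {
    "voice": [
        "- Open with 'Let's be real...' in roughly 60% of posts.",
        "- Use 'Here's the thing...' as the primary transition phrase.",
        "- Include at least one personal money story per post.",
        "- Never use shame-based language about financial decisions.",
    ],
    "platform": [
        "- Blog: depth and SEO. Substack: engagement and vulnerability. "
        "LinkedIn: credibility and polish.",
    ],
    "quality": [
        "- Verify tax figures are current; add disclaimers where needed.",
        "- Balance empathy with factual accuracy.",
    ],
}

print(build_system_prompt(extracted))
```

The point of the structure is self-containment: a brand-new instance reads only this text, so every learned behavior must appear as an explicit rule rather than an assumption.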
The final phase tested whether the extracted prompt actually worked, and, more importantly, whether it was truly portable.
I took the extracted prompt and tested it in:
Same input draft. Same refinement request. Different AI models.
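One way to make the comparison repeatable is to score each model's refined draft against the documented voice markers. This is a hedged sketch: the `voice_score` function, the marker list, and the placeholder outputs are my own illustration of the idea, and the real evaluation also involved manual side-by-side reading.

```python
# Sketch of a Phase 3 check: score each model's refined draft against
# Nancy's documented voice markers. Markers and sample outputs are
# illustrative placeholders, not real model responses.

VOICE_MARKERS = ["Let's be real", "Here's the thing"]
REQUIRED = ["disclaimer"]  # e.g. a financial-advice disclaimer must appear

def voice_score(text: str) -> float:
    """Return the fraction of expected markers present in a refined draft."""
    checks = [marker in text for marker in VOICE_MARKERS]
    checks += [req in text.lower() for req in REQUIRED]
    return sum(checks) / len(checks)

outputs = {  # placeholder refined drafts, one per model tested
    "Claude":  "Let's be real... Here's the thing... Disclaimer: ...",
    "ChatGPT": "Let's be real... Here's the thing... Disclaimer: ...",
    "Gemini":  "Here's the thing... Disclaimer: ...",
    "Grok":    "Let's be real... Here's the thing...",
}

for model, text in outputs.items():
    print(f"{model}: {voice_score(text):.0%} marker match")
```

An automated score like this only catches surface markers; tone and accuracy still need a human pass, which is why the case study pairs it with manual review.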
The results were interesting, to say the least.
Nancy's 3 months of training weren't lost. By asking the AI to document its own learned patterns, she captured behaviors that took dozens of conversations to develop. The extraction process works.
Generic instructions like "keep my voice" fail across LLMs. Specific examples like "I open with 'Let's be real...' and use 'Here's the thing...' for transitions" work everywhere. The more specific the extraction, the better the portability.
The same extracted prompt produced an 85-95% match across Claude, ChatGPT, Gemini, and Grok. Claude and ChatGPT were the most accurate. Gemini needed tone adjustment. Grok surprised with its voice capture. All produced usable results.
Financial content requires accuracy plus empathy. Nancy's extraction needed to capture "be encouraging" AND "verify tax info accuracy" AND "add disclaimers where needed." Subject-specific requirements must be explicitly documented.
Nancy's process works for ANY trained instance: coding assistants, writing editors, research helpers, tutors, brainstorming partners. If you've spent time training an AI, you can extract and replicate it. This isn't just for blog refinement.
The full 800-line system prompt Nancy extracted from her trained Claude instance, showing role definition, voice rules, platform optimization, and quality checks for financial content.
French version of Nancy's complete 800-line system prompt with role definition, voice preservation rules, platform-specific optimization strategies, and financial content quality checks.
A fill-in-the-blanks template for extracting YOUR OWN trained AI instance. Includes the self-documentation prompt, conversion instructions, testing checklist, and step-by-step guidance.
French version of the blank template for extracting your trained AI instance. Complete guide with self-documentation prompts, conversion process, and cross-LLM testing strategies.