How a financial blogger captured 3 months of AI training and made it work across Claude, ChatGPT, Gemini, and Grok
Nancy's trained Claude instance was irreplaceable. It knew things that would take months to teach a new AI:
Starting a new conversation meant starting from scratch. She'd have to:
But there was another problem: Nancy wanted to experiment with other LLMs. What if ChatGPT was better at structure? What if Gemini caught more accuracy issues? What if Grok captured casual tone more naturally? She was locked into Claude, not necessarily because it was the best for every task, but because that's where her training lived.
The challenge became: How do you capture months of training in a format that's portable across any LLM? How do you extract what an AI has learned about YOU and package it into instructions that any AI can follow?
This is the core problem that LLM Instance Cloning solves.
I cannot share my actual blog refinement prompt, as it contains personal portfolio references and details about my writing style. Instead, I created Nancy Mitchell, a financial literacy blogger, as a relatable fictional stand-in.
Unlike me, who only creates blog posts for this portfolio website, Nancy writes on three different platforms, so she needs three different outputs: detailed blog guides for teaching, personal Substack stories for connection, and professional LinkedIn articles for credibility. Each platform serves a different audience goal.
Unlike creative writing, financial advice requires fact-checking, up-to-date tax information, and ethical disclaimers. All of Nancy's posts must stay accurate while maintaining an encouraging tone.
Financial literacy affects everyone regardless of career or background. This case study needed to demonstrate LLM Instance Cloning with a universally relatable scenario.
These constraints weren't arbitrary; they reflect the real challenge of extraction. Privacy protection forced demonstration through adaptation rather than direct sharing. Multi-platform requirements tested true portability beyond single-use cases. Financial accuracy added subject-matter complexity that generic extraction can't ignore. And general audience appeal ensured the technique's value extends beyond niche technical scenarios to universal applications.
The extraction happened in three distinct phases, each building on the previous one.
First, I needed Nancy's trained Claude instance to document what it had learned. Not generic statements like "I keep your voice," but specific, actionable patterns like "You use 'Let's be real' in 60% of post openings."
I asked the trained instance to analyze:
Critical insight: The first response was too vague. I had to ask "be MORE specific" twice before getting usable detail. Generic descriptions don't transfer well. Specific examples do.
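This self-documentation loop can be sketched in a few lines. Everything here is illustrative, not Nancy's actual prompt: `ask` is a hypothetical callable standing in for whatever chat API holds the trained instance, and the prompt wording is a paraphrase of the approach described above.

```python
# Sketch of Phase 1: asking the trained instance to document its own patterns.
# `ask(prompt) -> str` is a hypothetical stand-in for your provider's chat call.

EXTRACTION_PROMPT = """Analyze everything you have learned about my writing \
over our past conversations. Document it as specific, actionable patterns, \
not generic statements.
Bad: "I keep your voice."
Good: "You use 'Let's be real' in 60% of post openings."
Cover: voice and tone, opening and closing habits, transition phrases, \
platform-specific formatting, and accuracy rules for financial content."""

FOLLOW_UP = "Be MORE specific. Quote exact phrases and give rough frequencies."

def extract_patterns(ask, max_rounds=3):
    """Ask for self-documentation, pushing for specificity until usable."""
    notes = ask(EXTRACTION_PROMPT)
    rounds = 1
    # Generic descriptions don't transfer; a crude proxy for "specific enough"
    # is whether the answer quotes exact phrases.
    while rounds < max_rounds and "'" not in notes and '"' not in notes:
        notes = ask(FOLLOW_UP)
        rounds += 1
    return notes
```

In practice, judging "specific enough" is a manual call; the quote-detection heuristic here just mirrors the two "be MORE specific" pushes it took to get usable detail.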
Once the AI documented its learned behavior, I needed to convert that analysis into a system prompt: a comprehensive set of instructions that any fresh AI instance could follow to replicate the trained behavior.
This meant translating:
The goal: A prompt detailed enough that a brand new AI with NO history could read it and immediately understand how to refine Nancy's content exactly as the trained instance did.
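The conversion step amounts to assembling documented patterns into one structured prompt. A minimal sketch, assuming the patterns were captured as a dictionary; the section names and every example value below are illustrative placeholders, not the real 800-line prompt:

```python
# Sketch of Phase 2: converting documented patterns into a portable
# system prompt. Section names and values are illustrative placeholders.

def build_system_prompt(patterns: dict) -> str:
    sections = {
        "ROLE": patterns["role"],
        "VOICE RULES": "\n".join(f"- {r}" for r in patterns["voice_rules"]),
        "PLATFORM OUTPUTS": "\n".join(
            f"- {name}: {goal}" for name, goal in patterns["platforms"].items()
        ),
        "QUALITY CHECKS": "\n".join(f"- {c}" for c in patterns["quality_checks"]),
    }
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections.items())

nancy = {
    "role": "Refine drafts for a financial literacy blogger.",
    "voice_rules": [
        "Open with 'Let's be real' in roughly 60% of posts.",
        "Use 'here's the thing' for transitions.",
    ],
    "platforms": {
        "Blog": "detailed teaching guides",
        "Substack": "personal stories for connection",
        "LinkedIn": "professional articles for credibility",
    },
    "quality_checks": [
        "Verify tax info is current.",
        "Add disclaimers where advice could be misread.",
    ],
}
```

The point of the structure is that a brand new model reads top-down: who it is, how it sounds, what each platform needs, and what it must verify before finishing.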
The final phase tested whether the extracted prompt actually worked and, more importantly, whether it was truly portable.
I took the extracted prompt and tested it in:
Same input draft. Same refinement request. Different AI models.
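The cross-model test reduces to a loop: identical system prompt, identical draft, different backends. A minimal sketch, where each entry in `clients` is a hypothetical callable you would wire to the Anthropic, OpenAI, Google, or xAI SDK:

```python
# Sketch of Phase 3: same system prompt, same draft, different models.
# Each value in `clients` is a callable (system_prompt, draft) -> refined text;
# in practice these wrap the real provider SDKs.

def run_portability_test(system_prompt: str, draft: str, clients: dict) -> dict:
    """Send the identical prompt and draft to every model; collect outputs."""
    return {name: call(system_prompt, draft) for name, call in clients.items()}
```

Keeping the inputs byte-identical is what makes the comparison fair: any difference in the outputs is attributable to the model, not the prompt.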
The results were interesting, to say the least.
Nancy's 3 months of training weren't lost. By asking the AI to document its own learned patterns, she captured behaviors that took dozens of conversations to develop. The extraction process works.
Generic instructions like 'keep my voice' fail across LLMs. Specific examples like 'open with "Let's be real"; use "here's the thing" for transitions' work everywhere. The more specific the extraction, the better the portability.
The same extracted prompt produced 85-95% match across Claude, ChatGPT, Gemini, and Grok. Claude and ChatGPT were most accurate. Gemini needed tone adjustment. Grok surprised with voice capture. All produced usable results.
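One way to arrive at a match percentage like 85-95% is a simple rubric: check each documented pattern against the refined output and report the fraction satisfied. A sketch with illustrative criteria (the real comparison rubric is in the testing guide mentioned below):

```python
# Sketch of a comparison rubric: fraction of documented patterns
# that a model's refined output actually exhibits.

def rubric_score(output: str, criteria: dict) -> float:
    """`criteria` maps a label to a predicate over the output text."""
    passed = sum(1 for check in criteria.values() if check(output))
    return passed / len(criteria)

# Illustrative checks, mirroring Nancy's documented patterns.
criteria = {
    "opens casually": lambda t: t.lower().startswith("let's be real"),
    "signature transition": lambda t: "here's the thing" in t.lower(),
    "includes disclaimer": lambda t: "not financial advice" in t.lower(),
}
```

Scoring each model's output against the same checklist turns "Gemini needed tone adjustment" from an impression into a number you can track across prompt revisions.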
Financial content requires accuracy + empathy. Nancy's extraction needed to capture 'be encouraging' AND 'verify tax info accuracy' AND 'add disclaimers where needed.' Subject-specific requirements must be explicitly documented.
Nancy's process works for ANY trained instance: coding assistants, writing editors, research helpers, tutors, brainstorming partners. If you've spent time training an AI, you can extract and replicate it. This isn't just for blog refinement.
The full 800-line system prompt Nancy extracted from her trained Claude instance, showing role definition, voice rules, platform optimization, and quality checks for financial content.
A fill-in-the-blanks template for extracting YOUR OWN trained AI instance. Includes the self-documentation prompt, conversion instructions, and testing checklist.
Nancy's methodology for testing extracted prompts across Claude, ChatGPT, Gemini, and Grok. Includes comparison rubric and adjustment strategies.