Prisca Onyebuchi


From Trained Instance to Portable Prompt: Extracting Nancy's Blog Refiner

How a financial blogger captured 3 months of AI training and made it work across Claude, ChatGPT, Gemini, and Grok

November 14, 2025 · 12 min read
LLM Instance Cloning · Prompt Engineering · AI

Overview

If you haven't read the blog post where I introduced the concept of LLM Instance Cloning, I suggest you do. There, I shared my own experience: I had the perfect Claude instance for refining my blog posts, lost it when I hit the message limit, and learned to extract its "personality" so I could recreate it anywhere.

But here's what I didn't share: the actual extracted prompt. It contained too many sensitive details, particularly about my portfolio structure, and there was no way I was going to publish it without redacting it so heavily that it would lose its instructional value.

So instead, I'm showing you the process itself through Nancy Evans, a fictional financial literacy blogger who faced the same problem. (For the rest of this case study, when I say "Claude instance," I mean a single chat conversation where Claude learns your preferences and requirements through a series of back-and-forth interactions.) Nancy spent 3 months training a specific Claude instance to refine her blog posts perfectly, then extracted that trained behavior into a reusable prompt that works across Claude, ChatGPT, Gemini, and Grok.

This case study demonstrates:

1. What the extraction conversation looks like
2. How to identify what makes your instance "yours"
3. How to package trained behavior into a portable prompt
4. How to test it across different LLMs
5. How YOU can do this for your own trained instances

The Project

Nancy Evans is a 32-year-old financial literacy advocate who left corporate accounting to teach young professionals, aged 25 to 40, how to manage money without shame or confusion. She writes about budgeting, debt payoff, investing basics, and money mindset across three platforms:

1. Personal Blog (Next.js) - Comprehensive guides and tutorials (2,000-3,500 words)
2. Substack Newsletter - Weekly insights and personal stories (800-1,500 words)
3. LinkedIn Articles - Professional thought leadership (1,000-1,800 words)

Each platform serves a different purpose and requires different optimization. Her blog needs depth and SEO, her Substack needs engagement and vulnerability, and her LinkedIn needs credibility and polish.

Nancy's voice is distinct: she opens with "Let's be real..." about 60% of the time, uses "Here's the thing..." as her primary transition phrase, always includes at least one personal money story per post, and never uses shame-based language about financial decisions. She balances empathy with accuracy, encouraging readers while ensuring all financial advice is factually correct and carries appropriate disclaimers.

Over 3 months of back-and-forth conversations, Nancy trained a single Claude instance to refine her rough drafts while preserving exactly these characteristics. The AI learned her patterns, her voice, her platform requirements, and her subject-matter needs. It became her perfect blog refiner.

Then she hit the conversation limit and faced losing all that training.

The Challenge

Nancy's trained Claude instance was irreplaceable. It knew things that would take months to teach a new AI:

  • Her exact voice patterns ("Let's be real," "Here's the thing," "I'm not saying it's easy, but it's simple")
  • When to be vulnerable vs. when to be instructional
  • How to balance financial accuracy with encouraging tone
  • Platform-specific optimization strategies for blog, Substack, and LinkedIn
  • Where to add disclaimers without killing the conversational flow
  • Her audience's pain points and how she addresses them

Starting a new conversation meant starting from scratch. She'd have to:

  • Re-explain her voice preferences dozens of times
  • Correct the same mistakes repeatedly (too formal, too generic, too preachy)
  • Rebuild trust in the AI's understanding of financial content requirements
  • Retrain platform-specific optimization strategies

But there was another problem: Nancy wanted to experiment with other LLMs. What if ChatGPT was better at structure? What if Gemini caught more accuracy issues? What if Grok captured casual tone more naturally? She was locked into Claude, not necessarily because it was the best for every task, but because that's where her training lived.

The challenge became: How do you capture months of training in a format that's portable across any LLM? How do you extract what an AI has learned about YOU and package it into instructions that any AI can follow?

This is the core problem that LLM Instance Cloning solves.

The Constraints

1. Privacy First

I cannot share my actual blog refinement prompt, as it contains personal portfolio references and details about my writing style. Instead, I created Nancy Evans, a financial literacy blogger, as a relatable fictional character.

2. Three-Platform Strategy

Unlike me, who only writes blog posts for this portfolio website, Nancy writes on 3 different platforms and therefore needs different outputs: detailed blog guides for teaching, personal Substack stories for connection, and professional LinkedIn articles for credibility. Each platform serves a different audience goal.

3. Financial Accuracy Matters

Unlike creative writing, financial advice requires fact-checking, up-to-date tax info, and ethical disclaimers. All of her posts must ensure accuracy while maintaining an encouraging tone.

4. General Audience Appeal

Financial literacy affects everyone regardless of career or background. This case study needed to demonstrate LLM Instance Cloning with a universally relatable scenario.

These constraints weren't arbitrary; they reflect the real challenge of extraction. Privacy protection forced demonstration through adaptation rather than direct sharing. Multi-platform requirements tested true portability beyond a single use case. Financial accuracy added subject-matter complexity that generic extraction can't ignore. And general audience appeal ensured the technique's value extends beyond niche technical scenarios.

My Approach

The extraction happened in three distinct phases, each building on the previous one.

Phase 1: Self-Reflection

First, I needed Nancy's trained Claude instance to document what it had learned. Not generic statements like "I keep your voice," but specific, actionable patterns like "You use 'Let's be real' in 60% of post openings."

I asked the trained instance to analyze:

  • Patterns it noticed in Nancy's writing style
  • Rules it followed when refining her drafts
  • Voice characteristics it preserved
  • Platform-specific optimizations it applied
  • Quality checks it performed before delivering

Critical insight: The first response was too vague. I had to ask "be MORE specific" twice before getting usable detail. Generic descriptions don't transfer well. Specific examples do.
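The self-documentation request can be as simple as the following. This is a paraphrased sketch of the kind of prompt used in Phase 1, not Nancy's verbatim wording:

```text
You have been refining my blog drafts in this conversation for months.
Analyze everything you have learned about me and document:

1. Patterns you've noticed in my writing style, with examples and
   rough frequencies (e.g. "opens with X in ~60% of posts")
2. Rules you follow when refining my drafts
3. Voice characteristics you preserve
4. Platform-specific optimizations you apply
5. Quality checks you perform before delivering

Be specific. Generic statements like "I keep your voice" are not
useful; exact phrases, percentages, and before/after examples are.
```

Asking for frequencies and before/after examples up front reduces the number of "be MORE specific" follow-ups needed.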

Phase 2: Behavior to Instructions

Once the AI documented its learned behavior, I needed to convert that analysis into a system prompt: a comprehensive set of instructions that any fresh AI instance could follow to replicate the trained behavior.

This meant translating:

  • "I've noticed you..." → "You are Nancy's blog editor and you should..."
  • Observed patterns → Explicit rules with examples
  • Implied preferences → Clearly stated requirements
  • Context from 3 months → Zero-context instructions

The goal: A prompt detailed enough that a brand new AI with NO history could read it and immediately understand how to refine Nancy's content exactly as the trained instance did.
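In practice, the translation looks like this. The excerpt below is illustrative, assembled from the voice rules described earlier, not a passage from the actual 800-line prompt:

```text
Observation (trained instance):
  "I've noticed you open roughly 60% of your posts with 'Let's be
   real...' and use 'Here's the thing...' as your main transition."

Converted instruction (portable system prompt):
  You are Nancy's blog editor. Open posts with "Let's be real..."
  in about 60% of cases. Use "Here's the thing..." as the primary
  transition phrase. Never use shame-based language about financial
  decisions; when giving specific financial advice, append a brief
  disclaimer without breaking the conversational flow.
```

Note the shift from past-tense observation to imperative, zero-context instruction: the new AI doesn't need to know *how* the pattern was discovered, only what to do with it.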

Phase 3: Cross-Platform Testing

The final phase tested whether the extracted prompt actually worked, and more importantly, whether it was truly portable.

I took the extracted prompt and tested it in:

  • Claude (new conversation)
  • ChatGPT 4o
  • Gemini Advanced
  • Grok

Same input draft. Same refinement request. Different AI models.

The results are interesting, to say the least.
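A comparison rubric for this kind of cross-LLM test can be sketched as a simple checklist scorer: each model's refined draft is checked against the same criteria, and the percentage met becomes its "match" score. The criteria names below are illustrative assumptions drawn from Nancy's voice rules, not her actual rubric:

```python
# Minimal sketch of a cross-LLM comparison rubric: score each model's
# output against a fixed checklist and report the percentage of
# criteria it satisfied.

CRITERIA = [
    "opens with a signature phrase ('Let's be real...')",
    "uses 'Here's the thing...' as a transition",
    "includes at least one personal money story",
    "avoids shame-based language",
    "adds disclaimers near specific financial advice",
    "respects the platform's word-count range",
]

def match_score(checks: dict) -> float:
    """Return the percentage of rubric criteria the output satisfied."""
    met = sum(1 for c in CRITERIA if checks.get(c, False))
    return round(100 * met / len(CRITERIA), 1)

# Example: a model that hits 5 of 6 criteria scores 83.3%
scores = {c: True for c in CRITERIA}
scores[CRITERIA[3]] = False  # say it slipped into shame-based language
print(match_score(scores))   # 83.3
```

Manually ticking the same checklist for each model keeps the comparison honest, since "sounds right" is easy to grade inconsistently across four different outputs.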


Key Findings

Trained Behavior IS Extractable

Nancy's 3 months of training weren't lost. By asking the AI to document its own learned patterns, she captured behaviors that took dozens of conversations to develop. The extraction process works.

Specificity Makes Prompts Portable

Generic instructions like "keep my voice" fail across LLMs. Specific examples like "I open with 'Let's be real...' and use 'Here's the thing...' for transitions" work everywhere. The more specific the extraction, the better the portability.

Cross-LLM Results Vary (But All Work)

The same extracted prompt produced 85-95% match across Claude, ChatGPT, Gemini, and Grok. Claude and ChatGPT were most accurate. Gemini needed tone adjustment. Grok surprised with voice capture. All produced usable results.

Subject Matter Adds Complexity

Financial content requires accuracy + empathy. Nancy's extraction needed to capture both 'be encouraging' AND 'verify tax info accuracy' AND 'add disclaimers where needed.' Subject-specific requirements must be explicitly documented.

Anyone Can Do This

Nancy's process works for ANY trained instance: coding assistants, writing editors, research helpers, tutors, brainstorming partners. If you've spent time training an AI, you can extract and replicate it. This isn't just for blog refinement.

Download Resources

Nancy's Complete Extracted Prompt

The full 800-line system prompt Nancy extracted from her trained Claude instance, showing role definition, voice rules, platform optimization, and quality checks for financial content.

Markdown · 15 KB

Blank Extraction Template (Your Turn)

A fill-in-the-blanks template for extracting YOUR OWN trained AI instance. Includes the self-documentation prompt, conversion instructions, and testing checklist.

Markdown · 6 KB

Cross-LLM Testing Guide

Nancy's methodology for testing extracted prompts across Claude, ChatGPT, Gemini, and Grok. Includes comparison rubric and adjustment strategies.

PDF · 420 KB

Related Content

Related Blog Post

Explore more insights and details in the accompanying blog post.