Learning Through Documented Experiments
In-depth explorations of prompt engineering techniques, AI limitations, and real-world applications. Each case study includes interactive examples, bilingual videos, and downloadable resources.
What Are Case Studies?
Unlike traditional blog posts, these case studies are detailed experiments where I test prompt engineering techniques, document every step, analyze results critically, and share everything—including what didn't work.
Each study includes:
- Bilingual video walkthroughs (EN & FR)
- Downloadable prompts and templates
- Before/after interactive comparisons
- Expandable deep-dive sections
- Live demos and deployed applications
- Detailed analysis and key findings
All Case Studies (2)

From Trained Instance to Portable Prompt: Extracting Nancy's Blog Refiner
How a financial blogger captured 3 months of AI training and made it work across Claude, ChatGPT, Gemini, and Grok
I had the perfect blog refiner. Then I lost it. Here's how I used prompt extraction to capture my trained AI's personality and make it work everywhere.

Meta-Prompting in Action: What Happens When You Let Claude Code Redesign Your App
A brutally honest experiment in trusting AI without human oversight
I asked Claude to write detailed prompts for Claude Code, then executed them without any testing or modifications. Here's what worked brilliantly—and what broke completely.