The Art of Meta-Prompting and the Launch of Case Studies
Published: October 25, 2025 • 6 min read
In my last technical blog post, I mentioned that the next one would cover the Prompt Engineering strategies I was just learning about. The thing is, there are quite a few cool new strategies and techniques that could cut my development time even further.
My initial thought was to simply write about all the new prompting techniques. However, the last thing I would want to do is bore you with an extremely long blog post like that. So I decided to take them one at a time, starting with Meta-Prompting.
What is Meta-Prompting?
Meta-Prompting, also known as Prompting About Prompting, is when you ask the model you're working with to help you write better prompts or to critique your prompting approach. In some ways, I've used this approach in the past, but never in as sophisticated a manner as in my recent experiment.
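To make the idea concrete, here is a minimal sketch of the pattern: instead of sending your task prompt straight to the model, you wrap it in a second prompt that asks the model to critique and improve it first. The `ask_model` function and the template wording are my own illustrative stand-ins (a real version would call an actual model API), not anything from a specific library.

```python
def ask_model(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an LLM API request)."""
    return f"[model response to: {prompt[:40]}...]"

# The meta-prompt: a prompt whose subject is another prompt.
META_TEMPLATE = (
    "You are a prompt engineering expert. Critique the prompt below, "
    "then rewrite it to be clearer and more specific.\n\n"
    "PROMPT:\n{draft}"
)

def improve_prompt(draft: str) -> str:
    # Ask the model about the prompt itself, not about the underlying task.
    return ask_model(META_TEMPLATE.format(draft=draft))

print(improve_prompt("Make my app look nicer."))
```

The key point is the indirection: the model's output is a better prompt, which you then use (or chain into) the actual task.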
Remember the Prompt Engineering Toolkit I'm working on? Well, regardless of what it looks like right now, it's still very much in its early stages of development. The UI/UX design trend I chose for it is Neobrutalism with Gradient Accents. I picked this trend because I wanted a clear visual distinction from this portfolio website, which uses Glassmorphism, while still working with a trend that is modern, aesthetically pleasing, and appropriate for an application meant to portray serious technical work.
Now, I don't promise that I'll stick to this design trend as I continue to work on the Prompt Engineering Toolkit. However, I decided to test out Meta-Prompting by combining this approach with prompt decomposition and prompt chaining.
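Decomposition and chaining can be sketched in a few lines: break the big migration into small, ordered prompts, then feed each one to the model in sequence, passing the previous result along as context. The step names below are illustrative, and `run_step` is a stub standing in for a real model call (in my case, Claude Code in the terminal).

```python
def run_step(prompt: str, context: str = "") -> str:
    """Placeholder for a real model call; `context` carries the prior result."""
    return f"done: {prompt}"

# Decomposition: one large migration task split into ordered subtasks.
steps = [
    "Audit the current design tokens (colors, spacing, typography).",
    "Define a Glassmorphism token set matching the portfolio site.",
    "Migrate shared components to the new tokens.",
    "Add micro-interactions (hover, focus, transitions) to each component.",
]

# Chaining: each step receives the previous step's output as context.
context = ""
for prompt in steps:
    context = run_step(prompt, context)
    print(context)
```

The payoff is that each prompt stays small and checkable, and a failure in one step doesn't poison the whole migration.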
Enter Claude Code: My First Terminal Love Story
To effectively test out this approach, I decided that the most effective way for me to do it was with Claude Code from the terminal.
Now here's the thing: for most of my prompting and AI work, I've used Projects and plain chats on the Claude website. Every once in a while, I would get a notification from Claude suggesting I try Claude Code, but I never did. I'm not proud of that resistance, because I try to stay open and flexible to new possibilities. I definitely wish I had tried it earlier.
Claude Code is seriously a game changer.
The Experiment Setup
Here's exactly how I used it. On the web, within a chat in a specific project, I asked the following question:
"I would like to migrate the design system of my Prompt Engineering Toolkit, but I don't want to do that with you exactly. I would like to use Claude Code from within my terminal to accomplish this. So I would like you to design a game plan for accomplishing this migration. Remember, I would like to use the same design trend and theme from my Personal portfolio website while also ensuring that there are a lot of micro-interactive elements on the website that makes it enjoyable to use and gives the user a smooth, soothing experience as they navigate the page. So design the plan and the step-by-step prompts I would have to provide to Claude Code to accomplish this."
The result was an artifact containing 13 step-by-step prompts, which I pasted into my terminal one at a time and simply let Claude Code do its thing.
The Surreal Experience
This was my first time coding in this manner, that is, letting the tool actually make changes to my code, and the experience felt surreal. As I pasted each prompt and watched Claude Code in action, I could not help but wonder: if this actually works out fine, that would be so freaking cool!
When I started this process of copying and pasting prompts to Claude Code, all that was going on in my head was "Oh my God, this is so exciting, I may never have to write a line of code ever in my life again."
Then, when I finished the experiment, I concluded that we are not yet at the point where we can fully rely on AI.
The Reality Check
The "meta-prompts" generated by the model, which I eventually pasted in my terminal for Claude Code to execute, were good, but there were definitely some gaps. I had to resist the temptation to modify the prompts generated by the model, just so I could present a real, raw analysis.
Now, I didn't fully resist the temptation as I did add one more prompt, but you'll see why that was necessary in the case study.
I could go on about the good, the gaps, the faults, and the quirks right here, but a simple blog post would not do justice to the nuances and complexities I observed in how Claude Code executed this design migration.
Introducing: Case Studies
Therefore, I launched a Case Study section here in this portfolio.
Now here's exactly what the goal of the case study is. Remember when I worked at Outlier, building and reviewing over 65 fully functional applications? My job at the time was to examine applications generated by multiple competing models, rank them, and make edits to create gold-standard applications that surpassed the highest-ranked model while fulfilling all explicit and implicit requirements in the prompt. Then, when I was promoted to reviewer, I had to extensively test other people's applications, identify the faults, gaps, and issues, and provide detailed feedback to the original attempter.
This case study is inspired by that workflow, except it's a lot more in-depth.
What to Expect in the Case Studies
I will provide you with a detailed analysis of what works and what doesn't, from three perspectives:
- A technical development perspective
- A user experience perspective
- A UI/UX design perspective
The content, including text and videos, will be available in both English and French.
Here you'll find the link to the state of the PEK application before I applied the meta-prompts, and here you'll find the state of the application after. The case study should be available by the end of today or tomorrow, and you should be able to access it here once it's completed.
What This Means for My Development Workflow
This experiment opened my eyes to a few things:
- Meta-prompting works, but it requires human oversight and iteration
- Claude Code is powerful, but it's a tool, not a replacement for developer thinking
- Documentation matters, which is why I'm launching case studies to capture these learnings
Honestly, this feels like the beginning of something bigger. I'm not just building projects anymore; I'm building a methodology for working with AI as a development partner. And that's the kind of skill that's going to matter more and more as these tools evolve.
Once again, thank you for reading, and I'll see you in the next post (and in the case study)!
P.S. If you're wondering whether I actually had to write any code during this experiment... well, that's exactly what the case study will reveal. Stay tuned!