The Prompt Engineering Vocabulary I Accidentally Mastered (Without Knowing the Names)
Published: October 17, 2025 • 12 min read
Over the past two years, I've worked with a range of generative AI tools. I started with the most popular one, of course, ChatGPT, but since then, across my personal life, academic career, and professional prompt engineering work, I've experimented with different ways and methods of writing prompts in multiple tools, including but not limited to Gemini, Grok, Copilot, and Claude.
How I Discovered "Prompt Engineering"
I stumbled upon the term prompt engineering while trying to decide what to call the work I was doing at the time, training AI models in mathematics. My official title at Outlier was AI Trainer in the Mathematics domain.
That title sounded nice, but I was looking for something more professional, I guess? Especially since I wanted to add it to my LinkedIn profile. I remember describing the work I was doing to ChatGPT and then asking it what the work was called. There and then, I discovered the phrase "Prompt Engineering."
My favorite part about this phrase 2 years ago was that it had the word "Engineering" in it. You know, that made me feel smart!
The Realization
Now, as I try to stay current with the extremely fast pace at which AI is growing, I've been reading articles and blogs on Prompt Engineering, especially since I see people using these big words and I have no clue what they mean... or so I thought.
You see, despite really liking the term "Prompt Engineering," I think I've taken it for granted in some ways by never taking the time to actively study the processes I was using. Like I mentioned earlier, I've experimented with a lot of ways of writing prompts, and not just writing prompts, but even setting up what I'd call a version of an AI Agent using Claude.
So let's talk about all the prompt engineering techniques I've mastered without knowing their official terms:
The Techniques I've Been Using All Along
Zero-Shot Prompting
I believe this is the default method of writing prompts that most people use, myself included. It's when we give the AI a task to accomplish without providing any examples of what the finished product should look like.
When it works well:
- Simple prompts that don't require a specified format for the output
When you need to be more specific:
- For more complicated requests where you need the model to produce output in a very specific format, you have to be thorough and explicit in how you write the prompt
Related techniques:
- Few-shot prompting: Providing a handful of examples of your desired output
- Many-shot prompting: Providing a large number of examples rather than just a few
My professional use case: When I write a prompt for a specific website and ask the model to draw up a detailed plan for that website without implementing any code, that's zero-shot prompting in action.
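To make the contrast with few-shot concrete, here's a tiny sketch of the two side by side. The product-description task and the example strings are made up purely for illustration; the point is only that the few-shot version carries worked examples of the output I want.

```typescript
// Zero-shot: the task alone, no examples of the finished product.
const zeroShotPrompt = `
Write a one-sentence product description for a reusable water bottle.
`;

// Few-shot: the same task, preceded by a couple of examples that pin down
// the exact tone and format I expect back.
const examples = [
  {
    product: "noise-cancelling headphones",
    description: "Silence the commute: headphones that turn a crowded train into a private studio.",
  },
  {
    product: "standing desk",
    description: "Work at your own height: a desk that moves when you do.",
  },
];

const fewShotPrompt = `
Write a one-sentence product description in the same style as these examples:

${examples.map((e) => `Product: ${e.product}\nDescription: ${e.description}`).join("\n\n")}

Product: reusable water bottle
Description:`;
```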
Prompt Chaining (Sequential Prompting)
This is when you break a complex task into a sequence of prompts, where each prompt's output is fed into the next one.
When building applications using AI, this is definitely a recommended step. Most models do not have the capacity to accurately execute all the requirements for a very complex application in one shot. Therefore, it's a good idea to plan it all out.
Why I use this: If you've ever built a complex application with a tool like Claude, you've probably hit the chat length limit and had to start a new chat. But that new chat doesn't have the context of the previous one, where you systematically asked multiple questions and debugged.
Of course, you can attach the original code you've already written using the GitHub attachment tool, but you could also use prompt chaining.
Example: Building a Professional Development Tracker
Consider this complex prompt:
My team members need to track their personal development initiatives and professional growth objectives. This includes pursuing industry certifications, completing online courses, developing technical skills, or working toward promotion requirements. I want to motivate them to decompose these objectives into manageable milestones, requiring at least 3 concrete action items per objective. Can you help me create a professional development tracker for this purpose? Include visual progress indicators, analytics dashboards, and celebratory notification messages to maintain engagement, and I'd like the interface to feature smooth micro-animations and interactive elements. The application should launch in a preview mode displaying sample data from a demo account that users can explore, but editing capabilities should be restricted until they authenticate with valid credentials. Skip the registration functionality since this is a prototype. Return the complete code implementation using NextJS. The implementation should be fully contained within a single TSX file.
The above prompt could be executed by the model in a single step, but here's how I'd break it down using prompt chaining:
Chain 1: Blueprint & Specification
I need to build a professional development tracker for team members. The core features should include:
- Tracking professional growth objectives (certifications, courses, skills)
- Breaking objectives into at least 3 actionable milestones
- Visual progress indicators and analytics
- Motivational notifications
- Micro-interactions
- Demo mode with sample data (read-only)
- Login required for editing (no signup)
- NextJS, single TSX file
Can you help me create a technical specification document that outlines:
1. The data structure needed
2. Main components and their responsibilities
3. State management approach
4. UI/UX requirements
Chain 2: Data Modeling
Based on this specification: [paste output from Chain 1]
Define the TypeScript interfaces and types for:
1. User object (demo user vs authenticated user)
2. Development objective structure
3. Milestone/action item structure
4. Progress tracking data
5. Any authentication state needed
Provide the complete type definitions with comments explaining each field.
Chain 3: Component Architecture
Using these type definitions: [paste output from Chain 2]
Design the component hierarchy for the application. For each component, specify:
1. Component name and purpose
2. Props it receives
3. State it manages
4. Key interactions/behaviors
5. Which child components it renders
Focus on separation of concerns and reusability.
Chain 4: Sample Data Generation
Based on these data structures: [paste relevant types from Chain 2]
Create realistic sample/demo data that includes:
1. A demo user profile
2. 3-4 professional development objectives with variety (certification, course, skill)
3. Each objective should have 3-5 milestones at different completion stages
4. Make the data motivating and realistic for a professional development context
Return this as a TypeScript constant that can be imported.
Chain 5: Authentication & State Logic
Using this component structure: [paste output from Chain 3]
And these types: [paste relevant types from Chain 2]
Implement the authentication and global state management logic:
1. Demo mode vs authenticated mode handling
2. State for storing objectives and milestones
3. CRUD operations for objectives (create, update, delete, complete)
4. Progress calculation logic
5. Local storage persistence (for authenticated users only)
Provide the hooks and utility functions needed.
Chains 6-10 would continue with:
- UI Components (Layout & Navigation)
- UI Components (Objectives Management)
- Data Visualization
- Motivational Elements
- Final Integration & Assembly
By breaking it down this way, you maintain context, avoid hitting length limits, and can debug each piece individually.
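If you ever want to run a chain like this programmatically instead of pasting outputs by hand, the core idea is just a loop that interpolates each step's output into the next step's prompt. Here's a rough sketch, where callModel is a stand-in for whichever API client you actually use (it's not a real library call):

```typescript
// `callModel` is a placeholder: it just needs to take a prompt string and
// return the model's text response.
declare function callModel(prompt: string): Promise<string>;

// Each step is a function that turns the previous step's output into the next prompt.
const chain: Array<(previousOutput: string) => string> = [
  () =>
    `I need to build a professional development tracker for team members. ` +
    `Can you help me create a technical specification document?`,
  (spec) =>
    `Based on this specification: ${spec}\n\nDefine the TypeScript interfaces and types for the application.`,
  (types) =>
    `Using these type definitions: ${types}\n\nDesign the component hierarchy for the application.`,
  // ...Chains 4-10 follow the same pattern.
];

async function runChain(): Promise<string> {
  let output = "";
  for (const buildPrompt of chain) {
    // Feed the previous step's output into the next prompt, exactly as you
    // would by pasting it into a new chat.
    output = await callModel(buildPrompt(output));
  }
  return output; // the final assembled result
}
```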
System Prompts vs User Prompts
What most of us are used to writing are User prompts: the individual requests you make within a conversation with a model.
System prompts are different. They set the AI's behavior or role for an entire conversation or an entire project. If you use Projects in Anthropic Claude, you know that you can provide an overall instruction for a project which will set the AI's role. Then, as you ask questions in chats within that project (User prompts), the system prompt is always adhered to.
Example: The System Prompt I Wish I'd Written
When building this portfolio, I wish I had written this system prompt before starting:
You're a Next.js 14 expert specializing in TypeScript, Tailwind CSS, and responsive design. You follow atomic design principles, prefer server components over client components, and always account for HTML entity encoding in JSX before I ask.
This system prompt would have saved me that HTML entity debugging nightmare where I had to manually replace apostrophes with &apos; throughout my code before deployment.
Example: My French Learning System Prompt
Here's a system prompt I'm using for a specific Claude Project designed to help me get to C2 level in French:
I am at a B1+ level in French. I want to get proficient in French and get to a C2 level. Your job is to help me get there by all means necessary. For every piece of instruction or text response I send to you in French, you must correct any grammatical errors in it and write out the corrected sentence to me briefly before responding to the question or instruction. You must also grade its level BETWEEN A1 - C2. You must not respond to any questions typed in English unless specified that you can in that particular chat. Instead, you must tell me to type my instruction in French. Every once in a while, if you notice that I send a piece of text to you that can be better expressed in French either by a saying or an idiom popular in French, you must tell me what that is. But try to keep things brief unless I ask you to explain further.
With the above system prompt, when I send this message in a chat:
"I found a partner on ConversationExchange.com, yayy!"
The response I get is:
"Please type your instruction in French! I'm here to help you reach C2 level, so let's practice. 😊 Try telling me about finding your partner in French!"
Prompt Templates
This is essentially a reusable prompt structure that I know will reliably produce my desired output.
When I wrote prompts at Outlier, I knew there were specific requirements I had to state in every prompt to make sure it produced the required output.
My standard requirements:
- File structure: "The solution should be self-contained within a single TSX file" (or HTML file when building HTML+CSS+JavaScript applications)
- Design trend: Specify whether I wanted 3D Design, Neobrutalism, Glassmorphism, Neumorphism, etc.
- Color palette: Specific colors or theme specifications
My Prompt Template Structure:
[CONTEXT/ROLE]
[Who needs this and why - sets the scene]
[CORE FUNCTIONALITY]
[Main features and requirements - what it should do]
[USER INTERACTION REQUIREMENTS]
[Specific interaction patterns, minimum requirements]
[AUTHENTICATION/STATE REQUIREMENTS]
[Demo mode, login requirements, data persistence needs]
[VISUAL/DESIGN SPECIFICATIONS]
- Design Style: [3D Design/Neobrutalism/Glassmorphism/Neumorphism/Material Design/etc.]
- Color Palette: [Specific colors or theme]
- UI Elements: [Charts, graphs, animations, micro-interactions, etc.]
- Feedback Mechanisms: [Toast messages, modals, celebrations, etc.]
[TECHNICAL CONSTRAINTS]
- Framework: [NextJS/React/Vue/etc.]
- File Structure: [Single TSX file / Single HTML file / etc.]
- Libraries: [Specific libraries if needed]
- No [signup/backend/database/etc.] implementation
[OUTPUT REQUEST]
Return the complete [type of solution] fully self-contained within a single [TSX/HTML] file.
Example Prompt Using This Template:
My freelance clients need to track their project timelines and deliverables. This includes managing multiple client projects, tracking milestones, monitoring deadlines, and organizing deliverables. I want to encourage them to break down projects into phases, requiring at least 3 phases per project. Can you help me build a project timeline tracker? Users should be able to add projects, define phases, mark deliverables as complete, and view progress. The default state should display a demo account with 2-3 sample projects that users can explore read-only. Editing capabilities should require login with hardcoded credentials.
Design Specifications:
- Design Style: Glassmorphism
- Color Palette: Deep purples and blues with frosted glass effects
- UI Elements: Timeline visualizations, progress rings, Gantt-style charts
- Feedback Mechanisms: Celebratory toast notifications when milestones are hit
Technical Specifications:
- Framework: NextJS
- File Structure: The solution should be self-contained within a single TSX file
- Libraries: Use Recharts for visualizations, Lucide React for icons
- No signup functionality since this is a demo application
Return the complete code solution using NextJS fully self-contained within a single TSX file.
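If you reuse a template like this often, it can even live in code. Here's a quick sketch of the same structure as a typed TypeScript function; the field names are my own shorthand for the bracketed sections above, not an official schema.

```typescript
interface PromptTemplateInput {
  context: string;            // who needs this and why
  coreFunctionality: string;  // main features and requirements
  interactions: string;       // interaction patterns, minimum requirements
  authAndState: string;       // demo mode, login, persistence
  designStyle: "Glassmorphism" | "Neobrutalism" | "Neumorphism" | "3D Design" | "Material Design";
  colorPalette: string;
  uiElements: string[];
  feedbackMechanisms: string[];
  framework: "NextJS" | "React" | "Vue";
  fileStructure: "single TSX file" | "single HTML file";
  libraries: string[];
  exclusions: string[];       // e.g. ["signup", "backend", "database"]
}

// Fills every slot of the template so no requirement gets forgotten.
function buildPrompt(input: PromptTemplateInput): string {
  return [
    input.context,
    input.coreFunctionality,
    input.interactions,
    input.authAndState,
    `Design Specifications:
- Design Style: ${input.designStyle}
- Color Palette: ${input.colorPalette}
- UI Elements: ${input.uiElements.join(", ")}
- Feedback Mechanisms: ${input.feedbackMechanisms.join(", ")}`,
    `Technical Specifications:
- Framework: ${input.framework}
- File Structure: The solution should be self-contained within a ${input.fileStructure}
- Libraries: ${input.libraries.join(", ")}
- No ${input.exclusions.join("/")} implementation`,
    `Return the complete code solution using ${input.framework}, fully self-contained within a ${input.fileStructure}.`,
  ].join("\n\n");
}
```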
Retrieval-Augmented Generation (RAG)
While this term sounds fancy, it really isn't. It simply means giving the AI access to external knowledge sources it wouldn't otherwise have, so it can retrieve relevant information before generating a response.
You do this every time you:
- Attach a file containing information that you believe the model will need to effectively respond to your question
- Create a project in Claude and add context files, which gives the model the ability to perform RAG in every chat within that project
It's that simple!
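If you're curious what happens under the hood, here's an intentionally naive sketch of the retrieve-then-generate loop. Real systems use embeddings and a vector database rather than this keyword-overlap score, and the snippets in the knowledge base below are just illustrative.

```typescript
// A toy "knowledge base" standing in for attached files or project context.
const knowledgeBase = [
  "Outlier trainers write and review prompts across multiple domains.",
  "The portfolio site is built with Next.js 14, TypeScript, and Tailwind CSS.",
  "Claude Projects let you attach context files that every chat can reference.",
];

// Retrieval step: score each snippet by how many of the question's words it shares.
function retrieve(question: string, topK = 2): string[] {
  const words = new Set(question.toLowerCase().split(/\W+/));
  return [...knowledgeBase]
    .map((doc) => ({
      doc,
      score: doc.toLowerCase().split(/\W+/).filter((w) => words.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((entry) => entry.doc);
}

// Augmentation step: put the retrieved context in front of the question
// before the prompt ever reaches the model.
function buildRagPrompt(question: string): string {
  const context = retrieve(question).join("\n");
  return `Answer using only the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`;
}
```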
Prompt Decomposition
This is very similar to prompt chaining, but it's more focused on the planning side of things.
For instance, my first prompt when designing this website was to ask for a detailed plan of the folder structure of all the components of my portfolio application. By doing that, I was decomposing the "build a portfolio" mega-task into manageable pieces.
Constraint-Based Prompting
This is one I have had to use a lot, and you probably have as well. It's when we add specific limitations/requirements that guide the AI toward solutions that better meet our expectations.
Examples:
- Design specifications: Adding design requirements when creating a website ensures that the model's output more closely matches my expectations
- Resume restructuring: Adding a constraint like "no more than 6 bullet points for each job experience" pushes the model to extract the information most relevant to the job post and discard the rest
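If you like keeping prompts in code, constraints also work nicely as plain data appended to a base request. A small sketch: the bullet-point limit comes from the resume example above, and the other constraints are my own illustrative additions.

```typescript
// Constraints kept as data and appended to the base request,
// so none of them get forgotten in a rewrite.
const resumeConstraints = [
  "No more than 6 bullet points for each job experience",           // from the example above
  "Keep only achievements relevant to the attached job post",       // illustrative
  "Use strong action verbs and quantify results where possible",    // illustrative
];

const basePrompt = "Restructure my resume for the attached job post.";

const constrainedPrompt = [
  basePrompt,
  "Constraints:",
  ...resumeConstraints.map((c, i) => `${i + 1}. ${c}`),
].join("\n");
```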
The Plot Twist
Oh well, that's all for today. You're probably a prompt engineer yourself if you find that you've used most of these methods.
But there's more!! Yes, in my next blog post, I'll talk about the prompt engineering terms I'm just learning about, the ones I haven't been using intuitively and am now actively studying.
Stay tuned!
This post is part of my "Technical Insights" series, where I demystify complex tech concepts by sharing what I've learned through hands-on experience. Sometimes the best way to understand something is to realize you've been doing it all along.