Pillar 3: Prompt Engineering

Prompting is a core engineering skill that requires practice and intentionality.

The difference between a vague request and a well-structured prompt is often the difference between usable output and wasted time. Prompt engineering is not about memorizing magic phrases. It is about communicating clearly, providing the right context, and understanding how to guide an AI toward the output you need.

Words matter. “Can you take a look at this and see why it’s not working” produces a very different result than “I tested [files] and expected [outcome] but [result] occurred instead. Analyze [files] and identify the root cause. Share your thinking with me.”

You write clear, specific prompts with explicit requirements. Be unambiguous about what you want. Include: what you’re building, what constraints apply, what patterns to follow, what success looks like. Provide examples of the code style you want when it matters.
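Those four elements can be assembled mechanically. A minimal sketch, where the helper name and the example task are illustrative, not part of any real tool:

```python
# Hypothetical helper: assembles the four elements above into one prompt.
def build_prompt(building: str, constraints: str, patterns: str, success: str) -> str:
    return "\n".join([
        f"Task: {building}",
        f"Constraints: {constraints}",
        f"Patterns to follow: {patterns}",
        f"Success looks like: {success}",
    ])

prompt = build_prompt(
    building="a retry wrapper for our HTTP client",
    constraints="no new dependencies; keep the public API unchanged",
    patterns="follow the decorator style used elsewhere in utils/",
    success="transient 5xx errors are retried with backoff; tests pass",
)
print(prompt)
```

Even when you never wrap this in a function, writing prompts against a checklist like this one keeps you from leaving a requirement implicit.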

You use attention markers for critical instructions. Words like IMPORTANT, CRITICAL, and REMEMBER draw AI attention to key requirements in prompts and documentation. Markdown formatting (bold, headers) provides structural emphasis. These are not decoration; they directly affect whether your instructions are followed.
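For illustration, here is the same constraint stated plainly versus with attention markers and structural emphasis (the scenario is made up):

```python
# Sketch: the same requirement, buried vs. emphasized.
plain = "Also make sure not to modify the database schema."

emphasized = (
    "## Constraints\n"
    "**IMPORTANT:** Do NOT modify the database schema.\n"
    "CRITICAL: migrations are owned by another team."
)

print(emphasized)
```

The second form costs a few extra tokens and noticeably raises the odds that the constraint survives a long context.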

You adapt your prompting style to the model family you’re using. Different model families respond differently to the same prompting techniques due to differences in training. Anthropic models respond well to XML tags as structural delimiters and tend to follow instructions as strict rules. OpenAI models are more agnostic to formatting but tend toward verbosity and may treat instructions as guidelines rather than hard constraints, often requiring explicit output length limits or “do not” instructions to stay focused. Read the prompting guides for the models you use most: the Anthropic prompting guide and OpenAI prompting guide both document model-specific behaviors that affect real-world results. A prompt that works perfectly on one model family may need restructuring for another.
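As a sketch of that restructuring, here is one hypothetical debugging task phrased both ways: XML delimiters for an Anthropic model, markdown sections plus an explicit length limit and a "do not" constraint for an OpenAI model. Task and log text are invented for the example:

```python
# Hypothetical task and test log used only for illustration.
task = "Identify the root cause of the failing test and propose a fix."
log = "FAILED test_auth.py::test_token_refresh - AssertionError"

# Anthropic-style: XML tags as structural delimiters.
anthropic_prompt = (
    f"<task>{task}</task>\n"
    f"<test_output>{log}</test_output>\n"
    "<instructions>Analyze the test output, then state the root cause.</instructions>"
)

# OpenAI-style: markdown sections, an explicit output limit,
# and a "do not" instruction to curb verbosity.
openai_prompt = (
    f"## Task\n{task}\n\n"
    f"## Test output\n{log}\n\n"
    "Respond in at most 5 sentences. Do not restate the test output."
)
```

The content is identical; only the scaffolding changes to match what each model family was trained to respect.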

You ask verification questions before letting the AI execute. “Do you understand what needs to be done?” “What files do you need to review before starting?” “What questions do you have for me?” These prompts force the AI to surface its understanding before acting on it, catching misunderstandings early.
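Those questions are cheap to make habitual. A small sketch (helper name is illustrative) that appends them to any task prompt:

```python
# Sketch: force the model to surface its understanding before acting.
VERIFICATION_QUESTIONS = [
    "Do you understand what needs to be done?",
    "What files do you need to review before starting?",
    "What questions do you have for me?",
]

def with_verification(prompt: str) -> str:
    """Ask the model to answer these before making any changes."""
    questions = "\n".join(f"- {q}" for q in VERIFICATION_QUESTIONS)
    return f"{prompt}\n\nBefore you start, answer:\n{questions}"

print(with_verification("Refactor the session cache to use TTL eviction."))
```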

You understand reasoning models and when to allocate deeper thinking. Some models and modes dedicate more computation to step-by-step reasoning before producing output. Claude’s extended thinking, OpenAI’s reasoning models (o1, o3), and similar features across other tools all offer ways to control this reasoning budget.

Use deeper thinking for debugging, architecture decisions, and complex logic. Use lighter modes for routine implementation. Knowing when to pay the extra latency and token cost for reasoning is a practical skill that directly affects output quality.
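That allocation decision can be made explicit as a policy. A toy sketch, assuming an API that accepts a per-request thinking budget in tokens (as Anthropic's extended thinking does); the task categories and numbers are illustrative:

```python
# Illustrative policy: spend reasoning tokens only where they pay off.
DEEP_TASKS = {"debugging", "architecture", "complex-logic"}

def thinking_budget(task_type: str) -> int:
    """Return a token budget for step-by-step reasoning (0 = lighter mode)."""
    return 8000 if task_type in DEEP_TASKS else 0

for task in ("debugging", "routine-implementation"):
    print(task, thinking_budget(task))
```

Even hard-coded, a table like this makes the latency/quality trade-off a deliberate choice rather than a default.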

You use meta-prompting to generate and refine prompts. Meta-prompting is the practice of using the LLM itself to create, critique, and improve prompts. Instead of manually crafting a complex prompt from scratch, ask the model to generate prompt variations, evaluate their quality, and refine based on criteria you define.

This turns prompt creation into an iterative, AI-assisted process. The prompt generator pattern (ask for multiple prompt options), the prompt critic pattern (have the model evaluate its own prompts), and the prompt evolution pattern (refine through multiple generations) are daily tools, not occasional tricks.
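The generator and critic patterns can be sketched as a short loop. `call_llm` below is a stand-in stub for whatever client you actually use, so the example runs without an API key; a real version would make model calls:

```python
# Sketch of meta-prompting: generate prompt variants, then critique them.
def call_llm(instruction: str) -> str:
    # Placeholder stub: a real implementation would call your model API.
    return f"[model response to: {instruction[:40]}]"

def generate_variants(task: str, n: int = 3) -> list[str]:
    """Prompt generator pattern: ask for multiple prompt options."""
    return [call_llm(f"Write prompt variant {i + 1} for: {task}") for i in range(n)]

def critique(prompt: str, criteria: str) -> str:
    """Prompt critic pattern: have the model evaluate a prompt."""
    return call_llm(f"Score this prompt against '{criteria}': {prompt}")

variants = generate_variants("extract TODO comments from a Python repo")
reviews = [critique(v, "specificity and testability") for v in variants]
```

Running the loop again on the best-reviewed variant gives you the prompt evolution pattern.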

You know the established prompting techniques by name and application. The field has a shared vocabulary backed by published research. You should know and apply: zero-shot vs. few-shot prompting (when to include examples and how many), chain-of-thought (eliciting step-by-step reasoning that significantly reduces hallucination rates), ReAct (combining reasoning with tool use, the foundation of agent loops), Reflexion (self-evaluation and correction loops), and image prompting (leveraging multi-modal input for UI work, visual debugging, and diagram interpretation).
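As a concrete instance of combining two of these, here is a few-shot prompt with a chain-of-thought cue. The example pairs are invented for illustration:

```python
# Sketch: few-shot examples plus a chain-of-thought instruction.
examples = [
    ("HTTP 502 from the gateway", "upstream service crashed"),
    ("HTTP 401 after deploy", "rotated API key not updated"),
]

def few_shot_cot(question: str) -> str:
    shots = "\n".join(f"Symptom: {s}\nDiagnosis: {d}" for s, d in examples)
    return (
        f"{shots}\n"
        f"Symptom: {question}\n"
        "Think step by step, then give the diagnosis."
    )

print(few_shot_cot("HTTP 504 only under load"))
```

The examples set the output format; the final line elicits the step-by-step reasoning that chain-of-thought research shows improves accuracy.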

You don’t need to cite the papers, but when someone says “use few-shot with chain-of-thought,” you should know what that means and why it works. See Pillar 0: LLM Foundations for the conceptual grounding behind why these techniques work.

You iterate on prompts rather than iterating through conversation. When AI output misses the mark, the instinct is to keep refining in conversation. The better approach for repeated tasks is to go back and improve the original prompt or documentation. This builds reusable context instead of one-off fixes.

Common anti-patterns to avoid:

  • Vague prompts that leave the AI guessing about requirements, patterns, or scope
  • Never asking the AI to explain its understanding before it starts working
  • Using the same generic prompting style for debugging, code generation, architecture review, and planning
  • Not including screenshots or images when describing UI work (multi-modal input exists, use it)
  • Spending time iterating in conversation when the real fix is improving your rules file or spec