
How to Write Effective Prompts

Prompt engineering is the practice of structuring inputs to large language models to get reliable, high-quality output. This guide covers the techniques that make the biggest practical difference.

1. Role Assignment

Assigning a role to the model shapes the vocabulary, depth, and framing of its response. A model told it is a senior security engineer will apply different heuristics than one given no context.

Before

Without role
Explain memory safety in C++.

After

With role
You are a senior systems programmer with 15 years of C++ experience. Explain memory safety to a mid-level developer who is transitioning from Python. Cover RAII, smart pointers, and undefined behavior. Use short code examples.

The role does three things: sets expertise level, calibrates vocabulary to the audience, and implicitly constrains scope to what a person in that role would actually say.

Tip: Be specific. “Expert software engineer” is less useful than “senior Go engineer at a fintech company focused on latency-sensitive trading systems.”
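The pieces of a role prompt (expertise, audience, scope) can be assembled programmatically. This is a minimal sketch; the function name and parameters are illustrative, not from any SDK.

```python
def build_role_prompt(role: str, audience: str, task: str, topics: list[str]) -> str:
    """Compose a role-based prompt: who the model is, who it is talking to,
    what to cover. Purely illustrative string assembly."""
    topic_list = ", ".join(topics)
    return (
        f"You are {role}. "
        f"Explain the following to {audience}. "
        f"Cover {topic_list}. Use short code examples.\n\n"
        f"Task: {task}"
    )

prompt = build_role_prompt(
    role="a senior systems programmer with 15 years of C++ experience",
    audience="a mid-level developer transitioning from Python",
    task="Explain memory safety in C++.",
    topics=["RAII", "smart pointers", "undefined behavior"],
)
```

Keeping the role, audience, and topics as separate parameters makes it easy to reuse one template across many tasks.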

2. Task Specification

Vague instructions produce vague results. The model will pick the most likely interpretation, which may not be yours. State your goal, constraints, and non-goals explicitly.

Vague
Summarize this document.
Specific
Summarize the following 5000-word product spec in 3 bullet points. Each bullet must be under 20 words. Focus on what changes, not what stays the same. Ignore the appendix.

Key constraints to specify:

  • Length (word count, line count, number of items)
  • Scope (what to include, what to exclude)
  • Audience (developer, executive, end user)
  • Tone (formal, concise, conversational)
  • Format (bullet list, table, prose, JSON)
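The constraint list above can be attached to any task mechanically. A hedged sketch, assuming constraints are simple name/value strings; the helper name is made up for illustration.

```python
def with_constraints(task: str, **constraints: str) -> str:
    """Append explicit constraints (length, scope, audience, tone, format)
    to a task as a labeled list."""
    lines = [task, "", "Constraints:"]
    lines += [f"- {name.capitalize()}: {value}" for name, value in constraints.items()]
    return "\n".join(lines)

prompt = with_constraints(
    "Summarize the following product spec.",
    length="3 bullet points, each under 20 words",
    scope="focus on what changes; ignore the appendix",
    audience="executives",
    format="bullet list",
)
```

Spelling each constraint out on its own line keeps the prompt auditable: you can diff two versions and see exactly which constraint changed.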

3. Few-Shot Examples

Few-shot prompting provides input/output examples before the real input. It is the most reliable way to communicate format and style without writing an exhaustive specification.

Few-shot structure
Classify the sentiment of customer feedback as POSITIVE, NEGATIVE, or NEUTRAL.

Input: "Shipping was fast and the product works great."
Output: POSITIVE

Input: "The packaging was damaged but the item inside was fine."
Output: NEUTRAL

Input: "Completely broken on arrival, terrible experience."
Output: NEGATIVE

Input: "{{feedback}}"
Output:

Three to five examples usually cover the space. More examples help with edge cases but increase cost. One example is often enough for format-only guidance.

Note: Examples also act as implicit constraints. If every example uses a single-word output, the model will generally follow that convention even for ambiguous inputs.
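The few-shot structure above is just string assembly, so it is easy to generate from a list of (input, output) pairs. A minimal sketch; the function name is illustrative.

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: instruction, then each example as an
    Input/Output pair, then the real input with a trailing 'Output:' cue."""
    parts = [instruction, ""]
    for text, label in examples:
        parts += [f'Input: "{text}"', f"Output: {label}", ""]
    parts += [f'Input: "{query}"', "Output:"]
    return "\n".join(parts)

examples = [
    ("Shipping was fast and the product works great.", "POSITIVE"),
    ("The packaging was damaged but the item inside was fine.", "NEUTRAL"),
    ("Completely broken on arrival, terrible experience.", "NEGATIVE"),
]
prompt = few_shot_prompt(
    "Classify the sentiment of customer feedback as POSITIVE, NEGATIVE, or NEUTRAL.",
    examples,
    "Arrived a day late but works fine.",
)
```

Ending on a bare "Output:" cue is what tells the model to complete in the established format rather than explain itself.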

4. Chain of Thought

For tasks that require reasoning—math, logic puzzles, multi-step decisions—asking the model to reason step by step before giving a final answer significantly improves accuracy. This is called chain-of-thought (CoT) prompting.

Without CoT
Is this Python function O(n²)?

def find_pairs(arr):
  result = []
  for i in range(len(arr)):
    for j in range(i+1, len(arr)):
      if arr[i] + arr[j] == 0:
        result.append((arr[i], arr[j]))
  return result
With CoT
Is this Python function O(n²)?
Think step by step:
1. Identify each loop and its range
2. Determine how the ranges relate to n
3. Multiply loop complexities
4. State the final complexity with justification

def find_pairs(arr):
  result = []
  for i in range(len(arr)):
    for j in range(i+1, len(arr)):
      if arr[i] + arr[j] == 0:
        result.append((arr[i], arr[j]))
  return result

Common CoT triggers: “think step by step”, “reason through this”, “explain your reasoning before answering”, “work through each part”.

Tip: For tasks where you only want the final answer, add “Put your final answer in a clearly labeled section at the end.” This keeps the reasoning visible but easy to extract.
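If you ask for a labeled final-answer section, extracting it afterward is a one-liner. A sketch assuming the prompt told the model to use the literal label "Final answer:"; both the label and the helper name are assumptions, not a standard API.

```python
def extract_final_answer(response: str, label: str = "Final answer:") -> str:
    """Return the text after the last occurrence of the final-answer label.
    Falls back to the whole response if the label is missing."""
    idx = response.rfind(label)
    if idx == -1:
        return response.strip()
    return response[idx + len(label):].strip()

# Example model response with visible reasoning and a labeled answer.
response = (
    "1. The outer loop runs n times.\n"
    "2. The inner loop runs up to n-1 times.\n"
    "3. Total work is n * (n-1) / 2.\n"
    "Final answer: Yes, the function is O(n^2)."
)
```

Searching from the end (`rfind`) matters: the reasoning itself may mention the label before the real answer appears.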

5. Output Formatting

Specifying the exact output format reduces post-processing and makes prompts suitable for programmatic use. The model can produce JSON, Markdown, CSV, and most other text formats reliably.

JSON output

JSON format instruction
Extract the key fields from the job posting below. Return a JSON object with these exact keys:
{
  "title": string,
  "company": string,
  "location": string,
  "remote": boolean,
  "salary_min": number | null,
  "salary_max": number | null,
  "required_skills": string[],
  "years_experience": number | null
}

Return only the JSON object. No explanation.

Job posting:
{{posting}}

Structured Markdown

Markdown sections
Write a technical incident report with these sections:
## Summary
One sentence description of what happened.

## Timeline
Bullet list with timestamps.

## Root Cause
What caused the incident.

## Impact
Who was affected and for how long.

## Remediation
Steps taken to resolve.

## Prevention
Changes to prevent recurrence.

Incident details:
{{details}}

Warning: If you need valid JSON, end your prompt with “Return only valid JSON. No markdown code fences.” Models sometimes wrap JSON in triple backticks, which breaks parsers.
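Even with that instruction, a parser that defensively strips code fences before calling `json.loads` is cheap insurance. A generic sketch, not tied to any model SDK.

```python
import json

def parse_model_json(raw: str) -> dict:
    """Parse model output as JSON, stripping markdown code fences if present."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence (which may carry a language tag like ```json)
        # and the closing fence if there is one.
        lines = text.splitlines()
        if lines[-1].strip() == "```":
            lines = lines[1:-1]
        else:
            lines = lines[1:]
        text = "\n".join(lines)
    return json.loads(text)

clean = parse_model_json('{"title": "Backend Engineer", "remote": true}')
fenced = parse_model_json('```json\n{"title": "Backend Engineer", "remote": true}\n```')
```

`json.loads` raises `json.JSONDecodeError` on malformed output, so callers can catch that and retry the request rather than silently proceed with bad data.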

6. Common Mistakes

Negations without alternatives

Problematic
Don't use bullet points.
Better
Use numbered paragraphs, one sentence each.

Models follow instructions about what to do more reliably than instructions about what to avoid.

Asking for multiple things in one prompt

Problematic
Summarize this article, extract action items, identify the author's bias, and suggest three follow-up questions.
Better
Split into four separate prompts, or structure them as explicit numbered tasks with separate output sections.

Compound prompts cause models to underweight some tasks.

Assuming context the model does not have

Problematic
Fix the bug in the same style as the rest of the codebase.
Better
Fix the bug. Follow this style: 2-space indent, single quotes, no semicolons, arrow functions preferred.

The model only knows what is in its context window.

No stopping condition

Problematic
List relevant papers on this topic.
Better
List exactly 5 papers on this topic, ordered by relevance. Stop at 5.

Without a stopping condition, the model may generate indefinitely or stop arbitrarily.

7. Ready-to-Use Templates

The PromptIndex library contains tested templates for the most common use cases. Browse by category or start with these: