How to Write Effective Prompts
Prompt engineering is the practice of structuring inputs to large language models to get reliable, high-quality output. This guide covers the techniques that make the biggest practical difference.
Contents
1. Role Assignment
2. Task Specification
3. Few-Shot Examples
4. Chain of Thought
5. Output Formatting
6. Common Mistakes
7. Ready-to-Use Templates
1. Role Assignment
Assigning a role to the model shapes the vocabulary, depth, and framing of its response. A model told it is a senior security engineer will apply different heuristics than one given no context.
Before
Explain memory safety in C++.
After
You are a senior systems programmer with 15 years of C++ experience. Explain memory safety to a mid-level developer who is transitioning from Python. Cover RAII, smart pointers, and undefined behavior. Use short code examples.
The role does three things: sets expertise level, calibrates vocabulary to the audience, and implicitly constrains scope to what a person in that role would actually say.
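The pattern above can be captured in a small helper. This is a minimal sketch, not a library API; the function name and parameters are hypothetical.

```python
def with_role(role: str, audience: str, task: str) -> str:
    """Prefix a task with a role assignment and a target audience.

    The role line sets expertise level, the audience line calibrates
    vocabulary, and the task itself constrains scope.
    """
    return (
        f"You are {role}.\n"
        f"Your audience is {audience}.\n\n"
        f"{task}"
    )

prompt = with_role(
    "a senior systems programmer with 15 years of C++ experience",
    "a mid-level developer transitioning from Python",
    "Explain memory safety. Cover RAII, smart pointers, and undefined "
    "behavior. Use short code examples.",
)
```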
2. Task Specification
Vague instructions produce vague results. The model will pick the most likely interpretation, which may not be yours. State your goal, constraints, and non-goals explicitly.
Before
Summarize this document.
After
Summarize the following 5000-word product spec in 3 bullet points. Each bullet must be under 20 words. Focus on what changes, not what stays the same. Ignore the appendix.
Key constraints to specify:
- Length (word count, line count, number of items)
- Scope (what to include, what to exclude)
- Audience (developer, executive, end user)
- Tone (formal, concise, conversational)
- Format (bullet list, table, prose, JSON)
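When prompts are generated programmatically, these constraints can be stated explicitly by a small builder. A minimal sketch, with a hypothetical function name and parameter set:

```python
def build_task_prompt(task, *, length=None, scope=None,
                      audience=None, tone=None, fmt=None):
    """Assemble a task prompt, appending one line per stated constraint.

    Constraints left as None are simply omitted.
    """
    lines = [task]
    if length:
        lines.append(f"Length: {length}")
    if scope:
        lines.append(f"Scope: {scope}")
    if audience:
        lines.append(f"Audience: {audience}")
    if tone:
        lines.append(f"Tone: {tone}")
    if fmt:
        lines.append(f"Format: {fmt}")
    return "\n".join(lines)

prompt = build_task_prompt(
    "Summarize the following product spec.",
    length="3 bullet points, each under 20 words",
    scope="focus on what changes; ignore the appendix",
    audience="executive",
    fmt="bullet list",
)
```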
3. Few-Shot Examples
Few-shot prompting provides input/output examples before the real input. It is the most reliable way to communicate format and style without writing an exhaustive specification.
Classify the sentiment of customer feedback as POSITIVE, NEGATIVE, or NEUTRAL.
Input: "Shipping was fast and the product works great."
Output: POSITIVE
Input: "The packaging was damaged but the item inside was fine."
Output: NEUTRAL
Input: "Completely broken on arrival, terrible experience."
Output: NEGATIVE
Input: "{{feedback}}"
Output:

Three to five examples usually cover the space. More examples help with edge cases but increase cost. One example is often enough for format-only guidance.
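Assembling a few-shot prompt from a list of example pairs keeps the format consistent across calls. A minimal sketch (the function name is hypothetical):

```python
def few_shot_prompt(instruction, examples, query):
    """Build a few-shot prompt: instruction, labeled examples, then the
    real input with a trailing "Output:" for the model to complete."""
    parts = [instruction, ""]
    for text, label in examples:
        parts.append(f'Input: "{text}"')
        parts.append(f"Output: {label}")
    parts.append(f'Input: "{query}"')
    parts.append("Output:")
    return "\n".join(parts)

prompt = few_shot_prompt(
    "Classify the sentiment of customer feedback as POSITIVE, NEGATIVE, "
    "or NEUTRAL.",
    [("Shipping was fast and the product works great.", "POSITIVE"),
     ("Completely broken on arrival, terrible experience.", "NEGATIVE")],
    "The manual is confusing but support was helpful.",
)
```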
4. Chain of Thought
For tasks that require reasoning—math, logic puzzles, multi-step decisions—asking the model to reason step by step before giving a final answer significantly improves accuracy. This is called chain-of-thought (CoT) prompting.
Before
Is this Python function O(n²)?

def find_pairs(arr):
    result = []
    for i in range(len(arr)):
        for j in range(i+1, len(arr)):
            if arr[i] + arr[j] == 0:
                result.append((arr[i], arr[j]))
    return result

After
Is this Python function O(n²)?
Think step by step:
1. Identify each loop and its range
2. Determine how the ranges relate to n
3. Multiply loop complexities
4. State the final complexity with justification
def find_pairs(arr):
    result = []
    for i in range(len(arr)):
        for j in range(i+1, len(arr)):
            if arr[i] + arr[j] == 0:
                result.append((arr[i], arr[j]))
    return result

Common CoT triggers: “think step by step”, “reason through this”, “explain your reasoning before answering”, “work through each part”.
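The reasoning the steps ask for can be checked empirically. This sketch instruments the function with a comparison counter: the inner loop runs over all pairs i &lt; j, giving n·(n−1)/2 comparisons, which is O(n²).

```python
def count_comparisons(arr):
    """find_pairs with a counter on the inner-loop comparison."""
    comparisons = 0
    result = []
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            comparisons += 1
            if arr[i] + arr[j] == 0:
                result.append((arr[i], arr[j]))
    return result, comparisons

# n*(n-1)/2 comparisons: doubling n roughly quadruples the count.
_, c10 = count_comparisons(list(range(10)))  # 10*9/2  = 45
_, c20 = count_comparisons(list(range(20)))  # 20*19/2 = 190
```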
5. Output Formatting
Specifying the exact output format reduces post-processing and makes prompts suitable for programmatic use. The model can produce JSON, Markdown, CSV, and most other text formats reliably.
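When JSON output feeds a pipeline, validate it before use rather than trusting the model. A minimal sketch, using the field names from the job-posting example in this section:

```python
import json

# Keys from the job-posting extraction schema.
REQUIRED_KEYS = {"title", "company", "location", "remote", "salary_min",
                 "salary_max", "required_skills", "years_experience"}

def parse_extraction(raw: str) -> dict:
    """Parse a model's JSON reply and check that every expected key is present."""
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

reply = ('{"title": "Backend Engineer", "company": "Acme", '
         '"location": "Berlin", "remote": true, "salary_min": null, '
         '"salary_max": null, "required_skills": ["Python"], '
         '"years_experience": 3}')
job = parse_extraction(reply)
```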
JSON output
Extract the key fields from the job posting below. Return a JSON object with these exact keys:
{
  "title": string,
  "company": string,
  "location": string,
  "remote": boolean,
  "salary_min": number | null,
  "salary_max": number | null,
  "required_skills": string[],
  "years_experience": number | null
}
Return only the JSON object. No explanation.
Job posting:
{{posting}}

Structured Markdown
Write a technical incident report with these sections:
## Summary
One sentence description of what happened.
## Timeline
Bullet list with timestamps.
## Root Cause
What caused the incident.
## Impact
Who was affected and for how long.
## Remediation
Steps taken to resolve.
## Prevention
Changes to prevent recurrence.
Incident details:
{{details}}

6. Common Mistakes
Negations without alternatives
Before
Don't use bullet points.
After
Use numbered paragraphs, one sentence each.
Models follow what you want better than what you don't want.
Asking for multiple things in one prompt
Before
Summarize this article, extract action items, identify the author's bias, and suggest three follow-up questions.
After
Split into four separate prompts, or structure them as explicit numbered tasks with separate output sections.
Compound prompts cause models to underweight some tasks.
Assuming context the model does not have
Before
Fix the bug in the same style as the rest of the codebase.
After
Fix the bug. Follow this style: 2-space indent, single quotes, no semicolons, arrow functions preferred.
The model only knows what is in its context window.
No stopping condition
Before
List relevant papers on this topic.
After
List exactly 5 papers on this topic, ordered by relevance. Stop at 5.
Without a stopping condition, the model may generate indefinitely or stop arbitrarily.
7. Ready-to-Use Templates
The PromptIndex library contains tested templates for the most common use cases. Browse by category or start with these: