Techniques for getting the best results from language models.
Prompt Patterns
Zero-Shot
Just ask — no examples needed
The simplest approach: give the model a task directly. Works surprisingly well with capable models.
Prompt: "Translate the following English text to French: 'Hello, how are you?'"
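In code, a zero-shot prompt is just the task text with the input appended; the resulting string goes to whatever model client you use. A minimal sketch (`build_zero_shot` is an illustrative name, not a specific API):

```python
def build_zero_shot(task: str, text: str) -> str:
    """Compose a zero-shot prompt: task instruction plus input, no examples."""
    return f"{task}: '{text}'"

prompt = build_zero_shot(
    "Translate the following English text to French",
    "Hello, how are you?",
)
# 'prompt' would then be sent to your model client of choice.
```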
Few-Shot
Show examples to teach a pattern
Include a few input-output examples so the model infers a pattern it couldn't know from instructions alone.
Prompt:
"Convert words to star code:
'hello world' → ⭐⭐⭐⭐⭐ ⭐⭐⭐⭐⭐
'cat' → ⭐⭐⭐
'I am happy' → ⭐ ⭐⭐ ⭐⭐⭐⭐⭐
'The cat eats' → ?"
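The examples above encode one star per letter, one group per word. A few-shot prompt for a pattern like this can be assembled programmatically from solved examples (a sketch; the function names are illustrative):

```python
def to_star_code(phrase: str) -> str:
    # The pattern the examples demonstrate: one star per letter, one group per word.
    return " ".join("⭐" * len(word) for word in phrase.split())

def build_few_shot(examples: list[str], query: str) -> str:
    """Assemble a few-shot prompt: solved examples, then the unsolved query."""
    lines = ["Convert words to star code:"]
    lines += [f"'{e}' → {to_star_code(e)}" for e in examples]
    lines.append(f"'{query}' → ?")
    return "\n".join(lines)

prompt = build_few_shot(["hello world", "cat", "I am happy"], "The cat eats")
```

Generating the example outputs from the same function that defines the pattern guarantees the demonstrations are internally consistent, which matters: the model can only infer the pattern the examples actually show.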
Chain-of-Thought
Think step by step
Asking the model to reason through a problem before answering improves accuracy on complex tasks.
Prompt: "A store has 50 apples. They sell 12 in the morning and receive 30 more. How many do they have?"
Without CoT: "80" With CoT: "50 - 12 = 38. 38 + 30 = 68. Answer: 68"
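The chain-of-thought answer above can be checked directly; each step of the model's reasoning maps to one arithmetic operation:

```python
apples = 50
apples -= 12   # morning sales: 50 - 12 = 38
apples += 30   # delivery: 38 + 30 = 68
print(apples)  # 68
```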
Role Prompting
Assign a persona
Telling the model to act as an expert in a domain primes it to use relevant knowledge and tone.
Prompt: "You are a senior Python developer. Review this code for best practices and security issues."
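With chat-style APIs, a persona is usually set in a system message rather than inline. A minimal sketch, assuming the common role-tagged message-list shape (the helper name is hypothetical):

```python
def with_persona(persona: str, user_prompt: str) -> list[dict]:
    """Prepend a system message assigning the persona (common chat-API shape)."""
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": user_prompt},
    ]

messages = with_persona(
    "a senior Python developer",
    "Review this code for best practices and security issues.",
)
```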
Structured Output
Force a specific format
Specify the exact output format (JSON, CSV, markdown table) for programmatic use.
Prompt: "Extract all product names and prices from this text. Return as a JSON array with keys 'name' and 'price'."
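When you request JSON, the consuming code should parse and validate the reply rather than trust it. A sketch of one defensive parser (models often wrap JSON in markdown code fences, which this strips first):

```python
import json

def parse_products(reply: str) -> list[dict]:
    """Parse a model reply expected to be a JSON array of {'name', 'price'} objects."""
    text = reply.strip()
    if text.startswith("```"):
        text = text.strip("`")
        # Drop an optional language tag like 'json' on the first line.
        text = text.split("\n", 1)[1] if "\n" in text else text
    items = json.loads(text)
    for item in items:
        if not {"name", "price"} <= item.keys():
            raise ValueError(f"missing keys in {item}")
    return items

reply = '[{"name": "Widget", "price": 9.99}, {"name": "Gadget", "price": 24.5}]'
products = parse_products(reply)
```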
Self-Consistency
Ask multiple times, pick the best
Generate several answers and take the most common or highest-quality one. Improves reliability on reasoning tasks.
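The "most common answer" step is a simple majority vote. In practice each sample would come from a separate model call with temperature above zero; here the sampled answers are stand-ins:

```python
from collections import Counter

def majority_answer(answers: list[str]) -> str:
    """Pick the most frequent answer among several sampled completions."""
    return Counter(answers).most_common(1)[0][0]

samples = ["68", "68", "80", "68", "72"]  # e.g. five CoT runs of the apples problem
print(majority_answer(samples))  # 68
```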
ReAct (Reason + Act)
Think, act, observe, repeat
Alternate between reasoning about a problem and taking actions (searching, calculating) to gather information.
Prompt: "Thought: I need to find the population of Tokyo. Action: search('Tokyo population 2024') Observation: Tokyo has 37 million people. Thought: Now I can answer the question."
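The loop behind that transcript alternates model steps with tool calls until the model emits an answer. A minimal sketch with the model and the search tool stubbed out (all names here are illustrative, not a specific framework):

```python
import re

def run_react(model_step, tools, max_turns=5):
    """Minimal ReAct loop: get a step from the model, run any Action it names,
    feed the Observation back, and stop when the model emits 'Answer:'."""
    transcript = ""
    for _ in range(max_turns):
        step = model_step(transcript)
        transcript += step + "\n"
        if step.startswith("Answer:"):
            return step.removeprefix("Answer:").strip()
        match = re.search(r"Action: (\w+)\('([^']*)'\)", step)
        if match:
            tool, arg = match.groups()
            transcript += f"Observation: {tools[tool](arg)}\n"
    return None

# Stubbed model and tool following the Tokyo example above.
def fake_model(transcript):
    if "Observation" not in transcript:
        return "Thought: I need the population of Tokyo. Action: search('Tokyo population 2024')"
    return "Answer: about 37 million"

answer = run_react(fake_model, {"search": lambda q: "Tokyo has 37 million people."})
```

A real agent would replace `fake_model` with a model call that receives the growing transcript as context, and `tools` with actual search or calculator functions.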
Prompt Tips
Best Practice
Be specific and detailed
Vague prompts get vague answers. Specify format, length, tone, audience, and constraints.
❌ "Write about AI."
✅ "Write a 200-word blog post about AI in healthcare for a general audience. Use a friendly tone and include one real-world example."
Best Practice
Use delimiters for clarity
Separate instructions from data using quotes, XML tags, or dashes to help the model distinguish them.
Prompt: "<instructions>Summarize the text in the <data> tags in two sentences.</instructions> <data>{paste article here}</data>"
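A small helper keeps instructions and untrusted data in separate tags every time, instead of concatenating them by hand (a sketch; the function name is hypothetical):

```python
def wrap_with_delimiters(instructions: str, data: str) -> str:
    """Separate instructions from data with XML-style tags so the model
    can tell which part is the task and which part is input text."""
    return (
        f"<instructions>{instructions}</instructions>\n"
        f"<data>{data}</data>"
    )

prompt = wrap_with_delimiters(
    "Summarize the text in the <data> tags in two sentences.",
    "Long article text goes here...",
)
```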
Best Practice
Provide context
The more background you give, the better the model can tailor its response. Include relevant details, constraints, and goals.
Best Practice
Iterate and refine
First prompts are rarely perfect. Try variations, add examples, adjust constraints, and combine techniques.
Anti-Pattern
Avoid ambiguous instructions
"Make it better" or "fix this" without specifics leads to unpredictable results. State exactly what you want changed.
Anti-Pattern
Don't overload the context window
Pasting entire books or massive documents wastes tokens and can cause the model to miss key information. Summarize or use RAG.
Template Examples
Analysis Template
Structured analysis prompt
Prompt:
"Analyze the following text and provide:
1. Key topics (bullet list)
2. Overall sentiment (positive/negative/neutral) with reasoning
3. Three most important quotes
4. A one-sentence summary
Text: {text}"
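Templates like this are easy to keep as constants and fill with `str.format` at call time; the same approach works for the coding and critique templates below (a sketch with placeholder input):

```python
ANALYSIS_TEMPLATE = """Analyze the following text and provide:
1. Key topics (bullet list)
2. Overall sentiment (positive/negative/neutral) with reasoning
3. Three most important quotes
4. A one-sentence summary
Text: {text}"""

prompt = ANALYSIS_TEMPLATE.format(text="Your article here...")
```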
Coding Template
Code generation with constraints
Prompt:
"Write a {language} function that {task}.
Constraints:
- Handle edge cases
- Include type hints
- Add docstring
- Keep it under {N} lines
- No external dependencies"
Critique Template
Self-reflection prompt
Prompt:
"Here is a draft response. Critique it for:
- Accuracy
- Clarity
- Completeness
- Tone
Then rewrite it incorporating your feedback."