
Prompt Engineering Tips That Actually Work

Practical techniques for getting better results from ChatGPT, Claude, and other LLMs. Role setting, few-shot, chain of thought, and more.

Effective communication between humans and AI

You've asked an AI something and gotten a useless answer. You said "just handle it" and it literally just... handled it. In some unhelpful way. Prompt engineering sounds fancy, but the core idea is simple: telling the AI what you actually want. Clearly.

The "clearly" part is harder than it sounds. A few patterns make a big difference in output quality from the same model.

Role Setting — One Line That Changes Answer Depth

Start with "You are a senior backend developer with 10 years of experience" and the response shifts. LLMs adjust based on context, and an explicit role makes them lean into domain-specific terminology and perspectives.

You are a senior frontend developer.
What should I watch out for when migrating from React 18 to 19?

You'll get more specific, actionable advice than a bare "What should I watch out for?" question. That said, "You are a genius scientist" won't produce genius answers. Realistic professional roles work best.

Pair this with output format specification. LLMs tend to be verbose when you don't constrain the format. Specifying what you want makes the output cleaner.

Compare these frameworks in a table.
Columns: framework name, language, pros, cons, learning curve
Rows: Django, FastAPI, Express, NestJS

Role + format. These two alone cut your "redo this" rate in half.
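To make this concrete, here is a minimal sketch of role + format as a chat message list. The dict-of-`role`/`content` structure assumes an OpenAI-style messages format; adapt the shapes to whichever client you actually use.

```python
# A role prompt (system message) plus a format constraint (user message).
# The messages structure assumes an OpenAI-style chat format.
messages = [
    {
        "role": "system",
        "content": "You are a senior backend developer with 10 years of experience.",
    },
    {
        "role": "user",
        "content": (
            "Compare these frameworks in a table.\n"
            "Columns: framework name, language, pros, cons, learning curve\n"
            "Rows: Django, FastAPI, Express, NestJS"
        ),
    },
]
```

The role lives in the system message so it applies to the whole conversation; the format constraint travels with the specific request.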

Few-Shot and Chain of Thought

People learn faster with examples. So do LLMs. Show 2-3 input-output pairs and the model follows the pattern. That's few-shot prompting.

Write commit messages in this format:

Input: Added forgot password link to login page
Output: feat(auth): add forgot password link to login page

Input: Fixed crash when cart item quantity is zero
Output: fix(cart): prevent crash when item quantity is zero

Input: Added email duplicate check to signup API
Output:

More examples improve accuracy, but 2-3 is usually enough. Particularly effective for tasks with fixed formats — coding conventions, documentation styles, structured data extraction.
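Few-shot prompts are mechanical enough to assemble in code. A sketch of a builder that takes example pairs and ends with an open `Output:` line for the model to complete (the function name and prompt wording are illustrative, not any library's API):

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: instruction, worked input/output
    pairs, then the new input with an empty Output: line to complete."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts += [f"Input: {inp}", f"Output: {out}", ""]
    parts += [f"Input: {query}", "Output:"]
    return "\n".join(parts)

examples = [
    ("Added forgot password link to login page",
     "feat(auth): add forgot password link to login page"),
    ("Fixed crash when cart item quantity is zero",
     "fix(cart): prevent crash when item quantity is zero"),
]
prompt = build_few_shot_prompt(
    "Write commit messages in this format:",
    examples,
    "Added email duplicate check to signup API",
)
```

Ending on a bare `Output:` is the point: the model's most natural continuation is to fill in the pattern you just demonstrated.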

For complex problems, Chain of Thought works well. Just adding "Think through this step by step" improves accuracy.

Analyze this code for bugs.
Trace the execution flow step by step and identify potential issues.

When the model skips intermediate reasoning and jumps to an answer, it's more likely to be wrong. Making it show its work reduces logical errors. Math, debugging, and complex analysis see the biggest gains.

Few-shot is "do it like this." Chain of Thought is "think carefully." Both are one-line additions that produce noticeably different results.

Constraints and Delimiters

Without constraints, LLMs will write long answers and drift off topic. Setting boundaries feels rigid but directly improves output quality.

Answer for Python 3.12.
No external libraries.
Code only, no explanation.
Summarize in under 200 words.

You can stack multiple constraints at once. More specific conditions actually produce more accurate output. "Just handle it" is the vaguest instruction possible. "Handle it within these constraints" is the clearest.

When your prompt includes code or data, delimiters matter. Without separation between instructions and data, the model might interpret data as instructions.

Review the code below. Check for security vulnerabilities.

---
[code block]
---

Triple dashes, triple backticks, XML tags — the specific delimiter doesn't matter. What matters is clearly separating "this is the instruction" from "this is the data." Especially important when analyzing long text inputs.
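If you build prompts programmatically, the separation can be a one-line helper. A sketch using the triple-dash delimiter from above (the function name is illustrative):

```python
def wrap_untrusted(instruction, data, delimiter="---"):
    """Put a clear delimiter between the instruction and untrusted
    data so the model doesn't mistake the data for instructions."""
    return f"{instruction}\n\n{delimiter}\n{data}\n{delimiter}"

prompt = wrap_untrusted(
    "Review the code below. Check for security vulnerabilities.",
    "user_input = request.args.get('q')\ncursor.execute(user_input)",
)
```

The same shape works with backtick fences or XML tags; consistency within a prompt matters more than the delimiter you pick.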

Front-Load Your Prompt

The common pattern: fire off a vague prompt → get an ambiguous result → "No, I mean..." → revise → still not right. After 3-4 rounds of this, you've spent more time than writing a good prompt upfront would have taken.

The difference is stark:

// Weak prompt
Make me a React component

// Strong prompt
Create an email input form component in React + TypeScript.
- Style with Tailwind CSS
- Email format validation (regex)
- Error message display
- onSubmit callback prop
- Accessibility (aria-label, keyboard navigation)

Context, constraints, format — all in one shot. It feels like extra work writing the prompt, but total task time drops. Thirty extra seconds on the prompt saves five minutes of revisions.

The flip side: cramming too much into one prompt also backfires. "Design the API, write the DB schema, and create the test suite" produces mediocre results across the board. Break complex work into stages.

Step 1: "Design the API endpoints for user management"
Step 2: "Based on that design, create the DB schema"
Step 3: "Write integration tests for this API"

High density per prompt, narrow scope per prompt.
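The staged flow above is just a loop where each prompt carries the previous stage's output. A sketch with `ask` as a hypothetical stand-in for your actual LLM call, stubbed here so the chaining logic is visible and runnable:

```python
# Staged prompting: each step's prompt embeds the previous step's output.
# `ask` is a hypothetical stub, not a real API; swap in your own client.
def ask(prompt: str) -> str:
    return f"<model response to: {prompt.splitlines()[0]}>"  # stub

steps = [
    "Design the API endpoints for user management.",
    "Based on the design below, create the DB schema.\n\n{previous}",
    "Write integration tests for the API described below.\n\n{previous}",
]

previous = ""
for template in steps:
    previous = ask(template.format(previous=previous))
```

Each stage stays narrow, but nothing is lost between stages because the output is passed forward explicitly.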

Self-Verification

Having the AI check its own work is a valid technique.

Review the code you just wrote for edge cases.
Consider empty arrays, null inputs, very large numbers.

Not 100% reliable, but it catches things. After code generation, "Analyze the time complexity and suggest improvements if possible" often yields a better version than the first draft.

Self-verification works best as the final step in a workflow rather than standalone. Code generation → code review → test case writing. Running through multiple roles within one conversation thread improves output quality over single-prompt approaches.
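As a workflow, self-verification is just a second pass over the first pass's output. A sketch, again with `ask` as a hypothetical stub standing in for a real LLM call:

```python
# Two-pass workflow: generate, then ask the model to review its own output.
# `ask` is a hypothetical stub; replace it with your actual client call.
def ask(prompt: str) -> str:
    return f"[output for: {prompt.splitlines()[0]}]"  # stub

draft = ask("Write a function that parses ISO 8601 date strings.")
review = ask(
    "Review the code below for edge cases.\n"
    "Consider empty strings, None inputs, and malformed dates.\n\n"
    + draft
)
```

The review prompt names concrete edge cases rather than asking "is this correct?", which gives the second pass something specific to check.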

Different Models Have Different Personalities

ChatGPT, Claude, and Gemini produce meaningfully different outputs from the same prompt. Knowing each model's tendencies helps.

For code generation, Claude tends to produce longer code with good consistency. ChatGPT is more natural for code explanation and conversational learning. Complex reasoning tasks lean toward ChatGPT's o3 series. Analyzing long documents plays to Claude's context window advantage.

System prompt capabilities and token limits also vary. Reading the official prompting guide for your primary model is worth the time. Several providers publish their own prompt engineering documentation.

It's a Communication Skill

Various techniques aside, using AI effectively comes down to communicating clearly. Giving vague instructions to a colleague produces vague results; the same applies to AI.

One difference from human communication: AI doesn't read between the lines. "They'll figure out what I mean" doesn't work. If you omit context, the model responds based on what it has, not what you intended. The habit of explicitly stating background, constraints, and expected output format is really what separates good prompt engineering from bad.

It's worth building that habit. Thirty seconds of clarity upfront consistently beats five minutes of "that's not what I meant."

#prompt engineering #ChatGPT #Claude #AI #LLM
