How to Write a Prompt: Create Effective AI Prompts That Actually Work


TL;DR:

There's no perfect prompt. Take prompt frameworks with a grain of salt and keep it simple. The real skill is iteration. Start with Context, Task, and Format. Be explicit about your success criteria and your failure formula for the task. Test with minimal viable prompts. Refine based on what you see. Master single prompts before chaining them together.


Getting consistently great results from AI requires testing, noticing what fails, and knowing how to prompt.

Like me, many people have wasted hours searching for the perfect system.

There's a common belief that the right words exist somewhere: that a perfect structure or an optimal combination of instructions can unlock consistent brilliance from AI.

Prompt engineers often build elaborate templates, study countless guides, copy frameworks from researchers, and test variations obsessively.

After enough practice and testing, we all reach the same conclusion: a singular perfect prompt doesn't exist.

The secret is having a solid prompt process.

Once you stop chasing perfection and start embracing iteration, you'll start getting useful results in minutes instead of hours.

The foundation is three elements:

  • Context
  • Task
  • Format

Here's how to apply this to your own situation.

What Is a Prompt and Why Does It Matter

A prompt is the instruction you give AI to get a response. Every interaction with ChatGPT, Claude, and Gemini starts with one.

Simple concept. But the quality of your prompt directly determines the quality of your output.

The skill isn't crafting perfection on the first try. It's iterating quickly and knowing what to adjust to get better output.

The lesson I've learned from 3+ years of prompt engineering is that every hour I spent trying to write the perfect prompt upfront was an hour I should have spent testing. Evaluating. Refining.

I used to drive Uber in Dallas at night. After enough hours behind the wheel, I learned things the GPS didn't know. I learned which shortcuts actually saved time, which routes were safer after midnight, and how to quickly reach the airport when an accident backed up I-35.

That knowledge didn't come from reading a manual. It came from driving. Prompting AI works the same way. You learn through experimentation.

The Core Framework: Context, Task, Format

Most effective prompts share three elements: Context, Task, Format. Get these right, and everything else is refinement. It's not glamorous, but it works.

Context

Context is everything the AI needs to know to do the job well.

Weak: "Write a blog post about productivity"
Strong: "Write a blog post for our SaaS company's audience of remote engineering managers struggling with async communication"

Weak: "Summarize this document"
Strong: "Summarize this quarterly report for our board. They care about revenue growth and customer acquisition costs, not technical details"

Who is this for? What's the situation? What background is essential?

Task

The task is what you want done.

Weak: "Help me with this email"
Strong: "Write a follow-up email to a prospect who ghosted after our demo call. Keep it brief. Suggest one specific next step"

Weak: "Make this better"
Strong: "Rewrite this paragraph to be more concise. Remove hedging language. Cut word count by 40%"

What action should the AI take? What's the deliverable?

Format

Format is how the output should be structured.

Weak: "Give me some ideas"
Strong: "Give me 5 ideas, each as a single sentence with a bolded headline"

Weak: "Explain this concept"
Strong: "Explain this in 3 paragraphs: first for a beginner, second for someone with basic knowledge, third for an expert"

How should it look? What structure should it follow?

How to Write a Prompt Step by Step

An AI prompt acts as the input layer that connects user instructions to tasks like text generation, coding, and data analysis.

The process that actually works, learned through more failures than I'd like to admit.

1. Start with the end in mind

Before writing anything, get clear on what success looks like. What would make you say "yes, this is exactly what I needed"?

Be specific. "Write a good blog post" is vague. "Write a 1,000-word post that explains our new feature to existing customers, addresses their likely concerns, and ends with a clear call to action" is specific.

2. Define your failure formula

Instead of trying to describe everything you want, describe what would make you immediately reject the output.

"Don't be generic" is vague. "Reject if it sounds like a press release" is more actionable.

I call this the failure formula. What's unacceptable? What makes you immediately start over?

Constraints narrow the space faster than aspirations.

3. Identify 3-8 categories of context

Less is more. Don't try to be comprehensive. That path leads to giant mega-prompts that confuse the model.

Instead, identify the major categories of information the AI needs. Ask the AI itself: "What context would you need to do this well?"

Common categories:

  • Audience: Who is this for? What do they already know?
  • Voice: Formal or casual? Technical or accessible?
  • Goal: What should the reader think, feel, or do after?
  • Constraints: Length, format, things to avoid?
  • Examples: What does good look like? What does bad look like?

4. Write a minimum viable prompt

Don't over-engineer. Get something testable first.

Context: [Your context]
Task: [Your task]
Format: [Your format]
Constraints: [Your failure formula items]

That's it. Run it and see what happens.
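If you'd rather test from code than a chat window, here's a minimal sketch, assuming the OpenAI Python SDK purely for illustration; the model name is a placeholder, and any chat-capable model and client works the same way.

# Minimal sketch: send a Context/Task/Format prompt to a model.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; both are illustrative choices.
from openai import OpenAI

client = OpenAI()

prompt = """Context: [Your context]
Task: [Your task]
Format: [Your format]
Constraints: [Your failure formula items]"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you're testing
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)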

5. Test and refine

The first output is data, not the final product.

What worked? What didn't? What assumptions did the AI make that you should have stated explicitly?

Adjust. Run again. Repeat.


Good vs Bad Prompt Example

There are many ways to prompt an agent, but seemingly similar messages can produce wildly different responses. It comes down to keeping your message clear, relevant, and specific.

Here’s an example.

Weak prompt:

Write something about our new feature.

What you'll get: generic marketing fluff. The AI will guess what your feature does, who cares about it, and why it matters. Those guesses will probably be wrong.

Strong prompt:

Context: We're a project management tool for construction companies.
We just launched real-time budget tracking that syncs with QuickBooks.
Our users are construction project managers frustrated with
spreadsheet-based budgeting.

Task: Write a 200-word announcement for our email newsletter.

Format:
- Subject line (under 50 characters)
- 3 short paragraphs
- End with a single call-to-action button text

Constraints:
- Don't use buzzwords like "game-changing" or "revolutionary"
- Don't assume technical knowledge of APIs
- Focus on time saved, not features

What you'll get: a targeted announcement that speaks to your actual audience about your actual product.

Modern AI chat platforms like Lorka let you test different prompts across different models, such as Gemini or Claude.

This can help you test more variations more quickly.

Best Practices for Writing Prompts

Provide relevant context

Biggest mistake I see: people prompt AI like they're programming a computer. "Do this. Don't do that. Always X. Never Y." As if these were deterministic rules it will execute exactly.

Remember: AI isn't deterministic. It's probabilistic. It predicts what it thinks a good response looks like.

More words explaining reasoning equals better grounding in semantic space. When you explain WHY you want something, you're giving the model more signal about what good looks like.

Think about giving instructions to a new employee.

You could say: "Always capitalize proper nouns."

Or: "We're a professional publication, and inconsistent capitalization makes us look sloppy to readers. Please capitalize all proper nouns."

The second version takes more words. But the employee understands the intent. If they encounter an edge case you didn't anticipate, they can make a good judgment call.

The same principle applies to AI. Rules without reasoning produce brittle outputs. Reasoning produces flexible, context-aware outputs.

Use natural language

Write like you're speaking to a competent colleague who happens to know nothing about your specific situation. Modern AI models are smart enough to understand clear directions.

Be specific, and show what specific looks like

"Be more specific" is useless advice unless you show what specific means.

Bad: "Write in a professional tone."

Good: "Write like a senior consultant at McKinsey: confident, direct, no hedging, focused on actionable recommendations."

Include examples when possible

A before/after example teaches more than paragraphs of instruction ever will. The model will infer why you prefer one over the other and attempt to replicate that. Keep it to 3-5 examples at most.

Common Prompt Mistakes to Avoid

Treating prompts like Google searches

Google searches are keywords. Prompts are instructions.

"Best CRM software 2024" works for Google.

For AI, you need: "Compare Salesforce, HubSpot, and Pipedrive for a 20-person B2B sales team. Focus on pricing, ease of use, and Gmail integration."

Giving conflicting instructions

"Be comprehensive but concise" leaves the AI stuck. Pick one. Or clarify: "Cover all five topics, but limit each to 2-3 sentences."

Not providing failure criteria

Without knowing what's unacceptable, the AI has no guardrails. Include at least one or two things that would make you immediately reject the output.

Example: “Write a response to this email but don’t be too salesy”

Jumping to complex before mastering simple

I made this mistake painfully.

I dove into multi-agent orchestrations before I'd properly tuned individual prompts. Building systems where Agent A handed off to Agent B, who handed off to Agent C. It looked impressive with beautiful architecture diagrams.

But it failed constantly in ways I couldn't diagnose.

The problem was that I hadn't validated that each individual agent produced reliable output. I was chaining together unreliable links and wondering why the chain broke.

The math is simple: if one prompt produces garbage 20% of the time, three chained prompts produce garbage 48.8% of the time, because only 51.2% of runs (0.8^3) survive all three steps. The errors compound.
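Here's that arithmetic as a quick Python sketch, assuming each prompt in the chain succeeds independently 80% of the time:

# Reliability compounds multiplicatively across chained prompts.
single_success = 0.80                 # one prompt works 80% of the time
chain_success = single_success ** 3   # three independent chained prompts
print(f"chain succeeds {chain_success:.1%} of the time")   # 51.2%
print(f"chain fails {1 - chain_success:.1%} of the time")  # 48.8%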

Master the single prompt before you chain prompts together.

How to Improve and Refine Prompts

Once you have a working prompt, don't assume it's reliable after one test.

If you plan to use it repeatedly, backtest it across multiple scenarios. Run it several times in a row to check output consistency.

I learned this the hard way, shipping prompts that worked in testing but broke in production. Here's my approach now.

Prompt backtesting:

  • Run the same prompt with different inputs
  • Run it multiple times with the same input
  • Check if outputs are consistent
  • Check if failures are predictable

A prompt that works once might be luck. A prompt that works ten times across different contexts is reliable.
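Here's what that loop can look like as a minimal Python sketch. The run_prompt helper is hypothetical; wire it to whatever model API you actually use.

# Minimal backtesting sketch: same prompt, several inputs, several runs each.
PROMPT = "Context: ...\nTask: ...\nFormat: ...\nConstraints: ..."
TEST_INPUTS = ["input A", "input B", "input C"]  # your real scenarios
RUNS_PER_INPUT = 5

def run_prompt(prompt: str, input_text: str) -> str:
    """Hypothetical helper; replace the body with a real model API call."""
    return f"stub output for {input_text}"  # placeholder so the sketch runs

for input_text in TEST_INPUTS:
    outputs = [run_prompt(PROMPT, input_text) for _ in range(RUNS_PER_INPUT)]
    # Crude consistency check: fewer distinct outputs means more stability.
    print(f"{input_text}: {len(set(outputs))} distinct outputs in {RUNS_PER_INPUT} runs")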

When you find issues:

  • Add constraints for the specific failures you saw
  • Make implicit assumptions explicit
  • Provide examples of the edge cases

You won't know the weird stuff AI will do until you test it. Go in expecting strange quirks; it's rare for AI not to show some kind of unanticipated behavior.

Advanced Prompting Techniques

Chain-of-thought (with caveats)

Chain-of-thought prompting asks the AI to reason step by step before the final answer. "Let's think through this step by step..."

It works. Especially for complex reasoning. But most guides don't mention the caveats:

Less impactful results on advanced models. GPT-4 and Claude already reason well with built-in thinking modes. Explicit chain-of-thought helps older and open-source models significantly. But the newer models, like Opus 4.5 and Codex 5.2, see minimal gains at best and a loss of context at worst.

Increased token usage. Chain-of-thought uses far more tokens. Higher costs. Slower responses. That matters at scale.

Sometimes it introduces errors. Using more reasoning steps means more opportunities for the model to hallucinate. This can be mitigated by using guardrails or quality control checks.

Few-shot prompting (with limits)

Few-shot prompting provides examples of what you want.

Here are examples of good subject lines:
- "Your July report is ready" (clear, specific)
- "Quick question about Thursday" (conversational, creates curiosity)

Now write a subject line for...
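If you're calling a model API directly, the same examples can be passed as message history instead of inline text. A minimal sketch, assuming the common OpenAI-style chat message format (the contents here are illustrative):

# Few-shot via message history: each user/assistant pair is one worked example.
messages = [
    {"role": "system", "content": "You write email subject lines."},
    {"role": "user", "content": "Subject line for a report-delivery email"},
    {"role": "assistant", "content": "Your July report is ready"},
    {"role": "user", "content": "Subject line for a meeting follow-up email"},
    {"role": "assistant", "content": "Quick question about Thursday"},
    # The real request goes last; the model continues the pattern.
    {"role": "user", "content": "Subject line for an invoice reminder email"},
]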

The catch: more examples don't always mean better results. Research shows performance peaks around 3-5 examples, then can actually decline. Quality matters more than quantity.

Model selection matters more than you think

One study published in JMIR found prompt engineering improved GPT-3.5 performance by 10 percentage points. The same techniques on GPT-4? Negligible difference. The variance widens even further when comparing across model providers (e.g., OpenAI, Anthropic, xAI).

Platforms like Lorka make it easy to find the right model for the job without burning your budget on countless subscriptions.

FAQ: How to Write a Prompt

Do I need special formatting or syntax to write a prompt?

No. Write in natural language. The AI understands plain English better than special formatting tricks. Structure helps. Headers. Bullets. But there's no secret code.


Key Takeaways

  • Structure beats magic words. Context. Task. Format. That's the foundation.
  • There's no perfect prompt. Iteration is the skill. Not first-draft perfection.
  • Constraints narrow faster than aspirations. Define what's unacceptable before what's ideal.
  • Explain WHY, not just rules. Reasoning produces flexible outputs.
  • Master singles before chains. Don't build complex systems on unreliable prompts.

Those hundreds of hours I wasted? I don't regret them anymore.

Not because they taught me the perfect prompt. They taught me that perfection was the wrong goal. The real skill is building a process: start rough, test fast, refine based on what you see.

Just as you don't become a great driver by reading GPS data, you don't become a great prompter by reading guides. It happens over time, by testing different routes and experimenting to find what works best.

The AI is capable. Your job is to help it help you. And that happens through iteration, not incantation.





Written by

Andy Houston

AI & Digital Marketing Specialist

Andy is a digital marketing veteran turned AI engineer who has been building revenue systems since 2017, from Facebook ad campaigns to full-stack AI applications. He has scaled businesses through paid media, sales funnels, and data-driven strategy. Since 2022, he has focused on applied AI, building production automation, RAG pipelines, and agentic tools. He thoroughly tests every tool he writes about and brings a practitioner's perspective to each article, grounded in real implementation rather than theory.
