From Prompt Engineering to Conversation Design

Mar 26, 2025

Consider this seemingly successful interaction:

User: "Show me some options"
AI: [Returns 5 detailed options]
User: "The second one looks good"

The thing is, the user is probably not picking the second option because it's the best. They're picking it because reading five detailed responses is exhausting. They're settling, not optimizing – like clicking the second Google result because scrolling feels like too much work.

This simple observation changed how I think about AI interfaces. While everyone's obsessing over prompt engineering (reminds me of when we thought the perfect shade of #0000FF would revolutionize web design), we're missing something bigger: the art of designing natural conversations.

What is Conversation Design?

Conversation design is the systematic approach to creating AI interactions that build understanding over time, rather than just generating good responses.

At its core, conversation design means:

  1. Thinking in flows, not responses

    • How one answer shapes the next question

    • Where conversations naturally want to go

    • How shared understanding builds over time

  2. Building shared context

    • What needs to be remembered

    • When to reference past information

    • How to confirm understanding without sounding mechanical

  3. Managing cognitive load

    • When to ask questions

    • How much information to reveal

    • Which options to present

  4. Recovering from mistakes

    • How to detect misunderstandings

    • When to admit confusion

    • How to get back on track gracefully

Good conversation design feels natural because it respects how humans actually communicate. Bad conversation design feels like talking to someone who just discovered LinkedIn post formats and won't stop using "Unpopular opinion:" as a prefix.

Why Traditional Metrics Fail

We're still thinking about AI interfaces like we think about websites. With websites, you can optimize each page independently. The home page can be perfect even if the about page needs work. But conversations don't work that way. Every response affects what comes next.

Current metrics fail in predictable ways:

Old Question               Why It Fails                      What We Should Ask
Did they click?            Clicks don't show understanding   What did they really need?
How long did they stay?    Longer isn't better               Did we help efficiently?
Did they convert?          Paths aren't linear               Did we solve their problem?

The Architecture of Good Conversations

Good conversations follow a natural progression that most AI systems ignore completely:

  1. "What are you trying to do?" - Understanding the real goal

  2. "Tell me more about that" - Exploring specific needs

  3. "Here's what might help" - Offering relevant solutions

  4. "Did that solve it?" - Confirming success

  5. "You might also want to consider..." - Extending value

This progression isn't arbitrary - it's how humans naturally build understanding. But most AI applications jump straight to answers, missing crucial context and often solving the wrong problem perfectly. (Like that time my chatbot helped someone optimize their database queries when they just wanted to know where to get coffee.)
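
To make that concrete, here's a minimal sketch of the progression as explicit stages that gate what the system does next. The stage names and the advance rule are my own illustration, not any particular framework's:

from enum import Enum, auto

class Stage(Enum):
    UNDERSTAND_GOAL = auto()   # "What are you trying to do?"
    EXPLORE_NEEDS = auto()     # "Tell me more about that"
    OFFER_SOLUTION = auto()    # "Here's what might help"
    CONFIRM_SUCCESS = auto()   # "Did that solve it?"
    EXTEND_VALUE = auto()      # "You might also want to consider..."

def next_stage(stage: Stage, resolved: bool) -> Stage:
    """Advance only once the current stage's question is actually answered.
    Jumping straight to OFFER_SOLUTION is the failure mode described above."""
    order = list(Stage)
    if not resolved:
        return stage  # keep exploring; don't skip ahead
    i = order.index(stage)
    return order[min(i + 1, len(order) - 1)]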

What Actually Matters

Three fundamental questions should drive conversation design:

1. What Do They Mean?

Not just their words, but their deeper need. Users rarely articulate exactly what they want the first time. Good conversations gradually reframe an ambiguous, subjective ask into a mutually agreed-upon, objective goal.

2. Did We Actually Help?

Success isn't about producing good responses - it's about solving real problems. A brilliantly written answer to the wrong question helps no one. We need to measure actual outcomes, not just response quality.

3. How Well Did We Recover?

Things go wrong in every conversation, often because the original ask gets refined or changed through a natural process of investigation. What matters is how quickly and naturally we detect and fix misunderstandings. The best systems turn potential failures into opportunities for better understanding.

Patterns That Work

After analyzing thousands of conversations, certain patterns consistently improve outcomes:

Progressive Understanding

Instead of this:

User: "Tell me about TensorFlow"
AI: [Dumps complete technical documentation]

Build understanding naturally:
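
User: "Tell me about TensorFlow"
AI: "Happy to dig in. Are you evaluating it for a project, or learning it from scratch?"
User: "I need to deploy a model my team already trained"
AI: [Focuses on deployment, skipping the introductory material]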

Strategic Memory

The key to effective conversation design isn't perfect recall – it's strategic memory. Think of it as the difference between a database dump and a carefully curated context layer. Great conversations maintain an evolving mental model but remain focused on what matters most.

First, we track the guardrails, relevant topics, and goals that color every interaction. When a user mentions they're debugging a production issue, that context shapes everything that follows. We layer this with critical user context – their technical sophistication, preferences, and past interactions – to build a coherent picture of who we're talking to.

Just as important are the open conversation threads: unanswered questions, recent misunderstandings, and successful resolution patterns. When a user has to rephrase their question multiple times, or when showing code examples works better than theoretical explanations, those are signals worth remembering. This isn't about logging everything – it's about maintaining the context that drives understanding.
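
One way to picture this layered memory is a small structure that separates durable context from open threads and renders only what matters into the next turn. A minimal sketch – the field names are mine, not any particular library's API:

from dataclasses import dataclass, field

@dataclass
class ConversationMemory:
    # Guardrails and goals that color every interaction
    goals: list[str] = field(default_factory=list)
    # Critical user context: technical sophistication, preferences
    user_profile: dict[str, str] = field(default_factory=dict)
    # Open threads: unanswered questions, recent misunderstandings
    open_questions: list[str] = field(default_factory=list)
    # Resolution patterns worth remembering ("code examples beat theory")
    what_works: list[str] = field(default_factory=list)

    def to_context(self) -> str:
        """Render a curated context layer, not a database dump."""
        sections = [
            ("Goals", self.goals),
            ("Open threads", self.open_questions),
            ("What works", self.what_works),
        ]
        lines = [f"{name}: {'; '.join(items)}" for name, items in sections if items]
        if self.user_profile:
            lines.append("User: " + ", ".join(f"{k}={v}" for k, v in self.user_profile.items()))
        return "\n".join(lines)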

Instead of this:
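
User: "That query is still timing out"
AI: "Which query are you referring to? Could you share more details about your setup?"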

Build context progressively:
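
User: "That query is still timing out"
AI: "The orders report query from earlier? Since this is hitting production, let's try the index change first – it's the lowest-risk fix."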

Graceful Recovery

Every conversation will eventually go off the rails. The measure of a good system isn't avoiding all mistakes – it's how elegantly it recovers from them. This requires moving beyond simple fallback responses to thoughtful error handling.

The key is early detection and specific action. Instead of vague "Could you clarify?" requests, we offer concrete paths forward: "Would you prefer to see this implemented in Python or JavaScript?" This acknowledges confusion while maintaining momentum. Each recovery becomes a learning opportunity, revealing patterns that help prevent future misunderstandings.

The art lies in rebuilding context without losing progress. When backtracking, we preserve valuable insights: "I see I misunderstood earlier – let's keep your performance requirements in mind while we explore a different approach." It's about turning potential failures into opportunities for deeper understanding.
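
Detection can start simple. As a rough sketch – a heuristic of my own devising (explicit pushback or near-verbatim rephrasing), not a production-grade classifier:

from difflib import SequenceMatcher

def looks_like_misunderstanding(prev_user_msg: str, user_msg: str) -> bool:
    """Flag explicit pushback, or a near-rephrase of the previous message."""
    msg = user_msg.lower().strip()
    if msg.startswith(("no,", "that's not", "i meant", "not what")):
        return True
    # Users who repeat themselves are telling us the last answer missed
    similarity = SequenceMatcher(None, prev_user_msg.lower().strip(), msg).ratio()
    return similarity > 0.8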

Instead of this:
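
User: "No, that's not what I meant"
AI: "I apologize for the confusion. Could you clarify your question?"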

Recover with context:
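
User: "No, that's not what I meant"
AI: "I see I misunderstood – I focused on query speed when you're asking about connection handling. Keeping your performance requirements in mind, want to look at connection pooling first?"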

Where This Is Going

The next few years will reveal a clear pattern: teams that focus solely on language model capabilities will hit diminishing returns, while those that invest in conversation design will build more useful products. This isn't speculation - we're already seeing it in user behavior.

The teams that solve these problems won't just build better AI products. They'll fundamentally change how people interact with computers. Not because they'll have better language models, but because they'll have better conversations.

Based on research and hands-on experience building conversational AI systems at Channel, where we make tools that help teams measure and optimize conversational experiences. No prompts were harmed in the making of this article.