Flagship Skill · Chatbot flow design
The chatbot flow design skill.
Bots that earn trust by knowing what they don't know.
A senior growth practitioner's playbook for designing conversational flows for website chatbots and AI agents. Intent architecture, knowledge-base grounding, fallback handling, escalation to humans, conversation analytics.
Audience: growth marketers and product marketers shipping chatbot growth tooling, in-house teams designing conversational flows for marketing or support contexts, agencies running chatbot work for clients.
What this skill is for
The growth tooling suite, grouped by where work happens.
Chatbot-flow-design is the conversational surface in Tier 1, alongside multi-step-form-design as the form surface. Distinct from ai-content-collaboration (AI in content workflows): this skill is AI in customer-facing conversations.
Decide what to build
- lead-magnet-design
Parent-frame methodology. Format selection, audience-fit, follow-up sequence design.
Design specific magnet types
- calculator-design
Interactive calculators with transparent methodology and tiered value.
- quiz-and-assessment-design
Quizzes producing actionable segmentation with matched recommendations.
Build conversion surfaces
- multi-step-form-design
Forms broken into coherent steps that earn completion.
- chatbot-flow-design (this skill)
Conversational flows grounded in knowledge with honest fallback.
Orchestrate the funnel
- funnel-flow-architecture
Cross-tool architecture matching audience and stage.
The keystone distinction
Three positions. Both extremes are failure modes.
Failure mode
Scripted-bot
Rigid decision tree. Press 1 for X, 2 for Y. Fails the moment a user phrases something the script did not anticipate. The chatbot equivalent of an automated phone tree.
Failure mode
Hallucinating-bot
LLM-powered with no structure. Confidently answers questions about pricing, policy, capabilities and frequently makes up answers. Liability risk; trust-eroding.
The discipline
Structured-guided-conversation
LLM-powered with intent architecture, knowledge-base grounding, defined fallback paths, and explicit escalation. Knows what it knows, knows what it does not.
Anatomy of a structured guided conversation
Intent layer, knowledge-base grounding, sample conversation with escalation.
Three zones working together. Named intents define what the bot can handle (with explicit coverage target). RAG grounding maps each intent to its source-of-truth. Sample conversation shows in-scope answer (with citation) and out-of-scope escalation (with full context handoff to the human).
Intent layer
- Pricing question
- Feature inquiry
- Integration question
- Demo request
- Support escalation
Coverage: ~80%. Out-of-scope routes to fallback.
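The routing contract an intent layer like this implies can be sketched in a few lines. This is an illustrative stand-in: a real deployment would use an LLM or trained classifier rather than keyword matching, and the intent names and keywords here are assumptions, but the shape (named intents, explicit fallback for everything else) is the point.

```python
# Illustrative intent router. Keyword matching stands in for a real
# classifier; the contract is the same: named intents in, fallback out.
INTENT_KEYWORDS = {
    "pricing_question": ["price", "cost", "plan", "billing"],
    "feature_inquiry": ["feature", "can it", "does it support"],
    "integration_question": ["integrate", "integration", "api", "webhook"],
    "demo_request": ["demo", "trial", "sales call"],
    "support_escalation": ["broken", "bug", "not working", "help"],
}

def classify_intent(message: str) -> str:
    """Return a named intent, or 'fallback' when nothing matches."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "fallback"  # out-of-scope routes to the fallback ladder

print(classify_intent("How much does the pro plan cost?"))  # pricing_question
print(classify_intent("Write me a poem"))                   # fallback
```

The explicit `"fallback"` return is what makes the coverage target honest: anything outside the named intents is routed, never improvised.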
Knowledge-base grounding (RAG)
Citation discipline. Bot cites source; user can verify.
Sample conversation
The framework
Twelve considerations for chatbot flow design.
- 01 The chatbot decision (or a different channel)
- 02 Structured-guided, not scripted or hallucinating
- 03 Intent architecture sound (70-90% coverage)
- 04 Knowledge-base grounded
- 05 Branching adds value (3-5 turn limit)
- 06 Fallback patterns multi-layered
- 07 Escalation triggers defined
- 08 Escalation context handoff clean
- 09 Analytics instrumented per intent
- 10 Maintenance cadence defined
- 11 Brand voice consistent
- 12 Audience-fit honest
What is in the skill
Thirteen sections covered in the body.
01
What this skill covers
Conversational flow design for website chatbots and AI agents. Distinct from ai-content-collaboration (AI in content workflows).
02
The chatbot decision
When chatbots earn deployment. Five conditions; honest no-cases where human-first or self-serve channels work better.
03
Scripted-bot vs hallucinating-bot vs structured-guided-conversation
The keystone framing. The litmus test for honest scope.
04
Intent architecture
Defining what the bot can and cannot handle. Named intents, hierarchies, boundaries, coverage.
05
Knowledge-base grounding
Retrieval-augmented generation, source-of-truth design, citation discipline.
06
Branching and conditional logic
Intent-driven, context-driven, user-attribute, multi-turn branching.
07
Fallback patterns
Multi-layered: clarification, suggested intents, resource handoff, human escalation.
08
Escalation to human
When, how, with what context handoff. User-initiated, out-of-scope, sentiment-driven, high-stakes.
09
Conversation analytics
Per-intent metrics. Diagnostic uses. Hallucination detection.
10
Common failure modes
10+ patterns: hallucination, rigidity, qualification wrong, support tickets up, mid-conversation abandonment.
11
The framework: 12 considerations
Decision, structured, intents, grounding, branching, fallback, escalation triggers, handoff, analytics, maintenance, voice, audience-fit.
12
Reference files
Nine references covering decision criteria, intent architecture, grounding, branching, fallback, escalation, analytics, anti-patterns, failures.
13
Closing: chatbots earn deployment when they know what they don't know
The bots that compound trust are the ones that ground answers and escalate honestly.
Reference files
Nine references that go alongside the SKILL.md.
references/chatbot-decision-criteria.md
When chatbots earn deployment and when they do not. The conditions that warrant the build.
references/intent-architecture-patterns.md
Defining what the bot can and cannot handle. Named intents, hierarchies, boundaries, coverage.
references/knowledge-base-grounding-patterns.md
Retrieval-augmented generation, source-of-truth design, citation discipline, knowledge-base maintenance.
references/branching-and-conditional-logic.md
How the bot adapts the conversation. Intent-driven, context-driven, user-attribute, multi-turn branching.
references/fallback-pattern-design.md
What happens when intent is unclear or out-of-scope. Multi-layered fallback patterns.
references/escalation-to-human-patterns.md
When, how, with what context. Escalation triggers and handoff quality.
references/conversation-analytics-patterns.md
Per-intent metrics. Diagnostic uses. The data that informs maintenance.
references/chatbot-anti-patterns.md
The patterns that look like chatbots but degrade trust. Signal-pattern-cost framing.
references/common-chatbot-failures.md
10+ failure patterns with diagnoses and cures.
Pairs with these platforms
Three platforms with chatbot-relevant workflows.
The skill is platform-agnostic. These platforms ship workflows that fit chatbot programs: AirOps (workflow automation for bot conversation handling), Notion (knowledge-base source for grounding), PostHog (per-intent analytics and conversation tracking).
Content teams that prefer managed workflow builders to build-it-yourself pipelines
AirOps
AirOps's official MCP and Claude Connector for AEO data and Brand Kits
Notion-centric teams
Notion
Briefs as a queryable database
Product-led growth teams
PostHog
Open-source product analytics with experiments
Bridges to other skills
Five sister skills that compose with chatbot design.
Adjacent (different AI surface)
ai-content-collaboration
AI in content workflows: AI as creative partner in producing brand-voiced content. This skill is AI in customer-facing conversations: bots that handle FAQ, qualify leads, and escalate to humans.
Engineering handoff
pm-spec-writing
Writing the spec for engineers building the bot. This skill is about WHAT the conversation should be; pm-spec-writing is about communicating it to the team that will build it.
Upstream input
discovery-research-synthesis
Customer research informs the bot's intent architecture. Patterns surfaced through discovery often map directly to the intents the bot needs to recognize.
Knowledge-base source
content-strategy
The bot's knowledge base draws from the documentation and content the brand publishes. Strong content strategy makes the bot's grounding richer and more current.
Cross-team coordination
integration-orchestrator
Bot deployment touches multiple teams (marketing, sales, support, engineering). integration-orchestrator covers the cross-team coordination; this skill covers the conversational design.
Growth Tooling Tier 1, skill 5 of 6
The conversational surface in Tier 1.
Chatbot-flow-design pairs with multi-step-form-design as the conversational and form surfaces in Tier 1.
funnel-flow-architecture closes Tier 1 by zooming out to the cross-tool architecture orchestrating all the tools in the program: lead magnets, calculators, quizzes, forms, chatbots.
The catalog now carries 92 flagships across 8 categories.
Open source under MIT
Read the SKILL.md on GitHub.
The skill source lives in the rampstackco/claude-skills repository. MIT licensed.
Frequently asked questions.
- How is chatbot-flow-design different from ai-content-collaboration?
- ai-content-collaboration is AI in content workflows: AI as creative partner in producing brand-voiced content. This skill is AI in customer-facing conversations: bots that handle FAQ, qualify leads, and escalate to humans. Different audience (internal vs customer-facing), different design considerations (voice consistency vs intent recognition and fallback).
- What is structured-guided-conversation?
- LLM-powered with intent architecture, knowledge-base grounding, defined fallback paths, and explicit escalation to humans. The bot knows what it knows, knows what it does not, and routes appropriately. Distinct from scripted-bots (rigid trees that fail any unexpected phrasing) and hallucinating-bots (LLM without structure that makes things up). The structure is what makes the bot trustworthy.
- Why does the bot need knowledge-base grounding?
- Without grounding, the LLM generates responses from training data or pattern-matching. Confident-sounding answers; frequently wrong. With grounding (RAG against documentation, pricing, product specs), the bot retrieves before generating. Responses match source-of-truth. Hallucination risk drops significantly. Customer-facing accuracy is preserved.
- When should the bot escalate to a human?
- User-initiated (talk to human); out-of-scope intent the bot recognizes; repeated fallback after 2-3 unclear exchanges; sentiment-driven (frustration); high-stakes topics (account security, cancellations, complaints) by default. Escalation triggers should be defined explicitly. The escalation context handoff (passing conversation history and intent to the human) determines whether escalation feels like progress or starting over.
- How do you handle out-of-scope questions honestly?
- Multi-layered fallback. First, clarification ("Can you tell me more?"). If still unclear, suggest intents ("I can help with X, Y, or Z"). If none match, resource handoff ("Here is the documentation page"). If self-serve will not work, human escalation. The bot that admits its limitations earns more trust than the bot that fakes confidence on out-of-scope questions.
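The fallback ladder can be sketched as one function keyed on how many unclear turns have passed. The four-rung shape follows the answer above; the messages and the placeholder docs URL are illustrative:

```python
# Fallback ladder sketch: each unclear turn moves one rung down.
# Rung messages and the docs URL are illustrative placeholders.
def fallback_response(unclear_turns: int, known_intents: list[str]) -> dict:
    if unclear_turns == 1:
        return {"step": "clarify",
                "text": "Can you tell me a bit more about what you need?"}
    if unclear_turns == 2:
        menu = ", ".join(known_intents)
        return {"step": "suggest",
                "text": f"I can help with: {menu}. Is it one of those?"}
    if unclear_turns == 3:
        return {"step": "resource",
                "text": "This page may help: https://example.com/docs"}
    return {"step": "escalate",
            "text": "Let me connect you with a person who can help."}
```

Each rung costs the user one turn, which is why the ladder is capped: by the fourth unclear exchange the honest move is a human, not another clarifying question.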
- What conversation analytics matter?
- Per-intent recognition rate (correctly classified), resolution rate (the user got the answer they needed), fallback rate (which intents are missing), escalation rate (some intents may be better handled by humans), hallucination rate (sampled audits against source-of-truth). Without per-intent analytics, the bot's quality decays unnoticed. Programs that track only conversation count keep shipping the same patterns.
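Computing these rates per intent is a small aggregation over the conversation log. A sketch, assuming an event shape of my own devising (`intent` plus boolean `resolved`/`escalated`/`fell_back` flags); any logging pipeline that emits equivalent fields supports the same rollup:

```python
from collections import Counter

# Per-intent analytics sketch. The event shape (intent + boolean
# resolved/escalated/fell_back flags) is an assumption.
def per_intent_metrics(events: list[dict]) -> dict:
    totals, resolved, escalated, fell_back = Counter(), Counter(), Counter(), Counter()
    for e in events:
        intent = e["intent"]
        totals[intent] += 1
        # Counter treats missing keys as 0, and True/False add as 1/0.
        resolved[intent] += e.get("resolved", False)
        escalated[intent] += e.get("escalated", False)
        fell_back[intent] += e.get("fell_back", False)
    return {
        i: {
            "volume": n,
            "resolution_rate": resolved[i] / n,
            "escalation_rate": escalated[i] / n,
            "fallback_rate": fell_back[i] / n,
        }
        for i, n in totals.items()
    }

log = [
    {"intent": "pricing_question", "resolved": True},
    {"intent": "pricing_question", "fell_back": True},
    {"intent": "demo_request", "escalated": True},
]
print(per_intent_metrics(log)["pricing_question"]["resolution_rate"])  # 0.5
```

The per-intent breakdown is the diagnostic payoff: a high fallback rate on one intent points at a gap in the intent architecture, while a high escalation rate on another may mean that intent simply belongs with humans.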