Walkthrough · Growth tooling builds

Design a SaaS onboarding flow

You have a SaaS product where activation rate is below where it should be. Users sign up but do not reach the ah-ha moment. You need to design an onboarding flow that gets first-session users to value without overwhelming them.

  • PM
  • Growth
16 min read

Orchestration sequence

A layered activation system, not a linear sequence.

Activation in SaaS happens across multiple simultaneous layers, not as a single flow. Discovery upstream defines what jobs the user is hiring the product to do. Three activation layers run concurrently inside the product: a sequential first-run wizard, contextual tours surfacing at moments of friction, and a chatbot for on-demand help. Measurement downstream reads activation across all three layers and attributes contribution.

Discovery layer (upstream)

jtbd-framing

Produces job statements that anchor every layer below. The primary job defines the ah-ha moment; the activation criterion follows from it. Without the job framing, activation is just a feature-engagement metric without a why.

Activation layers (concurrent, in-product)

Layer 1: Sequential first-run

onboarding-wizard-design

Wizard the user steps through on first session

4-step wizard that gets the user from signup to ah-ha moment. Skip-and-resume mechanics for deferable steps. Critical setup is required; configuration is deferable.


Layer 2: Contextual help

interactive-product-tour

Tours triggered at moments of friction

Library of micro-tours triggered by user behavior (entered new section, hit empty state, returned after absence). Power-user vs new-user differentiation enforced. Help surfaces only when needed.


Layer 3: On-demand assistance

chatbot-flow-design

Conversational fallback for users in friction

5 named intents grounded in the knowledge base. Confidence-thresholded fallback to documentation lookup; sentiment-driven escalation to human. Bot knows what it knows and routes appropriately.

The three layers run simultaneously. A user might complete the wizard, hit a tour at session 2, and use the chatbot at session 4. All three contribute to activation.

Measurement layer (downstream)

experimentation-analytics

Measures activation rate, ah-ha moment hit rate, drop-off at each surface. Layer attribution shows which layer (wizard, tour, chatbot) contributed most to activation. The data feeds back into layer redesign.

Artifacts at each layer

What the workflow produces, illustrated.

Each layer produces an artifact. Discovery upstream produces jobs and an ah-ha definition. The three activation layers produce a wizard architecture, a tour trigger map, and a chatbot intent architecture. Measurement produces the activation funnel with layer attribution. Real artifacts vary by product and audience; these mockups capture the shape.

Discovery layer output

Jobs and ah-ha moment definition

The jtbd-framing skill produces job statements anchored to specific situations and outcomes. The ah-ha moment follows from the primary job; the activation criterion follows from the ah-ha moment. Without this upstream framing, the activation layers below have nothing to converge on.

Jobs discovery output

What jobs are users hiring this product to do? Anchored to specific situations and outcomes; not generic personas.

Primary job

When I sign up for a new SaaS analytics tool, I want to see my own data in a meaningful chart, so I can decide whether to commit to learning the tool.

Secondary jobs (3)

  • When the analytics tool I just signed up for shows me a demo dashboard with fake data, I want to skip the demo and see my own data, so I can decide whether the tool fits my actual workflow, not the vendor's marketing pitch.

  • When I have not connected my data source yet, I want a clear, fast path to do that, so the product can start being useful instead of being friction.

  • When I have signed up but my teammates have not, I want to invite them in a way they will not ignore, so the tool can do team work, which is most of why I signed up.

Ah-ha moment defined

User sees their own real data in a chart that answers a question they care about.

Activation criterion

First chart with the user's own data created within their first session. Cohort analysis: users who hit this within 24h retain at 4x the rate of users who do not.
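
Expressed as code, the criterion reduces to a single event predicate. A minimal sketch, assuming a hypothetical event schema (the event name, fields, and session boundary are illustrative, not from a real product):

```ts
// Sketch only: a hypothetical event shape for checking the activation criterion.
interface ProductEvent {
  userId: string;
  name: string;          // e.g. "chart_created"
  usesOwnData: boolean;  // false for demo or sample data
  timestamp: number;     // epoch ms
}

// Activated = first chart with the user's own data within the first session.
function isActivated(events: ProductEvent[], firstSessionEnd: number): boolean {
  return events.some(
    (e) => e.name === "chart_created" && e.usesOwnData && e.timestamp <= firstSessionEnd
  );
}
```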

Activation layer 1 output

Wizard architecture (this specific wizard)

The onboarding-wizard-design skill produces the 4-step architecture for this specific product (the analytics-tool example): required vs deferable mapping, the ah-ha moment at step 3, and skip mechanics with consequence-warned exits. The skill landing covers the methodology in the abstract; this is what it looks like applied to a real product.

Onboarding wizard: 4 steps


  1. Connect data source

     Critical setup. The product cannot work without this.

  2. Pick a starting question

     Frames the first chart. Defaults to the user's most likely question based on their data shape.

  3. See first chart (ah-ha moment)

     The user's own data in a meaningful chart. This is the activation criterion.

  4. Invite teammates

     Deferable. Skipping leaves the user in a functional state, with a later prompt to finish.

Skip and resume mechanics

  • Step 4 (invite teammates): skip allowed; resumes via an in-product prompt at session 2 if not completed.
  • Steps 1-3: required; skipping would leave the product empty, so the exit is consequence-warned.
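
One way the required-vs-deferable mapping and skip handling could be encoded. A sketch only; the step ids, resume prompt, and skip handler are illustrative assumptions, not a real product's schema:

```ts
// Illustrative wizard configuration; not a real product's schema.
interface WizardStep {
  id: string;
  title: string;
  required: boolean;     // required steps get a consequence-warned exit
  resumePrompt?: string; // deferable steps resume via an in-product prompt
}

const wizard: WizardStep[] = [
  { id: "connect", title: "Connect data source", required: true },
  { id: "question", title: "Pick a starting question", required: true },
  { id: "first-chart", title: "See first chart (ah-ha moment)", required: true },
  {
    id: "invite",
    title: "Invite teammates",
    required: false,
    resumePrompt: "in-product prompt at session 2 if not completed",
  },
];

// Deferable steps skip silently and schedule a resume prompt;
// required steps warn about the consequence before allowing an exit.
function onSkip(step: WizardStep): "defer-and-schedule-resume" | "warn-before-exit" {
  return step.required ? "warn-before-exit" : "defer-and-schedule-resume";
}
```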

Activation layer 2 output

Tour trigger map

The interactive-product-tour skill produces the trigger map for the tour library. 6 micro-tours; each tied to a specific trigger condition; audience filtering enforced. Tours fire only at moments of friction; power users do not see new-user tours.

Tour system: trigger map

Library of 6 micro-tours. Each fires only at the moment of friction its trigger describes. Power-user vs new-user differentiation enforced via the audience field.

Each row: trigger condition → mini-tour (stops · audience).

  • User navigates to the dashboard for the first time → Dashboard tour (3 stops · all users)
  • User clicks an empty chart panel → Empty-state guidance (2 stops · new users only)
  • User has 0 events 24h after completing the wizard → Re-engagement prompt (1 stop · inactive new users)
  • User explores the filter panel for the first time → Filtering walkthrough (3 stops · all users)
  • First encounter with a newly shipped feature → What's-new tour (2 stops · all users, one-time)
  • User returns after a 30+ day absence → Re-orientation (2 stops · lapsed users only)

Differentiation discipline

Audience filtering prevents tooltip-spam for power users. Tours fire only when the user is actually new to the feature or the context the trigger describes. Re-trigger logic respects completion state.
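
A sketch of how the trigger map and audience discipline could be encoded. The trigger conditions, audience labels, and user-state shape are assumptions for illustration:

```ts
// Sketch: one possible encoding of the tour trigger map.
type Audience = "all" | "new" | "lapsed"; // inactive-new elided for brevity

interface UserState {
  isNew: boolean;
  isLapsed: boolean;
  completedTours: Set<string>;
  firstVisit(section: string): boolean;
}

interface TourTrigger {
  tourId: string;
  audience: Audience;
  oneTime: boolean;                  // re-trigger logic respects completion state
  condition(u: UserState): boolean;  // the moment of friction
}

const triggers: TourTrigger[] = [
  { tourId: "dashboard", audience: "all", oneTime: true, condition: (u) => u.firstVisit("dashboard") },
  { tourId: "empty-state", audience: "new", oneTime: true, condition: (u) => u.firstVisit("empty-chart") },
  { tourId: "re-orientation", audience: "lapsed", oneTime: false, condition: (u) => u.isLapsed },
];

// A tour fires only if the audience matches, the user has not already
// completed a one-time tour, and the friction condition is actually true.
function shouldFire(t: TourTrigger, u: UserState): boolean {
  if (t.oneTime && u.completedTours.has(t.tourId)) return false;
  if (t.audience === "new" && !u.isNew) return false;
  if (t.audience === "lapsed" && !u.isLapsed) return false;
  return t.condition(u);
}
```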

Activation layer 3 output

Chatbot intent architecture

The chatbot-flow-design skill produces the intent architecture and fallback paths. 5 named intents grounded in the knowledge base; confidence-thresholded fallback to documentation lookup; sentiment-driven escalation to human. Bot knows what it knows.

Chatbot intent architecture

5 named intents the bot handles confidently. Out-of-scope routes to documentation lookup; below confidence threshold escalates to human.

Top intents (5)

  • Data connection help (high confidence)

    Grounded in: Connection docs

  • Chart-builder questions (high confidence)

    Grounded in: Chart-builder docs

  • Billing question (high confidence)

    Grounded in: Billing FAQ

  • Feature inquiry (moderate confidence)

    Grounded in: Feature catalog

  • Account access / SSO (moderate confidence)

    Grounded in: Account docs

Knowledge-base sources (4)

  • Product documentation (canonical)
  • Connection setup guides
  • Pricing and billing FAQ
  • Release notes (last 90 days)

Each intent grounded in a specific source; bot cites source on response.

Fallback path

  1. Intent classification confidence below 0.7: ask one clarifying question.
  2. Still unclear: documentation lookup against the KB; surface the relevant article.
  3. User-initiated escalation OR detected frustration sentiment: route to a human with full conversation context.

Escalation criteria: two unsuccessful clarifications, a sensitive topic (account access, billing dispute), or an explicit user request.
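
As a sketch, the routing logic might look like this; the turn shape and field names are assumptions, with only the 0.7 threshold and the escalation criteria taken from the design above:

```ts
// Sketch of the fallback path; classifier and sentiment scores are assumed inputs.
interface BotTurn {
  intent: string | null;
  confidence: number;         // 0..1 from the intent classifier
  clarificationsAsked: number;
  frustrationDetected: boolean;
  userAskedForHuman: boolean;
  sensitiveTopic: boolean;    // account access, billing dispute, etc.
}

type Route =
  | { kind: "answer"; intent: string }
  | { kind: "clarify" }
  | { kind: "doc-lookup" }
  | { kind: "escalate-to-human" };

const CONFIDENCE_THRESHOLD = 0.7;

function route(turn: BotTurn): Route {
  // Escalation criteria: explicit request, frustration sentiment, sensitive
  // topic, or two unsuccessful clarifications.
  if (
    turn.userAskedForHuman || turn.frustrationDetected ||
    turn.sensitiveTopic || turn.clarificationsAsked >= 2
  ) {
    return { kind: "escalate-to-human" };
  }
  if (turn.intent && turn.confidence >= CONFIDENCE_THRESHOLD) {
    return { kind: "answer", intent: turn.intent };
  }
  // Below threshold: clarify once, then fall back to documentation lookup.
  return turn.clarificationsAsked === 0 ? { kind: "clarify" } : { kind: "doc-lookup" };
}
```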

Measurement layer output

Activation funnel with layer attribution

The experimentation-analytics skill reads activation across all three layers and attributes contribution. Funnel, cohorts, layer attribution, iteration insights. The dashboard surfaces which layer is earning its build cost and where each layer needs redesign.

Activation funnel + layer attribution

Activation defined as: first chart with the user's own data. Funnel measures across all 3 layers; attribution shows which layer contributed most to activation.

Activation funnel (last 30 days)

  • Signup: 2,400 (100%)
  • Wizard step 1 (connect): 2,280 (95%)
  • Wizard step 3 (ah-ha): 1,488 (62%)
  • Tour engagement: 1,140 (47.5%)
  • Chatbot used: 624 (26%)
  • Activated (first chart with own data): 1,320 (55%)

Cohort breakdown (by signup source)

  • Organic signup: 62%
  • Paid acquisition: 47%
  • Partner referral: 71%
  • Self-serve trial: 53%

Partner-referral cohort activates highest; investigate what is different about their signup motivation.

Layer attribution

  • Wizard: 68%

    Critical setup; carries the primary load.

  • Tours: 22%

    Catch users who dropped off before the ah-ha step (wizard step 3).

  • Chatbot: 10%

    Recovers users in friction; the escalation path.
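
A sketch of how the attribution could be computed from touch events. The even-split (fractional) model here is one choice among several, and the event shapes are hypothetical:

```ts
// Sketch: attribute activated users to the layers they touched before activating.
type Layer = "wizard" | "tour" | "chatbot";

interface Touch {
  userId: string;
  layer: Layer;
  timestamp: number;
}

// Fractional attribution: each activated user's credit is split evenly
// across the distinct layers they touched before their activation time.
function attribute(
  touches: Touch[],
  activatedAt: Map<string, number> // userId -> activation timestamp
): Record<Layer, number> {
  const credit: Record<Layer, number> = { wizard: 0, tour: 0, chatbot: 0 };
  for (const [userId, t] of activatedAt) {
    const layers = new Set(
      touches
        .filter((x) => x.userId === userId && x.timestamp <= t)
        .map((x) => x.layer)
    );
    for (const layer of layers) credit[layer] += 1 / layers.size;
  }
  return credit;
}
```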

Iteration insights

Wizard step 3 (the ah-ha moment) drops 33%; users who do not complete it still activate at an 18% rate via tours. Tours are catching the drop-off, but the recovery is only partial; v2 should redesign step 3 to be more forgiving when the user's data is sparse. The paid-acquisition cohort underperforms organic by 15 points; the messaging that drove the paid signup may not match the wizard step 2 framing.

How a Claude agent runs this

The agent transcript, illustrative.

A Claude agent loaded with these skills can chain through the orchestration. The transcript below illustrates the flow.

claude-code — walkthrough orchestration

User

Our SaaS activation rate is below industry benchmark. Users sign up but do not reach the ah-ha moment. Where do we start?

Agent

Upstream first. Activation depends on knowing what activation means for your audience. The jtbd-framing skill produces the jobs and defines the ah-ha moment from the user's situation.

Skill

jtbd-framing

Output

Primary job: 'When I sign up for a new SaaS analytics tool, I want to see my own data in a meaningful chart, so I can decide whether to commit.' Ah-ha moment: user sees their own real data in a chart that answers a question they care about. Activation criterion: first chart with user's own data created within first session.

User

That's clearer. Now design the wizard.

Agent

Designing the first-run wizard around the ah-ha moment. 4 steps: connect, pick a question, see first chart, invite team. Step 4 is deferable; steps 1-3 are required.

Skill

onboarding-wizard-design

Output

4-step wizard. Step 1: connect data source. Step 2: pick starting question (defaults from data shape). Step 3: see first chart (ah-ha moment). Step 4: invite teammates (skip allowed; resume via in-product prompt session 2). Required-vs-deferable mapping; consequence-warned skip on steps 1-3.

Agent

Tours come next. Library of micro-tours triggered by user behavior; not first-login tours.

Skill

interactive-product-tour

Output

6 micro-tours: dashboard tour, empty-state guidance, re-engagement prompt at 24h-zero-events, filter walkthrough, what's-new tour, lapsed-user re-orientation. Audience filtering enforced (new vs power vs lapsed). Library architecture; each fires only at the moment of friction.

Agent

Chatbot for on-demand help. 5 named intents; out-of-scope routes to documentation; below-confidence escalates to human.

Skill

chatbot-flow-design

Output

5 intents: data connection, chart-builder, billing, feature inquiry, account/SSO. KB grounding: product docs, connection guides, billing FAQ, release notes. Fallback path: clarify -> documentation lookup -> escalation. Confidence threshold 0.7.

User

How do we know it's working and which layer is earning its keep?

Agent

Measurement across all 3 layers. Activation funnel as the outcome; layer attribution as the secondary cut.

Skill

experimentation-analytics

Output

Activation 55% (industry benchmark 50-70%). Wizard step 3 (ah-ha) drops 33%; tours recover 18% of those who drop off. Layer attribution: wizard 68%, tours 22%, chatbot 10%. Cohort: partner-referral 71% activation, paid-acquisition 47%. Iteration insight: paid messaging may not match the wizard step 2 framing; redesign step 2 for the paid cohort.

Variations

Three tiers of the same workflow at different scales.

The full skill cluster fits a flagship version of the workflow. Most teams need lighter cuts more often. The three tiers below describe when each cut fits and which skills carry the work.

  • Tier 1

    Activation system

    Complex product, full layered system. Wizard plus tour library plus chatbot plus jobs research and per-layer measurement. Treated as a sustained product investment with dedicated PM and quarterly redesign cadence.

    Time / cost

    10-12 weeks; cross-functional team plus dedicated PM

    Skills involved

    • jtbd-framing
    • onboarding-wizard-design
    • interactive-product-tour
    • chatbot-flow-design
    • experimentation-analytics

    Output shape

    Jobs statements + ah-ha moment definition + 4-step wizard + 6+ micro-tour library + 5+ intent chatbot + activation funnel with layer attribution + quarterly iteration plan.

  • Tier 2

    Standard onboarding flow

    Typical SaaS, wizard plus tour. The default shape for most products: a guided first-run plus contextual help. Chatbot deferred until activation maturity warrants the build.

    Time / cost

    6 weeks; PM plus design plus engineering

    Skills involved

    • jtbd-framing
    • onboarding-wizard-design
    • interactive-product-tour
    • experimentation-analytics

    Output shape

    Jobs framing + ah-ha definition + wizard + tour library + activation metrics; chatbot scoped for v2 if needed.

  • Tier 3

    Light onboarding (wizard-only)

    Simple product where the ah-ha moment is reachable in 2-3 setup steps. Wizard alone is enough; tours and chatbot are over-engineering at this stage.

    Time / cost

    3 weeks; small team

    Skills involved

    • jtbd-framing
    • onboarding-wizard-design
    • experimentation-analytics

    Output shape

    Jobs framing + ah-ha moment + simple wizard + activation rate measurement; tours and chatbot deferred until product complexity warrants.

Frequently asked

Questions this walkthrough surfaces.

How do we decide between wizard, tour, and chatbot for a given help need?
The three layers serve different moments. Wizards are sequential first-run experiences for the critical setup the product cannot work without; they front-load the setup work in a guided flow. Tours are contextual help that surfaces at moments of friction across the lifecycle (entered new section, returned after absence, hit empty state); they teach features the user encounters as they explore. Chatbots are on-demand assistance for users who hit a wall and explicitly want help; they handle the long tail of questions the wizard and tours did not anticipate. Each layer has a different trigger pattern and a different audience signal. The walkthrough shows how the three compose without overlapping into noise.
What if the wizard's ah-ha moment requires data the user has not connected yet?
The wizard's job is to get the user to the moment of seeing their own data. If the data connection has not happened by the time the ah-ha moment fires, the moment is hollow: the user sees a sample chart, not their own, and the chart does not earn trust. Three patterns address this. First, sequence the wizard so data connection happens before the ah-ha moment (most products do this). Second, design an interim ah-ha moment for the first session (sample data in the user's account context with a clear path to connect their own data) plus a real ah-ha moment in session 2 once data is connected. Third, accept that the activation criterion is a session-2 event and design the first session's success criterion accordingly. The onboarding-wizard-design skill's ah-ha-moment-engineering reference covers these patterns.
How do we measure activation when it depends on multiple layers?
Activation is the outcome (e.g., first chart with the user's own data); the layers are the means. Measure activation rate as a single funnel; attribute layer contribution as a secondary cut. Layer attribution shows which layer (wizard, tour, chatbot) the user passed through before activating, and how much each contributes. Programs with strong wizards typically show 60-75% wizard attribution; tours catch the wizard's drop-off; chatbots recover users in friction. If any layer's attribution is near zero, that layer may not earn its build cost. The activation-metrics-mockup shows the shape.
Can AI agents replace the chatbot layer?
The chatbot-flow-design skill covers this in detail. Briefly: the discipline that distinguishes a useful chatbot from a hallucinating one is the same whether the bot is rule-based or LLM-grounded. Intent architecture, knowledge-base grounding, fallback patterns, escalation discipline. AI agents make the implementation easier (better intent classification, more flexible response generation) but harder to discipline (LLMs hallucinate confidently if not grounded). The shape of the architecture stays the same. Production teams using LLM-based bots still need the intent set, the grounding, the fallback paths, and the escalation triggers.
How often should we redesign the onboarding flow?
Continuous refinement quarterly; major redesign rarely. Refinement comes from layer-attribution data (this layer is overweighted; this trigger is firing wrong; this step in the wizard has unexpected drop-off). Major redesign is warranted when audience composition fundamentally shifts, the product changes such that the activation criterion changes, or multiple refine cycles have not moved activation rate. The funnel-iteration-discipline reference in funnel-flow-architecture covers the pattern; the same principle applies to layered onboarding systems.
How does this walkthrough relate to the activation skills individually?
The skills are the methodology; this walkthrough is the orchestration. Onboarding-wizard-design covers the wizard methodology in isolation. Interactive-product-tour covers tour methodology. Chatbot-flow-design covers chatbot methodology. Each has its own keystone framing and design discipline. The walkthrough's value over the skills alone is showing how the three layers compose: how the wizard's drop-off is caught by tours, how the chatbot handles the long tail neither anticipated, how attribution measurement closes the loop. The skills are tools; the walkthrough is the system that uses them concurrently.

Metrics shown are illustrative. Actual results vary by platform, methodology, and traffic volume.