Flagship Skill · Quiz and assessment design
The quiz and assessment design skill.
Quizzes that earn engagement by earning the next step.
A senior growth practitioner's playbook for designing quizzes, personality assessments, and recommendation tools that produce actionable segmentation rather than generating clicks for clicks' sake. Question architecture, scoring algorithms, result categorization, recommendation mapping.
Audience: growth marketers, product marketers, content marketers, agencies running quiz-based growth tooling for clients.
What this skill is for
The growth tooling suite, grouped by where work happens.
Quiz-and-assessment-design is one of two specific lead-magnet types in Tier 1, alongside calculator-design. Calculators give numbers; quizzes give categories. Both build on lead-magnet-design's parent-frame methodology.
Decide what to build
- lead-magnet-design
Parent-frame methodology. Format selection, audience-fit, follow-up sequence design.
Design specific magnet types
- calculator-design
Interactive calculators with transparent methodology and tiered value.
- quiz-and-assessment-design (this skill)
Quizzes producing actionable segmentation with matched recommendations.
Build conversion surfaces
- multi-step-form-design
Forms broken into coherent steps that earn completion.
- chatbot-flow-design
Conversational flows grounded in knowledge with honest fallback.
Orchestrate the funnel
- funnel-flow-architecture
Cross-tool architecture matching audience and stage.
The keystone distinction
Three positions. Both extremes are failure modes.
Failure mode
Clickbait-quiz
"What kind of pizza are you?" energy. Generates engagement (shares, comments, brief virality), segments nothing useful. Fun but not strategic.
Failure mode
Vanity-result
Elaborate-feeling result that flatters the taker but does not drive any specific next step. The result describes; it does not prescribe.
The discipline
Actionable-segmentation
Result places the taker into a defined category with a specific recommendation matched to that category. Tells them what to DO next, not just what they ARE.
Anatomy of an actionable quiz
Question, scoring, segment-to-recommendation mapping.
Three zones working together. Questions designed to distinguish segments meaningfully. Scoring algorithm that maps answers to balanced segment distribution. Result-to-recommendation mapping where each segment gets a distinct, actionable next step.
Question 4 of 7
What is your sales process maturity?
- Founder-led, no defined process
- Some defined steps, owner driven
- Defined sales playbook, multiple reps
- Multi-stage enterprise sales process
Scoring (multi-dimensional)
This question contributes weighted points across 4 segment dimensions. Combined with earlier questions, the scoring lands the taker in the matched segment.
In an honest distribution each segment receives 10-30% of takers; a balanced split signals the questions genuinely distinguish segments.
5 segments → 5 matched recommendations
- Solo founder pre-launch → Starter plan, 30-day trial
- Growing team without process → Growth plan + setup support
- Mid-market with multiple reps → Business plan + custom demo
- Enterprise complex sales → Enterprise + dedicated manager
- Not yet ready → Self-serve foundation library
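The scoring-and-mapping anatomy above can be sketched in a few lines. This is a hedged illustration, not the skill's implementation: the weights, segment keys, and function name are invented for the example; only the segment-to-recommendation pairs come from the list above.

```python
from collections import defaultdict

# Hypothetical weights: each (question, answer) pair contributes points to
# one or more segment dimensions. Names are illustrative.
ANSWER_WEIGHTS = {
    ("q4", "founder_led"):      {"solo_founder": 3, "not_ready": 1},
    ("q4", "owner_driven"):     {"growing_team": 3},
    ("q4", "playbook_reps"):    {"mid_market": 3},
    ("q4", "enterprise_stage"): {"enterprise": 3},
    # ...weights for q1-q3 and q5-q7 would follow the same shape
}

# The five segment-matched recommendations from the example above.
RECOMMENDATIONS = {
    "solo_founder": "Starter plan, 30-day trial",
    "growing_team": "Growth plan + setup support",
    "mid_market":   "Business plan + custom demo",
    "enterprise":   "Enterprise + dedicated manager",
    "not_ready":    "Self-serve foundation library",
}

def score(answers):
    """Sum weighted points per segment; return the winning segment and its matched recommendation."""
    totals = defaultdict(int)
    for qa in answers:
        for segment, points in ANSWER_WEIGHTS.get(qa, {}).items():
            totals[segment] += points
    segment = max(totals, key=totals.get)
    return segment, RECOMMENDATIONS[segment]

segment, rec = score([("q4", "owner_driven")])
# → ("growing_team", "Growth plan + setup support")
```

The point of the shape: the recommendation lookup is inseparable from the scoring, so a segment without a distinct recommendation cannot silently exist.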
The framework
Twelve considerations for quiz design.
- 01 · The quiz decision (or different format)
- 02 · Actionable, not clickbait or vanity
- 03 · Question architecture sound (5-12 questions)
- 04 · Scoring algorithm fits segmentation
- 05 · Result categories distinguishable and balanced
- 06 · Segment naming memorable and brand-consistent
- 07 · Recommendations specific to each segment
- 08 · Lead capture honest
- 09 · Result delivery in context
- 10 · Drop-off measured per question
- 11 · Lead quality measured by segment
- 12 · Audit cadence (segment balance, recommendations)
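Consideration 10, per-question drop-off, reduces to a small funnel computation. A minimal sketch, with invented counts of takers who answered each question:

```python
# Illustrative per-question completion counts (numbers invented for the sketch).
answered = {"q1": 1000, "q2": 930, "q3": 870, "q4": 610, "q5": 590, "q6": 575, "q7": 560}

def drop_off(answered):
    """Drop-off rate at each question, relative to the previous question."""
    questions = list(answered)
    return {
        cur: 1 - answered[cur] / answered[prev]
        for prev, cur in zip(questions, questions[1:])
    }

for q, rate in drop_off(answered).items():
    print(f"{q}: {rate:.1%} dropped")
# In this invented data, q4 loses ~30% of takers — a question worth
# rewording, reordering, or cutting.
```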
What is in the skill
Thirteen sections covered in the body.
01 · What this skill covers
Quiz-specific methodology. Distinct from lead-magnet-design (parent frame) and calculator-design (sister tool type).
02 · The quiz/assessment decision
When this format earns investment. Five conditions; honest no-cases when comparison tables or worked examples serve better.
03 · Clickbait-quiz vs vanity-result vs actionable-segmentation
The keystone framing. The litmus test: can the taker name their next step?
04 · Question architecture
Question count, types, ordering, phrasing. Question-segment mapping discipline.
05 · Scoring algorithms
Direct mapping, weighted scoring, multi-dimensional, branching. Choice criteria and tradeoffs.
06 · Result categorization
Segment count, naming, distinguishability, balance.
07 · Result-to-recommendation mapping
The action attached to each category. Mapping discipline and worked examples.
08 · Lead capture integration
When and where to ask for email. Pattern choices and tradeoffs.
09 · Quiz anti-patterns
Clickbait, vanity-result, forced-result, interrogation, leading questions, black-box, no-recommendation.
10 · Common failure modes
8+ patterns: shared widely but poor leads, engagement without conversion, segments feel similar, recommendations generic.
11 · The framework: 12 considerations
Decision, actionable, questions, scoring, categories, naming, recommendations, capture, delivery, drop-off, segment quality, audit.
12 · Reference files
Nine references covering investment criteria, question architecture, scoring, categorization, recommendation mapping, lead capture, anti-patterns, distinctions, failures.
13 · Closing: quizzes earn engagement when they earn the next step
Strong quizzes produce action: takers act on the recommendation matched to their segment.
Reference files
Nine references that go alongside the SKILL.md.
references/quiz-investment-criteria.md
When a quiz is the right tool, and when a comparison table or other format would serve.
references/question-architecture-patterns.md
Question count, types, ordering, phrasing, and segment mapping discipline.
references/scoring-algorithm-patterns.md
Direct mapping, weighted scoring, multi-dimensional, branching. Choice criteria and tradeoffs.
references/result-categorization-patterns.md
Segment count, naming, distinguishability, balance.
references/result-to-recommendation-mapping.md
The action attached to each category. Mapping discipline and worked examples.
references/lead-capture-integration-patterns.md
When and where to ask for the email. Pattern choices and tradeoffs.
references/quiz-anti-patterns.md
The patterns that look like quizzes but degrade trust. Signal-pattern-cost framing.
references/clickbait-vs-actionable-distinctions.md
Detailed treatment of the keystone framing with worked examples and counter-examples.
references/common-quiz-failures.md
8+ failure patterns with diagnoses and cures.
Pairs with these platforms
Three platforms with quiz-relevant workflows.
The skill is platform-agnostic. These platforms ship workflows that fit quiz programs: Webflow (host the quiz landing page and result delivery), PostHog (event tracking on per-question completion and per-segment outcomes), Notion (recommendation portfolio documentation).
Content teams and developers building content-focused sites with design ownership
Webflow
Webflow's official MCP for Data API + Designer API
Product-led growth teams
PostHog
Open-source product analytics with experiments
Notion-centric teams
Notion
Briefs as a queryable database
Bridges to other skills
Five sister skills that compose with quiz design.
Parent-frame methodology
lead-magnet-design: Parent-frame methodology covering when to invest in any magnet, format selection, audience-fit, and follow-up sequence. Quizzes are one specific magnet type; this skill provides the quiz-specific methodology lead-magnet-design presupposes.
Sister tool type
calculator-design: Calculators give numbers; quizzes give categories. Both can serve as lead magnets but the methodology differs: calculators emphasize calculation transparency; quizzes emphasize categorization quality and recommendation matching.
Downstream surface
landing-page-copy: The quiz landing page wraps the quiz with copy that frames the value of completing. This skill is the quiz itself; landing-page-copy is the page around it.
Adjacent (different scope)
discovery-research-synthesis: Discovery-research-synthesis covers internal research projects with defined batches. This skill is user-facing assessment that produces audience-facing categorization, not internal research synthesis.
Upstream context
content-strategy: Content strategy decides which topics earn investment. Quizzes are one of those investments. The strategy informs whether a quiz fits the topic; the quiz design informs how.
Growth Tooling Tier 1, skill 3 of 6
The second specific magnet type in Tier 1.
Quiz-and-assessment-design completes the specific-magnet-type pair in Growth Tooling Tier 1, alongside calculator-design. Both build on lead-magnet-design's parent-frame methodology.
Tier 1 ships 6 skills total, completed by multi-step-form-design, chatbot-flow-design, and funnel-flow-architecture.
The catalog now carries 92 flagships across 8 categories.
Open source under MIT
Read the SKILL.md on GitHub.
The skill source lives in the rampstackco/claude-skills repository. MIT licensed.
Frequently asked questions.
- How is quiz-and-assessment-design different from calculator-design?
- Calculators give numbers (annual savings, recommended plan, monthly cost); quizzes give categories (segment plus matched recommendation). The methodology differs: calculators emphasize calculation transparency and methodology disclosure; quizzes emphasize question architecture, scoring algorithms, result categorization, and recommendation matching. Both can serve as lead magnets but the design discipline is distinct.
- What makes a quiz actionable rather than vanity?
- An actionable-segmentation quiz produces a result that places the taker into a defined category WITH a specific recommendation matched to that category. The taker comes away knowing what to DO next, not just what they ARE. Vanity-result quizzes produce flattering descriptions without next steps; clickbait quizzes produce engagement without segmentation. Actionable-segmentation is the discipline that compounds business value.
- How many questions and segments should a quiz have?
- Most quizzes work well with 5-10 questions producing 4-8 segments, roughly 1.5-2 questions per segment. Fewer than 5 questions rarely produce meaningful segmentation; more than 12 start to feel like a survey. More than 8 segments often produces over-segmentation, where adjacent segments become indistinguishable.
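These rules of thumb can be wired into a quick design-time sanity check. A sketch under the thresholds stated above and in the segment-balance guidance (the function name and segment names are mine; the thresholds are the answer's):

```python
def check_quiz_shape(n_questions, segment_shares):
    """Flag deviations from the rules of thumb: 5-12 questions, 4-8 segments,
    ~1.5-2 questions per segment, each segment drawing 10-30% of takers.
    segment_shares: observed or projected fraction of takers per segment."""
    warnings = []
    n_segments = len(segment_shares)
    if not 5 <= n_questions <= 12:
        warnings.append(f"{n_questions} questions is outside the 5-12 range")
    if not 4 <= n_segments <= 8:
        warnings.append(f"{n_segments} segments is outside the 4-8 range")
    ratio = n_questions / n_segments
    if not 1.5 <= ratio <= 2.0:
        warnings.append(f"{ratio:.1f} questions per segment (aim for 1.5-2)")
    for name, share in segment_shares.items():
        if not 0.10 <= share <= 0.30:
            warnings.append(f"segment '{name}' receives {share:.0%} of takers (aim for 10-30%)")
    return warnings

# A 7-question, 5-segment quiz with one dominant and one starved segment:
print(check_quiz_shape(7, {"a": 0.45, "b": 0.20, "c": 0.15, "d": 0.12, "e": 0.08}))
```

Running the check against live distribution data, not just the design, is what makes the audit-cadence consideration concrete.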
- What is the recommendation-portfolio precondition?
- Without a portfolio of segment-matched recommendations, the quiz cannot deliver actionable segmentation. Before building a quiz, verify the brand has distinct, valuable recommendations for each intended segment. A quiz with 5 segments and only one recommendation is decorative; the segmentation is fake. The recommendation-portfolio precondition is the make-or-break check before scoping a quiz.
- Should we offer a not-for-you segment?
- Often yes. Honest segmentation sometimes includes a segment for takers who are not the brand's audience. The recommendation acknowledges the lack of fit and offers something useful (a referral, a different brand's resource, a self-serve path). The taker who would have churned anyway leaves with respect for the brand. The brand earns honest reputation that compounds across audience segments.
- When does email capture happen in a quiz?
- Two main patterns. Pattern A (email after questions, before result): high conversion but lead-trap risk if the result is hidden. Pattern B (email after result for personalized PDF): honest exchange; higher lead quality but lower conversion. Pattern B is the default for B2B and considered purchases; Pattern A may work for entertainment-driven consumer quizzes. The pattern choice should be tested with both conversion-rate and downstream-conversion metrics.