Flagship Skill · Discovery research synthesis
The discovery research synthesis skill.
Synthesis is where research earns its keep.
A senior PM's playbook for turning research artifacts into decisions: customer interviews, user research notes, support ticket reviews, sales call transcripts, survey data, and in-app feedback, synthesized into the product direction they are meant to inform.
Audience: senior PMs, product directors, in-house product teams, agencies running discovery work for clients, researchers handing off to product teams.
What this skill is for
The PM suite, grouped by where work happens.
The PM catalog now spans 15 skills across four phases: upstream discovery, strategy and planning, execution, and measurement. This skill is upstream: the synthesis discipline that turns research into the decisions strategy and execution depend on.
Upstream: Discovery & Strategy
- discovery-research-synthesis
One-off research synthesis (this skill).
- jtbd-framing
Jobs-to-be-Done framing technique.
- user-feedback-aggregation
Continuous feedback streams.
- ux-research
Structured research projects.
Strategy & Planning
- okr-design
Outcome targets for the quarter.
- roadmap-planning
Initiatives sequenced by priority.
- pm-spec-writing
Per-piece spec discipline.
Execution
- experiment-design
Rigorous A/B testing.
- feature-flagging
Rollout mechanics.
- beta-program-management
Beta cohorts that produce signal.
- feature-launch-playbook
Launch as discipline.
Measurement
- product-analytics-setup
Instrumentation discipline.
- experimentation-analytics
Reading experiment results.
- data-warehouse-experimentation
Warehouse-native experimentation.
- experimentation-platform-orchestrator
Platform decision.
The keystone distinction
Three positions. The two extremes are failure modes.
Most discovery research never produces decisions. The team conducts interviews; transcripts pile up; a researcher hands product the raw artifacts (data-dump) or builds a polished readout deck (insight-theater); the deck gets a 30-minute review and is never referenced again. The discipline is in the synthesis.
Failure mode
Data-dump
Research artifacts handed to product team raw. "Here are 47 transcripts; figure it out." No synthesis, no signal-finding, no implications drawn.
Failure mode
Insight-theater
Overpolished synthesis dressed as insight. Pretty quote walls, designed personas, no decision-driving conclusions. The deck that gets a polite review and is never referenced.
The discipline
Actionable-synthesis
Synthesis that drives decisions. Each pattern has a so-what attached; each finding leads to a product implication; the document gets referenced for months.
The litmus test. Three months after a synthesis ships, can team members name the decisions the synthesis informed? If yes, the synthesis was actionable. If team members remember the deck but cannot name the decisions, it was insight-theater. If nobody remembers anything, it was data-dump.
The synthesis flow
From research artifacts to decisions, in six stages.
The sequence is non-negotiable. Teams that skip stages produce findings that look like synthesis but are not: untagged transcripts, unclustered tags, unnamed patterns, patterns without implications, implications without decisions.
Inputs
- Customer interviews
- Support tickets
- Sales call transcripts
- Survey responses
- In-app feedback
Six-stage sequence
- 1. Transcribe and prepare
- 2. Tag at artifact level
- 3. Cluster across artifacts
- 4. Name patterns
- 5. Infer implications
- 6. Name the so-what
Decisions
- Roadmap input
- Spec input
- Kill-the-feature signals
- Strategic shift recommendations
Total synthesis time for a typical discovery cycle (12 interviews + 250 support tickets + 20 sales calls): 60-80 hours of focused work over 1.5-2 weeks. Teams that underestimate synthesis time at the start of a discovery cycle typically discover the gap only after the research is complete and synthesis stalls.
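To make the stage outputs concrete, here is a minimal sketch of the sequence as a data model, in Python. The schema and names are illustrative assumptions, not part of the skill, and the tag group-by is only the raw material for clustering, which in practice is analyst judgment rather than a computation.

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical schema for the six-stage sequence; all field names are illustrative.

@dataclass
class Artifact:
    """Stage 1: a transcribed, prepared research artifact."""
    source: str                                    # "interview", "support-ticket", "sales-call", ...
    text: str
    tags: list[str] = field(default_factory=list)  # Stage 2: tags applied at artifact level

@dataclass
class Pattern:
    """Stage 4: a named cluster. The name commits a specific position."""
    name: str                # "Admins abandon setup at SSO", not "Onboarding issues"
    member_tags: list[str]   # Stage 3: the tags this cluster collapsed together
    evidence: list[Artifact]

@dataclass
class Implication:
    """Stages 5-6: the analytical bridge from pattern to decision."""
    pattern: Pattern
    claim: str               # specific and falsifiable
    cost: str                # every implication acknowledges a cost
    so_what: str             # the decision this feeds: roadmap, spec, kill signal

def group_by_tag(artifacts: list[Artifact]) -> dict[str, list[Artifact]]:
    """Stage 3 starting point: collect evidence per tag. Real clustering
    merges related tags by judgment; this group-by is only the raw material."""
    clusters: dict[str, list[Artifact]] = defaultdict(list)
    for artifact in artifacts:
        for tag in artifact.tags:
            clusters[tag].append(artifact)
    return dict(clusters)
```

The point is the dependency chain the model encodes: implications reference named patterns, patterns reference tagged artifacts, so skipping a stage leaves a downstream object with nothing to stand on.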
The framework
Twelve considerations for discovery synthesis.
When designing or auditing a synthesis output, walk these 12 considerations (a checklist sketch follows the list).
- 01 Actionable-synthesis, not data-dump or theater
- 02 Research types matched to questions
- 03 Six-stage sequence: no stage skipped
- 04 Tags emerge from artifacts, not pre-built
- 05 Allow tag proliferation; clustering collapses it
- 06 Patterns named with conviction
- 07 Implications specific and falsifiable
- 08 So-what tied to real decisions
- 09 Document, not deck
- 10 Synthesis is opinionated
- 11 Review loop before publish
- 12 Data gaps named, not buried
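For teams that want the audit to be a repeatable ritual rather than a vibe check, the considerations can be carried as a literal checklist. A sketch under the assumption that each consideration is recorded as a pass/fail editorial judgment; the function and data names are hypothetical.

```python
# Hypothetical checklist form of the 12 considerations. Each verdict is an
# editorial judgment recorded as pass/fail, not a computable property.
CONSIDERATIONS = [
    "Actionable-synthesis, not data-dump or theater",
    "Research types matched to questions",
    "Six-stage sequence: no stage skipped",
    "Tags emerge from artifacts, not pre-built",
    "Allow tag proliferation; clustering collapses it",
    "Patterns named with conviction",
    "Implications specific and falsifiable",
    "So-what tied to real decisions",
    "Document, not deck",
    "Synthesis is opinionated",
    "Review loop before publish",
    "Data gaps named, not buried",
]

def audit(verdicts: dict[str, bool]) -> list[str]:
    """Return the considerations a synthesis fails (or was never judged on)."""
    return [c for c in CONSIDERATIONS if not verdicts.get(c, False)]

# Example: a draft that passed everything except review and gap-naming.
draft = {c: True for c in CONSIDERATIONS}
draft["Review loop before publish"] = False
draft["Data gaps named, not buried"] = False
print(audit(draft))  # prints the two failing considerations
```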
What is in the skill
Thirteen sections covered in the body.
01
What this skill is for
One-off research synthesis work. Distinction from user-feedback-aggregation (ongoing streams), jtbd-framing (a framing technique within synthesis), pm-spec-writing (downstream consumer).
02
Data-dump vs insight-theater vs actionable-synthesis
The keystone framing. Litmus test: three months after synthesis ships, can team members name the decisions it informed?
03
Discovery research types
Five types: customer interviews (depth), support tickets (volume), sales call analysis (prospect view), survey data (validation), in-app feedback (in-the-moment friction).
04
The synthesis sequence
Six stages: transcribe, tag, cluster, name patterns, infer implications, name the so-what. Non-negotiable; skipping stages breaks the chain.
05
Tagging and clustering discipline
Five principles: tag at artifact level, allow tag proliferation, cluster on patterns not overlap, name after clustering, tag dissent and contradictions.
06
Pattern naming
Patterns are not category labels. Names commit specific positions. Avoid bloat (4-8 patterns typical, not 30). Avoid truisms. Name with conviction.
07
From pattern to product implication
The bridge from observation to decision. Implications are the writer's analytical work, not user prescriptions. Each pattern can have multiple implications. Each implication acknowledges cost.
08
Writing for decisions, not deck performance
Document over deck. Patterns lead, evidence in appendix. Opinionated voice. Decision-input section. Optimized for reference use, not for the readout meeting.
09
The synthesis review and validation loop
Participant review where possible. Adjacent-team review. Quantitative validation. The challenge-the-synthesis session. Iteration before publishing.
10
When to halt and gather more data
Pattern thinness, segment under-representation, contradictory clusters, decision time pressure. The 'we have enough' trap.
11
Common failure modes
11+ failure patterns: data-dump, insight-theater, category bloat, truism patterns, misrecognition, vague implications, hedged synthesis, quote walls, wishlist synthesis.
12
The framework: 12 considerations
Actionable not data-dump or theater, sequence non-negotiable, tags from data, patterns named with conviction, implications falsifiable, decision-driving so-whats, document not deck, opinionated, reviewed, gaps acknowledged.
13
Closing: synthesis is where research earns its keep
Discovery research is expensive; synthesis is where the investment converts into decisions. Programs that take synthesis seriously earn returns; programs that data-dump or perform insight-theater do not.
Reference files
Nine references that go alongside the SKILL.md.
references/research-types-and-when-each-fits.md
Five research types with synthesis implications. Composition patterns. Wrong-pairs to avoid. Pitfalls per type. The when-to-commission framework.
references/synthesis-sequence-walkthrough.md
Six-stage sequence with worked example. Stage outputs, time investment per stage, common skip-failures. The non-negotiability principle.
references/tagging-and-clustering-discipline.md
Five principles for tagging and clustering. Tagging tactics that work. Clustering tactics that work. Common failures.
references/pattern-naming-patterns.md
Category-vs-pattern distinction. Implication-legibility test. Avoid-bloat principle. Avoid-truism principle. Conviction principle. Pattern name structures.
references/from-pattern-to-product-implication.md
The bridge from observation to decision. Falsifiability. Multi-implication patterns. Cost acknowledgment. The not-act implication.
references/writing-for-decisions-not-decks.md
Document-not-deck principle. Patterns lead. Parallel-section discipline. Opinionated voice. Decision-input section. Reference-after-shipping test.
references/synthesis-review-and-validation.md
Participant review, adjacent-team review, quantitative validation, the challenge-the-synthesis session, iteration discipline.
references/when-to-gather-more-data.md
Pattern thinness, segment under-representation, contradictory clusters, decision time pressure, the we-have-enough trap.
references/common-discovery-synthesis-failures.md
14+ failure patterns with diagnoses and cures. The cross-cutting documentation-vs-decision pattern.
Pairs with these platforms
Three platforms with synthesis-relevant workflows.
The skill is platform-agnostic. These platforms ship workflows that fit synthesis programs: Notion (synthesis docs and tagging at scale), AirOps (workflow-driven synthesis pipelines with AI assistance), Mixpanel (quantitative validation of qualitative patterns).
Notion-centric teams
Notion: Briefs as a queryable database.
Content teams that prefer managed workflow builders to build-it-yourself pipelines
AirOps: AirOps's official MCP and Claude Connector for AEO data and Brand Kits.
Product teams and analysts asking questions of product event data
Mixpanel: Mixpanel's official hosted MCP for product analytics.
Bridges to other PM-suite skills
Five sister skills that compose with synthesis.
Framing scope
jtbd-framing: A framing technique applied within synthesis. Job statements, struggling moments, hire/fire criteria emerge during the synthesis sequence.
Continuous feedback scope
user-feedback-aggregation: Ongoing feedback streams (support, NPS, in-app, sales). This skill covers one-off research projects; user-feedback-aggregation covers continuous streams.
Downstream consumer
pm-spec-writing: Specs reference synthesized insights as input. Strong specs ground design decisions in the patterns synthesis surfaced.
Downstream consumer
roadmap-planning: Roadmap uses synthesized priorities as input. Each roadmap candidate maps to the jobs and patterns synthesis identified.
Quantitative validation
experiment-design: Patterns from qualitative synthesis often warrant quantitative validation through rigorous A/B testing (a validation sketch follows this list).
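As one example of that bridge, here is a minimal sketch of checking a qualitative pattern against event data before committing it to an experiment. The pattern, cohort numbers, and the choice of a two-proportion z-test are assumptions for illustration; any source of cohort counts (a Mixpanel export, a warehouse query) works the same way.

```python
# A minimal sketch of quantitatively validating a qualitative pattern,
# assuming two cohort counts pulled from your analytics tool.
# The pattern name and all numbers below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

# Pattern under test: "Admins abandon setup at the SSO step."
# Cohort A: admins whose setup flow included SSO configuration.
# Cohort B: admins whose setup flow did not.
abandoned = [312, 198]   # abandonments per cohort
total     = [940, 1105]  # cohort sizes

z_stat, p_value = proportions_ztest(count=abandoned, nobs=total)
rate_a, rate_b = abandoned[0] / total[0], abandoned[1] / total[1]

print(f"abandonment: SSO cohort {rate_a:.1%} vs non-SSO {rate_b:.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value supports the pattern; it does not prove causation.
# Causal testing is what experiment-design's A/B discipline is for.
```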
Direction 7 closes
The first of five PM skills closing Direction 7.
Discovery research synthesis is the first of five PM skills shipped together in Direction 7 Dispatch B. The other four: jtbd-framing, okr-design, beta-program-management, and user-feedback-aggregation.
Together with Dispatch A (Tier 2 content suite), Direction 7 closes with 9 new skills total. The catalog now carries 86 flagships across creative direction, content, design, SEO, project management, marketing, and operations.
Next: Walkthroughs Direction (use-case-first orchestration pages).
Open source under MIT
Read the SKILL.md on GitHub.
The skill source lives in the rampstackco/claude-skills repository. MIT licensed.
Frequently asked questions.
- What does 'actionable synthesis' actually require?
- The six-stage synthesis sequence: transcribe and prepare, tag at the artifact level, cluster across artifacts, name patterns, infer product implications, name the so-what tied to a specific decision. Each stage is non-negotiable. Teams that skip stages produce findings that look like synthesis but do not drive decisions: untagged transcripts, unclustered tags, unnamed patterns, patterns without implications, implications without decisions. The litmus test: three months after a synthesis ships, can team members name the decisions the synthesis informed?
- How is this skill different from jtbd-framing?
- JTBD-framing is one specific framing technique often applied within synthesis (situation, motivation, outcome statements; struggling moments; hire/fire criteria). This skill is broader: it covers the synthesis discipline regardless of which framing techniques are applied. JTBD is a tool used within synthesis; this skill is the synthesis itself. The two compose: discovery synthesis often uses JTBD as one of several framing approaches.
- How is this skill different from user-feedback-aggregation?
- Discovery research synthesis covers one-off research projects: a defined batch of artifacts (12 customer interviews + 250 support tickets + a sales call audit), a defined synthesis output, a defined timeline. User-feedback-aggregation covers always-on feedback streams: support, NPS, in-app, sales calls, social, councils, all flowing continuously. Different cadences, different tooling needs, different synthesis discipline. The two compose: discovery research often draws from feedback streams plus commissions targeted research.
- What does 'data-dump vs insight-theater vs actionable-synthesis' mean?
- Data-dump: research artifacts handed to product team raw. 'Here are 47 transcripts; figure it out.' No synthesis, no signal. Insight-theater: overpolished synthesis dressed as insight. Pretty quote walls, designed personas, no decision-driving conclusions. The deck that gets a 30-minute review and is never referenced again. Actionable synthesis: synthesis that drives decisions. Each pattern has a so-what attached; each finding leads to a product implication; the document gets referenced in roadmap discussions for months.
- How long does substantive synthesis take?
- 60-80 hours of focused synthesis work for a typical discovery cycle (12 interviews + 250 support tickets + 20 sales calls). Roughly 1.5-2 weeks of dedicated PM time across the six stages: transcribe and prep (15-20 hours), tagging (25-40 hours), clustering (4-8 hours), pattern naming (2-4 hours), implications (3-6 hours), so-what (2-4 hours). Most teams underestimate synthesis time at the start of discovery cycles; budgeting it explicitly is the discipline.
- What if the synthesis reveals the research did not collect enough data?
- Name the gap explicitly. Do not pretend the data is sufficient when it is not. Honest options: synthesize for the represented segments and flag missing ones; pause to gather more research if a load-bearing decision warrants it; ship synthesis with named gaps and post-launch instrumentation that catches wrong decisions. The 'we have enough' trap is the failure mode: sunk-cost reasoning suppressing honest gap-naming.