Flagship Skill · User feedback aggregation

The user feedback aggregation skill.

Feedback as continuous signal, not periodic survey.

A senior product leader's playbook for collecting and synthesizing user feedback across channels. Support tickets, NPS surveys, in-app feedback, sales calls, social mentions, and customer councils, all aggregated into a triaged synthesis the team can actually act on.

Audience: senior PMs, product directors, customer success and support managers running feedback programs, in-house teams aggregating feedback across many channels.

What this skill is for

The PM suite, grouped by where work happens.

User-feedback-aggregation sits in upstream work: continuous feedback streams that inform discovery, strategy, and execution downstream. Distinct from discovery-research-synthesis (one-off research projects with defined batches).

Upstream: Discovery & Strategy

  • discovery-research-synthesis

    One-off research synthesis.

  • jtbd-framing

    Jobs-to-be-Done framing technique.

  • user-feedback-aggregation (this skill)

    Continuous feedback streams.

  • ux-research

    Structured research projects.

Strategy & Planning

  • okr-design

    Outcome targets for the quarter.

  • roadmap-planning

    Initiatives sequenced by priority.

  • pm-spec-writing

    Per-piece spec discipline.


The keystone distinction

Three positions. The two extremes are failure modes; the middle is the discipline.

Loudest-voice (failure mode)

Whoever complains the most gets their feature. Vocal minorities steer the roadmap; the silent majority's needs go unaddressed.

Averaged-noise (failure mode)

Every signal weighted equally: one enterprise account = one trial user = one angry social mention. Noise drowns signal.

Triaged-synthesis (the discipline)

Signal weighted by source quality, frequency, and decision relevance. Different feedback types carry different weights for different decisions.

The feedback funnel

From six channels to decisions, through triaged synthesis.

Six channels

Support tickets
NPS surveys
In-app feedback
Sales calls
Social mentions
Customer councils
↓ Categorization & tagging at scale

Triage layer

Signal weighted by source quality × frequency × decision relevance. Frequency-intensity matrix per item.

↓ Synthesis loop (weekly, monthly, quarterly)

Decision outputs

  • Roadmap input
  • Spec adjustment
  • No-op-but-tracked

Each channel surfaces different signal at different reliability. The triage layer applies channel-source weighting, the frequency-intensity matrix, and synthesis cadences. The output is decision input the team uses; without the triage layer, channels overflow into noise.
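The triage weighting above (source quality × frequency × decision relevance) can be sketched as a small scoring function. This is a hypothetical illustration, not the skill's prescribed implementation; the source names and weight values are assumptions, and the skill stresses that real weighting is per-decision judgment, not fixed arithmetic.

```python
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    text: str
    source: str          # e.g. "support", "nps", "in_app", "sales", "social", "council"
    distinct_users: int  # how many different users reported this pattern

# Hypothetical source-quality weights for one specific decision; the skill
# treats these as per-decision judgment calls, not universal constants.
SOURCE_QUALITY = {
    "support": 0.8, "nps": 0.5, "in_app": 0.7,
    "sales": 0.9, "social": 0.3, "council": 0.9,
}

def triage_score(item: FeedbackItem, decision_relevance: float) -> float:
    """Signal = source quality x frequency x decision relevance.
    Frequency counts distinct users, not raw message volume."""
    quality = SOURCE_QUALITY.get(item.source, 0.5)
    return quality * item.distinct_users * decision_relevance
```

Under these assumed weights, the same complaint from twelve distinct users scores three times higher coming through sales calls than through social mentions, which is the point of the triage layer: identical volume, different signal.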

The framework

Twelve considerations for feedback aggregation.

  1. Triaged-synthesis, not loudest or averaged
  2. Channel taxonomy known
  3. Channel-source weighting per decision
  4. Categorization at scale
  5. Frequency-intensity matrix
  6. Synthesis cadence (daily, weekly, monthly, quarterly)
  7. Cross-channel triangulation
  8. Closing the loop with users
  9. Drift detection
  10. Tooling that scales
  11. Decisions traceable to feedback
  12. Source bias acknowledged

What is in the skill

Thirteen sections covered in the body.

  1. What this skill is for

     Continuous user-feedback aggregation. Distinct from discovery-research-synthesis (one-off projects) and beta-program-management (beta-bounded).

  2. Loudest-voice vs averaged-noise vs triaged-synthesis

     The keystone framing. Signal weighted by source, frequency, decision relevance.

  3. Feedback channels and what each surfaces

     Six common channels. Each surfaces different signal at different reliability. Triangulation across channels.

  4. Channel-source weighting

     Decision-relative weighting. Different sources warrant different weights for different decisions.

  5. Categorization and tagging discipline at scale

     Taxonomy from data. Multi-tag discipline. AI-assisted at high volume. Periodic taxonomy review.

  6. Frequency vs intensity

     The two dimensions. The four-quadrant matrix and prioritization implications.

  7. From feedback to product decision

     The synthesis loop. Cadences (daily, weekly, monthly, quarterly). Decisions traceable to feedback.

  8. Closing the loop with users

     When feedback shapes product, telling users matters. The over-promising risk.

  9. Detecting drift in feedback patterns over time

     Drift patterns: increasing/decreasing volume, new patterns emerging, sentiment shifts. Drift as early signal.

  10. Feedback aggregation tooling considerations

     Tooling categories without specific endorsements. Build-vs-buy tension. Volume considerations.

  11. Common failure modes

     11+ patterns: loudest-voice, no decisions, channels overflow, NPS as compliance, no closing the loop.

  12. The framework: 12 considerations

     Triaged not loudest or averaged, channel taxonomy, weighting per decision, categorization, frequency-intensity, cadences, drift detection.

  13. Closing: feedback as continuous signal, not periodic survey

     Strong feedback aggregation is invisible discipline; weak aggregation is loud ceremony.
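The drift patterns listed above (volume increasing or decreasing, new patterns emerging) can be sketched as a rolling-window comparison. The window size and ratio threshold are illustrative assumptions, not the skill's prescription; real drift review still requires the investigation discipline the skill describes.

```python
def detect_volume_drift(weekly_counts: list[int], window: int = 4,
                        ratio_threshold: float = 1.5) -> bool:
    """Flag drift when the recent window's mean weekly volume diverges
    from the prior window's by more than ratio_threshold either way."""
    if len(weekly_counts) < 2 * window:
        return False  # not enough history to compare two windows
    prior = sum(weekly_counts[-2 * window:-window]) / window
    recent = sum(weekly_counts[-window:]) / window
    if prior == 0:
        return recent > 0  # a brand-new pattern emerging counts as drift
    ratio = recent / prior
    return ratio >= ratio_threshold or ratio <= 1 / ratio_threshold
```

A flagged pattern is an early signal to investigate, not an automatic roadmap change: the same check run per segment or per channel surfaces the segment-level and cross-channel drift the skill covers.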

Reference files

Nine references that go alongside the SKILL.md.

  • references/channel-types-and-what-each-surfaces.md

    Six common channels with strengths, weaknesses, biases, and synthesis implications. Cross-channel triangulation. Channel reliability calibration.

  • references/channel-source-weighting.md

    Decision-relative weighting. Worked examples. The averaged-noise and loudest-voice failures. How to set weights.

  • references/categorization-and-tagging-at-scale.md

    Taxonomy from data. Multi-tag discipline. Tagging at volume. Periodic taxonomy review.

  • references/frequency-vs-intensity.md

    Two dimensions. Four-quadrant matrix with worked dispositions. Frequency and intensity assessment.

  • references/from-feedback-to-product-decision.md

    Synthesis loop. Cadences. Decisions traceable to feedback. Roadmap input. Spec input. Stakeholder communication.

  • references/closing-the-loop-with-users.md

    When and how to communicate feedback-driven changes. The over-promising risk. Communication channels and timing.

  • references/detecting-drift-in-feedback.md

    Drift patterns. Investigation discipline. Drift as early signal. Drift in segments. Drift in cross-channel patterns.

  • references/tooling-considerations.md

    Tooling categories without specific endorsements. Criteria for selection. Build-vs-buy tension. AI-assisted feedback work.

  • references/common-feedback-aggregation-failures.md

    14+ failure patterns with diagnoses and cures. The cross-cutting collection-vs-decision-input pattern.

Browse all reference files on GitHub

Pairs with these platforms

Three platforms with feedback-aggregation workflows.

The skill is platform-agnostic. These platforms ship workflows that fit feedback-aggregation programs: Notion (feedback aggregation docs and tagging), Mixpanel (in-app feedback tracking and behavioral signal pairing), AirOps (synthesis workflows that scale with volume).

Bridges to other PM-suite skills

Five sister skills that compose with feedback aggregation.

  • One-off research scope

    discovery-research-synthesis

    One-off research projects with defined batches and outputs. This skill is the always-on feedback streams; the two compose for programs running both.

  • Beta-specific feedback

    beta-program-management

    Feedback bounded to beta participants and beta period. This skill spans all users continuously, including beta cohorts and beyond.

  • Downstream consumer

    pm-spec-writing

    Specs reference feedback patterns as input. Strong specs ground design decisions in the patterns aggregation surfaced.

  • Framing scope

    jtbd-framing

    JTBD framing applies within feedback synthesis. Struggling moments and hire/fire criteria emerge from feedback streams.

  • Downstream consumer

    roadmap-planning

    Roadmap uses feedback patterns as input. Each candidate maps to the patterns aggregation identified.

Direction 7 closes

The fifth and final PM skill closing Direction 7.

User-feedback-aggregation is the fifth and final skill in Direction 7 Dispatch B. Together with discovery-research-synthesis, jtbd-framing, okr-design, and beta-program-management, plus the Tier 2 content suite (Dispatch A: long-form-content-frameworks, content-refresh-system, content-repurposing, content-distribution), Direction 7 closes with 9 new skills total.

The catalog now carries 86 flagships across creative direction, content, design, SEO, project management, marketing, and operations.

Next: Walkthroughs Direction (use-case-first orchestration pages with text and visual mockups for AB testing, feature launch, content hub builds, and other recurring product workflows).

Open source under MIT

Read the SKILL.md on GitHub.

The skill source lives in the rampstackco/claude-skills repository. MIT licensed.

Frequently asked questions

How is user-feedback-aggregation different from discovery-research-synthesis?
Discovery-research-synthesis covers one-off research projects: a defined batch of artifacts, a defined synthesis output, a defined timeline. User-feedback-aggregation covers always-on feedback streams: support, NPS, in-app, sales calls, social, councils, all flowing continuously. Different cadences, different tooling needs, different synthesis discipline. The two compose: discovery research often draws from feedback streams plus commissions targeted research.
What does 'triaged synthesis' actually require?
Channel-source weighting: different sources warrant different weights for different decisions. Categorization at scale: taxonomy that emerges from data, supported by tooling. Frequency-vs-intensity matrix: high-frequency-high-intensity is top priority; low-frequency-low-intensity is noise. Synthesis cadences: daily critical, weekly patterns, monthly review, quarterly strategic. Decisions traceable to feedback: each major product decision can be explained by specific feedback patterns. Without these, feedback aggregation is collection without decision input.
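The "decisions traceable to feedback" requirement can be sketched as a record that links each decision to the patterns behind it. The field names are illustrative assumptions, not part of the skill:

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    decision: str                 # e.g. "prioritize bulk export in Q3"
    feedback_patterns: list[str]  # pattern tags that motivated the decision
    channels: list[str]           # channels where those patterns appeared
    cadence: str                  # which synthesis loop produced it

    def is_traceable(self) -> bool:
        # A decision with no linked patterns is collection
        # without decision input.
        return bool(self.feedback_patterns)
```

The point is the discipline, not the data structure: if a major decision cannot populate `feedback_patterns`, either the decision bypassed the feedback program or the program is not producing decision input.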
What is the loudest-voice failure mode?
A small number of customers complain frequently and loudly. Their feedback dominates synthesis because they generate the most volume. Their preferences shape the roadmap. The silent majority's needs go unaddressed. The product optimizes for the customers most likely to complain rather than the customers most strategic for the business. The cure: weight by source quality and representativeness, not by volume. A complaint repeated 50 times by one user is not 50x the signal of one complaint by 50 distinct users.
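The 50-repeats-by-one-user point can be sketched as counting distinct reporters per pattern instead of raw event volume. A minimal illustration; the pattern tags and IDs are hypothetical:

```python
from collections import defaultdict

def signal_by_pattern(feedback_events):
    """Each event is a (user_id, pattern_tag) pair. Signal strength
    counts distinct users per pattern, not raw event volume."""
    reporters = defaultdict(set)
    for user_id, pattern in feedback_events:
        reporters[pattern].add(user_id)
    return {pattern: len(users) for pattern, users in reporters.items()}

# One loud user filing 50 tickets about "dark-mode" versus 50 distinct
# users each mentioning "slow-search" once:
events = [("u1", "dark-mode")] * 50 + [(f"u{i}", "slow-search") for i in range(2, 52)]
```

Here both patterns generate identical volume, but deduplicating by user shows one signal of strength 1 and one of strength 50, which is exactly the cure the answer describes.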
How should channels be weighted differently for different decisions?
Decision-relative weighting. For an enterprise admin role decision: enterprise sales calls and enterprise support tickets weight high; small-team NPS weights low. For a free-tier onboarding decision: free-tier in-app feedback and support tickets weight high; long-tenured customer feedback weights low. The weighting is qualitative judgment, not arithmetic. The synthesis names the weighting choices explicitly so readers can engage.
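The decision-relative weighting in this answer can be sketched as per-decision weight tables. The numbers are illustrative assumptions, and the skill is explicit that the real weighting is qualitative judgment, named openly in the synthesis, not arithmetic:

```python
# Hypothetical per-decision channel weights (0 = ignore, 1 = full weight).
DECISION_WEIGHTS = {
    "enterprise-admin-roles": {
        "enterprise_sales_calls": 1.0,
        "enterprise_support": 0.9,
        "small_team_nps": 0.2,
    },
    "free-tier-onboarding": {
        "free_tier_in_app": 1.0,
        "free_tier_support": 0.9,
        "long_tenured_feedback": 0.2,
    },
}

def weighted_count(decision: str, counts: dict[str, int]) -> float:
    """Distinct-user counts per channel, weighted for this decision."""
    weights = DECISION_WEIGHTS[decision]
    return sum(weights.get(channel, 0.0) * n for channel, n in counts.items())
```

Note the same channel would carry a different weight under a different decision key; that per-decision asymmetry is what distinguishes triaged synthesis from averaged noise.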
How do you close the loop with users?
Tell users when their feedback drove change. 'We shipped X this week. Many of you reported Y was painful; X addresses that.' Communication channels: public (release notes, blog posts, public roadmap), targeted (email to users who reported the issue, in-app notifications, customer success outreach), community (forums, social). Without closing the loop, users feel ignored and engagement decays. With closing the loop, future feedback flows because users see their input matter. The over-promising risk: communicating future changes that do not ship erodes trust faster than not promising.
What is the frequency-vs-intensity matrix?
Two dimensions of feedback signal. Frequency: how often the same feedback recurs across users. Intensity: how strongly users feel. Four quadrants. High-frequency, high-intensity: top priority. High-frequency, low-intensity: papercuts (address in batches). Low-frequency, high-intensity: affected users deeply impacted but few; often segment-specific. Low-frequency, low-intensity: noise; capture but do not action. The matrix informs prioritization weight; treating volume alone as priority misses both the intensity dimension and the segment-importance dimension.
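The four quadrants can be sketched as a small classifier. The thresholds are illustrative assumptions; in practice frequency and intensity are assessed during tagging, not computed:

```python
def quadrant(frequency: int, intensity: float,
             freq_threshold: int = 20, intensity_threshold: float = 0.6) -> str:
    """Map a feedback pattern to its frequency-intensity quadrant.
    frequency = distinct users reporting; intensity in [0, 1]."""
    high_freq = frequency >= freq_threshold
    high_int = intensity >= intensity_threshold
    if high_freq and high_int:
        return "top priority"
    if high_freq:
        return "papercuts: address in batches"
    if high_int:
        return "segment-specific: few users, deeply impacted"
    return "noise: capture, do not action"
```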