Flagship Skill · Product analytics setup

The product analytics setup skill.

Instrument product analytics correctly the first time.

A senior PM and analyst's playbook. Event taxonomy design, property patterns, naming conventions, schema versioning, funnel design, cohort definitions, retention measurement, North Star selection, dashboard hygiene, instrumentation debt, and the failure modes that produce data nobody trusts. Built for PMs, growth analysts, founders, and product analysts.

Audience: PMs and analysts setting up product analytics from scratch or auditing an existing setup. Adjacent: data teams owning the warehouse and growth teams owning the measurement strategy.

What this skill is for

Most product analytics setups are inherited mistakes plus dashboard sprawl.

The team launches a feature; instrumentation gets bolted on under deadline pressure; naming drifts; properties are inconsistent; six months later nobody can answer simple questions because the answer depends on which event you trust.

This skill is the discipline that prevents that. It assumes you have answered the strategic questions about what to measure (see the analytics-strategy companion below). It assumes you have a tool connected (Mixpanel, Amplitude, PostHog, or warehouse-native via BigQuery, Snowflake, dbt). The hard part is the systematic execution: naming conventions, property design, schema versioning, funnel construction, cohort definitions, retention measurement.

The output is a tracking plan. A list of events with their properties, the canonical user identity, named cohorts, named funnels, the retention measurement choice, the named North Star, the dashboard owners. The plan lives in code and gets reviewed like any other product spec.
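A tracking plan that "lives in code" can be as simple as a typed list of event specs. A minimal sketch in TypeScript, where the field names and the sample event are illustrative, not the skill's canonical schema:

```typescript
// Illustrative tracking-plan entry; the fields are assumptions about what
// such a plan records, not the skill's canonical schema.
type PropertyType = "string" | "number" | "boolean" | "timestamp";

interface TrackingPlanEvent {
  name: string;                             // snake_case, past tense, object_action
  description: string;                      // the question this event answers
  owner: string;                            // team accountable for the event
  properties: Record<string, PropertyType>; // property name -> declared type
}

const projectCreated: TrackingPlanEvent = {
  name: "project_created",
  description: "How many projects do activated accounts create?",
  owner: "growth",
  properties: {
    project_id: "string",
    template_used: "boolean",
    created_at: "timestamp",
  },
};
```

Because the plan is code, it goes through pull-request review like any other spec, and a CI step can diff it against what production actually emits.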

What is in the skill

Fourteen sections covered in the body.

The SKILL.md spans the full instrumentation lifecycle from event taxonomy design through schema versioning to quarterly audits. Each section names a discipline and a common failure mode.

  1. The instrumentation hierarchy

    Events to properties to identities to cohorts to funnels to retention. Each level depends on the one below being correct. Garbage events produce garbage funnels. Discipline is bottom-up.

  2. Event taxonomy design

    Past tense, object-action format; granular but not redundant. Verbs are events; states are properties. Thirty to fifty events for a typical SaaS product; below twenty is under-instrumented; above one hundred is usually UI noise.

  3. Property design

    Event-level vs user-level. Type discipline: strings for enums, numbers for actual numbers, booleans for actual booleans, timestamps always in ISO 8601. Money in cents, with currency as a separate property.

  4. Naming conventions

    snake_case throughout, past tense, object-action for events. Boolean prefixes is_/has_/can_. Money in the smallest unit. Timestamps with the _at suffix. Pick one convention and enforce it via CI lint.

  5. Schema versioning

    Additive vs breaking changes. _v2 suffix when semantics change. Data contract in code (TypeScript interface or JSON Schema). CI lint rejects schema violations before they hit production.

  6. Funnel design

    Anchor on high-intent events. Document time windows. No auto-firing events as steps. Break long funnels into two or three shorter ones. Common shapes: activation, conversion, engagement, feature adoption.

  7. Cohort definitions

    Acquisition, behavioral, property, combined. Define cohorts in code (or as saved definitions in the tool) for reusability. Version them when criteria change. Compare cohorts apples-to-apples at matched ages.

  8. Retention measurement

    Bracket retention beats N-day retention for stability. A steep early drop followed by a plateau is normal. A low but flat curve may indicate a power-user product, not a problem. Compare against business reality.

  9. North Star and supporting metrics

    NSM rules: reflects user value, captures the action that grows the business, measurable consistently, hard to game. One NSM, three to five inputs, five to ten health metrics. Bad NSMs: signups, revenue, DAU.

  10. The trustable dashboard principle

    A dashboard is trustable when each metric has a clear definition, a known data source, known caveats, and is reproducible. The stale-dashboard failure mode: built two years ago, schema changed, silently broken ever since.

  11. Instrumentation debt

    Skipping instrumentation saves an hour; six months later it costs 20 hours to retroactively fill the gap, plus an untrackable cost in lost decisions. Every PR includes instrumentation. Quarterly schema audits.

  12. Common failures

    Twelve patterns, including: cannot trust the data, funnels that report false drop-off, retention-curve panic, tool vs warehouse mismatch, slow dashboards, MAU disagreement, button-click noise, name-versioning drift, the iOS attribution gap, and simple questions that cannot be answered.

  13. The framework: 12 considerations

    Event taxonomy, property design, naming conventions, schema versioning, identity stitching, funnel design, cohort definitions, retention measurement, North Star, dashboard hygiene, instrumentation debt, single source of truth.

  14. When in doubt, instrument less

    Most setups are over-instrumented, not under-instrumented. Tracking every button click drowns the signal. Default to less. The data you do not have can be added later; over-collected data costs you forever in performance and signal-to-noise.
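As a sketch, the taxonomy, property, and naming rules above combine into a single event payload like this (the checkout_completed event and its fields are illustrative, not part of the skill's canonical spec):

```typescript
// Hypothetical checkout_completed event following the conventions above:
// snake_case, past tense, object_action; money in cents with currency kept
// separate; is_/has_ boolean prefixes; ISO 8601 timestamps with an _at suffix.
interface CheckoutCompleted {
  event: "checkout_completed";
  properties: {
    order_id: string;
    amount_cents: number;       // money in the smallest unit, never a dollar float
    currency: string;           // e.g. "USD", separate from the amount
    is_first_purchase: boolean; // boolean prefix convention
    completed_at: string;       // ISO 8601 timestamp, _at suffix
  };
}

const sample: CheckoutCompleted = {
  event: "checkout_completed",
  properties: {
    order_id: "ord_123",
    amount_cents: 4999,         // $49.99
    currency: "USD",
    is_first_purchase: true,
    completed_at: "2025-06-01T12:00:00Z",
  },
};
```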

Reference files

Nine references that go alongside the SKILL.md.

The references hold the canonical event spec, property patterns, naming style guide, versioning patterns, funnel templates, cohort templates, NSM selection guide, audit playbook, and pattern catalog of failures. Each is a self-contained doc the team can lift into a project without reading the rest.

  • references/event-taxonomy-template.md

    Canonical event spec for typical SaaS: account events, user events, activation events, engagement events, conversion events, retention events. For each: when it fires, who fires it, required properties, common pitfalls.

  • references/property-design-patterns.md

    Event-level vs user-level patterns. Type discipline: strings, numbers, booleans, timestamps, arrays. Worked example: product_viewed event with right vs wrong design side-by-side.

  • references/naming-convention-reference.md

    Complete style guide. snake_case, past tense, object-action. Boolean prefix is_/has_/can_. Money in smallest unit. Timestamp suffix _at. Period suffix _days. Do/don't side-by-side table. Enforcement via CI lint.

  • references/schema-versioning-patterns.md

    Additive vs breaking changes. _v2 suffix transition pattern with 90-day default. Data contract in TypeScript interface or JSON Schema. Three migration patterns: rename, split, merge. CI lint rules.

  • references/funnel-design-templates.md

    Four funnel shapes: activation, conversion, engagement, feature adoption. For each: standard steps, time window, anchor selection, drop-off interpretation. Six common funnel design mistakes.

  • references/cohort-definition-patterns.md

    Acquisition, behavioral, property, combined cohorts. SQL examples for warehouse-native plus tool-specific examples for Mixpanel, Amplitude, PostHog. Three cohort discipline rules. When to introduce vs deprecate.

  • references/north-star-metric-selection.md

    Four NSM rules. Examples by product type (engagement-driven, transaction-driven, conversion-driven, content-driven). Six anti-patterns. Supporting metrics framework: 1 NSM + 3 to 5 inputs + 5 to 10 health metrics. Migration patterns when changing NSM.

  • references/instrumentation-audit-checklist.md

    Quarterly playbook with five categories: schema review, volume sanity check, dashboard freshness review, owner assignment audit, deprecation candidates. Each with steps, what to check, action items. The audit deliverable format.

  • references/common-failures.md

    Twelve failure patterns with name, symptom, root cause, fix, prevention. Cannot-trust-data, false-funnel-drop-off, retention-curve-panic, tool-vs-warehouse-mismatch, slow-dashboards, MAU-disagreement, button-click-noise, name-versioning-drift, iOS-attribution-gap, simple-question-cannot-answer, identity-stitching-mismatch, silent-schema-change.
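Several of these references mention enforcing the naming conventions via CI lint. A minimal sketch of such a check, assuming the rules described above (an approximation, not the skill's actual lint configuration):

```typescript
// Minimal naming lint: snake_case plus the suffix conventions described above.
const SNAKE_CASE = /^[a-z][a-z0-9]*(_[a-z0-9]+)*$/;

function lintName(name: string): string[] {
  const errors: string[] = [];
  if (!SNAKE_CASE.test(name)) {
    errors.push(`${name}: not snake_case`);
  }
  if (name.endsWith("_date") || name.endsWith("_time")) {
    errors.push(`${name}: timestamp properties should use the _at suffix`);
  }
  return errors;
}

// A CI step would run lintName over every event and property in the
// tracking plan and fail the build on any errors.
console.log(lintName("createdAt"));   // flags camelCase
console.log(lintName("created_at")); // passes, returns []
```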

Browse all reference files on GitHub

Bridge to the strategic companion

Strategy first; execution second.

Two skills cover product analytics. Reading this skill without the strategic companion is like writing code without a spec; reading the strategic companion without this skill is like writing a spec without a team to build it.

analytics-strategy covers the strategic layer: what to measure and why, KPI hierarchy, dashboard architecture, attribution models, business-level taxonomy. Read it first to decide what matters.

product-analytics-setup (this skill) covers the execution layer: how to instrument the product correctly. Event taxonomy, property design, naming conventions, schema versioning, funnel construction, cohort definitions, retention measurement. Read it second to instrument what the strategy decided.

The two compose. Most teams that have measurement problems have one without the other: a team that knows what to measure but cannot trust the data, or a team with clean data but no clear measurement priorities. Both skills together produce trustable data plus clear priorities.

Where this skill fits in the track

The first skill in the PM gap-closing track.

The PM gap-closing track covers three skills focused on the data and analytics work product teams actually need to ship. product-analytics-setup is the foundation; the other two follow in subsequent dispatches.

product-analytics-setup (this skill) covers instrumentation execution: how to set up trackable product analytics that produce trustable answers.

data-warehouse-experimentation covers the warehouse layer of experimentation: how to run experiments against warehouse data with the right statistical rigor and audit trail. Its landing page goes live when the SKILL.md ships.

feature-launch-playbook covers the launch lifecycle: pre-launch readiness, staged rollout, monitoring, decision rules, post-launch retrospective. Its landing page goes live when the SKILL.md ships.

Together the three close the PM data gap: instrument correctly, experiment rigorously, launch with a plan. The /integrations catalog covers the platform-specific tactical layer underneath each.

Open source under MIT

Read the SKILL.md on GitHub.

The skill source lives in the rampstackco/claude-skills repository alongside dozens of other skills covering the full lifecycle of brand and product work. MIT licensed.

Frequently asked questions

How is this different from analytics-strategy?
analytics-strategy (Growth category) is strategic: what to measure and why, KPI hierarchy, dashboard architecture, attribution models. product-analytics-setup (Product category, this skill) is execution: how to actually instrument the product correctly. The two compose. Read analytics-strategy first to decide what matters; read this skill to instrument it.
Why do most teams over-instrument?
Tracking every button click feels safer than deciding what matters. Six months later the team has 800 events, dashboards take 30 seconds to load, and the signal-to-noise ratio has collapsed. The discipline of saying 'we do not need to track that' is harder than 'let us track that just in case.' The closing argument names this: default to less.
What is instrumentation debt?
The compounding cost of cutting corners during instrumentation. Ship a feature without events and save an hour; six months later, retroactive instrumentation costs 20 hours plus the decisions lost during the gap. Like technical debt, it is real and it compounds. The discipline is including instrumentation in every PR that adds functionality, and running quarterly schema audits.
Why version events with the _v2 pattern?
Schema changes are inevitable. Renaming an event without versioning breaks every dashboard and saved query that references the old name. The _v2 pattern lets the new event ship alongside the old one for 90 days; dashboards migrate at their own pace; old data remains queryable for historical analysis. The discipline applies only to breaking changes (rename, remove, change type, change semantics); additive changes can ship freely.
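The dual-write window that answer describes can be sketched as a wrapper around the tracking call (the event names, the cutover date, and the track() signature are illustrative stand-ins, not the skill's prescribed API):

```typescript
// During the ~90-day transition, fire both the versioned and the legacy
// event so old dashboards keep working while new ones migrate.
type Track = (event: string, props: Record<string, unknown>) => void;

const LEGACY_CUTOFF = new Date("2026-01-01"); // hypothetical retirement date

function trackPlanChanged(
  track: Track,
  props: { plan_id: string },
  now: Date = new Date()
): void {
  track("plan_changed_v2", props); // new semantics
  if (now < LEGACY_CUTOFF) {
    track("plan_changed", props);  // legacy event, dual-written until cutoff
  }
}
```

After the cutoff, only the _v2 event fires, and the legacy name can be marked deprecated in the tracking plan while its historical data stays queryable.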
How does this pair with the integrations catalog?
The skill is platform-agnostic for the discipline. The /integrations pages cover the platform-specific MCP commands and data models. Open the skill for the strategic instrumentation decisions; open the integration page for the platform-specific tactics (Mixpanel event configuration, BigQuery schema discovery, dbt Semantic Layer setup, Hex thread management).
What is the right number of events to track?
Thirty to fifty events is the sweet spot for a typical SaaS product. Below twenty is under-instrumented; above one hundred is usually UI noise. The number matters less than the discipline behind it: every event maps to a question the team needs to answer, every event has a stable name, every event fires consistently. A tight 30-event setup with clear semantics outperforms a sprawling 200-event setup.