Walkthrough · Analytics setup
Set up product analytics from scratch
You have a product but no analytics, or analytics that nobody trusts. You need to lay a measurement foundation that product decisions can rest on without re-instrumenting everything in 12 months.
- PM
- Growth
- Engineering
Skill cluster
The skills this walkthrough orchestrates.
Each skill in the catalog is a methodology unto itself. Walkthroughs show how multiple skills compose for a specific use case. Click a card to read the skill in detail.
Skill
product-analytics-setup
Defines the event taxonomy, dashboards, and instrumentation discipline. Anchor skill of the workflow.
Skill
data-warehouse-experimentation
Sets up the warehouse layer for advanced analysis beyond out-of-the-box analytics.
Skill
discovery-research-synthesis
Surfaces the product decisions analytics needs to support; upstream context for the taxonomy.
Skill
pm-spec-writing
Translates analytics requirements into specs engineering can implement against.
Skill
experiment-design
Downstream user of the analytics infrastructure; events become experiment metrics once the foundation is solid.
Skill
integration-orchestrator
Coordinates the cross-team work across PM, engineering, data, and stakeholders.
Orchestration shape
Linear foundation: every later stage depends on the stages before it.
Analytics setup is foundation work. Stage 1 frames the questions; the spec answers the questions; instrumentation makes the spec real; activation lets the data drive decisions. Skipping a stage produces analytics nobody trusts because the foundation underneath was never set.
Stage 1
Frame what we need to know
Surface the product decisions analytics is meant to support. Without this framing, instrumentation drifts toward what is easy to track rather than what the team needs.
- discovery-research-synthesis
Output
Decision questions inventory
Stage 2
Spec the events
Translate the questions into an event taxonomy. Each event has a defined trigger, properties, and ownership. Versioned so future changes do not break existing queries.
- pm-spec-writing
- product-analytics-setup
Output
Event taxonomy spec
Stage 3
Instrument and ship
Engineering implements; PM and data verify. Cross-team coordination matters because surfaces span web, mobile, and server-side. Each event QA'd before it counts as live.
- integration-orchestrator
- product-analytics-setup
Output
Instrumentation checklist + verified dashboards
Stage 4
Activate
Warehouse layer comes online for advanced analysis. Experiment design becomes possible because the events the test depends on are now reliable.
- data-warehouse-experimentation
- experiment-design
Output
Warehouse schema + first experiment running
Foundation note
Everything that comes next depends on this. Roadmap planning, experimentation, OKR scoring, customer success metrics, retention analysis. A weak foundation produces analytics work that gets redone every 12 months because nothing underneath was ever solid enough to build on.
Artifacts at each stage
What the workflow produces, illustrated.
Five artifacts span the four foundation stages: a decision questions inventory, an event taxonomy spec, an instrumentation checklist, the core dashboard, and a warehouse schema. Together they describe what an agent hands off between stages and what the team operates against once the foundation is live.
Stage 1 output
Decision questions inventory
Discovery-research-synthesis surfaces the product decisions analytics is supposed to support. The inventory is the input the event taxonomy answers to.
Decision questions inventory · produced by discovery-research-synthesis
What analytics needs to answer
7 product decisions surfaced. Today: 0 fully answerable, 3 partial, 4 with no reliable signal.
| Question | Data type | Priority | Today |
|---|---|---|---|
| Which onboarding flow drives higher 30-day retention? | Cohort comparison | High | No signal |
| Where do users abandon the free trial? | Funnel | High | Partial |
| What predicts upgrade from free to paid within 60 days? | Behavioral cohort | High | No signal |
| Which feature usage correlates with renewal? | Cohort + correlation | Medium | No signal |
| How does power-user activity differ from at-risk users? | Segmentation | Medium | Partial |
| What is the daily/weekly/monthly active user count? | Time-series | High | Partial |
| Which referral channels produce the highest LTV cohorts? | Cohort + attribution | Medium | No signal |
Read: Onboarding cohort comparison, trial-abandonment funnel, and upgrade-predictor cohorts are the high-priority gaps. Event taxonomy in Stage 2 must support these specifically rather than tracking what is easy to instrument.
Stage 2 output
Event taxonomy spec
pm-spec-writing and product-analytics-setup co-produce the taxonomy: events, properties, triggers, surfaces, naming conventions, version. Engineering implements against this spec; data verifies against this spec.
Event taxonomy · produced by pm-spec-writing + product-analytics-setup
v1.0 · Naming convention
snake_case · noun + past-tense verb · subject in name (account, user, subscription) · outcome-focused, not implementation-focused
account_created
Server · Trigger: Server-side, after account row insert
Properties
- plan
- source
- referrer
- company_size
onboarding_step_completed
Web + iOS + Android · Trigger: Frontend, on step completion event
Properties
- step_id
- step_index
- time_on_step_seconds
- skipped
feature_used
Web + iOS + Android · Trigger: Frontend, on feature interaction
Properties
- feature_id
- feature_category
- context
trial_started
Server · Trigger: Server-side, on trial activation
Properties
- plan_id
- trial_length_days
subscription_changed
Server · Trigger: Server-side, on Stripe webhook
Properties
- from_plan
- to_plan
- change_type
- monthly_revenue_delta
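One way engineering can implement against this spec without drifting from it is to encode the taxonomy as types, so an unknown event name or a missing property fails at compile time. A minimal TypeScript sketch, assuming a hypothetical `track` wrapper over whatever transport is in use; property types for events other than onboarding_step_completed are assumptions, since the spec lists property names only:

```typescript
// Event taxonomy v1.0 as a discriminated union. Property shapes for
// onboarding_step_completed follow the example payload below; types
// for the other events are assumed, since the spec lists names only.
type TaxonomyEvent =
  | { event: "account_created";
      properties: { plan: string; source: string; referrer: string; company_size: string } }
  | { event: "onboarding_step_completed";
      properties: { step_id: string; step_index: number; time_on_step_seconds: number; skipped: boolean } }
  | { event: "feature_used";
      properties: { feature_id: string; feature_category: string; context: string } };
// trial_started and subscription_changed elided for brevity.

// Hypothetical wrapper over whatever transport is in use (Segment,
// an in-house collector, etc.). Only spec'd events type-check.
function track(e: TaxonomyEvent, userId: string): void {
  const payload = { ...e, user_id: userId, timestamp: new Date().toISOString() };
  console.log(JSON.stringify(payload)); // stand-in for the real send call
}

track(
  { event: "onboarding_step_completed",
    properties: { step_id: "team_invite", step_index: 3, time_on_step_seconds: 47, skipped: false } },
  "u_42b9"
);
```

The logged payload matches the shape of the example payload below.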
Example payload

```json
{
  "event": "onboarding_step_completed",
  "user_id": "u_42b9",
  "anonymous_id": "anon_8e1c",
  "timestamp": "2026-03-14T14:22:08Z",
  "properties": {
    "step_id": "team_invite",
    "step_index": 3,
    "time_on_step_seconds": 47,
    "skipped": false
  }
}
```

Stage 3 output
Instrumentation checklist
integration-orchestrator and product-analytics-setup track the events × surfaces grid through implementation and QA. No event counts as live until the QA cell turns green.
Instrumentation checklist · produced by integration-orchestrator + product-analytics-setup
Events × surfaces, with an owner per event and QA verification before an event counts as live.
| Event | Web | iOS | Android | Server | Owner | QA |
|---|---|---|---|---|---|---|
| account_created | ✓ instrumented | ✓ instrumented | ✓ instrumented | ✓ instrumented | Backend | passed |
| onboarding_step_completed | ✓ instrumented | in progress | in progress | n/a | Frontend | pending |
| feature_used | ✓ instrumented | ✓ instrumented | in progress | n/a | Frontend | pending |
| trial_started | n/a | n/a | n/a | ✓ instrumented | Backend | passed |
| subscription_changed | n/a | n/a | n/a | ✓ instrumented | Backend | passed |
Status: 9 of 12 applicable cells instrumented; 3 cells in progress on mobile surfaces; 8 cells n/a. QA sign-off pending on onboarding and feature events; account, trial, and subscription events verified.
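The QA column can be gated by an automated check as well as manual review. A sketch of one such check, assuming sample payloads captured from each surface are replayed against the spec; the `specs` map covers two events and the validation rules are illustrative:

```typescript
// Check a captured payload against the v1.0 spec: known event name,
// required properties present, no unspec'd extras. Illustrative; real
// QA would also verify the trigger fired at the right moment.
const specs: Record<string, string[]> = {
  account_created: ["plan", "source", "referrer", "company_size"],
  onboarding_step_completed: ["step_id", "step_index", "time_on_step_seconds", "skipped"],
  // remaining events omitted for brevity
};

function qaCheck(payload: { event: string; properties: Record<string, unknown> }): string[] {
  const expected = specs[payload.event];
  if (!expected) return [`unknown event: ${payload.event}`];
  const issues: string[] = [];
  for (const key of expected)
    if (!(key in payload.properties)) issues.push(`missing property: ${key}`);
  for (const key of Object.keys(payload.properties))
    if (!expected.includes(key)) issues.push(`unspec'd property: ${key}`);
  return issues; // empty = payload conforms, cell can flip to live
}
```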
Stage 3 output
Core dashboard
Once events are live, product-analytics-setup builds the core dashboard: KPI tiles, activation funnel, cohort retention. The dashboard is what the team checks daily and what the data feeds into roadmap and OKR conversations.
Core dashboard · produced by product-analytics-setup
Last 30 days
| KPI | Value | vs prior period |
|---|---|---|
| Active users (DAU) | 8,420 | +4.2% |
| Onboarding completion | 62% | +1.8pp |
| Trial-to-paid conversion | 11.4% | -0.6pp |
| 30-day retention | 47% | +2.1pp |
Activation funnel
| Step | Users | % of accounts created |
|---|---|---|
| Account created | 12,340 | 100% |
| Onboarding step 1 | 11,220 | 91% |
| Onboarding step 2 | 10,010 | 81% |
| First key action | 7,720 | 63% |
| Trial value moment | 5,640 | 46% |
| Upgraded to paid | 1,290 | 11% |
Cohort retention
| Cohort | Day 0 | Day 7 | Day 14 | Day 30 |
|---|---|---|---|---|
| W-4 | 100% | 68% | 54% | 47% |
| W-3 | 100% | 71% | 58% | 49% |
| W-2 | 100% | 73% | 61% | - |
| W-1 | 100% | 74% | - | - |
Stage 4 output
Warehouse schema
data-warehouse-experimentation activates the warehouse layer: source-of-truth tables plus a transformation layer that materializes the cohorts, funnels, and retention curves the team queries repeatedly. The warehouse is what makes experimentation reliable at scale.
Warehouse schema · produced by data-warehouse-experimentation
Source-of-truth tables in the warehouse: events, users, accounts, sessions. The transformation layer (dbt models) materializes derived tables (cohorts, funnels, retention) from these.
Transformation note: The warehouse holds source-of-truth events. Derived tables (cohort_membership, funnel_progression, retention_curves) are dbt models that materialize the analyses the team runs repeatedly. The analytics platform reads from the source events; advanced analysis and experimentation read from the dbt layer.
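The dbt models themselves are SQL, but the logic they materialize is compact. A sketch of what a `retention_curves`-style model computes, written in TypeScript for continuity with the other examples; field names follow the example payload, and the single-cohort, day-window retention definition is one choice among several:

```typescript
// What a retention_curves-style model materializes: the share of
// signed-up users seen again in the day-7 / 14 / 30 windows. Field
// names follow the example payload; cohorting by signup week is
// dropped here for brevity (all signups treated as one cohort).
interface RawEvent { user_id: string; event: string; timestamp: string }

function retentionCurve(events: RawEvent[], days = [7, 14, 30]): Record<number, number> {
  const DAY = 86_400_000; // one day in ms
  const signupAt = new Map<string, number>();
  for (const e of events)
    if (e.event === "account_created") signupAt.set(e.user_id, Date.parse(e.timestamp));

  const curve: Record<number, number> = {};
  for (const d of days) {
    let retained = 0;
    for (const [user, t0] of signupAt) {
      // day-N retention: any event inside the [N, N+1) day window
      const hit = events.some(e => {
        const dt = Date.parse(e.timestamp) - t0;
        return e.user_id === user && dt >= d * DAY && dt < (d + 1) * DAY;
      });
      if (hit) retained++;
    }
    curve[d] = signupAt.size ? retained / signupAt.size : 0;
  }
  return curve;
}
```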
How a Claude agent runs this
The agent transcript, illustrative.
A Claude agent loaded with these skills can chain through the orchestration. The transcript below illustrates the flow.
User
We need analytics on our SaaS product but don't know where to start. The marketing team has GA, but product has nothing reliable. What do we instrument first?
Agent
Starting upstream of instrumentation. We need to surface what analytics is supposed to answer before we pick events.
Skill
discovery-research-synthesis
Output
Decision questions inventory: 7 product decisions surfaced. 3 high-priority gaps (onboarding cohort comparison, trial-abandonment funnel, upgrade-predictor cohort). The taxonomy needs to support these specifically rather than tracking what is easy.
Agent
Translating the questions into an event taxonomy with engineering review.
Skill
pm-spec-writing
Output
Spec drafted. 12 events across account lifecycle, onboarding, feature usage, and subscription state. Properties defined per event. Naming convention: snake_case, noun + past-tense verb. Schema versioned at v1.0. Engineering and data both signed off.
Agent
Coordinating implementation across web, mobile, and server-side.
Skill
integration-orchestrator
Output
Workstreams set: backend owns server-side events; web team owns client events; mobile teams own iOS and Android. Weekly check-ins; QA gates per surface; staged rollout in test environments before production.
Agent
Verifying instrumentation against the spec; building the core dashboard.
Skill
product-analytics-setup
Output
9 of 12 applicable cells live (8 n/a; 3 in progress on mobile). Core dashboard built: 4 KPI tiles, activation funnel, cohort retention table. Trial-abandonment funnel now answerable. Events feed validated against sample sessions.
User
Foundation looks solid. What do we unlock next?
Agent
Activating the warehouse layer and the first experiment.
Skill
data-warehouse-experimentation
Output
Warehouse schema live: events, users, accounts, sessions tables. dbt models for cohort_membership, funnel_progression, retention_curves. Source events sync nightly; analytics platform and warehouse stay aligned.
Agent
First experiment now runs against the foundation.
Skill
experiment-design
Output
First test: onboarding flow A vs B. Sample size based on the now-reliable activation rate baseline. Guardrails on retention. Decision rule documented. The test runs; analytics infrastructure supports it without rebuild.
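The sample-size step in that last output is ordinary power analysis. A sketch under standard assumptions (two-sided alpha of 0.05, 80% power, normal approximation for two proportions); the 62% baseline matches the onboarding-completion tile above, and the +5pp target effect is an arbitrary example:

```typescript
// Per-arm sample size for comparing two proportions (normal approx).
// zAlpha = 1.96 for two-sided alpha 0.05; zBeta = 0.84 for 80% power.
function sampleSizePerArm(p1: number, p2: number, zAlpha = 1.96, zBeta = 0.84): number {
  const pBar = (p1 + p2) / 2;
  const num =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((num * num) / ((p1 - p2) ** 2));
}

// 62% baseline onboarding completion, detecting a +5pp lift:
console.log(sampleSizePerArm(0.62, 0.67)); // ≈ 1,435 users per arm
```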
Variations
Three tiers of the same workflow at different scales.
The full skill cluster fits a flagship version of the workflow. Most teams need lighter cuts more often. The three tiers below describe when each cut fits and which skills carry the work.
Tier 1
Enterprise setup
Full stack with analytics platform + warehouse + dbt transformation layer + experimentation platform integration. Dedicated analytics owner. Cross-functional governance. Built to support multi-team analysis at scale.
Time / cost
3-month setup; PM + dedicated analytics engineer + cross-team coordination
Skills involved
- discovery-research-synthesis
- pm-spec-writing
- product-analytics-setup
- integration-orchestrator
- data-warehouse-experimentation
- experiment-design
Output shape
Decision-questions inventory + 20-30 event taxonomy + warehouse schema + dbt model layer + dashboards + experimentation platform integration + governance documentation.
Tier 2
Standard setup
Analytics platform + warehouse layer for cross-source analysis. PM-led with engineering and data support. Working default for growing SaaS products.
Time / cost
1-month setup; PM + engineer + part-time data engineer
Skills involved
- discovery-research-synthesis
- pm-spec-writing
- product-analytics-setup
- integration-orchestrator
- data-warehouse-experimentation
Output shape
Decision-questions inventory + 12-15 event taxonomy + warehouse sync + core dashboard + first experiment running.
Tier 3
Lightweight setup
Out-of-the-box analytics platform only. No warehouse layer. Suits small teams that need foundation fast and can revisit warehouse decisions later.
Time / cost
1-week setup; PM + 1 engineer
Skills involved
- pm-spec-writing
- product-analytics-setup
- integration-orchestrator
Output shape
Lightweight 8-12 event taxonomy + core dashboard + funnel views. Warehouse and experimentation deferred.
Frequently asked
Questions this walkthrough surfaces.
- Do we need a data warehouse if we have a good analytics platform?
- Eventually yes; immediately no. A strong analytics platform answers most product questions out of the box: funnels, cohorts, retention, segmentation. The warehouse layer earns its place when teams need cross-source analysis (analytics + billing + support + custom application data), advanced statistical work (proper experimentation analysis with CUPED, ratio metrics with the delta method; a sketch of the CUPED adjustment follows this FAQ), or audit-grade source-of-truth data the analytics platform alone cannot guarantee. The Standard variation in this walkthrough adds the warehouse; the Lightweight variation defers it. Most growing SaaS products eventually adopt the warehouse layer; few regret doing so earlier.
- How do we handle existing tracking that nobody trusts?
- Treat it as background context, not foundation. Document what is currently instrumented (often a mix of marketing scripts, partial product events, and old library leftovers). Decide what to migrate to the new taxonomy and what to deprecate. Avoid the temptation to keep the old events alongside the new ones in parallel; that produces dual-source confusion that erodes trust further. Plan a clean cutover with a clear migration date; communicate it; old events stop firing on the cutover date.
- What's the right event count to start with?
- Start with the smallest taxonomy that answers the highest-priority decision questions. For most SaaS products that lands at 8-15 events covering account lifecycle (created, activated, upgraded, churned), core feature usage (the 3-5 features that matter most), and key business moments (trial started, payment succeeded, support contacted). Resist the temptation to instrument everything at v1; broad instrumentation produces low-quality data that nobody trusts. Add events as questions surface that the existing taxonomy cannot answer.
- How do we coordinate between product, engineering, and data?
- Integration-orchestrator covers the discipline. Three patterns work. Single PM owner who specs the taxonomy and reviews each event before it counts as live. Shared spec document that engineering implements against and data verifies. Cross-functional review at each stage gate (taxonomy approved -> instrumentation begins; QA passed -> events count as live; warehouse layer approved -> advanced analysis begins). The failure mode to avoid: parallel work without checkpoints; engineering ships events that do not match the spec, and the team discovers it three months later when an analysis fails.
- What if our team is small? Do we still need this much structure?
- The structure scales down cleanly. Lightweight variation is one engineer plus PM, one week, out-of-box analytics, 8-12 events. Even a 5-person team benefits from the decision-questions framing because it surfaces what the analytics needs to answer; without that framing, small teams instrument what is convenient and discover six months later that the data does not support the decisions they actually need to make.
- How does this relate to the product-analytics-setup skill?
- Product-analytics-setup is the methodology for setting up analytics well: event taxonomy, property design, naming conventions, schema versioning, funnel design, cohort definitions, retention measurement, North Star selection, dashboard hygiene. This walkthrough is the broader orchestration: discovery-research-synthesis surfaces what the analytics needs to answer, pm-spec-writing translates into specs, integration-orchestrator coordinates cross-team work, product-analytics-setup runs through the methodology, data-warehouse-experimentation extends to advanced analysis, and experiment-design becomes possible once the foundation is solid. The skill is one tool; the walkthrough is the workflow that uses six tools together.
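As a postscript to the warehouse question above: the CUPED adjustment referenced there is small enough to sketch. It subtracts the portion of the experiment metric explained by a pre-experiment covariate (typically each user's prior activity, pulled from the warehouse), shrinking variance without biasing the treatment comparison; names here are illustrative:

```typescript
// CUPED: y_adj[i] = y[i] - theta * (x[i] - mean(x)), where x is a
// pre-experiment covariate and theta = cov(x, y) / var(x). The mean
// is unchanged in expectation; variance shrinks by the squared
// correlation between x and y, so smaller effects become detectable.
function cuped(y: number[], x: number[]): number[] {
  const mean = (v: number[]) => v.reduce((a, b) => a + b, 0) / v.length;
  const mx = mean(x);
  const my = mean(y);
  let cov = 0;
  let varX = 0;
  for (let i = 0; i < x.length; i++) {
    cov += (x[i] - mx) * (y[i] - my);
    varX += (x[i] - mx) ** 2;
  }
  const theta = varX === 0 ? 0 : cov / varX;
  return y.map((yi, i) => yi - theta * (x[i] - mx));
}
```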
Metrics shown are illustrative. Actual results vary by platform, methodology, and traffic volume.