Flagship Skill · Ads performance analytics

The ads performance analytics skill.

Read paid media dashboards without fooling yourself.

A data-team-mentor's playbook. Attribution models, platform reporting quirks, multi-platform reconciliation, ROAS vs LTV horizon traps, cohort analysis, statistical noise, incrementality testing, geo experiments, platform self-attribution bias, and the interpretation failures that produce expensive lessons. Built for marketers, growth analysts, agency analysts, and founders evaluating paid media reports.

Audience: marketers and growth analysts about to make a budget decision based on dashboard numbers. Adjacent: data teams owning attribution and finance teams evaluating channel ROI.

What this skill is for

The dashboard is the moment of truth for paid media decisions.

The numbers on a paid media dashboard determine whether you scale, hold, or kill. They also expose every platform's self-attribution bias, every modeled conversion shortcut, every cross-platform double-count. Most "scale this campaign" decisions trace back to misreading the dashboard.

This skill is the discipline that prevents misreading. It assumes the campaign was strategically sound and the creative was tested properly. The hard part is knowing what each number actually means, what it does not, and how to reconcile platform-reported metrics with the truth in your warehouse.

The output is a defensible decision: scale, hold, or kill, with a written rationale that survives scrutiny in a room of skeptical stakeholders six months later. The rationale names the attribution model, the conversion window, the cohort horizon, the incremental rate, and the warehouse-attributed CAC. None of the numbers are platform-reported alone.
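
To make that concrete, here is a minimal sketch of the fields such a decision record might capture. The structure and field names are illustrative, not a format the skill prescribes:

```python
from dataclasses import dataclass

@dataclass
class PaidMediaDecision:
    """One scale/hold/kill call, with the context that makes it defensible later."""
    campaign: str
    decision: str               # "scale", "hold", or "kill"
    attribution_model: str      # e.g. "last-click", "DDA", "MMM"
    conversion_window: str      # e.g. "7-day click"
    cohort_horizon_months: int  # LTV horizon the rationale uses
    incremental_rate: float     # from the most recent incrementality test
    warehouse_cac: float        # warehouse-attributed, never platform-reported alone
    rationale: str              # the written argument that survives scrutiny
```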

What is in the skill

Thirteen sections covered in the body.

The SKILL.md spans the full result-interpretation lifecycle from reading the panel through reconciling against the warehouse to running incrementality tests. Each section names a common interpretation failure and the discipline that prevents it.

  1. The result panel

    What every trustworthy paid media platform should expose: spend / impressions / clicks, conversions with definition and window visible, attribution breakdown, frequency, audience saturation, time series, cost metrics with math defined, conversion path data, filters and exports. Anything missing is a signal to treat numbers with extra skepticism.

  2. Platform-reported vs reality

    Conversion windows, view-through attribution, modeled conversions, self-attribution bias. Each platform's dashboard is optimized to make the platform look effective; this is not a moral failing, it is a structural incentive. Reconcile against the warehouse.

  3. Attribution models in practice

    Last-click, first-click, linear, time-decay, U-shaped, data-driven attribution, marketing-mix modeling. None is right; all are approximations. Pick one and read the others as sanity checks. By stage: last-click for early, DDA for mid, MMM for mature.

  4. Multi-platform reconciliation

    Sum-of-platform-reports is always greater than reality because every platform claims credit for conversions other platforms also touched. The pattern: trust the warehouse for totals, trust platforms for relative ranking, never trust platform sums. A sketch of this check follows the list.

  5. ROAS vs LTV horizon

    ROAS is short-term revenue per spend. LTV is long-term customer lifetime revenue. Decisions on ROAS can be wrong if LTV varies by channel. Use cohort-based LTV by acquisition channel; compare on payback period or LTV-CAC ratio, not raw ROAS.

  6. Cohort vs daily metrics

    Daily tells you what happened today. Cohort tells you whether today's customers are different from last month's. Three cuts: by acquisition month, by channel, by campaign. Two consecutive months of cohort LTV decline is the signal to act.

  7. Statistical noise

    Most week-over-week changes are noise. Day-of-week, holiday, seasonal, and exogenous variance compound. The signal threshold: a 30%+ change in metrics that naturally vary 10 to 20%. Below that, pre-commit a test window before drawing conclusions.

  8. Incrementality testing

    What would have happened without the ad. Geo holdout, ghost bidding, conversion lift studies, PSA tests. Most paid media is 30 to 70% incremental. Branded search 5 to 20%, retargeting 20 to 40%, prospecting 50 to 90%. Run quarterly on highest-spend channels.

  9. Geo experiments and holdouts

    Geo holdout (turn off in one region, measure baseline). Geo lift (scale 2x, see if conversions scale linearly). Switchback (alternate weeks). Pre-and-post (weak; confounded). The right setup: matched markets, statistical power upfront, pre-committed analysis window.

  10. Platform self-attribution bias

    Each platform's pixel fires on conversion; the platform claims credit. Platforms have no incentive to underreport. Detection: when platform-reported exceeds warehouse by 30%+, you have heavy double-counting. The fix is warehouse as canonical plus quarterly incrementality tests.

  11. Common interpretation failures

    Twelve patterns: ROAS drop on noise, platform vs warehouse mismatch, PMax branded harvest, week-1 retargeting surge, marginal A/B winners, projected LTV mistakes, frequency masked by free conversions, last-click bias, scaling saturation, brand-on-direct-ROAS, mix shift, agency reporting unverified.

  12. The framework: 12 considerations

    Result panel completeness, platform vs reality, attribution model, multi-platform reconciliation, ROAS vs LTV, cohort vs daily, statistical noise, incrementality, geo testing, self-attribution bias, decision rule, single source of truth. Output: scale, hold, or kill.

  13. The courage to call it incremental zero

    Most accounts have at least one campaign that looks profitable on the platform but is incremental zero in the warehouse. The discipline of finding and killing it is the highest-impact paid media analytics work. The platform will not tell you; the warehouse plus quarterly incrementality tests will.
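
To make the reconciliation pattern in sections 04 and 10 concrete, here is a minimal sketch of the check. The numbers and the helper are illustrative; the 1.3x flag follows section 10's 30%+ detection rule:

```python
def reconcile(platform_conversions: dict[str, int], warehouse_total: int) -> None:
    """Compare platform-claimed conversions against the warehouse's neutral count."""
    claimed = sum(platform_conversions.values())
    ratio = claimed / warehouse_total
    print(f"Platforms claim {claimed}, warehouse shows {warehouse_total} ({ratio:.2f}x)")
    if ratio > 1.3:  # platform-reported exceeds warehouse by 30%+
        print("Heavy double-counting: run incrementality tests before scaling.")
    # Platforms remain usable for relative ranking even when totals are inflated.
    for name, conv in sorted(platform_conversions.items(), key=lambda kv: -kv[1]):
        print(f"  {name}: {conv} claimed ({conv / claimed:.0%} of platform-claimed total)")

# Illustrative numbers only: 910 claimed vs 600 real is a 1.52x gap.
reconcile({"meta": 420, "google": 380, "tiktok": 110}, warehouse_total=600)
```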

Reference files

Seven references that go alongside the SKILL.md.

The references hold the metric definitions, attribution comparison, platform quirks, incrementality playbook, reconciliation patterns, cohort templates, and failure pattern catalog. Each is a self-contained doc the team can lift into a project without reading the rest.

  • references/metric-definitions-glossary.md

    CTR, CPC, CPM, CPA, ROAS (revenue not profit), LTV, AOV, frequency, reach, impressions, conversion window, view-through, modeled conversion, blended CAC, MER. Definitions, formulas, and the common pitfalls in usage. Pin definitions before debating numbers.

  • references/attribution-model-comparison.md

    Last-click, first-click, linear, time-decay, U-shaped, DDA, MMM. For each: definition, fit, bias direction, when not to use. Worked example showing the same conversion path under each model. Decision matrix by business stage. Three rules for reporting which model produced which number.

  • references/platform-reporting-quirks.md

    Per-platform behaviors. Google PMax black box and branded cannibalization. Meta iOS impact, Conversions API necessity, view-through overcount. LinkedIn 30-day click defaults and B2B metrics. TikTok video-completion attribution. Programmatic viewability gates. Cross-platform interference and the typical 1.3 to 3x platform vs warehouse gap.

  • references/incrementality-testing-playbook.md

    Five methods: geo holdout, ghost bidding (Google), conversion lift (Meta), PSA tests, switchback. Setup, duration, analysis pattern, expected incremental rate ranges per channel type. Step-by-step geo holdout playbook. Cadence: quarterly on highest-spend channels.

  • references/dashboard-reconciliation-patterns.md

    Three-layer reporting model: platform metrics for in-flight tuning, warehouse multi-touch for cross-platform attribution, MMM for budget allocation. Blended CAC formula and worked example (a minimal version of the math follows this list). Board-deck pattern with a worked slide showing platform-reported vs warehouse-attributed vs incremental. Reconciliation cadence.

  • references/cohort-analysis-templates.md

    Three cohort cuts: by acquisition month (LTV growth over rolling 12 months), by acquisition channel (which channels deliver higher-LTV customers), by acquisition campaign (campaign-level LTV signals). Retention curve patterns: plateau, smile, decay. When to act on cohort signals.

  • references/common-interpretation-failures.md

    Twelve failure patterns with name, symptom, root cause, fix, and prevention. ROAS week-over-week noise, platform vs warehouse mismatch, PMax branded cannibalization, week-1 retargeting surge, 12% A/B noise, projected LTV pitfalls, frequency masked by free conversions, last-click bias, scaling saturation, brand on direct ROAS, mix shift, unverified agency reporting.
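
For orientation before opening the reconciliation reference, a minimal sketch of the blended CAC and MER arithmetic under the common definitions (numbers are illustrative; the reference's worked example may differ in detail):

```python
# Blended CAC: total paid spend across all platforms divided by total new
# customers from the warehouse, regardless of which platform claims them.
total_spend = 50_000.00   # monthly paid spend, all platforms
new_customers = 400       # warehouse count of new customers in the period
revenue = 120_000.00      # revenue attributed to the period

blended_cac = total_spend / new_customers  # 125.0 dollars per customer
mer = revenue / total_spend                # 2.4x marketing efficiency ratio
print(f"Blended CAC: ${blended_cac:,.2f} | MER: {mer:.2f}x")
```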

Browse all reference files on GitHub

Bridge to the experimentation suite

Conceptual cousin: read result panels without fooling yourself.

The discipline of reading dashboards without fooling yourself shows up in two places: paid media result panels and product experiment result panels. The statistical patterns are the same; the application is different.

experimentation-analytics covers reading product experiment result panels: confidence intervals, p-values, multiple testing, sequential testing, CUPED variance reduction, ratio metrics with the delta method, heterogeneous treatment effects, network effects, and dashboard reconciliation. Built for product managers and data analysts reading experiment results.

ads-performance-analytics (this skill) covers reading paid media dashboards: attribution models, platform reporting quirks, multi-platform reconciliation, ROAS vs LTV, cohort analysis, incrementality testing, and platform self-attribution bias. Built for marketers and growth analysts reading campaign results.

Read both for the full picture on result interpretation across the work. The dashboard reconciliation patterns in particular show up identically in both skills, which is the point: ad platforms inflate numbers the same way experiment platforms once did, and the warehouse discipline is the answer in both cases.

Where this skill fits in the suite

The third skill in the marketing suite. Completes the trio.

The marketing suite covers the paid media discipline across three skills; ads-performance-analytics is the third.

paid-media-strategy covers strategy and operations: hypothesis discipline for spend, channel selection, budget allocation, audience targeting, bid strategy, campaign types, what NOT to spend on, attribution reality, and the failure modes that produce expensive lessons. Read it when designing a plan or auditing an account.

ads-creative-development covers creative production: hook patterns, format selection, video pacing, variation systems, sequential testing, fatigue detection, brand-voice alignment, platform creative norms. Read it when producing creative at scale.

ads-performance-analytics (this skill) covers result interpretation: attribution models, platform-reported vs reality, multi-platform reconciliation, ROAS vs LTV, cohort analysis, incrementality testing, platform self-attribution bias. Read it when a dashboard is about to inform a decision.

Together the three cover the full paid media lifecycle from strategy through creative production to result interpretation. The integrations catalog at /integrations covers the platform-specific tactical layer underneath.

Open source under MIT

Read the SKILL.md on GitHub.

The skill source lives in the rampstackco/claude-skills repository alongside dozens of other skills covering the full lifecycle of brand and product work. MIT licensed.

Frequently asked questions

How is this different from a platform analytics tutorial?
A tutorial teaches you how to read the platform's dashboard. This skill teaches you how to interpret it under shipping pressure: which numbers are inflated, which are reliable, which are platform self-attribution, and which to trust against the warehouse. The math is in the references; the discipline is in the body.
Why default to the warehouse over the platform?
Every platform's dashboard is optimized to make the platform look effective. Each platform claims credit for conversions other platforms also touched. Sum-of-platforms is always greater than reality. The warehouse is the only neutral source that does not have a structural incentive to over-attribute. Use platform numbers for in-flight tuning of platform levers; use warehouse numbers for board decisions.
How does this pair with the rest of the marketing suite?
Three skills cover the paid media lifecycle. paid-media-strategy covers strategy and operations: hypothesis discipline, channel selection, budget allocation, audience targeting, bid strategy. ads-creative-development covers creative production: hook patterns, formats, variations, testing, fatigue. ads-performance-analytics (this skill) covers result interpretation: attribution, reconciliation, cohort analysis, incrementality. Read whichever fits the current phase of the work.
How does this relate to experimentation-analytics?
Conceptual cousins. experimentation-analytics covers reading experiment result panels without fooling yourself: confidence intervals, p-values, sequential testing, CUPED, ratio metrics. ads-performance-analytics covers reading paid media dashboards without fooling yourself: attribution, reconciliation, incrementality, cohort. The statistical patterns are the same; the application is different (experiments vs paid media). Read both for the full picture on result interpretation.
What is incrementality and why does the skill emphasize it?
Incrementality is the share of conversions that would not have happened without the ad. Most paid media is 30 to 70% incremental, not 100%. Branded search bidding is often 5 to 20% incremental because the user would have found you organically. Retargeting is often 20 to 40% incremental. The platform's reported attribution does not measure incrementality; it claims credit for conversions that include people who would have converted anyway. Without quarterly incrementality testing, optimization runs against numbers that systematically overcount. The size of the over-count is the size of the budget waste.
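For example, with illustrative numbers: a channel reporting 100 conversions on $5,000 of spend shows a $50 platform CPA; at 50% incrementality only 50 of those conversions are real, so the true cost per incremental conversion is $100, double what the dashboard shows.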
What is the courage to call it incremental zero?
Some campaigns look profitable in the platform but are incremental zero in the warehouse: branded search bidding when you already rank first organically, PMax cannibalizing free brand traffic, retargeting users who already added items to cart. The platform reports these as conversions; the warehouse reveals they would have happened without the ad. Killing those campaigns is the discipline, and most accounts have at least one. The skill's closing section argues that finding and killing them is the highest-impact analytics work in paid media, because the alternative is paying for outcomes you would have gotten for free.