Flagship Skill · Feature launch playbook

The feature launch playbook skill.

Shipping, releasing, and launching are three different things.

A veteran PM-leader's playbook for launching features well, not just shipping them. Positioning, internal alignment, customer comms, sales enablement, support readiness, rollout strategy, monitoring with pre-defined rollback triggers, and post-launch measurement against spec hypotheses. Built around the discipline that distinguishes shipping from releasing from actually launching.

Audience: all PMs, not just data-savvy ones. Adjacent: engineering managers, marketing managers, customer success leaders.

The keystone distinction

Three different things. Most teams conflate them.

The vocabulary upgrade is the most useful PM-leader habit this skill teaches. Use precise language. "Engineering shipped on Tuesday. We are releasing to 25 percent on Thursday. We are launching publicly next month." The vocabulary forces honest accounting of what has and has not happened.

Shipping (engineering completion)

Engineering work is complete. Code is on production. No users can access it yet, or only internal users via a flag. The PR is merged and deployed.

Releasing (user access)

Users can access it. Could be 1 percent rollout, could be 100 percent. The feature is live in some technical sense. The flag is on for at least one user.

Launching (the discipline)

Positioning, internal alignment, customer comms, sales enablement, support readiness, rollout strategy, monitoring, and post-launch measurement are all in flight. The feature has been introduced to the people who would benefit, with the context they need to use it.

Most "feature failed" diagnoses turn out to be "feature was unlaunched." This skill is structured around the launch dimension because that is where most teams under-invest.

What is in the skill

Fifteen sections covered in the body.

The SKILL.md spans the full launch lifecycle from the keystone distinction through pre-launch readiness, launch day execution, and post-launch measurement. Each section names a discipline and a common failure mode.

  1. Ship vs release vs launch

    The keystone distinction. Shipping is engineering completion. Releasing is user access. Launching is positioning, alignment, comms, enablement, readiness, rollout, monitoring, and measurement. Most teams conflate the three; precise vocabulary forces honest accounting.

  2. Launch tiers

    Tier 1 (full launch), Tier 2 (focused launch), Tier 3 (release note). Match effort to feature. Treating every release as Tier 1 produces comms fatigue; treating every release as Tier 3 produces unlaunched features.

  3. Pre-launch: positioning

    One-page canvas: target user, problem, current alternative, user-visible promise, proof points, anti-positioning. The most common positioning failure is a vague target user.

  4. Pre-launch: internal alignment

    Single launch brief distributed 2 to 4 weeks before Tier 1 launch, 1 week before Tier 2. Stakeholders: engineering, sales, customer success, marketing, support, executive sponsor. The brief answers what each function needs to do differently.

  5. Pre-launch: customer comms plan

    Channel mix matched to feature and tier: in-app, email, blog, release notes, social, webinar, sales-led briefing. Comms calendar with owners and gates. Sequencing: highest-touch first, internal before external.

  6. Pre-launch: sales enablement

    B2B-specific. Battlecard, demo script, deal coaching, training session. The shadow launch failure: feature ships, sales hears via Slack, no battlecard, customers churn over misrepresentation.

  7. Pre-launch: support readiness

    Training, FAQ, escalation paths, monitoring access. Train support before customer-facing comms. Support is the early-warning system for launch quality.

  8. Launch day: rollout strategy

    Four patterns: all-at-once, gradual percentage, flag-gated cohort, phased multi-week. Decision framework: blast radius times confidence times external commitments. Most launches default to gradual.

  9. Launch day: monitoring

    Six dimensions: error rates, latency, feature usage, funnel completion, support volume, customer satisfaction. Pre-defined rollback triggers fire automatically; the team executes without re-litigating.

  10. Launch day: comms execution

    Calendar executes with health-check gates. The comms misfire failure: blog post auto-publishes while rollout is paused. Make external comms manually triggered, gated on rollout health.

  11. Post-launch: measurement

    Four dimensions: adoption (reach), engagement (stickiness), outcome (effect), side effects (safety). Tied to spec hypotheses. Time horizons: 1 to 2 weeks for adoption, 4 to 8 weeks for outcome.

  12. Post-launch: iteration

    Weekly triage. Distinguish marketing, usability, value, and segment problems; each maps to a different fix. Declared-victory failure: launch metric hits target in week 1, reverts by week 4.

  13. Common failure modes

    Twelve patterns: unlaunched, sales confused, support overwhelmed, rollout backfire, comms misfire, declared victory, no measurement, missed segments, late enterprise comms, attribution confound, wrong feature, no internal capitalization.

  14. The framework: 12 considerations

    Tier the launch, position it, align internally, plan comms, enable sales, ready support, pick rollout, define monitoring with rollback triggers, sequence comms gated on health, tie measurement to hypotheses, iterate post-launch, declare success on stable trend.

  15. Most failed launches are unlaunched

    When a feature fails, the most common diagnosis is not that the feature was wrong. It is that the launch was incomplete. Sales did not know. Customers were not told. Support could not help. Audit the launch before declaring the feature failed.
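The rollout decision framework from section 08 (blast radius times confidence times external commitments) can be sketched as a small helper. The enum values, thresholds, and mapping below are illustrative assumptions layered on the four patterns the skill names, not part of the skill itself:

```python
from dataclasses import dataclass

@dataclass
class LaunchContext:
    blast_radius: str          # "low" | "medium" | "high": how many users a defect would hit
    confidence: str            # "low" | "medium" | "high": test coverage, beta feedback
    external_commitment: bool  # announced date, contractual deadline, event demo

def pick_rollout(ctx: LaunchContext) -> str:
    """Map blast radius x confidence x external commitments to one of the
    four rollout patterns. Thresholds are illustrative; adapt to your risk tolerance."""
    if ctx.blast_radius == "high" and ctx.confidence != "high":
        return "phased multi-week"       # big downside, shaky confidence: go slow
    if ctx.external_commitment and ctx.confidence == "high":
        return "all-at-once"             # hard date plus high confidence
    if ctx.blast_radius == "low" and ctx.confidence == "high":
        return "all-at-once"             # small downside, well tested
    if ctx.blast_radius == "medium" or ctx.confidence == "medium":
        return "gradual percentage"      # the default most launches land on
    return "flag-gated cohort"           # start with a known-friendly cohort

print(pick_rollout(LaunchContext("medium", "medium", False)))  # gradual percentage
```

Writing the rule down, even this informally, turns the rollout choice into an auditable decision instead of a gut call on launch morning.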

Reference files

Ten references that go alongside the SKILL.md.

The references hold the launch tier framework, the positioning canvas, the internal launch brief structure, the comms playbook, the B2B sales enablement template, the support readiness checklist, the rollout patterns, the monitoring checklist, the measurement framework, and the failure pattern catalog. Launch is operationally broad; the references reflect the breadth.

  • references/launch-tier-decision.md

    Tier 1 / 2 / 3 framework. Three-question decision: will sales need to talk about this, will customers be confused without comms, will the metric show up in board reporting. Worked examples for SSO, pricing change, export-to-PDF, keyboard shortcut.

  • references/positioning-canvas.md

    One-page positioning template with six sections: target user, problem, current alternative, user-visible promise, proof points, anti-positioning. Worked examples for B2B SaaS, B2C, and developer features. The vague-target-user failure mode.

  • references/internal-alignment-checklist.md

    Stakeholders by org type (small startup, mid-size, enterprise). Launch brief structure with six sections. Distribution timing by tier. The pre-launch meeting, the launch-day standup, when the brief reveals misalignment.

  • references/customer-comms-playbook.md

    Channel-by-channel: in-app, email, blog, release notes, social, webinar, sales-led briefing. Decision factors per channel. Comms calendar template. Sequencing principles. Channel-mix rules of thumb by feature type.

  • references/sales-enablement-template.md

    B2B-specific. Battlecard with six sections. Demo script structure (11 minutes). Deal coaching for four common scenarios. Training session structure. The shadow launch failure mode.

  • references/support-readiness-checklist.md

    Training, FAQ, escalation paths, monitoring access. The training-before-comms rule. The post-launch FAQ refresh cycle. Support as quality signal: when support volume reveals a feature problem.

  • references/rollout-strategy-patterns.md

    Four patterns matched to feature types. Decision matrix combining blast radius, confidence, and external commitments. Cohort-specific rollout sequencing. What rollback trigger means; pre-defined rules removing the debate.

  • references/monitoring-readiness-checklist.md

    Six monitoring dimensions: error rates, latency, feature usage, funnel completion, support volume, customer satisfaction. Yellow / orange / red thresholds. Pre-defined rollback triggers with examples. The on-call rotation. Health check before each rollout step.

  • references/post-launch-measurement-framework.md

    Four measurement dimensions tied to spec hypotheses. Time horizons by signal type. The declared-victory-on-launch-spike failure. The no-measurement-plan failure. When the metric did not move: four diagnoses with different fixes.

  • references/common-launch-failures.md

    Twelve failure patterns with name, symptom, root cause, fix, prevention. Cross-references to the other reference files. The pattern across all twelve: treating launch as a moment instead of a multi-week discipline.

Browse all reference files on GitHub

Pairs with these platforms

Five data and analytics platforms for post-launch measurement.

Post-launch measurement requires the data layer. Open the integration microsite for the platform you are measuring with: BigQuery and Snowflake for warehouse-native measurement, dbt for shared metric definitions across launches, Mixpanel for product analytics on the launched feature, Hex for reproducible analysis notebooks. The skill is the measurement framework; the integration page is the platform-specific tactics.

Bridges to sister Product skills

Five sister skills compose into the full PM operational discipline.

This skill does not stand alone. Each sibling below covers a part of the PM operational discipline; together they cover spec writing, instrumentation, experimentation methodology, warehouse-native measurement, rollout infrastructure, and the launch playbook itself.

  • Spec: pm-spec-writing

    Where launch hypotheses come from. Specs that translate vague ideas into specific, actionable dev briefs with measurable hypotheses about what the feature will move.

  • Instrumentation prerequisite: product-analytics-setup

    Without instrumentation, you cannot measure the launch. Event taxonomy, property design, naming conventions, schema versioning, North Star selection. The discipline that makes the launch measurement plan possible.

  • Launch as experiment: experiment-design

    When the launch IS an experiment (gradual rollout used as a holdout test), the methodology applies. Hypothesis, sample size, MDE, primary metric, what NOT to test.

  • Measurement methodology: data-warehouse-experimentation

    For teams measuring launches via warehouse-native methods. SQL assignment, exposure logging, dbt metric definitions, statistical analysis, CUPED variance reduction.

  • Rollout infrastructure: feature-flagging

    The flag-management discipline this skill assumes. Flag types, naming, lifecycle, targeting rules, rollout strategies, stale flag cleanup, governance.

Where this skill fits in the track

The third skill. Completes the PM gap-closing track.

The PM gap-closing track covers three skills. The first two went data-deep; this one goes operationally broad. Together they close the gap between PM specs and shipped features that actually capture value.

product-analytics-setup covers instrumentation execution: how to set up trackable product analytics that produce trustable answers. The data foundation that every launch measurement plan needs.

data-warehouse-experimentation covers running experiments out of the warehouse: SQL assignment, exposure logging, dbt metric definitions, statistical analysis, CUPED variance reduction. The methodology for testing whether a launch worked.

feature-launch-playbook (this skill) covers the operational launch discipline: positioning, internal alignment, customer comms, sales enablement, support readiness, rollout strategy, monitoring, and post-launch measurement. The playbook that turns "feature exists" into "feature lands."

The track is complete. Together the three close the PM data gap: instrument correctly, experiment rigorously, launch with a plan. The /integrations catalog covers the platform-specific tactical layer underneath each.

Open source under MIT

Read the SKILL.md on GitHub.

The skill source lives in the rampstackco/claude-skills repository alongside dozens of other skills covering the full lifecycle of brand and product work. MIT licensed.

Frequently asked questions

What is the difference between shipping, releasing, and launching?
Shipping means engineering work is complete; the code is on production. Releasing means users can access it; could be 1 percent rollout, could be 100 percent. Launching means positioning, internal alignment, customer comms, sales enablement, support readiness, rollout strategy, monitoring, and post-launch measurement are all in flight. Most teams conflate the three. The distinction is the keystone of this skill: a feature that ships without launching captures a fraction of its potential value because nobody knows it is there.
Does every feature need the full launch playbook?
No. Match effort to feature scope via launch tiers. Tier 1 (full launch) is for net-new products, major features, pricing changes, breaking changes; full playbook plus executive announcement. Tier 2 (focused launch) is for meaningful improvements; subset of the playbook with in-app comms, blog post, support readiness, rollout, measurement. Tier 3 (release note) is for incremental polish; changelog plus light monitoring. Treating every release as Tier 1 produces comms fatigue; treating every release as Tier 3 produces unlaunched features.
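The three-question tier decision described in references/launch-tier-decision.md (will sales need to talk about this, will customers be confused without comms, will the metric show up in board reporting) can be sketched as a helper. The mapping from yes-count to tier is an illustrative assumption:

```python
def launch_tier(sales_needs_it: bool, customers_confused_without_comms: bool,
                in_board_reporting: bool) -> int:
    """Three-question tier decision: each 'yes' pushes the launch up a tier.
    The yes-count-to-tier mapping is an illustrative assumption."""
    yes = sum([sales_needs_it, customers_confused_without_comms, in_board_reporting])
    if yes >= 2:
        return 1  # Tier 1: full launch playbook plus executive announcement
    if yes == 1:
        return 2  # Tier 2: focused launch
    return 3      # Tier 3: release note plus light monitoring

# Enterprise SSO: sales yes, confusion yes, board yes
print(launch_tier(True, True, True))    # 1
# Export-to-PDF: sales will mention it, nobody confused without comms
print(launch_tier(True, False, False))  # 2
# Keyboard shortcut
print(launch_tier(False, False, False)) # 3
```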
Why does the skill emphasize pre-defined rollback triggers?
When a launch goes sideways, the team often debates whether to roll back for an hour while the issue compounds. Pre-defined triggers remove the debate; the rule fires automatically and the team executes the rollback. The cost of a wrong rollback (paused launch and re-rollout) is small; the cost of a delayed rollback (compounding customer impact, reputation damage, harder recovery) is large. Bias toward fast rollback by writing the trigger down before launch.
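A pre-defined trigger is just a rule written down before launch and evaluated against live metrics. A minimal sketch, with hypothetical metric names and threshold values:

```python
# Hypothetical rollback triggers, written down before launch so nobody
# debates them at 2 a.m. Metric names and thresholds are illustrative.
ROLLBACK_TRIGGERS = [
    ("error_rate",      lambda m: m["error_rate"] > 0.02),          # >2% of requests failing
    ("p95_latency_ms",  lambda m: m["p95_latency_ms"] > 1500),      # p95 above 1.5 s
    ("support_tickets", lambda m: m["support_tickets_per_hr"] > 20),
]

def should_roll_back(metrics: dict) -> list:
    """Return the names of every trigger that fired; any hit means roll back."""
    return [name for name, rule in ROLLBACK_TRIGGERS if rule(metrics)]

fired = should_roll_back({"error_rate": 0.035, "p95_latency_ms": 900,
                          "support_tickets_per_hr": 4})
print(fired)  # ['error_rate'] -- the rule fires; the team executes, no debate
```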
What is the comms misfire failure?
A blog post auto-publishes at the announced launch time, but the rollout was paused two hours earlier due to an issue. Customers click through to a feature that 75 percent of them do not have access to. The launch story breaks. The fix is to make external comms manually triggered, gated on a rollout health check. The comms calendar holds the planned timing; the actual send is conditional on the rollout passing the health check.
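The gate can be sketched as a conditional publish step; `rollout_is_healthy` and `publish_blog_post` are hypothetical stand-ins for your monitoring query and CMS call:

```python
import datetime

def rollout_is_healthy() -> bool:
    """Hypothetical health check: query monitoring for active rollback
    triggers and confirm the rollout is not paused."""
    return True  # stand-in; wire this to your monitoring in practice

def publish_blog_post() -> None:
    print("blog post published")  # stand-in for the CMS call

def fire_external_comms(planned_time: datetime.datetime) -> bool:
    """External comms fire at the planned time, but the actual send is
    gated on rollout health. Returns True if the announcement went out."""
    if datetime.datetime.now() < planned_time:
        return False  # not yet time
    if not rollout_is_healthy():
        print("rollout unhealthy or paused: holding the announcement")
        return False
    publish_blog_post()
    return True
```

The planned time stays on the calendar either way; only the send is conditional, which keeps the launch story from breaking.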
How does this skill compose with the other Product skills?
pm-spec-writing defines the launch hypotheses; this skill validates them. roadmap-planning provides the launch context. product-analytics-setup is the instrumentation prerequisite; without it you cannot measure the launch. experiment-design and data-warehouse-experimentation provide the methodology for testing whether the launch worked. feature-flagging provides the rollout infrastructure this skill depends on. Together these skills compose into the full PM operational discipline.
Why is post-launch measurement at week 4, not week 1?
The launch-week spike is unrepresentative. Adoption is high in week 1 because of the launch announcement; many users try the feature once and never return. By week 4 the metric stabilizes at its post-launch level. Declaring victory in week 1 leads to roadmap decisions built on a launch that did not actually land; the metric reverts to baseline by week 4 and the team has already moved on. The four-week checkpoint is the minimum reliable measurement window for adoption and engagement.
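The stabilization check behind the week-4 rule can be sketched as a comparison of recent weekly readings; the sample data and tolerance below are illustrative assumptions:

```python
def adoption_stabilized(weekly_active_pct: list, tolerance: float = 0.1) -> bool:
    """True when the two most recent weekly adoption readings are within
    `tolerance` (relative) of each other, i.e. the launch spike has settled.
    Needs at least four weeks of data to say anything."""
    if len(weekly_active_pct) < 4:
        return False
    prev, last = weekly_active_pct[-2], weekly_active_pct[-1]
    return abs(last - prev) <= tolerance * max(prev, last)

# Launch-week spike (18%) reverts; by week 4 the metric settles near 7.5%.
print(adoption_stabilized([18.0, 11.0, 8.0, 7.5]))  # True
print(adoption_stabilized([18.0, 11.0]))            # False: too early to call
```

Declaring victory only once this check passes is what keeps the week-1 spike out of roadmap decisions.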