Walkthrough · Product launch
Launch a feature
You want to ship a new feature to users in a controlled, measured way that produces post-launch learning.
- PM
- Engineering
Skill cluster
The skills this walkthrough orchestrates.
Each skill in the catalog is a methodology unto itself. Walkthroughs show how multiple skills compose for a specific use case. Click a card to read the skill in detail.
Skill
pm-spec-writing
Drafts the spec: problem, user stories, success criteria, scope, risks.
Skill
roadmap-planning
Sequences the launch within the broader roadmap; surfaces dependencies.
Skill
integration-orchestrator
Coordinates work across PM, engineering, design, content, customer success.
Skill
content-brief-authoring
Briefs the launch comms (blog post, email, in-app, social).
Skill
feature-flagging
Implements the rollout flag; phased ramp; rollback path.
Skill
beta-program-management
Runs the soft-launch beta cohort; structured feedback collection.
Skill
feature-launch-playbook
Owns the public launch discipline: alignment, enablement, monitoring, rollback triggers.
Skill
content-distribution
Distributes the launch comms across owned, earned, and paid channels.
Skill
experimentation-analytics
Reads post-launch metrics against pre-launch hypotheses; surfaces what worked.
Skill
editorial-qa
Pre-publish QA on every comms artifact before it ships.
Orchestration sequence
How the skills fire across four phases.
Each phase produces an artifact the next phase depends on. The sequence is what turns a high-level prompt into a shipped outcome.
- Phase 1
Discovery + Spec
Translate the high-level prompt into a build-ready spec. Sequence within the broader roadmap; surface dependencies.
pm-spec-writing
Drafts the spec; reviews with engineering and design.
roadmap-planning
Sequences the launch; surfaces dependencies on adjacent work.
- Phase 2
Build + Setup
Coordinate the build across teams. Brief the launch comms in parallel. Configure the feature flag.
integration-orchestrator
Coordinates handoffs between PM, engineering, design, content, customer success.
content-brief-authoring
Briefs each comms artifact; sets voice, scope, anti-patterns.
feature-flagging
Implements the flag with phased ramp and rollback path.
- Phase 3
Beta + Soft launch
Run the calibrated beta cohort. Comms ship through QA. Soft launch validates without the coordinated public moment.
beta-program-management
Recruits cohort, collects structured feedback, applies graduation criteria.
editorial-qa
Reviews each comms artifact pre-publish; brief adherence, voice, fact accuracy.
feature-launch-playbook
Coordinates internal alignment, sales enablement, support readiness.
- Phase 4
Public launch + Measure
Public moment lands. Comms distribute across channels. Post-launch monitoring runs against pre-launch hypotheses; retro feeds the next launch.
content-distribution
Channel-fit distribution; owned, earned, and paid amplification of organic content.
experimentation-analytics
Reads post-launch metrics; ships retro doc that informs next launch.
Artifacts at each stage
What the workflow produces, illustrated.
Five artifacts span the four phases of the launch. Each is the output of one or more skills firing; together they tell the story of the launch from spec through post-launch metrics.
Phase 1 output
Product spec
The pm-spec-writing skill produces a structured spec: problem statement, user stories, success criteria, scope, open questions and risks. The spec is the contract between PM, engineering, design, and the launch motion downstream.
Product spec · v1.2
Saved searches with notification
Owner: PM. Reviewed by engineering and design. Status: ready to build.
Problem statement
Power users return to the same search queries weekly. Today they re-type queries each time and miss new matches between visits. Support tickets and feedback survey both surface this friction with frequency.
User stories
- As a power user, I want to save a search query so I do not have to re-type it.
- As a power user, I want to be notified when new results match a saved search.
- As a user managing searches, I want to edit, rename, and delete saved searches.
Success criteria
- Saved-search adoption among power users: >= 35%
- Notification CTR: >= 18%
- Return rate within 7 days (saved-search users): +12% vs control
In scope
- Save / edit / delete
- Email notifications
- Web app surface
Out of scope
- In-app push
- Mobile native app
- Slack integration
Future
- Mobile push
- Daily digest
- Shared team searches
Open questions and risks
- How often should notifications send? Default daily or weekly? Decision before build.
- Risk: notification volume could overwhelm power users with broad queries. Mitigation: rate limit per saved search per day.
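The rate-limit mitigation above can be sketched in code. This is an illustrative sketch only; the class name, the per-day cap, and the in-memory counter are assumptions, not part of the spec.

```python
from collections import defaultdict
from datetime import date

# Illustrative cap; the spec leaves the exact limit as an open question.
MAX_NOTIFICATIONS_PER_DAY = 3

class NotificationLimiter:
    """Per-saved-search daily rate limit, as named in the risk mitigation."""

    def __init__(self, max_per_day: int = MAX_NOTIFICATIONS_PER_DAY):
        self.max_per_day = max_per_day
        # Counts notifications sent per (saved search, day) pair.
        self._sent = defaultdict(int)

    def allow(self, saved_search_id: str, today: date) -> bool:
        """Return True and count the send if this search is under its daily cap."""
        key = (saved_search_id, today)
        if self._sent[key] >= self.max_per_day:
            return False
        self._sent[key] += 1
        return True
```

A production version would persist counters and reset them by calendar day, but the gating logic is the same shape.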
Phase 1 output
Roadmap view
Roadmap-planning sequences the launch in the broader plan; dependencies surface explicitly. The view shows the feature in Q2 with its dependencies and downstream blocks.
Product roadmap · FY view
This feature is highlighted in Q2. Dependencies surfaced below the item.
Q1
- Performance work · done
- Search ranking v2 · done
Q2 (current)
- Saved searches with notification · in progress
  - Depends on: search ranking v2
  - Blocks: shared team searches (Q4)
- Onboarding redesign · in progress
- Admin role overhaul · in review
Q3
- Mobile app v3 · ready
- Enterprise SSO · scoping
Q4
- TBD pending Q3 outcomes · tentative
Phase 2-3 output
Rollout plan
Feature-flagging plus feature-launch-playbook produce the rollout plan: phased ramp from internal to 5% beta to 25% to 100%, gates between phases, rollback criteria explicit.
Rollout plan · saved-searches
Phased rollout with gates between phases. Rollback criteria explicit.
Phase progression
- 01 · 2 days
Internal
Engineering and PM only. Smoke test for catastrophic bugs and obvious UX failures.
Gate to next: Smoke test passes; no P0 bugs in 24h.
- 02 · 10 days
5% beta
Closed beta cohort, 200 power users. Structured feedback collection. Rollback ready.
Gate to next: Beta feedback positive; <2% support ticket rate increase.
- 03 · 14 days
25%
Broader rollout. Behavioral signal at scale. Watch guardrails daily.
Gate to next: Adoption >= 25% of eligible users; guardrails clean for 7 days.
- 04 · Ongoing
100%
Full GA. Marketing announcement, content distribution, post-launch monitoring.
Gate to next: Public launch comms ship; 14-day post-launch monitoring window opens.
Rollback criteria
- Any P0/P1 bug affecting more than 5% of users in any phase: rollback immediately.
- Notification spam complaints exceed 1% of saved-search users: pause and re-evaluate notification rate limits.
- Adoption below 10% at the 25% phase after 14 days: do not proceed to 100%; investigate fit.
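The plan above can be encoded as data plus one rollback check. A minimal sketch, assuming the thresholds stated in the plan; the field names and structure are illustrative, not a real flagging API.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    ramp_pct: int   # share of users exposed in this phase
    gate: str       # human-readable gate to the next phase

# Mirrors the four phases of the rollout plan.
PHASES = [
    Phase("internal", 0,   "smoke test passes; no P0 bugs in 24h"),
    Phase("5% beta",  5,   "beta feedback positive; <2% support ticket rate increase"),
    Phase("25%",      25,  "adoption >= 25% of eligible users; guardrails clean for 7 days"),
    Phase("100%",     100, "public launch comms ship; 14-day monitoring window opens"),
]

def should_rollback(p0_p1_affected_pct: float, spam_complaint_pct: float) -> bool:
    """Apply the two hard rollback criteria: >5% of users hit by a P0/P1 bug,
    or spam complaints from more than 1% of saved-search users."""
    return p0_p1_affected_pct > 5.0 or spam_complaint_pct > 1.0
```

Encoding the plan as data keeps the gates reviewable in one place and lets monitoring evaluate the rollback criteria mechanically.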
Phase 3-4 output
Launch comms calendar
Content-brief-authoring briefs each artifact; editorial-qa reviews them; content-distribution ships them on schedule. The calendar surfaces 9 artifacts across 4 owners with status tracking through QA.
Launch comms calendar
9 artifacts across 4 owners. Status tracked through editorial-qa before any T+0 send.
| Artifact | Timing | Owner | Status |
|---|---|---|---|
| Internal announcement: Slack post to all-hands channel; sets context for support team. | T-7 days | PM | published |
| Beta participant email: thanks beta cohort; outlines what changes for them at GA. | T-2 days | PM + CS | scheduled |
| Blog post (announcement): public-facing announcement with positioning, screenshots, and use cases. | T+0 | Content | in review |
| Email newsletter send: newsletter announcement to active subscribers with inline CTA. | T+0 | Content | draft |
| In-app announcement: in-product banner for first session post-launch; dismissible. | T+0 | PM + Eng | scheduled |
| Social posts (LinkedIn, X): coordinated social wave; pull-quote from blog post; product screenshots. | T+0 to T+3 days | Content | draft |
| Help docs update: help center articles for the new feature; FAQ entries; video walkthrough. | T-1 day | Content + CS | in review |
| Sales enablement deck: one-pager and pitch slide for sales conversations; competitive talking points. | T-3 days | PMM + Sales | scheduled |
| Customer success outreach: targeted outreach to top 50 power-user accounts; personal note from CSMs. | T+1 day | CS | draft |
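The "status tracked through editorial-qa before any T+0 send" rule can be sketched as a pre-launch check. The calendar rows, status values, and function name here are illustrative assumptions.

```python
# Subset of the calendar above; statuses are illustrative.
CALENDAR = [
    {"artifact": "Blog post (announcement)", "timing": "T+0", "status": "in review"},
    {"artifact": "Email newsletter send",    "timing": "T+0", "status": "draft"},
    {"artifact": "In-app announcement",      "timing": "T+0", "status": "scheduled"},
]

# Statuses that mean the artifact has cleared editorial-qa.
READY = {"scheduled", "published"}

def launch_blockers(calendar: list) -> list:
    """Return T+0 artifacts that have not cleared QA and so block the launch moment."""
    return [row["artifact"] for row in calendar
            if row["timing"].startswith("T+0") and row["status"] not in READY]
```

Run this at T-1 day: an empty list means the public moment can proceed; anything returned is a blocker for the launch owner.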
Phase 4 output
Post-launch metrics
Experimentation-analytics reads the post-launch metrics against the pre-launch hypotheses. The dashboard shows adoption by cohort, feature usage, and guardrails. The read informs the retro that feeds the next launch.
Post-launch metrics · day 21 of 28
Saved searches feature
Adoption (power users)
38.4%
Target >=35% · +3.4pp above target
Notification CTR
21.7%
Target >=18% · +3.7pp above target
7-day return rate
+14.2%
Target +12% vs control · +2.2pp above target
Adoption by cohort
Power users (10+ sessions/wk)
38.4%
Active users (3-9 sessions/wk)
21.5%
Casual users (1-2 sessions/wk)
7.8%
New signups (last 30 days)
12.1%
Guardrails (pre-launch baseline vs post-launch 21 days)
- Daily active users: +4.8%
- Session duration: +2.1%
- Support ticket volume: +8.3% (within tolerance)
- Churn rate (30-day): -0.4pp
Post-launch read: All KPIs above target. Support ticket bump traceable to a known notification-rate edge case; fix scheduled in next sprint. Recommend lifting the 14-day monitoring window and moving to steady-state.
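The read above reduces to comparing observed KPIs against pre-launch targets. A minimal sketch using the illustrative dashboard figures; metric keys and function names are assumptions.

```python
# Pre-launch targets and day-21 observations from the illustrative dashboard.
TARGETS  = {"adoption_pct": 35.0, "notification_ctr_pct": 18.0, "return_rate_lift_pct": 12.0}
OBSERVED = {"adoption_pct": 38.4, "notification_ctr_pct": 21.7, "return_rate_lift_pct": 14.2}

def kpi_read(observed: dict, targets: dict) -> dict:
    """Delta vs target per metric, in percentage points; negative values are misses."""
    return {k: round(observed[k] - targets[k], 1) for k in targets}

def all_above_target(observed: dict, targets: dict) -> bool:
    """True only when every KPI meets or beats its pre-launch target."""
    return all(observed[k] >= targets[k] for k in targets)
```

With these figures `kpi_read` reproduces the dashboard deltas (+3.4pp, +3.7pp, +2.2pp), and `all_above_target` is the one-line summary the retro leads with.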
How a Claude agent runs this
The agent transcript, illustrative.
A Claude agent loaded with these skills can chain through the orchestration. The transcript below illustrates the flow.
User
We're launching saved searches with notifications next quarter. Help me plan it.
Agent
Starting with the spec. We'll need a problem statement, success criteria, and scope before we can plan the rest.
Skill
pm-spec-writing
Output
Spec drafted: problem (power users re-typing queries weekly, missing matches between visits), 3 user stories, success criteria (35% adoption among power users, 18% notification CTR, +12% return rate), in/out/future scope, open questions and risks. Ready for engineering review.
Agent
Sequencing the launch in the roadmap. The feature depends on search ranking v2 (already shipped) and blocks shared team searches (Q4).
Skill
roadmap-planning
Output
Roadmap updated: saved-searches anchored in Q2 with explicit dependency callouts. Adjacent items (onboarding redesign, admin role overhaul) sequenced to avoid resource conflict.
User
Move into build coordination. We have engineering, design, content, and CS involved.
Agent
Coordinating the cross-functional handoffs and briefing the comms in parallel.
Skill
integration-orchestrator
Output
Workstreams defined per team. Handoff gates between phases (build complete -> comms ready -> beta -> public launch). Slack channel + weekly sync set.
Agent
Briefing comms artifacts now so content can run in parallel with engineering.
Skill
content-brief-authoring
Output
9 comms artifacts briefed: internal announcement, beta email, blog post, newsletter, in-app banner, social posts (LinkedIn + X), help docs, sales deck, CSM outreach. Voice, scope, anti-patterns set per artifact.
User
We're ready for soft launch. Set up the beta.
Agent
Configuring the flag and recruiting the beta cohort.
Skill
beta-program-management
Output
200-user closed beta cohort recruited from power users. Structured feedback channels set: weekly survey, async feedback form, 5 mid-beta interviews. Graduation criteria explicit. Beta runs 10 days then ramp to 25%.
User
Beta looks good. Ship the public launch.
Agent
Coordinating the public moment. QA the comms artifacts, then distribute on schedule.
Skill
feature-launch-playbook
Output
Internal alignment confirmed; sales deck shipped to sales 3 days pre-launch; support docs and training complete. Rollback triggers documented. Launch readiness checklist green.
Agent
Distributing comms now.
Skill
content-distribution
Output
Blog post live on owned blog; newsletter sent; in-app banner active; social wave underway across LinkedIn and X. Sales and CSM teams notified at launch moment.
User
What's the post-launch read at day 21?
Agent
Pulling the metrics against pre-launch hypotheses.
Skill
experimentation-analytics
Output
All KPIs above target. Adoption 38.4% (target 35%); notification CTR 21.7% (target 18%); 7-day return rate +14.2% (target +12%). Support ticket bump traceable to known notification-rate edge case; fix in next sprint. Recommend lifting monitoring window and moving to steady-state.
Variations
Three tiers of the same workflow at different scales.
The full skill cluster fits a flagship version of the workflow. Most teams need lighter cuts more often. The three tiers below describe when each cut fits and which skills carry the work.
Tier 1
Major launch
Anchor launch with full comms wave, beta program, marketing campaign, sales enablement, post-launch retro. Often a quarterly or annual highlight.
Time / cost
8-16 weeks; cross-functional team; full QA cycles
Skills involved
- pm-spec-writing
- roadmap-planning
- integration-orchestrator
- content-brief-authoring
- feature-flagging
- beta-program-management
- feature-launch-playbook
- content-distribution
- experimentation-analytics
- editorial-qa
Output shape
Full spec + roadmap + rollout plan + 9-artifact comms calendar + beta synthesis + public launch + 30-day retro doc.
Tier 2
Standard launch
Moderate-sized feature with light comms wave, soft launch via feature flag (no formal beta), targeted distribution.
Time / cost
4-8 weeks; PM-led with engineering and content support
Skills involved
- pm-spec-writing
- feature-flagging
- feature-launch-playbook
- content-brief-authoring
- content-distribution
- editorial-qa
- experimentation-analytics
Output shape
Spec + rollout plan + 3-4 comms artifacts + post-launch metrics read.
Tier 3
Quiet launch
Small change or quality improvement that ships without comms. Internal stakeholders informed; feature flag controls rollout; metrics monitored.
Time / cost
1-2 weeks; PM + engineering only
Skills involved
- pm-spec-writing
- feature-flagging
- experimentation-analytics
- editorial-qa
Output shape
Brief spec + flag config + post-launch metric read; no comms wave.
Frequently asked
Questions this walkthrough surfaces.
- How is this walkthrough different from the feature-launch-playbook skill alone?
- The feature-launch-playbook skill covers the launch discipline: shipping vs releasing vs launching, internal alignment, sales enablement, support readiness, rollout strategy, monitoring with rollback triggers. This walkthrough is the broader orchestration: pm-spec-writing produces the spec, roadmap-planning sequences the work, beta-program-management runs the soft launch, content-brief-authoring scopes the comms, content-distribution executes them, editorial-qa gates them, and feature-launch-playbook ties the launch coordination together. The skill is one tool; the walkthrough is the workflow that uses ten tools together.
- Do I need all 10 skills for a launch?
- No. The Tier 1 walkthrough uses all ten; Tier 2 cuts to seven; Tier 3 cuts to four. Most launches are Tier 2 or Tier 3. Pick the cut that matches the launch stakes: a major launch that anchors the year warrants the full cluster; a small feature improvement that affects one segment may need only spec, flag, and analytics. The Variations section above describes the cuts.
- When should I run an A/B test as part of a launch vs skip it?
- Run an A/B test when the launch's success criteria are quantitative and the change can be tested against a meaningful control. Skip the test when the change is foundational and rolling back would be expensive (use staged rollout with monitoring instead), when sample size is too small for the test to be conclusive, or when the change is a quality fix that does not warrant testing. The run-an-ab-test walkthrough covers the testing discipline; this walkthrough composes it as one stage when applicable.
- What is a 'soft launch' vs a 'public launch' in this walkthrough?
- Soft launch: feature available to a calibrated cohort or via gradual ramp without coordinated public announcement. Beta program management runs this stage. Public launch: coordinated comms (blog, email, social, in-app), sales enablement, support readiness. Feature-launch-playbook owns the public launch discipline. Most launches benefit from both: soft launch validates with limited blast radius; public launch creates the coordinated moment that makes the feature land in the market.
- How do I avoid post-launch support overload?
- Three disciplines. First, support readiness pre-launch: docs, FAQ, support team training, escalation paths. Second, monitoring with explicit thresholds: define the support-volume increase that triggers investigation. Third, mid-launch communication: if friction surfaces in the first 48 hours, communicate to users transparently rather than letting tickets accumulate. The launch-comms calendar covers what ships when; the rollout plan covers the gates between phases.
- What does post-launch retro look like?
- 30 days post-launch: aggregate the metrics against pre-launch hypotheses, surface what worked vs what did not, document lessons that inform the next launch. The retro is owned by the PM with input from engineering, design, content, and customer success. Write it as a document, not a deck; circulate to stakeholders; reference in the next launch's planning. Pair with after-action-report skill for the broader retro discipline.
Metrics shown are illustrative. Actual results vary by platform, methodology, and traffic volume.