Flagship Skill · Integration orchestrator

The integration orchestrator skill.

Sequence the work that answers to the brief. Phases, gates, lock points, handoffs, QA verification.

Built for product managers running creative-direction-driven projects with AI agents in the loop. The skill produces a phased delivery plan that maps to the team's actual tools (Jira, Linear, Notion, Figma, GitHub) and specifies which AI agents run at which gates with MCP and CLI integration.

What this skill is

The temporal layer below the brief.

A brief tells a team what to make. Creative direction tells them what it should feel like. Neither tells them when each decision locks, who reviews what, or what stops the next phase from starting before the prior one is done. By week three, the team is working in parallel and nobody knows which decisions are still open and which are locked.

Most teams have briefs and creative direction but no orchestration plan. The result is brief drift, parallel-work conflicts, identity tokens shifted after copy was drafted against the old ones, engineering shipping before design had a chance to review, and the "everyone says Done but nothing was actually verified" problem.

This skill fills the gap. The output is a phased delivery plan with calendar weeks, gate definitions with measurable pass criteria, lock-point register, handoff specs, and a QA verification gate spec with Playwright or Chrome MCP invocations. It is distinct from creative-brief (the project shape) and creative-direction (the aesthetic direction): this skill produces the temporal map both depend on.

What it produces

A plan the PM runs Tuesday morning.

The orchestration plan is a markdown document, typically ORCHESTRATION.md, at the project root. It is not theory and not methodology theater: it is decision material the PM pastes into a calendar Monday, configures tooling against that afternoon, and is running the project by Tuesday morning.

Per project, the plan contains:

  • a phased timeline with calendar weeks;
  • per-phase gate specs covering trigger, approver, measurable pass criteria, what is blocked, and what is already locked;
  • a lock-point register tracking which artifacts are immutable when;
  • handoff specs with artifact requirements between every phase transition;
  • a tool-stack implementation guide with concrete Jira, Linear, Notion, Figma, and GitHub setup, including MCP commands or CLI invocations where applicable;
  • a QA verification gate spec naming which automated tests run and what happens on failure;
  • a cross-skill dependency graph;
  • and a risk register specific to the chosen cadence.

The discipline that makes the plan work is specificity. Generic plans fail. "QA gate must pass" is not a plan; "Playwright critical flows pass, console errors at baseline 0, accessibility floor WCAG AA, Lighthouse mobile at or above 85, visual regression on top 8 pages within tolerance" is a plan.
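The difference is mechanical: measurable criteria can be evaluated as data rather than asserted in a standup. A minimal Python sketch of that idea, with metric names and thresholds borrowed from the example above (the skill itself does not prescribe this schema):

```python
# A gate is a set of named pass criteria, each a predicate over measured
# values. Names and thresholds mirror the example QA gate; illustrative only.
QA_GATE = {
    "playwright_critical_flows_pass": lambda m: m["failed_flows"] == 0,
    "console_errors_at_baseline":     lambda m: m["console_errors"] <= 0,
    "lighthouse_mobile":              lambda m: m["lighthouse_mobile"] >= 85,
    "visual_regression_top_pages":    lambda m: m["regressed_pages"] == 0,
}

def evaluate_gate(gate, measurements):
    """A gate passes only if every criterion does; return the failures by name."""
    failing = [name for name, check in gate.items() if not check(measurements)]
    return (len(failing) == 0, failing)

measured = {"failed_flows": 0, "console_errors": 0,
            "lighthouse_mobile": 82, "regressed_pages": 0}
passed, failing = evaluate_gate(QA_GATE, measured)
# Lighthouse mobile is 82 against a floor of 85, so the gate fails on
# exactly that criterion -- and the failure report names it.
```

"QA gate must pass" cannot be expressed this way; "Lighthouse mobile at or above 85" can, which is the whole point of the specificity discipline.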

The framework

Seven considerations for orchestration.

Each filters the choices that follow. The deep version of each lives in the reference files; this page surfaces the structure.

1. Phasing

How the project decomposes into phases (discovery, direction, identity, production, QA, launch). Different project types use different phase sets. A campaign skips identity (it inherits the locked brand identity); a landing page collapses discovery and direction into a single brief phase; a rebrand replaces discovery with audit. The shape matters less than the explicit answer to two questions per project: which phases run sequentially and which overlap.

2. Gates

The approval moments that govern phase transition. Standard gates are brief, direction, identity, voice, copy, design, QA verification, and launch readiness. Each has a trigger, an approver, measurable pass criteria, what is blocked, and what is locked from prior gates. Gates work when criteria are measurable. "QA passed" is too vague; specific test types and thresholds make the gate enforceable.

3. Lock points

The moments artifacts become immutable. Brief locks after direction approval; identity tokens lock after identity gate; voice locks after voice approval; copy locks per page after copy approval. Once locked, an artifact can only change via formal change request that triggers re-review of dependent work. Without explicit lock points, every artifact stays conceptually editable forever, which is the source of brief drift.
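A lock-point register can be as small as a map from artifact to the gate that locked it, with edits to locked artifacts gated on a formal change request. A sketch under assumed names (the class, methods, and the "CR-7" ticket id are illustrative, not part of the skill's output):

```python
class LockRegister:
    """Tracks which artifacts are immutable and which gate locked them.
    Edits to a locked artifact require a formal change request, which is
    what triggers re-review of dependent work downstream."""

    def __init__(self):
        self.locked = {}  # artifact name -> gate that locked it

    def lock(self, artifact, at_gate):
        self.locked[artifact] = at_gate

    def can_edit(self, artifact, change_request=None):
        if artifact not in self.locked:
            return True                     # still open: free to edit
        return change_request is not None   # locked: formal CR only

register = LockRegister()
register.lock("brief", at_gate="direction-approval")

open_edit = register.can_edit("identity-tokens")                 # True: not locked yet
blocked   = register.can_edit("brief")                           # False: locked, no CR
via_cr    = register.can_edit("brief", change_request="CR-7")    # True: CR path
```

The inverse of this register is the failure mode above: no `lock()` calls ever happen, so everything stays conceptually editable forever.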

4. Handoffs

The moments work transfers between phases or skills. Standard handoffs include discovery to brief, brief to identity, brief to voice, identity plus voice to copy, identity plus copy to design, design to engineering, and engineering to launch. Plus a cross-cutting adjacent-observation handoff for things noticed while working nearby that get filed in a triage queue rather than derailing the current task.
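A handoff spec reduces to a required-artifact list checked at the transition: the handoff is blocked until every named artifact exists. A sketch with illustrative handoff and artifact names (not the skill's actual identifiers):

```python
# Each handoff names the artifacts that must exist before work transfers.
HANDOFFS = {
    ("design", "engineering"): ["approved-design-file", "identity-tokens", "copy-final"],
}

def ready_to_hand_off(source, target, available):
    """Return (ready, missing): the handoff fires only with zero missing artifacts."""
    required = HANDOFFS[(source, target)]
    missing = [a for a in required if a not in available]
    return (not missing, missing)

ok, missing = ready_to_hand_off("design", "engineering",
                                {"approved-design-file", "identity-tokens"})
# copy-final does not exist yet, so design-to-engineering is blocked.
```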

5. Team-size modulation

How the cadence scales with team size. Solo work uses self-review checklists per phase. Small teams (2-3) use single reviewers per gate with async handoffs. Medium teams (4-7) use multiple reviewers and within-phase mini-gates to flatten review demand. Large teams (8+) use formal phase reviews, a dedicated brief-owner role, and explicit cross-track sync ceremonies because parallel tracks drift apart without them.
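The modulation is a straight lookup from headcount to review structure. Sketched directly from the tiers above:

```python
# Review structure by team size, mirroring the tiers described above.
def review_cadence(team_size: int) -> str:
    if team_size == 1:
        return "self-review checklist per phase"
    if team_size <= 3:
        return "single reviewer per gate, async handoffs"
    if team_size <= 7:
        return "multiple reviewers, within-phase mini-gates"
    return "formal phase reviews, brief-owner role, cross-track sync ceremonies"

cadence = review_cadence(5)  # "multiple reviewers, within-phase mini-gates"
```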

6. Tool-stack implementation

How the orchestration implements in real platforms. In Jira, phases become Epics with custom fields; in Linear, Projects with cycles and label-driven views; in Notion, brief-as-database-row with relation properties; in Figma, library structure with frame-level review status; in GitHub, branch protection plus PR templates plus CODEOWNERS. Where MCPs exist, the skill specifies which setup operations the agent can run and which remain manual UI work.

7. Automation and QA verification

The most consequential discipline. Tasks only move from QA to Done after automated verification passes. The status taxonomy is Todo, In Progress, Waiting, Blocked, Done; Blocked is a first-class status, distinct from Waiting, that prevents agents from spinning on unresolvable work. Playwright MCP runs critical-flow tests; Chrome MCP runs human-readable walkthroughs; failure routes to Blocked, files an adjacent observation if relevant, and pages the human via the configured channel.
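The taxonomy can be enforced as a transition table in which Done is unreachable except through a passing verification run. A sketch (it adds an explicit QA status between In Progress and Done, which the prose above implies but does not name in the taxonomy):

```python
# Allowed status transitions. Done does not appear as a target anywhere in
# this table: the only way out of QA is the verification router below, and
# failure routes to Blocked, never back to In Progress.
ALLOWED = {
    "Todo":        {"In Progress"},
    "In Progress": {"Waiting", "Blocked", "QA"},
    "Waiting":     {"In Progress"},
    "Blocked":     {"In Progress"},  # a human unblocks; the agent stops
    "QA":          set(),            # QA exits only via after_verification
}

def move(status: str, target: str) -> str:
    if target not in ALLOWED.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {target}")
    return target

def after_verification(passed: bool) -> str:
    """Route a task out of QA on the automated verification result."""
    return "Done" if passed else "Blocked"
```

On failure the real plan also files an adjacent observation and pages the human; that side-effect layer is omitted here.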

Common cadences

Seven project types, seven templates.

Each cadence is a skeletal phase map with calendar weeks, gate definitions, and tool-stack implementation. The cadence-patterns reference file expands each.

  • New brand build. 6 to 8 weeks across 5 phases (discovery, direction, identity, production, launch).
  • Rebrand. 4 to 6 weeks across 4 phases. Audit replaces discovery because positioning is known.
  • Single landing page. 1 to 2 weeks across 3 phases (brief, production, QA-launch). Identity assumed locked.
  • Campaign. 2 to 4 weeks across 4 phases. Brand identity and voice assumed locked.
  • Identity refresh. 3 to 4 weeks across 3 phases. Skips discovery and positioning.
  • Website refresh. 4 to 8 weeks across 4 phases. Direction phase produces a refreshed brief if positioning has shifted.
  • Microsite or product launch. 3 to 5 weeks across 4 phases. Fixed launch date drives backward planning.
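Backward planning from a fixed launch date, as in the microsite cadence, is a small calculation: walk the phase durations in reverse from the launch date to find when each phase must start. A sketch with illustrative phase names, durations, and launch date:

```python
from datetime import date, timedelta

def plan_backward(launch, phase_weeks):
    """Given a fixed launch date and ordered (phase, weeks) durations,
    compute each phase's start date working backward from launch."""
    starts = {}
    cursor = launch
    for phase, weeks in reversed(phase_weeks):
        cursor = cursor - timedelta(weeks=weeks)
        starts[phase] = cursor
    return starts

# A 4-phase, 5-week microsite cadence driven by a fixed launch date.
starts = plan_backward(date(2025, 6, 2),
                       [("brief", 1), ("production", 2),
                        ("qa", 1), ("launch-prep", 1)])
# The brief phase must start 5 weeks before launch: 2025-04-28.
```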

Failure patterns

The failures the plan prevents.

  • QA gate is self-reported. "Done" tickets get moved to Done without verification. Bugs land in production. The fix: automated verification that fires before status transitions to Done; failure routes to Blocked, not back to In Progress.
  • Brief becomes a write-once doc. The brief is drafted, signed, then nobody references it again. Downstream work drifts. The fix: lock the brief at a gate and treat any reference to it as a reading, not an edit.
  • Identity locks before voice. The typography is finalized before the voice is approved. The result: typography and voice register conflict (friendly voice in austere type, or serious voice in casual type). The fix: voice approval gate fires before or alongside identity lock.
  • Engineering starts before design freeze. Components get built against design that is still moving; rework downstream when design lands somewhere else. The fix: design approval gate before engineering starts on the affected surface.
  • Reviews stack at end of phase. Everyone defers to the final day; the gate becomes a 10-hour review marathon and approvers approve under fatigue. The fix: within-phase mini-gates that flatten review demand.
  • Agents spin on unresolvable work. A Claude Code task the agent cannot resolve on its own; it retries, burns context, and runs out of tokens. The fix: Blocked is a first-class status; agents move stuck tasks to Blocked and stop.

Composes with

The skills and platforms this skill orchestrates.

The orchestrator runs above other skills and inside other platforms. Skills it sequences: creative-brief (the operational sibling that defines the project shape) and creative-direction (the aesthetic direction the gates and lock points protect).

Platforms it implements against: Jira, Linear, Notion, Figma, GitHub, and the agile sprint cadence. Each platform's implementation notes include MCP integration guidance covering Atlassian's remote MCP server, Linear's official MCP, Notion's MCP, GitHub's MCP server, and Figma's Dev Mode MCP, so teams running AI agents can pipe orchestration output directly into their tooling.

References & further reading

Where to go next.

Read the creative direction framework for the brief this skill orchestrates. Browse the full skills catalog for sibling skills. The integrations hub covers per-platform setup notes that pair with this skill.

The skill source lives on GitHub: SKILL.md plus seven reference files in the references directory: cadence patterns, gate definitions, handoff protocols, platform implementation, team-size modulation, automation and QA tooling, and a complete worked example. MIT licensed and stack-agnostic.

Frequently asked

Common questions.

  • How is this different from project management software?

    PM software (Jira, Linear, Asana) is the platform; this skill produces the plan that runs inside those platforms. The skill doesn't replace the platform; it fills the gap between "we have a brief" and "we have an actual sequenced plan implementable in our platform". The output specifies phases, gate definitions, lock points, handoff specs, and platform-specific setup so the PM can configure the chosen tool to match.

  • How is this different from creative-brief?

    creative-brief produces the project shape (scope, audience, deliverables, constraints, success criteria). This skill produces the project sequencing (phases, gates, handoffs, timeline). They are complementary; the orchestrator skill takes the brief as input and produces the temporal map that the brief alone cannot answer (when does identity lock, what's blocked while a gate is open, what does Done actually mean).

  • Does the skill output integrate directly with our tools?

    Partially. The skill specifies setup operations using MCPs where supported (Atlassian, Linear, Notion, GitHub MCPs) and CLI invocations where MCP is the wrong choice. Some operations (workflow scheme creation, Figma library setup, GitHub branch protection rulesets) remain manual UI setup; the skill specifies what to do without claiming to automate it end-to-end. The output is a plan a PM can execute, not a one-click bootstrap.

  • Can I use this if my team only uses one of those platforms?

    Yes. The skill output adapts to the platforms in your stack. A team using only Notion gets a Notion-specific implementation; a team using Linear plus GitHub gets a Linear plus GitHub implementation. The framework (phases, gates, handoffs) is platform-agnostic; the implementation guide is platform-specific. Multi-platform stacks (Notion plus Linear plus GitHub, or Jira plus GitHub) are common; the skill specifies the linkage conventions between platforms.

  • What if our project is already mid-flight?

    The skill produces a plan from current state forward. Inputs include "existing constraints" (what's already locked, what's already shipped, what's still open). The output handles re-orchestrating mid-flight as well as greenfield setup. A common mid-flight scenario: the brief was approved but downstream work has drifted; the orchestrator output specifies what to lock now, what to re-review, and how to prevent further drift through the rest of the timeline.

  • Does the skill cover QA tooling like Playwright?

    Yes. The framework's seventh consideration is automation and QA verification. The reference file on automation-and-qa-tooling covers Playwright MCP for browser-driven QA, Claude in Chrome for human-readable verification, Windows MCP for desktop apps, and CLI alternatives where MCP token cost is high. The skill output specifies which verification tooling fires at which gate, and what happens when verification fails (typically: status moves to Blocked, an adjacent observation gets filed, the human is paged via the configured channel). The ten categories of QA coverage are documented end to end.