Walkthrough · Content program

Build a content hub

You want to build a topical-authority hub on a chosen subject, with a pillar piece anchoring 8 to 15 cluster pieces designed to compound over a 3 to 5 year horizon.

  • Content
  • Growth
  • PM
16 min read

Orchestration shape

Hub-and-spoke, not phased linear.

Building a hub is spatial work, not sequenced project work. Stages describe what produces what. AI collaboration is a layer beneath production rather than a gated stage of its own.

Stage 1 · Decide and research

Decide whether the topic is worth the hub investment. Research the keyword space.

  • content-strategy
  • seo-keyword

Stage 2 · Architect

Design the hub: pillar piece, cluster pieces, internal-link architecture, URL structure.

  • pillar-content-architecture

Stage 3 · Produce (where the hub gets built)

The pillar piece and cluster pieces get briefed, written, and QA'd. This stage is where the calendar and capacity questions bind. AI collaboration is the workflow layer underneath, not a separate stage.

  • content-brief-authoring
  • content-and-copy
  • editorial-qa

Cross-cutting layer

AI participates across briefing, drafting, and QA. The workflow discipline is decided once and applied throughout.

  • ai-content-collaboration

Stage 4 · Distribute

The hub goes to audiences via owned, earned, and paid channels. Distribution shape follows the format mix and audience segmentation the strategy named.

  • content-distribution

The four stages produce a hub that compounds. Stages 1 and 2 are one-time decisions; Stage 3 runs through the cluster pieces over weeks; Stage 4 runs continuously as each piece publishes and afterward as the hub matures.

Artifacts at each stage

What the workflow produces, illustrated.

Five artifacts span the four stages: a topic decision, a keyword cluster, the hub architecture itself, a cluster brief, and the distribution flow. Together they tell the story of building a hub from the topic decision through the hub going live.

Stage 1 output

Topic decision

The content-strategy skill produces the decision artifact: a scored five-criterion evaluation with rejected alternatives documented, a specific decision, and mitigations.

Topic decision · produced by content-strategy

"Modern feature flag adoption for engineering teams"

Five-criterion evaluation. 4 of 5 met. Recommendation: proceed with mitigations on editorial capacity.

Evaluation

  • Search volume justifies the build

    Pillar keyword 9.4K monthly; cluster long-tails sum to 28K.

  • Topic facets identified (8+ angles)

    12 distinct facets identified; 8 are deep enough for cluster pieces.

  • Commercial relevance to product

    Direct: feature-flag platform users are our buyer profile.

  • Competitive feasibility

    Top 3 competitors have shallow coverage; gap on engineering-team angle.

  • Editorial commitment available

    Risk: only 1 senior writer with the engineering depth needed.

Rejected alternatives

  • "Feature flags for marketers." Search volume thin; commercial relevance weak; rejected.
  • "Experimentation platforms compared." High volume but heavily saturated by analyst sites; rejected.
  • "Trunk-based development." Adjacent topic; better as a cluster piece off the chosen pillar.

Decision: Proceed with the engineering-teams angle. Editorial-capacity mitigation: pair the senior writer with one mid-level writer using ai-content-collaboration discipline; ship cluster pieces in waves of 3 over the first quarter.
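The decision artifact above follows a simple rule: proceed when enough criteria are met and treat the unmet ones as mitigations rather than blockers. A minimal Python sketch of that rule, with illustrative names and a hypothetical threshold of 4 of 5 (none of this comes from a real tool):

```python
# Hypothetical sketch: encoding the five-criterion topic evaluation so the
# proceed/reject call is explicit and repeatable. Criterion names and the
# threshold are illustrative assumptions.

CRITERIA = {
    "search_volume": True,            # pillar 9.4K monthly; long-tails sum to 28K
    "topic_facets": True,             # 12 facets found; 8 deep enough for clusters
    "commercial_relevance": True,     # buyers match the product's user profile
    "competitive_feasibility": True,  # top-3 competitor coverage is shallow
    "editorial_commitment": False,    # risk: one senior writer available
}

def evaluate(criteria, threshold=4):
    """Proceed when at least `threshold` criteria are met; unmet criteria
    become the mitigation list rather than an automatic rejection."""
    met = [name for name, ok in criteria.items() if ok]
    risks = [name for name, ok in criteria.items() if not ok]
    decision = "proceed with mitigations" if len(met) >= threshold else "reject"
    return decision, risks

decision, risks = evaluate(CRITERIA)
print(decision, risks)  # proceed with mitigations ['editorial_commitment']
```

The point of encoding it is that rejected alternatives get scored against the same rubric, which is what makes the "rejected alternatives" section of the artifact comparable rather than anecdotal.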

Stage 1 output

Keyword cluster

The seo-keyword skill produces the cluster output: pillar keyword, 8 cluster groups, long-tail keywords with volumes, and gap callouts where competitor coverage is thin.

Keyword cluster · produced by seo-keyword

Pillar: feature flags (9,400 monthly). 8 cluster groups identified. 3 marked with thin competitor coverage.

Adoption strategy

  • feature flag adoption strategy · 1,400
  • rolling out feature flags to engineering team · 880
  • feature flag onboarding process · 510

Naming and lifecycle · GAP

  • feature flag naming conventions · 2,100
  • stale feature flags cleanup · 1,300
  • feature flag lifecycle · 1,800

Targeting and rollouts

  • feature flag targeting rules · 1,200
  • percentage rollout feature flag · 950
  • ring deployment feature flags · 320

Testing with flags

  • ab testing with feature flags · 3,200
  • feature flag canary testing · 1,400

Governance and policy · GAP

  • feature flag governance · 880
  • feature flag policy template · 260
  • audit trail feature flags · 390

Architecture patterns

  • feature flag architecture · 1,900
  • feature flags microservices · 1,100

Tools comparison

  • feature flag tool comparison · 2,400
  • open source feature flags · 1,800

Engineering culture · GAP

  • trunk based development feature flags · 1,100
  • continuous deployment feature flags · 1,400

Read: 8 clusters; 3 with thin competitor coverage (gap clusters). Naming and lifecycle, governance, and engineering culture are the highest-impact first cluster pieces to ship.
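That prioritization is mechanical once the cluster data is structured: gap clusters jump the queue, then volume breaks ties. An illustrative Python sketch of the cluster table above as data (cluster volumes are the sums of the long-tail volumes listed; the sort rule is an assumption about how a team might sequence production):

```python
# Illustrative sketch: the keyword clusters above as data, with gap clusters
# (thin competitor coverage) sorted to the front of the production queue.

clusters = [
    {"name": "Adoption strategy",      "volume": 1400 + 880 + 510,   "gap": False},
    {"name": "Naming and lifecycle",   "volume": 2100 + 1300 + 1800, "gap": True},
    {"name": "Targeting and rollouts", "volume": 1200 + 950 + 320,   "gap": False},
    {"name": "Testing with flags",     "volume": 3200 + 1400,        "gap": False},
    {"name": "Governance and policy",  "volume": 880 + 260 + 390,    "gap": True},
    {"name": "Architecture patterns",  "volume": 1900 + 1100,        "gap": False},
    {"name": "Tools comparison",       "volume": 2400 + 1800,        "gap": False},
    {"name": "Engineering culture",    "volume": 1100 + 1400,        "gap": True},
]

# Gap clusters first (highest competitive opportunity), then by search volume.
queue = sorted(clusters, key=lambda c: (not c["gap"], -c["volume"]))
for c in queue[:3]:
    print(c["name"], c["volume"])  # the three gap clusters lead the queue
```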

Stage 2 output

Hub architecture

The pillar-content-architecture skill produces the hub architecture: a specific hub being built (not the abstract anatomy shown on the skill's landing page). Pillar at center; eight cluster nodes around it; internal links shown solid (pillar-cluster) and dashed (cluster-cluster).

Hub architecture · produced by pillar-content-architecture

1 pillar + 8 clusters. Solid lines = pillar-cluster links. Dashed lines = cluster-cluster cross-links along the editorial calendar.

Hub-and-spoke diagram: the pillar piece (Feature flags · complete guide) at center, with eight cluster pieces (Adoption, Naming, Targeting, Testing, Governance, Architecture, Tools, Culture) arranged around it.
  • 01 Adoption strategy
  • 02 Naming and lifecycle
  • 03 Targeting and rollouts
  • 04 Testing with flags
  • 05 Governance and policy
  • 06 Architecture patterns
  • 07 Tools comparison
  • 08 Engineering culture
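The architecture's URL structure and solid-link plan can be derived directly from the cluster list. A hedged Python sketch, where the slug convention and bidirectional pillar-cluster linking are illustrative assumptions consistent with the `/feature-flags/[cluster-slug]` pattern the walkthrough uses:

```python
# Illustrative sketch: deriving the hub's URL structure and the solid
# pillar-cluster link plan from the cluster list. Slug convention is assumed.
import re

PILLAR = "/feature-flags"
clusters = [
    "Adoption strategy", "Naming and lifecycle", "Targeting and rollouts",
    "Testing with flags", "Governance and policy", "Architecture patterns",
    "Tools comparison", "Engineering culture",
]

def slugify(name):
    """Lowercase the name and replace non-alphanumeric runs with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

# Solid links run both ways: pillar -> cluster and cluster -> pillar.
links = []
for name in clusters:
    url = f"{PILLAR}/{slugify(name)}"
    links.append((PILLAR, url))
    links.append((url, PILLAR))

print(len(links))  # 16 solid links for 1 pillar + 8 clusters
```

The dashed cluster-cluster cross-links are the editorial judgment calls the calendar adds later; only the solid spokes fall out of the architecture mechanically.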

Stage 3 output

Cluster brief

The content-brief-authoring skill produces a per-piece brief: target keyword, intent, heading structure, required entities, internal links, word count, voice, and success criteria. The brief is the contract the writer answers to.

Cluster brief · produced by content-brief-authoring

Naming and lifecycle for feature flags

Cluster piece 02 of 08. Owner: senior writer. Target ship: week 4.

Target keyword

feature flag naming conventions

2,100 monthly searches; KD 38; SERP intent: how-to + reference.

Search intent

Engineers researching how to name flags so the codebase stays maintainable as the flag count grows past 50.

Heading structure

  • H1 Naming conventions for feature flags that scale past 50 flags
  • H2 Why naming matters when the flag count grows
  • H2 The four-part naming pattern (type, scope, feature, owner)
  • H3 Type prefix conventions
  • H3 Scope and ownership
  • H2 Lifecycle metadata in the name
  • H2 Migrating an existing codebase to the new convention
  • H2 When to break the convention deliberately

Required entities

  • 4-part naming pattern
  • Lifecycle states
  • Migration script example
  • Linter rule example

Internal links

  • Pillar: feature flags complete guide
  • Cluster: stale flags cleanup
  • Cluster: governance and policy

Word count target

2,400-2,800

Voice

Senior engineer to peer

Success criterion

Top-5 ranking within 6 months
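The brief's required entities include a "linter rule example" for the four-part naming pattern (type, scope, feature, owner). A hypothetical sketch of what that entity might look like in the finished piece; the segment vocabulary and delimiter are assumptions, not a documented convention:

```python
# Hypothetical linter rule for the four-part flag name pattern
# (type.scope.feature.owner). Type vocabulary and dot delimiter are assumed.
import re

# e.g. "release.checkout.one-click-pay.team-payments"
FLAG_NAME = re.compile(
    r"^(release|experiment|ops|permission)"  # type prefix
    r"\.[a-z][a-z0-9-]*"                     # scope
    r"\.[a-z][a-z0-9-]*"                     # feature
    r"\.[a-z][a-z0-9-]*$"                    # owner
)

def lint_flag_name(name):
    """Return True when the flag name follows the four-part convention."""
    return FLAG_NAME.match(name) is not None

print(lint_flag_name("release.checkout.one-click-pay.team-payments"))  # True
print(lint_flag_name("myTempFlag2"))                                   # False
```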

Stage 4 output

Distribution flow

The content-distribution skill produces the channel-fit map: owned, earned, and paid channels with per-channel cadence and audience-fit notes. The map is what the team executes against as each cluster piece ships.

Distribution flow · produced by content-distribution

The hub goes to audiences via owned, earned, and paid channels with per-channel cadence and audience-fit notes.

Owned

  • Newsletter feature on pillar

    Week 1

    Audience fit: Subscriber base

  • Blog placement and series page

    Continuous

    Audience fit: Organic search

  • Owned social: LinkedIn long-form post

    Per cluster ship

    Audience fit: Engineering audience

Earned

  • Syndication to engineering newsletters

    Pillar week + 2

    Audience fit: Cross-audience reach

  • Hacker News + dev community share

    Pillar publish day

    Audience fit: Practitioner discovery

  • PR pitch to industry analysts

    Hub anniversary content

    Audience fit: Analyst influence

Paid

  • Boosted LinkedIn post on pillar

    Week 1-3 post-launch

    Audience fit: Senior engineering buyer

  • Sponsored newsletter sends

    Quarterly

    Audience fit: Targeted segment reach

Channel-fit logic: Owned-heavy because the hub is a long-horizon investment. Earned adds reach without ongoing cost. Paid is selective amplification of the strongest cluster pieces, not blanket promotion.

How a Claude agent runs this

The agent transcript, illustrative.

A Claude agent loaded with these skills can chain through the orchestration. The transcript below illustrates the flow.

claude-code — walkthrough orchestration

User

I want to own the topic 'feature flags' for engineering teams. Help me build a content hub.

Agent

Starting with content-strategy to evaluate whether the topic earns the hub investment.

Skill

content-strategy

Output

Topic decision: 5-criterion evaluation. 4 of 5 met (search volume, facets, commercial relevance, competitive feasibility). 1 risk on editorial capacity. Recommendation: proceed with capacity mitigation.

Agent

Researching the keyword space.

Skill

seo-keyword

Output

Pillar: feature flags (9.4K monthly). 8 cluster groups identified spanning naming, governance, targeting, testing, architecture, tools, culture, adoption. 3 clusters flagged as competitor-coverage gaps.

Agent

Designing the hub architecture.

Skill

pillar-content-architecture

Output

1 pillar + 8 clusters. Internal linking plan with pillar-cluster solid links and cluster-cluster cross-links along the editorial calendar. URL structure: /feature-flags (pillar) and /feature-flags/[cluster-slug].

User

Brief the first three cluster pieces. Start with the gap clusters.

Agent

Briefing the gap clusters first; they have the highest competitive opportunity. Workflow uses ai-content-collaboration discipline since one writer is mid-level.

Skill

content-brief-authoring

Output

3 briefs produced: naming and lifecycle (target 2.4-2.8K words), governance and policy (2.0-2.4K words), engineering culture (2.4-3.0K words). Each brief includes intent analysis, heading structure, required entities, internal links, success criteria.

User

First cluster piece is drafted. Run it through QA and ship if clean.

Agent

Running editorial-qa with the full sequence, including the AI-content audit, since the draft used AI in the workflow.

Skill

editorial-qa

Output

Brief adherence: passed. Voice: minor drift in section 3, returned for revision. Fact accuracy: 2 statistics need source backing. AI-content audit: 1 hedge-stacking pattern flagged. Net: revise sections 3 and 5, add 2 sources, ship after second pass.

Agent

Hub is live. Distribution plan ready.

Skill

content-distribution

Output

Channel-fit map produced. Owned-heavy (newsletter, blog, owned social) for compounding base. Earned via syndication to 3 engineering newsletters and Hacker News for pillar launch. Paid: selective LinkedIn boosts on 2 highest-engagement clusters. Cadence calibrated to pillar publish + cluster ship cadence.

Variations

Three tiers of the same workflow at different scales.

The full skill cluster fits a flagship version of the workflow. Most teams need lighter cuts more often. The three tiers below describe when each cut fits and which skills carry the work.

  • Tier 1

    Authority hub

    Strategic topic the program intends to own for years. Full pillar + 15+ cluster pieces. Ongoing refresh discipline; quarterly content audit; multi-channel distribution wave. The topic anchors the brand's editorial position.

    Time / cost

    12+ months to build; full team plus dedicated editor; ongoing capacity for refresh

    Skills involved

    • content-strategy
    • seo-keyword
    • pillar-content-architecture
    • content-brief-authoring
    • content-and-copy
    • editorial-qa
    • ai-content-collaboration
    • content-distribution

    Output shape

    Pillar + 15+ clusters + refresh cycle + full distribution wave + measurable topical authority signal in 18-24 months.

  • Tier 2

    Standard hub

    Topical-authority hub on a strategic but not flagship topic. Pillar + 10-12 cluster pieces. Light refresh discipline; routine distribution.

    Time / cost

    6 months to build; PM-led with 2-3 writers; standard editorial cadence

    Skills involved

    • content-strategy
    • seo-keyword
    • pillar-content-architecture
    • content-brief-authoring
    • content-and-copy
    • editorial-qa
    • content-distribution

    Output shape

    Pillar + 10-12 clusters + standard distribution + topical signal in 12-18 months.

  • Tier 3

    Quick hub

    Lightweight hub on a focused topic where 5 clusters cover the space. Minimum architecture investment; ship and iterate.

    Time / cost

    4 months to build; 1-2 writers; lean QA

    Skills involved

    • content-strategy
    • seo-keyword
    • pillar-content-architecture
    • content-and-copy
    • editorial-qa

    Output shape

    Pillar + 5 clusters + organic-channel distribution + topical signal in 9-12 months.

Frequently asked

Questions this walkthrough surfaces.

How do I decide between a hub and a programmatic-SEO set?
A hub is editorially led: a pillar plus 8 to 15 cluster pieces, each individually written and edited, designed to compound topical authority over years. Programmatic SEO is data-led: hundreds to thousands of pages generated from structured-data templates, designed for long-tail capture at volume. Hubs win when the topic rewards depth, the audience reads, and the team can sustain editorial quality. Programmatic SEO wins when the data has real depth, query intent maps cleanly to a template, and the team has QC capacity for sampling. Many programs run both: hubs for the strategic topics, programmatic for the long-tail surface.
What if my topic only has 5 cluster facets?
Sometimes 5 is enough. The minimum that makes a hub worth the architectural overhead is roughly 5-6 cluster pieces; below that the architecture costs more than it returns. If the research surfaces only 5 facets and the topic is strategically important, ship a 5-cluster hub and plan to extend it as the topic evolves. If the research surfaces only 2-3 facets and they are the obvious ones, the topic may not warrant a hub: write the strongest single piece and skip the architecture.
How long until a hub starts ranking?
Pillar pieces on competitive topics often take 6-12 months to reach top-5 rankings. Cluster pieces ranking on long-tail keywords surface faster, sometimes within 3-4 months. The hub effect (cluster pieces strengthening pillar authority through internal linking) accumulates over the first year. Programs that expect immediate results almost always abandon hubs prematurely. Programs that commit to the multi-year horizon typically see compounding traffic from years 2-3 onward.
Can AI write the cluster pieces and we just QA them?
Possible but the editorial-qa burden is heavy. The teams getting the best results from AI-assisted hub production use the workflow patterns from ai-content-collaboration: AI drafts against detailed briefs, humans rewrite substantially in voice, voice prompts are loaded with sample text from the brand's existing strong pieces, and editorial-qa runs the AI-content audit as a halt-condition. Pure AI-write-then-QA without these disciplines produces slop that ranks briefly and decays.
What's the relationship between this walkthrough and the pillar-content-architecture skill?
Pillar-content-architecture is the methodology for designing one hub well: pillar selection, cluster planning, internal linking, page anatomy, refresh discipline. This walkthrough is the broader orchestration: content-strategy decides whether the topic warrants a hub at all, seo-keyword surfaces the keyword space, pillar-content-architecture designs the structure, content-brief-authoring briefs each piece, content-and-copy writes them, editorial-qa gates them, ai-content-collaboration covers the workflow layer, and content-distribution channels the hub. The skill is one tool; the walkthrough is the workflow that uses eight tools together.
Do we need editorial-qa on every cluster piece?
Yes. Cluster quality determines hub quality; one weak cluster piece dilutes the hub's authority. The QA cadence can flex: strong-voice writers may need lighter QA on familiar topics; AI-assisted pieces always need full QA; pieces in unfamiliar territory need expert review at minimum. The discipline is consistent QA application across the hub, not individual-piece judgment calls. Programs that QA the pillar carefully but skip QA on clusters often discover six months later that the hub's average quality dragged the pillar down.

Metrics shown are illustrative. Actual results vary by platform, methodology, and traffic volume.