Walkthrough · Site audit and improvement

Audit and improve a SaaS website

You have a SaaS website that is underperforming. Conversion is flat, organic traffic is decaying, the design feels dated, and nobody can name what to fix first. You need a diagnostic and prescriptive workflow that finds the highest-impact issues and ships fixes in priority order.

  • PM
  • Growth
  • Marketing
  • Brand
  • Engineering
22 min read

Orchestration shape

Five tracks moving through five phases.

Twelve skills are too many to read as a flat list. Five tracks (SEO, Content, Design, Experimentation, Foundation) run in parallel through five phases (audit, prioritize, validate, execute, measure). Grouping the phases by track keeps the plan legible at scale.
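
The shape is simple enough to write down as data. A minimal sketch in Python, with track and skill names taken from the full 12-skill cluster listed under Tier 1 later in this walkthrough; the encoding itself (a dict of tracks to skills plus an ordered phase list) is an assumption, not a prescribed format:

```python
# The five-track, five-phase shape as plain data. Track and skill names
# mirror this walkthrough; the structure is one reasonable encoding.

PHASES = ["audit", "prioritize", "validate", "execute", "measure"]

TRACKS = {
    "SEO": ["seo-site-health-audit", "seo-content-audit", "seo-traffic-diagnosis"],
    "Content": ["landing-page-copy", "content-refresh-system"],
    "Design": ["design-standards", "accessibility-audit"],
    "Experimentation": ["experiment-design", "feature-flagging",
                        "experimentation-analytics"],
    "Foundation": ["product-analytics-setup", "integration-orchestrator"],
}

assert sum(len(skills) for skills in TRACKS.values()) == 12  # the full cluster
```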

Tracks

  • SEO · 3 skills
  • Content · 2 skills
  • Design · 2 skills
  • Experimentation · 3 skills
  • Foundation · 2 skills
  1. Phase 1

    Audit across tracks

    Five tracks run audits in parallel. Each track produces findings. integration-orchestrator coordinates so the audit phase finishes before prioritization begins.

    SEO

    Site health, content, traffic-diagnosis audits

    Content

    Landing-page copy review; content decay audit

    Design

    Design standards + accessibility audit

    Experimentation

    Audit existing tests; identify gaps

    Foundation

    Verify analytics supports the audit

  2. Phase 2

    Prioritize findings

Cross-track triage. Findings are plotted on an impact-vs-effort matrix. The top fixes get scoped; the noise gets logged but deferred.

    Foundation

    integration-orchestrator runs the cross-track triage session

  3. Phase 3

    Validate top fixes

The top 2-3 candidate fixes get experiment-validated before site-wide rollout. This reduces the risk of confidently shipped changes that turn out neutral or negative.

    Experimentation

    experiment-design specs the tests; feature-flagging configures rollouts

    Foundation

    product-analytics-setup confirms metrics are reliable

  4. Phase 4

    Execute approved fixes

    Track owners ship. Content track refreshes copy; design track ships standards fixes; SEO track addresses indexability. Parallel work coordinated through the orchestrator.

    Content

    landing-page-copy + content-refresh-system ship updates

    Design

    design-standards + accessibility fixes shipped

    SEO

    Technical SEO fixes shipped; content audit dispositions executed

  5. Phase 5

    Measure and iterate

    experimentation-analytics confirms the fixes moved the needle. Loop back to refresh the audit; the diagnostic is recurring, not one-shot.

    Experimentation

    experimentation-analytics validates that fixes moved the metrics

    Foundation

    Audit refresh cadence set; quarterly cycle established

Coordination note: integration-orchestrator runs across all phases. Cross-track dependencies surface in phase 2 (prioritization) and phase 4 (execute approved fixes); coordinator owns the resolution.
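
One concrete way to picture the coordinator's conflict-catching: tracks commit work to specific weeks, and any week where cross-track demand on a shared resource exceeds its capacity gets flagged before it becomes a missed deadline. A hedged sketch; the commitments, resource names, and capacity numbers are all hypothetical:

```python
# Hypothetical capacity check of the kind a coordinator runs: flag any
# week where cross-track demand for a shared resource exceeds capacity.
from collections import defaultdict

commitments = [
    # (track, week, resource, capacity units requested), all hypothetical
    ("SEO", 5, "engineering", 2),
    ("Performance", 5, "engineering", 2),
    ("Content", 4, "engineering", 1),
]
capacity = {"engineering": 3}

demand = defaultdict(int)
for track, week, resource, units in commitments:
    demand[(week, resource)] += units

for (week, resource), total in sorted(demand.items()):
    if total > capacity[resource]:
        print(f"week {week}: {resource} over capacity ({total} > {capacity[resource]})")
```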

Artifacts at each stage

What the workflow produces, illustrated.

Five artifacts span the diagnostic-prescriptive cycle: the cross-track findings dashboard, the impact-vs-effort prioritization matrix, a before/after for one shipped fix, an experiment validation card, and the multi-track execution status. Together they tell the story of moving from underperforming site to shipped improvements with measurable impact.

Phase 1 output

Cross-track audit findings dashboard

Five tracks produce findings in parallel. Each finding has severity, impact estimate, and effort estimate. The dashboard surfaces critical-issue counts and per-track coverage.
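
A minimal sketch of the finding record behind a dashboard like this one. The field names are assumptions; the walkthrough specifies only that each finding carries severity, an impact estimate, and an effort estimate:

```python
# One plausible shape for an audit finding, plus the dashboard roll-ups
# (total, critical count, per-track coverage). Field names are assumed.
from dataclasses import dataclass
from collections import Counter

@dataclass
class Finding:
    track: str
    title: str
    severity: str  # "high" | "medium" | "low"
    impact: str    # short impact estimate
    effort: str    # "Low" | "Medium" | "High"

findings = [
    Finding("SEO", "Sitemap missing 40+ live URLs", "high",
            "Indexation gap; long-tail traffic loss", "Low"),
    Finding("Content", "Hero copy buries the value prop", "high",
            "Hero conversion below benchmark", "Low"),
    # ... remaining findings elided
]

critical = sum(1 for f in findings if f.severity == "high")
per_track = Counter(f.track for f in findings)
print(f"{len(findings)} findings · {critical} critical · {dict(per_track)}")
```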

Cross-track audit findings

13 findings across 5 tracks

Total: 13 · Critical: 8

SEO · 3 findings

  • Sitemap missing 40+ live URLs · high · Indexation gap; long-tail traffic loss · Effort: Low
  • Title tag length over 60 chars on 30% of pages · medium · Truncated SERP snippets · Effort: Low
  • Pillar page on /pricing has no internal links · high · Authority dilution; pillar underperforms · Effort: Medium

Content · 3 findings

  • Hero copy buries the value prop · high · Hero conversion below benchmark · Effort: Low
  • 30 blog posts in dead tail (less than 200 visits/90d) · medium · Library decay; merge or retire · Effort: Medium
  • Pricing page lacks objection handling · high · Lower trial-start conversion · Effort: Medium

Design · 3 findings

  • Type contrast 3.2:1 on primary nav · high · Accessibility floor breach · Effort: Low
  • Inconsistent button styles across CTAs · medium · Visual chaos; trust erosion · Effort: Medium
  • Mobile hero stacks awkwardly below 380px · medium · Mobile conversion drop · Effort: Medium

Accessibility · 2 findings

  • Form inputs missing labels (4 forms) · high · Screen-reader users blocked · Effort: Low
  • Missing skip-to-content link · medium · Keyboard navigation broken · Effort: Low

Performance · 2 findings

  • LCP at 4.8s on landing page (target 2.5s) · high · Core Web Vitals + bounce rate · Effort: High
  • Render-blocking JS in hero · high · First paint slower than needed · Effort: Medium
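
Two of the findings above (nav type contrast at 3.2:1, the accessibility floor breach) fall out of a specific computation: the WCAG 2.x contrast ratio. A sketch of that arithmetic; the example colors are hypothetical:

```python
# WCAG 2.x contrast ratio: linearize sRGB channels, compute relative
# luminance, and compare against the 4.5:1 AA floor for body text
# (3:1 for large text). Example colors are hypothetical.

def _linearize(channel: int) -> float:
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio((140, 140, 140), (255, 255, 255))  # grey text on white
print(f"{ratio:.1f}:1 · {'pass' if ratio >= 4.5 else 'fail'} AA body text")
```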

Phase 2 output

Impact-vs-effort prioritization matrix

All 13 findings plotted by impact (vertical) and effort (horizontal), with the top 3 picks highlighted in the do-now quadrant. Each quadrant has a clear disposition; the prioritization is what cuts a 50-finding audit down to 10-15 active items.
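
The quadrant dispositions reduce to a small lookup, sketched below with impact and effort treated as binary; real triage usually scores on finer scales and collapses the medium cases by judgment:

```python
# The four quadrant dispositions as a lookup table. Real scoring is
# finer-grained; "medium" values get collapsed up or down by judgment.

DISPOSITION = {
    ("high", "low"):  "do now",
    ("high", "high"): "plan and ship",
    ("low", "low"):   "batch as fill work",
    ("low", "high"):  "skip or defer",
}

def triage(impact: str, effort: str) -> str:
    return DISPOSITION[(impact, effort)]

print(triage("high", "low"))  # -> do now
```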

Prioritization matrix · cross-track triage

13 findings plotted by impact (vertical) and effort (horizontal). Top picks get experiment-validated before site-wide rollout.

Do now

High impact · low effort

Ship these first. Many ship without experiment validation because the risk is bounded.

Plan and ship

High impact · high effort

Substantial fixes. Sequence into roadmap; experiment-validate before site-wide rollout.

Batch as fill work

Low impact · low effort

Knock out in a batch when capacity allows. Cumulative quality upside.

Skip or defer

Low impact · high effort

Hard to justify. Defer unless a strategic reason elevates impact.

Top 3 picks (highlighted in "Do now")

  1. Hero copy: landing-page-copy + experiment-design validates before site-wide rollout.
  2. Form labels: accessibility fix; ships immediately, no experiment needed.
  3. Sitemap: SEO indexation; ships immediately, monitored via traffic diagnosis.

Phase 3 output

Experiment validation

Top fixes get experiment-validated before site-wide rollout. The validation cross-references the audit finding it tests; observed lift is compared against the audit's predicted band.
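
For reference, the arithmetic behind a card like this is a relative-lift computation plus a significance test. A minimal sketch using a pooled two-proportion z-test; the counts below are hypothetical, the card's own figures are illustrative, and production analyses typically add sequential or multiple-testing corrections:

```python
# Relative lift and a two-sided pooled two-proportion z-test.
# Counts below are hypothetical, not the card's.
from math import erf, sqrt

def two_proportion_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return p_b / p_a - 1, p_value

lift, p = two_proportion_test(conv_a=250, n_a=12000, conv_b=310, n_b=12000)
print(f"relative lift {lift:+.1%}, p = {p:.3f}")
```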

Experiment validation · pricing-page objection-handling fix

DAY 14 OF 14

Hypothesis

Adding objection handling to the pricing page (FAQ, comparison table, money-back guarantee callout) will increase trial-start conversion by at least 8% relative.

Control

2.1%

±0.14% (95% CI)

12,840 sessions, 269 trial starts

Treatment

2.4%

±0.15% (95% CI)

12,901 sessions, 310 trial starts

Relative lift

+14.3%

p-value

0.014

Cross-reference

Validates the audit finding "Pricing page lacks objection handling" (Content track, high impact, medium effort). The audit predicted lift in the 8-12% band; the observed +14.3% exceeds it.

Recommendation: SHIP. Hit MDE; no guardrail violations; segment effects consistent. Roll to 100% over 7 days; monitor for 14 days; integrate into pricing-page treatment for any subsequent fixes.
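
The "roll to 100% over 7 days" step is what feature-flagging tooling implements. A hedged sketch of the usual mechanism, deterministic hash bucketing with a rising exposure threshold; the flag name and ramp schedule are hypothetical:

```python
# Percentage ramp via stable hash bucketing: each user lands in a fixed
# 0-99 bucket per flag, and the exposure threshold rises day by day.
# The flag name and schedule are hypothetical.
import hashlib

RAMP = {1: 10, 2: 25, 3: 40, 4: 55, 5: 70, 6: 85, 7: 100}  # day -> % exposed

def bucket(user_id: str, flag: str) -> int:
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def exposed(user_id: str, flag: str, day: int) -> bool:
    return bucket(user_id, flag) < RAMP.get(day, 100)

print(exposed("user-42", "pricing-objection-handling", day=3))
```

Because the bucketing is deterministic, a user exposed on day 3 stays exposed on every later day; nobody flips between variants as the ramp rises.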

Phase 4 output

Before / after on one shipped fix

The hero copy fix is one of the top picks; this artifact shows the before-and-after with metrics. Methodology documented (experiment-validated, not observational).

Before / after · hero copy fix

Top-pick fix from prioritization matrix. Validated via 14-day experiment before site-wide rollout.

Before

Live for 14 months

The platform for modern teams

Transform your workflow with our innovative solution that helps teams unlock their full potential.

Metrics (14 days)

Hero CTA click-through
4.1%
Trial-start rate
2.2%
Bounce rate
62%

After

Validated in experiment; shipped 100% on day 14

Coordinate the work, not the tool

Cross-team handoffs without the spreadsheets. Used by 4,200 teams shipping faster than they could before.

Metrics (14 days)

Hero CTA click-through
5.7% (+39%)
Trial-start rate
3.1% (+41%)
Bounce rate
54% (-8pp)

Methodology: 14-day A/B test, 50/50 split, 18,400 sessions per variant. p=0.008 on hero CTA click-through; passes guardrails on revenue per session and session duration. Validated rather than observational; the improvement is causally attributable to the copy change.
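
"Passes guardrails" usually means a mechanical check: no guardrail metric degrades beyond a tolerated band versus control. A sketch under assumed metric names and thresholds:

```python
# Guardrail check: ship only if no guardrail metric drops more than its
# tolerated band vs control. Metric names and thresholds are hypothetical.

GUARDRAILS = {  # metric -> max tolerated relative drop vs control
    "revenue_per_session": 0.01,
    "session_duration_s": 0.02,
}

def guardrails_pass(control: dict, treatment: dict) -> bool:
    ok = True
    for metric, max_drop in GUARDRAILS.items():
        rel_change = treatment[metric] / control[metric] - 1
        if rel_change < -max_drop:
            print(f"violated: {metric} {rel_change:+.1%}")
            ok = False
    return ok

print(guardrails_pass(
    control={"revenue_per_session": 1.84, "session_duration_s": 312},
    treatment={"revenue_per_session": 1.86, "session_duration_s": 309},
))
```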

Phase 5 output

Multi-track execution status

A five-track by five-phase grid: per-cell stage status, an owner per track, and a fixes-shipped count. A cumulative impact tracker spans all touched surfaces, and coordination flags show where the orchestrator is sequencing work.

Multi-track execution status · week 4 of 6

7 of 13 fixes shipped · cumulative impact tracking active

| Track | Owner | Audit | Prioritize | Validate | Execute | Measure | Shipped |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SEO | L. Hsu | complete | complete | n/a | in progress | pending | 1 of 3 fixes |
| Content | J. Park | complete | complete | complete | in progress | pending | 2 of 3 fixes |
| Design | S. Khan | complete | complete | n/a | in progress | pending | 2 of 3 fixes |
| Accessibility | S. Khan | complete | complete | n/a | complete | pending | 2 of 2 fixes |
| Performance | R. Voss | complete | complete | in progress | pending | pending | 0 of 2 fixes |

Cumulative impact (vs baseline)

+18%

Trial-start conversion across all touched surfaces

Coordination flags

SEO and Performance tracks compete for engineering capacity in week 5. Coordinator is sequencing.

Next milestone

Week 6: Measure phase opens. experimentation-analytics confirms cumulative impact; audit refresh cycle scheduled.

How a Claude agent runs this

The agent transcript, illustrative.

A Claude agent loaded with these skills can chain through the orchestration. The transcript below illustrates the flow.

claude-code — walkthrough orchestration

User

Our SaaS site is underperforming. Conversion is flat, organic is down 25% in 6 months, the design feels dated, and the team can't agree on what to fix first. Help.

Agent

Coordinating a cross-track audit. Five tracks run in parallel; we converge to prioritize together.

Skill

integration-orchestrator

Output

Tracks set: SEO, Content, Design, Accessibility, Performance. Owner per track. Two-week audit window; daily async updates; convergence meeting end of week 2 to triage findings together.

Agent

Audits running in parallel. Findings rolling in by track.

Skill

seo-site-health-audit

Output

SEO findings: sitemap missing 40+ URLs, 30% of pages have over-length titles, pillar /pricing has zero internal links. seo-content-audit identifies 30 dead-tail blog posts. seo-traffic-diagnosis traces decline to a 6-month-ago algorithm update plus stale top-of-funnel content.

User

What's coming from the other tracks?

Agent

Content, design, accessibility, performance audits all wrapping up.

Skill

design-standards

Output

Content: hero copy buries the value prop; pricing page has zero objection handling. Design: type contrast 3.2:1 on nav; inconsistent CTAs; mobile hero stacks awkwardly under 380px. Accessibility: 4 forms missing labels; no skip-to-content link. Performance: LCP 4.8s on landing page; render-blocking JS in hero.

User

13 findings is a lot. Which ones do we actually fix?

Agent

Plotting on the impact-vs-effort matrix to triage.

Skill

integration-orchestrator

Output

Top 3 picks (high impact, low effort): hero copy fix, accessibility form labels, sitemap addition. Plan-and-ship queue (high impact, high effort): pricing page rewrite, pillar internal-linking, mobile hero redesign. Batch queue: title-tag length, skip-to-content, dead-tail merges, button consolidation. Defer queue: render-blocking JS (high effort, contained impact since LCP is more strategic).

Agent

Validating the top-of-list copy fixes via experiment before site-wide rollout.

Skill

experiment-design

Output

Experiments running: hero copy (14-day test, 50/50 split, MDE 5%) and pricing-page objection handling (14-day test, 50/50 split). Both validated: hero +39% click-through (p=0.008), pricing +14.3% trial-start (p=0.014). Rolling to 100% over 7 days; monitoring guardrails.

Agent

Tracks executing in parallel; cumulative impact tracking active.

Skill

experimentation-analytics

Output

Week 4 of 6. 7 of 13 fixes shipped. Cumulative trial-start conversion +18% vs baseline. Coordination flag: SEO and Performance tracks compete for engineering capacity in week 5; coordinator sequencing. Audit refresh cycle scheduled for Q3.

Variations

Three tiers of the same workflow at different scales.

The full skill cluster fits a flagship version of the workflow. Most teams need lighter cuts more often. The three tiers below describe when each cut fits and which skills carry the work.

  • Tier 1

    Full transformation

    Complete cross-track overhaul. All 5 tracks, full 12-skill cluster, dedicated PM running coordination. Suits properties undergoing major repositioning, post-acquisition consolidation, or recovery from sustained underperformance.

    Time / cost

    3-month effort; dedicated PM + cross-functional team; sustained coordination

    Skills involved

    • seo-site-health-audit
    • seo-content-audit
    • seo-traffic-diagnosis
    • design-standards
    • accessibility-audit
    • landing-page-copy
    • content-refresh-system
    • experiment-design
    • feature-flagging
    • experimentation-analytics
    • product-analytics-setup
    • integration-orchestrator

    Output shape

    Cross-track audit + impact-effort prioritization + 4-6 experiment validations + 15-25 fixes shipped + cumulative measurement + audit refresh cycle.

  • Tier 2

    Standard audit and improve

    6-week effort. Audits run; top 5-7 fixes ship. Most properties benefit at this scale; cross-track coordination matters but does not require dedicated PM time.

    Time / cost

    6 weeks; PM-led with cross-functional execution; standard team allocation

    Skills involved

    • seo-site-health-audit
    • seo-content-audit
    • design-standards
    • accessibility-audit
    • landing-page-copy
    • experiment-design
    • experimentation-analytics
    • integration-orchestrator

    Output shape

    Cross-track audit + prioritization matrix + 1-2 experiment validations + 5-7 fixes shipped + post-fix measurement.

  • Tier 3

    Rapid audit

    2-week diagnostic only. No execution. Surfaces what's there; team uses the prioritization matrix to plan execution separately. Suits assessment moments (new agency engagement, leadership change, pre-board-review readout).

    Time / cost

    2 weeks; small audit team across tracks; no execution capacity required

    Skills involved

    • seo-site-health-audit
    • seo-content-audit
    • design-standards
    • accessibility-audit
    • integration-orchestrator

    Output shape

    Cross-track findings + impact-effort prioritization matrix + recommended sequencing. Hand-off to execution team.

Frequently asked

Questions this walkthrough surfaces.

How do we sequence cross-track work without thrashing teams?
integration-orchestrator is the load-bearing skill in this walkthrough. Three practices keep teams from thrashing: run audits in parallel (each track's audit is independent); converge to prioritize together; then sequence execution by capacity. The prioritization matrix surfaces dependencies (e.g., performance fixes may unblock SEO ranking; design standards may unblock conversion experiments). Track owners commit to specific weeks, and the coordinator catches conflicts before they become missed deadlines. The failure mode to avoid is parallel execution with no coordination, where tracks ship overlapping changes that contaminate each other's measurement.
What if the audit surfaces 50 issues? Where do we draw the line?
The impact-vs-effort matrix is the line-drawing tool. High-impact-low-effort fixes get done now (often 5-8 items). High-impact-high-effort gets sequenced into the roadmap with explicit timelines (often 3-5 items). Low-impact-low-effort gets batched as fill work (often 8-15 items, shipped in a single sprint over a few days). Low-impact-high-effort gets deferred or skipped (often 10-20 items). 50 findings reducing to 10-15 active items is normal. The audit's value is what it surfaces; the prioritization's value is what it cuts.
When do we ship fixes directly vs validate via experiment first?
Three criteria push toward direct ship: bounded blast radius (the change affects a single page or surface); accessibility or technical-correctness fixes (form labels, sitemap inclusion, broken links); low reversal cost (revert is a one-line change). Three criteria push toward experiment first: substantial copy or layout changes (hero, pricing, primary CTAs); cross-segment impact (the change may move some segments positive and others negative); high reversal cost (the change is hard to roll back without a rebuild). The walkthrough's variations specify which fixes typically experiment-validate vs ship direct.
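
Folded into a decision rule, the criteria look roughly like the sketch below; the boolean flags and the rule itself are a simplification of what is ultimately a judgment call:

```python
# The ship-direct vs experiment-first criteria as a simplified rule.
# The flags compress judgment calls; treat this as a checklist, not logic.

def validation_path(bounded_blast_radius: bool,
                    correctness_fix: bool,
                    cheap_to_revert: bool) -> str:
    if correctness_fix or (bounded_blast_radius and cheap_to_revert):
        return "ship direct"
    return "experiment first"

print(validation_path(True, True, True))     # form labels -> ship direct
print(validation_path(False, False, False))  # hero rewrite -> experiment first
```
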
How do we measure improvement when multiple things change at once?
Two disciplines help. First, isolate experiment-validated fixes (run them in proper experiments with control populations); the experiment provides causal attribution. Second, for direct-ship fixes, use a holdout or staged rollout where possible: ship to 50% of traffic for 2-3 weeks before full rollout and compare the two halves. When neither is feasible, the cumulative measurement is observational; document that explicitly so the team does not over-attribute the lift to any specific change. The Foundation track's product-analytics-setup work is what makes these measurements possible.
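
A sketch of the per-fix holdout described above: key a stable 50/50 split to (fix id, user id) so each direct-ship fix gets its own comparable halves. The fix and user ids are hypothetical:

```python
# Stable per-fix holdout: half of users keep the old experience for a
# given fix, deterministically, so the halves stay comparable for weeks.
import hashlib

def in_holdout(fix_id: str, user_id: str) -> bool:
    digest = hashlib.sha256(f"{fix_id}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 2 == 0  # holdout half keeps the old experience

users = ["u-101", "u-102", "u-103", "u-104"]
exposed = [u for u in users if not in_holdout("cta-style-fix", u)]
print(exposed)
```
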
Can AI tools speed up the audit phase?
Yes, with discipline. AI accelerates several audit tasks: cross-checking technical SEO findings against best-known patterns; drafting initial copy revisions for landing pages; summarizing content audit data into themes; identifying accessibility issues against WCAG criteria. The discipline that ai-content-collaboration covers applies here: AI accelerates the work humans direct, with editorial-qa or specialist review on the output. AI is not a substitute for the cross-track judgment that prioritization requires; that stays human.
How does this walkthrough relate to refresh-a-stale-content-library?
Refresh-a-stale-content-library is content-only and lifecycle-focused: a recurring cycle that keeps the library compounding. This walkthrough is cross-track and prescriptive-improvement focused: a one-time-or-recurring audit that surfaces and fixes underperformance across SEO, content, design, accessibility, performance, and conversion. Programs often run both: this walkthrough on a quarterly or semi-annual cycle for site-wide health; refresh-a-stale-content-library on an ongoing cadence specifically for content lifecycle. The two compose; this walkthrough's content track triggers refresh-cycle work where the audit surfaces decay.

Metrics shown are illustrative. Actual results vary by platform, methodology, and traffic volume.