Flagship Skill · AI content collaboration
The AI content collaboration skill.
Humans own, AI accelerates.
A senior editorial leader's playbook for how humans and AI compose in content workflows. Pragmatic, tool-agnostic, honest about both what AI in the loop enables and what it threatens. Where AI legitimately participates, where humans must own, hybrid workflow patterns, voice ownership preservation, AI slop prevention, disclosure tiering, team calibration, and the ethics of intellectually honest AI-assisted content production.
Audience: editorial leaders, content directors, content ops managers, agencies running AI-assisted production, in-house teams calibrating AI usage across writers.
What this skill is for
The workflow-scope skill that composes with all six others.
Seven skills compose into the content discipline. Six cover technique at distinct scopes; this skill is the cross-cutting workflow that applies to all of them. Briefs can be AI-assisted; hubs can be AI-assisted; programmatic SEO is almost always AI-involved; editorial QA includes AI-content audit by necessity.
- 01
content-strategy (PROGRAM scope)
Decides what to produce.
- 02
pillar-content-architecture (HUB scope)
Designs the topical hub structure.
- 03
content-brief-authoring (PER-PIECE scope)
Briefs each piece.
- 04
content-and-copy (EXECUTION scope)
Writes each piece.
- 05
programmatic-seo (SCALED scope)
Generates pages at scale from data.
- 06
editorial-qa (GATE scope)
Verifies before publish.
- 07
ai-content-collaboration (WORKFLOW scope; this skill, cross-cutting)
How humans and AI compose across all six stages above. Applies to every other content skill in the suite.
The keystone distinction
Three positions. Both extremes are failure modes.
The pathology to avoid is treating AI as either a magic content factory (cheap, fast, scaled, quality optional) or as a forbidden intruder (a purity gospel that does not survive contact with deadlines). Both readings produce bad work. The middle is the discipline.
Failure mode
AI factory
Treat AI as a magic content factory: cheap, fast, scaled, output quality optional. Generic prompts; no editorial judgment; volume over craft. Output: AI slop, hallucinations, voice loss, audience trust decline.
Failure mode
Pure human
Refuse all AI involvement on principle: a purity gospel that does not survive contact with deadlines. The team uses AI privately anyway because the principle was unworkable. Output: drift, inconsistency, unarticulated workflows.
The discipline
Collaboration
Humans own; AI accelerates. AI does work the human directs and verifies; AI does NOT make decisions about what publishes, who is quoted, what is true, or what voice the brand uses. The middle is the discipline.
The litmus test. If your AI-assisted piece publishes without a human being able to defend every claim, every position, and every word, you have crossed the line. The piece is AI's work, dressed in your byline. Readers eventually notice.
Where the line falls
The participation boundary, made tangible.
The two columns below are the operational expression of the keystone framing. Humans own decisions, judgment, and accountability. AI accelerates work the human directs and verifies. The boundary is not a spectrum; specific items belong on one side or the other. Programs that document the boundary explicitly produce consistent work; programs that leave it implicit produce drift.
Humans own
- Editorial judgment
- Voice and distinctive POV
- Fact verification
- Ethical decisions
- Reader empathy
- Quote attribution
- Tone calibration on hard topics
- Narrative arc
- Final approval
Decisions, judgment, and accountability. Not delegable to AI.
AI accelerates
- Research synthesis
- Outline generation against a brief
- First-draft generation
- Alternative phrasings
- Copy edit suggestions
- Summary and abstraction
- Transcription
- Translation drafts
- QC automation at scale
Work the human directs and verifies. Speed gain, not authorship.
The "human in the loop" framing is necessary but insufficient. A human briefly reviewing AI-generated content before publish is not ownership; it is rubber-stamping. Ownership requires the human to have made the actual decisions the piece embodies.
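One concrete way to make the boundary "documented, not implicit" is to encode the two columns above as data and lint workflow definitions against them. This is an illustrative sketch only: the step names, the workflow shape, and the actor field are assumptions for the example, not a format the skill prescribes.

```python
# Illustrative sketch: the participation boundary as data, plus a lint that
# flags any workflow step assigning AI to work humans must own.
# Step names and the workflow structure are hypothetical.

HUMANS_OWN = {
    "editorial_judgment", "voice", "fact_verification", "ethical_decisions",
    "reader_empathy", "quote_attribution", "tone_calibration",
    "narrative_arc", "final_approval",
}

AI_ACCELERATES = {
    "research_synthesis", "outline_generation", "first_draft",
    "alternative_phrasings", "copy_edit_suggestions", "summary",
    "transcription", "translation_draft", "qc_automation",
}

def boundary_violations(workflow: list[dict]) -> list[str]:
    """Return the tasks where AI was assigned human-owned work."""
    return [
        step["task"]
        for step in workflow
        if step["actor"] == "ai" and step["task"] in HUMANS_OWN
    ]

# Example workflow: one step crosses the line.
workflow = [
    {"task": "research_synthesis", "actor": "ai"},
    {"task": "first_draft", "actor": "ai"},
    {"task": "fact_verification", "actor": "ai"},   # violation
    {"task": "final_approval", "actor": "human"},
]

print(boundary_violations(workflow))  # -> ['fact_verification']
```

The point of the sketch is the review discipline, not the code: a boundary written down as a list is auditable; a boundary held in people's heads produces the drift described above.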
The framework
Twelve considerations for AI content collaboration.
When designing or auditing an AI-assisted content workflow, walk these 12 considerations.
- 01 Humans own; AI accelerates
- 02 Participation boundaries documented
- 03 Hybrid pattern matched to context
- 04 Voice guidelines as prompt input
- 05 Voice drift sampling on long pieces
- 06 Fact verification gate as halt-condition
- 07 AI slop prevention through iteration
- 08 Disclosure tiered to audience trust
- 09 Team calibration: policy, sessions, voice library
- 10 Tool-agnostic methodology
- 11 Ethical floor: intellectual honesty
- 12 Final accountability: human signs off
What is in the skill
Thirteen sections covered in the body.
The SKILL.md spans the AI-collaboration discipline from the keystone humans-own-AI-accelerates framing through participation boundaries, hybrid patterns, voice preservation, slop prevention, disclosure, calibration, ethics, and the failure-mode catalog.
01
What this skill is for
Composition with all 6 other content-suite skills as the workflow layer. Tool-agnostic methodology. What is and is not in scope at the implementation level.
02
Humans own, AI accelerates
The keystone framing. Humans own editorial judgment, voice, fact accuracy, ethical decisions. AI accelerates research synthesis, draft generation, copy edit suggestions. The line: AI does work the human directs and verifies.
03
Where AI legitimately participates
10 stages where AI in the loop is fine: research synthesis, outline generation, first-draft generation, alternative phrasings, copy edit suggestions, summary, transcription, translation drafts, QC automation at scale, idea generation.
04
Where humans must own
Editorial judgment, voice, fact verification, ethical decisions, reader empathy, quote attribution, tone calibration on hard topics, narrative arc, final approval. 'Human in the loop' is necessary but insufficient; ownership requires the human to have made the actual decisions.
05
Hybrid workflow patterns
Five patterns with tradeoffs: AI-first draft + human-edit-heavy, human-first outline + AI-draft + human-rewrite, AI-as-research-assistant + human-writes, human-writes + AI-as-editor, AI-generates-at-scale + human-samples. Selection criteria by volume, voice, trust, time.
06
Voice ownership preservation
Voice guidelines as prompt input. Sample text as voice anchor. Mid-draft voice check. Final pass in human voice. Reject the bland. Voice is the dominant casualty of careless AI workflows; preservation requires active discipline.
07
The AI slop problem
What produces slop, what prevents it. Generic prompts, AI doing too much work, no editorial judgment, volume prioritized over quality, no iteration. Cross-references editorial-qa's audit patterns.
08
Disclosure and transparency
Four-tier framework: always disclose (journalism, expert bylines), default disclosure (thought leadership, regulated), generally not necessary (marketing, product content), clearly fine without disclosure (AI as research assistant only).
09
Team training and calibration
Documented AI policy, calibration sessions, voice library, quality benchmarks, tool standardization or intentional pluralism, forbidden patterns list, structured onboarding.
10
Ethics: training data, attribution, intellectual honesty
Six ethical floors: do not pass AI work as fully human-written, do not deny AI involvement when it happened, do not mirror copyrighted source material, attribute when borrowing, do not fabricate quotes or expertise, be honest about AI capabilities and limits.
11
Common failure modes
11+ patterns: content feels generic, hallucinations reach publish, output is inconsistent across writers, AI-assisted SEO gets penalized, nobody can tell what was AI vs human, readers complain about AI-flavored content, a disclosure that lost credibility, tools changed and the content shifted, 10x the content with the same audience growth, ethics breaches.
12
The framework: 12 considerations
Humans own + AI accelerates, participation boundaries, hybrid pattern selection, voice guidelines as prompt input, voice drift sampling, fact verification gate, slop prevention, disclosure tiering, team calibration, tool-agnostic methodology, ethical floor, final accountability.
13
Collaboration, not replacement
AI in content workflows is neither magic nor menace. Teams producing memorable AI-assisted content hold the line on human ownership, voice, fact accuracy, intellectual honesty. Teams producing slop treat AI as a content factory. The discipline is pro-craft, not anti-AI.
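Section 06's "voice guidelines as prompt input" pattern can be sketched in a few lines: assemble the drafting prompt from the documented voice guidelines, a sample passage as the voice anchor, and the brief, rather than asking the model to write cold. The template below is a hypothetical example of the shape, not the skill's prescribed prompt format.

```python
# Illustrative sketch: compose a drafting prompt from voice guidelines, a
# sample passage (the voice anchor), and the brief. The wording of the
# template is an assumption for the example.

def build_drafting_prompt(voice_guidelines: str, voice_anchor: str, brief: str) -> str:
    return "\n\n".join([
        "You are drafting on behalf of a human editor who owns the final text.",
        f"Voice guidelines (follow these exactly):\n{voice_guidelines}",
        f"Sample passage in the target voice (match its register):\n{voice_anchor}",
        f"Brief (the contract for this piece):\n{brief}",
        "Draft the piece. Flag any claim you cannot ground so a human can verify it.",
    ])

prompt = build_drafting_prompt(
    voice_guidelines="Plain verbs. Short sentences. No hype adjectives.",
    voice_anchor="We ship small. We measure. We cut what does not earn its place.",
    brief="800 words on why sampling QA beats full review at programmatic scale.",
)
print(prompt.splitlines()[0])
```

The design choice this encodes: the voice material enters the workflow as input to generation, instead of being something a human tries to restore during editing after the draft has already gone bland.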
Reference files
Nine references that go alongside the SKILL.md.
The references hold participation boundaries, hybrid patterns, voice preservation, slop prevention, disclosure tiering, team calibration, quality calibration with AI in loop, ethics and intellectual honesty, and the failure-mode catalog. Each reference closes with a methodology-vs-implementation section per the discipline established by the skill-creation-walkthrough.
references/ai-participation-boundaries.md
Where AI legitimately helps, where humans must own. The boundary list with detail per stage. The 'human-in-the-loop is not ownership' distinction. The verification-and-defense litmus test.
references/hybrid-workflow-patterns.md
Five workflow patterns with tradeoffs and time profiles. Selection criteria by voice sensitivity, volume, trust, time budget, team skill. Multi-pattern combinations within a program. The pattern-drift anti-pattern.
references/voice-ownership-preservation.md
Voice guidelines as prompt input. Sample text as voice anchor. Mid-draft voice check. Final pass in human voice. Reject-the-bland audit. The voice library practice. When voice cannot be preserved at the program's volume.
references/ai-slop-detection-and-avoidance.md
What produces slop, what prevents it. The reader-detection problem. The 6-question slop avoidance audit. The volume-vs-quality counter-intuition. Cross-references editorial-qa's audit patterns.
references/disclosure-and-transparency-patterns.md
Four-tier framework. Disclosure language patterns. Industry norms across journalism, content marketing, academic research, regulated industries, ecommerce. When disclosure helps and when it hurts. Program-level policy structure. Hard-case edge handling.
references/team-training-and-calibration.md
Documented AI policy structure. Calibration session methodology. Voice library practice. Quality benchmarks. Tool standardization vs intentional pluralism. Forbidden patterns discipline. Onboarding components. Calibration breakdown signals.
references/quality-calibration-with-ai-in-loop.md
How editorial standards shift when AI is in the workflow. Same standards, different failure modes per QA dimension. The 'QA standards must be tighter' misreading correction. AI-specific calibration sessions. Program-level review. Kill-criteria discipline.
references/ethics-and-intellectual-honesty.md
Training data realities. Six ethical floors: no false byline implication, no false denials, no copyrighted-source mirroring, attribute when borrowing, no fabricated quotes or expertise, honesty about capabilities and limits. The intellectual honesty frame as the supervening principle. Edge-case handling.
references/common-collaboration-failures.md
12 failure patterns with diagnoses and fixes. Cross-references to other reference files. The pattern across most failures: magic-content-factory thinking and unarticulated workflows.
Pairs with these platforms
Three platforms with AI-assisted content workflows.
The skill is tool-agnostic. The methodology applies regardless of which AI tool is in the loop. Three platforms in the catalog ship AI-content workflows as a primary surface: Frase (AI writing with brand voice calibration built in), AirOps (managed workflows that compose AI generation with human review gates), Notion (AI features integrated with the team's content ops surface where briefs and pieces live).
SEO and content teams running research, writing, optimization, and AI search monitoring
Frase
Frase's read-write MCP for the full SEO + GEO content lifecycle
Open the page
Content teams that prefer managed workflow builders to build-it-yourself pipelines
AirOps
AirOps's official MCP and Claude Connector for AEO data and Brand Kits
Open the page
Notion-centric teams
Notion
Briefs as a queryable database
Open the page
Bridges to every other content-suite skill
Six sister skills compose with this workflow layer.
This skill is cross-cutting. Where the other six content skills cover technique at distinct scopes, this skill covers the workflow that applies across all of them. Briefs can be AI-assisted; hubs can be AI-assisted; programmatic SEO is almost always AI-involved; editorial QA includes AI-content audit by necessity.
Program scope
content-strategy: Decides what to produce. Strategy decisions can be AI-assisted; the program-level judgment stays human.
Hub scope
pillar-content-architecture: Designs the topical hub structure. Hub architecture can be AI-suggested; the architectural commitment stays human.
Per-piece scope
content-brief-authoring: Briefs each piece. Briefs can be AI-drafted from research; the contract decisions stay human. The brief is the input every AI generation respects.
Execution scope
content-and-copy: Writes each piece. Drafts can be AI-produced; voice and editorial judgment stay human. This skill is the workflow layer for content-and-copy's execution.
Scaled scope
programmatic-seo: Generates pages from data at scale. AI generation is the dominant production model; sampling QA is the human gate. This skill is the workflow discipline that keeps programmatic generation honest.
Gate scope
editorial-qa: Verifies before publish. AI-content audit is now a load-bearing gate; the audit's judgment stays human. This skill is the workflow; editorial-qa is the gate that catches what the workflow missed.
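The "sampling QA is the human gate" idea from the programmatic-seo bridge can be sketched as a seeded random sample: pull a fixed fraction of each generated batch for human review, reproducibly. The 5% rate and page IDs below are assumptions for illustration, not numbers the skill prescribes.

```python
import random

# Illustrative sketch: seeded random sampling of generated pages for the
# human QA gate. The 5% rate and page IDs are hypothetical.

def sample_for_review(page_ids: list[str], rate: float = 0.05, seed: int = 0) -> list[str]:
    """Pick roughly rate * n pages (always at least one), reproducibly."""
    k = max(1, round(len(page_ids) * rate))
    return random.Random(seed).sample(page_ids, k)

pages = [f"page-{i:04d}" for i in range(1000)]
review_batch = sample_for_review(pages)
print(len(review_batch))  # -> 50
```

The seed matters: a reproducible sample lets the team re-pull the same batch when a reviewer's findings need a second look, which keeps the gate auditable rather than anecdotal.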
Direction 6 complete
The fifth and final skill in the content suite.
AI content collaboration is the fifth and final skill in the Direction 6b content suite. Together with content-brief-authoring (per-piece briefs), pillar-content-architecture (hub design), programmatic-seo (scaled production), and editorial-qa (pre-publish gate), this completes the five-skill suite. Combined with the prior three content-category skills (content-strategy, content-and-copy, landing-page-copy), the catalog now carries a full content workflow vocabulary across eight skills.
The Direction 6 surface also includes the five content and SEO platform integrations that landed in Direction 6a: Webflow and Contentful (publishing destinations), Frase (read-write SEO + GEO content lifecycle), Profound (AI search visibility measurement), and AirOps (managed workflow alternative). The skills compose with the platforms; the methodology compounds across both.
The next directions will expand the catalog into additional verticals. The content suite is settled; future content-related work will be Tier 2 additions, refresh cycles, or systemic re-orders that wait until both content and product suites complete.
Open source under MIT
Read the SKILL.md on GitHub.
The skill source lives in the rampstackco/claude-skills repository alongside dozens of other skills covering the full lifecycle of brand and product work. MIT licensed.
Frequently asked questions.
- What does 'humans own, AI accelerates' actually mean operationally?
- Humans own editorial judgment, voice, distinctive POV, fact accuracy, ethical decisions, what to publish vs what to kill, brand voice, narrative arc, tone calibration, reader empathy, and claim verification. AI accelerates research synthesis, draft generation against a brief, copy edit suggestions, alternative phrasings, summary, transcription, and quality-control automation at scale. The line: AI does work the human directs and verifies; AI does NOT make decisions about what publishes, who is quoted, what is true, or what voice the brand uses. The litmus test: if your AI-assisted piece publishes without a human being able to defend every claim, every position, and every word, you have crossed the line.
- Why is this skill tool-agnostic instead of recommending specific AI tools?
- The methodology applies regardless of which AI tool is in the loop. The workflow shape, the participation boundaries, the voice-ownership question, and the ethical frame stay constant; which specific tool is used varies by team, budget, and which model best fits which task. Tool-category mentions earn methodology relevance ('models with strong factual grounding for research drafts'); specific tool endorsements would tie the skill to a moment in time and create implicit advertorial pressure. Tool selection is implementation work that varies by team; the methodology is what compounds across tool changes.
- What is AI slop and how do you prevent it?
- AI slop is the term for AI-generated content that is technically functional but reads as generic, derivative, and signal-less. Slop is produced when AI does too much of the work, prompts are generic, no editorial judgment is in the loop, volume is prioritized over quality, and there is no iteration. Slop is prevented by strong briefs (per content-brief-authoring), voice guidelines as prompt context, heavy human editing pass (50-60% of production time), iteration (AI draft to human rewrite to AI suggestions to human final), and editorial judgment at every gate. Readers can sense slop even when they cannot articulate why; slop loses reader trust over time even when individual pieces are not algorithm-penalized.
- When should AI usage be disclosed to readers?
- Calibrate to context. The four-tier framework: always disclose (journalism, attributed expert opinion, regulated industries with explicit requirements); default disclosure with context (thought leadership where the byline carries trust value, content influencing purchase decisions, trust-sensitive audiences); generally not necessary (marketing copy, descriptive product content, programmatic data pages, copy edit assistance only); clearly fine without disclosure (AI as research assistant only, transcription only, spelling and grammar suggestions). The principle: disclose when the reader's understanding of the content's origin would change their trust in it.
- How is this skill different from editorial-qa's AI-content audit?
- editorial-qa is the gate that catches AI tells, hallucinations, and voice drift before publish (detection). This skill is the workflow that prevents these failures from being produced in the first place (prevention). The two skills compose: ai-content-collaboration teaches the workflow patterns that produce on-voice AI-assisted content, while editorial-qa teaches the QA discipline that catches what the workflow missed. Programs running both produce work that earns reader trust; programs running only the gate end up reactive; programs running only the workflow without the gate ship slop that should have been caught.
- Is the catalog itself produced with AI assistance?
- Yes. Pretending otherwise would be dishonest, and intellectual honesty is the supervening frame this skill takes. The catalog is produced with AI assistance using the methodology this skill teaches: humans own the editorial judgment, voice, fact accuracy, and ethical decisions; AI accelerates research synthesis, drafting against detailed briefs, copy edit suggestions, and alternative phrasings. The methodology applies to itself; if the catalog could not hold the discipline, the discipline would not be teachable.