Integration · LaunchDarkly

Three MCP servers across feature management, AI Configs, and observability.

LaunchDarkly is the largest pure-play feature flag platform. Three hosted MCP servers cover feature management, AI Configs (LLM rollout settings), and observability.

Use the hosted MCPs via OAuth, or self-host locally with an API key. The Federal environment is supported via a --server-url flag, so government and regulated customers get the same agent surface as commercial. Customers include Netflix, Atlassian, Square, IBM.

Official MCP. Three product-area servers can be enabled independently. Companion agent-skills repository at skills.sh.

What you can do via MCP

Example prompts and what the agent does with each.

  • Create a feature flag for the new-checkout feature, default off.

    Creates the flag in the LaunchDarkly project, sets the default off, returns the SDK snippet for the chosen language.

  • Toggle the pricing-banner flag on in production for 50% of users.

    Updates the production targeting rules to a 50% rollout and confirms the change took effect.

  • List all stale flags across all environments and create a cleanup plan.

    Pulls flag list, scores by last evaluation, surfaces a prioritized cleanup plan grouped by owner team.

  • Update targeting rules for the experimental-feature flag to include only beta-testers segment.

    Edits the targeting rule list with the new segment, surfaces the diff, and applies it to the right environment.

  • Show me the status of the new-onboarding flag across dev, staging, and production.

    Returns a per-environment status matrix with rollout percentages and the last change author.
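Each prompt above reduces to an MCP tool call. A minimal sketch of the payload the 50% rollout prompt might build; the tool name `update_feature_flag` and the argument shape are assumptions, not the server's actual schema, though the weight units (thousandths of a percent, summing to 100000) do match how LaunchDarkly expresses rollouts:

```python
# Sketch: translating "toggle pricing-banner on in production for 50% of
# users" into a hypothetical MCP tool-call payload.

def build_rollout_update(flag_key: str, env_key: str, percent: int) -> dict:
    """Build an illustrative update-targeting payload for a percentage rollout."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be 0-100")
    # LaunchDarkly rollout weights are in thousandths of a percent.
    weight_on = percent * 1000
    return {
        "tool": "update_feature_flag",  # illustrative tool name
        "arguments": {
            "project_key": "default",
            "flag_key": flag_key,
            "environment_key": env_key,
            "fallthrough": {
                "rollout": {
                    "variations": [
                        {"variation": 0, "weight": weight_on},           # on
                        {"variation": 1, "weight": 100000 - weight_on},  # off
                    ]
                }
            },
        },
    }

call = build_rollout_update("pricing-banner", "production", 50)
```

The agent fills in the project, flag, and environment from conversation context; the MCP server applies the change and reports back.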

LaunchDarkly · MCP

Local server invocation against the feature management MCP. Hosted alternatives via OAuth are equivalent and skip the env-var step.

LaunchDarkly MCP
docs
# LaunchDarkly MCP, local
# Run: LD_API_KEY=api-abc \
#      npx @launchdarkly/mcp-server

mcp.launchdarkly.create_feature_flag({
  project_key: "default",
  key: "new-checkout",
  name: "New checkout flow",
  variations: [
    { value: true, name: "on" },
    { value: false, name: "off" }
  ],
  defaults: { onVariation: 0, offVariation: 1 },
  temporary: false
})

# Hosted alternative: OAuth via skills.sh agent-skills
# Federal: pass --server-url <gov-endpoint>

One sample call showing how the agent talks to LaunchDarkly. The MCP exposes the platform's primitives as tools; the agent translates the prompt into the right call.

MCP integration

LaunchDarkly MCP server.

Server
Hosted (recommended) at LaunchDarkly endpoints, or local via @launchdarkly/mcp-server
Auth
OAuth (hosted) or LD_API_KEY (local)
Hosting
Hosted (recommended) or local
  • Three product-area servers: feature management, AI Configs, observability
  • AI Configs MCP for managing LLM rollout settings (prompts, model versions, fallbacks)
  • Federal environment supported via --server-url flag
  • Companion agent-skills repo at skills.sh for prebuilt prompts
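For the local option, an MCP client config might look like the following sketch. The `mcpServers` key shape is the convention several MCP clients use, but check your client's docs; the API key is the placeholder from the sample above:

```json
{
  "mcpServers": {
    "launchdarkly": {
      "command": "npx",
      "args": ["@launchdarkly/mcp-server"],
      "env": { "LD_API_KEY": "api-abc" }
    }
  }
}
```

For the Federal environment, append `--server-url <gov-endpoint>` to `args`.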

LaunchDarkly MCP docs

Visual demonstration

What this looks like in practice.

LaunchDarkly/Feature flag
Release

new-checkout

New checkout flow with single-step billing

Environment

production

On · 50% rollout

Targeting rules

  • segment IS beta-testers ENABLE 100%
  • segment IS internal-users ENABLE 100%
  • ELSE rollout 50%
LaunchDarkly flag state managed via the feature management MCP. Targeting rules and rollout percentage editable through the agent.
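The targeting rules above evaluate in order: segment rules first, then the percentage fallthrough. A simplified Python sketch of that ordering; the segment membership sets and the bucketing hash are stand-ins for LaunchDarkly's real evaluation, shown only to make the 50% rollout deterministic per user:

```python
import hashlib

# Illustrative segment membership (assumed user keys, not real data).
BETA_TESTERS = {"dana", "lee"}
INTERNAL_USERS = {"ops-bot"}

def bucket(user_key: str) -> float:
    """Map a user key to a stable value in [0, 1)."""
    digest = hashlib.sha1(user_key.encode()).hexdigest()
    return int(digest[:8], 16) / 0x100000000

def evaluate(user_key: str) -> bool:
    """Walk the rules in order, falling through to the percentage rollout."""
    if user_key in BETA_TESTERS:    # segment IS beta-testers → ENABLE 100%
        return True
    if user_key in INTERNAL_USERS:  # segment IS internal-users → ENABLE 100%
        return True
    return bucket(user_key) < 0.50  # ELSE rollout 50%
```

Because the bucket is a hash of the user key, each user lands on the same side of the 50% split on every evaluation.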
LaunchDarkly · Stale flag audit
2 stale of 4
  • old-search-v1

    Stale

    Last evaluated 94 days ago · @search-team

    Pre-redesign search; new search is in production

    Remove
  • checkout-experiment-q3

    Stale

    Last evaluated 61 days ago · @growth

    Experiment ended, ship variant on; flag still gating

    Remove
  • new-onboarding

    Active

    Last evaluated 3 minutes ago · @onboarding

    Active rollout at 50%

    Keep
  • ai-recommendations

    Active

    Last evaluated 1 minute ago · @ai-platform

    AI Config: model A vs model B routing

    Keep
Cross-environment stale flag audit run by the agent. The AI Config flag stays; the post-experiment flag is queued for removal.
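The scoring behind an audit like the one above can be sketched as a last-evaluation age check. The 60-day threshold is an assumption for illustration, not a LaunchDarkly default:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=60)  # assumed threshold

def audit(flags: dict, now: datetime) -> dict:
    """Return {flag_key: "Stale" | "Active"} by last-evaluation age."""
    return {
        key: "Stale" if now - last_eval > STALE_AFTER else "Active"
        for key, last_eval in flags.items()
    }

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
report = audit(
    {
        "old-search-v1": now - timedelta(days=94),
        "checkout-experiment-q3": now - timedelta(days=61),
        "new-onboarding": now - timedelta(minutes=3),
    },
    now,
)
```

A real audit would also weigh ownership and flag purpose (the AI Config flag above stays despite heavy traffic because it is doing live routing, not gating a finished launch).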

CLI alternative

REST API plus MCP.

LaunchDarkly has a comprehensive REST API for direct programmatic access. The MCP is the AI-native interface; the REST API covers CI integration, scripted reporting, and any non-agent workflow. Most teams use both: MCP for agent-driven flag work, REST for build-time and ops automation.
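A sketch of the REST side: a semantic-patch request that turns a flag on in production. The endpoint shape and the `domain-model=launchdarkly.semanticpatch` content type follow the LaunchDarkly REST API; the request is constructed but deliberately not sent, and the key and flag names are the placeholders from the sample above. Verify against the current API docs before relying on this:

```python
import json
import urllib.request

API_KEY = "api-abc"  # placeholder from the local-server example
url = "https://app.launchdarkly.com/api/v2/flags/default/new-checkout"
body = {
    "environmentKey": "production",
    "instructions": [{"kind": "turnFlagOn"}],
}

req = urllib.request.Request(
    url,
    data=json.dumps(body).encode(),
    method="PATCH",
    headers={
        "Authorization": API_KEY,
        "Content-Type": "application/json; domain-model=launchdarkly.semanticpatch",
    },
)
# urllib.request.urlopen(req)  # uncomment to send
```

The same instruction-based patch format covers rollout changes and targeting edits, which is what makes it a good fit for scripted CI workflows alongside the MCP.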

Pairs with these skills

The PM-experimentation skill suite.

This integration pairs with the forthcoming feature-flagging and experiment-design skills. The skill landing pages and SKILL.md sources land in subsequent dispatches; cross-links will be added when the skill pages ship.