Integration · Profound

AI search visibility, queryable from the agent.

Profound's first-party MCP server (October 2025) brings AI search visibility data and Agent Analytics into Claude Desktop, Cursor, Cline, and any MCP client. Bot logs, citation reports, and visibility trends across the answer engines that matter.

Profound is positioned in the AI search visibility category. The question it answers: is the brand getting cited by ChatGPT, Perplexity, Gemini, and Claude when users ask questions in this domain. The MCP exposes the metrics the analytics dashboard ships: brand mention rate, citation share by engine, share of voice trends, top-cited domains, and AI crawler behavior at server-log level. SOC 2 Type II compliant; the MCP is positioned for enterprise procurement, not just point use.

Official MCP · Profound MCP, API key auth, October 2025 launch.

What you can do via MCP

Example prompts the agent runs.

  • What's our brand mention rate across ChatGPT, Perplexity, Gemini, and Claude over the last 30 days? Compare to the prior 30.

    Calls the brand-mention-rate tool, returns the period-over-period delta segmented by AI engine. The number that matters: did the brand show up more or less often when users asked the agent something in our domain.

  • Show me the top-cited domains in our category for the past quarter; flag the ones we're not cited alongside.

    Returns a ranked list of domains the answer engines cite when users ask category questions. The flag: any top-10 domain the brand is not cited alongside marks a strategic gap.

  • Pull the AI crawler logs for the marketing site; tell me which user agents have been hitting the pricing page and at what frequency.

    Calls the Agent Analytics tool, returns server-log entries grouped by AI crawler user-agent (GPTBot, PerplexityBot, ClaudeBot, etc.) with hit counts and time series. The crawl pattern is the leading indicator of citation activity.

  • Which of our pages are getting cited the most across the AI engines, and which engines are doing the citing?

    Returns a citation report grouped by page and engine. The cross-tab tells the content team which pages the agents are surfacing and which engines are doing the surfacing. Different shape from traditional SEO ranking reports.

  • Has our share of voice in the 'experimentation platform' topic shifted in the last 60 days? Plot the trend.

    Returns the share-of-voice trend series for the topic across the answer engines. The shape of the trend tells the team whether its content investment is moving the visibility needle in this category.
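The metrics behind these prompts reduce to simple ratios over citation records. A minimal sketch of the arithmetic, with illustrative data shapes (the field names are assumptions, not Profound's schema):

```typescript
// Illustrative shapes; invented for the sketch, not Profound's actual schema.
type Response = { cited: string[] };            // domains an answer cited
type Citation = { engine: string; domain: string };

// Brand mention rate: fraction of answer-engine responses citing the brand domain.
function mentionRate(responses: Response[], brandDomain: string): number {
  if (responses.length === 0) return 0;
  const hits = responses.filter((r) => r.cited.includes(brandDomain)).length;
  return hits / responses.length;
}

// Share of voice: brand citations as a share of all citations in the topic.
function shareOfVoice(citations: Citation[], brandDomain: string): number {
  if (citations.length === 0) return 0;
  return citations.filter((c) => c.domain === brandDomain).length / citations.length;
}
```

Period-over-period deltas, as in the first prompt, are just these ratios computed over two windows and subtracted.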

Profound · MCP

Citation report query via the Profound MCP. The agent specifies the topic, the engines, and the time window; Profound returns the citation breakdown.

Profound MCP docs
# Profound MCP, called via the agent runtime
mcp.profound.citation_report({
  topic: "experimentation platforms",
  engines: ["chatgpt", "perplexity", "gemini", "claude"],
  date_range: "last_30_days",
  group_by: ["engine", "domain"],
  include_share_of_voice: true
})

# Returns: brand_mention_rate, citation_share_by_engine,
#          top_cited_domains, share_of_voice_trend.
One command sample showing how the agent talks to Profound. The MCP exposes the platform's primitives as tools; the agent translates the prompt into the right call.
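On the agent side, post-processing a returned report is a small fold. A sketch assuming a payload shaped like the return comment above; the type and field names are invented for illustration:

```typescript
// Hypothetical report shape mirroring the sample's return comment;
// not Profound's actual response type.
type Report = { citation_share_by_engine: Record<string, number> };

// Rank engines by citation share, descending, e.g. to spot where the
// brand is weakest before the next content push.
function rankEngines(report: Report): [string, number][] {
  return Object.entries(report.citation_share_by_engine).sort((a, b) => b[1] - a[1]);
}
```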

MCP integration

Profound MCP server.

Server
Profound MCP endpoint (see Profound docs for the latest URL)
Auth
API key (per-account, scoped per workspace)
Hosting
Profound hosted
Scoping note
Profound is SOC 2 Type II compliant. Workspace scoping respects the API key's permissions; admin keys can query across all monitored topics, analyst keys are scoped to assigned topics. Enterprise plans include SSO and audit log access.
  • Brand mention rate across ChatGPT, Perplexity, Gemini, Claude, and other answer engines
  • Citation share and share-of-voice trends, queryable per topic per engine
  • Agent Analytics: AI crawler behavior at server-log level (GPTBot, PerplexityBot, ClaudeBot, and others)
  • Citation reports grouped by page and engine for content-team prioritization
  • October 2025 MCP launch; first-party from Profound
  • SOC 2 Type II compliance positioned for enterprise procurement
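What the Agent Analytics crawler view aggregates to can be sketched as a user-agent tally over raw access-log lines. A toy version, not Profound's implementation:

```typescript
// AI crawler user-agents this page names; Profound tracks more.
const AI_CRAWLERS = ["GPTBot", "PerplexityBot", "ClaudeBot"];

// Count hits per AI crawler across raw access-log lines.
function crawlerHits(logLines: string[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const line of logLines) {
    for (const bot of AI_CRAWLERS) {
      if (line.includes(bot)) counts[bot] = (counts[bot] ?? 0) + 1;
    }
  }
  return counts;
}
```

Filtering the lines to a single path (e.g. the pricing page) before tallying gives the per-page frequency the third prompt asks for.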

Profound docs

Visual demonstration

What this looks like in practice.

Profound · AI search visibility · Last 30 days

Brand mention rate

23.4%

+4.2% vs prior 30d

Share of voice trend

(trend chart: Apr–Apr window, +12)

Citation share by engine

  • ChatGPT · 42.1% (+3.2pp)
  • Perplexity · 28.7% (+1.8pp)
  • Gemini · 17.4% (−0.6pp)
  • Claude · 11.8% (+0.4pp)

Top-cited domains

  • rampstack.co (brand) · 184
  • competitor-a.com · 156
  • competitor-b.io · 124
  • industry-blog.com · 98
  • competitor-c.com · 84
  • research-paper.org · 62
  • competitor-d.com · 48
The AI search visibility view the Profound MCP exposes. Brand mention rate across the engines, citation share per engine with deltas, share-of-voice trend, and the top-cited domains in the topic. The agent reads this graph to prioritize the next content investment.

CLI alternative

Profound TypeScript / Python SDK plus REST API.

Profound ships a TypeScript SDK, a Python SDK, and a REST API alongside the MCP. The MCP is the primary surface for agent-driven analysis inside Claude Desktop, Cursor, and Cline. The SDKs and REST API are the right shape for analytics dashboards the team builds in-house, scheduled exports to a warehouse, and BI integrations the agent runtime is not built for.
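A scheduled-export job of the kind the SDKs fit mostly flattens report objects into warehouse rows. A hedged sketch; the report and row shapes here are assumptions, not Profound's API:

```typescript
// Invented shapes for illustration: per-engine citation shares flattened
// into one row per (date, engine) for a warehouse load.
type EngineShare = Record<string, number>;
type Row = { date: string; engine: string; citation_share: number };

function toRows(date: string, shares: EngineShare): Row[] {
  return Object.entries(shares).map(([engine, citation_share]) => ({
    date,
    engine,
    citation_share,
  }));
}
```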

Pairs with these skills

The content stack skill suite.

This integration pairs with the forthcoming content-strategy-for-ai-search and content-production skills. The Profound microsite assumes the content already exists; the integration shows the visibility-measurement surface that tells the team whether the content is reaching the answer engines. Skill pages and SKILL.md sources land in subsequent dispatches; cross-links will be added when the skill pages ship.

Profound is the measurement layer. The platforms that produce content for AI search are Frase (read-write content lifecycle with GEO scoring) and AirOps (managed workflow with AEO data). Profound closes the loop: did the published content show up in the answer engines, and for which queries.