Weights & Biases MCP Server

Model Context Protocol server for querying W&B data

✅ Server running at https://mcp.withwandb.com/mcp

📊 W&B Models

Query experiment tracking data, runs, sweeps, and performance metrics using GraphQL.

🔍 W&B Weave

Access LLM traces, evaluations, and datasets with powerful filtering and pagination.

🤖 Support Agent

Get help from wandbot, the W&B RAG-powered support agent for product questions.

📝 Reports

Create and save W&B Reports with markdown text and HTML visualizations.
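
Any MCP-compatible client can reach the hosted endpoint directly. As a rough illustration, the sketch below uses the official MCP Python SDK (an assumed dependency, installed with pip install mcp) to connect over streamable HTTP with a W&B API key and list the tools that back the features above; the URL and headers mirror the setup snippets further down.

# Minimal sketch: connect to the hosted W&B MCP server and list its tools.
# Assumes the official MCP Python SDK (pip install mcp) and WANDB_API_KEY
# set in your environment.
import asyncio
import os

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

MCP_URL = "https://mcp.withwandb.com/mcp"

async def main() -> None:
    headers = {"Authorization": f"Bearer {os.environ['WANDB_API_KEY']}"}
    async with streamablehttp_client(MCP_URL, headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())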

🚀 Quick Setup

📝 Cursor

  • Add the following entry under "mcpServers" in ~/.cursor/mcp.json:
"wandb": {
  "transport": "http", 
  "url": "https://mcp.withwandb.com/mcp",
  "headers": {
    "Authorization": "Bearer YOUR_WANDB_API_KEY",
    "Accept": "application/json, text/event-stream"
  }
}

🖥️ Claude Desktop

  • Add to claude_desktop_config.json:
{
  "mcpServers": {
    "wandb-stdio": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/wandb/wandb-mcp-server.git", "wandb-mcp-server"],
      "env": {
        "WANDB_API_KEY": "YOUR_WANDB_API_KEY"
      }
    }
  }
}
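
The same uvx command can be driven outside Claude Desktop to sanity-check the local stdio server. Below is a minimal sketch using the MCP Python SDK's stdio client, under the same assumptions as the HTTP example above (pip install mcp, WANDB_API_KEY exported):

# Minimal sketch: launch the stdio server via uvx and confirm it initializes.
# uvx must be on PATH; WANDB_API_KEY must be exported in your shell.
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(
    command="uvx",
    args=["--from", "git+https://github.com/wandb/wandb-mcp-server.git", "wandb-mcp-server"],
    env={**os.environ},  # forwards WANDB_API_KEY along with PATH/HOME for uvx
)

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            info = await session.initialize()
            print(f"Connected to {info.serverInfo.name} {info.serverInfo.version}")

asyncio.run(main())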

💬 Le Chat

  • MCP Server URL: https://mcp.withwandb.com/mcp
  • Auth: Bearer token
  • Token: Your W&B API key

🤖 Gemini CLI

  • Install extension:
gemini extensions install https://github.com/wandb/wandb-mcp-server
  • Then set your W&B API key (choose one):
  • export WANDB_API_KEY=your-key
  • uvx wandb login (opens browser)

Get your API key: wandb.ai/authorize

💬 Example Queries

"How many openai.chat traces are in my wandb-team/my-project weave project?"
"Show me the latest 10 runs from my experiment and create a report with the results."
"What's the best performing model in my latest sweep? Plot the results."

🚀 Want to Run Your Own Instance?

Install locally or deploy your own server from the open-source repository:

View on GitHub: https://github.com/wandb/wandb-mcp-server