---
title: Weights & Biases MCP Server
emoji: πŸͺ„πŸ
colorFrom: yellow
colorTo: gray
sdk: docker
app_file: app.py
pinned: false
---


# W&B MCP Server

Query and analyze your Weights & Biases data using natural language through the Model Context Protocol.

Works with: Cursor · Claude · OpenAI · Gemini · LeChat · VSCode

## What Can This Server Do?

### Example Use Cases

**Analyze Experiments**

> Show me the top 5 runs by eval/accuracy in wandb-smle/hiring-agent-demo-public

**Debug Traces**

> How did the latency of my hiring agent's predict traces evolve over the last few months?

**Create Reports**

> Generate a W&B report comparing the decisions made by the hiring agent last month

**Get Help**

> How do I create a leaderboard in Weave? Ask the support bot.
### Available Tools

| Tool | Description | Example Query |
|------|-------------|---------------|
| `query_wandb_tool` | Query W&B runs, metrics, and experiments | "Show me runs with loss < 0.1" |
| `query_weave_traces_tool` | Analyze LLM traces and evaluations | "What's the average latency?" |
| `count_weave_traces_tool` | Count traces and get storage metrics | "How many traces failed?" |
| `create_wandb_report_tool` | Create W&B reports programmatically | "Create a performance report" |
| `query_wandb_entity_projects` | List projects for an entity | "What projects exist?" |
| `query_wandb_support_bot` | Get help from W&B documentation | "How do I use sweeps?" |
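
If you want to verify this tool surface programmatically rather than through a chat client, here is a minimal sketch using the official MCP Python SDK (`mcp` package) to connect to the hosted server and list the available tools. The Bearer-token header mirrors the VSCode configuration further down; treat the exact header and SDK calls as assumptions to adapt rather than part of this README's guarantees.

```python
# Minimal sketch (assumptions: official `mcp` Python SDK, W&B API key accepted as a Bearer token).
import asyncio
import os

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def list_wandb_tools():
    headers = {"Authorization": f"Bearer {os.environ['WANDB_API_KEY']}"}
    async with streamablehttp_client("https://mcp.withwandb.com/mcp", headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(list_wandb_tools())
```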
### Usage Tips

- **Provide your W&B entity and project name.** LLMs are not mind readers; always tell the model which W&B entity and project you are asking about.
- **Avoid asking overly broad questions.** A question such as "what is my best evaluation?" is probably too broad, and you'll get to an answer faster by refining it into something more specific, such as "which eval had the highest f1 score?"
- **Ensure all data was retrieved.** When asking broad questions such as "what are my best performing runs/evaluations?", ask the LLM to check that it retrieved all the available runs. The MCP tools are designed to fetch the correct amount of data, but LLMs sometimes only retrieve the latest runs or the last N runs. (See the example prompt after these tips.)
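
As a concrete illustration of these tips, the hypothetical prompt below names the entity and project explicitly (reusing the example project from earlier in this README) and asks the model to confirm it fetched every run:

```python
# Hypothetical, well-scoped prompt: it names the entity/project and asks for a
# completeness check. Pass it as `input` to any of the client examples below.
prompt = (
    "In wandb-smle/hiring-agent-demo-public, which eval had the highest f1 score? "
    "Please confirm you retrieved all available runs before answering."
)
```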


## Quick Start

We recommend using our hosted server at https://mcp.withwandb.com - no installation required!

πŸ”‘ Get your API key from [wandb.ai/authorize](https://wandb.ai/authorize).

### Cursor

**One-click installation**

1. Open Cursor Settings (⌘+, or Ctrl+,)
2. Navigate to Features β†’ Model Context Protocol
3. Click "Install from Registry" or "Add MCP Server"
4. Search for "wandb" or enter:
   - Name: `wandb`
   - URL: `https://mcp.withwandb.com/mcp`
   - API Key: Your W&B API key

For local installation, see Option 2 below.

### Claude Desktop

**Configuration setup**

Add to your Claude config file:

```bash
# macOS
open ~/Library/Application\ Support/Claude/claude_desktop_config.json

# Windows
notepad %APPDATA%\Claude\claude_desktop_config.json
```

```json
{
  "mcpServers": {
    "wandb": {
      "url": "https://mcp.withwandb.com/mcp",
      "apiKey": "YOUR_WANDB_API_KEY"
    }
  }
}
```

Restart Claude Desktop to activate.

For local installation, see Option 2 below.

### OpenAI Responses API

**Python client setup**

```python
from openai import OpenAI
import os

client = OpenAI()

resp = client.responses.create(
    model="gpt-4o",
    tools=[{
        "type": "mcp",
        "server_label": "wandb",
        "server_url": "https://mcp.withwandb.com/mcp",
        "authorization": os.getenv("WANDB_API_KEY"),
    }],
    input="How many traces are in my project?",
)
print(resp.output_text)
```

Note: OpenAI connects to MCP servers from its own infrastructure, so localhost URLs won't work. For local servers, see Option 2 with ngrok.

### Gemini CLI

**One-command installation**

```bash
# Set your API key
export WANDB_API_KEY="your-api-key-here"

# Install the extension
gemini extensions install https://github.com/wandb/wandb-mcp-server
```

The extension uses the configuration from `gemini-extension.json`, which points to the hosted server.

For local installation, see Option 2 below.

### Mistral LeChat

**Configuration setup**

In LeChat settings, add an MCP server:

- URL: `https://mcp.withwandb.com/mcp`
- API Key: Your W&B API key

For local installation, see Option 2 below.

### VSCode

**Settings configuration**

```bash
# Open settings
code ~/.config/Code/User/settings.json
```

```json
{
  "mcp.servers": {
    "wandb": {
      "url": "https://mcp.withwandb.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_WANDB_API_KEY"
      }
    }
  }
}
```

For local installation, see Option 2 below.


## General Installation Guide

### Option 1: Hosted Server (Recommended)

The hosted server provides a zero-configuration experience with enterprise-grade reliability. This server is maintained by the W&B team, automatically updated with new features, and scales to handle any workload. Perfect for teams and production use cases where you want to focus on your ML work rather than infrastructure.

#### Using the Public Server

The easiest way is using our hosted server at https://mcp.withwandb.com.

Benefits:

- βœ… Zero installation
- βœ… Always up-to-date
- βœ… Automatic scaling
- βœ… No maintenance

Simply use the configurations shown in Quick Start.

### Option 2: Local Development (STDIO)

Run the MCP server locally for development, testing, or when you need full control over your data. The local server runs directly on your machine with STDIO transport for desktop clients or HTTP transport for web-based clients. Ideal for developers who want to customize the server or work in air-gapped environments.

#### Manual Configuration

Add to your MCP client config:

```json
{
  "mcpServers": {
    "wandb": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/wandb/wandb-mcp-server",
        "wandb_mcp_server"
      ],
      "env": {
        "WANDB_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
```
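
To exercise the same STDIO setup from a script rather than a desktop client, the sketch below uses the official MCP Python SDK (an assumption, not a dependency of this repo) to launch the server with the same `uvx` command and list its tools:

```python
# Minimal sketch (assumption: the `mcp` Python SDK is installed) that starts the
# server over STDIO with the same command/args/env as the JSON config above.
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="uvx",
    args=["--from", "git+https://github.com/wandb/wandb-mcp-server", "wandb_mcp_server"],
    env={"WANDB_API_KEY": os.environ["WANDB_API_KEY"]},
)

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```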

#### Prerequisites

- Python 3.10+
- uv (recommended) or pip

```bash
# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh
```

#### Installation

```bash
# Using uv (recommended)
uv pip install wandb-mcp-server

# Or from GitHub
pip install git+https://github.com/wandb/wandb-mcp-server
```

#### Client-Specific Installation Commands

**Cursor (project-only)**

Enable the server for a specific project:

```bash
uvx --from git+https://github.com/wandb/wandb-mcp-server add_to_client --config_path .cursor/mcp.json && uvx wandb login
```

**Cursor (global)**

Enable the server for all Cursor projects:

```bash
uvx --from git+https://github.com/wandb/wandb-mcp-server add_to_client --config_path ~/.cursor/mcp.json && uvx wandb login
```

**Windsurf**

```bash
uvx --from git+https://github.com/wandb/wandb-mcp-server add_to_client --config_path ~/.codeium/windsurf/mcp_config.json && uvx wandb login
```

**Claude Code**

```bash
claude mcp add wandb -- uvx --from git+https://github.com/wandb/wandb-mcp-server wandb_mcp_server && uvx wandb login
```

With API key:

```bash
claude mcp add wandb -e WANDB_API_KEY=your-api-key -- uvx --from git+https://github.com/wandb/wandb-mcp-server wandb_mcp_server
```

**Claude Desktop**

```bash
uvx --from git+https://github.com/wandb/wandb-mcp-server add_to_client --config_path "~/Library/Application Support/Claude/claude_desktop_config.json" && uvx wandb login
```

#### Testing with ngrok (for server-side clients)

For clients like OpenAI and LeChat that require public URLs:

```bash
# 1. Start the HTTP server
uvx wandb-mcp-server --transport http --port 8080

# 2. Expose it with ngrok
ngrok http 8080

# 3. Use the ngrok URL in your client configuration
```
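
Before exposing the server with ngrok, you can smoke-test it locally. The sketch below is hedged: it assumes the streamable HTTP transport is served at the `/mcp` path on the port chosen above and sends a standard MCP `initialize` request; adjust the path and protocol version to whatever your server actually reports.

```python
# Hedged smoke test for the local HTTP server started above (assumes the /mcp path
# and the standard MCP streamable-HTTP request headers).
import httpx

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "smoke-test", "version": "0.0.1"},
    },
}
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}

resp = httpx.post("http://localhost:8080/mcp", json=payload, headers=headers)
print(resp.status_code)
print(resp.text[:500])  # should contain the server's initialize result
```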

Note: These utilities are inspired by the OpenMCP Server Registry add-to-client pattern.

### Option 3: Self-Hosted HTTP Server

Deploy your own W&B MCP server for team-wide access or custom infrastructure requirements. This option gives you complete control over deployment, security, and scaling while maintaining compatibility with all MCP clients. Perfect for organizations that need on-premises deployment or want to integrate with existing infrastructure.

#### Using Docker

```bash
docker run -p 7860:7860 \
  -e WANDB_API_KEY=your-server-key \
  ghcr.io/wandb/wandb-mcp-server
```

#### From Source

```bash
# Clone the repository
git clone https://github.com/wandb/wandb-mcp-server
cd wandb-mcp-server

# Install and run
uv pip install -r requirements.txt
uv run app.py
```

#### Deploy to Hugging Face Spaces

1. Fork [wandb-mcp-server](https://github.com/wandb/wandb-mcp-server)
2. Create a new Space on Hugging Face
3. Choose the "Docker" SDK
4. Connect your fork
5. Add `WANDB_API_KEY` as a secret (optional)

Server URL: `https://YOUR-SPACE.hf.space/mcp`
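
Once the Space is running, any of the Quick Start configurations should work with the Space URL swapped in. For example, pointing the OpenAI Responses pattern from earlier at your own deployment (the URL below is a placeholder) might look like:

```python
# Same Responses API pattern as the Quick Start, pointed at a self-hosted Space.
# "YOUR-SPACE" is a placeholder for the Space you created above.
import os
from openai import OpenAI

client = OpenAI()
resp = client.responses.create(
    model="gpt-4o",
    tools=[{
        "type": "mcp",
        "server_label": "wandb",
        "server_url": "https://YOUR-SPACE.hf.space/mcp",
        "authorization": os.getenv("WANDB_API_KEY"),
    }],
    input="What projects exist for my entity?",
)
print(resp.output_text)
```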


## More Information

### Architecture & Performance

The W&B MCP Server uses a purely stateless architecture for excellent performance:

| Metric | Performance |
|--------|-------------|
| Concurrent connections | 500+ (hosted) / 1000+ (local) |
| Throughput | ~35 req/s (hosted) / ~50 req/s (local) |
| Success rate | 100% up to capacity |
| Scaling | Horizontal (add workers) |

πŸ“– See Architecture Guide for technical details

### Key Resources

### Example Code

**Complete OpenAI example**

```python
from openai import OpenAI
from dotenv import load_dotenv
import os

load_dotenv()

client = OpenAI()

resp = client.responses.create(
    model="gpt-4o",  # Use gpt-4o for a larger context window
    tools=[
        {
            "type": "mcp",
            "server_label": "wandb",
            "server_description": "Query W&B data",
            "server_url": "https://mcp.withwandb.com/mcp",
            "authorization": os.getenv("WANDB_API_KEY"),
            "require_approval": "never",
        },
    ],
    input="How many traces are in wandb-smle/hiring-agent-demo-public?",
)

print(resp.output_text)
```

## Support