Spaces: Runtime error

Update the root README.md

README.md CHANGED
@@ -1,81 +1,49 @@
---
- title: Transformers.js Benchmark
- emoji:
colorFrom: blue
- colorTo:
- sdk:
pinned: false
---

- # Transformers.js Benchmark

- A

## Features

- ## API Endpoints
-
- ### Submit Benchmark
- ```bash
- POST /api/benchmark
- Content-Type: application/json
-
- {
-   "platform": "node",      # "node" or "web"
-   "modelId": "Xenova/all-MiniLM-L6-v2",
-   "task": "feature-extraction",
-   "mode": "warm",           # "warm" or "cold"
-   "repeats": 3,
-   "dtype": "fp32",          # fp32, fp16, q8, int8, uint8, q4, bnb4, q4f16
-   "batchSize": 1,
-   "device": "webgpu",       # For web: "webgpu" or "wasm"
-   "browser": "chromium",    # For web: "chromium", "firefox", "webkit"
-   "headed": false
- }
- ```
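For reference only, a minimal sketch of calling the endpoint documented in the removed block above from Python. The base URL and port are placeholders (not taken from this diff), and `requests` is assumed to be available; the actual server lives under `bench/`:

```python
# Sketch only: host and port are placeholders; see bench/ for the real server.
import requests

payload = {
    "platform": "node",                   # "node" or "web"
    "modelId": "Xenova/all-MiniLM-L6-v2",
    "task": "feature-extraction",
    "mode": "warm",                       # "warm" or "cold"
    "repeats": 3,
    "dtype": "fp32",
    "batchSize": 1,
}

resp = requests.post("http://localhost:3000/api/benchmark", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())  # the returned record can then be fetched via GET /api/benchmark/:id
```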
-
- ### Get Benchmark Result
- ```bash
- GET /api/benchmark/:id
- ```
-
- ### List All Benchmarks
- ```bash
- GET /api/benchmarks
- ```
-
- ### Queue Status
- ```bash
- GET /api/queue
- ```
-
- ### Clear Results
- ```bash
- DELETE /api/benchmarks
- ```

## Architecture

```
.
- ├──
│   ├── src/
│   │   ├── core/       # Shared types and utilities
│   │   ├── node/       # Node.js benchmark runner
- │   │   ├── web/      # Browser benchmark runner
- │   │   └── server/   # REST API server
- │   └── package.json
- ├── client/            # CLI client for the server
- │   ├── src/
- │   │   └── index.ts   # Yargs-based CLI
│   └── package.json
- └──
```

## Development

@@ -84,28 +52,103 @@ DELETE /api/benchmarks

1. Install dependencies:
```bash
- cd
```

- 2.
```bash
```

- 3.
```bash
```

## Deployment

- This

## License
---
+ title: Transformers.js Benchmark Leaderboard
+ emoji:
colorFrom: blue
+ colorTo: purple
+ sdk: gradio
+ sdk_version: 5.49.1
+ app_file: leaderboard/src/leaderboard/app.py
pinned: false
---

+ # Transformers.js Benchmark Leaderboard

+ A Gradio-based leaderboard that displays benchmark results from a HuggingFace Dataset repository.

## Features

+ - **Interactive leaderboard**: Display benchmark results in a searchable/filterable table
+ - **Advanced filtering**: Filter by model name, task, platform, device, mode, and dtype
+ - **Recommended models**: Curated list of WebGPU-compatible, beginner-friendly models
+ - **Real-time updates**: Refresh data on demand from the HuggingFace Dataset
+ - **Performance metrics**: View load time, inference time, and p50/p90 percentiles
+ - **Markdown export**: Export recommended models for documentation

## Architecture

```
.
+ ├── leaderboard/               # Gradio-based leaderboard app
+ │   ├── src/
+ │   │   └── leaderboard/
+ │   │       ├── app.py         # Main Gradio application
+ │   │       ├── data_loader.py # HuggingFace Dataset loader
+ │   │       └── formatters.py  # Data formatting utilities
+ │   ├── pyproject.toml         # Python dependencies
+ │   └── README.md              # Detailed leaderboard docs
+ ├── bench/                     # Benchmark server (separate deployment)
│   ├── src/
│   │   ├── core/               # Shared types and utilities
│   │   ├── node/               # Node.js benchmark runner
+ │   │   ├── web/              # Browser benchmark runner
+ │   │   └── server/           # REST API server
│   └── package.json
+ └── client/                    # CLI client for benchmark server
+     ├── src/
+     └── package.json
```

## Development

1. Install dependencies:
```bash
+ cd leaderboard
+ uv sync
```

+ 2. Configure environment variables:
```bash
+ # Create .env file or export variables
+ export HF_DATASET_REPO="your-username/benchmark-results"
+ export HF_TOKEN="your-hf-token"  # Optional, for private datasets
```

+ 3. Run the leaderboard:
```bash
+ uv run python -m leaderboard.app
```

+ The leaderboard will be available at: http://localhost:7861
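For orientation, a minimal sketch of how a Gradio app ends up listening on port 7861. This is not the contents of `app.py` (which this diff does not show), only the general shape under that assumption:

```python
# Hypothetical minimal structure; the real UI lives in leaderboard/src/leaderboard/app.py.
import gradio as gr


def build_demo() -> gr.Blocks:
    with gr.Blocks(title="Transformers.js Benchmark Leaderboard") as demo:
        gr.Markdown("# Transformers.js Benchmark Leaderboard")
        # Filters, the results table, and a refresh button would be wired up here.
    return demo


if __name__ == "__main__":
    build_demo().launch(server_name="0.0.0.0", server_port=7861)
```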
+
+ ### Environment Variables
+
+ | Variable | Required | Description |
+ |----------|----------|-------------|
+ | `HF_DATASET_REPO` | Yes | HuggingFace dataset repository containing benchmark results |
+ | `HF_TOKEN` | No | HuggingFace API token (only needed for private datasets) |
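Inside the app these variables are presumably read via `python-dotenv` (a listed dependency) together with `os.environ`; a sketch of that pattern, using the names from the table above:

```python
# Sketch of reading the leaderboard configuration; names match the table above.
import os

from dotenv import load_dotenv

load_dotenv()  # picks up a local .env file during development, if one exists

DATASET_REPO = os.environ["HF_DATASET_REPO"]  # required
HF_TOKEN = os.environ.get("HF_TOKEN")         # optional, only for private datasets
```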

## Deployment

+ This leaderboard is designed to run on Hugging Face Spaces using the Gradio SDK.
+
+ ### Quick Deploy
+
+ 1. **Create a new Space** on Hugging Face:
+    - Go to https://huggingface.co/new-space
+    - Choose **Gradio** as the SDK
+    - Set the Space name (e.g., `transformersjs-benchmark-leaderboard`)
+
+ 2. **Upload files to your Space**:
+    ```bash
+    # Clone your Space repository
+    git clone https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
+    cd YOUR_SPACE_NAME
+
+    # Copy leaderboard files (adjust path as needed)
+    cp -r /path/to/this/repo/leaderboard/* .
+
+    # Commit and push
+    git add .
+    git commit -m "Deploy leaderboard"
+    git push
+    ```
+
+ 3. **Configure Space secrets**:
+    - Go to your Space settings → **Variables and secrets**
+    - Add `HF_DATASET_REPO`: Your dataset repository (e.g., `username/benchmark-results`)
+    - Add `HF_TOKEN`: Your HuggingFace API token (if using private datasets)
+
+ 4. **Space will automatically deploy** and be available at:
+    ```
+    https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
+    ```
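As an alternative to the git-based upload in step 2, the same files can be pushed from Python with `huggingface_hub` (a sketch; the repo id is a placeholder):

```python
# Uploads the leaderboard/ folder to an existing Space; assumes you are logged in
# (cached token from `huggingface-cli login`) or have HF_TOKEN set in the environment.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    repo_id="YOUR_USERNAME/YOUR_SPACE_NAME",
    repo_type="space",
    folder_path="leaderboard",
    commit_message="Deploy leaderboard",
)
```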
+
+ ### Dependencies
+
+ The Space automatically installs dependencies from `pyproject.toml`:
+ - `gradio>=5.49.1` - Web UI framework
+ - `pandas>=2.3.3` - Data manipulation
+ - `huggingface-hub>=0.35.3` - Dataset loading
+ - `python-dotenv>=1.1.1` - Environment variables
+
+ ## Data Format
+
+ The leaderboard reads JSONL files from the HuggingFace Dataset repository. Each line should be a JSON object with benchmark results:
+
+ ```json
+ {
+   "id": "benchmark-id",
+   "platform": "web",
+   "modelId": "Xenova/all-MiniLM-L6-v2",
+   "task": "feature-extraction",
+   "mode": "warm",
+   "device": "wasm",
+   "dtype": "fp32",
+   "status": "completed",
+   "result": {
+     "metrics": {
+       "load_ms": {"p50": 100, "p90": 120},
+       "first_infer_ms": {"p50": 10, "p90": 15},
+       "subsequent_infer_ms": {"p50": 8, "p90": 12}
+     }
+   }
+ }
+ ```
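Records in this shape are straightforward to flatten into the rows the table displays; a sketch of that step, using the sample record above (column names are illustrative, not necessarily those used by `formatters.py`):

```python
# Flattens one benchmark record into a flat leaderboard row; column names are illustrative.
import pandas as pd


def to_row(record: dict) -> dict:
    metrics = record.get("result", {}).get("metrics", {})
    return {
        "model": record.get("modelId"),
        "task": record.get("task"),
        "platform": record.get("platform"),
        "device": record.get("device"),
        "dtype": record.get("dtype"),
        "mode": record.get("mode"),
        "load_ms_p50": metrics.get("load_ms", {}).get("p50"),
        "load_ms_p90": metrics.get("load_ms", {}).get("p90"),
        "first_infer_ms_p50": metrics.get("first_infer_ms", {}).get("p50"),
        "subsequent_infer_ms_p50": metrics.get("subsequent_infer_ms", {}).get("p50"),
    }


sample = {
    "id": "benchmark-id", "platform": "web", "modelId": "Xenova/all-MiniLM-L6-v2",
    "task": "feature-extraction", "mode": "warm", "device": "wasm", "dtype": "fp32",
    "status": "completed",
    "result": {"metrics": {"load_ms": {"p50": 100, "p90": 120},
                           "first_infer_ms": {"p50": 10, "p90": 15},
                           "subsequent_infer_ms": {"p50": 8, "p90": 12}}},
}

df = pd.DataFrame([to_row(r) for r in [sample] if r.get("status") == "completed"])
print(df)
```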
+
+ ## Related Projects
+
+ - **Benchmark Server** (`bench/`): REST API server for running benchmarks (separate Docker deployment)
+ - **CLI Client** (`client/`): Command-line tool for submitting benchmarks to the server

## License