---
title: TuRTLe Leaderboard
emoji: 🐢
colorFrom: gray
colorTo: green
sdk: gradio
app_file: app.py
pinned: true
license: apache-2.0
short_description: A Unified Evaluation of LLMs for RTL Generation.
sdk_version: 5.39.0
---

## Quick Introduction

### Prerequisites

- **Python 3.11** or higher (required by the project)
- **[uv](https://docs.astral.sh/uv/getting-started/installation/)** for managing dependencies (optional but recommended)

#### Installing uv (optional)

On macOS and Linux:
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Install dependencies:
```bash
uv sync

# or using regular python
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

### Deploy Locally

```
$ uv run app.py # or python3 app.py
* Running on local URL:  http://127.0.0.1:7860
* To create a public link, set `share=True` in `launch()`.
```

Then open http://127.0.0.1:7860 in your browser; you should see the leaderboard running.
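
The Gradio startup message above mentions `share=True`. As a hedged illustration only (the real UI lives in `app.py` and will look different), a minimal Gradio app with a public share link follows this pattern:

```python
import gradio as gr

# Hypothetical stand-in for the leaderboard UI defined in app.py.
with gr.Blocks() as demo:
    gr.Markdown("TuRTLe Leaderboard")

# share=True asks Gradio for a temporary public URL in addition to localhost.
demo.launch(share=True)
```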

### Add new models

If you are from outside of HPAI, you must directly modify the `results/results_icarus.json` and `results/results_verilator.json` files.
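
The JSON schema is whatever the existing entries use, so the safest approach is to open one of those files, copy an existing model entry, and adapt it for your model. A minimal inspection sketch (standard library only; no field names are assumed here):

```python
import json

# Load an existing results file to see what fields a model entry needs.
with open("results/results_icarus.json") as f:
    results = json.load(f)

# Print the top-level structure so you can mirror it for your new model.
if isinstance(results, list) and results:
    print(json.dumps(results[0], indent=2))  # first entry as a template
elif isinstance(results, dict):
    print(list(results.keys()))              # top-level keys as a template
```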

If you are from HPAI, you can add your model to our shared `.csv` file of results and follow these steps:

1. Modify the `MODELS` dictionary in `config/model_metadata.py` to include a new entry for your model

    For example, if we wish to include the classic GPT2 model, we would add the following metadata:

    ```python
    MODELS = {
        ...

        "GPT2": ModelMetadata(
            "https://huggingface.co/openai-community/gpt2",  # model url
            0.13,  # params (in B)
            "Coding",  # model type: "General", "Coding", or "RTL-Specific"
            "V1",  # release of the TuRTLe Leaderboard: "V1", "V2", or "V3"
            "Dense"  # model architecture: "Dense" or "Reasoning"
        ),
    }
    ```

2. Parse the CSV files into JSON, which is what the Leaderboard takes as ground truth

    ```
    $ uv run -m results.parse results/results_v3_mlcad_icarus.csv # will generate results/results_v3_mlcad_icarus.json
    $ uv run -m results.parse results/results_v3_mlcad_verilator.csv # will generate results/results_v3_mlcad_verilator.json
    ```

    The application is hardcoded to look for `results_icarus.json` and `results_verilator.json`. Rename the files you just created:

    ```
    $ mv results/results_v3_mlcad_icarus.json results/results_icarus.json
    $ mv results/results_v3_mlcad_verilator.json results/results_verilator.json
    ```

3. Compute the aggregated scores

    This will generate the corresponding `aggregated_scores` files that the leaderboard uses for some of its views.

    ```
    $ uv run results/compute_agg_results.py results/results_v3_mlcad_icarus.csv
    $ uv run results/compute_agg_results.py results/results_v3_mlcad_verilator.csv
    ```

    This will create `aggregated_scores_v3_mlcad_icarus.csv` and `aggregated_scores_v3_mlcad_verilator.csv`. Rename them to what the application expects (a quick sanity check is sketched after this list):

    ```
    $ mv results/aggregated_scores_v3_mlcad_icarus.csv results/aggregated_scores_icarus.csv
    $ mv results/aggregated_scores_v3_mlcad_verilator.csv results/aggregated_scores_verilator.csv
    ```
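
After completing the steps above, a quick sanity check (a hedged sketch; `GPT2` below is the hypothetical entry from step 1, so substitute your own model name) is to confirm that the renamed files parse and actually mention your model:

```python
import csv
import json

MODEL_NAME = "GPT2"  # hypothetical: use the key you added to MODELS

# The renamed JSON files the application loads as ground truth.
for path in ("results/results_icarus.json", "results/results_verilator.json"):
    with open(path) as f:
        data = json.load(f)
    status = "contains model" if MODEL_NAME in json.dumps(data) else "model NOT found"
    print(f"{path}: parsed OK, {status}")

# The aggregated CSVs used by some of the leaderboard views.
for path in ("results/aggregated_scores_icarus.csv", "results/aggregated_scores_verilator.csv"):
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    status = "contains model" if any(MODEL_NAME in cell for row in rows for cell in row) else "model NOT found"
    print(f"{path}: {len(rows)} rows, {status}")
```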


## License

This project is licensed under the Apache License 2.0.  
See the [LICENSE](./LICENSE) and [NOTICE](./NOTICE) files for more details.