Update content.py
content.py CHANGED (+11 -1)
@@ -4,13 +4,23 @@ This file contains the text content for the leaderboard client.
 HEADER_MARKDOWN = """
 # 🇨🇿 BenCzechMark [Beta Preview]
 
-Welcome to the leaderboard! Here you can submit your model
+Welcome to the leaderboard! Here you can compare models on Czech-language tasks and/or submit your own model. Head to the submission page to learn about submission details.
+We use our modified fork of [lm-evaluation-harness](https://github.com/DCGM/lm-evaluation-harness) to evaluate every model under the same protocol.
+See the about page for a brief description of our evaluation protocol & win-score mechanism, citation information, and future directions for this benchmark.
 """
 LEADERBOARD_TAB_TITLE_MARKDOWN = """
 ## Leaderboard
 """
 
 SUBMISSION_TAB_TITLE_MARKDOWN = """
+## How to submit
+1. Head over to our modified fork of [lm-evaluation-harness](https://github.com/DCGM/lm-evaluation-harness).
+Follow the instructions and evaluate your model on all 🇨🇿 BenCzechMark tasks, logging your lm harness outputs into a designated folder.
+
+2. Use our script <TODO: add script> to process the log files from your designated folder into a single compact submission file that contains everything we need.
+
+3. Upload your file and fill in the form below!
+
 ## Submission
 To submit your model, please fill in the form below.
 
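The log-processing script referenced in step 2 is still marked TODO in the text above. Purely as an illustration of the shape such a step could take, here is a hypothetical sketch that merges per-task JSON logs from a folder into one submission file; the function name, the one-JSON-file-per-task layout, and the output format are all assumptions, not the project's actual tooling:

```python
import json
from pathlib import Path


def bundle_logs(log_dir: str, out_file: str) -> int:
    """Merge every per-task JSON log in `log_dir` into one submission file.

    Hypothetical sketch only: the real BenCzechMark submission script and
    its file format are not yet published (the page above says TODO).
    Returns the number of task logs bundled.
    """
    results = {}
    for log_path in sorted(Path(log_dir).glob("*.json")):
        with open(log_path, encoding="utf-8") as f:
            # Use the file stem as the task name (an assumption about layout).
            results[log_path.stem] = json.load(f)
    submission = {"tasks": results}
    with open(out_file, "w", encoding="utf-8") as f:
        json.dump(submission, f)
    return len(results)
```

A user would point this at the harness output folder, e.g. `bundle_logs("harness_logs/", "submission.json")`, and upload the resulting single file through the form.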