Space pinned and removed "[Beta Preview]" from the title in README.md and content.py
Changed files:
- README.md: +2 −2
- content.py: +1 −1
README.md CHANGED

```diff
@@ -1,12 +1,12 @@
 ---
-title: 🇨🇿 BenCzechMark [Beta Preview]
+title: 🇨🇿 BenCzechMark
 emoji: π
 colorFrom: gray
 colorTo: blue
 sdk: gradio
 sdk_version: 4.43.0
 app_file: app.py
-pinned: false
+pinned: true
 startup_duration_timeout: 5h 59m
 ---
 
```
content.py CHANGED

```diff
@@ -2,7 +2,7 @@
 This file contains the text content for the leaderboard client.
 """
 HEADER_MARKDOWN = """
-# 🇨🇿 BenCzechMark [Beta Preview]
+# 🇨🇿 BenCzechMark
 
 Welcome to the leaderboard!
 Here you can compare models on tasks in Czech language and/or submit your own model. We use our modified fork of [lm-evaluation-harness](https://github.com/DCGM/lm-evaluation-harness) to evaluate every model under same protocol.
```
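The `pinned: true` change lives in the README's YAML front matter, which Hugging Face Spaces reads as the Space's configuration. As a rough illustration of what that means mechanically (the helper below is a hypothetical sketch, not part of this repo — Spaces uses its own parser), simple `key: value` front matter between `---` markers can be extracted like this:

```python
def parse_front_matter(text: str) -> dict:
    """Parse simple 'key: value' front matter delimited by '---' lines.

    Hypothetical helper for illustration only; real YAML front matter
    can be more complex than flat key/value pairs.
    """
    lines = text.strip().splitlines()
    if not lines or lines[0] != "---":
        return {}
    end = lines.index("---", 1)  # closing delimiter
    config = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip()
    return config


readme = """---
title: BenCzechMark
pinned: true
startup_duration_timeout: 5h 59m
---
# BenCzechMark
"""

config = parse_front_matter(readme)
print(config["pinned"])  # prints: true
```

Note that values come back as strings (`"true"`, not `True`); the platform is responsible for coercing types when it applies settings like `pinned`.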