Update utils.py
utils.py CHANGED

@@ -29,7 +29,7 @@ COLUMN_NAMES = MODEL_INFO
 
 LEADERBOARD_INTRODUCTION = """# MMLU-Pro Leaderboard
 
-Welcome to the MMLU-Pro leaderboard, showcasing the performance of various advanced language models on the MMLU-Pro dataset. The MMLU-Pro dataset is an enhanced version of the original MMLU, specifically engineered to offer a more rigorous and realistic evaluation environment
+Welcome to the MMLU-Pro leaderboard, showcasing the performance of various advanced language models on the MMLU-Pro dataset. The MMLU-Pro dataset is an enhanced version of the original MMLU, specifically engineered to offer a more rigorous and realistic evaluation environment.
 
 ## What's new about MMLU-Pro
 
@@ -48,15 +48,12 @@ TABLE_INTRODUCTION = """
 
 LEADERBOARD_INFO = """
 ## Dataset Summary
-
 - **Questions and Options:** Each question within the dataset typically has **ten** multiple-choice options, except for some that were reduced during the manual review process to remove unreasonable choices. This increase from the original **four** options per question is designed to enhance complexity and robustness, necessitating deeper reasoning to discern the correct answer among a larger pool of potential distractors.
-
 - **Sources:** The dataset consolidates questions from several sources:
 - **Original MMLU Questions:** Part of the dataset comes from the original MMLU dataset; we removed the trivial and ambiguous questions.
 - **STEM Website:** High-quality STEM problems hand-picked from the Internet.
 - **TheoremQA:** High-quality human-annotated questions requiring theorems to solve.
 - **Scibench:** Science questions from college exams.
-
 """
 
 CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
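For context, the constants touched by this commit are plain module-level strings that a leaderboard front end would typically render as Markdown. Below is a minimal sketch of how a Gradio app might consume them; the `gr.Blocks` layout and the `if __name__` entry point are illustrative assumptions, not the Space's actual app code. Only the constant names come from the diff above.

```python
# Minimal sketch (assumed, not the Space's actual app.py):
# render the utils.py string constants in a Gradio UI.
import gradio as gr

from utils import (  # module edited in the diff above
    LEADERBOARD_INTRODUCTION,
    LEADERBOARD_INFO,
    CITATION_BUTTON_LABEL,
)

with gr.Blocks() as demo:
    gr.Markdown(LEADERBOARD_INTRODUCTION)    # intro text, rendered as Markdown
    gr.Markdown(LEADERBOARD_INFO)            # dataset summary section
    gr.Textbox(label=CITATION_BUTTON_LABEL)  # citation snippet, labeled with the copy prompt

if __name__ == "__main__":
    demo.launch()
```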