simple fix of columns

src/display/about.py  +15 -0
@@ -4,6 +4,21 @@ TITLE = """<h1 align="center" id="space-title">OPEN-MOE-LLM-LEADERBOARD</h1>"""
 
 INTRODUCTION_TEXT = """
 The OPEN-MOE-LLM-LEADERBOARD is specifically designed to assess the performance and efficiency of various Mixture of Experts (MoE) Large Language Models (LLMs). This initiative, driven by the open-source community, aims to comprehensively evaluate these advanced MoE LLMs. We extend our gratitude to Hugging Face for the GPU community grant that supported the initial debugging process, and to [NetMind.AI](https://netmind.ai/home) for their generous GPU donation, which ensures the continuous operation of the Leaderboard.
+
+The OPEN-MOE-LLM-LEADERBOARD includes generation and multiple-choice tasks to measure the performance and efficiency of MoE LLMs.
+
+
+Tasks:
+- **Generation Self-consistency** -- [SelfCheckGPT](https://github.com/potsawee/selfcheckgpt)
+- **Multiple Choice Performance** -- [MMLU](https://arxiv.org/abs/2009.03300)
+
+Columns and Metrics:
+- Method: The MoE LLM inference framework.
+- E2E(s): Average end-to-end generation time in seconds.
+- PRE(s): Prefilling time of the input prompt in seconds.
+- T/s: Token throughput per second.
+- Precision: The precision of the evaluated model.
+
 """
 LLM_BENCHMARKS_TEXT = f"""
 
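The timing columns added above can be derived from two raw measurements per request: the prefill time and the total generation time. The sketch below is a hypothetical helper, not part of the leaderboard code; it assumes T/s counts output tokens over the decode phase (E2E minus prefill), which the leaderboard may define differently.

```python
def compute_metrics(prefill_s: float, total_s: float, output_tokens: int) -> dict:
    """Derive the leaderboard columns from raw timings (illustrative only).

    prefill_s     -- PRE(s): time to prefill the input prompt
    total_s       -- E2E(s): end-to-end generation time
    output_tokens -- number of tokens produced during decoding
    """
    decode_s = total_s - prefill_s
    return {
        "PRE(s)": round(prefill_s, 3),
        "E2E(s)": round(total_s, 3),
        # Tokens per second of decoding; guard against a zero-length decode phase.
        "T/s": round(output_tokens / decode_s, 2) if decode_s > 0 else float("inf"),
    }
```

For example, a request that spends 0.5 s prefilling and 2.5 s end to end while emitting 100 tokens would report PRE(s) = 0.5, E2E(s) = 2.5, and T/s = 50.0 under this definition.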