Yuanxh committed · verified
Commit b0f5a36 · 1 Parent(s): 74a692f

Update constants.py

Files changed (1):
  1. constants.py +1 -1
constants.py CHANGED
@@ -32,7 +32,7 @@ XLSX_DIR = "./file//results.xlsx"
 
 LEADERBOARD_INTRODUCTION = """# 🏆 S-Eval Leaderboard
 ## 🔔 Updates
-📣 [2025/10/09]: 🎉 We release [**Octopus**](https://github.com/Alibaba-AAIG/Octopus), an automated LLM safety evaluator, to meet the community's need for accurate and reproducible safety assessment tools. You can download the model from [HuggingFace](https://huggingface.co/Alibaba-AAIG/Octopus-14B) or [ModelScope](https://modelscope.cn/models/Alibaba-AAIG/Octopus-14B/summary).
+📣 [2025/10/09]: We update the evaluation for the latest LLMs in the new [🏆 LeaderBoard](https://s.alibaba.com/aigc-web#/), and further release [**Octopus**](https://github.com/Alibaba-AAIG/Octopus), an automated LLM safety evaluator, to meet the community's need for accurate and reproducible safety assessment tools. You can download the model from [HuggingFace](https://huggingface.co/Alibaba-AAIG/Octopus-14B) or [ModelScope](https://modelscope.cn/models/Alibaba-AAIG/Octopus-14B/summary).
 
 📣 [2025/03/30]: 🎉 Our [paper](https://dl.acm.org/doi/abs/10.1145/3728971) has been accepted by ISSTA 2025. To meet evaluation needs under different budgets, we partition the benchmark into four scales: [Small](https://github.com/IS2Lab/S-Eval/tree/main/s_eval/small) (1,000 Base and 10,000 Attack in each language), [Medium](https://github.com/IS2Lab/S-Eval/tree/main/s_eval/medium) (3,000 Base and 30,000 Attack in each language), [Large](https://github.com/IS2Lab/S-Eval/tree/main/s_eval/large) (5,000 Base and 50,000 Attack in each language) and [Full](https://github.com/IS2Lab/S-Eval/tree/main/s_eval/full) (10,000 Base and 100,000 Attack in each language), comprehensively considering the balance and harmfulness of data.