Update README.md
README.md
CHANGED

@@ -121,15 +121,23 @@ language:

@@ -139,28 +147,75 @@ pretty_name: PCA-Bench
-- PCA-Bench-V1 is released in
-## Run Evaluation on Accuracy
-We will send the closed track PCA-Eval results of your model to you.
-To run PCA-Evaluation yourself, please follow the guidelines in the github repo [PCA-EVAL](https://github.com/pkunlp-icler/PCA-EVAL).
pretty_name: PCA-Bench
---

<h1 align="center">PCA-Bench</h1>

<p align="center">
<a href="https://github.com/pkunlp-icler/PCA-EVAL">
<img alt="Static Badge" src="https://img.shields.io/badge/Github-Online-white">
</a>
<a href="https://github.com/pkunlp-icler/PCA-EVAL/blob/main/PCA_Bench_Paper.pdf">
<img alt="Static Badge" src="https://img.shields.io/badge/Paper-PCABench-red">
</a>
<a href="https://huggingface.co/datasets/PCA-Bench/PCA-Bench-V1">
<img alt="Static Badge" src="https://img.shields.io/badge/HFDataset-PCABenchV1-yellow">
</a>
<a href="https://docs.qq.com/sheet/DVUd4WUpGRHRqUnNV">
<img alt="Static Badge" src="https://img.shields.io/badge/Leaderboard-Online-blue">
</a>
</p>

*PCA-Bench is an innovative benchmark for evaluating and locating errors in Multimodal LLMs when conducting embodied decision making tasks, specifically focusing on perception, cognition, and action.*

## Release

- [2024.02.15] [PCA-Bench-V1](https://github.com/pkunlp-icler/PCA-EVAL) is released. We release the open- and closed-track data on [Hugging Face](https://huggingface.co/datasets/PCA-Bench/PCA-Bench-V1), and we host an online [leaderboard](https://docs.qq.com/sheet/DVUd4WUpGRHRqUnNV) that accepts user submissions.
- [2023.12.15] [PCA-EVAL](https://arxiv.org/abs/2310.02071) is accepted to the Foundation Models for Decision Making Workshop @ NeurIPS 2023. The PCA-Evaluation tool is released on GitHub.

## Leaderboard

[Leaderboard with Full Metrics](https://docs.qq.com/sheet/DVUd4WUpGRHRqUnNV)

## Submit Results

📢 For closed-track evaluation and PCA-Evaluation, please follow [this file](https://github.com/pkunlp-icler/PCA-EVAL/blob/main/pca-eval/results/chatgpt_holmes_outputs/Autonomous%20Driving.json) to organize your model outputs. Submit **six JSON files** (one for each domain and track combination), along with your **model name** and **organization**, to us via [email](mailto:leo.liang.chen@stu.pku.edu.cn). Ensure you use the dataset's provided prompt as the default input for fair comparison.
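For reference, each submitted file is a JSON list of records shaped like the entries written by the sample code further below. The snippet here is only an illustration inferred from that code; the linked example file remains the authoritative format.

```python
# Illustration only: the record shape each submission file is expected to contain,
# inferred from the sample code below (see the linked example file for the
# authoritative format). The string values are placeholders.
example_record = {
    "prompt": "<the unmodified question_prompt taken from the dataset>",
    "model_output": "<your model's raw answer>",
    "index": 0,  # position of the example within its split
}
```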

We will send the PCA-Eval results of your model back to you and update the leaderboard.

We provide sample code for generating the six JSON files; you only need to add your own model-inference code:
```python
# Sample code for PCA-Eval: run your model on every domain/track split and
# save the predictions in the submission format.
from datasets import load_dataset
from tqdm import tqdm
import json
import os

def YOUR_INFERENCE_CODE(prompt, image):
    """Simple single-round multimodal conversation call."""
    response = YOUR_MODEL.inference(prompt, image)  # replace with your model's inference call
    return response

output_path = "./Results-DIR-PATH/"
os.makedirs(output_path, exist_ok=True)

dataset_ad = load_dataset("PCA-Bench/PCA-Bench-V1", "Autonomous Driving")
dataset_dr = load_dataset("PCA-Bench/PCA-Bench-V1", "Domestic Robot")
dataset_og = load_dataset("PCA-Bench/PCA-Bench-V1", "Open-World Game")

test_dataset_dict = {
    "Autonomous-Driving": dataset_ad,
    "Domestic-Robot": dataset_dr,
    "Open-World-Game": dataset_og,
}
test_split = ["test_closed", "test_open"]
test_domain = list(test_dataset_dict.keys())

for domain in test_domain:
    for split in test_split:
        print("testing on %s:%s" % (domain, split))

        prediction_results = []
        output_filename = output_path + "%s-%s.json" % (domain, split)
        prompts = test_dataset_dict[domain][split]['question_prompt']
        images = test_dataset_dict[domain][split]['image']

        for prompt_id in tqdm(range(len(prompts))):
            user_inputs = prompts[prompt_id]  # do not change the prompts for fair comparison
            index = prompt_id
            image = images[prompt_id]

            outputs = YOUR_INFERENCE_CODE(user_inputs, image)

            prediction_results.append({
                'prompt': user_inputs,
                'model_output': outputs,
                'index': index,
            })

        with open(output_filename, 'w') as f:
            json.dump(prediction_results, f, indent=4)

# submit the 6 json files in the output_path to our email
```
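
As an optional sanity check before emailing your submission (not part of the official pipeline), you can verify that all six files were written and that every record carries the expected keys. This sketch assumes the `output_path`, file-naming scheme, and record keys used in the sample code above:

```python
# Optional sanity check: assumes the output_path, file names, and record keys
# from the sample code above.
import json
import os

output_path = "./Results-DIR-PATH/"
domains = ["Autonomous-Driving", "Domestic-Robot", "Open-World-Game"]
splits = ["test_closed", "test_open"]

for domain in domains:
    for split in splits:
        filename = os.path.join(output_path, "%s-%s.json" % (domain, split))
        with open(filename) as f:
            records = json.load(f)
        # every record should expose the three submission fields
        assert all({"prompt", "model_output", "index"} <= set(r) for r in records), filename
        print("%s: %d records" % (filename, len(records)))
```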

You could also simply compute the multiple-choice accuracy locally as a comparison metric in your own experiments. However, in the online leaderboard, we only consider the average action score and Genuine PCA score when ranking models.
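
If you do want that local number, here is a minimal sketch. It assumes you have already parsed each model output into a single option letter and that you have the ground-truth letters for the same split; both lists below are placeholders rather than actual dataset fields.

```python
# Minimal local multiple-choice accuracy check (a sketch, not the official metric).
# `predictions` holds one parsed option letter per example; `answers` holds the
# ground-truth letters for the same split. Both are placeholders here.
def multiple_choice_accuracy(predictions, answers):
    assert len(predictions) == len(answers)
    correct = sum(p.strip().upper() == a.strip().upper() for p, a in zip(predictions, answers))
    return correct / len(answers)

# Example usage with toy values:
print(multiple_choice_accuracy(["A", "C", "B"], ["A", "B", "B"]))  # 0.666...
```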

For more information, refer to the official [GitHub repo](https://github.com/pkunlp-icler/PCA-EVAL).