Datasets:
upload image and readme
This view is limited to 50 files because it contains too many changes.
- README.md +67 -3
- data/img/.DS_Store +0 -0
- data/img/academic/academic_0001.png +3 -0
- data/img/academic/academic_0002.png +3 -0
- data/img/academic/academic_0003.png +3 -0
- data/img/academic/academic_0004.png +3 -0
- data/img/academic/academic_0005.png +3 -0
- data/img/academic/academic_0006.png +3 -0
- data/img/academic/academic_0007.png +3 -0
- data/img/academic/academic_0008.png +3 -0
- data/img/academic/academic_0009.png +3 -0
- data/img/academic/academic_0010.png +3 -0
- data/img/academic/academic_0011.png +3 -0
- data/img/academic/academic_0012.jpg +3 -0
- data/img/academic/academic_0013.jpg +3 -0
- data/img/academic/academic_0014.png +3 -0
- data/img/academic/academic_0015.png +3 -0
- data/img/academic/academic_0016.png +3 -0
- data/img/academic/academic_0018.png +3 -0
- data/img/academic/academic_0019.png +3 -0
- data/img/academic/academic_0020.png +3 -0
- data/img/academic/academic_0021.png +3 -0
- data/img/academic/academic_0022.png +3 -0
- data/img/academic/academic_0028.png +3 -0
- data/img/academic/academic_0029.png +3 -0
- data/img/academic/academic_0030.png +3 -0
- data/img/academic/academic_0031.png +3 -0
- data/img/academic/academic_0032.png +3 -0
- data/img/academic/academic_0033.png +3 -0
- data/img/academic/academic_0034.png +3 -0
- data/img/academic/academic_0041.png +3 -0
- data/img/academic/academic_0042.png +3 -0
- data/img/academic/academic_0043.png +3 -0
- data/img/academic/academic_0047.png +3 -0
- data/img/academic/academic_0048.jpg +3 -0
- data/img/diagrams/diagrams_0001.png +3 -0
- data/img/diagrams/diagrams_0002.jpg +3 -0
- data/img/diagrams/diagrams_0003.jpg +3 -0
- data/img/diagrams/diagrams_0004.jpg +3 -0
- data/img/diagrams/diagrams_0005.png +3 -0
- data/img/diagrams/diagrams_0006.jpg +3 -0
- data/img/diagrams/diagrams_0007.jpg +3 -0
- data/img/diagrams/diagrams_0008.png +3 -0
- data/img/diagrams/diagrams_0009.jpg +3 -0
- data/img/diagrams/diagrams_0010.png +3 -0
- data/img/diagrams/diagrams_0011.png +3 -0
- data/img/diagrams/diagrams_0012.png +3 -0
- data/img/diagrams/diagrams_0013.png +3 -0
- data/img/diagrams/diagrams_0014.png +3 -0
- data/img/diagrams/diagrams_0015.png +3 -0
README.md
CHANGED
@@ -31,13 +31,77 @@ dataset_info:
     dtype: image
   splits:
   - name: train
-    num_bytes:
+    num_bytes: 108098652.0
     num_examples: 281
-  download_size:
-  dataset_size:
+  download_size: 107332725
+  dataset_size: 108098652.0
 configs:
 - config_name: default
   data_files:
   - split: train
     path: data/train-*
 ---
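The `configs` block above maps the default config to parquet shards under `data/train-*`, with a single `train` split of 281 examples and an `image` feature. A minimal loading sketch with the `datasets` library; the repository id is a placeholder, since the card's Data link does not name the final dataset path:

```python
from datasets import load_dataset

# Placeholder repo id: the card's Data link does not specify the final dataset path.
ds = load_dataset("<org>/<rome-dataset-id>", split="train")

print(ds.num_rows)     # expected 281, per the num_examples field above
example = ds[0]
print(example.keys())  # includes an 'image' feature (dtype: image), decoded as a PIL image
```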


🏠[Home & Leaderboard](https://github.com/flageval-baai/LRM-Eval) | 🤗[Data](https://huggingface.co/datasets/) | 🤗[Evaluation Responses](https://huggingface.co/datasets/) | 💻[Code](https://github.com/flageval-baai/ROME-evaluation) | 📄[Paper](https://arxiv.org/pdf/2509.17177)

This repository contains a visual reasoning benchmark named ROME, from the paper [A Preliminary Contamination-Free Evaluation of Reasoning Models](https://arxiv.org/).

ROME includes 8 subtasks (281 high-quality questions in total), and each sample has been verified to ensure that the images are necessary to answer correctly (a small filtering sketch follows the list):

* Academic
  * questions from college courses
* Diagrams
  * charts and figures collected from recent scientific papers, reports, or blog posts
* Puzzles and games
  * Raven's Progressive Matrices, rebus puzzles, and gameplay
* Memes
  * recreated memes
* Geo
  * geolocation inference
* Recognition
  * fine-grained recognition
* Multi-image
  * find-the-difference tasks and video frame reordering
* Spatial
  * relative positions, depths/distances, heights, etc.

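The card only documents the `image` feature; if the released schema also carries a subtask label, selecting one subtask is a one-line filter. A sketch, where `category` is a hypothetical column name to be replaced by the actual one:

```python
from datasets import load_dataset

# Placeholder repo id, as above; 'category' is a hypothetical subtask column,
# so verify the real schema with ds.column_names before filtering.
ds = load_dataset("<org>/<rome-dataset-id>", split="train")
print(ds.column_names)

spatial = ds.filter(lambda ex: ex.get("category") == "spatial")
print(spatial.num_rows, "examples in the (hypothetical) 'spatial' subtask")
```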
We plot the scatter of overall accuracy vs. token consumption on the visual problems:



## 📰 News
**[09/10/2025]** 🚀 First release of ROME.
We released our [leaderboard](https://github.com/flageval-baai/LRM-Eval) covering the **30+ LLMs and MLLMs** we have tested so far.
We also released all model responses across 4 evaluation runs ([Model responses]()).


## 👋 Evaluation Findings
We conducted a moderate-scale, (hopefully) contamination-free evaluation of current LRMs, with some preliminary findings. To highlight a few:

* With a few thousand more thinking tokens, LRMs consistently outperform their non-thinking counterparts on challenging problems and puzzles.
* LRMs that achieve high metrics on previous benchmarks also show within-task generalization, so benchmark saturation should not always be attributed to contamination or memorization.
* Many recent findings about LRMs may be model-specific or data-specific. For instance, we observe a slight degradation in instruction following only for Claude Sonnet 4 and the DeepSeek series, and in multi-turn settings only for Qwen 3 and DeepSeek LRMs.
* Some LRMs degrade in multi-turn settings relative to their non-thinking counterparts, even when they show superior or on-par metrics on single-turn instruction following.
* Current open-weight LRMs tend to be more vulnerable to harmful-content prompts and jailbreaking, which implies the need for careful deployment.
* Current-generation text-based inference-time scaling has not yet brought notable gains in visual reasoning for most VLMs.
* Performance varies widely on the generally difficult subsets, which makes statistically reliable evaluation at moderate cost very difficult.
* Many top-tier LRMs may pretend to perform tool use or web search even when they have no real access, which raises questions about reliability. We call for more transparency in revealing reasoning details, especially for multimodal content, so that users can stay aware of this behavior.
* Signals of misaligned thinking and answers: models are optimized to be stronger but also harder to monitor or interpret, and inconsistency between thinking and answers is non-trivially prevalent for many of the LRMs we investigated.
* Different model developers seem to prioritize differently: on visual questions (our ROME benchmark), Gemini 2.5 Pro leads in overall accuracy, o4-mini and GPT-5 strike a better balance between performance and token consumption, and Claude Sonnet 4 shows the most controlled thinking behavior.

## Licensing Information
The ROME benchmark is licensed under the [CC BY-SA 4.0 License](https://creativecommons.org/licenses/by-sa/4.0/).

## 🥺 Citation Information
```bibtex
@misc{qin2025flageval,
      title={FlagEval Findings Report: A Preliminary Evaluation of Large Reasoning Models on Automatically Verifiable Textual and Visual Questions},
      author={Bowen Qin and Chen Yue and Fang Yin and Hui Wang and JG Yao and Jiakang Liu and Jing-Shu Zheng and Miguel Hu Chen and Richeng Xuan and Shibei Meng and Shiqi Zhou and Teng Dai and Tong-Shuai Ren and Wei Cui and Xi Yang and Xialin Du and Xiaojing Xu and Xue Sun and Xuejing Li and Yaming Liu and Yesheng Liu and Ying Liu and Yonghua Lin and Yu Zhao and Yunduo Zhang and Yuwen Luo and Zheqi He and Zhiyuan He and Zhongyuan Wang},
      year={2025},
      eprint={2509.17177},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
data/img/.DS_Store
ADDED: binary file (8.2 kB)

The following image files were all ADDED and are stored via Git LFS:

- data/img/academic/academic_0001.png
- data/img/academic/academic_0002.png
- data/img/academic/academic_0003.png
- data/img/academic/academic_0004.png
- data/img/academic/academic_0005.png
- data/img/academic/academic_0006.png
- data/img/academic/academic_0007.png
- data/img/academic/academic_0008.png
- data/img/academic/academic_0009.png
- data/img/academic/academic_0010.png
- data/img/academic/academic_0011.png
- data/img/academic/academic_0012.jpg
- data/img/academic/academic_0013.jpg
- data/img/academic/academic_0014.png
- data/img/academic/academic_0015.png
- data/img/academic/academic_0016.png
- data/img/academic/academic_0018.png
- data/img/academic/academic_0019.png
- data/img/academic/academic_0020.png
- data/img/academic/academic_0021.png
- data/img/academic/academic_0022.png
- data/img/academic/academic_0028.png
- data/img/academic/academic_0029.png
- data/img/academic/academic_0030.png
- data/img/academic/academic_0031.png
- data/img/academic/academic_0032.png
- data/img/academic/academic_0033.png
- data/img/academic/academic_0034.png
- data/img/academic/academic_0041.png
- data/img/academic/academic_0042.png
- data/img/academic/academic_0043.png
- data/img/academic/academic_0047.png
- data/img/academic/academic_0048.jpg
- data/img/diagrams/diagrams_0001.png
- data/img/diagrams/diagrams_0002.jpg
- data/img/diagrams/diagrams_0003.jpg
- data/img/diagrams/diagrams_0004.jpg
- data/img/diagrams/diagrams_0005.png
- data/img/diagrams/diagrams_0006.jpg
- data/img/diagrams/diagrams_0007.jpg
- data/img/diagrams/diagrams_0008.png
- data/img/diagrams/diagrams_0009.jpg
- data/img/diagrams/diagrams_0010.png
- data/img/diagrams/diagrams_0011.png
- data/img/diagrams/diagrams_0012.png
- data/img/diagrams/diagrams_0013.png
- data/img/diagrams/diagrams_0014.png
- data/img/diagrams/diagrams_0015.png
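The raw images added in this commit follow a `data/img/<subtask>/<subtask>_NNNN.png|jpg` layout, tracked with Git LFS. A small sketch that tallies images per subtask after cloning the repository with LFS enabled; the local path `ROME` is an assumption:

```python
from collections import Counter
from pathlib import Path

# Assumes the repo has been cloned locally (with `git lfs pull`) into ./ROME
root = Path("ROME/data/img")

# Each subtask keeps its images in its own subdirectory, e.g. data/img/academic/
counts = Counter(
    p.parent.name
    for p in root.glob("*/*")
    if p.suffix.lower() in {".png", ".jpg"}
)

for subtask, n in sorted(counts.items()):
    print(f"{subtask:12s} {n}")
```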