---
language:
- en
license: cc-by-sa-4.0
size_categories:
- n<1K
task_categories:
- image-text-to-text
pretty_name: ROME
tags:
- benchmark
- reasoning
- vlm
dataset_info:
  features:
  - name: task_category
    dtype: string
  - name: question_id
    dtype: string
  - name: question
    dtype: string
  - name: img_paths
    dtype: string
  - name: reference
    dtype: string
  - name: question_type
    dtype: string
  - name: evaluator
    dtype: string
  - name: evaluator_kwargs
    dtype: string
  - name: meta_info
    dtype: string
  - name: image_0
    dtype: image
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: image_4
    dtype: image
  splits:
  - name: train
    num_bytes: 108098652
    num_examples: 281
  download_size: 107332725
  dataset_size: 108098652
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
🏠Project Page & Leaderboard | 💻Code | 📄Paper | 🤗Data | 🤗Evaluation Response
This repository contains ROME, a visual reasoning benchmark from the paper *FlagEval Findings Report: A Preliminary Evaluation of Large Reasoning Models on Automatically Verifiable Textual and Visual Questions*.
ROME includes 8 subtasks (281 high-quality questions in total). Each sample has been verified to ensure that the images are necessary to answer correctly:
- Academic: questions from college courses
- Diagrams: charts and figures collected from recent scientific papers, reports, or blog posts
- Puzzles and games: Raven's Progressive Matrices, rebus puzzles, and gameplay
- Memes: recreated memes
- Geo: geolocation inference
- Recognition: fine-grained recognition
- Multi-image: find-the-difference tasks or video frame reordering
- Spatial: relative positions, depths/distances, heights, etc.
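As described in the metadata above, each example carries the question text, a reference answer, evaluator metadata, and up to five images (`image_0` through `image_4`). Below is a minimal loading sketch using the 🤗 `datasets` library; the repository ID is a placeholder to be replaced with this dataset's actual Hub ID, and treating `evaluator_kwargs` as a JSON-encoded string is an assumption about how that field is serialized.

```python
import json
from collections import Counter

from datasets import load_dataset

# Placeholder: replace with this dataset's actual Hugging Face Hub repository ID.
REPO_ID = "<namespace>/ROME"

# ROME ships a single "train" split with 281 examples (default config).
ds = load_dataset(REPO_ID, split="train")

# Distribution of questions over the 8 subtasks.
print(Counter(ds["task_category"]))

sample = ds[0]
print(sample["question_id"], "|", sample["question_type"])
print(sample["question"])
print("Reference answer:", sample["reference"])

# Assumption: evaluator_kwargs is a JSON-encoded string; fall back to an empty dict.
evaluator_kwargs = json.loads(sample["evaluator_kwargs"]) if sample["evaluator_kwargs"] else {}
print("Evaluator:", sample["evaluator"], evaluator_kwargs)

# image_0 .. image_4 are decoded to PIL images; unused slots are assumed to be None.
images = [sample[f"image_{i}"] for i in range(5) if sample[f"image_{i}"] is not None]
print(f"{len(images)} image(s) attached to this sample")
```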
We plot overall accuracy versus token consumption for the visual problems.
## 📰 News
[09/10/2025] 🚀 First release of ROME. We released our leaderboard covering the 30+ LLMs and MLLMs we have tested so far, as well as all model responses across 4 evaluation runs (Model responses).
## 👋 Evaluation Findings
We conduct a moderate-scale, (hopefully) contamination-free evaluation of current LRMs, with some preliminary findings. To highlight a few:
- With a few thousand more thinking tokens, LRMs consistently outperform their non-thinking counterparts on challenging problems and puzzles.
- LRMs that score highly on previous benchmarks also show within-task generalization, so benchmark saturation should not always be attributed to contamination or memorization.
- Many recent findings about LRMs might be model-specific or data-specific. For instance, we observe slight degradation in instruction following only for Claude Sonnet 4 and the DeepSeek series, and only for Qwen 3 and DeepSeek LRMs in multi-turn settings.
- Some LRMs degrade in multi-turn settings relative to their non-thinking counterparts, even when they show superior or on-par metrics on single-turn instruction following.
- Current open-weight LRMs tend to be more vulnerable to harmful-content prompts and jailbreaking, implying the necessity of careful deployment.
- Current-generation text-based inference-time scaling has not yet brought notable gains on visual reasoning for most VLMs.
- Performance varies considerably on the generally difficult subsets, which makes statistically reliable evaluation at moderate cost very difficult.
- Many top-tier LRMs may pretend to conduct tool use or web search even when they have no real access, which raises questions about reliability. We appeal for more transparency in revealing reasoning details, especially for multimodal content, to enable more awareness during usage.
- Signals of misaligned thinking and answers: models are optimized to be stronger but also become more difficult to monitor or interpret, with inconsistency between thinking and answers non-trivially prevalent across many of the LRMs we investigated.
- Different model developers seem to prioritize things differently: on visual questions (our ROME benchmark), Gemini 2.5 Pro tops overall accuracy, o4-mini and GPT-5 strike a better balance between performance and token consumption, while Claude Sonnet 4 shows the most controlled thinking behavior.
## Licensing Information
The ROME benchmark is licensed under the CC BY-SA 4.0 License.
## 🥺 Citation Information
```bibtex
@misc{qin2025flageval,
      title={FlagEval Findings Report: A Preliminary Evaluation of Large Reasoning Models on Automatically Verifiable Textual and Visual Questions},
      author={Bowen Qin and Chen Yue and Fang Yin and Hui Wang and JG Yao and Jiakang Liu and Jing-Shu Zheng and Miguel Hu Chen and Richeng Xuan and Shibei Meng and Shiqi Zhou and Teng Dai and Tong-Shuai Ren and Wei Cui and Xi Yang and Xialin Du and Xiaojing Xu and Xue Sun and Xuejing Li and Yaming Liu and Yesheng Liu and Ying Liu and Yonghua Lin and Yu Zhao and Yunduo Zhang and Yuwen Luo and Zheqi He and Zhiyuan He and Zhongyuan Wang},
      year={2025},
      eprint={2509.17177},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

