license: cc-by-nc-4.0
modalities:
- audio
- text
configs:
- config_name: temporal_reasoning
  data_files:
  - split: test
    path: meta_info/holistic_reasoning_temporal.json
  default: true
- config_name: spatial_reasoning
  data_files:
  - split: test
    path: meta_info/holistic_reasoning_spatial.json
- config_name: perception
  data_files:
  - split: test
    path: meta_info/foundation_perception.json
STAR-Bench: Probing Deep Spatio-Temporal Reasoning as Audio 4D Intelligence
Zihan Liu* · Zhikang Niu* · Qiuyang Xiao · Zhisheng Zheng · Ruoqi Yuan · Yuhang Zang† · Yuhang Cao · Xiaoyi Dong · Jianze Liang · Xie Chen · Leilei Sun · Dahua Lin · Jiaqi Wang†

* Equal Contribution. †Corresponding authors.
🌈 Overview
We formalize audio 4D intelligence, defined as reasoning over sound dynamics in time and 3D space, and introduce STAR-Bench to measure it. STAR-Bench combines a Foundational Acoustic Perception setting (six attributes evaluated under absolute and relative regimes) with a Holistic Spatio-Temporal Reasoning setting that includes segment reordering for continuous and discrete processes, and spatial tasks spanning static localization, multi-source relations, and dynamic trajectories.
Benchmark examples are illustrated below. You can also visit the homepage for a more intuitive overview.
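For programmatic access, a minimal loading sketch using the Hugging Face `datasets` library and the three configs declared in the metadata above is shown below. The repository id is a placeholder and the field names of the meta_info JSON files are not specified here, so treat this as an illustrative starting point rather than the official loader.

```python
# Minimal loading sketch for the three STAR-Bench configs declared in the card
# metadata. "<org>/STAR-Bench" is a placeholder repository id, not the real one.
from datasets import load_dataset

REPO_ID = "<org>/STAR-Bench"  # placeholder: replace with the actual Hub repo id

# Each config exposes a single "test" split backed by a meta_info JSON file.
temporal = load_dataset(REPO_ID, "temporal_reasoning", split="test")
spatial = load_dataset(REPO_ID, "spatial_reasoning", split="test")
perception = load_dataset(REPO_ID, "perception", split="test")

# Inspect how many items each config holds and what fields the records carry.
print(len(temporal), temporal.column_names)
```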
📊 Results and Analysis
Evaluation results of various models on STAR-Bench v0.5 are shown below. The leaderboard for v1.0 will be released soon.
💡 Key Insights
- 🔥 A clear capability hierarchy between closed-source and open-source models. Closed-source models are bottlenecked by fine-grained perception, while open-source models lag across perception, knowledge, and reasoning.
- 🔥 Enhancing dense audio captioning. Open-source models struggle to produce dense, fine-grained captions, which limits their perceptual sensitivity and ability to extract embedded knowledge. Bridging this gap is a crucial first step.
- 🔥 Improving multi-audio reasoning. Open-source models lag significantly in comparing, integrating, and grounding information across multiple audio clips.
- 🔥 Moving beyond channel-averaged audio preprocessing. The common practice of averaging multi-channel audio into a mono signal is a major bottleneck for spatial reasoning. Developing architectures that natively process multi-channel cues is essential for unlocking genuine spatial awareness.
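To make the last point concrete, the snippet below illustrates the channel-averaging preprocessing that this insight argues against. It is only an illustration (the file path is hypothetical), not part of the benchmark pipeline: averaging collapses the inter-channel level and time differences that carry spatial cues.

```python
# Illustration of the common mono-downmix preprocessing discussed above.
# "example_stereo.wav" is a hypothetical file path used only for demonstration.
import soundfile as sf

audio, sr = sf.read("example_stereo.wav")  # multi-channel files load as (frames, channels)

if audio.ndim == 2:
    mono = audio.mean(axis=1)  # channel averaging: inter-channel spatial cues are lost
else:
    mono = audio  # already single-channel

print(audio.shape, "->", mono.shape, "at", sr, "Hz")
```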
⚙️ Data Curation
For the holistic spatio-temporal reasoning task, the curation process comprises four key stages, including human annotation and final selection based on human performance, as illustrated below.
✒️ Citation
TBD
📄 License
Usage and License Notices: The data and code are intended and licensed for research use only.