---
license: cc-by-nc-4.0
modalities:
  - audio
  - text
configs:
  - config_name: temporal_reasoning
    data_files:
      - split: test
        path: meta_info/holistic_reasoning_temporal.json
    default: true
  - config_name: spatial_reasoning
    data_files:
      - split: test
        path: meta_info/holistic_reasoning_spatial.json
  - config_name: perception
    data_files:
      - split: test
        path: meta_info/foundation_perception.json
---

# STAR-Bench: Probing Deep Spatio-Temporal Reasoning as Audio 4D Intelligence

Zihan Liu* · Zhikang Niu* · Qiuyang Xiao · Zhisheng Zheng · Ruoqi Yuan · Yuhang Zang
Yuhang Cao · Xiaoyi Dong · Jianze Liang · Xie Chen · Leilei Sun · Dahua Lin · Jiaqi Wang

\* Equal Contribution. Corresponding authors.

📖 arXiv | 🏠 Code | 🌐 Homepage | 🤗 Dataset

## 🌈 Overview

We formalize audio 4D intelligence, defined as reasoning over sound dynamics in time and 3D space, and introduce STAR-Bench to measure it. STAR-Bench combines a Foundational Acoustic Perception setting (six attributes under absolute and relative regimes) with a Holistic Spatio-Temporal Reasoning setting that includes segment reordering for continuous and discrete processes, and spatial tasks spanning static localization, multi-source relations, and dynamic trajectories.

*(Figure: teaser)*

Unlike prior benchmarks, where answering from captions alone reduces accuracy only slightly, STAR-Bench induces far larger drops (−31.5% temporal, −35.2% spatial), evidencing its focus on cues that are hard to describe linguistically. Evaluating 19 models reveals substantial gaps relative to humans and a clear capability hierarchy. STAR-Bench provides critical insights and a clear path forward for developing future models with a more robust understanding of the physical world.

Benchmark examples are illustrated below. You can also visit the homepage for a more intuitive overview.

*(Figure: STAR-Bench examples)*
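The three configurations declared in the YAML metadata above map one-to-one onto the JSON files under `meta_info/`. A minimal loading sketch with the 🤗 Datasets library; the repo id `<org>/STAR-Bench` is a placeholder, since the exact Hub path is not stated in this card:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub path of this dataset.
REPO_ID = "<org>/STAR-Bench"

# Each config corresponds to one JSON file under meta_info/ (see the YAML above).
temporal = load_dataset(REPO_ID, "temporal_reasoning", split="test")
spatial = load_dataset(REPO_ID, "spatial_reasoning", split="test")
perception = load_dataset(REPO_ID, "perception", split="test")

print(temporal)  # inspect the features and number of test examples
```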

## 📊 Results and Analysis

Evaluation results of various models on STAR-Bench v0.5 are shown below. The leaderboard for v1.0 will be released soon.

*(Figure: results on STAR-Bench v0.5)*

Error distribution across temporal and spatial tasks:

*(Figure: error distribution)*

## 💡 Key Insights

- 🔥 **A clear capability hierarchy between the two groups.** Closed-source models are bottlenecked by fine-grained perception, while open-source models lag across perception, knowledge, and reasoning.
- 🔥 **Enhancing dense audio captioning.** Open-source models struggle to produce dense, fine-grained captions, which limits their perceptual sensitivity and ability to extract embedded knowledge. Bridging this gap is a crucial first step.
- 🔥 **Improving multi-audio reasoning.** Open-source models lag significantly in comparing, integrating, and grounding information across multiple audio clips.
- 🔥 **Moving beyond channel-averaged audio preprocessing.** The common practice of averaging multi-channel audio into a mono signal is a major bottleneck for spatial reasoning. Developing architectures that natively process multi-channel cues is essential for unlocking genuine spatial awareness (see the sketch after this list).
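To make the last point concrete, here is a toy illustration (ours, not benchmark code) of how averaging channels erases the level difference that encodes source direction:

```python
import numpy as np

t = np.linspace(0, 1, 16000, endpoint=False)

# Toy two-channel recording: the same 440 Hz tone, louder in the right
# channel, mimicking a source located to the listener's right.
left = 0.3 * np.sin(2 * np.pi * 440 * t)
right = 0.9 * np.sin(2 * np.pi * 440 * t)
stereo = np.stack([left, right])   # shape (2, 16000)

# Common preprocessing: collapse channels into a mono signal.
mono = stereo.mean(axis=0)         # shape (16000,)

# The interaural level difference (ILD), a primary spatial cue, is
# measurable in the stereo input but gone after the downmix.
ild_db = 20 * np.log10(np.abs(right).mean() / np.abs(left).mean())
print(f"ILD before downmix: {ild_db:.1f} dB")   # ~9.5 dB
print(f"Channels after downmix: 1 (shape {mono.shape}) -> spatial cue lost")
```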

## ⚙️ Data Curation

All audio for the foundational perception task is synthesized using precise parameterization or the Pyroomacoustics physics-based simulator, providing complete control over acoustic parameters. Domain experts rigorously validate the task difficulty levels, which are then calibrated through human testing.
For the holistic spatio-temporal reasoning task, the curation process comprises four key stages, including human annotation and final selection based on human performance, as illustrated below.

*(Figure: data curation pipeline)*
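As an illustration of this kind of physics-based synthesis, below is a minimal pyroomacoustics sketch. The room geometry, source signal, and microphone positions are arbitrary illustrative choices, not the benchmark's actual simulation parameters:

```python
import numpy as np
import pyroomacoustics as pra

fs = 16000

# A 6 m x 4 m x 3 m shoebox room; max_order controls how many image-source
# reflections are simulated (illustrative values).
room = pra.ShoeBox([6.0, 4.0, 3.0], fs=fs, max_order=10)

# One second of white noise as a stand-in source signal at a fixed position.
rng = np.random.default_rng(0)
room.add_source([2.0, 1.5, 1.2], signal=rng.standard_normal(fs))

# A two-microphone array: each column holds one mic's (x, y, z) position.
mic_positions = np.array([
    [3.0, 3.2],   # x coordinates
    [2.0, 2.0],   # y coordinates
    [1.5, 1.5],   # z coordinates
])
room.add_microphone_array(pra.MicrophoneArray(mic_positions, fs))

# Propagate the source through the simulated room impulse responses.
room.simulate()
multichannel = room.mic_array.signals   # shape: (n_mics, n_samples)
print(multichannel.shape)
```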

## ✒️ Citation

TBD

## 📄 License

**Usage and License Notices:** The data and code are intended and licensed for research use only.