---
license: cc-by-4.0
task_categories:
  - zero-shot-classification
  - text-classification
  - text-generation
language:
  - en
  - zh
size_categories:
  - 10K<n<100K
pretty_name: MMLA
---

# Can Large Language Models Help Multimodal Language Analysis? MMLA: A Comprehensive Benchmark

## 1. Introduction

MMLA is the first comprehensive multimodal language analysis benchmark for evaluating foundation models. It has the following features:

- **Large Scale**: 61K+ multimodal samples.
- **Various Sources**: 9 datasets.
- **Three Modalities**: text, video, and audio.
- **Both Acting and Real-world Scenarios**: films, TV series, YouTube, Vimeo, Bilibili, TED, improvised scripts, etc.
- **Six Core Dimensions in Multimodal Language Analysis**: intent, emotion, sentiment, dialogue act, speaking style, and communication behavior.

We also build baselines with three evaluation methods (zero-shot inference, supervised fine-tuning, and instruction tuning) on 8 mainstream foundation models: 5 MLLMs (Qwen2-VL, VideoLLaMA2, LLaVA-Video, LLaVA-OV, MiniCPM-V-2.6) and 3 LLMs (InternLM2.5, Qwen2, LLaMA3). More details can be found in our paper.
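
As a rough illustration of the zero-shot inference setting, the sketch below builds a simple classification prompt over a candidate label set and maps the model's free-form answer back to a label. The label names, prompt wording, and the `query_model` callable are hypothetical placeholders, not the prompts used in the paper; the actual prompts and inference code are in our GitHub repo.

```python
# Minimal sketch of zero-shot label prediction via prompting (illustrative only).
# The label set, prompt template, and query_model() are hypothetical placeholders;
# the actual prompts and inference pipeline are provided in the GitHub repo.

CANDIDATE_LABELS = ["positive", "negative"]  # e.g., a binary sentiment label set

def build_zero_shot_prompt(utterance: str, labels: list[str]) -> str:
    """Ask the model to choose exactly one label for the given utterance."""
    label_str = ", ".join(labels)
    return (
        "You are analyzing a speaker's utterance from a video.\n"
        f"Utterance: \"{utterance}\"\n"
        f"Candidate labels: {label_str}\n"
        "Answer with exactly one label from the candidates."
    )

def predict(utterance: str, labels: list[str], query_model) -> str:
    """query_model is any callable that maps a prompt string to model text output."""
    answer = query_model(build_zero_shot_prompt(utterance, labels)).strip().lower()
    # Fall back to the first candidate if the model's answer is not a valid label.
    return answer if answer in labels else labels[0]
```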

## 2. Datasets

### 2.1 Statistics

Dataset statistics for each dimension in the MMLA benchmark. #C, #U, #Train, #Val, and #Test represent the number of label classes, utterances, training, validation, and testing samples, respectively. avg. and max. refer to the average and maximum lengths.

| Dimensions | Datasets | #C | #U | #Train | #Val | #Test | Video Hours | Source | Video Length (avg. / max.) | Text Length (avg. / max.) | Language |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Intent | MIntRec | 20 | 2,224 | 1,334 | 445 | 445 | 1.5 | TV series | 2.4 / 9.6 | 7.6 / 27.0 | English |
| Intent | MIntRec2.0 | 30 | 9,304 | 6,165 | 1,106 | 2,033 | 7.5 | TV series | 2.9 / 19.9 | 8.5 / 46.0 | English |
| Dialogue Act | MELD | 12 | 9,989 | 6,992 | 999 | 1,998 | 8.8 | TV series | 3.2 / 41.1 | 8.6 / 72.0 | English |
| Dialogue Act | IEMOCAP | 12 | 9,416 | 6,590 | 942 | 1,884 | 11.7 | Improvised scripts | 4.5 / 34.2 | 12.4 / 106.0 | English |
| Emotion | MELD | 7 | 13,708 | 9,989 | 1,109 | 2,610 | 12.2 | TV series | 3.2 / 305.0 | 8.7 / 72.0 | English |
| Emotion | IEMOCAP | 6 | 7,532 | 5,237 | 521 | 1,622 | 9.6 | Improvised scripts | 4.6 / 34.2 | 12.8 / 106.0 | English |
| Sentiment | MOSI | 2 | 2,199 | 1,284 | 229 | 686 | 2.6 | YouTube | 4.3 / 52.5 | 12.5 / 114.0 | English |
| Sentiment | CH-SIMS v2.0 | 3 | 4,403 | 2,722 | 647 | 1,034 | 4.3 | TV series, films | 3.6 / 42.7 | 1.8 / 7.0 | Mandarin |
| Speaking Style | UR-FUNNY-v2 | 2 | 9,586 | 7,612 | 980 | 994 | 12.9 | TED | 4.8 / 325.7 | 16.3 / 126.0 | English |
| Speaking Style | MUStARD | 2 | 690 | 414 | 138 | 138 | 1.0 | TV series | 5.2 / 20.0 | 13.1 / 68.0 | English |
| Communication Behavior | Anno-MI (client) | 3 | 4,713 | 3,123 | 461 | 1,128 | 10.8 | YouTube & Vimeo | 8.2 / 600.0 | 16.3 / 266.0 | English |
| Communication Behavior | Anno-MI (therapist) | 4 | 4,773 | 3,161 | 472 | 1,139 | 12.1 | YouTube & Vimeo | 9.1 / 1316.1 | 17.9 / 205.0 | English |
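
To illustrate how the splits can be consumed, the snippet below downloads one split TSV from this repository with `huggingface_hub` and inspects it with pandas. This is a minimal sketch: the repository id placeholder and the relative file path are assumptions inferred from the file layout in Section 4, and the TSV column schema is not assumed here.

```python
# Minimal sketch for loading one split, assuming the file layout listed in
# Section 4 (e.g., MIntRec/train.tsv). REPO_ID is a placeholder: replace it
# with this dataset's actual Hugging Face repository id.
import pandas as pd
from huggingface_hub import hf_hub_download

REPO_ID = "<user-or-org>/MMLA-Datasets"  # placeholder, not a verified repo id

path = hf_hub_download(
    repo_id=REPO_ID,
    filename="MIntRec/train.tsv",  # assumed relative path within the repo
    repo_type="dataset",
)

df = pd.read_csv(path, sep="\t")
print(df.shape)    # should match the #Train count above (1,334 for MIntRec)
print(df.columns)  # inspect the schema; column names are not assumed here
print(df.head())
```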

### 2.2 Collection Timeline

- **MIntRec**: Released in 2022/10.
- **MIntRec2.0**: Released in 2024/01.
- **MELD**: Collected from TV series (released in 2019/05).
- **UR-FUNNY-v2**: Collected from publicly available TED talks (released in 2019/11).
- **MUStARD**: Collected from TV series (released in 2019/07).
- **MELD-DA**: Dialogue act annotations added to MELD in 2020/07. Derived from the EMOTyDA dataset, which re-annotated the original MELD training set videos without collecting new video data.
- **IEMOCAP-DA**: Dialogue act annotations added to IEMOCAP (released in 2020/07). Derived from the EMOTyDA dataset, which re-annotated all original IEMOCAP videos without collecting new video data.
- **MOSI**: Collected from YouTube opinion videos (released in 2016/06).
- **IEMOCAP**: Collected from scripted and improvised acting sessions (released in 2008/12).
- **Anno-MI**: Collected from publicly available YouTube and Vimeo videos (released in 2023/03).

### 2.3 License

This benchmark uses nine datasets, each employed strictly in accordance with its official license and exclusively for academic research purposes. We fully respect the datasets' copyright policies, license requirements, and ethical standards. For datasets whose licenses explicitly permit redistribution (e.g., MIntRec, MIntRec2.0, MELD, UR-FUNNY-v2, MUStARD, MELD-DA, CH-SIMS v2.0, and Anno-MI), we release the original video data. For datasets that restrict video redistribution (e.g., MOSI, IEMOCAP, and IEMOCAP-DA), users should obtain the videos directly from their official repositories. In compliance with all relevant licenses, we also provide the original textual data unchanged, together with the specific dataset splits used in our experiments. This approach ensures reproducibility and academic transparency while strictly adhering to copyright obligations and protecting the privacy of individuals featured in the videos.

## 3. Leaderboard

### 3.1 Rank of Zero-shot Inference

| Rank | Models | ACC (%) | Type |
|---|---|---|---|
| 🥇 | GPT-4o | 52.60 | MLLM |
| 🥈 | Qwen2-VL-72B | 52.55 | MLLM |
| 🥉 | LLaVA-OV-72B | 52.44 | MLLM |
| 4 | LLaVA-Video-72B | 51.64 | MLLM |
| 5 | InternLM2.5-7B | 50.28 | LLM |
| 6 | Qwen2-7B | 48.45 | LLM |
| 7 | Qwen2-VL-7B | 47.12 | MLLM |
| 8 | Llama3-8B | 44.06 | LLM |
| 9 | LLaVA-Video-7B | 43.32 | MLLM |
| 10 | VideoLLaMA2-7B | 42.82 | MLLM |
| 11 | LLaVA-OV-7B | 40.65 | MLLM |
| 12 | Qwen2-1.5B | 40.61 | LLM |
| 13 | MiniCPM-V-2.6-8B | 37.03 | MLLM |
| 14 | Qwen2-0.5B | 22.14 | LLM |

### 3.2 Rank of Supervised Fine-tuning (SFT) and Instruction Tuning (IT)

| Rank | Models | ACC (%) | Type |
|---|---|---|---|
| 🥇 | Qwen2-VL-72B (SFT) | 69.18 | MLLM |
| 🥈 | MiniCPM-V-2.6-8B (SFT) | 68.88 | MLLM |
| 🥉 | LLaVA-Video-72B (IT) | 68.87 | MLLM |
| 4 | LLaVA-OV-72B (SFT) | 68.67 | MLLM |
| 5 | Qwen2-VL-72B (IT) | 68.64 | MLLM |
| 6 | LLaVA-Video-72B (SFT) | 68.44 | MLLM |
| 7 | VideoLLaMA2-7B (SFT) | 68.30 | MLLM |
| 8 | Qwen2-VL-7B (SFT) | 67.60 | MLLM |
| 9 | LLaVA-OV-7B (SFT) | 67.54 | MLLM |
| 10 | LLaVA-Video-7B (SFT) | 67.47 | MLLM |
| 11 | Qwen2-VL-7B (IT) | 67.34 | MLLM |
| 12 | MiniCPM-V-2.6-8B (IT) | 67.25 | MLLM |
| 13 | Llama3-8B (SFT) | 66.18 | LLM |
| 14 | Qwen2-7B (SFT) | 66.15 | LLM |
| 15 | InternLM2.5-7B (SFT) | 65.72 | LLM |
| 16 | Qwen2-7B (IT) | 64.58 | LLM |
| 17 | InternLM2.5-7B (IT) | 64.41 | LLM |
| 18 | Llama3-8B (IT) | 64.16 | LLM |
| 19 | Qwen2-1.5B (SFT) | 64.00 | LLM |
| 20 | Qwen2-0.5B (SFT) | 62.80 | LLM |

## 4. Data Integrity

All files included in the MMLA benchmark are verified using SHA-256 checksums. Please ensure the integrity of the files using the following checksums:

| File Path | SHA-256 Hash |
|---|---|
| /MMLA-Datasets/AnnoMi-client/test.tsv | d555c7131bc54cb61424d421c7a3ec117fa5587c1d4027dd8501321a5d1abc09 |
| /MMLA-Datasets/AnnoMi-client/dev.tsv | 4695dc4e1c360cac53ecd0386b82d10fdda9414ad1d559c0a9491a8981657acd |
| /MMLA-Datasets/AnnoMi-client/train.tsv | 8e1104e7d4e42952d0e615c22ee7ea08c03d9b7d07807ba6f4fd4b41d08fed89 |
| /MMLA-Datasets/AnnoMi-client/AnnoMI-client_video.tar.gz | 597d9b8c1a701a89c3f6b18e4a451c21b6699a670a83225b7bce5212f5abdfe0 |
| /MMLA-Datasets/AnnoMi-therapist/dev.tsv | bde3ae0e4f16e2249ac94245802b1e5053df3c9d4864f8a889347fe492364767 |
| /MMLA-Datasets/AnnoMi-therapist/test.tsv | 0ef6ceeba7dfff9f3201b263aecdb6636b6dd39c5eec220c91a328b5dd23e9d5 |
| /MMLA-Datasets/AnnoMi-therapist/train.tsv | fd0a4741bd3fb32014318f0bd0fbc464a87a9e267163fcac9618707fedca12b2 |
| /MMLA-Datasets/AnnoMi-therapist/AnnoMi-therapist_video.tar.gz | 767ce57ad55078001cdd616d642f78d3b0433d9ebcbc14db1608408a54c9fa10 |
| /MMLA-Datasets/CH-SIMSv2.0/test.tsv | 40afae5245b1060e8bb5162e8cc4f17f294a43b51a9e01e5bbd64d1f5ebcb6d7 |
| /MMLA-Datasets/CH-SIMSv2.0/dev.tsv | 47dfac9ca8d77868ed644b8cd9536fa403f9d6f81e26796cd882e39d2cc14608 |
| /MMLA-Datasets/CH-SIMSv2.0/train.tsv | 96350a9e35d62dc63035256e09f033f84aa670f6bf1c06e38daef85d39bde7d7 |
| /MMLA-Datasets/CH-SIMSv2.0/Ch-simsv2_video.tar.gz | e2817c4841a74f9e73eed6cf3196442ff0245f999bdfc5f975dcf18e66348f1e |
| /MMLA-Datasets/IEMOCAP-DA/dev.tsv | 67d357fee50c9b009f9cdc81738e1f45562e0a7f193f6f100320e1881d2b2c8c |
| /MMLA-Datasets/IEMOCAP-DA/test.tsv | 050d27887bec3714f8f0c323594c3c287fa9a5c006f94de0fa09565ba0251773 |
| /MMLA-Datasets/IEMOCAP-DA/train.tsv | 823b37fa045aa6aad694d94ad134e23b92491cd6c5d742ed6e9d9456b433608b |
| /MMLA-Datasets/IEMOCAP/dev.tsv | b6b0bbe1f49dc1f20c4121ac8f943b2d85722c95bb0988946282a496c0c1094d |
| /MMLA-Datasets/IEMOCAP/test.tsv | 7ab10d9c126e037e8c6be1ddf6487d57e9132b2e238286a6a9cccce029760581 |
| /MMLA-Datasets/IEMOCAP/train.tsv | a0017547086721147ed1191e8b7d5da42f795c4070687cffcff001d8827b81d8 |
| /MMLA-Datasets/MELD-DA/test.tsv | b25f4396f30a8d591224ec8074cc4ebfd5727f22fa816ab46cdb455dc22ee854 |
| /MMLA-Datasets/MELD-DA/dev.tsv | 4fcc28d139ac933df8e8a288f2d17e010d5e013c70722485a834a7b843536351 |
| /MMLA-Datasets/MELD-DA/train.tsv | 045642a0abaa9d9d9ea5f7ade96a09dd856311c9a375dea1839616688240ec71 |
| /MMLA-Datasets/MELD-DA/MELD-DA_video.tar.gz | 92154bb5d2cf9d8dc229d5fe7ce65519ee7525487f4f42ca7acdf79e48c69707 |
| /MMLA-Datasets/MELD/dev.tsv | ce677f8162ce901e0cc26f531f1786620cac40b7507fa34664537dadc407d256 |
| /MMLA-Datasets/MELD/test.tsv | ee0e0a35a8ae73b522f359039cea34e92d0e13283f5f01c4f29795b439a92a69 |
| /MMLA-Datasets/MELD/train.tsv | 063ac8accce2e0da3b45e9cdb077c5374a4cf08f6d62db41438e6e0c52981287 |
| /MMLA-Datasets/MELD/MELD_video.tar.gz | 6ce66e5e0d3054aeaf2f5857106360f3b94c37e099bf2e2b17bc1304ef79361b |
| /MMLA-Datasets/MIntRec/dev.tsv | 629ab568ec3e1343c83d76b43d7398f7580361370d09162065a6bb1883f2fe9a |
| /MMLA-Datasets/MIntRec/test.tsv | adffdc8f061878ad560ee0e0046ba32e6bc9e0332d9e09094cfce0b755fcc2a9 |
| /MMLA-Datasets/MIntRec/train.tsv | c1bec2ff06712063c7399264d7c06f4cdc125084314e6fa8bdfd94d3f0b42332 |
| /MMLA-Datasets/MIntRec/MIntRec_video.tar.gz | a756b6ad5f851773b3ae4621e3aa5c33a662bde80b239a6815a8541c30fc6411 |
| /MMLA-Datasets/MIntRec2.0/dev.tsv | f2f69111d0bd8c26681db0a613a0112f466c667d56a79949ce17ccadd1e6ae37 |
| /MMLA-Datasets/MIntRec2.0/test.tsv | 6aa650afbaf40256afdbb546a9f7253511f3fe8d791a9acc7b6829824455a6ed |
| /MMLA-Datasets/MIntRec2.0/train.tsv | e8b8767bd9a4de5833475db2438e63390c9674041a7b8ea39183a74fa4b624ef |
| /MMLA-Datasets/MIntRec2.0/MIntRec2.0_video.tar.gz | 78bd9ab4a0f9e5768ed2a094524165ecc51926e210a4701a9548d036a68d5e29 |
| /MMLA-Datasets/MOSI/dev.tsv | bd8ccded8dacb9cb7d37743f54c7e4c7bef391069b67b55c7e0cf4626fadee5f |
| /MMLA-Datasets/MOSI/test.tsv | c480fc2cb444d215e5ba3433452db546fd8e638d332ee0f03278158b69375eca |
| /MMLA-Datasets/MOSI/train.tsv | f1afe6018ae0b0ab8833da6934c0847f480412ed11c9c22e204a01e8cf75971b |
| /MMLA-Datasets/MUStARD/MUStARD_video.tar.gz | 8bd863c7ab4c29a710aa3edc0f560361275830a1e98ec41908d51c43e08647c1 |
| /MMLA-Datasets/MUStARD/dev.tsv | 45477e0bda84c3d45ff197734b3943fc30e9f89c0d0cb8c272f0c10d31ee5474 |
| /MMLA-Datasets/MUStARD/test.tsv | ae248884d42d690700b6ce9930bb12827cd0fbcae200c43aace5a90003ad99e5 |
| /MMLA-Datasets/MUStARD/train.tsv | 4292e07a087978a08552268b6c8405d897ee855af495e7e58ee99863e705eb43 |
| /MMLA-Datasets/UR-FUNNY-v2/dev.tsv | a82f758ef5d2a65bc41e09e24a616d4654c1565e851cd42c71a575b09282a2d2 |
| /MMLA-Datasets/UR-FUNNY-v2/test.tsv | 6cb9dee9fd55545f46cd079ecb7541981d4c19a76c0ce79d7d874fe73703b63a |
| /MMLA-Datasets/UR-FUNNY-v2/train.tsv | 8eb91657faa19a2d53cc930c810d2fa3abd8e365c49d27fa6feb68cd95f40fb4 |
| /MMLA-Datasets/UR-FUNNY-v2/UR-FUNNYv2_video.tar.gz | e5a3962985c8ead5f593db69ab77a9d6702895768bb5871fe8764406358f8cae |
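
As a minimal verification sketch, the snippet below computes a file's SHA-256 digest with Python's standard `hashlib` and compares it against the table above. The local path is only an example and should be adjusted to wherever you downloaded the files.

```python
# Verify a downloaded file against its expected SHA-256 checksum from the table above.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large video archives fit in constant memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example: expected hash taken from the MIntRec/train.tsv row above.
expected = "c1bec2ff06712063c7399264d7c06f4cdc125084314e6fa8bdfd94d3f0b42332"
local_path = Path("MMLA-Datasets/MIntRec/train.tsv")  # adjust to your download location
assert sha256_of(local_path) == expected, "Checksum mismatch: re-download the file."
```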

## 5. Acknowledgements

For more details, please refer to our GitHub repo. If our work is helpful to your research, please consider citing the following paper:

@article{zhang2025mmla,
  author={Zhang, Hanlei and Li, Zhuohang and Zhu, Yeshuang and Xu, Hua and Wang, Peiwu and Zhu, Haige and Zhou, Jie and Zhang, Jinchao},
  title={Can Large Language Models Help Multimodal Language Analysis? MMLA: A Comprehensive Benchmark},
  year={2025},
  journal={arXiv preprint arXiv:2504.16427},
}