Improve dataset card: Add task categories, tags, HF paper link, and sample usage

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +57 -26
README.md CHANGED
@@ -1,9 +1,13 @@
 ---
-pretty_name: ENACT
 language:
 - en
+license: mit
+size_categories:
+- 1K<n<10K
 task_categories:
 - visual-question-answering
+- image-text-to-text
+pretty_name: ENACT
 configs:
 - config_name: default
   data_files:
@@ -26,32 +30,32 @@ dataset_info:
   sequence: string
 - name: gt_answer
   sequence: int32
-license: mit
 tags:
 - agent
-size_categories:
-- 1K<n<10K
+- robotics
+- embodied-cognition
 ---

 # ENACT: Evaluating Embodied Cognition with World Modeling of Egocentric Interaction

 ENACT is a benchmark dataset for evaluating **embodied cognition** in vision–language models via **egocentric world modeling**. It probes whether models can reason about how the world changes under sequences of actions, using long-horizon household activities in a mobile manipulation setting.

-- **Project page:** https://enact-embodied-cognition.github.io/
-- **Code & evaluation:** https://github.com/mll-lab-nu/ENACT
+- **Paper:** [https://huggingface.co/papers/2511.20937](https://huggingface.co/papers/2511.20937)
+- **Project page:** https://enact-embodied-cognition.github.io/
+- **Code & evaluation:** https://github.com/mll-lab-nu/ENACT


 ## Dataset Summary

 Each ENACT example is a **multi-image, multi-step reasoning problem** built from robot trajectories:

-- **Forward world modeling**
-  - Input: one **current state image**, several **future state images** (shuffled), and a list of **actions in correct order**.
-  - Task: output a Python list of integers giving the **correct chronological order of future images** (e.g., `[1, 3, 2]`).
+- **Forward world modeling**
+  - Input: one **current state image**, several **future state images** (shuffled), and a list of **actions in correct order**.
+  - Task: output a Python list of integers giving the **correct chronological order of future images** (e.g., `[1, 3, 2]`).

-- **Inverse world modeling**
-  - Input: an **ordered sequence of images** showing state changes, plus **actions in shuffled order**.
-  - Task: output a Python list of integers giving the **correct chronological order of actions** (e.g., `[2, 3, 1]`).
+- **Inverse world modeling**
+  - Input: an **ordered sequence of images** showing state changes, plus **actions in shuffled order**.
+  - Task: output a Python list of integers giving the **correct chronological order of actions** (e.g., `[2, 3, 1]`).

 All images are egocentric RGB observations rendered from long-horizon household tasks (e.g., assembling gift baskets, bringing water, preparing lunch boxes, cleaning up a desk).

@@ -106,30 +110,57 @@ Each line in `enact_ordering.jsonl` is a JSON object:
   }
 ```

-* **`id`** – unique identifier for this QA instance.
-* **`type`** – question type and horizon, e.g. `forward_world_modeling_3_steps` or `inverse_world_modeling_4_steps`.
-* **`task_name`** – underlying household task instance.
-* **`key_frame_ids`** – frame indices of selected key frames in the trajectory.
-* **`images`** – relative paths to PNG images:
+* **`id`** – unique identifier for this QA instance.
+* **`type`** – question type and horizon, e.g. `forward_world_modeling_3_steps` or `inverse_world_modeling_4_steps`.
+* **`task_name`** – underlying household task instance.
+* **`key_frame_ids`** – frame indices of selected key frames in the trajectory.
+* **`images`** – relative paths to PNG images:

-  * index 0 is the **current state**;
-  * subsequent entries are **future states** (forward) or later states (inverse).
-* **`question`** – natural language prompt specifying the task setup, actions, and the required output as a Python list of integers.
-* **`gt_answer`** – ground-truth ordering of image or action labels (list of integers, e.g. `[1, 3, 2]`).
+  * index 0 is the **current state**;
+  * subsequent entries are **future states** (forward) or later states (inverse).
+* **`question`** – natural language prompt specifying the task setup, actions, and the required output as a Python list of integers.
+* **`gt_answer`** – ground-truth ordering of image or action labels (list of integers, e.g. `[1, 3, 2]`).


-## Usage
-To evaluate, follow the scripts in the code repository: [https://github.com/mll-lab-nu/ENACT](https://github.com/mll-lab-nu/ENACT)
+## Sample Usage
+To evaluate your model on the ENACT dataset, follow these steps:
+
+1. **Download the ENACT QA dataset:**
+```bash
+python scripts/helpers/download_dataset.py
+```
+
+2. **Run your model** on `data/QA/enact_ordering.jsonl` to generate predictions. Your model should output a JSONL file (e.g., `enact_ordering_mymodel.jsonl`) where each line contains the original fields plus an `answer` field as a stringified list (e.g., `"[2, 1]"`).
+
+3. **Evaluate your predictions:**
+```bash
+enact eval enact_ordering_mymodel.jsonl --analyze-wrong-cases
+```
+
+4. **Check results:**
+```bash
+cat data/evaluation/meta_performance/enact_ordering_mymodel.json
+```
+
+5. **For batch evaluation of multiple models:**
+```bash
+enact eval model_outputs_directory/ --analyze-wrong-cases
+cat data/evaluation/batch_evaluation_summary.json
+```
+
+For more details on data generation and advanced evaluation options, please refer to the [code repository](https://github.com/mll-lab-nu/ENACT).


 ## Citation

-If you use ENACT, please cite the paper:
-```
+If you use ENACT in your research, please cite the paper:
+```bibtex
 @article{wang2025enact,
   title={ENACT: Evaluating Embodied Cognition with World Modeling of Egocentric Interaction},
   author={Wang, Qineng and Huang, Wenlong and Zhou, Yu and Yin, Hang
   and Bao, Tianwei and Lyu, Jianwen and Liu, Weiyu and Zhang, Ruohan
-  and Wu, Jiajun and Li, Fei-Fei and Li, Manling}
+  and Wu, Jiajun and Li, Fei-Fei and Li, Manling},
+  journal={arXiv preprint arXiv:2511.20937},
+  year={2025}
 }
 ```
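For reference, a minimal Python sketch of how the `enact_ordering.jsonl` records described in the card could be read and turned into the prediction file expected by step 2 of the Sample Usage section. The `my_model_predict` function is a hypothetical placeholder (it just returns an identity ordering so the script runs end to end), and treating `len(images) - 1` as the number of candidate labels is an assumption; the file paths and the stringified `answer` format come from the card itself.

```python
import json
from pathlib import Path


def my_model_predict(question: str, image_paths: list[str]) -> list[int]:
    """Hypothetical stand-in for a real VLM call.

    A real model would return a permutation of the candidate labels
    (e.g. [2, 1, 3]); here we return the identity ordering, assuming the
    candidates are everything after the index-0 current-state image.
    """
    n_candidates = len(image_paths) - 1
    return list(range(1, n_candidates + 1))


src = Path("data/QA/enact_ordering.jsonl")
dst = Path("enact_ordering_mymodel.jsonl")

with src.open() as fin, dst.open("w") as fout:
    for line in fin:
        # Fields per the card: id, type, task_name, key_frame_ids, images, question, gt_answer
        record = json.loads(line)
        pred = my_model_predict(record["question"], record["images"])
        # Keep the original fields and add `answer` as a stringified list, e.g. "[2, 1]"
        record["answer"] = str(pred)
        fout.write(json.dumps(record) + "\n")
```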
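A second sketch illustrates the exact-match check that the ordering tasks imply (predicted list equal to `gt_answer`). This is only an illustration: the official metrics and wrong-case analysis come from `enact eval`, and parsing the stringified `answer` field with `ast.literal_eval` is an assumption.

```python
import ast
import json


def ordering_correct(record: dict) -> bool:
    """Exact match between the parsed `answer` string and the gt_answer list."""
    pred = ast.literal_eval(record["answer"])  # e.g. "[2, 1, 3]" -> [2, 1, 3]
    return list(pred) == list(record["gt_answer"])


with open("enact_ordering_mymodel.jsonl") as f:
    records = [json.loads(line) for line in f]

accuracy = sum(ordering_correct(r) for r in records) / len(records)
print(f"Exact-match ordering accuracy: {accuracy:.3f}")
```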