InternData-A1 COMMUNITY LICENSE AGREEMENT
InternData-A1 Release Date: July 26, 2025. All the data and code within this repo are under CC BY-NC-SA 4.0.
InternData-A1
InternData-A1 is a hybrid synthetic-real manipulation dataset containing over 630k trajectories and 7,433 hours across 4 embodiments, 18 skills, 70 tasks, and 227 scenes, covering rigid, articulated, deformable, and fluid-object manipulation.
News
- Supplementary information: The updated dataset now includes camera intrinsics, camera extrinsics, end-effector poses, and TCP poses. Due to storage-quota limitations on Hugging Face, the updated dataset is temporarily hosted at https://www.modelscope.cn/datasets/InternRobotics/InternData-A1/tree/master/sim_updated (directory sim_updated). Note that the folding-garments data for Lift-2 and the long-horizon tasks in the Franka environment are still being transferred and are not yet included. We are working to increase our Hugging Face storage quota and will upload the full dataset there as soon as possible.
Key Features
- Heterogeneous multi-robot platforms: ARX Lift-2, AgileX Split Aloha, A2D, Franka
- Hybrid synthetic-real manipulation demonstrations with task-level digital twins, containing four task categories:
- Articulation tasks
- Basic tasks
- Long-horizon tasks
- Pick and place tasks
- Diverse scenarios include:
- Moving-object manipulation in conveyor-belt scenarios
- Rigid, articulated, deformable, and fluid-object manipulation
- Multi-robot / multi-arm collaboration
- Human-robot interaction
Get Started
Download the Dataset
To download the full dataset, use the following commands. If you encounter any issues, please refer to the official Hugging Face documentation.
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
# When prompted for a password, use a Hugging Face access token
# (read access is sufficient for downloading).
# Generate one from your settings: https://huggingface.co/settings/tokens
git clone https://huggingface.co/datasets/InternRobotics/InternData-A1
# If you want to clone without large files - just their pointers
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/InternRobotics/InternData-A1
Dataset Structure
Folder hierarchy
data
├── sim
│   ├── articulation_tasks
│   │   └── ...
│   ├── basic_tasks
│   │   └── ...
│   ├── long_horizon_tasks                  # category
│   │   ├── franka                          # robot
│   │   │   └── ...
│   │   ├── lift2
│   │   │   ├── sort_the_rubbish            # task
│   │   │   │   ├── data
│   │   │   │   │   ├── chunk-000
│   │   │   │   │   │   ├── episode_000000.parquet
│   │   │   │   │   │   ├── episode_000001.parquet
│   │   │   │   │   │   ├── episode_000002.parquet
│   │   │   │   │   │   └── ...
│   │   │   │   │   ├── chunk-001
│   │   │   │   │   │   └── ...
│   │   │   │   │   └── ...
│   │   │   │   ├── meta
│   │   │   │   │   ├── episodes.jsonl
│   │   │   │   │   ├── episodes_stats.jsonl
│   │   │   │   │   ├── info.json
│   │   │   │   │   ├── modality.json
│   │   │   │   │   ├── stats.json
│   │   │   │   │   └── tasks.jsonl
│   │   │   │   └── videos
│   │   │   │       ├── chunk-000
│   │   │   │       │   ├── images.rgb.head
│   │   │   │       │   │   ├── episode_000000.mp4
│   │   │   │       │   │   ├── episode_000001.mp4
│   │   │   │       │   │   └── ...
│   │   │   │       │   └── ...
│   │   │   │       ├── chunk-001
│   │   │   │       │   └── ...
│   │   │   │       └── ...
│   │   │   └── ...
│   │   ├── split_aloha
│   │   │   └── ...
│   │   └── ...
│   ├── pick_and_place_tasks
│   │   └── ...
│   └── ...
└── real
    └── ...
Each task sub-dataset (e.g., sort_the_rubbish) was created with LeRobot (dataset format v2.1). For compatibility with the GROOT training framework, additional stats.json and modality.json files are included: stats.json provides per-feature statistics (mean, std, min, max, q01, q99) computed across the dataset, and modality.json defines model-related custom modalities.
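As a concrete illustration of how the q01/q99 entries in stats.json are typically consumed, the sketch below applies percentile-based min-max normalization to a state vector, as GROOT-style training pipelines commonly do. The numeric statistics here are made-up placeholders, not values from the actual dataset.

```python
# Sketch: normalizing a state vector with q01/q99 percentile statistics,
# mirroring the per-feature layout of meta/stats.json.
# The q01/q99 values below are hypothetical placeholders.
import numpy as np

stats = {
    "states.left_joint.position": {
        "q01": np.full(6, -1.0),  # 1st-percentile value per joint (placeholder)
        "q99": np.full(6, 1.0),   # 99th-percentile value per joint (placeholder)
    }
}

def normalize(key: str, value: np.ndarray) -> np.ndarray:
    """Scale `value` into [-1, 1] using the feature's 1st/99th percentiles,
    clipping outliers that fall outside the percentile range."""
    lo, hi = stats[key]["q01"], stats[key]["q99"]
    return np.clip(2.0 * (value - lo) / (hi - lo) - 1.0, -1.0, 1.0)

joints = np.array([0.5, -0.5, 0.0, 1.5, -2.0, 0.25])
print(normalize("states.left_joint.position", joints))
```

Using percentiles instead of the raw min/max makes the scaling robust to occasional sensor outliers; the exact scheme is a modeling choice, not something the dataset prescribes.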
meta/info.json
{
    "codebase_version": "v2.1",
    "robot_type": "piper",
    "total_episodes": 1544,
    "total_frames": 1477285,
    "total_tasks": 1,
    "total_videos": 4632,
    "total_chunks": 2,
    "chunks_size": 1000,
    "fps": 30,
    "splits": {
        "train": "0:1544"
    },
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
    "features": {
        "images.rgb.head": {
            "dtype": "video",
            "shape": [360, 640, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.height": 360,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "images.rgb.hand_left": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "images.rgb.hand_right": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "head_camera_intrinsics": {
            "dtype": "float32",
            "shape": [4],
            "names": ["fx", "fy", "cx", "cy"]
        },
        "hand_left_camera_intrinsics": {
            "dtype": "float32",
            "shape": [4],
            "names": ["fx", "fy", "cx", "cy"]
        },
        "hand_right_camera_intrinsics": {
            "dtype": "float32",
            "shape": [4],
            "names": ["fx", "fy", "cx", "cy"]
        },
        "head_camera_to_robot_extrinsics": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "hand_left_camera_to_robot_extrinsics": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "hand_right_camera_to_robot_extrinsics": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "states.left_joint.position": {
            "dtype": "float32",
            "shape": [6],
            "names": ["left_joint_0", "left_joint_1", "left_joint_2", "left_joint_3", "left_joint_4", "left_joint_5"]
        },
        "states.left_gripper.position": {
            "dtype": "float32",
            "shape": [1],
            "names": ["left_gripper_0"]
        },
        "states.left_ee_to_left_armbase_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "states.left_ee_to_robot_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "states.left_tcp_to_left_armbase_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "states.left_tcp_to_robot_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "states.right_joint.position": {
            "dtype": "float32",
            "shape": [6],
            "names": ["right_joint_0", "right_joint_1", "right_joint_2", "right_joint_3", "right_joint_4", "right_joint_5"]
        },
        "states.right_gripper.position": {
            "dtype": "float32",
            "shape": [1],
            "names": ["right_gripper_0"]
        },
        "states.right_ee_to_right_armbase_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "states.right_ee_to_robot_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "states.right_tcp_to_right_armbase_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "states.right_tcp_to_robot_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "states.robot_to_env_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "actions.left_joint.position": {
            "dtype": "float32",
            "shape": [6],
            "names": ["left_joint_0", "left_joint_1", "left_joint_2", "left_joint_3", "left_joint_4", "left_joint_5"]
        },
        "actions.left_gripper.position": {
            "dtype": "float32",
            "shape": [1],
            "names": ["left_gripper_0"]
        },
        "actions.left_ee_to_left_armbase_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "actions.left_ee_to_robot_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "actions.left_tcp_to_left_armbase_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "actions.left_tcp_to_robot_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "actions.right_joint.position": {
            "dtype": "float32",
            "shape": [6],
            "names": ["right_joint_0", "right_joint_1", "right_joint_2", "right_joint_3", "right_joint_4", "right_joint_5"]
        },
        "actions.right_gripper.position": {
            "dtype": "float32",
            "shape": [1],
            "names": ["right_gripper_0"]
        },
        "actions.right_ee_to_right_armbase_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "actions.right_ee_to_robot_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "actions.right_tcp_to_right_armbase_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "actions.right_tcp_to_robot_pose": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"]
        },
        "master_actions.left_joint.position": {
            "dtype": "float32",
            "shape": [6],
            "names": ["left_joint_0", "left_joint_1", "left_joint_2", "left_joint_3", "left_joint_4", "left_joint_5"]
        },
        "master_actions.left_gripper.position": {
            "dtype": "float32",
            "shape": [1],
            "names": ["left_gripper_0"]
        },
        "master_actions.left_gripper.openness": {
            "dtype": "float32",
            "shape": [1],
            "names": ["left_gripper_0"]
        },
        "master_actions.right_joint.position": {
            "dtype": "float32",
            "shape": [6],
            "names": ["right_joint_0", "right_joint_1", "right_joint_2", "right_joint_3", "right_joint_4", "right_joint_5"]
        },
        "master_actions.right_gripper.position": {
            "dtype": "float32",
            "shape": [1],
            "names": ["right_gripper_0"]
        },
        "master_actions.right_gripper.openness": {
            "dtype": "float32",
            "shape": [1],
            "names": ["right_gripper_0"]
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [1],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        }
    }
}
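The data_path and video_path templates above, together with chunks_size, determine where each episode's files live: the chunk id is the episode index divided (integer division) by chunks_size. A minimal sketch of resolving these paths, using only values taken from the info.json shown:

```python
# Sketch: resolving per-episode file paths from the meta/info.json templates.
# Values are copied from the info.json above.
info = {
    "chunks_size": 1000,
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
}

def episode_paths(episode_index: int, video_key: str = "images.rgb.head"):
    """Return (parquet_path, video_path) for one episode and one camera key."""
    # Episodes are grouped into chunks of `chunks_size` episodes each.
    chunk = episode_index // info["chunks_size"]
    data = info["data_path"].format(episode_chunk=chunk, episode_index=episode_index)
    video = info["video_path"].format(
        episode_chunk=chunk, episode_index=episode_index, video_key=video_key
    )
    return data, video

print(episode_paths(1203))
# → ('data/chunk-001/episode_001203.parquet',
#    'videos/chunk-001/images.rgb.head/episode_001203.mp4')
```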
Key format in features
Feature keys follow a hierarchical naming scheme. Select the appropriate keys for each dataset based on characteristics such as the robot embodiment and whether it is single-arm or bimanual:
|-- images
|   |-- rgb
|       |-- head
|       |-- hand_left
|       |-- hand_right
|-- states
|   |-- left_joint
|   |   |-- position
|   |-- right_joint
|   |   |-- position
|   |-- left_gripper
|   |   |-- position
|   |-- right_gripper
|       |-- position
|-- actions
    |-- left_joint
    |   |-- position
    |-- right_joint
    |   |-- position
    |-- left_gripper
    |   |-- position
    |-- right_gripper
        |-- position
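The dotted feature keys in the parquet files (e.g. states.left_joint.position) are flat encodings of this hierarchy. A small illustrative helper (not part of any dataset tooling) that rebuilds the tree from a list of key names:

```python
# Sketch: reconstruct the nested key hierarchy from flat dotted feature names.
def nest(keys):
    """Build a nested dict from dotted key names; leaves are empty dicts."""
    tree = {}
    for key in keys:
        node = tree
        for part in key.split("."):
            node = node.setdefault(part, {})
    return tree

keys = ["images.rgb.head", "states.left_joint.position", "actions.left_gripper.position"]
print(nest(keys))
# → {'images': {'rgb': {'head': {}}},
#    'states': {'left_joint': {'position': {}}},
#    'actions': {'left_gripper': {'position': {}}}}
```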
TODO List
- [2025.11] Released: 632k simulation pre-training trajectories (over 7,433 hours).
- [2026.01] Released: datasets with full annotations of camera intrinsics, camera extrinsics, end-effector poses, and TCP poses (the folding-garments data for Lift-2 and the long-horizon tasks for Franka are still being transferred). Due to the storage-quota limit on the InternRobotics Hugging Face organization (which we are working to raise), this updated dataset is hosted at https://www.modelscope.cn/datasets/InternRobotics/InternData-A1/tree/master/sim_updated under the sim_updated directory.
- To be released: real-world post-training data.
License and Citation
All the data and code within this repo are under CC BY-NC-SA 4.0. Please consider citing our project if it helps your research.
@misc{contributors2025internroboticsrepo,
  title={InternData-A1},
  author={InternData-A1 contributors},
  howpublished={\url{https://github.com/InternRobotics/InternManip}},
  year={2025}
}