# HORA: Hand–Object to Robot Action Dataset

## Dataset Summary
HORA (Hand–Object to Robot Action) is a large-scale multimodal dataset that converts human hand–object interaction (HOI) demonstrations into robot-usable supervision for cross-embodiment learning. It combines HOI-style annotations (e.g., MANO hand parameters, object pose, contact) with embodied-robot learning signals (e.g., robot observations, end-effector trajectories) under a unified canonical action space.
HORA is constructed from three sources/subsets:
- HORA(Mocap): custom multi-view motion capture system with tactile-sensor gloves (includes tactile maps).
- HORA(Recordings): custom RGB(D) HOI recording setup (no tactile).
- HORA(Public Dataset): derived from multiple public HOI datasets and retargeted to robot embodiments (6/7-DoF arms).
Overall scale: ~150k trajectories across all subsets.
## Key Features
- Unified multimodal representation across subsets, covering both HOI analysis and downstream robotic learning.
- HOI modalities: MANO hand parameters (pose/shape + global transform), object 6DoF pose, object assets, hand–object contact annotations.
- Robot modalities: wrist-view & third-person observations, and end-effector pose trajectories for robotic arms, all mapped to a canonical action space.
- Tactile (mocap subset): dense tactile map for both hand and object (plus object pose & assets).
## Dataset Statistics
| Subset | Tactile | #Trajectories | Notes |
|---|---|---|---|
| HORA(Mocap) | ✅ | 63,141 | 6-DoF object pose + assets + tactile map |
| HORA(Recordings) | ❌ | 23,560 | 6-DoF object pose + assets |
| HORA(Public Dataset) | ❌ | 66,924 | retargeted cross-embodiment robot modalities |
| Total | | ~150k | |
## Supported Tasks and Use Cases
HORA is suitable for:
- Imitation Learning (IL) / Visuomotor policy learning
- Vision–Language–Action (VLA) model training and evaluation
- HOI-centric research: contact analysis, pose/trajectory learning, hand/object dynamics
## Data Format

### Example Episode Structure
Each episode/trajectory may include:
#### HOI fields
- `hand_mano`: MANO parameters (pose/shape, global rotation/translation)
- `object_pose_6d`: 6DoF object pose sequence
- `contact`: hand–object contact annotations
- `object_asset`: mesh/texture id or path
#### Robot fields
##### Global Attributes
- `task_description`: Natural language instruction for the task (stored as an HDF5 attribute).
- `total_demos`: Total number of trajectories in the file.
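A minimal sketch of reading these attributes with h5py. The chunk filename is illustrative, and whether the attributes live on the file root or on a sub-group is an assumption here; only the attribute names come from the list above.

```python
import h5py

# Example chunk file; replace with an actual downloaded HDF5 chunk.
with h5py.File("handover_glass_bottle_on_mat_chunk_0.hdf5", "r") as f:
    task = f.attrs["task_description"]   # natural-language task instruction
    n_demos = f.attrs["total_demos"]     # number of trajectories in this chunk
    print(task, n_demos)
```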
##### Observations (`obs` group)
- `agentview_rgb`: JPEG byte stream (variable-length `uint8`). Decodes to `(T, 480, 640, 3)`.
- `eye_in_hand_{side}_rgb`: JPEG byte stream (variable-length `uint8`). Decodes to `(T, 480, 640, 3)`.
- `{prefix}_joint_states`: Arm joint positions in radians. Shape `(T, N_dof)`.
- `{prefix}_gripper_states`: Gripper joint positions. Shape `(T, N_grip)`.
- `{prefix}_eef_pos`: End-effector position in Robot Base Frame. Shape `(T, 3)`.
- `{prefix}_eef_quat`: End-effector orientation `(w, x, y, z)` in Robot Base Frame. Shape `(T, 4)`.
- `object_{name}_pos`: Object ground-truth position in World Frame. Shape `(T, 3)`.
- `object_{name}_quat`: Object ground-truth orientation `(w, x, y, z)` in World Frame. Shape `(T, 4)`.
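A hedged sketch of decoding one camera stream into frames. The per-demo group name `demo_0` and the prefix `robot0` are assumptions for illustration; only the field names and the decoded shapes come from the list above.

```python
import h5py
import numpy as np
import cv2  # any JPEG decoder works; cv2.imdecode is used here

with h5py.File("handover_glass_bottle_on_mat_chunk_0.hdf5", "r") as f:
    obs = f["demo_0/obs"]                        # hypothetical demo group path
    frames = np.stack([
        cv2.imdecode(np.asarray(buf, dtype=np.uint8), cv2.IMREAD_COLOR)
        for buf in obs["agentview_rgb"][:]       # T variable-length uint8 buffers
    ])                                           # -> (T, 480, 640, 3), BGR order
    joint_states = obs["robot0_joint_states"][:] # (T, N_dof), radians; prefix assumed
```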
Actions & States
Note: For multi-robot setups, the fields below concatenate data from all robots in order (e.g.,
[robot0, robot1]).actions: Joint-space control targets. Shape(T, N_dof + 1). Format:[joint_positions, normalized_gripper]where gripper is in[0, 1].actions_ee: Cartesian control targets. Shape(T, 7). Format:[pos (3), axis-angle (3), normalized_gripper (1)].robot_states: Robot base pose in World Frame. Shape(T, 7 * N_robots). Format:[pos (3), quat (4)]per robot, quat is(w, x, y, z).
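A sketch of unpacking the concatenated joint-space actions for a dual-arm file, under stated assumptions: the demo group path and the per-arm DoF (7) are placeholders, and the concatenated width `N_robots * (N_dof + 1)` is inferred from the note above rather than stated explicitly.

```python
import h5py
import numpy as np

ARM_DOF = 7    # assumed per-arm DoF; the card mentions 6/7-DoF arms
N_ROBOTS = 2   # dual-arm example

with h5py.File("handover_glass_bottle_on_mat_chunk_0.hdf5", "r") as f:
    actions = f["demo_0/actions"][:]                 # (T, N_ROBOTS * (ARM_DOF + 1)), assumed
    per_robot = np.split(actions, N_ROBOTS, axis=1)  # [robot0, robot1] order
    for arm in per_robot:
        joint_targets = arm[:, :ARM_DOF]             # joint position targets (rad)
        gripper = arm[:, ARM_DOF]                    # normalized gripper in [0, 1]
```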
#### Tactile fields (mocap only)
- `tactile_hand`: dense tactile map (time × sensors/vertices)
- `tactile_object`: dense tactile map