# OpenVid HD Latents Dataset
This repository contains VAE-encoded latent representations extracted from the OpenVid HD video dataset using the Wan2.1 VAE encoder.
## Dataset Overview
- Source Dataset: Enderfga/openvid-hd (~433k videos)
- Generated Dataset: Enderfga/openvid-hd-wan-latents-81frames (~270k latents)
- VAE Model: Wan2.1 VAE from Alibaba's Wan2.1 video generation suite
- Frame Count: 81 frames per video (21 temporal latent frames × ~3.86 temporal compression ratio)
- Target FPS: 16 fps for decoded videos
- Video Duration: ~5.06 seconds per video
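These figures are mutually consistent. Assuming the Wan2.1 VAE's 4× temporal downsampling (an assumption about the encoder, consistent with the numbers above), the arithmetic works out as follows:

```python
num_frames = 81
temporal_downsample = 4  # assumed Wan2.1 VAE temporal compression factor

latent_frames = 1 + (num_frames - 1) // temporal_downsample  # -> 21
compression = num_frames / latent_frames                      # -> ~3.86
duration_s = num_frames / 16                                  # 16 fps -> 5.0625 s
```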
Each `.pth` file contains the following keys:
- `'latents'`: the encoded latent representation, saved as `latents.squeeze(0).contiguous().clone()`
- `'prompt_embeds'`: the text embedding corresponding to the video prompt, saved as `prompt_embeds.squeeze(0).contiguous().clone()`
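Below is a minimal sketch of loading one sample and decoding it back to pixel space with the diffusers Wan2.1 VAE. The file name and checkpoint repo id are placeholders, and the de-normalization step assumes the stored latents are in the scaled latent space used by the diffusers Wan pipeline; because the tensors appear to have been serialized from GPU memory, `map_location="cpu"` is needed on CPU-only machines.

```python
import torch
from diffusers import AutoencoderKLWan  # requires a recent diffusers release

# Load one sample; map_location="cpu" avoids CUDA deserialization errors
# on CPU-only machines (the tensors were saved from GPU memory).
sample = torch.load("sample.pth", map_location="cpu", weights_only=True)
latents = sample["latents"]              # e.g. [z_dim, 21, H/8, W/8]
prompt_embeds = sample["prompt_embeds"]  # text embedding for the caption

# Wan2.1 VAE (checkpoint repo id is illustrative).
vae = AutoencoderKLWan.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32
)

# ASSUMPTION: the latents are stored in the normalized space used by the
# diffusers Wan pipeline; if so, de-normalize them before decoding.
mean = torch.tensor(vae.config.latents_mean).view(1, vae.config.z_dim, 1, 1, 1)
std = 1.0 / torch.tensor(vae.config.latents_std).view(1, vae.config.z_dim, 1, 1, 1)
z = latents.unsqueeze(0) / std + mean  # restore batch dim: [1, z_dim, 21, h, w]

with torch.no_grad():
    video = vae.decode(z).sample       # [1, 3, 81, H, W], values in [-1, 1]
```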
## Enhanced Captioning Variant
We also release a caption-only variant of this dataset at
Enderfga/openvid-hd-wan-latents-81frames-tarsier2_recaption,
which includes only `'prompt_embeds'` for the same set of videos, generated using the Tarsier2-Recap-7b model from Tarsier.
This re-captioning significantly improves caption quality by producing more accurate and descriptive prompt embeddings.
## About the Source Dataset
The OpenVid HD dataset is built upon the OpenVid-1M dataset, which is a high-quality text-to-video dataset introduced in the ICLR 2025 paper "OpenVid-1M: A Large-Scale High-Quality Dataset for Text-to-video Generation". The original OpenVid-1M contains over 1 million video-text pairs with expressive captions, and OpenVid HD specifically curates the 433k highest-quality 1080p videos from this collection.
Key features of the source dataset:
- High aesthetic quality and visual clarity
- Detailed, expressive captions
- 1080p resolution videos
- Diverse content covering various scenarios and camera motions
- Enhanced temporal consistency compared to other large-scale video datasets
## Extraction Process
The latent extraction was performed using a distributed processing pipeline with the following steps (a code sketch of the core steps follows the list):
- Video Loading: Videos are loaded using the decord library with precise frame sampling
- Preprocessing:
  - Frames are resized and center-cropped to the target resolution
  - Pixel values are normalized to the [-1, 1] range using mean=[0.5, 0.5, 0.5] and std=[0.5, 0.5, 0.5]
  - Frames are sampled at the 16 FPS target framerate
- VAE Encoding: Videos are encoded through the Wan-VAE encoder to latent space
- Quality Filtering: Only videos with aspect ratio ≥ 1.7 and the exact frame count are kept
- Storage: Latents are saved as `.pth` files as described above
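For illustration, here is a minimal sketch of the loading, preprocessing, and encoding steps under the stated parameters (resizing/center-cropping is omitted for brevity). The file names are placeholders, and the encode call uses the diffusers AutoencoderKLWan API rather than the authors' actual distributed pipeline:

```python
import torch
from decord import VideoReader
from diffusers import AutoencoderKLWan

TARGET_FPS = 16
NUM_FRAMES = 81

def load_frames(path: str) -> torch.Tensor:
    """Sample 81 frames at 16 fps and normalize to [-1, 1]."""
    vr = VideoReader(path)
    step = vr.get_avg_fps() / TARGET_FPS          # resample to the target fps
    indices = [round(i * step) for i in range(NUM_FRAMES)]
    if indices[-1] >= len(vr):                    # enforce exact frame count
        raise ValueError("clip too short for 81 frames at 16 fps")
    frames = vr.get_batch(indices).asnumpy()      # [T, H, W, C], uint8
    video = torch.from_numpy(frames).permute(3, 0, 1, 2).float()  # [C, T, H, W]
    return video / 127.5 - 1.0  # (x/255 - 0.5) / 0.5, i.e. mean=std=0.5

vae = AutoencoderKLWan.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32
)

video = load_frames("clip.mp4").unsqueeze(0)      # batch dim: [1, C, 81, H, W]
with torch.no_grad():
    latents = vae.encode(video).latent_dist.sample()  # [1, z_dim, 21, h, w]
torch.save({"latents": latents.squeeze(0).contiguous().clone()}, "clip.pth")
```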
## License
This dataset follows the licensing terms of the original OpenVid-1M dataset (CC-BY-4.0) and the Wan2.1 model (Apache 2.0). Please ensure compliance with both licenses when using this dataset.
## Acknowledgments
- OpenVid-1M Team for creating the high-quality source dataset
- Wan2.1 Team at Alibaba for developing the advanced VAE architecture
- Tarsier Team at ByteDance for providing the Tarsier2-Recap-7b model
- Diffusers Library for providing easy access to the VAE models