---
license: cc-by-4.0
task_categories:
  - text-to-video
language:
  - en
tags:
  - text-to-video
  - Video Generative Model Training
  - Text-to-Video Diffusion Model Training
  - prompts
pretty_name: OpenVid-1M
size_categories:
  - 1M<n<10M
---

# OpenVid HD Latents Dataset

This repository contains VAE-encoded latent representations extracted from the OpenVid HD video dataset using the Wan2.1 VAE encoder.

## 📊 Dataset Overview

- **Source Dataset**: [Enderfga/openvid-hd](https://huggingface.co/datasets/Enderfga/openvid-hd) (~433k videos)
- **Generated Dataset**: [Enderfga/openvid-hd-wan-latents-81frames](https://huggingface.co/datasets/Enderfga/openvid-hd-wan-latents-81frames) (~270k latents)
- **VAE Model**: Wan2.1 VAE from Alibaba's Wan2.1 video generation suite
- **Frame Count**: 81 frames per video, compressed to 21 temporal latent frames (≈3.86× effective temporal compression)
- **Target FPS**: 16 fps for decoded videos
- **Video Duration**: ~5.06 seconds per video
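These figures are mutually consistent. Assuming the Wan-VAE's 4× temporal stride with the first frame encoded on its own (an architectural detail of Wan2.1 stated here as an assumption, not taken from the original card):

$$
\frac{81 - 1}{4} + 1 = 21 \ \text{latent frames}, \qquad
\frac{81}{21} \approx 3.86, \qquad
\frac{81 \ \text{frames}}{16 \ \text{fps}} = 5.0625 \ \text{s} \approx 5.06 \ \text{s}.
$$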

Each `.pth` file contains the following keys:

- `'latents'`: the encoded latent representation, saved as `latents.squeeze(0).contiguous().clone()`
- `'prompt_embeds'`: the text embedding corresponding to the video prompt, saved as `prompt_embeds.squeeze(0).contiguous().clone()`
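A minimal loading sketch (the file name is hypothetical; exact tensor shapes depend on the video resolution and text encoder):

```python
import torch

# Load one sample; "0000001.pth" is a hypothetical file name.
sample = torch.load("0000001.pth", map_location="cpu")

latents = sample["latents"]              # Wan-VAE video latents, e.g. (C, 21, h, w)
prompt_embeds = sample["prompt_embeds"]  # text embedding of the video's caption
print(latents.shape, prompt_embeds.shape)
```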

## 🆕 Enhanced Captioning Variant

We also release a caption-only variant of this dataset at
👉 [Enderfga/openvid-hd-wan-latents-81frames-tarsier2_recaption](https://huggingface.co/datasets/Enderfga/openvid-hd-wan-latents-81frames-tarsier2_recaption),
which contains only `'prompt_embeds'` for the same set of videos, generated with the Tarsier2-Recap-7b model from the Tarsier project.
The re-captioned prompts are more accurate and descriptive, which yields higher-quality prompt embeddings.
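Since the variant covers the same videos, the improved embeddings can be paired with the latents from this repository. The shared per-video file name below is an assumption; verify it against the actual file listings:

```python
import torch

name = "0000001.pth"  # hypothetical per-video file name shared by both repos

# Latents from this repo, prompt embeddings from the re-captioned variant.
latents = torch.load(
    f"openvid-hd-wan-latents-81frames/{name}", map_location="cpu")["latents"]
prompt_embeds = torch.load(
    f"openvid-hd-wan-latents-81frames-tarsier2_recaption/{name}",
    map_location="cpu")["prompt_embeds"]
```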

## 🎯 About the Source Dataset

OpenVid HD is built on OpenVid-1M, a high-quality text-to-video dataset introduced in the ICLR 2025 paper "OpenVid-1M: A Large-Scale High-Quality Dataset for Text-to-video Generation". The original OpenVid-1M contains over 1 million video-text pairs with expressive captions; OpenVid HD curates the 433k highest-quality 1080p videos from that collection.

Key features of the source dataset:

- High aesthetic quality and visual clarity
- Detailed, expressive captions
- 1080p resolution videos
- Diverse content covering various scenarios and camera motions
- Enhanced temporal consistency compared to other large-scale video datasets

πŸ“ Extraction Process

The latent extraction was performed with a distributed processing pipeline in the following steps (a condensed sketch follows the list):

1. **Video Loading**: Videos are loaded using the [decord](https://github.com/dmlc/decord) library with precise frame sampling
2. **Preprocessing**:
   - Frames are resized and center-cropped to the target resolution
   - Normalized to the [-1, 1] range using mean=[0.5, 0.5, 0.5] and std=[0.5, 0.5, 0.5]
   - Sampled at the 16 fps target framerate
3. **VAE Encoding**: Videos are encoded into latent space with the Wan-VAE encoder
4. **Quality Filtering**: Only videos with aspect ratio ≥ 1.7 and the exact target frame count are kept
5. **Storage**: Latents are saved as `.pth` files as described above
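A minimal, self-contained sketch of these steps, assuming the diffusers-format `AutoencoderKLWan` checkpoint at `Wan-AI/Wan2.1-T2V-1.3B-Diffusers` and a hypothetical 480×832 target resolution; the authors' exact script, model variant, resolution, and file naming are not specified here:

```python
import torch
import torch.nn.functional as F
from decord import VideoReader, cpu
from diffusers import AutoencoderKLWan

NUM_FRAMES, TARGET_FPS = 81, 16
HEIGHT, WIDTH = 480, 832       # hypothetical target resolution
assert WIDTH / HEIGHT >= 1.7   # step 4's aspect-ratio constraint (832/480 ≈ 1.73)

def load_frames(path: str) -> torch.Tensor:
    """Step 1: sample NUM_FRAMES frames at TARGET_FPS from the start of the video."""
    vr = VideoReader(path, ctx=cpu(0))
    step = vr.get_avg_fps() / TARGET_FPS
    idx = [min(int(i * step), len(vr) - 1) for i in range(NUM_FRAMES)]
    frames = torch.from_numpy(vr.get_batch(idx).asnumpy())  # (T, H, W, C) uint8
    return frames.permute(0, 3, 1, 2).float() / 255.0       # (T, C, H, W) in [0, 1]

def preprocess(frames: torch.Tensor) -> torch.Tensor:
    """Step 2: resize so the target fits, center-crop, normalize to [-1, 1]."""
    _, _, h, w = frames.shape
    scale = max(HEIGHT / h, WIDTH / w)
    frames = F.interpolate(frames, size=(round(h * scale), round(w * scale)),
                           mode="bilinear", align_corners=False)
    top, left = (frames.shape[2] - HEIGHT) // 2, (frames.shape[3] - WIDTH) // 2
    frames = frames[:, :, top:top + HEIGHT, left:left + WIDTH]
    return (frames - 0.5) / 0.5  # mean=[0.5]*3, std=[0.5]*3

# Step 3: encode with the Wan-VAE.
vae = AutoencoderKLWan.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32
).to("cuda").eval()

video = preprocess(load_frames("example.mp4"))              # (T, C, H, W)
video = video.permute(1, 0, 2, 3).unsqueeze(0).to("cuda")   # (1, C, T, H, W)
with torch.no_grad():
    latents = vae.encode(video).latent_dist.sample()        # (1, 16, 21, 60, 104)

# Step 5: store in the layout described above (prompt embedding omitted).
torch.save({"latents": latents.squeeze(0).contiguous().clone()}, "example.pth")
```

Sampling from `latent_dist` mirrors common diffusion-training practice; `latent_dist.mode()` is the deterministic alternative. Prompt embedding and the distributed scheduling are omitted for brevity.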

πŸ“ License

This dataset follows the licensing terms of the original OpenVid-1M dataset (CC-BY-4.0) and the Wan2.1 model (Apache 2.0). Please ensure compliance with both licenses when using this dataset.

## 🤝 Acknowledgments

- **OpenVid-1M Team** for creating the high-quality source dataset
- **Wan2.1 Team at Alibaba** for developing the advanced VAE architecture
- **Tarsier Team at ByteDance** for providing the Tarsier2-Recap-7b model
- **Diffusers Library** for providing easy access to the VAE models