Stable Video Infinity: Infinite-Length Video Generation with Error Recycling
Abstract
Stable Video Infinity generates infinite-length videos with high temporal consistency and controllable storylines by using Error-Recycling Fine-Tuning on the Diffusion Transformer.
We propose Stable Video Infinity (SVI), which generates infinite-length videos with high temporal consistency, plausible scene transitions, and controllable streaming storylines. While existing long-video methods attempt to mitigate accumulated errors via handcrafted anti-drifting techniques (e.g., modified noise schedulers, frame anchoring), they remain limited to single-prompt extrapolation, producing homogeneous scenes with repetitive motions. We identify that the fundamental challenge extends beyond error accumulation to a critical discrepancy between the training assumption (seeing clean data) and the test-time autoregressive reality (conditioning on self-generated, error-prone outputs). To bridge this train-test gap, SVI incorporates Error-Recycling Fine-Tuning, a new type of efficient training that recycles the Diffusion Transformer (DiT)'s self-generated errors into supervisory prompts, thereby encouraging the DiT to actively identify and correct its own errors. This is achieved by injecting, collecting, and banking errors through closed-loop recycling, autoregressively learning from error-injected feedback. Specifically, we (i) inject historical errors made by the DiT to intervene on clean inputs, simulating error-accumulated trajectories in flow matching; (ii) efficiently approximate predictions with one-step bidirectional integration and compute errors as residuals; (iii) dynamically bank errors into a replay memory across discretized timesteps, from which they are resampled for new inputs. SVI scales videos from seconds to infinite durations with no additional inference cost, while remaining compatible with diverse conditions (e.g., audio, skeleton, and text streams). We evaluate SVI on three benchmarks covering consistent, creative, and conditional settings, thoroughly verifying its versatility and state-of-the-art performance.
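To make the recycling loop above concrete, here is a minimal, hedged sketch under a rectified-flow formulation. The DiT interface `dit(x_t, t, cond, text_emb)`, the per-bin `error_bank`, the assumption that conditioning frames share the latent shape of the targets, and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the closed-loop error-recycling idea (illustrative only).
# Assumptions (not from the paper): a rectified-flow video DiT with the interface
# dit(x_t, t, cond, text_emb) -> predicted velocity, conditioning frames that share
# the latent shape of the targets, and a per-timestep-bin replay memory `error_bank`.
import random
import torch
import torch.nn.functional as F

NUM_BINS = 10                                   # discretized timestep bins for the replay memory
BANK_SIZE = 256                                 # bounded memory per bin
error_bank = {b: [] for b in range(NUM_BINS)}

def error_recycling_step(dit, optimizer, clean_latents, cond_frames, text_emb):
    bsz = clean_latents.size(0)
    t = torch.rand(bsz, device=clean_latents.device)          # flow-matching time in [0, 1]
    bins = (t * NUM_BINS).long().clamp(max=NUM_BINS - 1).tolist()

    # (i) Inject previously banked errors into the clean conditioning frames,
    #     simulating the error-accumulated trajectories seen at test time.
    noisy_cond = cond_frames.clone()
    for i, bin_id in enumerate(bins):
        if error_bank[bin_id]:
            err = random.choice(error_bank[bin_id]).to(cond_frames.device)
            noisy_cond[i] = cond_frames[i] + err

    # Rectified-flow interpolation between clean latents (t = 0) and noise (t = 1).
    noise = torch.randn_like(clean_latents)
    t_ = t.view(-1, *([1] * (clean_latents.dim() - 1)))
    x_t = (1 - t_) * clean_latents + t_ * noise
    target_v = noise - clean_latents                           # ground-truth velocity

    v_pred = dit(x_t, t, noisy_cond, text_emb)
    loss = F.mse_loss(v_pred, target_v)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        # (ii) One-step bidirectional integration to approximate both endpoints,
        #      then take residuals against the ground truth as the model's own errors.
        x0_hat = x_t - t_ * v_pred                             # integrate back to the clean endpoint
        x1_hat = x_t + (1 - t_) * v_pred                       # forward endpoint (not banked in this sketch)
        residual = (x0_hat - clean_latents).detach()           # clean-side error

        # (iii) Bank residuals into the replay memory, keyed by timestep bin,
        #       so later steps can resample and re-inject them.
        for i, bin_id in enumerate(bins):
            error_bank[bin_id].append(residual[i].cpu())
            if len(error_bank[bin_id]) > BANK_SIZE:
                error_bank[bin_id].pop(0)
    return loss.item()
```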
Community
Stable Video Infinity: Infinite-Length Video Generation with Error Recycling
Stable Video Infinity (SVI) generates ANY-length videos with high temporal consistency, plausible scene transitions, and controllable streaming storylines in ANY domain.
🌟 Key Highlights
- OpenSVI: Everything is open-sourced: training & evaluation scripts, datasets, and more.
- Infinite Length: No inherent limit on video duration; generate arbitrarily long stories (see the 10‑minute “Tom and Jerry” demo).
- Versatile: Supports diverse in-the-wild generation tasks: multi-scene short films, single‑scene animations, skeleton-/audio-conditioned generation, cartoons, and more.
- Efficient: Only LoRA adapters are tuned, requiring very little training data, so anyone can easily train their own SVI (a minimal LoRA setup sketch follows below).
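As a rough illustration of the LoRA-only training mentioned in the Efficient highlight, the sketch below attaches LoRA adapters to a tiny stand-in DiT block with Hugging Face `peft`. The stand-in module, target module names, and rank are placeholders, not SVI's actual configuration.

```python
# Minimal sketch of LoRA-only fine-tuning on a DiT backbone (illustrative).
# The tiny stand-in block, module names, and rank are placeholders, not SVI's setup.
import torch.nn as nn
from peft import LoraConfig, get_peft_model

class TinyDiTBlock(nn.Module):                  # stand-in for the real video DiT
    def __init__(self, dim=64):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x):
        return self.to_out(self.to_q(x) + self.to_k(x) + self.to_v(x))

dit = TinyDiTBlock()
lora_config = LoraConfig(
    r=64,                                       # LoRA rank (placeholder)
    lora_alpha=64,
    target_modules=["to_q", "to_k", "to_v", "to_out"],   # placeholder module names
    lora_dropout=0.0,
)
dit = get_peft_model(dit, lora_config)
dit.print_trainable_parameters()                # only the LoRA adapters are trainable
```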
paper: https://arxiv.org/abs/2510.09212
project page: https://stable-video-infinity.github.io/homepage/
code: https://github.com/vita-epfl/Stable-Video-Infinity
model: https://huggingface.co/vita-video-gen/svi-model
dataset: https://huggingface.co/datasets/vita-video-gen/svi-benchmark
The following related papers were recommended by the Semantic Scholar API:
- Pack and Force Your Memory: Long-form and Consistent Video Generation (2025)
- InfVSR: Breaking Length Limits of Generic Video Super-Resolution (2025)
- LongLive: Real-time Interactive Long Video Generation (2025)
- Rolling Forcing: Autoregressive Long Video Diffusion in Real Time (2025)
- Real-Time Motion-Controllable Autoregressive Video Diffusion (2025)
- InfiniteTalk: Audio-driven Video Generation for Sparse-Frame Video Dubbing (2025)
- BindWeave: Subject-Consistent Video Generation via Cross-Modal Integration (2025)