🧠 FlashVSR (ComfyUI Integration Model)

This repository hosts the FlashVSR model weights used in the ComfyUI-FlashVSR project. It provides fast and efficient video super-resolution powered by FlashAttention-based architectures.

📦 Model Overview

FlashVSR (Flash Video Super-Resolution) is a high-performance video SR model that leverages efficient attention computation to upscale low-resolution videos while maintaining temporal consistency.
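At a high level, temporal consistency in video SR comes from conditioning each upscaled frame on its neighbors rather than upscaling frames independently. The sketch below illustrates only the sliding-window indexing idea in plain Python; the window size is illustrative and none of this reflects FlashVSR's actual internals or API.

```python
# Illustrative sliding-window indexing over video frames. The model's
# forward pass would consume each (center, neighbors) group; this is a
# generic sketch, not FlashVSR's real interface.
def sliding_windows(num_frames: int, window: int = 3):
    """Yield (center, neighbor_indices) tuples covering every frame."""
    half = window // 2
    for i in range(num_frames):
        lo, hi = max(0, i - half), min(num_frames, i + half + 1)
        yield i, list(range(lo, hi))

print(list(sliding_windows(4)))
# [(0, [0, 1]), (1, [0, 1, 2]), (2, [1, 2, 3]), (3, [2, 3])]
```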

This version is provided for ComfyUI integration, optimized for direct use with the ComfyUI-FlashVSR extension.

🧩 Model Files

| File | Description | Size |
| --- | --- | --- |
| Wan2_1-T2V-1_3B_FlashVSR_fp32.safetensors | Main FlashVSR model weights | ~5.7 GB |
| Wan2_1_FlashVSR_TCDecoder_fp32.safetensors | Temporal consistency decoder | ~181 MB |
| Wan2_1_FlashVSR_LQ_proj_model_bf16.safetensors | Low-quality projection model | ~576 MB |
| Wan2.1_VAE.safetensors | VAE weights | ~254 MB |
| Prompt.safetensors | Conditioning embeddings | ~4 MB |
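All of these files use the safetensors format, which begins with an 8-byte little-endian header length followed by a JSON header describing each tensor (dtype, shape, byte offsets). A stdlib-only sketch of reading that header, handy for checking what a downloaded weight file contains without loading it (the demo builds a tiny hand-made file rather than touching the real weights):

```python
import json
import os
import struct
import tempfile

# A .safetensors file starts with a u64 little-endian header length,
# then a JSON header mapping tensor names to dtype/shape/data_offsets.
def read_safetensors_header(path: str) -> dict:
    """Parse only the JSON header of a safetensors file."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Demo with a hand-built minimal file: one fp32 tensor of shape [2] (8 bytes).
header = {"weight": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
blob = json.dumps(header).encode()
with tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False) as f:
    f.write(struct.pack("<Q", len(blob)) + blob + b"\x00" * 8)
    tmp_path = f.name

print(read_safetensors_header(tmp_path)["weight"]["shape"])  # [2]
os.remove(tmp_path)
```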

βš™οΈ Usage (with ComfyUI)

  1. Install ComfyUI

  2. Clone the FlashVSR node extension:

    git clone https://github.com/1038lab/ComfyUI-FlashVSR.git
    
  3. Download the model files from this Hugging Face repo and place them inside:

      ComfyUI/models/FlashVSR/

  4. Launch ComfyUI and load the FlashVSR workflow.
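Before launching, it can help to confirm all five weight files landed in the right folder. A small sketch, assuming the default `ComfyUI/models/FlashVSR/` location from step 3 (the file names come from the table above):

```python
from pathlib import Path

# Expected FlashVSR weight files, as listed in the model-files table.
EXPECTED_FILES = [
    "Wan2_1-T2V-1_3B_FlashVSR_fp32.safetensors",
    "Wan2_1_FlashVSR_TCDecoder_fp32.safetensors",
    "Wan2_1_FlashVSR_LQ_proj_model_bf16.safetensors",
    "Wan2.1_VAE.safetensors",
    "Prompt.safetensors",
]

def missing_weights(models_dir: str) -> list:
    """Return the expected weight files not yet present in models_dir."""
    root = Path(models_dir)
    return [name for name in EXPECTED_FILES if not (root / name).is_file()]

# On a fresh checkout every file is reported missing; an empty list
# means the install is complete.
print(missing_weights("ComfyUI/models/FlashVSR"))
```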

πŸ§‘β€πŸ”¬ Original Work

This model is based on the research and implementation from 👉 JunhaoZhuang/FlashVSR. All credit for the model architecture and original training goes to the authors.

This repository only repackages the model for ComfyUI integration.

📜 License

Please refer to the original FlashVSR license for usage rights. Any redistribution or fine-tuning should comply with the same terms.

🧰 Related Resources

  • ComfyUI-FlashVSR node extension: https://github.com/1038lab/ComfyUI-FlashVSR
  • Original FlashVSR research and implementation: JunhaoZhuang/FlashVSR
