FLUX.1-dev 4-bit Quantized for MLX

This is a 4-bit quantized version of the FLUX.1-dev model, optimized for use with MLX and flux.swift. Quantization reduces the model size from ~24GB to 9.2GB while preserving image generation quality.

Quantized using flux.swift (https://github.com/mzbac/flux.swift), a Swift implementation of FLUX models for Apple Silicon.

Model Details

  • Quantization: 4-bit with group size 64 (illustrated after this list)
  • Total Size: 9.2GB
  • Original Model: black-forest-labs/FLUX.1-dev
  • Framework: MLX (Apple's machine learning framework for Apple silicon)
  • Components: Transformer, VAE, CLIP text encoder, T5 text encoder
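
To make the quantization setting concrete, the sketch below reproduces 4-bit, group-size-64 affine quantization on a toy weight matrix using MLX's Python API (assumes mlx is installed; flux.swift applies the equivalent scheme to the model's layers in Swift):

# Illustrative only: grouped 4-bit affine quantization in MLX (Python).
import mlx.core as mx

# Toy stand-in for one linear layer's weight matrix.
w = mx.random.normal((256, 256))

# One scale/bias pair is stored per group of 64 weights.
w_q, scales, biases = mx.quantize(w, group_size=64, bits=4)

# Dequantize to inspect the error introduced by 4-bit storage.
w_hat = mx.dequantize(w_q, scales, biases, group_size=64, bits=4)
print("max abs error:", mx.abs(w - w_hat).max())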

Usage

This model requires the flux.swift implementation. Please refer to the flux.swift repository for installation and usage instructions.

Quick Start

# Load and use the quantized model
flux.swift.cli \
  --load-quantized-path /path/to/this/model \
  --hf-token YOUR_HF_TOKEN \
  --prompt "Your prompt here" \
  --output output.png

Recommended Parameters

  • Steps: 10 or more (higher step counts improve quality; the example below uses 20)
  • Guidance Scale: 7.5
  • Authentication: Requires a Hugging Face token (FLUX.1-dev is gated on Hugging Face)
  • Use Case: High-quality image generation

Example with Parameters

flux.swift.cli \
  --load-quantized-path /path/to/this/model \
  --hf-token YOUR_HF_TOKEN \
  --prompt "A futuristic robot in a cyberpunk city" \
  --steps 20 \
  --guidance 7.5 \
  --width 768 \
  --height 768 \
  --seed 42 \
  --output cyberpunk_robot.png
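
For batch generation, the CLI can be driven from a small script. The Python sketch below loops over several prompts using only the flags shown above; the model path and token are placeholders, as in the earlier examples:

# Batch generation by invoking flux.swift.cli from Python.
import subprocess

MODEL_PATH = "/path/to/this/model"  # placeholder
HF_TOKEN = "YOUR_HF_TOKEN"          # placeholder

prompts = [
    "A futuristic robot in a cyberpunk city",
    "A watercolor painting of a mountain lake at dawn",
]

for i, prompt in enumerate(prompts):
    # Vary the seed per prompt so runs are reproducible but distinct.
    subprocess.run(
        [
            "flux.swift.cli",
            "--load-quantized-path", MODEL_PATH,
            "--hf-token", HF_TOKEN,
            "--prompt", prompt,
            "--steps", "20",
            "--guidance", "7.5",
            "--seed", str(42 + i),
            "--output", f"image_{i:02d}.png",
        ],
        check=True,
    )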

License

This model is a quantized version of FLUX.1-dev, which is licensed under the FLUX.1 [dev] Non-Commercial License. Please review the original license terms on the model page: https://huggingface.co/black-forest-labs/FLUX.1-dev

Performance

  • Memory Usage: Reduced from ~24GB to 9.2GB (see the arithmetic sketch after this list)
  • Quality: Generation quality is largely preserved at 4-bit precision
  • Platform: Optimized for Apple Silicon Macs
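
As a rough sanity check on the size reduction: with group size 64, one scale and one bias are stored per 64 weights. Assuming fp16 scales and biases (as in MLX's affine scheme), that works out to about 4.5 effective bits per weight versus 16 for fp16. Actual totals differ because some tensors typically remain unquantized:

# Back-of-the-envelope bits-per-weight for 4-bit, group-size-64 quantization,
# assuming one fp16 scale and one fp16 bias per group (MLX's affine scheme).
bits, group_size = 4, 64
effective_bits = bits + 2 * 16 / group_size  # 4 + 0.5 = 4.5 bits per weight
print(f"effective bits/weight: {effective_bits}")           # 4.5
print(f"fraction of fp16 size: {effective_bits / 16:.1%}")  # ~28%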

Citation

@misc{flux-dev,
  author = {Black Forest Labs},
  title = {FLUX.1-dev},
  publisher = {Black Forest Labs},
  year = {2024},
  url = {https://huggingface.co/black-forest-labs/FLUX.1-dev}
}

@software{flux-swift,
  author = {mzbac},
  title = {flux.swift: Swift implementation of FLUX models},
  url = {https://github.com/mzbac/flux.swift},
  year = {2024}
}