Maxclon/Flux_Models
No model card has been provided for this repository.
Downloads last month: 20
Format: GGUF
Model size: 12B params
Architecture: flux
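Since the repository ships without a model card, listing its files is the simplest way to see which GGUF quantizations are actually available. Below is a minimal sketch using the huggingface_hub Python client; the file name passed to hf_hub_download is hypothetical and should be replaced with one of the names printed by the listing step.

```python
# Minimal sketch: inspect the repo and fetch one GGUF file.
# The filename "flux-Q4_1.gguf" is hypothetical; use a name from the listing.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "Maxclon/Flux_Models"

# Print every GGUF file in the repository.
for name in list_repo_files(repo_id):
    if name.endswith(".gguf"):
        print(name)

# Download a specific quantization once its exact name is known.
local_path = hf_hub_download(repo_id=repo_id, filename="flux-Q4_1.gguf")
print("Downloaded to", local_path)
```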
Hardware compatibility (quantization variants):
  2-bit   Q2_K     4.03 GB
  3-bit   Q3_K_S   2.1 GB
  4-bit   Q4_1     7.53 GB
  6-bit   Q6_K     3.91 GB
  (+2 more variants available in the repository)
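If these GGUF files are quantized FLUX.1 transformer weights (the 12B parameter count and the flux architecture tag suggest so), they can likely be loaded through the GGUF support in recent diffusers releases (0.32 or later). The sketch below is an assumption-laden example, not documentation for this repo: the file name flux-Q4_1.gguf is hypothetical, and black-forest-labs/FLUX.1-dev is only an assumed base repository for the remaining pipeline components; substitute whatever matches this checkpoint.

```python
# Sketch of loading a GGUF-quantized FLUX transformer with diffusers.
# Assumes diffusers >= 0.32 with GGUF support installed (pip install diffusers gguf).
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

# Hypothetical file name; check the Files tab of Maxclon/Flux_Models for the real one.
ckpt_url = "https://huggingface.co/Maxclon/Flux_Models/blob/main/flux-Q4_1.gguf"

transformer = FluxTransformer2DModel.from_single_file(
    ckpt_url,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# The GGUF file holds only the transformer; text encoders, VAE, and scheduler
# come from an assumed base FLUX repository.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keep VRAM usage manageable

image = pipe("a photo of a red fox in the snow", num_inference_steps=28).images[0]
image.save("fox.png")
```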
Inference Providers
This model isn't deployed by any Inference Provider.