Will additional CLIP-ViT checkpoints for different samples-seen scales be released?
Dear author,
Thank you for releasing the scaling-laws-for-comparison models and sharing the accompanying paper.
In Section 2.1 (Pre-training setup), you mention that CLIP and MaMMUT models were trained over 11 samples-seen scales
(D = {1.28 M, 3.07 M, 6.4 M, 12.8 M, 30.7 M, 64 M, 128 M, 307 M, 640 M, 1.28 B, 3.07 B})
and 15 vision configurations (ViT-S/M/B/L/H × patch sizes 14/16/32).
I would like to kindly ask whether there are plans to release more CLIP-ViT checkpoints corresponding to these additional samples-seen scales,
as only a subset appears to be available on Hugging Face.
Such releases would greatly help researchers reproduce and extend your scaling-law analyses.
Thank you again for this valuable work and for making so many resources openly available!
Hi @SoaringYao ,
thank you for your interest. Yes, we are currently uploading all model checkpoints (including intermediate checkpoints) for all scales and model sizes.
The 3B models are already uploaded. We will make the rest available as soon as possible.
Best,
Mehdi
Dear Mehdi,
Thank you very much for the update and for uploading the 3B CLIP models — it’s greatly appreciated!
I noticed that several smaller-scale CLIP models (e.g., ViT-S and ViT-B variants at lower samples-seen scales) are still missing from the Hugging Face repository.
May I kindly ask if there are plans or an approximate timeline for releasing these smaller models as well?
They would be extremely valuable for reproducing and extending the scaling-law analyses at finer granularity.
Thank you again for your efforts in maintaining and sharing this impressive collection of models!
Best regards,
SoaringYao
Hey @SoaringYao ,
the total size of all intermediate checkpoints is substantial, reaching up to 500 TB (including training runs with a constant LR schedule). We have currently exhausted our HF capacity (after uploading 75 TB of checkpoints) and are looking into ways to provide the full collection. This will take some time; we will post an update as soon as we know more.
In the meantime, to experiment with scaling law derivation, full eval data of all the end checkpoints can be accessed here: https://github.com/LAION-AI/scaling-laws-for-comparison/tree/main/scaling_laws/data
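As a starting point for such experiments, here is a minimal sketch of fitting a power-law scaling curve, error = a · N^(−b), to (samples-seen, error) pairs via a log-log linear fit. The samples-seen values mirror the scales listed above, but the error values are synthetic placeholders, not the paper's actual eval numbers; the real eval files in the linked repo may use a different schema.

```python
# Hypothetical sketch: fit a power law E = a * N^(-b) to
# (samples-seen, error) pairs using a linear fit in log-log space.
import numpy as np

# Samples-seen scales from the thread (1.28M ... 3.07B).
samples_seen = np.array([1.28e6, 3.07e6, 6.4e6, 12.8e6, 30.7e6,
                         64e6, 128e6, 307e6, 640e6, 1.28e9, 3.07e9])

# Synthetic errors generated from a known power law (a=2.0, b=0.15),
# purely for illustration; substitute real eval results here.
a_true, b_true = 2.0, 0.15
error = a_true * samples_seen ** (-b_true)

# log E = log a - b * log N, so a degree-1 polyfit recovers (a, b).
slope, intercept = np.polyfit(np.log(samples_seen), np.log(error), 1)
b_fit, a_fit = -slope, np.exp(intercept)
print(f"a = {a_fit:.3f}, b = {b_fit:.3f}")
```

With real eval data, one would replace the synthetic `error` array with measured metrics (e.g. zero-shot error) and possibly a saturating form E = a · N^(−b) + c, which requires a nonlinear fit instead.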
Best,
Jenia