The full collection of our EmoNet effort. More info is available at: https://huggingface.co/blog/felfri/emonet
LAION eV (non-profit)
AI & ML interests: datasets, computer vision
Releases related to Open-ψ (Open-Sci) Collective
Re-LAION-5B-research
OpenCLIP models trained on DataComp (https://huggingface.co/papers/2304.14108):
- laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K (Zero-Shot Image Classification; 69.7k downloads, 120 likes)
- laion/CLIP-ViT-B-16-DataComp.XL-s13B-b90K (Zero-Shot Image Classification; 35.5k downloads, 8 likes)
- laion/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K (Zero-Shot Image Classification; 33.5k downloads, 8 likes)
- laion/CLIP-ViT-B-16-DataComp.L-s1B-b8K (Zero-Shot Image Classification; 2k downloads, 1 like)
CLAP is to audio what CLIP is to image.
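The audio-text matching that CLAP provides mirrors the image-text sketch above. A minimal example using Hugging Face transformers is shown below; the `laion/clap-htsat-unfused` checkpoint name, the captions, and the synthetic noise clip are assumptions for illustration, since this listing does not name a specific CLAP model.

```python
import numpy as np
from transformers import ClapModel, ClapProcessor

# Assumed checkpoint: a public LAION CLAP model on the Hub.
CKPT = "laion/clap-htsat-unfused"
model = ClapModel.from_pretrained(CKPT)
processor = ClapProcessor.from_pretrained(CKPT)

# Placeholder audio: one second of noise at CLAP's 48 kHz sampling rate.
audio = np.random.randn(48_000).astype(np.float32)
inputs = processor(
    text=["dog barking", "rain falling"],
    audios=[audio],
    sampling_rate=48_000,
    return_tensors="pt",
    padding=True,
)
outputs = model(**inputs)
# One row per audio clip, one column per candidate caption.
probs = outputs.logits_per_audio.softmax(dim=-1)
print(probs)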
openMaMMUT models trained on DataComp-1.4B:
- laion/openMaMMUT-ViT-L-14-DataComp-1.4B-s12.8B-b180K (Zero-Shot Image Classification; 31 downloads, 4 likes)
- Scaling Laws for Robust Comparison of Open Foundation Language-Vision Models and Datasets (paper, 2506.04598; 6 upvotes)
- laion/openMaMMUT-ViT-L-14-512x512-pt_datacomp1b-ft_DFN512x512-s293M-b32k (Zero-Shot Image Classification; 23 downloads)
Re-LAION-5B research safe
OpenCLIP models trained on LAION-2B:
- laion/CLIP-ViT-bigG-14-laion2B-39B-b160k (Zero-Shot Image Classification; 110k downloads, 295 likes)
- laion/CLIP-ViT-g-14-laion2B-s34B-b88K (Zero-Shot Image Classification; 11k downloads, 27 likes)
- laion/CLIP-ViT-g-14-laion2B-s12B-b42K (1B params; 34.4k downloads, 44 likes)
- laion/CLIP-ViT-H-14-laion2B-s32B-b79K (Zero-Shot Image Classification; 1.0B params; 2.77M downloads, 414 likes)