FineWeb: decanting the web for the finest text data at scale • Space • 1.25k likes • Generate high-quality text data for LLMs using FineWeb
Can You Run It? LLM version • Space • 1.03k likes • Determine GPU requirements for running large language models
meta-llama/Meta-Llama-3-8B-Instruct • Text Generation • 8B params • Updated Jun 18, 2025 • 1.54M downloads • 4.34k likes
nvidia/Aegis-AI-Content-Safety-LlamaGuard-Permissive-1.0 • Text Classification • Updated Sep 22, 2025 • 75 downloads • 18 likes
nvidia/Aegis-AI-Content-Safety-LlamaGuard-Defensive-1.0 • Text Classification • Updated Sep 22, 2025 • 1.81k downloads • 25 likes
Model Memory Utility • Space • 994 likes • Calculate vRAM needed for model training and inference
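The "Can You Run It? LLM version" and "Model Memory Utility" Spaces estimate memory requirements from a model's parameter count and precision. The sketch below is a minimal back-of-envelope version of that kind of estimate, assuming float16 weights and simple multiplicative overhead factors; the factors, function name, and constants are illustrative assumptions, not the Spaces' actual implementation.

```python
# Rough VRAM estimate from parameter count and dtype, in the spirit of the
# memory-estimation Spaces listed above. Overhead factors are assumptions.

BYTES_PER_PARAM = {"float32": 4, "float16": 2, "bfloat16": 2, "int8": 1, "int4": 0.5}

def estimate_vram_gb(num_params: float, dtype: str = "float16", overhead: float = 1.2) -> float:
    """Estimate GPU memory in GiB: parameter bytes times a fudge factor
    for activations, KV cache, and framework overhead (assumed ~20%)."""
    weight_bytes = num_params * BYTES_PER_PARAM[dtype]
    return weight_bytes * overhead / 1024**3

if __name__ == "__main__":
    # Example: an 8B-parameter model such as Meta-Llama-3-8B-Instruct in float16.
    print(f"~{estimate_vram_gb(8e9, 'float16'):.1f} GB for inference")
    # Full fine-tuning with Adam roughly quadruples the per-parameter cost
    # (weights + gradients + two optimizer states), before activations.
    print(f"~{estimate_vram_gb(8e9, 'float16', overhead=4.0):.1f} GB for training (rough)")
```

For the 8B example this yields roughly 18 GB for float16 inference, which matches the order of magnitude such tools report; actual requirements depend on sequence length, batch size, and framework.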