Boning Cui (Bc-AI)
2 followers · 16 following
AI & ML interests
I like LLMs and VLMs.
Recent Activity
replied to sagar007's post about 10 hours ago:
I built a Multimodal Vision-Language Model from Gemma-270M + CLIP! Just finished training my multimodal model on the full LLaVA-Instruct-150K dataset (157K samples) and wanted to share the results!

What I Built: A vision-language model that can understand images and answer questions about them, combining:
- Google Gemma-3-270M (language)
- OpenAI CLIP ViT-Large/14 (vision)
- LoRA fine-tuning for efficiency

Training Stats:
- 157,712 training samples (full LLaVA dataset)
- 3 epochs on A100 40GB
- ~9 hours training time
- Final loss: 1.333 training / 1.430 validation
- Only 18.6M trainable params (3.4% of 539M total)

Benchmark Results:
- VQA Accuracy: 53.8%
- Works great for: animal detection, room identification, scene understanding

**Try it yourself:**
- Model: https://huggingface.co/sagar007/multigemma
- Demo: https://huggingface.co/spaces/sagar007/Multimodal-Gemma
- GitHub: https://github.com/sagar431/multimodal-gemma-270m

Built with PyTorch Lightning + MLflow for experiment tracking. Full MLOps pipeline with CI/CD!

Would love to hear your feedback!

#multimodal #gemma #clip #llava #vision-language #pytorch
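The post describes the standard LLaVA-style recipe: patch features from a frozen CLIP encoder are projected into the language model's embedding space and prepended to the text tokens, and only the projector plus LoRA adapters are trained. Below is a minimal PyTorch sketch of that wiring. The two-layer MLP projector, the 1024-dim CLIP features, the 640-dim language-model hidden size, and the 256-patch count are assumptions for illustration, not details taken from sagar007's repo.

```python
import torch
import torch.nn as nn

CLIP_DIM = 1024  # hidden size of CLIP ViT-Large/14 patch features (assumed)
LM_DIM = 640     # hidden size assumed for Gemma-3-270M token embeddings

class VisionProjector(nn.Module):
    """Maps CLIP patch features into the language model's embedding space."""
    def __init__(self, clip_dim: int, lm_dim: int):
        super().__init__()
        # Two-layer MLP projector, LLaVA-1.5 style (an assumption here).
        self.proj = nn.Sequential(
            nn.Linear(clip_dim, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim),
        )

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        return self.proj(patch_feats)

# Stand-in for CLIP output: batch of 2 images, 256 patch features each.
patches = torch.randn(2, 256, CLIP_DIM)
image_tokens = VisionProjector(CLIP_DIM, LM_DIM)(patches)

# Stand-in for embedded text prompt tokens from the language model.
text_tokens = torch.randn(2, 32, LM_DIM)

# Prepend projected image tokens to the text sequence; this combined
# sequence would be passed to the LM via `inputs_embeds`.
inputs_embeds = torch.cat([image_tokens, text_tokens], dim=1)
print(inputs_embeds.shape)  # torch.Size([2, 288, 640])
```

In a real training run only the projector and the LoRA matrices inside the language model would receive gradients, which is consistent with the 18.6M trainable parameters (3.4% of the total) reported in the post.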
liked a Space about 10 hours ago: google/functiongemma-tuning-lab
liked a model 8 days ago: darkc0de/XortronCriminalComputingConfig
Organizations
Bc-AI's datasets (1)
Bc-AI/ChatPILE-v2 · Updated Nov 1, 2025 · 1