---
license_name: gemma-terms
license_link: https://ai.google.dev/gemma/terms
language:
- en
---
# LLaVA-Gemma Model Card
_This model card corresponds to the 2B version of the model with the CLIP-based vision encoder._

## Overview

`llava-gemma-2b` is a large multimodal model (LMM) trained using the [LLaVA-v1.5 framework](https://arxiv.org/abs/2310.03744), with the 2-billion-parameter `google/gemma-2b-it` model as its language backbone.
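
To make the architecture concrete, here is a minimal sketch of the LLaVA-v1.5-style vision-language connector: a small MLP that projects CLIP patch features into the language model's embedding space. The class name, the hidden sizes (1024 for a CLIP ViT-L vision tower, 2048 for `gemma-2b-it`), and the two-layer GELU design are illustrative assumptions, not an excerpt from the released training code.

```python
import torch
import torch.nn as nn


class VisionLanguageConnector(nn.Module):
    """Illustrative LLaVA-v1.5-style projector (assumed design, not the released code)."""

    def __init__(self, vision_hidden_size: int = 1024, lm_hidden_size: int = 2048):
        super().__init__()
        # Two-layer MLP with GELU, mapping vision-tower features to the LM embedding width.
        self.proj = nn.Sequential(
            nn.Linear(vision_hidden_size, lm_hidden_size),
            nn.GELU(),
            nn.Linear(lm_hidden_size, lm_hidden_size),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_hidden_size) from the CLIP vision tower.
        # Returns (batch, num_patches, lm_hidden_size), ready to be spliced into the LM input.
        return self.proj(patch_features)


# Example: project a dummy batch of 576 patch embeddings (24x24 grid for a 336px CLIP ViT-L/14).
connector = VisionLanguageConnector()
patches = torch.randn(1, 576, 1024)
print(connector(patches).shape)  # torch.Size([1, 576, 2048])
```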
## Uses
The model has been fine-tuned for multimodal benchmark evaluations, but it can also be used as a multimodal chatbot.

## Bias, Risks, and Limitations

This model has not been assessed for harm or biases, and should not be used for sensitive applications where it may cause harm.
## How to Get Started with the Model
Using the LLaVA-Gemma models currently requires a custom fork of the [`LLaVA`](https://github.com/haotian-liu/LLaVA) library. _We will release converted checkpoints compatible with the Hugging Face implementation of LLaVA shortly._
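
As a rough preview, once the converted checkpoints are published, loading should follow the standard `transformers` LLaVA interface. The sketch below is illustrative only: the checkpoint path is a placeholder, and the `<image>` placeholder plus the chat formatting are assumptions about how the conversion will be configured.

```python
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Placeholder identifier; substitute the released checkpoint name once available.
checkpoint = "path/to/converted-llava-gemma-2b"

model = LlavaForConditionalGeneration.from_pretrained(checkpoint)
processor = AutoProcessor.from_pretrained(checkpoint)

# Build a Gemma-style chat prompt; "<image>" marks where the image features are inserted.
prompt = processor.tokenizer.apply_chat_template(
    [{"role": "user", "content": "<image>\nWhat is shown in this image?"}],
    tokenize=False,
    add_generation_prompt=True,
)

# Any RGB image works here; this URL is only an example.
image = Image.open(
    requests.get("https://www.ilankelman.org/stopsigns/australia.jpg", stream=True).raw
)

inputs = processor(text=prompt, images=image, return_tensors="pt")
generate_ids = model.generate(**inputs, max_new_tokens=60)
print(processor.batch_decode(generate_ids, skip_special_tokens=True)[0])
```

Until those checkpoints are available, inference goes through the custom LLaVA fork linked above.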
## Training Details
The `llava-gemma-2b` model was trained on 8 Gaudi 2 accelerators.
### Training Data
The model was trained on the LLaVA-v1.5 data mixture, which consists of:

- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following examples.
- 450K academic-task-oriented VQA examples.
- 40K ShareGPT conversations.
## Evaluation
| LM Backbone | Vision Model | Pretrained Connector | GQA   | MME cognition | MME perception | MM-Vet | POPE accuracy | POPE F1 | VQAv2 | TextVQA | ScienceQA Image | MMVP  |
| ----------- | ------------ | -------------------- | ----- | ------------- | -------------- | ------ | ------------- | ------- | ----- | ------- | --------------- | ----- |
| gemma-2b-it | CLIP         | Yes                  | 0.531 | 236.071       | 1130.492       | 17.706 | 0.850         | 0.839   | 70.65 | 28.06   | 0.564           | 0.287 |
| gemma-2b-it | CLIP         | No                   | 0.481 | 247.857       | 934.611        | 13.119 | 0.784         | 0.762   | 61.74 |         | 0.549           | 0.180 |
| gemma-7b-it | CLIP         | Yes                  | 0.472 | 253.571       | 894.910        | 18.165 | 0.848         | 0.829   | 68.7  |         | 0.625           | 0.327 |
| gemma-7b-it | CLIP         | No                   | 0.472 | 278.214       | 857.274        | 19.083 | 0.782         | 0.734   | 65.09 |         | 0.636           | 0.240 |
| gemma-2b-it | DinoV2       | Yes                  | 0.587 | 307.143       | 1132.970       | 19.128 | 0.853         | 0.838   | 71.37 | 12.53   | 0.555           | 0.227 |
| gemma-2b-it | DinoV2       | No                   | 0.501 | 308.929       | 959.351        | 14.541 | 0.793         | 0.772   | 61.65 | 11.1    | 0.568           | 0.180 |