xiangan committed · verified · Commit 21cbb17 · 1 Parent: d6fbfb7

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED

```diff
@@ -70,10 +70,10 @@ LLaVA-OneVision-1.5 is a fully open-source family of large multimodal models (LM
 
 ## Dataset
 
-| Description | Link |
-|---|---|
-| Mid-training data for LLaVA-OneVision-1.5 | [🤗 Download (Uploading!)](https://huggingface.co/datasets/lmms-lab/LLaVA-One-Vision-1.5-Mid-Training-85M) |
-| SFT data for LLaVA-OneVision-1.5 | [🤗 Download (Uploading!)](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-1.5-Insturct-Data) |
+| Description | Link | Status |
+|--------------------|--------------------------------------------------------------------------------------------------------|-------------|
+| LLaVA-OneVision-1.5-Mid-Training-85M | [🤗HF / Mid-Training 85M](https://huggingface.co/datasets/mvp-lab/LLaVA-OneVision-1.5-Mid-Training-85M) | Uploading… |
+| LLaVA-OneVision-1.5-Instruct | [🤗HF / Instruct-Data](https://huggingface.co/datasets/mvp-lab/LLaVA-OneVision-1.5-Instruct-Data) | Available |
 
 ## Evaluation Results
 All evaluations were conducted using [lmms_eval](https://github.com/EvolvingLMMs-Lab/lmms-eval).
```