Update README.md
---

# Dataset Card for "VizWiz-VQA"

<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>

# Large-scale Multi-modality Models Evaluation Suite

> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`

🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)

# This Dataset

This is a formatted version of [VizWiz-VQA](https://vizwiz.org/tasks-and-datasets/vqa/). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
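
For a quick look at the formatted data outside the evaluation pipeline, it can be loaded directly with the `datasets` library. The snippet below is a minimal sketch and not part of the original card: the repo id `lmms-lab/VizWiz-VQA` is assumed from this card's namespace, and the split and column names should be checked against the printed structure.

```python
# Minimal sketch: load the formatted dataset with the Hugging Face `datasets` library.
# The repo id below is assumed from this card's namespace; split and column names
# may differ, so inspect the printed structure before relying on them.
from datasets import load_dataset

data = load_dataset("lmms-lab/VizWiz-VQA")

print(data)  # lists the available splits and their features
```

Inside `lmms-eval` itself the dataset is wired up through the pipeline's task configuration, so evaluations are normally launched via the CLI rather than by loading the data manually; see the linked documentation for the exact task name.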

If you use this dataset, please cite the original VizWiz-VQA paper:

```bibtex
@inproceedings{gurari2018vizwiz,
  title={Vizwiz grand challenge: Answering visual questions from blind people},
  author={Gurari, Danna and Li, Qing and Stangl, Abigale J and Guo, Anhong and Lin, Chi and Grauman, Kristen and Luo, Jiebo and Bigham, Jeffrey P},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={3608--3617},
  year={2018}
}
```

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)