Update README.md
README.md CHANGED
@@ -28,3 +28,6 @@ Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on Vi
# Benchmarks
- [MVBench](https://github.com/OpenGVLab/Ask-Anything/tree/main/video_chat2): a comprehensive benchmark for multimodal video understanding.
+ - [CRPE](https://github.com/OpenGVLab/all-seeing/tree/main/all-seeing-v2): a benchmark covering all elements of the relation triplet (subject, predicate, object), providing a systematic platform for evaluating relation comprehension ability.
+ - [MM-NIAH](https://github.com/OpenGVLab/MM-NIAH): a comprehensive benchmark for comprehension of long multimodal documents.
+ - [GMAI-MMBench](https://huggingface.co/datasets/OpenGVLab/GMAI-MMBench): a comprehensive multimodal evaluation benchmark towards general medical AI.
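Since GMAI-MMBench above is hosted as a Hugging Face dataset, it can presumably be pulled with the standard `datasets` library. The snippet below is a minimal sketch under that assumption; the `"test"` split name is a guess, so check the dataset card for the actual config and split names.

```python
# Minimal sketch: load GMAI-MMBench from the Hugging Face Hub with the
# standard `datasets` API. The split name below is an assumption; consult
# the dataset card for the real config/split names.
from datasets import load_dataset

ds = load_dataset("OpenGVLab/GMAI-MMBench", split="test")  # "test" is an assumed split name
print(len(ds))       # number of evaluation samples
print(ds[0].keys())  # inspect the fields of one sample
```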