An open-source evaluation toolkit for large vision-language models (LVLMs), supporting GPT-4V, Gemini, QwenVLPlus, 30+ HF models, and 15+ benchmarks.
VLMEvalKit (Python package name: vlmeval) is an open-source evaluation toolkit for large vision-language models (LVLMs). It enables one-command evaluation of LVLMs on various benchmarks, without the heavy workload of preparing data under multiple repositories. In VLMEvalKit, we adopt generation-based evaluation for all LVLMs, and provide evaluation results obtained with both exact matching and LLM-based answer extraction.
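For example, one benchmark/model pair can be evaluated with a single command. The sketch below launches `run.py` from Python; the `--data` and `--model` flag names are assumptions here, and Quickstart documents the exact interface:

```python
# One-command evaluation sketch: runs one model on one benchmark.
# Flag names are assumptions; dataset and model names come from the tables below.
import subprocess

subprocess.run(
    ['python', 'run.py',
     '--data', 'MMBench_DEV_EN',          # a dataset name from the table below
     '--model', 'idefics_9b_instruct'],   # a model name from supported_VLM
    check=True,
)
```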
VLMEvalKit uses a `.env` file to manage all environment variables used in the toolkit; see Quickstart for more details.
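As a minimal sketch of how such a file is typically consumed (not vlmeval's internal code; the variable name below is illustrative, see Quickstart for the exact names VLMEvalKit expects):

```python
# Sketch: reading keys from a .env file into the environment.
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # loads ./.env into os.environ
judge_key = os.getenv('OPENAI_API_KEY')  # illustrative variable name
print('judge LLM enabled' if judge_key else 'falling back to exact matching')
```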
The performance numbers on our official multi-modal leaderboards can be downloaded from here!
OpenCompass Multi-Modal Leaderboard: Download All DETAILED Results.
Supported Datasets
| Dataset | Dataset Names (for run.py) | Task | Inference | Evaluation | Results |
| --- | --- | --- | --- | --- | --- |
| MMBench Series: MMBench, MMBench-CN, CCBench | MMBench_DEV_[EN/CN] MMBench_TEST_[EN/CN] CCBench | Multi-choice | ✅ | ✅ | MMBench Leaderboard |
| MMStar | MMStar | Multi-choice | ✅ | ✅ | Open_VLM_Leaderboard |
| MME | MME | Yes or No | ✅ | ✅ | Open_VLM_Leaderboard |
| SEEDBench_IMG | SEEDBench_IMG | Multi-choice | ✅ | ✅ | Open_VLM_Leaderboard |
| MM-Vet | MMVet | VQA | ✅ | ✅ | Open_VLM_Leaderboard |
| MMMU | MMMU_DEV_VAL/MMMU_TEST | Multi-choice | ✅ | ✅ | Open_VLM_Leaderboard |
| MathVista | MathVista_MINI | VQA | ✅ | ✅ | Open_VLM_Leaderboard |
| ScienceQA_IMG | ScienceQA_[VAL/TEST] | Multi-choice | ✅ | ✅ | Open_VLM_Leaderboard |
| COCO Caption | COCO_VAL | Caption | ✅ | ✅ | Open_VLM_Leaderboard |
| HallusionBench | HallusionBench | Yes or No | ✅ | ✅ | Open_VLM_Leaderboard |
| OCRVQA | OCRVQA_[TESTCORE/TEST] | VQA | ✅ | ✅ | TBD |
| TextVQA | TextVQA_VAL | VQA | ✅ | ✅ | TBD |
| ChartQA | ChartQA_TEST | VQA | ✅ | ✅ | TBD |
| AI2D | AI2D_TEST | Multi-choice | ✅ | ✅ | Open_VLM_Leaderboard |
| LLaVABench | LLaVABench | VQA | ✅ | ✅ | Open_VLM_Leaderboard |
| DocVQA | DocVQA_[VAL/TEST] | VQA | ✅ | ✅ | TBD |
| InfoVQA | InfoVQA_[VAL/TEST] | VQA | ✅ | ✅ | TBD |
| OCRBench | OCRBench | VQA | ✅ | ✅ | Open_VLM_Leaderboard |
| Core-MM | CORE_MM | VQA | ✅ | N/A | |
| RealWorldQA | RealWorldQA | VQA | ✅ | ✅ | TBD |
If you set the API key, VLMEvalKit will use a judge LLM to extract the answer from the model output; otherwise it uses the exact matching mode (searching for "Yes", "No", "A", "B", "C", etc. in the output string). Exact matching can only be applied to Yes-or-No tasks and Multi-choice tasks.
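For intuition, here is a toy sketch of the exact-matching fallback (illustrative only, not the actual vlmeval implementation):

```python
# Toy exact matching for multi-choice answers: succeed only when exactly one
# option letter appears in the model output; otherwise a judge LLM (if
# configured) would be asked to extract the answer instead.
import re

def exact_match(prediction, n_choices):
    letters = [chr(ord('A') + i) for i in range(n_choices)]
    hits = [l for l in letters if re.search(rf'\b{l}\b', prediction)]
    return hits[0] if len(hits) == 1 else None

print(exact_match('The answer is B.', 4))   # -> 'B'
print(exact_match('Maybe A, maybe B.', 4))  # -> None (ambiguous)
```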
Supported API Models
| GPT-4-Vision-Preview🎞️🚅 | GeminiProVision🎞️🚅 | QwenVLPlus🎞️🚅 | QwenVLMax🎞️🚅 | Step-1V🎞️🚅 |
| --- | --- | --- | --- | --- |
Supported PyTorch / HF Models
| IDEFICS-[9B/80B/v2-8B]-Instruct🎞️🚅 | InstructBLIP-[7B/13B] | LLaVA-[v1-7B/v1.5-7B/v1.5-13B] | MiniGPT-4-[v1-7B/v1-13B/v2-7B] |
| --- | --- | --- | --- |
| mPLUG-Owl2🎞️ | OpenFlamingo-v2🎞️ | PandaGPT-13B | Qwen-VL🎞️🚅, Qwen-VL-Chat🎞️🚅 |
| VisualGLM-6B🚅 | InternLM-XComposer-7B🚅🎞️ | ShareGPT4V-[7B/13B]🚅 | TransCore-M |
| LLaVA (XTuner)🚅 | CogVLM-17B-Chat🚅 | SharedCaptioner🚅 | CogVLM-Grounding-Generalist🚅 |
| Monkey🚅 | EMU2-Chat🚅🎞️ | Yi-VL-[6B/34B] | MMAlaya🚅 |
| InternLM-XComposer2-[1.8B/7B]🚅🎞️ | MiniCPM-[V1/V2]🚅 | OmniLMM-12B | InternVL-Chat Series🚅 |
| DeepSeek-VL🎞️ | LLaVA-NeXT🚅 | | |
🎞️: Support multiple images as inputs.
🚅: Model can be used without any additional configuration / operation.
Transformers Version Recommendation:

Note that some VLMs may not be able to run under certain transformers versions; we recommend the following settings to evaluate each VLM:

- `transformers==4.33.0` for: Qwen series, Monkey series, InternLM-XComposer series, mPLUG-Owl2, OpenFlamingo v2, IDEFICS series, VisualGLM, MMAlaya, SharedCaptioner, MiniGPT-4 series, InstructBLIP series, PandaGPT.
- `transformers==4.37.0` for: LLaVA series, ShareGPT4V series, TransCore-M, LLaVA (XTuner), CogVLM series, EMU2 series, Yi-VL series, MiniCPM-V series, OmniLMM-12B, DeepSeek-VL series, InternVL series.
- `transformers==4.39.0` for: LLaVA-NeXT series.
- `pip install git+https://github.com/huggingface/transformers` for: IDEFICS2.
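Before launching an evaluation, a quick sanity check (illustrative) can confirm that the installed version matches the recommendation for your model family:

```python
# Verify the pinned transformers version, e.g. 4.37.0 for the LLaVA series.
import transformers

expected = '4.37.0'
assert transformers.__version__ == expected, (
    f'have {transformers.__version__}, want {expected}'
)
```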
```python
# Demo
from vlmeval.config import supported_VLM

model = supported_VLM['idefics_9b_instruct']()
# Forward a single image
ret = model.generate(['assets/apple.jpg', 'What is in this image?'])
print(ret)  # The image features a red apple with a leaf on it.
# Forward multiple images
ret = model.generate(['assets/apple.jpg', 'assets/apple.jpg', 'How many apples are there in the provided images?'])
print(ret)  # There are two apples in the provided images.
```
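The valid model names are the keys of `supported_VLM`; to list them all, for example:

```python
# Print every model name accepted by supported_VLM.
from vlmeval.config import supported_VLM

print(sorted(supported_VLM))  # 'idefics_9b_instruct' is one of the entries
```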
See QuickStart for a quick start guide.
To develop custom benchmarks, VLMs, or simply contribute other codes to VLMEvalKit, please refer to Development_Guide.
The codebase is designed to make it easy for VLM developers to evaluate their own models: to run a VLM on all supported benchmarks, one only needs to implement a single `generate` function (a sketch follows below), while all other workloads (data downloading, data preprocessing, prediction inference, metric calculation) are handled by the codebase. The codebase is not designed to reproduce the exact accuracy numbers reported in the original papers of all third-party benchmarks: since VLMEvalKit adopts generation-based evaluation throughout, results may differ from benchmarks that were originally scored with other evaluation protocols.
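A minimal sketch of that interface (a hypothetical class, not an official template), assuming the message format shown in the demo above:

```python
# Hypothetical wrapper: `generate` is the only method a developer implements.
class EchoVLM:
    """Dummy model that answers with a fixed summary, showing the interface."""

    def generate(self, message):
        # `message` is a list interleaving image paths and text prompts,
        # as in the demo above.
        images = [m for m in message if m.lower().endswith(('.jpg', '.png'))]
        texts = [m for m in message if m not in images]
        return f"Saw {len(images)} image(s); question: {' '.join(texts)}"

print(EchoVLM().generate(['assets/apple.jpg', 'What is in this image?']))
```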
If you use VLMEvalKit in your research, or wish to refer to the published open-source evaluation results, please use the following BibTeX entry, together with the BibTeX entries of the specific VLMs / benchmarks you used.
```bib
@misc{2023opencompass,
    title={OpenCompass: A Universal Evaluation Platform for Foundation Models},
    author={OpenCompass Contributors},
    howpublished = {\url{https://github.com/open-compass/opencompass}},
    year={2023}
}
```