Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!
Multi-Modality Arena is an evaluation platform for large multi-modality models. Following FastChat, two anonymous models are compared side-by-side on a visual question-answering task. We have released the demo and welcome everyone to participate in this evaluation initiative.
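Arena-style platforms typically aggregate pairwise battle votes into model ratings. As an illustration (the platform's exact scoring may differ), a single Elo update after one battle looks like this:

```python
def elo_update(r_a, r_b, winner, k=32):
    """One Elo rating update after a battle between models A and B.

    r_a, r_b: current ratings; winner: 'A', 'B', or 'tie';
    k: update step size (32 is a common default, an assumption here).
    """
    # Expected score of A given the rating gap.
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    score_a = {"A": 1.0, "B": 0.0, "tie": 0.5}[winner]
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))
    return r_a_new, r_b_new

# Two equally rated models; A wins, so A gains what B loses:
# elo_update(1000, 1000, "A") -> (1016.0, 984.0)
```

Repeating this over many anonymous battles yields a leaderboard ordering of the models.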
LVLM-eHub is a comprehensive evaluation benchmark for publicly available large vision-language models (LVLMs). It extensively evaluates 8 LVLMs across 6 categories of multimodal capabilities, using 47 datasets and 1 online arena platform.
The following models are currently involved in randomized battles:
More details about these models can be found at ./model_detail/. We will try to schedule computing resources to host more multi-modality models in the arena.
If you are interested in any part of our VLarena platform, feel free to join the WeChat group.
```shell
conda create -n arena python=3.10
conda activate arena
pip install numpy gradio uvicorn fastapi
```
To serve using the web UI, you need three main components: a web server that interfaces with users, model workers that host two or more models, and a controller that coordinates the web server and the model workers.
Here are the commands to run in your terminal:

```shell
python controller.py
```

This controller manages the distributed workers.
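Conceptually, the controller keeps a registry of which worker hosts which model, so the web server can route each request. A minimal sketch of that idea (not the repo's actual implementation, whose internals may differ):

```python
class Controller:
    """Toy registry illustrating the controller's role:
    workers register themselves, the web server looks them up."""

    def __init__(self):
        self.workers = {}  # model name -> worker address

    def register_worker(self, model_name, address):
        # Called by a model worker once its model has finished loading.
        self.workers[model_name] = address

    def get_worker_address(self, model_name):
        # Called by the web server to find where a model is hosted.
        return self.workers.get(model_name)
```

In the real system this exchange happens over HTTP between the processes; the sketch only shows the bookkeeping involved.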
```shell
python model_worker.py --model-name SELECTED_MODEL --device TARGET_DEVICE
```
Wait until the process finishes loading the model and you see "Uvicorn running on ...". The model worker will then register itself with the controller. For each model worker, you need to specify the model and the device you want to use.
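If you script the launch, a small helper can poll a worker until it responds instead of watching the log by hand (a generic sketch; the URL and port below are placeholders, so check the worker's startup log for the real address):

```python
import time
import urllib.request
import urllib.error

def wait_for_server(url, timeout=120.0, interval=2.0):
    """Poll `url` until it responds or `timeout` seconds elapse.

    Returns True once the server answers, False on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            urllib.request.urlopen(url, timeout=5)
            return True
        except (urllib.error.URLError, ConnectionError):
            time.sleep(interval)
    return False

# Hypothetical worker address -- replace with the one from your log:
# wait_for_server("http://localhost:21002")
```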
```shell
python server_demo.py
```

This is the web interface that users interact with.
By following these steps, you will be able to serve your models through the web UI. You can now open your browser and chat with a model. If the models do not show up, try restarting the Gradio web server.
The project is built upon FastChat and open-source multi-modality models.
The project is an experimental research tool for non-commercial purposes only. It has limited safeguards and may generate inappropriate content. It must not be used for anything illegal, harmful, violent, racist, or sexual.