Code for ALBEF: a new vision-language pre-training method
This is the official PyTorch implementation of the ALBEF paper [Blog]. This repository supports pre-training on custom datasets, as well as fine-tuning on VQA, SNLI-VE, NLVR2, image-text retrieval (MSCOCO and Flickr30k), and visual grounding on RefCOCO+. Pre-trained and fine-tuned checkpoints are released.
We provide code in visualize.ipynb to visualize the important areas in an image for each word in a text. Here is an example visualization using the visual grounding checkpoint.
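The visualization is based on Grad-CAM over the model's cross-attention maps. The following is a minimal pure-Python sketch of the underlying computation, assuming per-word attention maps and their gradients with respect to the image-text matching score are available; the function and argument names are illustrative, not the repository's API:

```python
# Grad-CAM-style word importance (illustrative sketch, not ALBEF's actual API):
# for one word, the relevance of each image patch is relu(attention * gradient),
# averaged over attention heads.

def gradcam_for_word(attn, grad):
    """attn, grad: [heads][patches] nested lists for a single word position."""
    heads = len(attn)
    patches = len(attn[0])
    cam = [0.0] * patches
    for h in range(heads):
        for p in range(patches):
            v = attn[h][p] * grad[h][p]
            cam[p] += max(v, 0.0) / heads  # ReLU, then average over heads
    return cam
```

Reshaping `cam` to the patch grid and upsampling it to the image size yields the per-word heatmaps shown in the notebook.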
To pre-train the model, run:

```
python -m torch.distributed.launch --nproc_per_node=8 --use_env Pretrain.py \
  --config ./configs/Pretrain.yaml \
  --output_dir output/Pretrain
```
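The run is driven entirely by the YAML config. Below is a hypothetical fragment of what such a config might contain; every key name and value here is an illustrative assumption, so consult the shipped configs/Pretrain.yaml for the actual fields:

```yaml
# Hypothetical pre-training config fragment; key names are illustrative.
train_file:                    # json files with {image path, caption} entries
  - /path/to/coco_captions.json
  - /path/to/custom_captions.json
image_res: 256                 # input image resolution
batch_size: 64                 # per-GPU batch size
```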
To fine-tune the pre-trained checkpoint for image-text retrieval on Flickr30k, run:

```
python -m torch.distributed.launch --nproc_per_node=8 --use_env Retrieval.py \
  --config ./configs/Retrieval_flickr.yaml \
  --output_dir output/Retrieval_flickr \
  --checkpoint [Pretrained checkpoint]
```
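Retrieval evaluation ranks candidates by the similarity of image and text embeddings. A minimal sketch of the idea, assuming L2-normalized embeddings compared by dot-product (cosine) similarity; the names are illustrative, not the repository's API:

```python
# Contrastive retrieval scoring sketch (illustrative):
# rank images for a text query by cosine similarity of their embeddings.
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def rank_images(text_emb, image_embs):
    t = normalize(text_emb)
    scores = [sum(a * b for a, b in zip(t, normalize(v))) for v in image_embs]
    # indices of images, best match first
    return sorted(range(len(image_embs)), key=lambda i: -scores[i])
```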
To fine-tune for VQA, run:

```
python -m torch.distributed.launch --nproc_per_node=8 --use_env VQA.py \
  --config ./configs/VQA.yaml \
  --output_dir output/vqa \
  --checkpoint [Pretrained checkpoint]
```
To fine-tune for visual entailment on SNLI-VE, run:

```
python -m torch.distributed.launch --nproc_per_node=8 --use_env VE.py \
  --config ./configs/VE.yaml \
  --output_dir output/VE \
  --checkpoint [Pretrained checkpoint]
```
To fine-tune for visual grounding on RefCOCO+, run:

```
python -m torch.distributed.launch --nproc_per_node=8 --use_env Grounding.py \
  --config ./configs/Grounding.yaml \
  --output_dir output/RefCOCO \
  --gradcam_mode itm \
  --block_num 8 \
  --checkpoint [Pretrained checkpoint]
```
NLVR2 requires an additional pre-training step with text-assignment (TA) to adapt the model to image-pair inputs. To perform TA, first set the paths to the json training files in configs/NLVR_pretrain.yaml, then run:
```
python -m torch.distributed.launch --nproc_per_node=8 --use_env Pretrain_nlvr.py \
  --config ./configs/NLVR_pretrain.yaml \
  --output_dir output/NLVR_pretrain \
  --checkpoint [Pretrained checkpoint]
```
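Conceptually, text-assignment trains the model to decide which of the two images in a pair a caption describes. A minimal sketch of how such training examples could be constructed; the function below is a hypothetical illustration, not code from the repository:

```python
# Text-assignment (TA) example construction sketch (illustrative):
# given two images and a caption that describes the first one, randomly
# order the pair and label which position the caption belongs to.
import random

def make_ta_example(img_a, img_b, caption_for_a, rng):
    if rng.random() < 0.5:
        return (img_a, img_b, caption_for_a, 0)  # caption matches image at position 0
    return (img_b, img_a, caption_for_a, 1)      # caption matches image at position 1
```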
We provide the checkpoint after TA pre-training, which can be fine-tuned on NLVR2 as follows:
```
python -m torch.distributed.launch --nproc_per_node=8 --use_env NLVR.py \
  --config ./configs/NLVR.yaml \
  --output_dir output/NLVR \
  --checkpoint [TA pretrained checkpoint]
```
If you find this code useful for your research, please consider citing:
```
@inproceedings{ALBEF,
  title     = {Align before Fuse: Vision and Language Representation Learning with Momentum Distillation},
  author    = {Junnan Li and Ramprasaath R. Selvaraju and Akhilesh Deepak Gotmare and Shafiq Joty and Caiming Xiong and Steven Hoi},
  booktitle = {NeurIPS},
  year      = {2021},
}
```