
awesome foundation and multimodal models

๐Ÿ‘๏ธ + ๐Ÿ’ฌ + ๐ŸŽง = ๐Ÿค–

foundation model - a pre-trained machine learning model that serves as a base for a wide range of downstream tasks. It captures general knowledge from a large dataset and can be fine-tuned to perform specific tasks more effectively.

multimodal model - a model that can process multiple modalities (e.g. text, images, video, audio) at the same time.
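
To make the "base for a wide range of downstream tasks" idea concrete, here is a minimal fine-tuning sketch, assuming PyTorch/torchvision and a hypothetical 10-class downstream task: the pre-trained backbone is frozen and only a new task head is trained.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

# Load a backbone pre-trained on a large dataset (ImageNet).
model = resnet50(weights=ResNet50_Weights.DEFAULT)

# Freeze the general-purpose representation...
for param in model.parameters():
    param.requires_grad = False

# ...and swap in a new head for a hypothetical 10-class downstream task.
model.fc = nn.Linear(model.fc.in_features, 10)

# Fine-tuning then only updates the new head.
optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)
```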

๐Ÿ—ž๏ธ papers

AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining

arXiv GitHub Gradio

Haohe Liu, Qiao Tian, Yi Yuan, Xubo Liu, Xinhao Mei, Qiuqiang Kong, Yuping Wang, Wenwu Wang, Yuxuan Wang, Mark D. Plumbley

  • Date: 10-08-2023
  • Modalities: 💬 + 🎧
  • Tasks: Text-to-Audio, Text-to-Speech
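
A minimal text-to-audio sketch, assuming the community AudioLDM2Pipeline port in 🤗 diffusers and the cvssp/audioldm2 checkpoint from the Hugging Face Hub:

```python
import scipy.io.wavfile
import torch
from diffusers import AudioLDM2Pipeline

# Load the diffusers port of AudioLDM 2 (checkpoint name assumed).
pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2", torch_dtype=torch.float16).to("cuda")

# Generate ~10 seconds of audio from a text prompt.
audio = pipe(
    "birds singing in a forest at dawn",
    num_inference_steps=200,
    audio_length_in_s=10.0,
).audios[0]

# AudioLDM 2 outputs waveforms at a 16 kHz sampling rate.
scipy.io.wavfile.write("birds.wav", rate=16000, data=audio)
```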

OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models

arXiv GitHub Gradio

Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt

  • Date: 02-08-2023
  • Modalities: 👁️ + 💬
  • Tasks: Image Classification, Image Captioning, VQA
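
A minimal model-construction sketch, assuming the open_flamingo package from the paper's GitHub repo; the encoder and checkpoint names are illustrative values taken from the project's examples:

```python
from open_flamingo import create_model_and_transforms

# Pair a frozen CLIP vision encoder with a frozen language model via cross-attention.
model, image_processor, tokenizer = create_model_and_transforms(
    clip_vision_encoder_path="ViT-L-14",
    clip_vision_encoder_pretrained="openai",
    lang_encoder_path="anas-awadalla/mpt-1b-redpajama-200b",
    tokenizer_path="anas-awadalla/mpt-1b-redpajama-200b",
    cross_attn_every_n_layers=1,
)
# Few-shot prompts interleave <image> placeholders with text,
# and inference runs through model.generate(...).
```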

Kosmos-2: Grounding Multimodal Large Language Models to the World

arXiv GitHub Gradio

Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei

  • Date: 26-06-2023
  • Modalities: 👁️ + 💬
  • Tasks: Image Captioning, VQA, Phrase Grounding
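
A minimal grounded-captioning sketch, assuming the 🤗 transformers port and the microsoft/kosmos-2-patch14-224 checkpoint (the file path is a placeholder):

```python
from PIL import Image
from transformers import AutoProcessor, Kosmos2ForConditionalGeneration

processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224")
model = Kosmos2ForConditionalGeneration.from_pretrained("microsoft/kosmos-2-patch14-224")

image = Image.open("photo.jpg")
# The <grounding> token asks the model to tie phrases to image regions.
inputs = processor(text="<grounding>An image of", images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=64)
raw_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Split the raw output into a clean caption plus (phrase, bounding box) pairs.
caption, entities = processor.post_process_generation(raw_text)
```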

LLaVA: Visual Instruction Tuning

arXiv GitHub Gradio

Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee

  • Date: 17-04-2023
  • Modalities: 👁️ + 💬
  • Tasks: Visual Instruction Following, Image Captioning, VQA
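
A minimal visual-chat sketch, assuming the 🤗 transformers integration and the community llava-hf/llava-1.5-7b-hf conversion of the follow-up LLaVA-1.5 weights:

```python
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf")

image = Image.open("photo.jpg")
# LLaVA uses a USER/ASSISTANT chat format with an <image> placeholder.
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
inputs = processor(text=prompt, images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```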

ImageBind: One Embedding Space To Bind Them All

arXiv GitHub

Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra

  • Date: 09-05-2023
  • Modalities: 👁️ + 💬 + 🎧
  • Tasks: Cross-Modal Retrieval, Zero-Shot Classification
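
A minimal joint-embedding sketch, assuming the imagebind package from the paper's GitHub repo (file paths are placeholders):

```python
import torch
from imagebind import data
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

device = "cpu"
model = imagebind_model.imagebind_huge(pretrained=True)
model.eval()

# Embed text, an image, and an audio clip into one shared space.
inputs = {
    ModalityType.TEXT: data.load_and_transform_text(["a dog barking"], device),
    ModalityType.VISION: data.load_and_transform_vision_data(["dog.jpg"], device),
    ModalityType.AUDIO: data.load_and_transform_audio_data(["bark.wav"], device),
}
with torch.no_grad():
    embeddings = model(inputs)

# Cross-modal similarity: how well does the audio match the text?
print(embeddings[ModalityType.TEXT] @ embeddings[ModalityType.AUDIO].T)
```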

Segment Anything

arXiv GitHub Colab

Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, Ross Girshick

  • Date: 05-04-2023
  • Modalities: 👁️
  • Tasks: Zero-Shot Image Segmentation
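
A minimal promptable-segmentation sketch, assuming the segment-anything package and the released ViT-H checkpoint; the click coordinates are placeholders:

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load the released ViT-H checkpoint.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt SAM with a single foreground click (coordinates are placeholders).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),  # 1 = foreground, 0 = background
)
```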

Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection

arXiv GitHub Gradio Colab

Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang

  • Date: 09-03-2023
  • Modalities: 👁️ + 💬
  • Tasks: Zero-Shot Object Detection, Phrase Grounding
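
A minimal open-set detection sketch, assuming the groundingdino package; the config and weight paths follow the repo's release layout:

```python
from groundingdino.util.inference import load_model, load_image, predict

model = load_model(
    "groundingdino/config/GroundingDINO_SwinT_OGC.py",
    "weights/groundingdino_swint_ogc.pth",
)
image_source, image = load_image("photo.jpg")

# Detect arbitrary categories described in free text ("." separates phrases).
boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="chair . person . dog .",
    box_threshold=0.35,
    text_threshold=0.25,
)
```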

BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models

arXiv GitHub Gradio Colab

Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi

  • Date: 30-01-2023
  • Modalities: 👁️ + 💬
  • Tasks: Image Captioning, VQA, Image-Text Retrieval
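
A minimal captioning sketch, assuming the 🤗 transformers port and the Salesforce/blip2-opt-2.7b checkpoint:

```python
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

image = Image.open("photo.jpg")
# No text prompt -> image captioning; pass a question instead for VQA.
inputs = processor(images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(processor.decode(output[0], skip_special_tokens=True).strip())
```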

OWL-ST: Scaling Open-Vocabulary Object Detection

arXiv Gradio

Matthias Minderer, Alexey Gritsenko, Neil Houlsby

  • Date: 16-06-2023
  • Modalities: 👁️ + 💬
  • Tasks: Zero-Shot Object Detection
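
The OWL-ST self-training recipe produced the OWLv2 checkpoints; a minimal sketch assuming their 🤗 transformers port (google/owlv2-base-patch16-ensemble), with usage mirroring the OWL-ViT sketch further down:

```python
from PIL import Image
from transformers import Owlv2Processor, Owlv2ForObjectDetection

processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")

image = Image.open("photo.jpg")
inputs = processor(text=[["a photo of a cat", "a photo of a dog"]], images=image, return_tensors="pt")
outputs = model(**inputs)  # post-process as in the OWL-ViT example below
```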

Whisper: Robust Speech Recognition via Large-Scale Weak Supervision

arXiv GitHub Colab

Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever

  • Date: 06-12-2022
  • Modalities: 💬 + 🎧
  • Tasks: Speech Recognition, Speech Translation
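
A minimal transcription sketch, assuming the openai-whisper package from the paper's GitHub repo:

```python
import whisper

# Model sizes range from "tiny" to "large"; "base" is a placeholder choice.
model = whisper.load_model("base")

# transcribe() detects the spoken language, then decodes 30-second windows.
result = model.transcribe("speech.mp3")
print(result["language"], result["text"])
```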

OWL-ViT: Simple Open-Vocabulary Object Detection with Vision Transformers

arXiv GitHub Gradio

Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, Neil Houlsby

  • Date: 12-05-2022
  • Modalities: 👁️ + 💬
  • Tasks: Zero-Shot Object Detection
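
A minimal open-vocabulary detection sketch, assuming the 🤗 transformers port and the google/owlvit-base-patch32 checkpoint:

```python
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("photo.jpg")
# The "label set" is just free text, so new classes need no retraining.
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)

# Convert raw logits and boxes into scored detections.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)
```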

CLIP: Learning Transferable Visual Models From Natural Language Supervision

arXiv GitHub Colab

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever

  • Date: 26-02-2021
  • Modalities: 👁️ + 💬
  • Tasks: Zero-Shot Image Classification, Image-Text Retrieval
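
A minimal zero-shot classification sketch, assuming the clip package from the paper's GitHub repo:

```python
import clip
import torch
from PIL import Image

model, preprocess = clip.load("ViT-B/32")

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)
texts = clip.tokenize(["a photo of a cat", "a photo of a dog"])

# Score the image against each prompt in the shared embedding space.
with torch.no_grad():
    logits_per_image, _ = model(image, texts)
    probs = logits_per_image.softmax(dim=-1)
print(probs)
```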

🦸 contribution

We would love your help in making this repository even better! If you know of an amazing paper that isn't listed here, or if you have any suggestions for improvement, feel free to open an issue or submit a pull request.
