Awesome Masked Autoencoders

A collection of literature after or concurrent with Masked Autoencoder (MAE, Kaiming He et al.).

Fig. 1. Masked Autoencoders from Kaiming He et al.

Masked Autoencoder (MAE, Kaiming He et al.) has sparked a surge of interest owing to its capacity to learn useful representations from rich unlabeled data. MAE and its follow-up works have quickly advanced the state of the art and provided valuable insights, particularly in vision research. Here I list several follow-up works published after or concurrent with MAE to inspire future research.
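To make the core idea concrete, here is a minimal sketch of the per-sample random patch masking used in MAE-style pretraining. It is an illustrative sketch rather than the official implementation; the function name `random_masking`, the tensor shapes, and the 0.75 default mask ratio are assumptions based on the paper's description.

```python
# Illustrative sketch (not the official MAE code): per-sample random masking of
# patch embeddings. Shapes, names, and the 0.75 default ratio are assumptions.
import torch


def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75):
    """Keep a random subset of patch tokens per sample.

    patches: (batch, num_patches, dim) patch embeddings.
    Returns the visible tokens, a binary mask (1 = masked), and indices that
    restore the original patch order after decoding.
    """
    B, N, D = patches.shape
    len_keep = int(N * (1 - mask_ratio))

    noise = torch.rand(B, N, device=patches.device)   # random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)         # lowest scores are kept
    ids_restore = torch.argsort(ids_shuffle, dim=1)   # undoes the shuffle

    ids_keep = ids_shuffle[:, :len_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))

    mask = torch.ones(B, N, device=patches.device)    # 1 = masked, 0 = visible
    mask[:, :len_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)         # back to original order
    return visible, mask, ids_restore
```

In MAE, the encoder processes only the visible tokens; a lightweight decoder appends learnable mask tokens, restores the original patch order via `ids_restore`, and reconstructs the masked patches, with the reconstruction loss computed on masked positions only.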

*:octocat: code link, 🌐 project page

Vision

Audio

Graph

Point Cloud

Language (Omitted)

There has been a surge of language research built on this masking-and-predicting paradigm (e.g., BERT), so I do not list these works here.
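For context, the same masking-and-predicting idea underlies BERT's masked language modeling objective for text. The snippet below is only an illustration using the Hugging Face transformers fill-mask pipeline; the model choice and prompt are arbitrary assumptions.

```python
# Illustration only: BERT-style masked-token prediction via the Hugging Face
# `transformers` fill-mask pipeline. Model and prompt are arbitrary choices.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for pred in unmasker("The autoencoder reconstructs the [MASK] patches."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```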

Miscellaneous

TODO List

  • Add code links
  • Add authors list
  • Add conference/journal venues
  • Add more illustrative figures