Unofficial JAX implementations of deep learning research papers
Weights for Visual Attention Network (Meng-Hao Guo et al., 2022). All weights were translated from the official repository; full credit goes to the original authors.
Weights for Going Deeper with Image Transformers (Hugo Touvron et al., 2021). These weights have been translated from the official GitHub repository, and all credit for the weights goes to the original authors.
Weights for Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions (Wenhai Wang et al., 2021). All credit for these weights goes to the original authors.
Weights for ConvNeXt (Zhuang Liu et al., 2022), translated from the official repository. All credit for the weights goes to the original authors.
This release contains weights for the full family of Swin Transformer models (SwinTiny224, SwinSmall224, SwinBase224, SwinBase384, SwinLarge224, SwinLarge384).
These weights have been ported from the official repository and timm.
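Whatever the source model, a downloaded checkpoint can be inspected the same way before plugging it into a network. A minimal sketch, assuming the ported weights are published as a NumPy `.npz` archive of named tensors (the archive format and the file name `swin_tiny_224.npz` are assumptions; check the release assets for the actual layout):

```python
import numpy as np

def load_params(path: str) -> dict:
    """Load a ported checkpoint into a dict mapping tensor name -> array."""
    with np.load(path) as archive:
        return {name: archive[name] for name in archive.files}

# Usage (file name is hypothetical; see the release assets for real names):
# params = load_params("swin_tiny_224.npz")
# total = sum(arr.size for arr in params.values())
# print(f"{len(params)} tensors, {total:,} parameters")
```

Summing `arr.size` over the loaded tensors gives a quick parameter count, which is a cheap sanity check that a port matches the official model's reported size.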