Transformer MM Explainability

[ICCV 2021 Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
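
The core idea is to propagate relevance through the network by weighting each layer's attention map with its gradient, keeping only the positive contributions, and averaging over heads. Below is a minimal, illustrative sketch of that self-attention aggregation step, not the repository's actual API: the function name is hypothetical, and the attention maps and gradients are assumed to have been captured elsewhere (e.g. via forward/backward hooks).

```python
# Minimal sketch (assumed names, not the repo's API): starting from an identity
# relevance matrix, each layer contributes mean_h[(grad(A) * A)^+] @ R.
import torch


def aggregate_relevance(attn_maps, attn_grads):
    """attn_maps / attn_grads: lists of tensors shaped (heads, tokens, tokens)."""
    num_tokens = attn_maps[0].shape[-1]
    # Relevance starts as the identity: each token is initially relevant only to itself.
    relevance = torch.eye(num_tokens)
    for attn, grad in zip(attn_maps, attn_grads):
        # Gradient-weighted attention, clamped to positive values, averaged over heads.
        weighted = (grad * attn).clamp(min=0).mean(dim=0)
        # Propagate relevance through this layer.
        relevance = relevance + weighted @ relevance
    return relevance


if __name__ == "__main__":
    # Toy example: 3 layers, 4 heads, 6 tokens of random attention/gradient data.
    torch.manual_seed(0)
    maps = [torch.softmax(torch.randn(4, 6, 6), dim=-1) for _ in range(3)]
    grads = [torch.randn(4, 6, 6) for _ in range(3)]
    R = aggregate_relevance(maps, grads)
    # Row 0 (e.g. the [CLS] token) gives per-token relevance scores for visualization.
    print(R[0])
```

A row of the resulting matrix (for instance the one corresponding to the [CLS] token) can then be reshaped and overlaid on the input to produce the visualizations shown in the examples.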
