A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
Inspired by this repo and ML Writing Month. Questions and discussions are most welcome!
Lil-log is the best blog I have ever read!
TNNLS 2019
Adversarial Examples: Attacks and Defenses for Deep Learning
IEEE ACCESS 2018
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
2019
Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
2019
A Study of Black Box Adversarial Attacks in Computer Vision
2019
Adversarial Examples in Modern Machine Learning: A Review
2020
Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey
TPAMI 2021
Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks
2019
Adversarial attack and defense in reinforcement learning-from AI security view
2020
A Survey of Privacy Attacks in Machine Learning
2020
Learning from Noisy Labels with Deep Neural Networks: A Survey
2020
Optimization for Deep Learning: An Overview
2020
Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
2020
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective
2020
Efficient Transformers: A Survey
2019
A Survey of Black-Box Adversarial Attacks on Computer Vision Models
2020
Backdoor Learning: A Survey
2020
Transformers in Vision: A Survey
2020
A Survey on Neural Network Interpretability
2020
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
2021
Recent Advances in Adversarial Training for Adversarial Robustness (our work, accepted by IJCAI 2021)
2021
Explainable Artificial Intelligence Approaches: A Survey
2021
A Survey on Understanding, Visualizations, and Explanation of Deep Neural Networks
2020
A survey on Semi-, Self- and Unsupervised Learning for Image Classification
2021
Model Complexity of Deep Learning: A Survey
2021
Deep Generative Modelling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models
2019
Advances and Open Problems in Federated Learning
2021
Countering Malicious DeepFakes: Survey, Battleground, and Horizon
ICLR
Intriguing properties of neural networks
ARXIV
Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
EuroS&P
The limitations of deep learning in adversarial settings
CVPR
Deepfool
SP
C&W Towards evaluating the robustness of neural networks
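Many of the white-box attacks above (DeepFool, C&W, and the gradient-sign family) share one core idea: perturb the input along the gradient of the loss with respect to that input. A minimal numpy sketch of the one-step gradient-sign (FGSM-style) version on a toy logistic model; the model, weights, and numbers are illustrative, not taken from any paper in this list:

```python
import numpy as np

# Toy differentiable classifier: p(y=1|x) = sigmoid(w.x + b).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(x, y):
    """Cross-entropy loss and its gradient with respect to the *input* x."""
    p = sigmoid(w @ x + b)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_x = (p - y) * w  # dL/dx for a logistic model
    return loss, grad_x

def fgsm(x, y, eps):
    """One-step L-inf attack: shift each coordinate by eps along the gradient sign."""
    _, g = loss_and_grad(x, y)
    return x + eps * np.sign(g)

x = np.array([0.2, 0.4, -0.1])
y = 1
x_adv = fgsm(x, y, eps=0.1)
loss_clean, _ = loss_and_grad(x, y)
loss_adv, _ = loss_and_grad(x_adv, y)
```

The perturbation is imperceptibly bounded (L-inf norm at most eps) yet maximally increases the loss to first order; iterated variants of this step underlie PGD and many attacks listed below.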
ARXIV
Transferability in machine learning: from phenomena to black-box attacks using adversarial samples
NIPS
Adversarial Images for Variational Autoencoders
ARXIV
A boundary tilting perspective on the phenomenon of adversarial examples
ARXIV
Adversarial examples in the physical world
ICLR
Delving into Transferable Adversarial Examples and Black-box Attacks
CVPR
Universal Adversarial Perturbations
ICCV
Adversarial Examples for Semantic Segmentation and Object Detection
ARXIV
Adversarial Examples that Fool Detectors
CVPR
A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection
ICCV
Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics
AIS
Adversarial examples are not easily detected: Bypassing ten detection methods
UNIVERSAL
ICCV
Universal Adversarial Perturbations Against Semantic Image Segmentation
ICLR
Adversarial Machine Learning at Scale
ARXIV
The space of transferable adversarial examples
ARXIV
Adversarial attacks on neural network policies
ICLR
Generating Natural Adversarial Examples
NeurIPS
Constructing Unrestricted Adversarial Examples with Generative Models
IJCAI
Generating Adversarial Examples with Adversarial Networks
CVPR
Generative Adversarial Perturbations
AAAI
Learning to Attack: Adversarial transformation networks
S&P
Learning Universal Adversarial Perturbations with Generative Models
CVPR
Robust physical-world attacks on deep learning visual classification
ICLR
Spatially Transformed Adversarial Examples
CVPR
Boosting Adversarial Attacks With Momentum
ICML
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples :thumbsup:
UNIVERSAL
CVPR
Art of Singular Vectors and Universal Adversarial Perturbations
ARXIV
Adversarial Spheres
ECCV
Characterizing adversarial examples based on spatial consistency information for semantic segmentation
ARXIV
Generating natural language adversarial examples
SP
Audio adversarial examples: Targeted attacks on speech-to-text
ARXIV
Adversarial attack on graph structured data
ARXIV
Maximal Jacobian-based Saliency Map Attack (variants of JSMA)
SP
Exploiting Unintended Feature Leakage in Collaborative Learning
CVPR
Feature Space Perturbations Yield More Transferable Adversarial Examples
ICLR
The Limitations of Adversarial Training and the Blind-Spot Attack
ICLR
Are adversarial examples inevitable? :thought_balloon:
IEEE TEC
One pixel attack for fooling deep neural networks
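Score-based black-box attacks (the one-pixel attack above, and NATTACK and ZOO-style methods below) assume only query access to the model: no gradients, just scores. A minimal numpy sketch of the finite-difference flavor of this idea, using a toy linear scorer; all names and constants here are illustrative, not from any specific paper:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)

def model_score(x):
    """Black-box scalar score: the attacker may query it, never differentiate it."""
    return float(w @ x)

def fd_gradient(f, x, h=1e-4):
    """Estimate grad f(x) by central finite differences, using queries only."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def black_box_attack(x, steps=10, step_size=0.05):
    """Drive the score down (untargeted) using only the estimated gradient."""
    x_adv = x.copy()
    for _ in range(steps):
        g = fd_gradient(model_score, x_adv)
        x_adv -= step_size * np.sign(g)  # descend the score
    return x_adv

x = rng.normal(size=5)
x_adv = black_box_attack(x)
```

Each step costs 2 queries per input dimension, which is why the query-efficient attacks in this list (e.g. boundary-based and sampling-based methods) are a research topic of their own.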
ARXIV
Generalizable Adversarial Attacks Using Generative Models
ICML
NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks :thought_balloon:
ARXIV
SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing
CVPR
Rob-GAN: Generator, Discriminator, and Adversarial Attacker
ARXIV
Cycle-Consistent Adversarial GAN: the integration of adversarial attack and defense
ARXIV
Generating Realistic Unrestricted Adversarial Inputs using Dual-Objective GAN Training :thought_balloon:
ICCV
Sparse and Imperceivable Adversarial Attacks :thought_balloon:
ARXIV
Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions
ARXIV
Joint Adversarial Training: Incorporating both Spatial and Pixel Attacks
IJCAI
Transferable Adversarial Attacks for Image and Video Object Detection
TPAMI
Generalizable Data-Free Objective for Crafting Universal Adversarial Perturbations
CVPR
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses
CVPR
FDA: Feature Disruptive Attack
ARXIV
SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations
CVPR
SparseFool: a few pixels make a big difference
ICLR
Adversarial Attacks on Graph Neural Networks via Meta Learning
NeurIPS
Deep Leakage from Gradients
CCS
Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
ICCV
Universal Perturbation Attack Against Image Retrieval
ICCV
Enhancing Adversarial Example Transferability with an Intermediate Level Attack
CVPR
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks
ICLR
ADef: an Iterative Algorithm to Construct Adversarial Deformations
NeurIPS
iDLG: Improved deep leakage from gradients
ARXIV
Reversible Adversarial Attack based on Reversible Image Transformation
CCS
Seeing isn’t Believing: Towards More Robust Adversarial Attack Against Real World Object Detectors
NeurIPS
Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
ICLR
Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking :thought_balloon:
ARXIV
Sponge Examples: Energy-Latency Attacks on Neural Networks
ICML
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
ICML
Stronger and Faster Wasserstein Adversarial Attacks
CVPR
QEBA: Query-Efficient Boundary-Based Blackbox Attack
ECCV
New Threats Against Object Detector with Non-local Block
ARXIV
Towards Imperceptible Universal Attacks on Texture Recognition
ECCV
Frequency-Tuned Universal Adversarial Attacks
AAAI
Learning Transferable Adversarial Examples via Ghost Networks
ECCV
SPARK: Spatial-aware Online Incremental Attack Against Visual Tracking
NeurIPS
Inverting Gradients - How easy is it to break privacy in federated learning?
ICLR
Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
NeurIPS
On Adaptive Attacks to Adversarial Example Defenses
AAAI
Beyond Digital Domain: Fooling Deep Learning Based Recognition System in Physical World
ARXIV
Adversarial Color Enhancement: Generating Unrestricted Adversarial Images by Optimizing a Color Filter
CVPR
Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles
CVPR
Universal Physical Camouflage Attacks on Object Detectors (code)
ARXIV
Understanding Object Detection Through An Adversarial Lens
CIKM
Can Adversarial Weight Perturbations Inject Neural Backdoors?
ICCV
Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers
ARXIV
On Generating Transferable Targeted Perturbations
CVPR
See through Gradients: Image Batch Recovery via GradInversion :thumbsup:
ARXIV
Admix: Enhancing the Transferability of Adversarial Attacks
ARXIV
Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks
ARXIV
Poisoning the Unlabeled Dataset of Semi-Supervised Learning (Carlini)
ARXIV
AdvHaze: Adversarial Haze Attack
CVPR
LAFEAT: Piercing Through Adversarial Defenses with Latent Features
ARXIV
Imperceptible Adversarial Examples for Fake Image Detection
ICME
Transferable Adversarial Examples for Anchor Free Object Detection
ICLR
Unlearnable Examples: Making Personal Data Unexploitable
ICMLW
Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them
ARXIV
Mischief: A Simple Black-Box Attack Against Transformer Architectures
ECCV
Patch-wise Attack for Fooling Deep Neural Network
ICCV
Naturalistic Physical Adversarial Patch for Object Detectors
CVPR
Natural Adversarial Examples
ICLR
WaNet - Imperceptible Warping-based Backdoor Attack
ICLR
On Improving Adversarial Transferability of Vision Transformers
TIFS
Decision-based Adversarial Attack with Frequency Mixup
NIPS
Robustness of classifiers: from adversarial to random noise :thought_balloon:
ARXIV
Countering Adversarial Images using Input Transformations
ICCV
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly
ARXIV
Detecting adversarial samples from artifacts
ICLR
On Detecting Adversarial Perturbations :thought_balloon:
ASIA CCS
Practical black-box attacks against machine learning
ARXIV
The space of transferable adversarial examples
ICCV
Adversarial Examples for Semantic Segmentation and Object Detection
ICLR
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
ICLR
Ensemble Adversarial Training: Attacks and Defences
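Adversarial training, the theme of many of the defense papers in this section, solves a min-max problem: the inner loop crafts worst-case perturbations, the outer loop fits the model on them. A minimal numpy sketch with a PGD-style inner loop on logistic regression; the data, step sizes, and epoch counts are illustrative assumptions, not a recipe from any listed paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(w, b, x, y, eps, alpha, steps):
    """Inner maximization: ascend the loss inside an L-inf ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w + b)
        grad_x = (p - y)[:, None] * w[None, :]   # dL/dx per example
        x_adv = x_adv + alpha * np.sign(grad_x)  # gradient-sign ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps) # project back into the ball
    return x_adv

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200):
    """Outer minimization: fit the model on PGD examples instead of clean ones."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        X_adv = pgd_attack(w, b, X, y, eps, alpha=eps / 4, steps=5)
        p = sigmoid(X_adv @ w + b)
        w -= lr * (X_adv.T @ (p - y)) / len(y)
        b -= lr * float(np.mean(p - y))
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 0.3, (50, 2)), rng.normal(1.5, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w, b = adversarial_train(X, y)
X_adv = pgd_attack(w, b, X, y, eps=0.1, alpha=0.025, steps=5)
robust_acc = np.mean((sigmoid(X_adv @ w + b) > 0.5) == (y == 1))
```

On this well-separated toy data the robust accuracy stays high; the robustness-accuracy trade-offs studied in the papers below appear only when classes are genuinely hard to separate under perturbation.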
CVPR
Defense Against Universal Adversarial Perturbations
CVPR
Deflecting Adversarial Attacks With Pixel Deflection
TPAMI
Virtual adversarial training: a regularization method for supervised and semi-supervised learning :thought_balloon:
ARXIV
Adversarial Logit Pairing
CVPR
Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser
ARXIV
Evaluating and understanding the robustness of adversarial logit pairing
CCS
Machine Learning with Membership Privacy Using Adversarial Regularization
ARXIV
On the robustness of the CVPR 2018 white-box adversarial example defenses
ICLR
Thermometer Encoding: One Hot Way To Resist Adversarial Examples
IJCAI
Curriculum Adversarial Training
ICLR
Countering Adversarial Images using Input Transformations
CVPR
Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser
ICLR
Towards Deep Learning Models Resistant to Adversarial Attacks
AAAI
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing Their Input Gradients
NIPS
Adversarially robust generalization requires more data
ARXIV
Is robustness the cost of accuracy? A comprehensive study on the robustness of 18 deep image classification models
ARXIV
Robustness may be at odds with accuracy
ICLR
PixelDefend: Leveraging Generative Models to Understand and Defend Against Adversarial Examples
NIPS
Adversarial Training and Robustness for Multiple Perturbations
NIPS
Adversarial Robustness through Local Linearization
CVPR
Retrieval-Augmented Convolutional Neural Networks against Adversarial Examples
CVPR
Feature Denoising for Improving Adversarial Robustness
NeurIPS
A New Defense Against Adversarial Images: Turning a Weakness into a Strength
ICML
Interpreting Adversarially Trained Convolutional Neural Networks
ICLR
Robustness May Be at Odds with Accuracy :thought_balloon:
IJCAI
Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss
ICML
Adversarial Examples Are a Natural Consequence of Test Error in Noise :thought_balloon:
ICML
On the Connection Between Adversarial Robustness and Saliency Map Interpretability
NeurIPS
Metric Learning for Adversarial Robustness
ARXIV
Defending Adversarial Attacks by Correcting logits
ICCV
Adversarial Learning With Margin-Based Triplet Embedding Regularization
ICCV
CIIDefence: Defeating Adversarial Attacks by Fusing Class-Specific Image Inpainting and Image Denoising
NIPS
Adversarial Examples Are Not Bugs, They Are Features
ICML
Using Pre-Training Can Improve Model Robustness and Uncertainty
NIPS
Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training :thought_balloon:
ICCV
Improving Adversarial Robustness via Guided Complement Entropy
NIPS
Robust Attribution Regularization :thought_balloon:
NIPS
Are Labels Required for Improving Adversarial Robustness?
ICLR
Theoretically Principled Trade-off between Robustness and Accuracy
CVPR
Adversarial defense by stratified convolutional sparse coding
ICML
On the Convergence and Robustness of Adversarial Training
CVPR
Robustness via Curvature Regularization, and Vice Versa
CVPR
ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples
ICML
Improving Adversarial Robustness via Promoting Ensemble Diversity
ICML
Towards the first adversarially robust neural network model on MNIST
NIPS
Unlabeled Data Improves Adversarial Robustness
ICCV
Evaluating Robustness of Deep Image Super-Resolution Against Adversarial Attacks
ICML
Using Pre-Training Can Improve Model Robustness and Uncertainty
ARXIV
Improving adversarial robustness of ensembles with diversity training
ICML
Adversarial Robustness Against the Union of Multiple Perturbation Models
CVPR
Robustness via Curvature Regularization, and Vice Versa
NIPS
Robustness to Adversarial Perturbations in Learning from Incomplete Data
ICML
Improving Adversarial Robustness via Promoting Ensemble Diversity
NIPS
Adversarial Robustness through Local Linearization
ARXIV
Adversarial training can hurt generalization
NIPS
Adversarial training for free!
ICLR
Improving the generalization of adversarial training with domain adaptation
CVPR
Disentangling Adversarial Robustness and Generalization
NIPS
Adversarial Training and Robustness for Multiple Perturbations
ICCV
Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks
ICML
On the Convergence and Robustness of Adversarial Training
ICML
Rademacher Complexity for Adversarially Robust Generalization
ARXIV
Adversarially Robust Generalization Just Requires More Unlabeled Data
ARXIV
You only propagate once: Accelerating adversarial training via maximal principle
NIPS
Cross-Domain Transferability of Adversarial Perturbations
ARXIV
Adversarial Robustness as a Prior for Learned Representations
ICLR
Structured Adversarial Attack: Towards General Implementation and Better Interpretability
ICLR
Defensive Quantization: When Efficiency Meets Robustness
NeurIPS
A New Defense Against Adversarial Images: Turning a Weakness into a Strength
ICLR
Jacobian Adversarially Regularized Networks for Robustness
CVPR
What it Thinks is Important is Important: Robustness Transfers through Input Gradients
ICLR
Adversarially Robust Representations with Smooth Encoders :thought_balloon:
ARXIV
Heat and Blur: An Effective and Fast Defense Against Adversarial Examples
ICML
Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference
CVPR
Wavelet Integrated CNNs for Noise-Robust Image Classification
ARXIV
Deflecting Adversarial Attacks
ICLR
Robust Local Features for Improving the Generalization of Adversarial Training
ICLR
Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier
CVPR
A Self-supervised Approach for Adversarial Robustness
ICLR
Improving Adversarial Robustness Requires Revisiting Misclassified Examples :thumbsup:
ARXIV
Manifold regularization for adversarial robustness
NeurIPS
DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles
ARXIV
A Closer Look at Accuracy vs. Robustness
NeurIPS
Energy-based Out-of-distribution Detection
ARXIV
Out-of-Distribution Generalization via Risk Extrapolation (REx)
CVPR
Adversarial Examples Improve Image Recognition
ICML
Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks :thumbsup:
ICML
Efficiently Learning Adversarially Robust Halfspaces with Noise
ICML
Implicit Euler Skip Connections: Enhancing Adversarial Robustness via Numerical Stability
ICML
Friendly Adversarial Training: Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
ICML
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization :thumbsup:
ICML
Overfitting in adversarially robust deep learning :thumbsup:
ICML
Proper Network Interpretability Helps Adversarial Robustness in Classification
ICML
Randomization matters: How to defend against strong adversarial attacks
ICML
Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks
ICML
Towards Understanding the Regularization of Adversarial Robustness on Neural Networks
CVPR
Defending Against Universal Attacks Through Selective Feature Regeneration
ARXIV
Understanding and improving fast adversarial training
ARXIV
CAT: Customized adversarial training for improved robustness
ICLR
MMA Training: Direct Input Space Margin Maximization through Adversarial Training
ARXIV
Bridging the performance gap between FGSM and PGD adversarial training
CVPR
Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization
ARXIV
Towards understanding fast adversarial training
ARXIV
Overfitting in adversarially robust deep learning
ICLR
Robust local features for improving the generalization of adversarial training
ICML
Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks
ARXIV
Regularizers for single-step adversarial training
CVPR
Single-step adversarial training with dropout scheduling
ICLR
Improving Adversarial Robustness Requires Revisiting Misclassified Examples
ARXIV
Fast is better than free: Revisiting adversarial training
ARXIV
On the Generalization Properties of Adversarial Training
ARXIV
A closer look at accuracy vs. robustness
ICLR
Adversarially robust transfer learning
ARXIV
On Saliency Maps and Adversarial Robustness
ARXIV
On Detecting Adversarial Inputs with Entropy of Saliency Maps
ARXIV
Detecting Adversarial Perturbations with Saliency
ARXIV
Detection Defense Against Adversarial Attacks with Saliency Map
ARXIV
Model-based Saliency for the Detection of Adversarial Examples
CVPR
Auxiliary Training: Towards Accurate and Robust Models
CVPR
Single-step Adversarial training with Dropout Scheduling
CVPR
Achieving Robustness in the Wild via Adversarial Mixing With Disentangled Representations
ICML
Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
NeurIPS
Improving robustness against common corruptions by covariate shift adaptation
CCS
Gotta Catch'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks
ECCV
A simple way to make neural networks robust against diverse image corruptions
CVPRW
Role of Spatial Context in Adversarial Robustness for Object Detection
WACV
Local Gradients Smoothing: Defense against localized adversarial attacks
NeurIPS
Adversarial Weight Perturbation Helps Robust Generalization
MM
DIPDefend: Deep Image Prior Driven Defense against Adversarial Examples
ECCV
Adversarial Data Augmentation via Deformation Statistics
ARXIV
On the Limitations of Denoising Strategies as Adversarial Defenses
AAAI
Understanding catastrophic overfitting in single-step adversarial training
ICLR
Bag of tricks for adversarial training
ARXIV
Bridging the Gap Between Adversarial Robustness and Optimization Bias
ICLR
Perceptual Adversarial Robustness: Defense Against Unseen Threat Models
AAAI
Adversarial Robustness through Disentangled Representations
ARXIV
Understanding Robustness of Transformers for Image Classification
CVPR
Adversarial Robustness under Long-Tailed Distribution
ARXIV
Adversarial Attacks are Reversible with Natural Supervision
AAAI
Attribute-Guided Adversarial Training for Robustness to Natural Perturbations
ICLR
Learning Perturbation Sets for Robust Machine Learning
ICLR
Improving Adversarial Robustness via Channel-wise Activation Suppressing
AAAI
Efficient Certification of Spatial Robustness
ARXIV
Domain Invariant Adversarial Learning
ARXIV
Learning Defense Transformers for Counterattacking Adversarial Examples
ICLR
Online Adversarial Purification based on Self-Supervised Learning
ARXIV
Removing Adversarial Noise in Class Activation Feature Space
ARXIV
Improving Adversarial Robustness Using Proxy Distributions
ARXIV
Decoder-free Robustness Disentanglement without (Additional) Supervision
ARXIV
Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks
ARXIV
Reversible Adversarial Attack based on Reversible Image Transformation
ICLR
Online Adversarial Purification based on Self-Supervised Learning
ARXIV
Towards Corruption-Agnostic Robust Domain Adaptation
ARXIV
Adversarially Trained Models with Test-Time Covariate Shift Adaptation
ICLR workshop
Covariate Shift Adaptation for Adversarially Robust Classifier
ARXIV
Self-Supervised Adversarial Example Detection by Disentangled Representation
AAAI
Adversarial Defence by Diversified Simultaneous Training of Deep Ensembles
ARXIV
Understanding Catastrophic Overfitting in Adversarial Training
ACM Trans. Multimedia Comput. Commun. Appl.
Towards Corruption-Agnostic Robust Domain Adaptation
ICLR
TENT: Fully Test-Time Adaptation by Entropy Minimization
ARXIV
Attacking Adversarial Attacks as A Defense
ICML
Adversarial purification with Score-based generative models
ARXIV
Adversarial Visual Robustness by Causal Intervention
CVPR
MaxUp: Lightweight Adversarial Training With Data Augmentation Improves Neural Network Training
MM
AdvFilter: Predictive Perturbation-aware Filtering against Adversarial Attack via Multi-domain Learning
CVPR
Robust and Accurate Object Detection via Adversarial Learning
ARXIV
Markpainting: Adversarial Machine Learning meets Inpainting
ICLR
Efficient Certified Defenses Against Patch Attacks on Image Classifiers
ARXIV
Learning Defense Transformers for Counterattacking Adversarial Examples
ARXIV
Towards Robust Vision Transformer
ARXIV
Reveal of Vision Transformers Robustness against Adversarial Attacks
ARXIV
Intriguing Properties of Vision Transformers
ARXIV
Vision transformers are robust learners
ARXIV
On Improving Adversarial Transferability of Vision Transformers
ARXIV
On the adversarial robustness of visual transformers
ARXIV
On the robustness of vision transformers to adversarial examples
ARXIV
Understanding Robustness of Transformers for Image Classification
ARXIV
Regional Adversarial Training for Better Robust Generalization
CCS
DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks
ARXIV
Modelling Adversarial Noise for Adversarial Defense
ICCV
Adversarial Example Detection Using Latent Neighborhood Graph
ARXIV
Identification of Attack-Specific Signatures in Adversarial Examples
NeurIPS
How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness?
ARXIV
Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs
ARXIV
Learning Defense Transformers for Counterattacking Adversarial Examples
ADVM
Detecting Adversarial Patch Attacks through Global-local Consistency
ICCV
Can Shape Structure Features Improve Model Robustness under Diverse Adversarial Settings?
ICLR
Undistillable: Making A Nasty Teacher That CANNOT teach students
ICCV
Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better
ARXIV
Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart
ARXIV
Consistency Regularization for Adversarial Robustness
ICML
CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection
NeurIPS
Adversarial Neuron Pruning Purifies Backdoored Deep Models
ICCV
Towards Understanding the Generative Capability of Adversarially Robust Classifiers
NeurIPS
Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
NeurIPS
Data Augmentation Can Improve Robustness
NeurIPS
When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?
ARXIV
α-Weighted Federated Adversarial Training
AAAI
Safe Distillation Box
USENIX
Transferring Adversarial Robustness Through Robust Representation Matching
ARXIV
Robustness and Accuracy Could Be Reconcilable by (Proper) Definition
ARXIV
Improving Adversarial Defense with Self-Supervised Test-Time Fine-Tuning
ARXIV
Exploring Memorization in Adversarial Training
IJCV
Open-Set Adversarial Defense with Clean-Adversarial Mutual Learning
ARXIV
Adversarial Detection and Correction by Matching Prediction Distribution
ARXIV
Enhancing Adversarial Training with Feature Separability
ARXIV
An Eye for an Eye: Defending against Gradient-based Attacks with Gradients
ICCV 2017
CVAE-GAN: Fine-Grained Image Generation Through Asymmetric Training
ICML 2016
Autoencoding beyond pixels using a learned similarity metric
ARXIV 2019
Natural Adversarial Examples
ICML 2017
Conditional Image Synthesis with Auxiliary Classifier GANs
ICCV 2019
SinGAN: Learning a Generative Model From a Single Natural Image
ICLR 2020
Robust And Interpretable Blind Image Denoising Via Bias-Free Convolutional Neural Networks
ICLR 2020
Pay Attention to Features, Transfer Learn Faster CNNs
ICLR 2020
On Robustness of Neural Ordinary Differential Equations
ICCV 2019
Real Image Denoising With Feature Attention
ICLR 2018
Multi-Scale Dense Networks for Resource Efficient Image Classification
ARXIV 2019
Rethinking Data Augmentation: Self-Supervision and Self-Distillation
ICCV 2019
Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
ARXIV 2019
Adversarially Robust Distillation
ARXIV 2019
Knowledge Distillation from Internal Representations
ICLR 2020
Contrastive Representation Distillation :thought_balloon:
NIPS 2018
Faster Neural Networks Straight from JPEG
ARXIV 2019
A Closer Look at Double Backpropagation :thought_balloon:
CVPR 2016
Learning Deep Features for Discriminative Localization
ICML 2019
Noise2Self: Blind Denoising by Self-Supervision
ARXIV 2020
Supervised Contrastive Learning
CVPR 2020
High-Frequency Component Helps Explain the Generalization of Convolutional Neural Networks
NIPS 2017
Counterfactual Fairness
ARXIV 2020
An Adversarial Approach for Explaining the Predictions of Deep Neural Networks
CVPR 2014
Rich feature hierarchies for accurate object detection and semantic segmentation
ICLR 2018
Spectral Normalization for Generative Adversarial Networks
NIPS 2018
MetaGAN: An Adversarial Approach to Few-Shot Learning
ARXIV 2019
Breaking the cycle -- Colleagues are all you need
ARXIV 2019
LOGAN: Latent Optimisation for Generative Adversarial Networks
ICML 2020
Margin-aware Adversarial Domain Adaptation with Optimal Transport
ICML 2020
Representation Learning Using Adversarially-Contrastive Optimal Transport
ICLR 2021
Free Lunch for Few-shot Learning: Distribution Calibration
CVPR 2019
Unprocessing Images for Learned Raw Denoising
TPAMI 2020
Image Quality Assessment: Unifying Structure and Texture Similarity
CVPR 2020
Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion
ICLR 2021
What Should Not Be Contrastive in Contrastive Learning
ARXIV
MT3: Meta Test-Time Training for Self-Supervised Test-Time Adaption
ARXIV
Unsupervised Domain Adaptation through Self-Supervision
ARXIV
Estimating Example Difficulty using Variance of Gradients
ICML 2020
Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources
ARXIV
Dataset Distillation
ARXIV 2022
Debugging Differential Privacy: A Case Study for Privacy Auditing
ARXIV
Adversarial Robustness and Catastrophic Forgetting