eth-sri/diffai Versions

A certifiable defense against adversarial examples by training neural networks to be provably robust

v3.0

5 years ago

The version used for the arXiv paper https://arxiv.org/abs/1903.12519

Updates

  • Added a DSL to specify complex objectives and complex training schedules.
  • Added abstract layers for increasing precision in deeper networks.
  • Added ONNX exporting (see the sketch after this list).
  • Included examples of trained nets such as ResNet-34.
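
Since DiffAI is built on PyTorch, a trained network can typically be converted to ONNX with PyTorch's standard exporter. The snippet below is only an illustrative sketch, not DiffAI's own export command; the model definition, checkpoint path, and input shape are hypothetical placeholders.

```python
# Illustrative ONNX export of a trained network via PyTorch's built-in exporter.
# The model, checkpoint file, and input shape are placeholders, not DiffAI's API.
import torch
import torchvision

model = torchvision.models.resnet34(num_classes=10)       # stand-in network
model.load_state_dict(torch.load("resnet34_diffai.pt"))   # hypothetical checkpoint
model.eval()

dummy_input = torch.randn(1, 3, 32, 32)                   # example input shape
torch.onnx.export(model, dummy_input, "resnet34_diffai.onnx")
```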

Abstract

We present a training system that can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100. Our approach is based on differentiable abstract interpretation and introduces two novel concepts: (i) abstract layers for fine-tuning the precision and scalability of the abstraction, and (ii) a flexible domain-specific language (DSL) for describing training objectives that combine abstract and concrete losses with arbitrary specifications. Our training method is implemented in the DiffAI system.
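
To make the idea of mixing abstract and concrete losses more tangible, here is a minimal sketch in PyTorch that propagates simple interval (box) bounds through a small network and blends the worst-case loss with the ordinary cross-entropy. DiffAI itself supports richer abstract domains and expresses such objectives through its DSL, so the function names and the fixed loss weighting below are assumptions for illustration only.

```python
# A minimal sketch (not the DiffAI API) of combining a concrete loss with an
# abstract loss obtained via interval bound propagation through a feed-forward net.
import torch
import torch.nn as nn
import torch.nn.functional as F

def interval_forward(layers, center, radius):
    """Propagate the box [center - radius, center + radius] through the layers."""
    for layer in layers:
        if isinstance(layer, nn.Linear):
            center = layer(center)
            radius = radius @ layer.weight.abs().t()
        elif isinstance(layer, nn.ReLU):
            lower, upper = F.relu(center - radius), F.relu(center + radius)
            center, radius = (upper + lower) / 2, (upper - lower) / 2
        else:
            raise NotImplementedError(type(layer))
    return center, radius

def combined_loss(layers, x, y, eps, weight=0.5):
    """Blend ordinary cross-entropy with a worst-case (abstract) cross-entropy."""
    logits = nn.Sequential(*layers)(x)
    concrete = F.cross_entropy(logits, y)
    center, radius = interval_forward(layers, x, torch.full_like(x, eps))
    # Worst-case logits: lower bound for the true class, upper bound elsewhere.
    onehot = F.one_hot(y, num_classes=center.size(-1)).float()
    worst = (center + radius) * (1 - onehot) + (center - radius) * onehot
    abstract = F.cross_entropy(worst, y)
    return (1 - weight) * concrete + weight * abstract
```

In this sketch the concrete and abstract terms are weighted by a fixed constant; the DSL mentioned above is intended to let such objectives (and their scheduling over training) be described declaratively rather than hard-coded.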

v1.0

5 years ago

The initial version, used to reproduce the results in the ICML paper.