Training PyTorch models with differential privacy
Highlight: Upgraded to PyTorch 1.13+ as required dependency
- PRVAccountant based on the paper Numerical Composition of Differential Privacy (#493)
- nn.EmbeddingBag support (#519)
- Align make_private_with_epsilon with make_private (#509, #526)

We're glad to present Opacus v1.2, which contains some major updates to per sample gradient computation mechanisms and includes all the good stuff from the recent PyTorch releases.
With the recent release of functorch, it is now easy to compute per sample gradients for any module, without the limitations we previously had to impose.
Here's the new default behaviour: for layers Opacus already knows, the manually written grad samplers are still used for performance reasons, and the generic functorch-based grad sampler is used as a fallback for everything else.

You can also force the functorch-based grad sampler for every layer by passing grad_sample_mode="functorch" to PrivacyEngine.make_private() or force_functorch=True to GradSampleModule's constructor.
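As a minimal sketch of both routes (assuming model, optimizer, and data_loader are defined elsewhere, with placeholder privacy parameters; the two options are alternatives, not sequential steps):

```python
from opacus import PrivacyEngine
from opacus.grad_sample import GradSampleModule

# Option 1: ask the engine to use the functorch-based grad sampler for all layers.
privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.1,  # placeholder value
    max_grad_norm=1.0,     # placeholder value
    grad_sample_mode="functorch",
)

# Option 2: wrap the module directly and force functorch for every layer.
gs_model = GradSampleModule(model, force_functorch=True)
```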
If you're already using functorch in your training pipeline, consider using GradSampleModuleNoOp (grad_sample_mode="no_op"). As the name suggests, it performs no action and expects the client to compute per sample gradients themselves. See our CIFAR-10 example for a code demonstration.
Note that this functionality is still in beta and we haven't fully explored its limitations. Please report any weird behaviour or inconsistencies you encounter to our GitHub issues; we greatly appreciate the feedback.
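To make the "compute per sample gradients yourself" expectation concrete, here is a rough, non-authoritative sketch of the no_op flow, loosely following the pattern of the CIFAR-10 example. It assumes model was passed through make_private(..., grad_sample_mode="no_op"), optimizer is the returned DP optimizer, a classification loss applies, and data_loader exists; exact functorch usage may differ across versions:

```python
import torch
from functorch import make_functional, grad, vmap

# Opacus will NOT compute per sample gradients for us in no_op mode.
fmodel, _ = make_functional(model)
params = list(model.parameters())

def compute_loss(params, sample, target):
    # Run a single sample through the functional model (add a batch dim of 1).
    prediction = fmodel(params, sample.unsqueeze(0))
    return torch.nn.functional.cross_entropy(prediction, target.unsqueeze(0))

# vmap over the batch dimension yields one gradient per sample and parameter.
per_sample_grad_fn = vmap(grad(compute_loss), in_dims=(None, 0, 0))

for images, targets in data_loader:
    per_sample_grads = per_sample_grad_fn(params, images, targets)
    # We populate p.grad_sample ourselves, then let the DP optimizer
    # clip, aggregate and add noise as usual.
    for p, g in zip(params, per_sample_grads):
        p.grad_sample = g.detach()
    optimizer.step()
    optimizer.zero_grad()
```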
One more exciting feature now available in core PyTorch is ExpandedWeights. It takes the same approach as Opacus' existing manually written vectorized per sample gradient computations, but achieves much better performance.
To activate ExpandedWeights, pass grad_sample_mode="ew" to PrivacyEngine.make_private() or use GradSampleModuleExpandedWeights directly.
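A small sketch of the direct-wrapping route (assuming GradSampleModuleExpandedWeights is exported from opacus.grad_sample and model is defined elsewhere):

```python
from opacus.grad_sample import GradSampleModuleExpandedWeights

# Equivalent in spirit to passing grad_sample_mode="ew" to make_private():
# wrap the module so per sample gradients are computed via ExpandedWeights.
gs_model = GradSampleModuleExpandedWeights(model)
```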
With the recent updates, Opacus now supports three different ways to compute per sample gradients. Below is a quick comparison; for more details, refer to the grad sample README.md.
TL;DR: If you want a stable implementation, use GradSampleModule (grad_sample_mode="hooks"). If you want to experiment with the new functionality, you have two options: try GradSampleModuleExpandedWeights (grad_sample_mode="ew") for better performance, or grad_sample_mode="functorch" if your model is not supported by GradSampleModule. Please switch back to GradSampleModule (grad_sample_mode="hooks") if you encounter strange errors or unexpected behaviour. We'd also appreciate it if you report these to us.
| | Hooks | Expanded Weights | Functorch |
|---|---|---|---|
| Required PyTorch version | 1.8+ | 1.13+ | 1.12 (to be updated) |
| Development status | Underlying mechanism deprecated | Beta | Beta |
| Runtime performance† | baseline | ✅ ~25% faster | 🟨 0-50% slower |
| Any DP-allowed†† layers | Not supported | Not supported | ✅ Supported |
| Most popular nn.* layers | ✅ Supported | ✅ Supported | ✅ Supported |
| torchscripted models | Not supported | ✅ Supported | Not supported |
| Client-provided grad sampler | ✅ Supported | Not supported | ✅ Not needed |
| batch_first=False | ✅ Supported | Not supported | ✅ Supported |
| Recurrent networks | ✅ Supported | Not supported | ✅ Supported |
| Padding same in Conv | ✅ Supported | Not supported | ✅ Supported |
† Note that performance differences are unstable and can vary a lot depending on the exact model and batch size. The numbers above are averaged over benchmarks with small models consisting of convolutional and linear layers. Also note that the differences are only observed on GPU training; CPU performance seems to be almost identical for all approaches.
†† Layers that produce joint computations on batch samples (e.g. BatchNorm) are not allowed under any approach.
Other fixes and improvements:
- utils.unfold2d with non-symmetric pad/dilation/kernel_size/stride (#443)
- set_to_none (#471)
- defaults field to match pytorch Optimizer (#329)
- step() when p.grad_sample=None (#331)
- closure call after applying DP noise (#330)