Repository for NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients"
We have released `adabelief-pytorch==0.2.0` and `adabelief-tf==0.2.0`. Please use the latest version from pip. Source code is available under the folders `pypi_packages/adabelief_pytorch0.2.0` and `pypi_packages/adabelief_tf0.2.0`.
Project Page, arXiv, Reddit, Twitter, BiliBili (Chinese), BiliBili (English), YouTube
(Crucial) The default hyper-parameters in `adabelief-pytorch==0.0.5` are different from those in `adabelief-pytorch==0.1.0` and later. Please check your version of adabelief, and check whether you specify all hyper-parameters or whether the defaults are really what you want.

adabelief-pytorch==0.2.0

(Crucial) Starting with `adabelief-pytorch==0.1.0`, we modified the defaults of several arguments to fit the needs of general tasks such as GAN and Transformer training. Please check whether you specify these arguments or use the defaults when upgrading from version 0.0.5 to a higher version.
Version | epsilon | weight_decouple | rectify |
---|---|---|---|
adabelief-pytorch==0.0.5 | 1e-8 | False | False |
latest version (0.2.0) | 1e-16 | True | True |
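If you tuned your task under the old defaults, you can reproduce them explicitly under the new version. A minimal sketch (the `nn.Linear` model is a placeholder for your network):

```python
import torch.nn as nn
from adabelief_pytorch import AdaBelief

model = nn.Linear(10, 2)  # placeholder model

# Reproduce the adabelief-pytorch==0.0.5 defaults explicitly under 0.2.0
optimizer = AdaBelief(model.parameters(), lr=1e-3, eps=1e-8,
                      weight_decouple=False, rectify=False)
```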
adabelief-tf==0.2.0

(Crucial) In `adabelief-tf==0.1.0`, we modified `adabelief-tf` to have the same features as `adabelief-pytorch`, including decoupled weight decay and learning rate rectification, and added support for TensorFlow>=2.0 and Keras. The source code is in `pypi_packages/adabelief_tf0.1.0`. We tested it with a text classification task and a word embedding task.

The default values have been updated; please check whether you specify these arguments or use the defaults when upgrading from version 0.0.1 to a higher version:
Version | epsilon | weight_decouple | rectify |
---|---|---|---|
adabelief-tf==0.0.1 | 1e-8 | Not supported | Not supported |
latest version (0.2.0) | 1e-14 | Always on (not an option in arguments) | default: True |
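Similarly for TensorFlow, a minimal sketch of pinning the old epsilon under the new package (note that decoupled weight decay is always on in the new version and cannot be disabled):

```python
from adabelief_tf import AdaBeliefOptimizer

# Match the adabelief-tf==0.0.1 epsilon explicitly under 0.2.0;
# rectify=False matches the old behavior (rectification was not supported)
optimizer = AdaBeliefOptimizer(learning_rate=1e-3, epsilon=1e-8, rectify=False)
```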
- Check that the code is from the latest official implementation (`adabelief-pytorch==0.2.0`, `adabelief-tf==0.2.0`); default hyper-parameters differ from the old versions.
- Check all hyper-parameters; do NOT simply use the defaults.
- Epsilon in AdaBelief plays a different role than in Adam (typically eps_adabelief = eps_adam * eps_adam). Note that the default eps of Adam is 1e-7 in TensorFlow and 1e-8 in PyTorch; take this into account when using AdaBelief in TensorFlow.
- If SGD is better than Adam: set a large eps (1e-8) in adabelief-pytorch (1e-7 in TensorFlow).
- If SGD is worse than Adam: set a small eps (1e-16) in adabelief-pytorch (1e-14 in TensorFlow; rectify=True often helps).
- If AdamW is better than Adam: turn on `weight_decouple` in adabelief-pytorch (this is always on in adabelief-tf>=0.1.0 and cannot be turned off). Note that the typical weight decay is very different for Adam and AdamW; you may need to adjust it when using AdaBelief with and without decoupled weight decay.
- Check ALL hyper-parameters; the table below lists recommended values per task (see the sketch after the table for how to pass them).
Task | lr | beta1 | beta2 | epsilon | weight_decay | weight_decouple | rectify | fixed_decay | amsgrad |
---|---|---|---|---|---|---|---|---|---|
Cifar | 1e-3 | 0.9 | 0.999 | 1e-8 | 5e-4 | False | False | False | False |
ImageNet | 1e-3 | 0.9 | 0.999 | 1e-8 | 1e-2 | True | False | False | False |
Object detection (PASCAL) | 1e-4 | 0.9 | 0.999 | 1e-8 | 1e-4 | False | False | False | False |
LSTM-1layer | 1e-3 | 0.9 | 0.999 | 1e-16 | 1.2e-6 | False | False | False | False |
LSTM 2,3 layer | 1e-2 | 0.9 | 0.999 | 1e-12 | 1.2e-6 | False | False | False | False |
GAN (small) | 2e-4 | 0.5 | 0.999 | 1e-12 | 0 | True or False (decay=0, no effect) | False | False | False |
SN-GAN (large) | 2e-4 | 0.5 | 0.999 | 1e-16 | 0 | True or False (decay=0, no effect) | True | False | False |
Transformer | 5e-4 | 0.9 | 0.999 | 1e-16 | 1e-4 | True | True | False | False |
Reinforcement (Rainbow) | 1e-4 | 0.9 | 0.999 | 1e-10 | 0.0 | True or False (decay=0, no effect) | True | False | False |
Reinforcement (HalfCheetah-v2) | 1e-3 | 0.9 | 0.999 | 1e-12 | 0.0 | True or False (decay=0, no effect) | True | False | False |
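As an illustration of passing these settings, a minimal sketch for two rows of the table (the `nn.Linear` model is a placeholder for your network):

```python
import torch.nn as nn
from adabelief_pytorch import AdaBelief

model = nn.Linear(10, 2)  # placeholder model

# Cifar row: SGD-like regime, large eps, coupled decay, no rectification
cifar_opt = AdaBelief(model.parameters(), lr=1e-3, betas=(0.9, 0.999),
                      eps=1e-8, weight_decay=5e-4,
                      weight_decouple=False, rectify=False)

# SN-GAN (large) row: adaptive regime, tiny eps, rectification on, no decay
gan_opt = AdaBelief(model.parameters(), lr=2e-4, betas=(0.5, 0.999),
                    eps=1e-16, weight_decay=0,
                    weight_decouple=True, rectify=True)
```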
`epsilon` is used differently in TensorFlow (default 1e-7) than in PyTorch (default 1e-8), so eps in TensorFlow might need to be larger than in PyTorch (perhaps 100 times larger, e.g. eps=1e-16 in PyTorch vs. eps=1e-14 in TensorFlow). But personally I don't have much experience with TensorFlow, so you will likely need to tune eps slightly.

(Results in the paper are all generated using the PyTorch implementation in the `adabelief-pytorch` package, which is the ONLY package that I have extensively tested for now.)
Please install the latest version (0.2.0); previous versions (e.g. 0.0.5) use different default arguments.

```
pip install adabelief-pytorch==0.2.0
```

```python
from adabelief_pytorch import AdaBelief
optimizer = AdaBelief(model.parameters(), lr=1e-3, eps=1e-16, betas=(0.9, 0.999), weight_decouple=True, rectify=False)
```
```
pip install ranger-adabelief==0.1.0
```

```python
from ranger_adabelief import RangerAdaBelief
optimizer = RangerAdaBelief(model.parameters(), lr=1e-3, eps=1e-12, betas=(0.9, 0.999))
```
```
pip install adabelief-tf==0.2.0
```

```python
from adabelief_tf import AdaBeliefOptimizer
optimizer = AdaBeliefOptimizer(learning_rate=1e-3, epsilon=1e-14, rectify=False)
```
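For completeness, a minimal Keras sketch of plugging the optimizer into training (the toy model and loss are placeholders):

```python
import tensorflow as tf
from adabelief_tf import AdaBeliefOptimizer

# Toy model as a placeholder; substitute your own architecture
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(10,))])

optimizer = AdaBeliefOptimizer(learning_rate=1e-3, epsilon=1e-14, rectify=False)
model.compile(optimizer=optimizer, loss="mse")
```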
See the folder `PyTorch_Experiments`; in each subfolder, execute `sh run.sh`. See `readme.txt` in each subfolder for visualization, or refer to the jupyter notebook for visualization.
Please install the latest version from pip; old versions might suffer from bugs. Source code for the up-to-date package is available in the folder `pypi_packages`.

AdaBelief uses a different denominator from Adam, and is orthogonal to other techniques such as rectification, decoupled weight decay, weight averaging, etc. This implies that if you use certain techniques with Adam, you might still need them to get good results with AdaBelief.
`epsilon` in AdaBelief plays a different role than in Adam: typically, when you use `epsilon=x` in Adam, `epsilon=x*x` gives similar results in AdaBelief. The default value `epsilon=1e-8` is not a good option in many cases; in versions >0.1.0 the default eps is set to 1e-16.

If your task needs a "non-adaptive" optimizer, meaning SGD performs much better than Adam(W), such as image recognition, set a large `epsilon` (e.g. 1e-8) for AdaBelief to make it more non-adaptive. If your task needs a really adaptive optimizer, meaning Adam is much better than SGD, such as GAN and Transformer training, then the recommended `epsilon` for AdaBelief is small (1e-12, 1e-16, ...).
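As a concrete translation of this rule of thumb, a sketch assuming you already know a good Adam epsilon for your task (the `nn.Linear` model is a placeholder):

```python
import torch.nn as nn
from adabelief_pytorch import AdaBelief

model = nn.Linear(10, 2)  # placeholder model

eps_adam = 1e-8                      # the eps you would use with Adam
eps_adabelief = eps_adam * eps_adam  # roughly eps_adam squared, i.e. 1e-16

optimizer = AdaBelief(model.parameters(), lr=1e-3, eps=eps_adabelief,
                      betas=(0.9, 0.999))
```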
If decoupled weight decay is very important for your task, meaning AdamW is much better than Adam, then set `weight_decouple` to True to turn on decoupled decay in AdaBelief. Note that many optimizers use decoupled weight decay without exposing it as an option, e.g. RAdam; we provide it as an option so users are aware of which technique is actually used.

Don't use "gradient threshold" (clamping each element independently) with AdaBelief; it can result in division by 0 and explosion in the update. "Gradient clip" (shrinking the amplitude of the gradient vector while keeping its direction) is fine, though in my limited experience the clip range sometimes needs to be the same as or larger than for Adam. A sketch of the distinction follows.
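A minimal sketch contrasting the two (the model and loss are placeholders):

```python
import torch
import torch.nn as nn
from adabelief_pytorch import AdaBelief

model = nn.Linear(10, 2)  # placeholder model
optimizer = AdaBelief(model.parameters(), lr=1e-3, eps=1e-16)

loss = model(torch.randn(4, 10)).pow(2).mean()  # placeholder loss
optimizer.zero_grad()
loss.backward()

# OK: "gradient clip" rescales the whole gradient vector, keeping its direction
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)

# AVOID with AdaBelief: "gradient threshold" clamps each element independently,
# which can zero out the gradient-minus-momentum term and blow up the update
# for p in model.parameters():
#     p.grad.clamp_(-0.1, 0.1)

optimizer.step()
```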
Decoupling (argument `weight_decouple`, appears in `AdaBelief` and `RangerAdaBelief`):
Currently there are two ways to perform weight decay for adaptive optimizers: apply it directly to the gradient (Adam), or decouple weight decay from gradient descent (AdamW). This is passed to the optimizer via the argument `weight_decouple` (default: False in 0.0.5, True in the latest version).
Fixed ratio (argument `fixed_decay`, default: False, appears in `AdaBelief`):
(1) If `weight_decouple == False`, this argument does not affect optimization.
(2) If `weight_decouple == True`:
  - with `fixed_decay == False`, the weight is multiplied by `1 - lr * weight_decay`;
  - with `fixed_decay == True`, the weight is multiplied by `1 - weight_decay`. This is implemented as an option but was not used to produce results in the paper.

What is the actual weight decay we are using?
This is seldom discussed in the literature, but personally I think it's very important. When we set `weight_decay=1e-4` for SGD, the weight is scaled by `1 - lr * weight_decay`. Two points need to be emphasized: (1) `lr` in SGD is typically larger than in Adam (0.1 vs 0.001), so the weight decay in Adam needs to be set to a larger number to compensate; (2) `lr` decays during training, which means we typically apply a larger effective weight decay in early phases and a smaller one in late phases. A worked example follows.
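A worked example of the effective per-step scaling factors discussed above (numbers are illustrative):

```python
# Decoupled decay with fixed_decay=False: w <- w * (1 - lr * weight_decay)
lr, weight_decay = 1e-3, 1e-2
print(1 - lr * weight_decay)  # 0.99999, AdamW-style, shrinks as lr decays

# fixed_decay=True: w <- w * (1 - weight_decay), independent of lr
print(1 - weight_decay)       # 0.99, constant throughout training

# SGD with lr=0.1 and weight_decay=1e-4 gives the same effective factor
# as lr=1e-3 with weight_decay=1e-2, illustrating why Adam-style training
# needs a larger nominal weight decay to compensate for its smaller lr
print(1 - 0.1 * 1e-4)         # 0.99999
```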
AdaBelief seems to require a different `epsilon` from Adam. In the CV tasks in this paper, `epsilon` is set to `1e-8`; for GAN training it is set to `1e-16`. We recommend trying different `epsilon` values in practice and sweeping through a large region. We recommend `eps=1e-8` when SGD outperforms Adam, as in many CV tasks, and `eps=1e-16` when Adam outperforms SGD, as in GAN and Transformer tasks. Sometimes you might need to try `eps=1e-12`, for example in some reinforcement learning tasks. A minimal sweep sketch follows this paragraph.
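A minimal sweep sketch over eps, following the recommendation above (`build_model` and `train_and_eval` are hypothetical placeholders for your own model constructor and training loop):

```python
from adabelief_pytorch import AdaBelief

for eps in [1e-8, 1e-10, 1e-12, 1e-14, 1e-16]:
    model = build_model()  # hypothetical: your model constructor
    optimizer = AdaBelief(model.parameters(), lr=1e-3, eps=eps,
                          betas=(0.9, 0.999))
    score = train_and_eval(model, optimizer)  # hypothetical training loop
    print(f"eps={eps:.0e}  validation score={score:.4f}")
```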
Rectification (argument `rectify` in `AdaBelief`): whether to turn on the rectification as in RAdam. The rectification basically uses SGD in early phases for warmup, then switches to Adam. Rectification is implemented as an option, but was never used to produce results in the paper.

AMSGrad (argument `amsgrad`, default: False, in `AdaBelief`): whether to take the max (over history) of the denominator, as in AMSGrad. It is set to False for all experiments.
The results in the paper are generated using the `adabelief-pytorch` package. This is the ONLY package that I have extensively tested for now.

The `ranger` optimizer in the `ranger-adabelief` package combines RAdam + LookAhead + Gradient Centralization + AdaBelief, but this is not used in the paper and is not extensively tested.

`adabelief-tf==0.0.1` is a naive implementation in TensorFlow. It lacks many features such as decoupled weight decay, and is not extensively tested. Currently I don't have plans to improve it since I seldom use TensorFlow; please contact me if you want to collaborate and improve it.

`adabelief-tf==0.1.0` supports the same features as `adabelief-pytorch==0.1.0`, including decoupled weight decay and rectification, but personally I have not had the chance to test it as extensively as the PyTorch version.

The experiments on Cifar are the same as the demo in AdaBound, with the only difference being the optimizer. The ImageNet experiment uses a different learning rate schedule: typically the lr is decayed by 1/10 at epochs 30 and 60, ending at epoch 90. For reasons I have not extensively investigated, AdaBelief performs well when decayed at epochs 70 and 80, ending at epoch 90; using the default lr schedule produces a slightly worse result. If you have any ideas on this, please open an issue here or email me.
I got some feedback on RNNs from the reddit discussion; a few tips are collected in the notes below.
Please contact me at [email protected] or open an issue here if you would like to help improve the packages (especially the TensorFlow version), explore combinations with other methods to create a better optimizer, or discuss the theory. Any thoughts are welcome!
Notes:
- The rectification option follows the RAdam implementation.
- The SN-GAN experiments are based on PyTorch-studioGAN.
- The LSTM experiments are in `PyTorch_Experiments/LSTM`.
- An old version of `fairseq` is incompatible with new versions of PyTorch; it works fine with the latest `fairseq`.

Updates:
- `adabelief-pytorch==0.1.0` and `adabelief-tf==0.1.0`: the TensorFlow version now supports TF>=2.0 and Keras, with the same features as the PyTorch version, including decoupled weight decay and rectification.
- `adabelief-pytorch==0.2.0`: fixes the error with coupled weight decay in `adabelief-pytorch==0.1.0`, and fixes the `amsgrad` update in `adabelief-pytorch==0.1.0`. Adds an option to disable the message printing by specifying `print_change_log=False` when initializing the optimizer.
- `adabelief-tf==0.2.0`: adds an option to disable the message printing by specifying `print_change_log=False` when initializing the optimizer. Deletes redundant computations, so 0.2.0 should be faster than 0.1.0. Removes the dependency on `tensorflow-addons`.
- `adabelief-pytorch==0.2.1`: compatible with mixed-precision training.

Citation:

```
@article{zhuang2020adabelief,
title={AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients},
author={Zhuang, Juntang and Tang, Tommy and Ding, Yifan and Tatikonda, Sekhar and Dvornek, Nicha and Papademetris, Xenophon and Duncan, James},
journal={Conference on Neural Information Processing Systems},
year={2020}
}
```

```
@article{zhuang2021acprop,
title={Momentum Centering and Asynchronous Update for Adaptive Gradient Methods},
author={Zhuang, Juntang and Ding, Yifan and Tang, Tommy and Dvornek, Nicha and Tatikonda, Sekhar and Duncan, James},
journal={Conference on Neural Information Processing Systems},
year={2021}
}
```