An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
**Neural Architecture Search**

- `nni.retiarii` is no longer maintained and tested. Please migrate to `nni.nas` (a minimal migration sketch follows this list):
  - Inherit `nni.nas.nn.pytorch.ModelSpace`, rather than use `@model_wrapper`.
  - Use `nni.choice`, rather than `nni.nas.nn.pytorch.ValueChoice`.
  - Use `nni.nas.experiment.NasExperiment` and `NasExperimentConfig`, rather than `RetiariiExperiment`.
  - Use `nni.nas.model_context`, rather than `nni.nas.fixed_arch`.
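For illustration, a minimal sketch of the new v3.0 style, loosely based on the NAS quickstart; the layer shapes and labels here are arbitrary:

```python
import nni
import torch.nn as nn
from nni.nas.nn.pytorch import ModelSpace, LayerChoice, MutableLinear

# v3.0 style: subclass ModelSpace instead of decorating with @model_wrapper.
class MyModelSpace(ModelSpace):
    def __init__(self):
        super().__init__()
        # Architecture choice between two candidate convolutions.
        self.conv = LayerChoice([
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.Conv2d(3, 16, kernel_size=5, padding=2),
        ], label='conv')
        # Hyper-parameter choice: nni.choice replaces the removed ValueChoice.
        self.fc = MutableLinear(16, nni.choice('hidden_dim', [64, 128]))

    def forward(self, x):
        # Global average pooling over the spatial dims, then the mutable head.
        return self.fc(self.conv(x).mean(dim=(2, 3)))
```

A space defined this way is then launched through `NasExperiment` (together with an evaluator and a strategy) rather than `RetiariiExperiment`.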
- A refreshed experience for constructing model spaces:
  - Enhanced debuggability via the `freeze()` and `simplify()` APIs (see the sketch after this list).
  - Enhanced expressiveness with `nni.choice`, `nni.uniform`, `nni.normal`, etc.
  - Customizable model spaces with `MutableModule`, `ModelSpace` and `ParametrizedModule`.
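A rough sketch of the two debugging APIs, reusing the `MyModelSpace` defined above; the exact sample format accepted by `freeze()` (a label-to-choice mapping, with a list-based `LayerChoice` selected by index) is our reading of the docs, not verbatim from this release note:

```python
space = MyModelSpace()

# simplify() flattens the space into its raw mutables, keyed by label,
# which is handy for inspecting what will actually be searched.
print(space.simplify())

# freeze() materializes one concrete model from a sample of the space.
# Assumed sample format: {label: chosen value}; index 0 picks the 3x3 conv.
model = space.freeze({'conv': 0, 'hidden_dim': 128})
```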
**Model Compression**

- The compression framework has been refactored; the new import path is `nni.contrib.compression`.
  - Support more compression targets, like `input`, `output` and any registered parameters. view doc
  - Support configuring `granularity` in pruners. view doc
- New compression examples (a pruner configuration sketch follows this list):
  - `nni/examples/compression`
  - `nni/examples/compression/evaluator`
  - `nni/examples/compression/pruning`
  - `nni/examples/compression/quantization`
  - `nni/examples/compression/fusion`
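A rough pruning sketch under the new v3.0 config format; `model` is assumed to be an existing `torch.nn.Module`, and the `sparse_ratio` and `granularity` values are purely illustrative:

```python
from nni.contrib.compression.pruning import L1NormPruner

# One rule: prune 50% of every Linear layer, at output-channel granularity.
# The keys follow the v3.0 config format; the values are illustrative.
config_list = [{
    'op_types': ['Linear'],
    'sparse_ratio': 0.5,
    'granularity': 'out_channel',
}]

pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()  # generates masks without changing the weights
```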
**Training Services**

- Breaking change: `reuse_mode` now defaults to `False`; setting it to `True` will fall back to the v2.x remote training service, as in the sketch below.
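A configuration sketch using the Python experiment API; the remaining remote-machine fields are omitted for brevity:

```python
from nni.experiment import Experiment

experiment = Experiment('remote')
# v3.0 default is False; True falls back to the v2.x remote training service.
experiment.config.training_service.reuse_mode = True
```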
- Fixed import error of `PyTorch Lightning` in NAS.
- … `torch._C.parse_schema` in pytorch 1.8.0 in ModelSpeedup.
- Fixed the bug that `rand_like_with_shape` is easy to overflow when `dtype=torch.int8`.
- Support `fit_kwargs` in lightning evaluator. (doc)
- Support `drop_path` and `auxiliary_loss` in NASNet. (doc)
- Add `export_probs` to monitor the architecture weights.
- Move the `nni.retiarii` code base to `nni.nas`.
- Fix a performance issue caused by tensor formatting in `weighted_sum`.
- Add `TorchEvaluator`, `LightningEvaluator`, `TransformersEvaluator` to ease the expression of training logic in pruners (see the sketch after this list). (doc, API)
- Promote all pruners to support `Evaluator`; the old API is deprecated and will be removed in v3.0. (doc)
- Support `lr_scheduler` in pruning by using `Evaluator`.
- Support pruning NLP tasks in `ActivationAPoZRankPruner` and `ActivationMeanRankPruner`.
- Add `training_steps`, `regular_scale`, `movement_mode`, `sparse_granularity` for `MovementPruner`. (doc)
- Add `GroupNorm` replacement in pruning speedup. Thanks to external contributor @cin-xing.
- Optimize `balance` mode performance in `LevelPruner`.
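A rough sketch of wiring training logic into a pruner via `TorchEvaluator`, assuming the v2.9-era import path and a pre-existing `model` and `train_loader`; the training-function signature follows our reading of the evaluator docs:

```python
import nni
import torch
import torch.nn.functional as F
from nni.compression.pytorch import TorchEvaluator  # assumed v2.9-era path

def training_func(model, optimizers, criterion, lr_schedulers=None,
                  max_steps=None, max_epochs=None):
    # Plain training loop; the pruner invokes this whenever it needs training.
    # optimizers is treated as a single optimizer here.
    model.train()
    for step, (x, y) in enumerate(train_loader):
        optimizers.zero_grad()
        criterion(model(x), y).backward()
        optimizers.step()
        if max_steps is not None and step + 1 >= max_steps:
            return

# The optimizer must be created through nni.trace so that the evaluator
# can re-instantiate it from the recorded constructor arguments.
optimizer = nni.trace(torch.optim.Adam)(model.parameters(), lr=1e-3)
evaluator = TorchEvaluator(training_func, optimizer, F.cross_entropy)
```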
Bug fixes:

- Fix the invalid `dependency_aware` mode in scheduled pruners.
- Fix the bug that the `bias` mask cannot be generated.
- Fix the bug that `max_sparsity_per_layer` has no effect.
- Fix `Linear` and `LayerNorm` speedup replacement in NLP tasks.
- Fix tracing `LightningModule` failed in `pytorch_lightning >= 1.7.0`.
- Fix the bug that weights are not defined correctly in `adaptive_parzen_normal` of TPE.
- … use `${envId}_run.sh` to replace `run.sh`.
- … `None`.
- … `[batch, seq, hidden]`.
- One-shot NAS supports the NAS APIs `repeat` and `cell`.
- … `CompressionExperiment` config.
- `tuner.name` is now case insensitive.
- Support launching multiple HPO experiments in one process.
- Internal refactors and improvements.
- … `RecursiveScriptModule` in speedup.
A full-size upgrade of the documentation, with significant improvements in the reading experience, practical tutorials, and examples.
- … (`merge_op`, preprocessor, postprocessor). (doc)
- `depth` in the `Repeat` API allows ValueChoice (see the sketch after this list). (doc)
- Support loading `state_dict` between sub-net and super-net. (doc, example in SPOS)
- `balance` is supported in `LevelPruner`. (doc)
- … `ADMMPruner`. (doc)
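A sketch under the v2.7-era `nni.retiarii` API (deprecated by the releases above); the one-layer block factory is a stand-in:

```python
import nni.retiarii.nn.pytorch as nn
from nni.retiarii import model_wrapper

@model_wrapper
class Stacked(nn.Module):
    def __init__(self):
        super().__init__()
        # depth may now be a ValueChoice, not just a fixed int or (min, max) range.
        self.blocks = nn.Repeat(
            lambda index: nn.Linear(32, 32),  # toy block factory
            depth=nn.ValueChoice([2, 3, 4]),
        )

    def forward(self, x):
        return self.blocks(x)
```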
NOTE: NNI v2.6 is the last version that supports Python 3.6. From the next release, NNI will require Python 3.7+.
- … `nni.experiment.Experiment` APIs as backend. The output messages of the create, resume, and view commands have changed.
- … `~/.config/nni`.
- … `seed`. (doc)
- … `seed`.
- Add `tpe_args` for expert users to customize algorithm behavior.
- … setting `tpe_args.constant_liar_type` to `null` (or `None` in Python).
- `parallel_optimize` and `constant_liar_type` have been removed. If you are using them, please update your config to use `tpe_args.constant_liar_type` instead (a configuration sketch follows this list).
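A configuration sketch via the Python experiment API; the `tpe_args` keys follow the TPE documentation and the values are illustrative:

```python
from nni.experiment import Experiment

experiment = Experiment('local')
experiment.config.tuner.name = 'TPE'
experiment.config.tuner.class_args = {
    'optimize_mode': 'maximize',
    'seed': 42,                        # reproducible tuning
    'tpe_args': {
        'constant_liar_type': 'mean',  # None disables the constant liar
    },
}
```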
- Use an `nni.trace`-wrapped `Optimizer` in Pruning V2. To avoid affecting the user experience as much as possible, only the input parameters of the optimizer are traced. (doc)
- `masks_file` of `ModelSpeedup` now accepts a `pathlib.Path` object, as sketched below. (Thanks to @dosemeion) (doc)
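A small sketch; `model`, `dummy_input` and the masks location are assumptions:

```python
from pathlib import Path
from nni.compression.pytorch import ModelSpeedup

# masks_file may now be a pathlib.Path rather than a plain string.
masks_file = Path('checkpoints') / 'masks.pth'  # hypothetical location
ModelSpeedup(model, dummy_input, masks_file).speedup_model()
```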
- New emoticons!
- Install from pypi
Bug fixes:

- … `fixed_arch` on Retiarii (#3972)
- … `view` mode (#3872)
- `exclude` not supported in some `config_list` cases (#3815)
- … `export_model` in model compression (#3968)
- … `UnSqueeze` in ModelSpeedup (#3960)