Fast and flexible AutoML with learning guarantees.
* Fix handling of `PredictionOutput`s that are not in the `best_export_outputs`.
* Add `warm_start` support to adanet Estimators.
* Introduce `adanet.AutoEnsembleTPUEstimator`.
* Add `adanet.experimental` Keras ModelFlow APIs.
* Fix a bug related to the `model_dir`.
* Introduce `adanet.ensemble.MeanEnsembler` with a basic implementation for taking the mean of logits of subnetworks. This also supports including the mean of `last_layer` (helpful if subnetworks have the same configurations) in the `predictions` and `export_outputs` of the `EstimatorSpec`.
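A minimal wiring sketch (not from the release notes): passing a `MeanEnsembler` to an `adanet.Estimator`. The flag name `add_mean_last_layer_predictions`, plus the `head` and `generator` objects, are assumptions for illustration.

```python
import adanet

# head and generator are assumed to be defined elsewhere:
# a tf.estimator Head and an adanet.subnetwork.Generator.
estimator = adanet.Estimator(
    head=head,
    subnetwork_generator=generator,
    max_iteration_steps=1000,
    # Averages subnetwork logits; the flag below (an assumed name) also
    # averages last_layers into predictions and export_outputs.
    ensemblers=[adanet.ensemble.MeanEnsembler(
        add_mean_last_layer_predictions=True)],
    ensemble_strategies=[adanet.ensemble.GrowStrategy()])
```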
* The `adanet.Evaluator` interface is changing. The `Evaluator.evaluate_adanet_losses(sess, adanet_losses)` function is being replaced with `Evaluator.evaluate(sess, ensemble_metrics)`. The `ensemble_metrics` parameter contains all computed metrics for each candidate ensemble as well as the `adanet_loss`. Code that overrides `evaluate_adanet_losses` must migrate to the new `evaluate` method (we suspect that such cases are very rare).
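For the rare code that overrides the old method, a migration sketch; the exact shape of `ensemble_metrics` (one metrics dict per candidate, containing `adanet_loss`) is an assumption based on the description above.

```python
import adanet

class MigratedEvaluator(adanet.Evaluator):

  # Before: def evaluate_adanet_losses(self, sess, adanet_losses): ...
  def evaluate(self, sess, ensemble_metrics):
    # Return one objective value per candidate ensemble.
    return [metrics["adanet_loss"] for metrics in ensemble_metrics]
```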
* Run the `adanet.Evaluator` before `Estimator#evaluate`, `Estimator#predict`, and `Estimator#export_saved_model`. This can have the effect of changing the best candidate chosen at the final round. When the user passes an `Evaluator`, AdaNet now runs it to establish the best candidate during `evaluate`, `predict`, and `export_saved_model`. Previously, these relied on the `adanet_loss` moving average collected during training: while the previous ensemble would have been established by the `Evaluator`, candidate ensembles that were not done training were considered according to the `adanet_loss`. Now, when a user passes an `Evaluator` that, for example, uses a hold-out set, AdaNet runs it before making predictions or exporting a `SavedModel`, so the best new candidate is chosen according to the hold-out set.
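For example, a hold-out-set `Evaluator` can be passed as follows; `holdout_input_fn`, `head`, and `generator` are assumed placeholders.

```python
import adanet

# The Evaluator re-establishes the best candidate before evaluate/predict/
# export_saved_model, instead of relying on the adanet_loss moving average.
evaluator = adanet.Evaluator(input_fn=holdout_input_fn, steps=100)
estimator = adanet.Estimator(
    head=head,
    subnetwork_generator=generator,
    max_iteration_steps=1000,
    evaluator=evaluator)
```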
* Support `tf.keras.metrics.Metrics` during evaluation.
* Fix handling of the `OutOfRangeError` raised during bagging.
* Fix behavior when `max_steps` and `steps` are both `None`.
* Add support for `TPUEmbedding`.
* Support `max_iteration_steps=None`.
* Introduce `adanet.AutoEnsembleSubestimator` for training subestimators on different training data partitions and implementing ensemble methods like bootstrap aggregating (a.k.a. bagging).
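A sketch of bagging with `AutoEnsembleSubestimator` on a toy dataset; the `AutoEnsembleSubestimator(estimator, train_input_fn)` form and the pool below are assumptions based on this note.

```python
import adanet
import numpy as np
import tensorflow as tf

def bagged_input_fn(seed):
  """Returns an input_fn that draws one bootstrap sample of a toy dataset."""
  def input_fn():
    rng = np.random.RandomState(seed)
    x = rng.normal(size=(256, 2)).astype(np.float32)
    y = (x.sum(axis=1) > 0).astype(np.int32)
    idx = rng.randint(0, 256, size=256)  # sample with replacement
    return tf.data.Dataset.from_tensor_slices(
        ({"x": x[idx]}, y[idx])).repeat().batch(32)
  return input_fn

head = tf.estimator.BinaryClassHead()
feature_columns = [tf.feature_column.numeric_column("x", shape=(2,))]

# One subestimator per bootstrap sample of the training data.
candidate_pool = {
    "dnn_bag_%d" % i: adanet.AutoEnsembleSubestimator(
        tf.estimator.DNNEstimator(
            head=head, feature_columns=feature_columns, hidden_units=[16]),
        train_input_fn=bagged_input_fn(seed=i))
    for i in range(3)
}
```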
* Fix a bug when using `AutoEnsembleEstimator` during distributed training.
* Allow `AutoEnsembleEstimator`'s `candidate_pool` argument to be a `lambda` in order to create `Estimators` lazily.
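A sketch of the lazy form: the `lambda` receives the run config and builds the candidate `Estimator`s only when called (`head` and `feature_columns` as above).

```python
import adanet
import tensorflow as tf

estimator = adanet.AutoEnsembleEstimator(
    head=head,
    # Candidates are constructed lazily, once the config is available.
    candidate_pool=lambda config: {
        "linear": tf.estimator.LinearEstimator(
            head=head, feature_columns=feature_columns, config=config),
        "dnn": tf.estimator.DNNEstimator(
            head=head, feature_columns=feature_columns,
            hidden_units=[32], config=config),
    },
    max_iteration_steps=500)
```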
* Remove `adanet.subnetwork.Builder#prune_previous_ensemble` from the abstract class. This behavior is now specified using `adanet.ensemble.Strategy` subclasses.
* Fix a bug in `adanet.AutoEnsembleEstimator` that incremented the `global_step` by n+1 for n canned `Estimators` like `DNNEstimator`.
* Support `adanet.TPUEstimator` with `adanet.Estimator` feature parity.
* Add an option to the `adanet.AutoEnsembleEstimator` constructor to specify human-readable candidate names.
* Support `tf.estimator.Estimator` subclasses.
* Introduce `adanet.ensemble`, which contains interfaces and examples of ways to learn ensembles using AdaNet. Users can now extend AdaNet to use custom ensemble-learning methods.
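A sketch of a custom ensemble-learning method; the `generate_ensemble_candidates` method and `Candidate` fields below reflect my reading of the `adanet.ensemble` interfaces and should be treated as assumptions.

```python
import adanet

class KeepAllStrategy(adanet.ensemble.Strategy):
  """Proposes one candidate containing every available subnetwork builder."""

  def generate_ensemble_candidates(self, subnetwork_builders,
                                   previous_ensemble_subnetwork_builders):
    return [
        adanet.ensemble.Candidate(
            name="keep_all",
            subnetwork_builders=subnetwork_builders,
            previous_ensemble_subnetwork_builders=(
                previous_ensemble_subnetwork_builders))
    ]
```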
* Support `scalar`, `image`, `histogram`, and `audio` summaries on TPU during training.
* Improve `tf.train.SessionRunHook` support to handle more edge cases.
* Importing the `adanet.subnetwork` package using `from adanet.core import subnetwork` will no longer work, because the package was moved to the `adanet/subnetwork` directory. Most users should already be using `adanet.subnetwork` or `from adanet import subnetwork`, and should not be affected.
* Add support for TPU using `adanet.TPUEstimator`.
* Allow specifying `tf.train.SessionRunHook` instances for training with `adanet.subnetwork.TrainOpSpec`.
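A sketch of attaching hooks via `TrainOpSpec` inside a `Builder` (TF1-style graph mode, matching this release's era); the surrounding class members are elided.

```python
import adanet
import tensorflow as tf

class HookedBuilder(adanet.subnetwork.Builder):
  # name, build_subnetwork, etc. elided for brevity.

  def build_subnetwork_train_op(self, subnetwork, loss, var_list, labels,
                                iteration_step, summary, previous_ensemble):
    optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
    train_op = optimizer.minimize(loss, var_list=var_list)
    # Hooks listed here run only while this subnetwork trains.
    hook = tf.train.StepCounterHook(every_n_steps=100)
    return adanet.subnetwork.TrainOpSpec(
        train_op=train_op, chief_hooks=(), hooks=(hook,))
```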
* Add a `shared` field to `adanet.Subnetwork` to deprecate, replace, and be more flexible than `persisted_tensors`.
* Share more than `Tensors` between iterations, including Python primitives, objects, and lambdas, for greater flexibility. Eliminating reliance on a `MetaGraphDef` proto also eliminates I/O, allowing for faster training and better future-proofing.
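A sketch of the `shared` field inside `build_subnetwork`; the keys and values are illustrative, and `hidden`/`logits` are assumed Tensors built earlier in the method.

```python
import adanet
import tensorflow as tf

subnetwork = adanet.Subnetwork(
    last_layer=hidden,   # assumed: Tensor built earlier in build_subnetwork
    logits=logits,       # assumed
    complexity=tf.constant(1.0),
    # Unlike persisted_tensors, shared may hold arbitrary Python values,
    # available to the next iteration without MetaGraphDef round-trips.
    shared={"num_layers": 3, "activation": tf.nn.relu})
```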
* Fix a bug in `adanet.Estimator`.
* Introduce `adanet.AutoEnsembleEstimator` for learning to ensemble `tf.estimator.Estimator` instances.
* Update the signature of `adanet.subnetwork.Builder`'s `build_subnetwork` method.
* Scope trainable variables under each `adanet.subnetwork.Builder`, so not passing `var_list` to `optimizer.minimize()` will lead to the same behavior as passing it in by default.
* Using `tf.summary` inside an `adanet.subnetwork.Builder` is now equivalent to using the `adanet.Summary` object.
* Reading the `global_step` from within an `adanet.subnetwork.Builder` will return the `iteration_step` variable instead, so that the step starts at zero at the beginning of each iteration. One subnetwork incrementing the step will not affect other subnetworks.
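A compact `Builder` sketch tying these notes together (TF1-style layers; shapes and names are assumptions): plain `tf.summary` calls work, `var_list` may be passed or omitted, and the step read inside the Builder is the per-iteration `iteration_step`.

```python
import adanet
import tensorflow as tf

class SimpleDNNBuilder(adanet.subnetwork.Builder):

  @property
  def name(self):
    return "simple_dnn"

  def build_subnetwork(self, features, logits_dimension, training,
                       iteration_step, summary, previous_ensemble=None):
    x = tf.concat([tf.reshape(v, [tf.shape(v)[0], -1])
                   for v in features.values()], axis=1)
    hidden = tf.layers.dense(x, units=64, activation=tf.nn.relu)
    logits = tf.layers.dense(hidden, units=logits_dimension)
    # Equivalent to calling the adanet.Summary object.
    tf.summary.scalar("hidden_mean", tf.reduce_mean(hidden))
    return adanet.Subnetwork(
        last_layer=hidden, logits=logits, complexity=tf.constant(1.0))

  def build_subnetwork_train_op(self, subnetwork, loss, var_list, labels,
                                iteration_step, summary, previous_ensemble):
    # iteration_step restarts at zero each iteration and is private to this
    # subnetwork; var_list may be omitted since variables are Builder-scoped.
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
    return optimizer.minimize(loss, var_list=var_list)
```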
* Remove the need for the `tf.name_scope("")` hack.
* Fix a bug affecting `adanet.Estimator` models.