Time series forecasting with PyTorch
- `epoch_end` has been renamed to `on_epoch_end`, `model.summarize()` is replaced with `ModelSummary(model, max_depth=-1)`, and `Tuner(trainer)` is its own class, so `trainer.tuner` needs replacing. (#1280)
- `predict()` interface returning named tuple - see tutorials.
- `N-HiTS` network that has consistently beaten `N-BEATS` (#890)
- `EncoderNormalizer()` with limited data history using the `max_length` argument (#782)
- `MultiEmbedding()` with convenience `output_size` and `input_size` properties (#829)
- `TemporalFusionTransformer.optimize_hyperparameters` (#619)
- Removed `dropout_categoricals` parameter from `TimeSeriesDataSet`. Use `categorical_encoders=dict(<variable_name>=NaNLabelEncoder(add_nan=True))` instead (#518)
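To illustrate the replacement above: `NaNLabelEncoder(add_nan=True)` conceptually reserves a dedicated integer code for NaN/unknown categories. Below is a simplified pure-Python sketch of that idea (the class name and details are illustrative, not the library's implementation):

```python
class SimpleNaNLabelEncoder:
    """Simplified sketch of a label encoder that reserves index 0 for
    NaN/unknown categories, roughly the behavior add_nan=True enables.
    Not the library's actual NaNLabelEncoder."""

    def __init__(self, add_nan=True):
        self.add_nan = add_nan
        self.mapping = {}

    def fit(self, values):
        # When add_nan=True, index 0 is reserved for NaN/unseen categories.
        offset = 1 if self.add_nan else 0
        for i, v in enumerate(sorted({v for v in values if v is not None})):
            self.mapping[v] = i + offset
        return self

    def transform(self, values):
        if self.add_nan:
            # None and categories unseen during fit fall back to index 0.
            return [self.mapping.get(v, 0) for v in values]
        return [self.mapping[v] for v in values]


encoder = SimpleNaNLabelEncoder(add_nan=True).fit(["a", "b", "a"])
codes = encoder.transform(["a", "b", None, "c"])  # None and unseen "c" map to 0
```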
- Rename parameter `allow_missings` for `TimeSeriesDataSet` to `allow_missing_timesteps` (#518)
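To make the renamed parameter concrete, "missing timesteps" are gaps in an otherwise consecutive time index. A small standalone sketch of gap detection (a hypothetical helper for illustration; `TimeSeriesDataSet` handles this internally when `allow_missing_timesteps=True`):

```python
def find_missing_timesteps(time_idx):
    """Return the time steps absent from an otherwise consecutive
    integer time index. Illustrative helper, not library code."""
    present = set(time_idx)
    full_range = range(min(time_idx), max(time_idx) + 1)
    return [t for t in full_range if t not in present]


# A series observed at steps 0, 1, 2, 5, 6 is missing steps 3 and 4.
missing = find_missing_timesteps([0, 1, 2, 5, 6])
```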
- Transparent handling of transformations. Forward methods should now call two new methods (#518):
  - `transform_output` to explicitly rescale the network outputs into the de-normalized space
  - `to_network_output` to create a dict-like named tuple. This allows tracing the modules with PyTorch's JIT. Only `prediction` is still required, as it is the main network output.

  Example:
```python
def forward(self, x):
    normalized_prediction = self.module(x)
    prediction = self.transform_output(prediction=normalized_prediction, target_scale=x["target_scale"])
    return self.to_network_output(prediction=prediction)
```
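The example above relies on two steps: rescaling into the de-normalized space given a target scale, and wrapping results in a dict-like named tuple. A pure-Python sketch of both, assuming a simplified `(center, scale)` semantics for `target_scale` (the library's actual `transform_output` depends on the configured normalizer):

```python
from collections import namedtuple

# Dict-like named tuple: fields are accessed by name, which keeps modules traceable.
NetworkOutput = namedtuple("NetworkOutput", ["prediction"])


def transform_output_sketch(prediction, target_scale):
    """Rescale normalized predictions into the original target space.
    Assumes target_scale is a (center, scale) pair; illustrative only."""
    center, scale = target_scale
    return [p * scale + center for p in prediction]


normalized = [0.0, 1.0, -1.0]
out = NetworkOutput(
    prediction=transform_output_sketch(normalized, target_scale=(10.0, 2.0))
)
# out.prediction == [10.0, 12.0, 8.0]
```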
- `< 1e-7` (#429)
- `output_size` for multi-target forecasting with the TemporalFusionTransformer (#328)
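For the multi-target case, `output_size` can be given as a list with one entry per target. A hedged sketch of the underlying splitting idea (a hypothetical helper, not the TemporalFusionTransformer's actual code):

```python
def split_multi_target(flat_output, output_sizes):
    """Split a flat output vector into per-target chunks, one chunk per
    entry in output_sizes. Illustrative only."""
    chunks, start = [], 0
    for size in output_sizes:
        chunks.append(flat_output[start:start + size])
        start += size
    return chunks


# Two targets: 3 quantiles for the first, a single point forecast for the second.
parts = split_multi_target([0.1, 0.5, 0.9, 2.0], output_sizes=[3, 1])
```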