Scalable and user-friendly neural :brain: forecasting algorithms.
- `.columns` produces a list rather than a Pandas Index, by @akmalsoliev (#862)
- Migrated linting from `flake8` to `ruff`, by @Borda (#871)
- Added the `horizon_weight` parameter to losses and `BasePointLoss` in https://github.com/Nixtla/neuralforecast/pull/704
- Follow-up changes to `horizon_weight` in https://github.com/Nixtla/neuralforecast/pull/706
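A minimal sketch of how the new parameter might be used; `MAE` stands in here for any loss inheriting from `BasePointLoss`, and the 12-step horizon and weight values are illustrative assumptions:

```python
import torch
from neuralforecast.losses.pytorch import MAE

# Illustrative only: weight later horizons more heavily so errors near
# the end of the forecast window contribute more to the loss.
# `horizon_weight` expects one weight per step of the horizon (12 here).
weights = torch.linspace(0.5, 1.5, steps=12)
loss = MAE(horizon_weight=weights)  # inherited from BasePointLoss
```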
Full Changelog: https://github.com/Nixtla/neuralforecast/compare/v1.6.1...v1.6.2
- Improved the `fit` and `cross_validation` methods with a `use_init_models` parameter that restores models to their initial parameters (see the sketch below).
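A sketch of the restore behavior, assuming the standard `NeuralForecast` workflow and the bundled `AirPassengersDF` example data:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.utils import AirPassengersDF

nf = NeuralForecast(models=[NHITS(h=12, input_size=24, max_steps=100)], freq='M')
nf.fit(df=AirPassengersDF)

# Refit from scratch: use_init_models=True restores the stored initial
# parameters instead of continuing from the previously trained weights.
nf.fit(df=AirPassengersDF, use_init_models=True)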
- Added robust losses: `HuberLoss`, `TukeyLoss`, `HuberQLoss`, and `HuberMQLoss`.
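A sketch of plugging one of the robust losses into a model; the `delta` threshold shown is an assumed illustrative value, not taken from the notes:

```python
from neuralforecast.models import NHITS
from neuralforecast.losses.pytorch import HuberLoss

# Huber loss is quadratic for small errors and linear beyond `delta`,
# which reduces the influence of outliers during training.
model = NHITS(h=12, input_size=24, loss=HuberLoss(delta=1.0))
```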
- Added `DistributionLoss` options to build temporal classifiers (see the sketch below).
- Added the `exclude_insample_y` parameter to all models to build models based only on exogenous regressors.
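A sketch of a temporal classifier; the Bernoulli distribution option is an assumption not stated in the notes above, and it presumes a binary target column `y`:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.losses.pytorch import DistributionLoss

# Assumption: DistributionLoss accepts a 'Bernoulli' distribution, so the
# model outputs event probabilities for a binary target `y` in {0, 1}.
classifier = NHITS(
    h=12,
    input_size=24,
    loss=DistributionLoss(distribution='Bernoulli'),
)
nf = NeuralForecast(models=[classifier], freq='D')
```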
- Updates to the `NBEATSx` and `NHITS` models.
- Improved the `predict` method of windows-based models to create batches and control memory usage, tunable via the new `inference_windows_batch_size` parameter (see the sketch below).
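A sketch of capping inference memory; the batch size of 256 is an arbitrary illustrative value:

```python
from neuralforecast.models import NHITS

# Smaller batches of inference windows lower the memory peak at the
# cost of slower prediction.
model = NHITS(h=12, input_size=24, inference_windows_batch_size=256)
```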
- Added the `HINT` family of hierarchical models: identity reconciliation, `AutoHINT`, and reconciliation methods in hyperparameter selection.
- Added the `inference_input_size` hyperparameter to recurrent-based methods to limit the historic window used during inference, for better control of memory usage and inference times (see the sketch below).
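A sketch for the recurrent case, assuming the usual `input_size=-1` convention for training on the full history; the 48-step inference window is illustrative:

```python
from neuralforecast.models import RNN

# Train on the full history (input_size=-1 is the recurrent convention),
# but feed only the last 48 steps at inference time to bound memory
# usage and latency.
model = RNN(h=12, input_size=-1, inference_input_size=48)
```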
- Fixed `MinMax` scalers that returned NaN values when the mask had 0 values.
- Fixed `y_loc` and `y_scale` being on different devices.
- Added `early_stopping_steps` to the `HINT` method.
- Improved the `fit` method of all models to stop training when the training or validation loss becomes NaN, printing the input and output tensors for debugging.
- Prevented `val_check_steps` > `max_steps` from causing an error when early stopping is enabled.
- Added `AutoNBEATSx` to core's `MODEL_DICT`.
- Fixed the `NBEATSx-i` model, where `horizon=1` caused an error due to collapsing trend and seasonality basis.

Full Changelog: https://github.com/Nixtla/neuralforecast/compare/v1.4.0...v1.5.0
Recurrent models (RNN, LSTM, GRU, DilatedRNN) can now take static, historical, and future exogenous variables. These variables are combined with lags to produce "context" vectors through MLP decoders, following the MQ-RNN model (https://arxiv.org/pdf/1711.11053.pdf). See the sketch below.
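A sketch of a recurrent model using all three exogenous types; the column names are hypothetical placeholders for your own data:

```python
from neuralforecast.models import RNN

# Hypothetical columns: 'price' is known into the future, 'sales_lag'
# is available only historically, and 'region' is static per series.
model = RNN(
    h=12,
    input_size=24,
    futr_exog_list=['price'],      # future exogenous variables
    hist_exog_list=['sales_lag'],  # historical exogenous variables
    stat_exog_list=['region'],     # static exogenous variables
)
```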
The new `DistributionLoss` class allows all available models to produce probabilistic forecasts. By setting the `loss` hyperparameter to one of these losses, the model will learn and output the parameters of the chosen distribution (see the sketch after this list):

- The `predict` method can return samples, quantiles, or distribution parameters.
- Added an sCRPS loss in PyTorch to minimize errors when generating prediction intervals.
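A sketch assuming a Normal distribution option; the requested confidence levels and step counts are illustrative:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.losses.pytorch import DistributionLoss
from neuralforecast.utils import AirPassengersDF

# Assumption: a 'Normal' option; level=[80, 90] requests 80% and 90%
# prediction intervals computed from the learned distribution parameters.
model = NHITS(
    h=12,
    input_size=24,
    loss=DistributionLoss(distribution='Normal', level=[80, 90]),
    max_steps=100,
)
nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=AirPassengersDF)
forecasts = nf.predict()  # median plus lo/hi interval columns
```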
We included new optimization features commonly used to train neural models (see the sketch after this list):

- A `torch.optim.lr_scheduler.StepLR` scheduler. The new `num_lr_decays` hyperparameter controls the number of decays (evenly distributed) during training.
- `early_stop_patience_steps` controls the number of validation steps with no improvement after which training is stopped.
- Training, the scheduler, validation-loss computation, and early stopping are now defined in steps (instead of epochs) to better control the training procedure. Use `max_steps` to define the number of training iterations. Note: `max_epochs` will be deprecated in the future.
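A sketch combining the step-based controls described above; the specific values are illustrative only:

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.utils import AirPassengersDF

model = NHITS(
    h=12,
    input_size=24,
    max_steps=500,                # total training iterations, in steps
    num_lr_decays=3,              # three evenly spaced StepLR decays
    val_check_steps=50,           # run validation every 50 steps
    early_stop_patience_steps=5,  # stop after 5 checks w/o improvement
)
nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=AirPassengersDF, val_size=12)  # early stopping needs validation data
```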
Full Changelog: https://github.com/Nixtla/neuralforecast/compare/v1.2.0...v1.3.0
- `languages` in https://github.com/Nixtla/neuralforecast/pull/355
- Added `num_samples` to `Distribution`'s initialization in https://github.com/Nixtla/neuralforecast/pull/359
Full Changelog: https://github.com/Nixtla/neuralforecast/compare/v1.1.0...v1.2.0