pytorch-widedeep Versions

A flexible package for multimodal deep learning that combines tabular data with text and images using Wide and Deep models in PyTorch

v1.5.1

1 month ago

This release mostly fixes issue #204

v.1.5.0

2 months ago

Added two new embedding methods for numerical features, described in "On Embeddings for Numerical Features in Tabular Deep Learning", and adjusted all models and functionalities accordingly
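
The two methods in question are the piecewise-linear and periodic encodings proposed in that paper. As a rough illustration of the idea behind the periodic variant only (a standalone sketch, not the library's implementation; in pytorch-widedeep these encodings are configured through the tabular preprocessor and models as described in the docs), one could write:

```python
import torch
import torch.nn as nn


class PeriodicEmbedding(nn.Module):
    """Periodic embedding for a batch of scalar features, in the spirit of
    'On Embeddings for Numerical Features in Tabular Deep Learning':
    x -> [cos(2*pi*c_1*x), sin(2*pi*c_1*x), ..., cos(2*pi*c_k*x), sin(2*pi*c_k*x)]
    with learnable frequencies c_1..c_k (illustrative sketch only)."""

    def __init__(self, n_frequencies: int = 8, sigma: float = 1.0):
        super().__init__()
        # frequencies initialised from a normal distribution, as suggested in the paper
        self.frequencies = nn.Parameter(torch.normal(0.0, sigma, size=(n_frequencies,)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch_size,) -> (batch_size, 2 * n_frequencies)
        v = 2 * torch.pi * self.frequencies * x.unsqueeze(-1)
        return torch.cat([torch.cos(v), torch.sin(v)], dim=-1)


emb = PeriodicEmbedding(n_frequencies=4)
print(emb(torch.randn(32)).shape)  # torch.Size([32, 8])
```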

v.1.4.0

5 months ago

This release mainly adds the ability to handle large datasets that do not fit in memory via the load_from_folder module.

This module is inspired by the ImageFolder class in the torchvision library but adapted to the needs of our library. See the docs for details.
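
The exact classes and signatures live in the load_from_folder module (see the docs); the snippet below is only a conceptual sketch of the pattern, i.e. a PyTorch Dataset that reads one row of a csv file and the image it points to on every __getitem__ call, so the full dataset never needs to fit in memory. The file layout and the column names ('img_path', 'target') are hypothetical.

```python
import os

import pandas as pd
import torch
from torch.utils.data import DataLoader, Dataset
from torchvision.io import read_image


class LazyTabularImageDataset(Dataset):
    """Conceptual 'from folder' dataset: each item is read from disk on demand."""

    def __init__(self, csv_path: str, img_dir: str):
        self.csv_path = csv_path
        self.img_dir = img_dir
        # only the number of rows is read up front
        with open(csv_path) as f:
            self.n_rows = sum(1 for _ in f) - 1  # minus the header

    def __len__(self) -> int:
        return self.n_rows

    def __getitem__(self, idx: int):
        # read a single data row: keep the header (line 0), skip lines 1..idx
        row = pd.read_csv(self.csv_path, skiprows=range(1, idx + 1), nrows=1).iloc[0]
        img = read_image(os.path.join(self.img_dir, row["img_path"])).float()
        tab = torch.tensor(row.drop(["img_path", "target"]).to_numpy(dtype="float32"))
        target = torch.tensor(float(row["target"]))
        return tab, img, target


# assumes 'train.csv' and an 'images/' folder with equally sized images exist
loader = DataLoader(LazyTabularImageDataset("train.csv", "images/"), batch_size=32)
```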

v.1.3.2

9 months ago
  1. Added Flash Attention
  2. Added Linear Attention
  3. Revisited and polished the docs

v1.3.1

9 months ago
  1. Added example scripts and notebooks on how to use the library in the context of recommendation systems, using this notebook as an example. This is a response to issue #133
  2. Took the opportunity to add the MovieLens 100k dataset to the library, so that it can now be imported from the datasets module
  3. Added a simple (not pre-trained) transformer model to the text component
  4. Added citation file
  5. Fixed a bug regarding the padding index not being 1 when using the fastai transforms
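
For reference, once the dataset ships with the library it can be pulled from the datasets module along the following lines (the loader name and its return value are my best understanding of the API at this release; check pytorch_widedeep.datasets for the exact signature):

```python
# loader name and return type assumed; see pytorch_widedeep.datasets for the exact API
from pytorch_widedeep.datasets import load_movielens100k

res = load_movielens100k(as_frame=True)  # ratings (and possibly user/item frames)
print(res)
```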

v1.3.0

9 months ago
  • Added new functionality to access feature importance via attention weights for all DL models for tabular data except for the TabPerceiver. This functionality is accessed via the feature_importance attribute in the trainer (computed during training with a sample of observations) and at predict time via the explain method (see the sketch after this list).
  • Fixed the restore-weights capability in all forms of training. This capability is available via two callbacks, EarlyStopping and ModelCheckpoint. Prior to this release there was a bug and the weights were not restored.
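
A hedged usage sketch of the new feature-importance functionality follows. The feature_importance attribute and the explain method are named in this release; the preprocessing and model parameter names reflect my reading of the docs and may differ slightly from the exact API.

```python
from pytorch_widedeep import Trainer
from pytorch_widedeep.models import TabTransformer, WideDeep
from pytorch_widedeep.preprocessing import TabPreprocessor
from pytorch_widedeep.datasets import load_adult

df = load_adult(as_frame=True)
df["target"] = df["income"].apply(lambda x: ">50K" in x).astype(int)
df = df.drop(columns=["income"])

cat_cols = ["workclass", "education", "occupation"]
cont_cols = ["age"]

# with_attention=True prepares the inputs for attention-based tabular models
tab_preprocessor = TabPreprocessor(
    cat_embed_cols=cat_cols, continuous_cols=cont_cols, with_attention=True
)
X_tab = tab_preprocessor.fit_transform(df)

tab_transformer = TabTransformer(
    column_idx=tab_preprocessor.column_idx,
    cat_embed_input=tab_preprocessor.cat_embed_input,
    continuous_cols=cont_cols,
)
model = WideDeep(deeptabular=tab_transformer)

trainer = Trainer(model, objective="binary")
trainer.fit(X_tab=X_tab, target=df["target"].values, n_epochs=2, batch_size=256)

# global importances, computed during training with a sample of observations
print(trainer.feature_importance)

# per-observation importances at predict time
explanations = trainer.explain(X_tab[:10])
```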

joss_paper_package_version_v1.2.0

11 months ago

v1.2.2

1 year ago
  1. Fixed a bug related to the option of adding an FC head on top of the "backbone" models
  2. Added a notebook to illustrate how one could use a Hugging Face model along with any other model in the library

v.1.2.1

1 year ago

A simple, minor release fixing the implementation of the additive attention (see #110).

v1.2.0

1 year ago

There are a number of changes and new features in this release. Here is a summary:

  1. Refactored the code related to the 3 forms of training in the library:

    • Supervised Training (via the Trainer class)
    • Self-Supervised pre-training: we have implemented two methods or routines for self-supervised pre-training. These are:
      • Encoder-Decoder Pre-Training (via the EncoderDecoderTrainer class): this is inspired by the TabNet paper
      • Contrastive-Denoising Pre-Training (via the ContrastiveDenoisingTrainer class): this is inspired by the SAINT paper
    • Bayesian or Probabilistic Training (via the BayesianTrainer class): this is inspired by the paper Weight Uncertainty in Neural Networks

    Just as a reminder, the full list of deep learning models for tabular data currently available in the library can be found in the docs. A hedged usage sketch putting the supervised pieces together is included at the end of this summary.

  2. The text-related component now has 3 available models, all based on RNNs. There are reasons for that, although integration with the Hugging Face Transformers library is the next step in the development of the library. The 3 models available are:

    • BasicRNN
    • AttentiveRNN
    • StackedAttentiveRNN

    The last two are based on Hierarchical Attention Networks for Document Classification. See the docs for details.

  3. The image-related component is now fully integrated with the latest torchvision release, which includes the new Multi-Weight Support API. Currently, the model variants supported by our library are:

    • resnet
    • shufflenet
    • resnext
    • wide_resnet
    • regnet
    • densenet
    • mobilenet
    • mnasnet
    • efficientnet
    • squeezenet
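
To tie the three points above together, below is a hedged end-to-end sketch of supervised training with a tabular, a text and an image component (the dataframe, column names and image files are made up; model and preprocessor parameter names follow my reading of the docs at this release and may differ slightly). For the self-supervised and Bayesian routines the pattern is analogous: the corresponding trainer class (EncoderDecoderTrainer, ContrastiveDenoisingTrainer or BayesianTrainer) wraps the model and exposes its own pretrain/fit method; see the docs for the exact signatures.

```python
import os

import numpy as np
import pandas as pd
from PIL import Image

from pytorch_widedeep import Trainer
from pytorch_widedeep.models import BasicRNN, TabMlp, Vision, WideDeep
from pytorch_widedeep.preprocessing import ImagePreprocessor, TabPreprocessor, TextPreprocessor

# toy data: a handful of rows with a categorical, a continuous, a text and an image column
os.makedirs("images", exist_ok=True)
for name in ["a.jpg", "b.jpg", "c.jpg", "d.jpg"]:
    Image.fromarray(np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)).save(f"images/{name}")

df = pd.DataFrame(
    {
        "city": ["NY", "SF", "NY", "LA"],
        "price": [100.0, 250.0, 80.0, 120.0],
        "review": ["great location", "tiny but cozy", "noisy at night", "would stay again"],
        "img_path": ["a.jpg", "b.jpg", "c.jpg", "d.jpg"],
        "target": [1, 1, 0, 1],
    }
)

tab_preprocessor = TabPreprocessor(cat_embed_cols=["city"], continuous_cols=["price"])
text_preprocessor = TextPreprocessor(text_col="review", min_freq=1)
image_preprocessor = ImagePreprocessor(img_col="img_path", img_path="images")

X_tab = tab_preprocessor.fit_transform(df)
X_text = text_preprocessor.fit_transform(df)
X_img = image_preprocessor.fit_transform(df)

tab_mlp = TabMlp(
    column_idx=tab_preprocessor.column_idx,
    cat_embed_input=tab_preprocessor.cat_embed_input,
    continuous_cols=["price"],
)
basic_rnn = BasicRNN(vocab_size=len(text_preprocessor.vocab.itos), embed_dim=32, hidden_dim=64)
vision = Vision(pretrained_model_setup="resnet18")  # any of the torchvision variants listed above

model = WideDeep(deeptabular=tab_mlp, deeptext=basic_rnn, deepimage=vision)

trainer = Trainer(model, objective="binary")
trainer.fit(
    X_tab=X_tab, X_text=X_text, X_img=X_img, target=df["target"].values, n_epochs=1, batch_size=4
)
```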