Re-Cythonized the Cython files to fix compilation errors with newer compilers.
Fixed np.object usage in tests.
1.16
Added
Added support for the LIGHTFM_NO_CFLAGS environment variable: set it when building LightFM to prevent the build from adding the -ffast-math or -march=native compiler flags.
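A minimal sketch of a source install with the flag disabled (assumptions: pip builds LightFM from source when --no-binary is given, and the build only checks that LIGHTFM_NO_CFLAGS is set, so the value "1" is arbitrary):

```python
# Sketch: install LightFM from source without -ffast-math/-march=native.
import os
import subprocess
import sys

# Assumption: the build script only looks for the variable's presence.
env = dict(os.environ, LIGHTFM_NO_CFLAGS="1")
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--no-binary", "lightfm", "lightfm"],
    env=env,
    check=True,
)
```

From a shell, the equivalent one-liner would be LIGHTFM_NO_CFLAGS=1 pip install --no-binary lightfm lightfm.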
Changed
predict now returns float32 predictions.
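For example (a toy fit on synthetic implicit feedback; the shapes and hyperparameters here are arbitrary):

```python
import numpy as np
from scipy import sparse
from lightfm import LightFM

# 50 users x 100 items of synthetic positive-only interactions.
interactions = sparse.random(50, 100, density=0.1, format="coo", random_state=0)
interactions.data[:] = 1.0

model = LightFM(loss="warp", random_state=0)
model.fit(interactions, epochs=5)

# Score all items for user 0.
scores = model.predict(np.zeros(100, dtype=np.int32), np.arange(100))
print(scores.dtype)  # float32
```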
1.15
Added
Added a check that there is no overlap between test and train in predict_ranks (thanks to @artdgn).
Added dataset builder functionality.
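A short sketch of the builder API (the ids and weights below are made up):

```python
from lightfm.data import Dataset

# Map arbitrary user/item ids to internal indices.
dataset = Dataset()
dataset.fit(users=["u1", "u2"], items=["i1", "i2", "i3"])

# Tuples are (user, item) or (user, item, weight).
(interactions, weights) = dataset.build_interactions(
    [("u1", "i1"), ("u1", "i3"), ("u2", "i2", 2.0)]
)
print(dataset.interactions_shape())  # (2, 3)
```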
Fixed
Fixed error message when item features have the wrong dimensions.
predict now checks its inputs for overflow.
WARP fitting is now numerically stable when there are very few items to
draw negative samples from (< max_sampled).
1.14
Added
Added additional input checks for non-finite values (NaNs, infinities) in features.
Added additional input checks for non-finite values (NaNs, infinities) in interactions.
Added a cross-validation module with dataset splitting utilities.
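A minimal sketch of the splitting utility on synthetic data (the 80/20 split is the default):

```python
import numpy as np
from scipy import sparse
from lightfm.cross_validation import random_train_test_split

interactions = sparse.random(100, 200, density=0.05, format="coo", random_state=0)
train, test = random_train_test_split(
    interactions, test_percentage=0.2, random_state=np.random.RandomState(42)
)
print(train.nnz, test.nnz)  # roughly an 80/20 split of the nonzero entries
```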
Changed
LightFM model now raises a ValueError (instead of an assertion error) when the number of supplied features exceeds the number of estimated feature embeddings.
Warn and delete the downloaded file when the Movielens download is corrupted. This happens in the wild and confuses users terribly.
1.13
Added
Added get_{user/item}_representations functions to facilitate extracting the latent representations out of the model.
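For example (a toy fit on synthetic data; with no feature matrices supplied, the representations come straight from the per-user and per-item embeddings):

```python
from scipy import sparse
from lightfm import LightFM

model = LightFM(no_components=16, random_state=0)
model.fit(
    sparse.random(20, 30, density=0.2, format="coo", random_state=0), epochs=2
)

# Each call returns a (biases, embeddings) pair.
user_biases, user_embeddings = model.get_user_representations()
item_biases, item_embeddings = model.get_item_representations()
print(user_embeddings.shape, item_embeddings.shape)  # (20, 16) (30, 16)
```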
Fixed
recall_at_k and precision_at_k now work correctly at k=1 (thanks to Zank Bennett); see the evaluation sketch after this list.
Moved the Movielens data to a data release to prevent grouplens server flakiness from affecting users.
Fixed a segfault when trying to predict from a model that has not been fitted.
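A small sketch of evaluation at k=1 on a synthetic train/test split (random_train_test_split comes from the cross-validation module described under 1.14 above):

```python
import numpy as np
from scipy import sparse
from lightfm import LightFM
from lightfm.cross_validation import random_train_test_split
from lightfm.evaluation import precision_at_k, recall_at_k

interactions = sparse.random(100, 200, density=0.05, format="coo", random_state=0)
interactions.data[:] = 1.0
train, test = random_train_test_split(
    interactions, random_state=np.random.RandomState(42)
)

model = LightFM(loss="warp", random_state=0).fit(train, epochs=10)

# k=1 now behaves like any other cutoff.
print(precision_at_k(model, test, train_interactions=train, k=1).mean())
print(recall_at_k(model, test, train_interactions=train, k=1).mean())
```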
1.12
Changed
Ranks are now computed pessimistically: when two items are tied, the positive item is assumed to have the higher (worse) rank. This will lead to zero precision scores for models that predict all zeros, for example; see the sketch after this list.
The model will raise a ValueError if, during fitting, any of the parameters become non-finite (NaN or +/- infinity).
Added mid-epoch regularization when a lot of regularization is used. This reduces the likelihood of numerical instability at high regularization rates.
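An illustrative numpy sketch of the pessimistic tie-breaking (this is not LightFM's internal implementation):

```python
import numpy as np

scores = np.zeros(5)  # a degenerate model that scores every item 0
positive = 2          # index of the single relevant item

# Pessimistic rank: every *other* item whose score ties or beats the
# positive item's score counts against it, so ties push it to the bottom.
others = np.delete(scores, positive)
rank = np.sum(others >= scores[positive])
print(rank)  # 4 -> ranked last of 5, so precision@k is 0 for every k <= 4
```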