NMFLibrary: Non-negative Matrix Factorization (NMF) Library: Version 2.1
Authors: Hiroyuki Kasai
Last page update: July 22, 2022
Latest library version: 2.1 (see Release notes for more info)
Your contributions are very welcome; please let us know about them.
The NMFLibrary is a pure-MATLAB library collecting algorithms for non-negative matrix factorization (NMF). The solvers can also be called from Python (see demo.py).
If this library is useful for you, please cite this as presented below:
@misc{kasai_NMFLibrary_2017,
Author = {Kasai, Hiroyuki},
Title = {{NMFLibrary}: MATLAB library for non-negative matrix factorization (NMF)},
Year = {2017},
Howpublished = {\url{https://github.com/hiroyuki-kasai/NMFLibrary}}
}
- **Frobenius-norm**
    - Fro-MU (multiplicative updates)
    - PGD (projected gradient descent)
    - ALS (alternating least squares)
    - ANLS (alternating non-negative least squares)
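For intuition about the Frobenius-norm multiplicative updates, here is a minimal numpy sketch of the classical Lee–Seung rules. This is an illustration only, not the library's MATLAB implementation; the function name `fro_mu` is hypothetical.

```python
import numpy as np

def fro_mu(V, rank, iters=200, eps=1e-9, seed=0):
    # Lee-Seung multiplicative updates for min ||V - W H||_F^2, W, H >= 0.
    # Each update multiplies by a nonnegative ratio, so nonnegativity is preserved.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with new H fixed
    return W, H
```

Because each step is a scaled gradient step with a nonnegative multiplicative factor, the Frobenius cost is non-increasing along the iterations.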
- **Divergence-based**
    - Div-MU
    - Div-ADMM
    - KL-FPA (first-order primal-dual algorithm)
    - KL-BMD
- **Semi**
    - Semi-MU
    - Semi-BCD
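To make the divergence-based family concrete, the multiplicative updates for the Kullback–Leibler divergence D(V || WH) can be sketched in numpy as follows (an illustration, not the library's code; the name `kl_mu` is hypothetical):

```python
import numpy as np

def kl_mu(V, rank, iters=200, eps=1e-9, seed=0):
    # Multiplicative updates for the KL divergence D(V || WH).
    # Denominators are the column/row sums of W and H respectively.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    ones = np.ones_like(V)
    for _ in range(iters):
        H *= (W.T @ (V / (W @ H + eps))) / (W.T @ ones + eps)
        W *= ((V / (W @ H + eps)) @ H.T) / (ones @ H.T + eps)
    return W, H
```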
- **Variant**
    - GNMF (Graph Regularized NMF)
    - NeNMF (NMF with Nesterov's gradient acceleration)
    - SDNMF (NMF with Sinkhorn distance)
- **Robust**
    - Robust-MU
- **Sparse**
    - Sparse-MU (sparse multiplicative updates (MU))
    - Sparse-MU-V
    - sparseNMF (sparse NMF)
    - SC-NMF (NMF with sparseness constraints)
    - Nonsmooth-NMF
    - NS-NMF (fast nonsmooth NMF)
    - Proj-Sparse
    - PALM-Sparse-Smooth-NMF
- **Orthogonal**
    - DTPP (orthogonal multiplicative updates (MU))
    - Orth-MU (orthogonal multiplicative updates (MU))
    - ALT-ONMF
    - HALS-SO (hierarchical ALS with soft orthogonal constraint)
- **Symmetric**
    - SymmANLS (symmetric ANLS)
        - D. Kuang, C. Ding, H. Park, "Symmetric nonnegative matrix factorization for graph clustering," SIAM SDM, 2012.
        - D. Kuang, S. Yun, H. Park, "SymNMF: Nonnegative low-rank approximation of a similarity matrix for graph clustering," Journal of Global Optimization, vol.62, no.3, pp.545-574, 2015.
        - Z. Zhu, X. Li, K. Liu, Q. Li, "Dropping symmetry for fast symmetric nonnegative matrix factorization," NIPS, 2018.
    - SymmHALS (symmetric HALS)
    - SymmNewton (symmetric Newton)
- **Online/stochastic**
    - Incremental-MU and Online-MU
    - SPG (stochastic projected gradient descent)
    - Robust-Online-MU (robust online NMF)
    - ASAG-MU-NMF (asymmetric stochastic averaging gradient multiplicative updates)
    - SMU (stochastic multiplicative updates) and SVRMU (stochastic variance-reduced multiplicative updates)
    - SAGMU-NMF (stochastic averaging gradient multiplicative updates)
- **Probabilistic**
    - PNMF-GIBBS (Gibbs sampler for non-negative matrix factorisation, with ARD) (not included)
        - M. N. Schmidt, O. Winther, L. K. Hansen, "Bayesian non-negative matrix factorization," International Conference on Independent Component Analysis and Signal Separation, Springer Lecture Notes in Computer Science, vol.5441, 2009.
        - T. Brouwer, P. Lio, "Bayesian hybrid matrix factorisation for data integration," AISTATS, 2017.
    - PNMF-VB (variational Bayesian inference for non-negative matrix factorisation, with ARD)
    - Prob-NMF
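As background on the symmetric family: symmetric NMF approximates a nonnegative similarity matrix A by H Hᵀ with H ≥ 0. A minimal numpy sketch of the classical damped multiplicative update of Ding et al. (an illustration under standard assumptions, not the library's SymmANLS/SymmHALS/SymmNewton solvers; the name `symm_mu` is hypothetical):

```python
import numpy as np

def symm_mu(A, rank, iters=300, beta=0.5, eps=1e-9, seed=0):
    # Damped multiplicative update for min ||A - H H'||_F^2 with H >= 0:
    #   H <- H * ((1 - beta) + beta * (A H) / (H H' H)),  beta in (0, 1].
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], rank))
    for _ in range(iters):
        H *= (1 - beta) + beta * (A @ H) / (H @ (H.T @ H) + eps)
    return H
```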
- **Deep**
    - Deep-Semi and Deep-Bidir-Semi
        - G. Trigeorgis, K. Bousmalis, S. Zafeiriou, B. Schuller, "A deep semi-NMF model for learning hidden representations," ICML, 2014.
        - G. Trigeorgis, K. Bousmalis, S. Zafeiriou, B. Schuller, "A deep matrix factorization method for learning attribute representations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.39, no.3, pp.417-429, 2017.
    - Deep-nsNMF
    - Deep-Multiview-Semi
- **Convex**
    - Convex-MU and Kernel-Convex-MU
        - C. Ding, T. Li, M. I. Jordan, "Convex and semi-nonnegative matrix factorizations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.32, no.1, pp.45-55, 2010.
        - T. Li, C. Ding, "The relationships among various nonnegative matrix factorization methods for clustering," ICDM, 2006.
        - Y. Li, A. Ngom, "A new kernel non-negative matrix factorization and its application in microarray data analysis," CIBCB, 2012.
- **Separable**
    - SPA
    - SNPA
- **Convolutive**
    - MU-Conv
    - Heur-MU-Conv
    - ADMM-Y-Conv
    - ADMM-Seq-Conv
- **Projective**
- **Rank2**
- **Nonnegative matrix tri-factorization**
- **Nonnegative under-approximation**
    - N. Gillis, F. Glineur, "Using underapproximations for sparse nonnegative matrix factorization," Pattern Recognition, vol.43, no.4, pp.1676-1687, 2010.
    - N. Gillis, R. J. Plemmons, "Dimensionality reduction, classification, and spectral mixture analysis using nonnegative underapproximation," Optical Engineering, vol.50, 027001, 2011.
- **Minimum-volume**
- **Weighted Low-Rank matrix approximation**
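To make the separable family concrete, here is a minimal numpy sketch of the successive projection algorithm (SPA): it greedily selects the columns of V with the largest residual norm and projects them out. This is an illustration, not the library's `spa` implementation; the name `spa_select` is hypothetical.

```python
import numpy as np

def spa_select(V, r):
    # Successive Projection Algorithm: greedily pick r columns of V that
    # best span the data; returns the selected column indices.
    R = V.astype(float).copy()
    idx = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))   # largest residual column
        idx.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)                       # project out that direction
    return idx
```

Under the separability assumption (every column of V is a convex combination of a few "pure" columns that themselves appear in V), SPA provably recovers the indices of the pure columns.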
| Category | Name in example codes | function | `options.alg` | other options |
|---|---|---|---|---|
| Frobenius-norm | Fro-MU | `fro_mu_nmf` | `mu` | `metric='euc'` |
| | Modified Fro-MU | `fro_mu_nmf` | `mod_mu` | |
| | Accelerated Fro-MU | `fro_mu_nmf` | `acc_mu` | |
| | PGD | `pgd_nmf` | `pgd` | |
| | Direct PGD | `pgd_nmf` | `direct_pgd` | |
| | Adaptive-step PGD | `pgd_nmf` | `adp_step_pgd` | |
| | ALS | `als_nmf` | `als` | |
| | Hierarchical ALS | `als_nmf` | `hals_mu` | |
| | Accelerated Hierarchical ALS | `als_nmf` | `acc_hals_mu` | |
| | ASGROUP | `anls_nmf` | `anls_asgroup` | |
| | ASGIVENS | `anls_nmf` | `anls_asgivens` | |
| | BPP | `anls_nmf` | `anls_bpp` | |
| Divergence | Div-MU-KL | `div_mu_nmf` | | `metric='kl-div'` |
| | Div-MU-ALPHA | `div_mu_nmf` | | `metric='alpha-div'` |
| | Div-MU-BETA | `div_mu_nmf` | | `metric='beta-div'` |
| | Div-MU-IS | `div_mu_nmf` | | `metric='beta-div'`, `d_beta=0` |
| | Div-MU-KL | `div_mu_nmf` | | `metric='beta-div'`, `d_beta=1` |
| | Div-ADMM-IS | `div_admm_nmf` | | `metric='beta-div'`, `d_beta=0` |
| | Div-ADMM-KL | `div_admm_nmf` | | `metric='beta-div'`, `d_beta=1` |
| | KL-FPA | `kl_fpa_nmf` | | |
| | KL-BMD | `kl_bmd_nmf` | | |
| Semi | Semi-MU | `semi_mu_nmf` | | |
| | Semi-BCD | `semi_bcd_nmf` | | |
| Variant | NeNMF | `nenmf` | | |
| | GNMF | `GNMF` | | |
| | SDNMF | `SDNMF` | | |
| Robust | Robust-MU | `robust_mu_nmf` | | |
| Sparse | Sparse-MU-EUC | `sparse_mu_nmf` | | `metric='euc'` |
| | Sparse-MU-KL | `sparse_mu_nmf` | | `metric='kl-div'` |
| | sparseNMF | `sparse_nmf` | | |
| | SC-NMF | `sc_nmf` | | |
| | Nonsmooth-NMF | `ns_nmf` | | `metric='euc'`, `update_alg='apg'` |
| | Proj-Sparse | `proj_sparse_nmf` | | |
| | PALM-Sparse-Smooth | `palm_sparse_smooth_nmf` | | |
| Orthogonal | DTPP | `dtpp_nmf` | | |
| | Orth-MU | `orth_mu_nmf` | | |
| | NMF-HALS-SO | `hals_so_nmf` | | |
| | ALT-ONMF | `alternating_onmf` | | |
| Symmetric | SymmANLS | `symm_anls` | | |
| | SymmHALS | `symm_halsacc` | | |
| | SymmNewton | `symm_newton` | | |
| Online | Incremental-NMF | `incremental_mu_nmf` | | |
| | Online-MU | `online_mu_nmf` | | |
| | Accelerated Online-MU | `acc_online_mu_nmf` | | |
| | SPG | `spg_nmf` | | |
| | Robust-Online-MU | `robust_online_mu_nmf` | | |
| | ASAG-MU-NMF | `asag_mu_nmf` | | |
| | Stochastic-MU | `smu_nmf` | | |
| | SVRMU | `svrmu_nmf` | | |
| | R-SVRMU | `svrmu_nmf` | | `robust=true` |
| | SAGMU | `sagmu_nmf` | | |
| Probabilistic | PNMF-VB | `vb_pro_nmf` | | |
| | PNMF-VB-ARD | `vb_pro_nmf` | | `ard=true` |
| | Prob-NMF | `prob_nmf` | | |
| Deep | Deep-Semi | `deep_semi_nmf` | | |
| | Deep-Bidir-Semi | `deep_bidirectional_nmf` | | |
| | Deep-nsNMF | `deep_ns_nmf` | | |
| | Deep-Multiview-Semi | `deep_multiview_semi_nmf` | | |
| Convex | Convex-MU | `convex_mu_nmf` | | `sub_mode='std'` |
| | Kernel-Convex-MU | `convex_mu_nmf` | | `sub_mode='kernel'` |
| Separable | SPA | `spa` | | |
| | SNPA | `snpa` | | |
| Convolutive | MU-Conv | `mu_conv_nmf` | | |
| | Heur-MU-Conv | `heuristic_mu_conv_nmf` | | |
| | ADMM-Y-Conv | `admm_y_conv_nmf` | | |
| | ADMM-Seq-Conv | `admm_seq_conv_nmf` | | |
| Rank2 | Rank2-NMF | `rank2nmf` | | |
| Nonnegative matrix tri-factorization | Sep-Symm-NMTF | `sep_symm_nmtf` | | |
| Nonnegative under-approximation | recursive_nmu | `recursive_nmu` | | |
| Minimum-volume | minvol-NMF | `minvol_nmf` | | |
| Weighted Low-Rank matrix approximation | WLRA | `wlra` | | |
```
./                         - Top directory.
./README.md                - This readme file.
./run_me_first.m           - The script that you need to run first.
./demo.m                   - Demonstration script to check and understand this package easily.
./demo_face.m              - Demonstration script using the CBCL face dataset.
./demo.py                  - Demonstration script to use this package easily from Python.
|plotter/                  - Contains plotting tools to show convergence results and various plots.
|auxiliary/                - Some auxiliary tools for this project.
|solver/                   - Contains various optimization algorithms.
|--- frobenius_norm/       - NMF solvers with the Frobenius-norm metric.
|--- divergence/           - NMF solvers with various divergence metrics (KL, beta, alpha, IS).
|--- online/               - Online/stochastic NMF solvers.
|--- sparse/               - Sparse NMF solvers.
|--- robust/               - Robust NMF solvers.
|--- orthogonal/           - Orthogonal NMF solvers.
|--- symmetric/            - Symmetric NMF solvers.
|--- semi/                 - Semi-NMF solvers.
|--- deep/                 - Deep NMF solvers.
|--- probabilistic/        - Probabilistic NMF solvers.
|--- convex/               - Convex NMF solver.
|--- convolutive/          - Convolutive NMF solvers.
|--- minvol/               - Minimum-volume rank-deficient NMF.
|--- nm_under_approx/      - Recursive non-negative matrix underapproximation.
|--- nmtf/                 - Separable symmetric nonnegative matrix tri-factorization.
|--- projective_nmf/       - Projective NMF solver.
|--- rank2/                - Rank-two NMF solver.
|--- weight_lowrank_aprox/ - Weighted low-rank matrix approximation algorithm.
|--- nenmf/                - Nesterov's accelerated NMF solver.
|--- nnls/                 - Solvers for nonnegativity-constrained least squares.
|--- 3rd_party/            - Solvers provided by third parties.
|--- solver_health_check.m - Health-check scripts for solvers.
|applications/             - Some applications using NMF.
```
Run run_me_first
for path configurations.
%% First run the setup script
run_me_first;
Just execute demo
for the simplest demonstration of this package.
%% Execute the demonstration script
demo;
The "demo.m" file contains the following:
%% generate a synthetic non-negative matrix V of size (m x n)
m = 500;
n = 100;
V = rand(m,n);
%% set the rank of the factorization
rank = 5;
%% perform factorization
% Fro-MU
options.alg = 'mu';
[w_mu, infos_mu] = fro_mu_nmf(V, rank, options);
% Hierarchical ALS
options.alg = 'hals';
[w_hals, infos_hals] = als_nmf(V, rank, options);
%% plot
display_graph('epoch','cost', {'MU', 'HALS'}, {w_mu, w_hals}, {infos_mu, infos_hals});
Let's take a closer look at the code above bit by bit. The procedure has only 4 steps!
Step 1: Generate data
First, we generate a synthetic non-negative data matrix V of size (m x n).
m = 500;
n = 100;
V = rand(m,n);
Step 2: Define rank
We set the rank value.
rank = 5;
Step 3: Perform solver
Now you can run NMF solvers, e.g., Frobenius-norm MU and hierarchical ALS (HALS), by calling the fro_mu_nmf()
and als_nmf()
functions after setting some optimization options.
% Fro-MU
options.alg = 'mu';
[w_mu, infos_mu] = fro_mu_nmf(V, rank, options);
% Hierarchical ALS
options.alg = 'hals';
[w_hals, infos_hals] = als_nmf(V, rank, options);
They return the final solution w
and statistics that include histories of the epoch count, cost value, gradient norm, number of gradient evaluations, and so on.
Step 4: Show result
Finally, display_graph()
plots the decrease of the cost value against the number of iterations (epochs) and the elapsed time [sec].
display_graph('epoch','cost', {'Fro-MU', 'HALS'}, {w_mu, w_hals}, {infos_mu, infos_hals});
display_graph('time','cost', {'Fro-MU', 'HALS'}, {w_mu, w_hals}, {infos_mu, infos_hals});
That's it!
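For intuition about the HALS updates selected above via options.alg = 'hals', here is a numpy sketch of the standard hierarchical-ALS scheme (an illustration under the textbook formulation, not the library's als_nmf code; the name `hals` is hypothetical): each column of W and each row of H is updated in closed form while the others are held fixed.

```python
import numpy as np

def hals(V, rank, iters=100, eps=1e-9, seed=0):
    # Hierarchical ALS: cyclic closed-form nonnegative updates of each
    # rank-one factor (column of W, row of H), clamped at eps to stay positive.
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(iters):
        VHt, HHt = V @ H.T, H @ H.T
        for k in range(rank):
            W[:, k] = np.maximum(eps, W[:, k] + (VHt[:, k] - W @ HHt[:, k]) / (HHt[k, k] + eps))
        WtV, WtW = W.T @ V, W.T @ W
        for k in range(rank):
            H[k, :] = np.maximum(eps, H[k, :] + (WtV[k, :] - WtW[k, :] @ H) / (WtW[k, k] + eps))
    return W, H
```

Each inner update exactly solves a one-block nonnegative least-squares subproblem, which is why HALS typically converges in far fewer sweeps than plain MU.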
"demo_face.m" illustrates the learned basis (dictionary) for the CBCL face dataset.
The dataset is first loaded into V instead of generating synthetic data in Step 1.
V = importdata('./data/CBCL_face.mat');
Then, in Step 4, we can additionally display the basis elements (W: dictionary) obtained with different algorithms.
plot_dictionnary(w_mu.W, [], [7 7]);
plot_dictionnary(w_hals.W, [], [7 7]);
Step 1: Find the path to the MATLAB folder
Run matlabroot
in the MATLAB command window.
matlabroot;
Step 2: Install the Engine API
To install the Engine API, choose one of the following. You must run the python install command in the specified folder. The following are examples for R2022a on Windows, Linux, and macOS, respectively.
cd "c:\Program Files\MATLAB\R2022a\extern\engines\python"
python setup.py install
cd "/usr/local/MATLAB/R2022a/bin/matlab/extern/engines/python"
python setup.py install
cd "/Applications/MATLAB_R2022a.app/extern/engines/python"
python setup.py install
Step 3: Run demonstration code
python demo.py
As for Steps 1 and 2, see more details here.
- `fro_mu_nmf.m`, `als_nmf.m`, `wlra.m`, `spa.m`, `snpa.m`, `proj_sparse_nmf.m`, `rank2nmf.m`, `projective_nmf.m`, `alternating_onmf.m`, `recursive_nmu.m`, `sep_symm_nmtf.m`, `minvol_nmf.m`, `nnls_*.m`, `semi_bcd_nmf.m`, and others are ported from the code of the NMF book written by Nicolas Gillis.
- `nnlsm_activeset.m`, `nnls1_asgivens.m`, `nnlsm_blockpivot.m`, and `normalEqComb.m` written by Jingu Kim.
- `nlssubprob.m`.
- `GNMF.m`, `GNMF_Multi.m`, `constructW.m`, and `litekmeans.m` written by Deng Cai.
- `SDNMF.m` and `SDNMF_Multi.m` written by Wei Qian.
- `kl_fpa_nmf.m` written by Felipe Yanez.
- `BMD.m` written by LTK Hien.
- `deep_semi_nmf.m` and `deep_bidirectional_nmf.m` written by G. Trigeorgis, and `deep_multiview_semi_nmf.m` written by H. Zhao.
- `palm_sparse_smooth_nmf.m` written by Raimon Fabregat.
- `convex_mu_nmf.m` written by Yifeng Li.
- `mu_conv_nmf.m`, `heuristic_mu_conv_nmf.m`, `admm_y_conv_nmf.m`, and `admm_seq_conv_nmf.m` written by lyn202206.
- `prob_nmf.m` from the NMF DTU Toolbox by Lars Kai Hansen.
- `plot_dictionnary.m`, `rescale.m`, and `getoptions.m`.

If you have any problems or questions, please contact the author: Hiroyuki Kasai (email: hiroyuki dot kasai at waseda dot jp)