tensorly.solvers.nnls.hals_nnls

hals_nnls(UtM, UtU, V=None, n_iter_max=500, tol=1e-08, sparsity_coefficient=None, ridge_coefficient=None, nonzero_rows=False, exact=False, epsilon=0.0, callback=None)[source]

Nonnegative Least Squares (NNLS)

Computes an approximate solution of a nonnegative least squares problem (NNLS) with an exact block-coordinate descent scheme. M is m by n, U is m by r, V is r by n. All matrices are nonnegative componentwise.

This algorithm is a simplified implementation of the accelerated HALS defined in [1]. It features an early stopping criterion. It is simplified to ensure reproducibility and to expose a simple API for controlling the number of inner iterations.

This function is designed to be called repeatedly inside an outer-loop alternating algorithm, for instance when computing a nonnegative matrix or tensor factorization. To use it as a stand-alone solver, set the exact flag to True, as in the sketch below.
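A minimal stand-alone usage sketch (assuming the NumPy backend; the shapes and data are illustrative):

import numpy as np
import tensorly as tl
from tensorly.solvers.nnls import hals_nnls

rng = np.random.default_rng(0)
M = tl.tensor(rng.random((10, 6)))  # m-by-n nonnegative data
U = tl.tensor(rng.random((10, 3)))  # m-by-r nonnegative factor

# Precompute the cross-products consumed by the solver
UtM = tl.dot(tl.transpose(U), M)
UtU = tl.dot(tl.transpose(U), U)

# Stand-alone use: exact=True requests a high-precision solution
V = hals_nnls(UtM, UtU, exact=True)
assert tl.min(V) >= 0  # V is an r-by-n nonnegative matrix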

Parameters:
UtM: r-by-n array

Pre-computed product of the transpose of U and M, used in the update rule.

UtU: r-by-r array

Pre-computed product of the transpose of U and U, used in the update rule.

V: r-by-n initialization matrix (mutable)

Initial value for the V array. By default, V is initialized with one non-zero entry per column, corresponding to the column of U closest to the corresponding column of M.

n_iter_max: Positive integer

Upper bound on the number of iterations. Default: 500

tol: float in [0,1]

Early stopping criterion: iterations continue while err_k > tol * err_0. Set it small for an almost exact NNLS solution, or larger (e.g. 1e-2) for the inner loops of a PARAFAC computation. Default: 1e-8

sparsity_coefficient: float or None

The coefficient controlling the sparsity level in the objective function. If set to None, no sparsity penalty is imposed. Default: None

ridge_coefficient: float or None

The coefficient controlling the ridge (l2) penalty in the objective function. If set to None, no ridge penalty is imposed. Default: None

nonzero_rows: boolean

True if the rows of V cannot be entirely zero, False if they can. Default: False

exact: boolean

If True, the algorithm returns a high-precision solution at a higher computational cost. If False, it returns an approximate solution. Default: False

epsilon: float

Small constant such that V >= epsilon instead of V >= 0. Required to ensure convergence; avoids division by zero and zeroed-row resets. Default: 0

callback: callable, optional

A callable called after each iteration. The supported signature is

callback(V: tensor, error: float)

where V is the current estimate of the nonnegative least squares solution and error is the squared Euclidean norm of the difference between V at iteration k and V at iteration k-1 (error is therefore not the loss function, which would be costly to compute). The algorithm also terminates early if the callback returns True. Default: None
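For instance, a hypothetical callback that stops the solver once consecutive iterates are nearly identical (the threshold is illustrative):

def stop_when_stable(V, error):
    # error is the squared norm of the change in V between iterations;
    # returning True terminates the solver early
    return error < 1e-10

V = hals_nnls(UtM, UtU, callback=stop_when_stable)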

Returns:
V: array

an r-by-n nonnegative matrix; see Notes.

Notes

We solve the following problem

\[\min_{V \geq \epsilon} \|M - UV\|_F^2\]

The matrix V is updated row-wise. The update rule for row k is

\[V[k,:]_{(j+1)} = V[k,:]_{(j)} + \frac{UtM[k,:] - UtU[k,:] \times V_{(j)}}{UtU[k,k]}\]

with j the inner-iteration index. V is then thresholded entrywise to be at least epsilon.
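In code, one sweep of these row updates could look like the following sketch (plain NumPy, with UtM, UtU, V and epsilon as defined above):

import numpy as np

def hals_sweep(UtM, UtU, V, epsilon=0.0):
    # One pass of row-wise updates, mirroring the rule above
    for k in range(UtU.shape[0]):
        V[k, :] = V[k, :] + (UtM[k, :] - UtU[k, :] @ V) / UtU[k, k]
        V[k, :] = np.maximum(V[k, :], epsilon)  # entrywise threshold
    return V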

This problem can also be augmented with a sparsity coefficient \(\lambda_s\) and a ridge coefficient \(\lambda_r\), enhancing sparsity or smoothness in the solution [2]. In this sparse/ridge version, the update rule becomes

\[V[k,:]_{(j+1)} = V[k,:]_{(j)} + \frac{UtM[k,:] - UtU[k,:] \times V_{(j)} - \lambda_s}{UtU[k,k] + 2\lambda_r}\]

Note that in this formulation the data-fitting term is halved but the ridge penalty is not, hence the factor \(2\lambda_r\) in the denominator.
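In practice, the penalized variant is enabled through the corresponding keyword arguments; the penalty strengths below are illustrative:

V = hals_nnls(UtM, UtU, sparsity_coefficient=0.1, ridge_coefficient=0.01)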

References

[1] N. Gillis and F. Glineur, "Accelerated Multiplicative Updates and Hierarchical ALS Algorithms for Nonnegative Matrix Factorization," Neural Computation 24 (4): 1085-1105, 2012.

[2] J. Eggert and E. Korner, "Sparse coding and NMF," 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No. 04CH37541), Vol. 4, IEEE, 2004.