tensorly.solvers.admm.admm

admm(UtM, UtU, x, dual_var, n_iter_max=100, n_const=None, order=None, non_negative=None, l1_reg=None, l2_reg=None, l2_square_reg=None, unimodality=None, normalize=None, simplex=None, normalized_sparsity=None, soft_sparsity=None, smoothness=None, monotonicity=None, hard_sparsity=None, tol=0.0001)

Alternating direction method of multipliers (ADMM) algorithm to minimize a quadratic function under convex constraints.

Parameters:
UtM: ndarray

Pre-computed product of the transpose of U and M.

UtU: ndarray

Pre-computed product of the transpose of U and U.

x: ndarray

Initial value of x. Default: None

dual_var: ndarray

Dual variable to update x.

n_iter_max: int

Maximum number of iterations. Default: 100

n_const: int

Number of constraints. If None, the function solves the least squares problem without a proximity operator. If the ADMM function is used with a constraint outside of a constrained parafac decomposition, n_const should be set to 1. Default: None

order: int

Specifies which constraint to implement if several constraints are selected as input. Default: None

non_negative: bool or dictionary

This constraint clips negative values to 0. If True, the non-negative constraint is applied to all modes.

l1_reg: float or list or dictionary, optional

Penalizes the factor with the l1 norm using the input value as regularization parameter.

l2_reg: float or list or dictionary, optional

Penalizes the factor with the l2 norm using the input value as regularization parameter.

l2_square_reg: float or list or dictionary, optional

Penalizes the factor with the squared l2 norm using the input value as regularization parameter.

unimodality: bool or dictionary, optional

If True, the unimodality constraint is applied to all modes. Applied to each column separately.

normalize: bool or dictionary, optional

This constraint divides all values by the maximum value of the input array. If True, the normalize constraint is applied to all modes.

simplex: float or list or dictionary, optional

Projects onto the simplex with the given parameter. Applied to each column separately.

normalized_sparsity: float or list or dictionary, optional

Normalizes with the norm after hard thresholding.

soft_sparsity: float or list or dictionary, optional

Imposes that the columns of the factor have an l1 norm bounded by a user-defined threshold.

smoothness: float or list or dictionary, optional

Optimizes the factors by solving a banded system.

monotonicity: bool or dictionary, optional

Projects columns onto a monotonically decreasing distribution. Applied to each column separately. If True, the monotonicity constraint is applied to all modes.

hard_sparsity: float or list or dictionary, optional

Hard thresholding with the given threshold.

tol: float

Convergence tolerance. Default: 1e-4
Returns:
x: Updated ndarray
x_split: Updated ndarray
dual_var: Updated ndarray
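
A minimal usage sketch (not taken from the library's examples): it precomputes UtM and UtU for a small random problem and runs the solver with a single non-negativity constraint. The (n, rank) layout of x and dual_var is inferred from the transposes in the Notes below, and setting order=0 for a single constraint is an assumption.

import numpy as np
import tensorly as tl
from tensorly.solvers.admm import admm

rng = np.random.default_rng(0)
m, rank, n = 20, 5, 8

U = tl.tensor(rng.random((m, rank)))   # known factor
M = tl.tensor(rng.random((m, n)))      # data to fit, M ~= U @ tl.transpose(x)

UtM = tl.dot(tl.transpose(U), M)       # pre-computed U^T M, shape (rank, n)
UtU = tl.dot(tl.transpose(U), U)       # pre-computed U^T U, shape (rank, rank)

x = tl.tensor(rng.random((n, rank)))   # initial x (assumed (n, rank) layout)
dual_var = tl.zeros((n, rank))         # dual variable, same shape as x

# A single non-negativity constraint used outside a constrained parafac
# decomposition, so n_const is set to 1 as described above.
x, x_split, dual_var = admm(
    UtM, UtU, x, dual_var,
    n_iter_max=200,
    n_const=1,
    order=0,
    non_negative=True,
    tol=1e-4,
)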

Notes

ADMM solves the convex optimization problem

\[\min_{x_{split},\, x}~ f(x_{split}) + g(x),\; Ax_{split} + Bx = c.\]

The following updates are iterated to solve this problem:

\[x_{split} = argmin_{x_{split}}~ f(x_{split}) + (\rho/2)\|Ax_{split} + Bx - c + dual\_var\|_2^2\]
\[x = argmin_x~ g(x) + (\rho/2)\|Ax_{split} + Bx - c + dual\_var\|_2^2\]
\[dual\_var = dual\_var + (Ax_{split} + Bx - c)\]

where \(\rho\) is a penalty parameter set by the user and \(dual\_var\) is the scaled dual variable.

Consider a least squares problem of the form \(\min_x \|Ux - M\|^2 + r(x)\).

ADMM can be adapted to this least squares problem as follows:

\[x_{split} = (UtU + \rho I)^{-1}(UtM + \rho(x + dual\_var)^T)\]
\[x = argmin_{x}~ r(x) + (\rho/2)\|x - x_{split}^T + dual\_var\|_2^2\]
\[dual\_var = dual\_var + x - x_{split}^T\]

where \(r\) is the regularization operator. Here, x is updated by applying the proximal operator of \(r\) to \(x_{split}^T - dual\_var\).
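
To make the scheme concrete, here is a schematic NumPy re-implementation of these three updates for a non-negativity regularizer, whose proximal operator is a clip at zero. The rho heuristic and the stopping rule are illustrative assumptions, not necessarily what this function implements.

import numpy as np

def admm_nnls(UtM, UtU, x, dual_var, n_iter_max=100, tol=1e-4):
    # Schematic ADMM for min_x ||Ux - M||^2 + r(x) with r = non-negativity.
    # Shapes follow the Notes: UtM is (rank, n), UtU is (rank, rank),
    # x and dual_var are (n, rank).
    rank = UtU.shape[0]
    rho = np.trace(UtU) / rank           # penalty parameter (assumed heuristic)
    A = UtU + rho * np.eye(rank)         # fixed system matrix for the x_split step
    for _ in range(n_iter_max):
        x_old = x.copy()
        # x_split step: closed-form solve of the quadratic subproblem
        x_split = np.linalg.solve(A, UtM + rho * (x + dual_var).T)   # (rank, n)
        # x step: proximal operator of r at x_split^T - dual_var;
        # for non-negativity this is a projection onto the nonnegative orthant
        x = np.maximum(x_split.T - dual_var, 0.0)
        # dual update
        dual_var = dual_var + x - x_split.T
        # stop when the split gap and the change in x are both small (assumed rule)
        if (np.linalg.norm(x - x_split.T) < tol * np.linalg.norm(x)
                and np.linalg.norm(x - x_old) < tol * np.linalg.norm(x)):
            break
    return x, x_split, dual_var

Swapping the clip for another proximal operator (soft-thresholding for l1_reg, a simplex projection for simplex, and so on) yields the x update for the other constraints listed above.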

References

[1] Huang, Kejun, Nicholas D. Sidiropoulos, and Athanasios P. Liavas. “A flexible and efficient algorithmic framework for constrained matrix and tensor factorization.” IEEE Transactions on Signal Processing 64.19 (2016): 5052-5065.