tensorly.decomposition.non_negative_tucker_hals
- non_negative_tucker_hals(tensor, rank, n_iter_max=100, init='svd', svd='truncated_svd', tol=1e-08, sparsity_coefficients=None, core_sparsity_coefficient=None, fixed_modes=None, random_state=None, verbose=False, normalize_factors=False, return_errors=False, exact=False, algorithm='fista')
Non-negative Tucker decomposition with HALS
Uses HALS to update each factor column-wise, and FISTA or an active-set algorithm to update the core; see [1]. A usage sketch is given in the Examples section below.
- Parameters:
- tensor : ndarray
- rank : None, int or int list
Size of the core tensor, (len(ranks) == tensor.ndim).
If int, the same rank is used for all modes.
- n_iter_max : int
Maximum number of iterations.
- init : {‘svd’, ‘random’}, optional
- svd : str, default is ‘truncated_svd’
Function to use to compute the SVD; acceptable values in tensorly.SVD_FUNS.
- tol : float, optional
Tolerance: the algorithm stops when the variation in the reconstruction error is less than the tolerance. Default: 1e-8.
- sparsity_coefficients : array of float (one per mode)
The sparsity coefficient applied to each factor. If set to None, the algorithm runs without sparsity. Default: None.
- core_sparsity_coefficient : array of float
Imposes sparsity on the core when it is updated with FISTA. Default: None.
- fixed_modes : array of integers (between 0 and the number of modes)
Indices of the modes whose factors should not be updated. Default: None.
- verbose : boolean
Indicates whether the algorithm prints the successive reconstruction errors. Default: False.
- normalize_factors : bool
If True, aggregates the norms of the factors in the core.
- return_errors : boolean
Indicates whether the algorithm should return all reconstruction errors and the computation time of each iteration. Default: False.
- exact : bool
If True, the HALS NNLS subroutines compute their results to high precision, at a higher computational cost.
If False, the algorithm returns an approximate solution. Default: False.
- algorithm : {‘fista’, ‘active_set’}
Non-negative least-squares solver used to update the core. Default: ‘fista’.
- Returns:
- factors : ndarray list
List of positive factors of the Tucker decomposition;
element i is of shape (tensor.shape[i], rank[i]).
- errors : list
A list of reconstruction errors at each iteration of the algorithm (only returned when return_errors is True).
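Examples
A minimal usage sketch, assuming the default NumPy backend; the tensor shape, ranks, and seed are illustrative. Unpacking the result as (core, factors) follows TensorLy's TuckerTensor convention.

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import non_negative_tucker_hals

    # A small random tensor; the input is assumed non-negative.
    rng = np.random.default_rng(0)
    tensor = tl.tensor(rng.random((12, 11, 10)))

    # One rank per mode: the core will have shape (4, 5, 6).
    tucker, errors = non_negative_tucker_hals(
        tensor, rank=[4, 5, 6], n_iter_max=100, tol=1e-8, return_errors=True
    )
    core, factors = tucker

    print(core.shape)                  # (4, 5, 6)
    print([f.shape for f in factors])  # [(12, 4), (11, 5), (10, 6)]
    print(errors[-1])                  # final reconstruction error

Passing algorithm='active_set' swaps the core update from FISTA to the active-set solver; the factor updates are unchanged.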
Notes
Tucker decomposes a tensor into a core tensor and a list of factors:
\[tensor = [| core; factors[0], \ldots, factors[-1] |].\]
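As a concrete illustration of this identity, here is a small sketch using TensorLy's tucker_to_tensor (shapes chosen arbitrarily):

    import numpy as np
    import tensorly as tl
    from tensorly import tucker_to_tensor

    rng = np.random.default_rng(0)
    core = tl.tensor(rng.random((2, 3, 2)))
    factors = [tl.tensor(rng.random((dim, r)))
               for dim, r in [(4, 2), (5, 3), (6, 2)]]

    # tensor = [| core; factors[0], factors[1], factors[2] |]
    full = tucker_to_tensor((core, factors))
    print(full.shape)  # (4, 5, 6)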
We solve the following problem for each factor:
\[\min_{factors[i] \geq 0} ||tensor_i - factors[i] \times core_i \times (\prod_{j \neq i} factors[j])^T||^2.\]
If we define the two variables
\[\begin{split}U = core_i \times (\prod_{j \neq i}(factors[j] \times factors[j]^T)), \\ M = tensor_i,\end{split}\]
then the gradient of the problem becomes
\[\delta = -U^T M + factors[i] \times U^T U.\]
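A sketch of this gradient for a single mode, written against TensorLy's unfolding convention; the shapes are illustrative, and Ut below plays the role of U^T (the notation above is loose about transposes):

    import numpy as np
    from tensorly import unfold
    from tensorly.tenalg import kronecker

    rng = np.random.default_rng(1)
    T = rng.random((4, 5, 6))        # data tensor
    core = rng.random((2, 3, 2))     # Tucker core
    factors = [rng.random((4, 2)), rng.random((5, 3)), rng.random((6, 2))]

    i = 0                            # mode being updated
    M = unfold(T, i)                 # tensor_i, shape (4, 30)
    # Core unfolding times the Kronecker product of the other factors.
    Ut = unfold(core, i) @ kronecker(factors, skip_matrix=i).T   # (2, 30)

    # Gradient (up to a constant factor) of ||M - factors[i] @ Ut||^2
    # with respect to factors[i]:
    delta = -M @ Ut.T + factors[i] @ (Ut @ Ut.T)
    print(delta.shape)  # (4, 2), same shape as factors[i]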
In order to calculate U^T U and U^T M efficiently, we define two variables:
\[\begin{split}CoreCross = \prod_{j \neq i}\big(core_i \times (\prod_{j \neq i}(factors[j] \times factors[j]^T))\big), \\ TensorCross = \prod_{j \neq i}(tensor_i \times factors[i]).\end{split}\]
Then U^T U and U^T M become
\[\begin{split}U^T U = CoreCross_j \times core_j^T, \\ U^T M = (TensorCross_j \times core_j^T)^T.\end{split}\]
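The practical point of these cross terms is that U^T U never requires forming the large Kronecker product of the factors: it can be assembled from the small Gram matrices factors[j]^T factors[j]. A self-contained sketch of that identity, with the same illustrative shapes as above:

    import numpy as np
    from tensorly import unfold
    from tensorly.tenalg import kronecker

    rng = np.random.default_rng(1)
    core = rng.random((2, 3, 2))
    factors = [rng.random((4, 2)), rng.random((5, 3)), rng.random((6, 2))]
    i = 0

    # Direct route: explicit Kronecker product of the other factors.
    Ut = unfold(core, i) @ kronecker(factors, skip_matrix=i).T
    # Cheap route: Kronecker product of the small (r_j, r_j) Gram matrices.
    grams = [f.T @ f for j, f in enumerate(factors) if j != i]
    UtU = unfold(core, i) @ kronecker(grams) @ unfold(core, i).T

    assert np.allclose(UtU, Ut @ Ut.T)  # identical, without the big product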
References
[1] T. G. Kolda and B. W. Bader, “Tensor Decompositions and Applications”, SIAM Review, vol. 51, no. 3, pp. 455-500, 2009.