tensorly.contrib.sparse.decomposition.tucker
- tucker(tensor, rank, fixed_factors=None, n_iter_max=100, init='svd', return_errors=False, svd='truncated_svd', tol=0.0001, random_state=None, mask=None, verbose=False)
Tucker decomposition via Higher Order Orthogonal Iteration (HOOI)
Decomposes tensor into a Tucker decomposition:
tensor = [| core; factors[0], ...factors[-1] |] [1]
- Parameters:
- tensor : ndarray
- rank : None, int or int list
size of the core tensor, (len(ranks) == tensor.ndim);
if int, the same rank is used for all modes
- fixed_factors : int list or None, default is None
if not None, list of modes for which to keep the factors fixed. Only valid if a Tucker tensor is provided as init.
- n_iter_max : int
maximum number of iterations
- init : {'svd', 'random'}, optional
- return_errors : boolean
indicates whether the algorithm should return all reconstruction errors and the computation time of each iteration. Default: False
- svd : str, default is 'truncated_svd'
function to use to compute the SVD; acceptable values in tensorly.SVD_FUNS
- tol : float, optional
tolerance: the algorithm stops when the variation in the reconstruction error is less than the tolerance
- random_state : {None, int, np.random.RandomState}
- mask : ndarray
array of booleans with the same shape as tensor; should be 0 where the values are missing and 1 everywhere else. Note: if tensor is sparse, then mask should also be sparse, with a fill value of 1 (or True); see the sketch after this parameter list.
- verbose : int, optional
level of verbosity
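A minimal sketch of such a sparse mask, assuming the pydata/sparse COO format used by tensorly.contrib.sparse; the shape and the coordinates of the missing entries are purely illustrative:

    import numpy as np
    import sparse

    shape = (10, 10, 10)  # same shape as the (hypothetical) tensor being decomposed
    # Coordinates of two missing entries, (0, 1, 2) and (3, 4, 5): one entry per column.
    coords = np.array([[0, 3],
                       [1, 4],
                       [2, 5]])
    # Store False at the missing positions; the fill value True marks every other entry as observed.
    mask = sparse.COO(coords, data=np.array([False, False]), shape=shape, fill_value=True)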
- Returns:
- core : ndarray of size ranks
core tensor of the Tucker decomposition
- factors : ndarray list
list of factors of the Tucker decomposition. Its i-th element is of shape (tensor.shape[i], ranks[i])
References
[1] T.G. Kolda and B.W. Bader, "Tensor Decompositions and Applications", SIAM REVIEW, vol. 51, no. 3, pp. 455-500, 2009.
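Examples
A minimal usage sketch, assuming the numpy backend with the pydata/sparse library installed; the tensor shape, density, and rank below are illustrative only, and the shapes noted in the comments follow from the Returns section above:

    import sparse
    from tensorly.contrib.sparse.decomposition import tucker

    # Random sparse COO tensor (about 1% of entries stored).
    tensor = sparse.random((50, 60, 70), density=0.01, random_state=0)

    # Rank-(5, 5, 5) Tucker decomposition via higher-order orthogonal iteration.
    core, factors = tucker(tensor, rank=[5, 5, 5], tol=1e-4, random_state=0)

    print(core.shape)                   # (5, 5, 5)
    print([f.shape for f in factors])   # [(50, 5), (60, 5), (70, 5)]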