tensorly.contrib.decomposition.matrix_product_state_cross

matrix_product_state_cross(input_tensor, rank, tol=1e-05, n_iter_max=100)[source]

MPS (tensor-train) decomposition via cross-approximation (TTcross) [1]

Decomposes input_tensor into a sequence of order-3 tensors (factors/cores) of the given rank. Rather than decomposing the whole tensor directly, we sample fibers based on a skeleton decomposition. We initialize a random tensor-train and sweep from left to right and from right to left. At each core, we matricize the core, choose the fiber indices by finding a maximum-volume submatrix, and update the core.
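
For reference, the two-dimensional analogue of this idea is the skeleton (cross) approximation of a matrix from a few of its rows and columns. Below is a minimal NumPy sketch, for illustration only (not part of the TensorLy API); the test matrix and the hand-picked index sets I and J are assumptions, whereas TTcross chooses such indices automatically via maxvol.

import numpy as np

# A rank-2 matrix built from two outer products.
x = np.arange(6, dtype=float)
A = np.outer(x, x) + np.outer(np.ones(6), x**2)

# Skeleton / cross approximation: A is rebuilt from a few column fibers C,
# a few row fibers R, and the inverse of their intersection.
I = [1, 4]                            # sampled row indices (hand-picked here)
J = [2, 5]                            # sampled column indices
C = A[:, J]
R = A[I, :]
A_hat = C @ np.linalg.inv(A[np.ix_(I, J)]) @ R

print(np.linalg.norm(A - A_hat))      # ~0: exact, since rank(A) = 2 = |I| = |J|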

Advantage: faster
The main advantage of TTcross is that it does not need to evaluate all the entries of the tensor. For a tensor with tensor_order modes of size tensor_shape, an SVD-based decomposition needs O(tensor_shape^tensor_order) operations, whereas the runtime of TTcross is linear in tensor_shape and tensor_order, which makes it feasible in high dimensions (a rough count is sketched below).
Disadvantage: less accurate
TTcross may underestimate the error, since it only evaluates a subset of the entries of the tensor. Moreover, in contrast to its good practical performance, there is no theoretical guarantee of its convergence.
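
A rough back-of-the-envelope count of the entries that must be evaluated (a sketch, assuming all mode sizes equal tensor_shape and all TT-ranks equal rank): each sweep only needs the fibers indexed by the current row and column index sets, roughly as many entries as there are core elements, versus every entry for a full SVD-based decomposition.

# Entries touched per TTcross sweep vs. entries in the full tensor
# (illustrative count only; uniform mode size and uniform TT-rank assumed).
tensor_shape, tensor_order, rank = 10, 8, 5

full_entries = tensor_shape ** tensor_order             # read by an SVD-based method
cross_entries = tensor_order * rank**2 * tensor_shape   # fibers sampled in one sweep (roughly)

print(full_entries)    # 100000000
print(cross_entries)   # 2000
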
Parameters:

input_tensor : tensorly.tensor

The tensor to decompose.

rank : {int, int list}

maximum allowable MPS rank of the factors. If int, then this is the same for all the factors. If int list, then rank[k] is the rank of the k-th factor.

tol : float

accuracy threshold for outer while-loop

n_iter_max : int

maximum number of iterations of the outer while-loop (i.e., the number of 'crosses' or 'sweeps' performed)

Returns:

factors : MPS factors

order-3 tensors of the MPS decomposition
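
Concretely, the k-th returned core has shape (rank[k], input_tensor.shape[k], rank[k+1]), with the boundary ranks rank[0] = rank[-1] = 1. A quick sketch of this shape convention (assuming the NumPy backend; the 4 x 5 x 6 arange test tensor is chosen here only for illustration):

import numpy as np
import tensorly as tl
from tensorly.contrib.decomposition import matrix_product_state_cross

# rank is a list of length tensor_order + 1 with rank[0] = rank[-1] = 1.
tensor = tl.tensor(np.arange(4 * 5 * 6, dtype=float).reshape(4, 5, 6))
rank = [1, 3, 3, 1]
factors = matrix_product_state_cross(tensor, rank)

for core in factors:
    print(core.shape)   # expected: (1, 4, 3), then (3, 5, 3), then (3, 6, 1)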

Notes

Pseudo-code [2]:

1. Initialize tensor_order cores and the column indices.
2. While error > tol:
3.     Update the tensor-train from left to right:
           for Core 1 to Core tensor_order:
               approximate the skeleton decomposition by QR and maxvol
4.     Update the tensor-train from right to left:
           for Core tensor_order to Core 1:
               approximate the skeleton decomposition by QR and maxvol
5. End while.
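
The maxvol step above selects, from a tall matricized core (after QR orthogonalization), the square submatrix of rows with (locally) maximal volume; the positions of those rows become the sampled fiber indices. Below is a minimal NumPy/SciPy sketch of a greedy maxvol routine, for illustration only; it is not the implementation used by TensorLy, and the pivoted-QR initialization and the swap tolerance of 1.05 are assumptions.

import numpy as np
from scipy.linalg import qr

def maxvol(A, swap_tol=1.05, max_swaps=200):
    """Greedily pick r row indices of a tall n x r matrix A whose
    r x r submatrix has (locally) maximal volume (|determinant|).
    Illustrative sketch, not TensorLy's implementation."""
    n, r = A.shape
    # Initial guess: r well-conditioned rows, found by pivoted QR on A.T.
    rows = list(qr(A.T, pivoting=True)[2][:r])
    for _ in range(max_swaps):
        # Express every row of A in the basis of the currently selected rows;
        # B[rows] is the identity by construction.
        B = A @ np.linalg.inv(A[rows])
        i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        if abs(B[i, j]) <= swap_tol:
            break                      # no swap enlarges the volume enough
        rows[j] = i                    # this swap multiplies the volume by |B[i, j]|
    return sorted(rows)

# Example: pick the 3 "most independent" rows of a random 50 x 3 matrix.
print(maxvol(np.random.random_sample((50, 3))))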

Acknowledgement: the main body of the code is adapted from TensorToolbox by Daniele Bigoni.

References

[1] Ivan Oseledets and Eugene Tyrtyshnikov. TT-cross approximation for multidimensional arrays. Linear Algebra and its Applications, 432(1):70–88, 2010.
[2] Sergey Dolgov and Robert Scheichl. A hybrid alternating least squares–TT cross algorithm for parametric PDEs. arXiv preprint arXiv:1707.04562, 2017.

Examples

Generate a 5^3 tensor, and decompose it into a tensor-train of 3 factors, with rank = [1, 3, 3, 1]:

>>> import numpy as np
>>> import tensorly as tl
>>> from tensorly.contrib.decomposition import matrix_product_state_cross
>>> tensor = tl.tensor(np.arange(5**3).reshape(5, 5, 5))
>>> rank = [1, 3, 3, 1]
>>> factors = matrix_product_state_cross(tensor, rank)

Print the first core:

>>> print(factors[0])
[[[ 24.   0.   4.]
  [ 49.  25.  29.]
  [ 74.  50.  54.]
  [ 99.  75.  79.]
  [124. 100. 104.]]]
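
As a follow-up check (a sketch assuming the NumPy backend used above; the contraction is done by hand with np.tensordot rather than through a TensorLy helper), the cores can be contracted back together; for this exactly low-TT-rank tensor the reconstruction error should be at the level of floating-point round-off:

>>> full = factors[0]
>>> for core in factors[1:]:
...     full = np.tensordot(full, core, axes=([-1], [0]))
>>> full = full.reshape(tensor.shape)
>>> print(np.max(np.abs(full - tl.to_numpy(tensor))))  # expected: tiny round-off error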