1. Quick-Start
A short overview to get you started quickly and familiar with the organization of TensorLy.
1.1. Organization of TensorLy
TensorLy is organized in several submodules:
Module | Description
---|---
tensorly | Core operations and dynamically dispatched functions
tensorly.tucker_tensor | Manipulate tensors in decomposed Tucker form
tensorly.cp_tensor | Manipulate tensors decomposed in CP (or Parafac) form
tensorly.tt_tensor | Manipulate tensors in decomposed Tensor-Train format
tensorly.tt_matrix | Manipulate tensors decomposed in the TT-Matrix format
tensorly.parafac2_tensor | Manipulate tensors in decomposed PARAFAC-2 form
tensorly.tenalg | Tensor algebraic operations
tensorly.decomposition | Perform tensor decomposition
tensorly.regression | Perform (low-rank) tensor regression
tensorly.random | Sample random tensors
tensorly.metrics | Error measures
tensorly.contrib | Experimental features, including sparse tensor decomposition and cross decomposition
tensorly.datasets | Loading data
tensorly.plugins | Plug-and-play add-ons for TensorLy
1.2. TensorLy Backend
Earlier, we mentioned that all functions for manipulating arrays can be accessed through tensorly or tensorly.backend. For instance, if you have a tensor t, to take its mean you should use tensorly.mean(t), not, for instance, numpy.mean(t) (or its torch, JAX, etc. equivalent). Why is that?
Important
This is because we support several backends: the code you write in TensorLy can be transparently executed with several frameworks, without you having to change anything in your code! For instance, you can run your code normally using NumPy, but you can also have it run on GPU or on multiple machines using PyTorch, TensorFlow, CuPy, JAX or Paddle, all without adapting your code.
This is why you should always manipulate tensors using TensorLy backend functions only. For instance, tensorly.max calls either the NumPy or the PyTorch version depending on the backend you selected. There are other subtleties handled by the backend to provide a common API regardless of the backend used.
Note
By default, the backend is set to NumPy. You can change the backend using tensorly.set_backend. For instance, to switch to PyTorch, simply type tensorly.set_backend('pytorch').
For more information on the backend, refer to TensorLy’s backend system.
Tensors can be created, e.g. from numpy arrays:
import tensorly as tl
from tensorly import random
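For instance, a tensor can be created from a NumPy array with tl.tensor, which converts the data to the current backend's representation (a short example):
import numpy as np
# Convert a NumPy array into a tensor for the currently active backend
tensor = tl.tensor(np.arange(24).reshape((3, 4, 2)))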
Now, let’s create a random tensor of size 10x10x10:
tensor = random.random_tensor((10, 10, 10))
# This will be a NumPy array by default
Now, if you want to use PyTorch instead:
tl.set_backend('pytorch')
# TensorLy now uses PyTorch for all operations
tensor = random.random_tensor((10, 10, 10))
# This will be a PyTorch tensor by default
In all cases, you manipulate tensors in the same way:
tl.max(tensor)
tl.mean(tensor)
tl.dot(tl.unfold(tensor, 0), tl.transpose(tl.unfold(tensor, 0)))
Note that you can also access the backend functions explicitly through tensorly.backend:
import tensorly.backend as T
T.max(tensor)
1.3. Tensor manipulation
You can then easily perform basic tensor operations, such as folding, unfolding, etc.:
# unfold along mode 0 (the mode-1 unfolding in the standard tensor literature)
unfolded = tl.unfold(tensor, mode=0)
# refold the unfolded tensor
tl.fold(unfolded, mode=0, shape=tensor.shape)
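Folding the unfolding recovers the original tensor exactly; a quick sanity check (a minimal sketch using tl.norm):
refolded = tl.fold(unfolded, mode=0, shape=tensor.shape)
# The round trip is lossless, so the difference should be zero (up to floating point)
print(tl.norm(refolded - tensor))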
You can as easily manipulate tensors in decomposed form:
tensor = random.random_tucker(shape=(3, 4, 5), rank=(2, 3, 4))
# We created a tensor of size 3x4x5 in decomposed (Tucker) form with rank (2, 3, 4)
tl.tucker_tensor.tucker_to_vec(tensor) # Vectorize the tucker tensor
Generally, you can manipulate decomposed tensors using the corresponding submodule: tensorly.tucker_tensor, tensorly.cp_tensor, tensorly.tt_tensor, tensorly.tt_matrix and tensorly.parafac2_tensor.
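For example, you can reconstruct the dense tensor from its decomposed form (a short sketch using tl.cp_to_tensor and tl.tucker_to_tensor):
cp_tensor = random.random_cp(shape=(3, 4, 5), rank=2)
full = tl.cp_to_tensor(cp_tensor)          # dense 3x4x5 tensor from the CP form
tucker_tensor = random.random_tucker(shape=(3, 4, 5), rank=(2, 3, 4))
full = tl.tucker_to_tensor(tucker_tensor)  # dense 3x4x5 tensor from the Tucker form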
1.4. Tensor algebra
More ‘advanced’ tensor algebra functions are located in the aptly named tensorly.tenalg module. This includes, for instance, the n-mode product, the Kronecker product, etc.
We now provide a backend system for tensor algebra, which lets you either use our hand-crafted implementations or dispatch all operations to einsum. By default, we use the hand-crafted implementations. To switch to einsum, or otherwise change the tenalg backend:
import tensorly.tenalg as tg
# Two example matrices to combine
matrix1 = random.random_tensor((2, 3))
matrix2 = random.random_tensor((4, 5))
tg.set_tenalg_backend('core')     # this is the default
tg.kronecker([matrix1, matrix2])  # hand-crafted implementation
tg.set_tenalg_backend('einsum')
tg.kronecker([matrix1, matrix2])  # dispatched to einsum
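The n-mode product mentioned above contracts a tensor with a matrix along a given mode; a minimal sketch using tenalg.mode_dot:
tensor = random.random_tensor((3, 4, 5))
matrix = random.random_tensor((6, 4))
# Contract the tensor's mode-1 fibres (of length 4) with the matrix
result = tg.mode_dot(tensor, matrix, mode=1)  # result has shape (3, 6, 5)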
1.5. Tensor decomposition
Decompositions are in the tensorly.decomposition module.
from tensorly.decomposition import tucker, parafac, non_negative_tucker
# a dense tensor to decompose
tensor = random.random_tensor((10, 10, 10))
# decompositions are one-liners:
weights, factors = parafac(tensor, rank=5)
core, factors = tucker(tensor, rank=[5, 5, 5])
core, factors = non_negative_tucker(tensor, rank=[5, 5, 5])
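To check the quality of a decomposition, you can reconstruct the dense tensor and compute a relative error (a quick sketch using tl.tucker_to_tensor and tl.norm):
# Reconstruct the full tensor from the (non-negative) Tucker factors above
reconstruction = tl.tucker_to_tensor((core, factors))
rel_error = tl.norm(reconstruction - tensor) / tl.norm(tensor)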
1.6. Tensor regressions
Located in the tensorly.regression module, tensor regression methods are objects with a scikit-learn-like API: a fit method for optimising the parameters and a predict method for applying the model to new, unseen data.
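A minimal sketch, assuming the CPRegressor estimator from tensorly.regression.cp_regression and synthetic data (the exact estimator names and parameters may differ across versions):
from tensorly import random
from tensorly.regression.cp_regression import CPRegressor

X = random.random_tensor((100, 8, 8))  # 100 samples, each an 8x8 covariate tensor
y = random.random_tensor((100,))       # one scalar response per sample

# Low-rank regression: the weight tensor is constrained to CP rank 2 (assumed API)
estimator = CPRegressor(weight_rank=2, verbose=0)
estimator.fit(X, y)
y_pred = estimator.predict(X)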
1.7. Metrics
Whether you are training a tensor regression method or combining deep learning and tensor methods,
you will need metrics to train and assess your method.
These are implemented in the tensorly.metrics module.
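For instance, the regression metrics include a root mean squared error (a brief sketch, assuming the RMSE function in tensorly.metrics.regression):
from tensorly.metrics.regression import RMSE

y_true = tl.tensor([1.0, 2.0, 3.0])
y_pred = tl.tensor([1.1, 2.0, 2.9])
print(RMSE(y_true, y_pred))  # root mean squared error between the two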
1.8. Sampling random tensors
To create random tensors, you can use the tensorly.random module.
For instance:
from tensorly import random
# full tensor
tensor = random.random_tensor((3, 4, 5))
# CP tensor
tensor = random.random_cp(shape=(3, 4, 5), rank=3)
# A full tensor with a low-rank CP structure
tensor = random.random_cp(shape=(3, 4, 5), rank=3, full=True)
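Without full=True, random_cp returns a CPTensor, which unpacks into the weights and the list of factor matrices:
weights, factors = random.random_cp(shape=(3, 4, 5), rank=3)
print([f.shape for f in factors])  # [(3, 3), (4, 3), (5, 3)]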
1.9. Experimental features
The module tensorly.contrib contains experimental features. These are fully tested features, completely integrated into TensorLy, but for which the API or implementation might still change.
Currently, this includes tensor-train cross approximation, as well as various sparse tensor decompositions (using PyData sparse structures).
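As a hedged sketch of the cross approximation (assuming the tensor_train_cross function in tensorly.contrib.decomposition; being experimental, its exact signature may change):
from tensorly import random
from tensorly.contrib.decomposition import tensor_train_cross

tensor = random.random_tensor((10, 10, 10))
# Build a tensor-train approximation by sampling entries (cross approximation);
# the rank list gives the TT-ranks, with boundary ranks fixed to 1 (assumed API)
tt_factors = tensor_train_cross(tensor, rank=[1, 4, 4, 1])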
1.10. Datasets
The tensorly.datasets module contains utility functions for loading and creating data for testing tensor methods.
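For example, a synthetic binary image such as is often used as a ground-truth weight in tensor regression experiments (a sketch assuming the gen_image helper in tensorly.datasets.synthetic):
from tensorly.datasets.synthetic import gen_image

# A 20x20 binary image (here a 'swiss' cross pattern) -- assumed helper and arguments
weight_img = gen_image(region='swiss', image_height=20, image_width=20)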