API reference

tltorch: Tensorized Deep Neural Networks

Factorized Tensors

TensorLy-Torch builds on top of TensorLy and provides out-of-the-box PyTorch layers for tensor-based operations. At its core is the concept of factorized tensors: layers are parametrized by tensors in factorized (low-rank) form instead of regular, dense PyTorch tensors.

You can create any factorized tensor through the main class using:

FactorizedTensor(*args, **kwargs)

Tensor in Factorized form

You can create a tensor in any form using FactorizedTensor.new(shape, rank, factorization), where factorization can be Dense, CP, Tucker or TT. Note that factorization='dense' simply creates a regular, unfactorized tensor. This lets you manipulate any tensor, factorized or not, through a simple, unified interface.
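For example, a minimal sketch (the shape, rank and std values are purely illustrative):

    import tltorch

    # A third-order tensor in Tucker form; rank=0.5 asks for roughly half
    # the parameters of the dense equivalent.
    tensor = tltorch.FactorizedTensor.new((5, 5, 5), rank=0.5, factorization='Tucker')
    tensor.normal_(0, 0.02)    # initialize the factors in-place
    full = tensor.to_tensor()  # reconstruct the full, dense tensor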

Alternatively, you can also directly create a specific subclass:

DenseTensor(*args, **kwargs)

Dense tensor

CPTensor(*args, **kwargs)

CP Factorization

TuckerTensor(*args, **kwargs)

Tucker Factorization

TTTensor(*args, **kwargs)

Tensor-Train (Matrix-Product-State) Factorization

Tensorized Matrices

In TensorLy-Torch, you can also represent matrices in tensorized form, as low-rank tensors.

Just as for factorized tensors, you can create a tensorized matrix through the main class using:

TensorizedTensor(*args, **kwargs)

Matrix in Tensorized Format

You can create a tensorized matrix in any form using TensorizedTensor.new(tensorized_shape, rank, factorization), where factorization can be Dense, CP, Tucker or BlockTT.
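For example, a minimal sketch (the tensorized shape and rank are illustrative): a 16x16 matrix stored in Block-TT form, with each of its two modes tensorized into two factors of size 4:

    import tltorch

    matrix = tltorch.TensorizedTensor.new(((4, 4), (4, 4)), rank=0.5,
                                          factorization='BlockTT')
    matrix.normal_(0, 0.02)    # initialize the factors in-place
    full = matrix.to_matrix()  # reconstruct the full 16x16 matrix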

You can also explicitly create the type of tensor you want using the following classes:

DenseTensorized(*args, **kwargs)

Dense Tensorized Format

TuckerTensorized(*args, **kwargs)

Tucker Tensorized Format

CPTensorized(*args, **kwargs)

CP Tensorized Format

BlockTT(*args, **kwargs)

Block Tensor-Train (Block-TT) Tensorized Format
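For instance, the Block-TT matrix above could equivalently be created from the subclass directly (a sketch, assuming the subclasses expose the same .new constructor as the main class):

    import tltorch

    # Same 16x16 Block-TT matrix, with the factorization fixed by the class
    matrix = tltorch.BlockTT.new(((4, 4), (4, 4)), rank=0.5)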

Complex Tensors

In theory, you can simply specify dtype=torch.cfloat when creating any of the tensors or tensorized matrices above to automatically get a complex-valued tensor. In practice, however, complex support in PyTorch still has gaps; Distributed Data Parallelism, in particular, is not supported.

In TensorLy-Torch, we provide a convenient and transparent way around this: simply use a ComplexTensor instead. This stores the factors of the decomposition in real form (by explicitly storing the real and imaginary parts) but transparently returns a complex-valued tensor upon reconstruction.

ComplexDenseTensor(*args, **kwargs)

Complex Dense Factorization

ComplexCPTensor(*args, **kwargs)

Complex CP Factorization

ComplexTuckerTensor(*args, **kwargs)

Complex Tucker Factorization

ComplexTTTensor(*args, **kwargs)

Complex TT Factorization

ComplexDenseTensorized(*args, **kwargs)

Complex DenseTensorized Factorization

ComplexTuckerTensorized(*args, **kwargs)

Complex TuckerTensorized Factorization

ComplexCPTensorized(*args, **kwargs)

Complex Tensorized CP Factorization

ComplexBlockTT(*args, **kwargs)

Complex BlockTT Factorization

You can also transparently instantiate any of these directly through the main classes, TensorizedTensor or FactorizedTensor, by specifying factorization="ComplexCP" or, in general, "ComplexFactorization", where Factorization is any of the supported decompositions.
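For example (a sketch; the shape and rank are illustrative):

    import tltorch

    # Same unified interface as above, but complex-valued: the factors are
    # stored as real tensors, and reconstruction returns a complex tensor.
    tensor = tltorch.FactorizedTensor.new((5, 5, 5), rank=0.5,
                                          factorization='ComplexCP')
    full = tensor.to_tensor()  # complex-valued reconstruction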

Initialization

Initialization is particularly important in the context of deep learning. We provide convenient functions to directly initialize factorized tensors (i.e., their factors) such that their reconstruction approximately follows a centered Gaussian distribution.

tensor_init(tensor[, std])

Initializes directly the parameters of a factorized tensor so the reconstruction has the specified standard deviation and 0 mean

cp_init(cp_tensor[, std])

Initializes directly the weights and factors of a CP decomposition so the reconstruction has the specified std and 0 mean

tucker_init(tucker_tensor[, std])

Initializes directly the weights and factors of a Tucker decomposition so the reconstruction has the specified std and 0 mean

tt_init(tt_tensor[, std])

Initializes directly the weights and factors of a TT decomposition so the reconstruction has the specified std and 0 mean

block_tt_init(block_tt[, std])

Initializes directly the weights and factors of a BlockTT decomposition so the reconstruction has the specified std and 0 mean
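For example, a minimal sketch (the std value is illustrative, and we assume tensor_init is importable from the top-level tltorch namespace; adjust the import if it lives in a submodule such as tltorch.init):

    import tltorch

    tensor = tltorch.FactorizedTensor.new((5, 5, 5), rank=0.5, factorization='CP')
    # Initialize the factors so that the *reconstruction* has zero mean
    # and standard deviation 0.02
    tltorch.tensor_init(tensor, std=0.02)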

Tensor Regression Layers

TRL(input_shape, output_shape[, bias, ...])

Tensor Regression Layers
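For example, a minimal sketch (all argument values are illustrative):

    import torch
    import tltorch

    # Regress activations of shape (batch, 4, 5, 6) onto 2 outputs,
    # with a low-rank Tucker weight tensor
    trl = tltorch.TRL(input_shape=(4, 5, 6), output_shape=(2,),
                      factorization='tucker', rank=0.1)
    x = torch.randn(8, 4, 5, 6)
    y = trl(x)  # shape (8, 2)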

Tensor Contraction Layers

TCL(input_shape, rank[, verbose, bias, ...])

Tensor Contraction Layer
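For example, a minimal sketch (argument values are illustrative):

    import torch
    import tltorch

    # Contract each input mode down to a smaller dimension:
    # (batch, 4, 5, 6) -> (batch, 2, 3, 4)
    tcl = tltorch.TCL(input_shape=(4, 5, 6), rank=(2, 3, 4))
    x = torch.randn(8, 4, 5, 6)
    y = tcl(x)  # shape (8, 2, 3, 4)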

Factorized Linear Layers

FactorizedLinear(in_tensorized_features, ...)

Tensorized Fully-Connected Layers
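For example, a minimal sketch (the tensorized shapes and rank are illustrative):

    import torch
    import tltorch

    # A 16 -> 64 linear layer, with 16 tensorized as 4*4 and 64 as 8*8
    fc = tltorch.FactorizedLinear(in_tensorized_features=(4, 4),
                                  out_tensorized_features=(8, 8),
                                  factorization='blocktt', rank=0.1)
    x = torch.randn(8, 16)
    y = fc(x)  # shape (8, 64)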

Factorized Convolutions

General N-dimensional convolutions in factorized form

FactorizedConv(in_channels, out_channels, ...)

Create a factorized convolution of arbitrary order
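For example, a minimal sketch of a 2D convolution (argument values are illustrative):

    import torch
    import tltorch

    # order=2 selects a 2D convolution; the kernel is stored in CP form
    conv = tltorch.FactorizedConv(in_channels=16, out_channels=32,
                                  kernel_size=3, order=2,
                                  factorization='cp', rank=0.5)
    x = torch.randn(8, 16, 28, 28)
    y = conv(x)  # shape (8, 32, 26, 26) with the default stride and padding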

Factorized Embeddings

A drop-in replacement for PyTorch’s embeddings, using an efficient tensor parametrization that never reconstructs the full table.

FactorizedEmbedding(num_embeddings, ...[, ...])

Tensorized Embedding Layers for Efficient Model Compression: a tensorized drop-in replacement for torch.nn.Embedding
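For example, a minimal sketch (vocabulary size, embedding dimension and rank are illustrative):

    import torch
    import tltorch

    emb = tltorch.FactorizedEmbedding(num_embeddings=1000, embedding_dim=64,
                                      factorization='blocktt', rank=8)
    tokens = torch.randint(0, 1000, (8, 12))
    vectors = emb(tokens)  # shape (8, 12, 64), without reconstructing the table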

Tensor Dropout

These functions allow you to easily add or remove tensor dropout from tensor layers.

tensor_dropout(factorized_tensor[, p, ...])

Tensor Dropout

remove_tensor_dropout(factorized_tensor)

Removes the tensor dropout from a TensorModule
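For example, a minimal sketch (p=0.5 is illustrative):

    import tltorch

    tensor = tltorch.FactorizedTensor.new((3, 4, 2), rank=0.5, factorization='CP')
    tensor = tltorch.tensor_dropout(tensor, p=0.5)  # attach tensor dropout
    # ... train ...
    tensor = tltorch.remove_tensor_dropout(tensor)  # detach it, e.g. for inference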

You can also use the class API below, but unless you have a particular reason to use the classes, you should prefer the convenience functions above.

TensorDropout(proba[, min_dim, min_values, ...])

Decomposition Hook for Tensor Dropout on FactorizedTensor

L1 Regularization

L1 Regularization on tensor modules.

tensor_lasso([factorization, penalty, ...])

Generalized Tensor Lasso on factorized tensors

remove_tensor_lasso(factorized_tensor)

Removes the tensor lasso from a TensorModule
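For example, a minimal sketch (the penalty value is illustrative; the apply() method and loss attribute shown here are assumptions based on the hook-based design, not verified signatures):

    import tltorch

    lasso = tltorch.tensor_lasso(factorization='CP', penalty=0.01)
    tensor = tltorch.FactorizedTensor.new((3, 4, 2), rank=0.5, factorization='CP')
    lasso.apply(tensor)  # assumed: hook the lasso onto the tensor's factors
    # During training, add the accumulated penalty to the task loss, e.g.:
    # loss = task_loss + lasso.loss  # assumed attribute
    tltorch.remove_tensor_lasso(tensor)  # detach the regularizer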

Utilities

Utility functions

get_tensorized_shape(in_features, out_features)

Factorizes in_features and out_features such that:

- both are factorized into the same number of integers
- both are factorized into order integers
- each of the factors is at least min_dim
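For example (a sketch; we assume the function is importable from the top-level namespace, and the exact factorization returned depends on the heuristic):

    import tltorch

    # Factorize 784 and 1024 into same-length tuples of factors >= min_dim
    in_shape, out_shape = tltorch.get_tensorized_shape(in_features=784,
                                                       out_features=1024,
                                                       min_dim=4)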