API reference

tltorch: Tensorized Deep Neural Networks

Factorized Tensors

TensorLy-Torch builds on top of TensorLy and provides out-of-the-box PyTorch layers for tensor-based operations. At its core is the concept of factorized tensors: tensors stored and manipulated directly in factorized (decomposed) form, which parametrize our layers in place of regular, dense PyTorch tensors.

You can create any factorized tensor through the main class, or directly create a specific subclass:

FactorizedTensor(*args, **kwargs)

Tensor in Factorized form

CPTensor(*args, **kwargs)

CP Factorization

TuckerTensor(*args, **kwargs)

Tucker Factorization

TTTensor(*args, **kwargs)

Tensor-Train (Matrix-Product-State) Factorization
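
For example, a minimal sketch of both creation paths, using the FactorizedTensor.new factory (the shape, rank, and factorization values here are purely illustrative):

>>> import tltorch
>>> # Through the main class: a Tucker-factorized tensor of shape (5, 5, 5)
>>> t = tltorch.FactorizedTensor.new((5, 5, 5), rank=0.5, factorization='tucker')
>>> t.normal_(0, 0.02)  # random in-place initialization
>>> # Or directly through a specific subclass
>>> cp = tltorch.CPTensor.new((5, 5, 5), rank=10)
>>> dense = t.to_tensor()  # reconstruct the full, dense tensor when needed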

Tensorized Matrices

In TensorLy-Torch, you can also represent matrices in tensorized form, i.e. as low-rank, higher-order tensors.

TensorizedTensor(*args, **kwargs)

Matrix in Tensorized Format

CPTensorized(*args, **kwargs)

Tensorized matrix in CP form

BlockTT(*args, **kwargs)

Tensorized matrix in Block Tensor-Train (Block-TT) form
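
As a sketch (assuming the TensorizedTensor.new factory accepts a pair of tensorized shapes and a to_matrix reconstruction method; all values are illustrative), a 256×256 matrix could be stored in Block-TT form as:

>>> import tltorch
>>> # Tensorize a 256x256 matrix into a ((4,4,4,4), (4,4,4,4)) Block-TT tensor
>>> m = tltorch.TensorizedTensor.new(((4, 4, 4, 4), (4, 4, 4, 4)),
...                                  rank=0.5, factorization='blocktt')
>>> dense = m.to_matrix()  # reconstruct the full matrix when needed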

Initialization

Module for initializing tensor decompositions

tensor_init(tensor[, std])

Directly initializes the parameters of a factorized tensor so that the reconstruction has the specified standard deviation and a mean of 0

cp_init(cp_tensor[, std])

Directly initializes the weights and factors of a CP decomposition so that the reconstruction has the specified std and a mean of 0

tucker_init(tucker_tensor[, std])

Directly initializes the weights and factors of a Tucker decomposition so that the reconstruction has the specified std and a mean of 0

tt_init(tt_tensor[, std])

Directly initializes the weights and factors of a TT decomposition so that the reconstruction has the specified std and a mean of 0
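
For instance, a sketch assuming these initializers are exposed at the package level (the std value is illustrative):

>>> import tltorch
>>> t = tltorch.FactorizedTensor.new((5, 5, 5), rank=0.5, factorization='cp')
>>> tltorch.tensor_init(t, std=0.02)  # reconstruction now has std 0.02 and mean 0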

Tensor Regression Layers

TRL(input_shape, output_shape[, bias, ...])

Tensor Regression Layers
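
A minimal usage sketch (the shapes and the rank keyword are illustrative assumptions):

>>> import torch, tltorch
>>> # Regress a (batch, 4, 5, 6) activation tensor onto a single output
>>> trl = tltorch.TRL(input_shape=(4, 5, 6), output_shape=(1,), rank='same')
>>> x = torch.randn(8, 4, 5, 6)
>>> y = trl(x)  # shape (8, 1)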

Tensor Contraction Layers

TCL(input_shape, rank[, verbose, bias, ...])

Tensor Contraction Layer
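
A minimal usage sketch (shapes and ranks are illustrative):

>>> import torch, tltorch
>>> # Contract the three non-batch modes of the input down to rank (2, 3, 4)
>>> tcl = tltorch.TCL(input_shape=(4, 5, 6), rank=(2, 3, 4))
>>> x = torch.randn(8, 4, 5, 6)
>>> y = tcl(x)  # shape (8, 2, 3, 4)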

Factorized Linear Layers

FactorizedLinear(in_tensorized_features, ...)

Tensorized Fully-Connected Layers
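
A minimal usage sketch (the out_tensorized_features keyword and all values are illustrative assumptions):

>>> import torch, tltorch
>>> # A 16 -> 16 linear layer whose weight is stored as a factorized tensor
>>> fc = tltorch.FactorizedLinear(in_tensorized_features=(4, 4),
...                               out_tensorized_features=(4, 4),
...                               factorization='tucker', rank=0.5)
>>> y = fc(torch.randn(8, 16))  # shape (8, 16)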

Factorized Convolutions

General N-dimensional convolutions in factorized form

FactorizedConv(in_channels, out_channels, ...)

Create a factorized convolution of arbitrary order
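
A minimal usage sketch (the order keyword and all values are illustrative assumptions):

>>> import torch, tltorch
>>> # A 2D convolution (order=2) with a CP-factorized kernel
>>> conv = tltorch.FactorizedConv(in_channels=16, out_channels=32,
...                               kernel_size=3, order=2,
...                               factorization='cp', rank=0.5)
>>> y = conv(torch.randn(8, 16, 28, 28))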

Factorized Embeddings

A drop-in replacement for PyTorch's embedding layer, using an efficient tensor parametrization that never reconstructs the full embedding table.

FactorizedEmbedding(num_embeddings, ...[, ...])

Tensorized Embedding Layers for Efficient Model Compression: a tensorized drop-in replacement for torch.nn.Embedding.

Parameters:
- num_embeddings (int): number of entries in the lookup table
- embedding_dim (int): number of dimensions per entry
- auto_reshape (bool): whether to automatically reshape the embedding dimensions
- d (int or tuple of ints): number of reshape dimensions for both embedding table dimensions
- tensorized_num_embeddings (tuple of ints): tensorized shape of the first embedding table dimension
- tensorized_embedding_dim (tuple of ints): tensorized shape of the second embedding table dimension
- factorization (str): tensor factorization type
- rank (int, tuple of ints, or str): tensor rank
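
A minimal usage sketch (the factorization and rank values are illustrative):

>>> import torch, tltorch
>>> emb = tltorch.FactorizedEmbedding(num_embeddings=1000, embedding_dim=64,
...                                   factorization='blocktt', rank=8)
>>> tokens = torch.randint(0, 1000, (8, 12))
>>> out = emb(tokens)  # shape (8, 12, 64); the full table is never materialized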

Tensor Dropout

These functions allow you to easily add or remove tensor dropout from tensor layers.

tensor_dropout(factorized_tensor[, p, ...])

Tensor Dropout

remove_tensor_dropout(factorized_tensor)

Removes the tensor dropout from a TensorModule
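
A minimal usage sketch (assuming the functions attach dropout to, and detach it from, a layer's factorized weight; all values are illustrative):

>>> import tltorch
>>> conv = tltorch.FactorizedConv(in_channels=16, out_channels=32, kernel_size=3,
...                               order=2, factorization='cp', rank=0.5)
>>> drop_weight = tltorch.tensor_dropout(conv.weight, p=0.5)      # add tensor dropout
>>> plain_weight = tltorch.remove_tensor_dropout(drop_weight)     # remove it again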

You can also use the class API below, but unless you have a specific need for the classes, you should prefer the convenience functions above.

TensorDropout(proba[, min_dim, min_values, ...])

Decomposition Hook for Tensor Dropout on FactorizedTensor

L1 Regularization

L1 Regularization on tensor modules.

tensor_lasso([factorization, penalty, ...])

Generalized Tensor Lasso on factorized tensors

remove_tensor_lasso(factorized_tensor)

Removes the tensor lasso from a TensorModule
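
A hypothetical end-to-end sketch (the apply method, loss attribute, and reset method on the returned regularizer are assumptions, as are all values):

>>> import torch, tltorch
>>> lasso = tltorch.tensor_lasso(factorization='cp', penalty=0.01)
>>> trl = tltorch.TRL(input_shape=(4, 5, 6), output_shape=(1,), rank='same')
>>> lasso.apply(trl.weight)  # assumed: attach the lasso to a factorized weight
>>> y = trl(torch.randn(8, 4, 5, 6))
>>> # assumed: add the accumulated penalty to the training loss, then reset it
>>> # loss = criterion(y, target) + lasso.loss; lasso.reset()
>>> tltorch.remove_tensor_lasso(trl.weight)  # remove the lasso again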