tltorch.factorized_layers.TRL

class tltorch.factorized_layers.TRL(input_shape, output_shape, bias=False, verbose=0, factorization='cp', rank='same', n_layers=1, device=None, dtype=None, **kwargs)

Tensor Regression Layers

Parameters:
input_shape : int iterable

shape of the input, excluding batch size

output_shape : int iterable

shape of the output, excluding batch size

verbose : int, default is 0

level of verbosity

References

[1]

Tensor Regression Networks, Jean Kossaifi, Zachary C. Lipton, Arinbjorn Kolbeinsson, Aran Khanna, Tommaso Furlanello, Anima Anandkumar, JMLR, 2020.
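A TRL regresses an input activation tensor against a weight tensor of shape input_shape + output_shape, contracting over all input modes, with the weight stored in factorized form. The following numpy sketch illustrates the idea for a CP-factorized weight (hypothetical shapes and rank; this is not the tltorch implementation, which never needs to form the full weight either):

```python
import numpy as np

rng = np.random.default_rng(0)

# Input of shape (batch, I1, I2), output of shape (batch, O).
batch, I1, I2, O, R = 4, 8, 8, 10, 5
x = rng.standard_normal((batch, I1, I2))

# CP factors of the regression weight W of shape (I1, I2, O):
# W = sum_r U1[:, r] (outer) U2[:, r] (outer) V[:, r]
U1 = rng.standard_normal((I1, R))
U2 = rng.standard_normal((I2, R))
V = rng.standard_normal((O, R))

# Forward pass without ever materializing W:
# contract x with each input-mode factor, then expand along the output mode.
t = np.einsum('bij,ir,jr->br', x, U1, U2)  # shape (batch, R)
y = t @ V.T                                # shape (batch, O)

# Sanity check: the same result via the full weight tensor.
W = np.einsum('ir,jr,or->ijo', U1, U2, V)
y_full = np.einsum('bij,ijo->bo', x, W)
assert np.allclose(y, y_full)
```

Contracting factor by factor is what makes the layer cheap: the full weight, with prod(input_shape) * prod(output_shape) entries, is never built.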

Methods

forward(x)

Performs a forward pass

init_from_linear(linear[, unsqueezed_modes])

Initialise the TRL from the weights of a fully connected layer

init_from_random([decompose_full_weight])

Initialize the module randomly

forward(x)

Performs a forward pass

init_from_random(decompose_full_weight=False)

Initialize the module randomly

Parameters:
decompose_full_weight : bool, default is False

if True, a full weight tensor is first constructed and then decomposed to initialize the factors; otherwise, the factors are directly initialized randomly
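The difference between the two initialization modes can be sketched in plain numpy. For illustration only, the "decompose" path below uses an exact full-rank Tucker decomposition (HOSVD) so the factors reproduce the drawn tensor; the actual decomposition, factorization type, and ranks used by tltorch will differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# decompose_full_weight=True (conceptually): draw a full random weight
# tensor, then decompose it so the factors represent that specific tensor.
W = rng.standard_normal((4, 5, 6))

# Exact full-rank Tucker decomposition via HOSVD, numpy only.
factors = []
core = W
for mode in range(W.ndim):
    unfolding = np.moveaxis(W, mode, 0).reshape(W.shape[mode], -1)
    U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
    factors.append(U)
    # Project the core along this mode: core = core x_mode U^T
    core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)

# Reconstruct: W = core x_1 U1 x_2 U2 x_3 U3 (exact at full rank).
rec = core
for mode, U in enumerate(factors):
    rec = np.moveaxis(np.tensordot(U, np.moveaxis(rec, mode, 0), axes=1), 0, mode)
assert np.allclose(rec, W)

# decompose_full_weight=False (the default) skips all of the above:
# the core and factors are simply drawn randomly, which is cheaper but
# does not correspond to any particular full weight drawn beforehand.
```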

init_from_linear(linear, unsqueezed_modes=None, **kwargs)

Initialise the TRL from the weights of a fully connected layer

Parameters:
linear : torch.nn.Linear
unsqueezed_modes : int list or None

For a Tucker factorization, this allows pooling layers to be replaced: the average pooling is instead learned over the specified modes ("unsqueezed_modes"). Valid for factorization='Tucker' only.
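The identity underlying this initialisation can be sketched in numpy (hypothetical shapes; torch.nn.Linear stores its weight as (out_features, in_features)): reshaping the transposed dense weight matrix into a tensor of shape input_shape + output_shape gives a full TRL weight whose contraction reproduces the dense layer's output:

```python
import numpy as np

rng = np.random.default_rng(0)

# A dense layer mapping flattened (4, 5) inputs to flattened (3, 2) outputs.
in_shape, out_shape = (4, 5), (3, 2)
weight = rng.standard_normal((int(np.prod(out_shape)), int(np.prod(in_shape))))  # (6, 20)

x = rng.standard_normal((7,) + in_shape)  # batch of 7 unflattened inputs

# Dense forward on the flattened input: y = x_flat @ weight^T
y_dense = x.reshape(7, -1) @ weight.T  # shape (7, 6)

# Equivalent full TRL weight tensor of shape in_shape + out_shape:
W = weight.T.reshape(in_shape + out_shape)  # shape (4, 5, 3, 2)
y_trl = np.einsum('bij,ijkl->bkl', x, W)    # shape (7, 3, 2)

# Flattening the TRL output recovers the dense output exactly.
assert np.allclose(y_trl.reshape(7, -1), y_dense)
```

In tltorch, this full tensor would then be decomposed into the layer's chosen factorization rather than stored densely.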