tltorch._tensor_lasso.TuckerL1Regularizer

class tltorch._tensor_lasso.TuckerL1Regularizer(penalty=0.01, clamp_weights=True, threshold=1e-06, normalize_loss=True)[source]

Decomposition Hook for Tensor Lasso on Tucker tensors.
Applies a generalized Lasso (l1 regularization) to the tensor layers it is applied to.

Parameters:
- penalty : float, default is 0.01
  scaling factor for the loss
- clamp_weights : bool, default is True
  if True, the lasso weights are clamped between -1 and 1
- threshold : float, default is 1e-6
  if a lasso weight is lower than the set threshold, it is set to 0
- normalize_loss : bool, default is True
  if True, the loss will be between 0 and 1; otherwise, the raw sum of absolute weights is returned
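To make the parameter interactions concrete, here is a minimal plain-Python sketch (an illustration, not the tltorch implementation) of how clamping, thresholding, normalization, and the penalty factor could combine into an l1 loss over a set of lasso weights:

```python
def lasso_l1_loss(weights, penalty=0.01, clamp_weights=True,
                  threshold=1e-6, normalize_loss=True):
    """Illustrative l1 lasso loss over a flat list of lasso weights."""
    processed = []
    for w in weights:
        if clamp_weights:
            w = max(-1.0, min(1.0, w))   # clamp each weight to [-1, 1]
        if abs(w) < threshold:
            w = 0.0                      # prune weights below the threshold
        processed.append(w)
    total = sum(abs(w) for w in processed)
    if normalize_loss and processed:
        total /= len(processed)          # with clamping, this keeps the loss in [0, 1]
    return penalty * total

# With penalty=1.0: clamp gives [0.5, -1.0, 1e-8], threshold zeroes the
# last entry, sum of |.| is 1.5, normalizing by 3 gives 0.5.
lasso_l1_loss([0.5, -2.0, 1e-8], penalty=1.0)
```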
Examples
First you need to create an instance of the regularizer:
>>> regularizer = TuckerL1Regularizer(penalty=penalty)
You can apply the regularizer to one or several layers:
>>> trl = TRL((5, 5), (5, 5), rank='same')
>>> trl2 = TRL((5, 5), (2, ), rank='same')
>>> regularizer.apply(trl)
>>> regularizer.apply(trl2)
The lasso is automatically applied:
>>> x = trl(x)
>>> pred = trl2(x)
>>> loss = your_loss_function(pred)
Add the Lasso loss:
>>> loss = loss + regularizer.loss
You can now backpropagate through your loss as usual:
>>> loss.backward()
After you finish updating the weights, don’t forget to reset the regularizer, otherwise it will keep accumulating values!
>>> regularizer.reset()
You can also remove the regularizer with regularizer.remove(trl).
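The accumulate-then-reset pattern above can be sketched in plain Python (a toy illustration, not the tltorch implementation): each forward pass of a registered layer adds an l1 term to a running loss, and `reset()` clears it so values do not carry over between iterations.

```python
class ToyL1Accumulator:
    """Toy stand-in for the regularizer's accumulate/reset cycle."""

    def __init__(self, penalty=0.01):
        self.penalty = penalty
        self._loss = 0.0

    def __call__(self, weights):
        # Called once per forward pass of each registered layer:
        # accumulate the penalized sum of absolute weights.
        self._loss += self.penalty * sum(abs(w) for w in weights)

    @property
    def loss(self):
        return self._loss

    def reset(self):
        # Must run after each weight update, otherwise the
        # loss keeps accumulating across iterations.
        self._loss = 0.0

reg = ToyL1Accumulator(penalty=0.1)
reg([1.0, -2.0])   # first layer's forward pass
reg([0.5])         # second layer's forward pass
total = reg.loss   # 0.1 * (3.0 + 0.5) = 0.35
reg.reset()        # reg.loss is 0.0 again
```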
Attributes:
- loss
  Returns the current Lasso (l1) loss for the layers that have been called so far.
Methods

__call__(module, tucker_tensor)
  Call self as a function.
apply(module)
  Apply an instance of the L1Regularizer to a tensor module.
apply_lasso(tucker_tensor, lasso_weights)
  Applies the lasso to a decomposed tensor.
remove(module)
  Remove the Regularization from a module.
reset()
  Reset the loss; should be called at the end of each iteration.
reset()[source]
  Reset the loss; should be called at the end of each iteration.
property loss
  Returns the current Lasso (l1) loss for the layers that have been called so far.

  Returns:
  - float
    l1 regularization on the tensor layers the regularization has been applied to.
apply_lasso(tucker_tensor, lasso_weights)[source]
  Applies the lasso to a decomposed tensor.
apply(module)[source]
  Apply an instance of the L1Regularizer to a tensor module.

  Parameters:
  - module : TensorModule
    module on which to add the regularization

  Returns:
  - TensorModule (with Regularization hook)
remove(module)[source]
  Remove the Regularization from a module.