tensorly.plugins.use_cuquantum

use_cuquantum(optimize='auto-hq')

Plugin to use cuQuantum to precompute (and cache) a better contraction path

Examples

>>> import tensorly as tl

Use your favourite backend, here PyTorch:

>>> tl.set_backend('pytorch')

Use the convenient backend system to automatically dispatch all tenalg operations to einsum:

>>> from tensorly import tenalg
>>> tenalg.set_backend('einsum')
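
The tenalg operations keep exactly the same interface; they are simply expressed through the backend's einsum under the hood. A quick sanity check, with illustrative shapes (with the PyTorch backend, shapes print as torch.Size):

>>> import numpy as np
>>> M = tl.tensor(np.random.random_sample((5, 4)))
>>> T = tl.tensor(np.random.random_sample((4, 3)))
>>> tenalg.mode_dot(T, M, mode=0).shape  # computed via the backend's einsum
torch.Size([5, 3])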

Now you can transparently cache the optimal contraction path:

>>> from tensorly import plugins
>>> plugins.use_cuquantum()

That’s it! Now opt-einsum will be used to find a (near) optimal contraction path, and cuQuantum will be used to actually perform the tensor contractions!
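
For example, any tenalg operation now goes through the cached cuQuantum contraction. A minimal sketch, assuming cuQuantum is installed and a CUDA-capable GPU is available; the shapes are illustrative, and device='cuda' is simply forwarded to PyTorch by the backend:

>>> import numpy as np
>>> tensor = tl.tensor(np.random.random_sample((16, 16, 16)), device='cuda')
>>> factors = [tl.tensor(np.random.random_sample((8, 16)), device='cuda') for _ in range(3)]
>>> core = tenalg.multi_mode_dot(tensor, factors, modes=[0, 1, 2])  # contracted with cuQuantum
>>> core.shape
torch.Size([8, 8, 8])

Since the contraction path is cached, repeated calls with the same contraction pattern do not pay the path-optimization cost again.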

You can revert to the original einsum just as easily:

>>> plugins.use_default_einsum()

Revert to the original tensor algebra backend:

>>> tenalg.set_backend('core')