tensorly.matmul
- matmul(x1, x2, /, out=None, *, casting='same_kind', order='K', dtype=None, subok=True[, signature, axes, axis])
Matrix product of two arrays.
- Parameters:
- x1, x2 : array_like
Input arrays, scalars not allowed.
- out : ndarray, optional
A location into which the result is stored. If provided, it must have a shape that matches the signature (n,k),(k,m)->(n,m). If not provided or None, a freshly-allocated array is returned. (A usage sketch follows this parameter list.)
- **kwargs
For other keyword-only arguments, see the ufunc docs.
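A minimal sketch of the out parameter (assuming the NumPy backend; the array values here are arbitrary, and the buffer must have the result's shape and a compatible dtype):
>>> import numpy as np
>>> a = np.array([[1., 0.], [0., 1.]])
>>> b = np.array([[4., 1.], [2., 2.]])
>>> buf = np.empty((2, 2))        # pre-allocated (n, m) result buffer
>>> np.matmul(a, b, out=buf)      # result is written into buf and also returned
array([[4., 1.],
       [2., 2.]])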
- Returns:
- y : ndarray
The matrix product of the inputs. This is a scalar only when both x1, x2 are 1-d vectors.
- Raises:
- ValueError
If the last dimension of x1 is not the same size as the second-to-last dimension of x2.
If a scalar value is passed in.
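For example, a hedged sketch of the shape-mismatch case (assuming NumPy's matmul; the exact error message may differ between versions):
>>> import numpy as np
>>> np.matmul(np.ones((2, 3)), np.ones((4, 5)))   # inner dimensions 3 and 4 do not match
Traceback (most recent call last):
...
ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, ...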
See also
vecdot
Complex-conjugating dot product for stacks of vectors.
matvec
Matrix-vector product for stacks of matrices and vectors.
vecmat
Vector-matrix product for stacks of vectors and matrices.
tensordot
Sum products over arbitrary axes.
einsum
Einstein summation convention.
dot
Alternative matrix product with different broadcasting rules.
Notes
The behavior depends on the arguments in the following way.
If both arguments are 2-D they are multiplied like conventional matrices.
If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed. (For stacks of vectors, use vecmat.)
If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed. (For stacks of vectors, use matvec.)
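A short sketch of these promotion rules (assuming np.matmul; the ones-filled arrays are arbitrary placeholders):
>>> import numpy as np
>>> A = np.ones((2, 3, 4))     # stack of two 3x4 matrices
>>> v = np.ones(4)             # 1-D second argument: treated as (4, 1), then the appended 1 is removed
>>> np.matmul(A, v).shape
(2, 3)
>>> w = np.ones(3)             # 1-D first argument: treated as (1, 3), then the prepended 1 is removed
>>> np.matmul(w, np.ones((3, 4))).shape
(4,)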
matmul differs from dot in two important ways:
Multiplication by scalars is not allowed; use * instead.
Stacks of matrices are broadcast together as if the matrices were elements, respecting the signature (n,k),(k,m)->(n,m):
>>> a = np.ones([9, 5, 7, 4])
>>> c = np.ones([9, 5, 4, 3])
>>> np.dot(a, c).shape
(9, 5, 7, 9, 5, 3)
>>> np.matmul(a, c).shape
(9, 5, 7, 3)
>>> # n is 7, k is 4, m is 3
The matmul function implements the semantics of the @ operator introduced in Python 3.5 following PEP 465.
It uses an optimized BLAS library when possible (see numpy.linalg).
Examples
For 2-D arrays it is the matrix product:
>>> import numpy as np
>>> a = np.array([[1, 0],
...               [0, 1]])
>>> b = np.array([[4, 1],
...               [2, 2]])
>>> np.matmul(a, b)
array([[4, 1],
       [2, 2]])
For 2-D mixed with 1-D, the result is the usual matrix-vector product.
>>> a = np.array([[1, 0],
...               [0, 1]])
>>> b = np.array([1, 2])
>>> np.matmul(a, b)
array([1, 2])
>>> np.matmul(b, a)
array([1, 2])
Broadcasting is conventional for stacks of arrays:
>>> a = np.arange(2 * 2 * 4).reshape((2, 2, 4))
>>> b = np.arange(2 * 2 * 4).reshape((2, 4, 2))
>>> np.matmul(a, b).shape
(2, 2, 2)
>>> np.matmul(a, b)[0, 1, 1]
98
>>> sum(a[0, 1, :] * b[0, :, 1])
98
Vector, vector returns the scalar inner product, but neither argument is complex-conjugated:
>>> np.matmul([2j, 3j], [2j, 3j])
(-13+0j)
Scalar multiplication raises an error.
>>> np.matmul([1, 2], 3)
Traceback (most recent call last):
...
ValueError: matmul: Input operand 1 does not have enough dimensions ...
The @ operator can be used as a shorthand for np.matmul on ndarrays.
>>> x1 = np.array([2j, 3j])
>>> x2 = np.array([2j, 3j])
>>> x1 @ x2
(-13+0j)