Matrix multiplication was first described in 1812 by the French mathematician Jacques Philippe Marie Binet,[2] to represent the composition of the linear maps that matrices represent. Matrix multiplication is therefore a fundamental tool of linear algebra and, as such, has numerous applications in many areas of mathematics, as well as in applied mathematics, statistics, physics, economics, and engineering.[3][4] The computation of matrix products is a central operation in all computer applications of linear algebra.

If A is an m × n matrix and B is an n × p matrix, the product AB is defined and is an m × p matrix. Matrix addition, in contrast, is defined entry-wise for matrices of the same shape; for example,

[A + B = \begin{pmatrix} 1 & -5 & 4 \\ 2 & 5 & 3 \end{pmatrix} + \begin{pmatrix} 8 & -3 & -4 \\ 4 & -2 & 9 \end{pmatrix} = \begin{pmatrix} 1+8 & -5-3 & 4-4 \\ 2+4 & 5-2 & 3+9 \end{pmatrix} = \begin{pmatrix} 9 & -8 & 0 \\ 6 & 3 & 12 \end{pmatrix}]

If we change the order of multiplication, the answer is (usually) different. The definition of the matrix product requires that the entries belong to a semiring, and it does not require that multiplication in the semiring be commutative. In many applications the matrix entries belong to a field, although the tropical semiring is also a common choice for shortest-path problems.[13] Even for matrices over a field, the product is generally not commutative, although it is associative and distributive over matrix addition. Identity matrices (that is, square matrices whose entries are 1 on the main diagonal and 0 elsewhere) are identity elements of the matrix product. It follows that the n × n matrices over a ring themselves form a ring, which is noncommutative unless n = 1 and the underlying ring is commutative.

To transpose a matrix, simply interchange its rows and columns. The transpose of (A) can be denoted (A′) or (A^T). Transposition is a common operation in programming languages such as C, Java, etc.
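The entry-wise addition and the transpose described above can be sketched in a few lines of plain Python (the function names `mat_add` and `transpose` are our own, not from the text), using the matrices from the example:

```python
def mat_add(A, B):
    """Entry-wise sum of two matrices of the same shape."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def transpose(A):
    """Interchange rows and columns: column j of A becomes row j."""
    return [list(col) for col in zip(*A)]

A = [[1, -5, 4], [2, 5, 3]]
B = [[8, -3, -4], [4, -2, 9]]

print(mat_add(A, B))   # [[9, -8, 0], [6, 3, 12]]
print(transpose(A))    # [[1, 2], [-5, 5], [4, 3]]
```

This is only a sketch; in practice a library such as NumPy would be used for anything beyond toy sizes.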

The most common small cases are 2×2, 3×3, and 4×4 matrix multiplication. If

[A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 3 & 2 \\ 1 & 4 \end{bmatrix}]

are two matrices, then both products AB and BA are defined, but they differ:

[B \times A = \begin{bmatrix} 3 & 2 \\ 1 & 4 \end{bmatrix} \times \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}]

For matrices with complex entries, conjugation distributes over both operations: the conjugate of a sum is the sum of the conjugates of the summands, and the conjugate of a product is the product of the conjugates of the factors.

We can define the multiplication of a matrix by a scalar as follows: if A is a matrix and c is a scalar, then the matrices cA and Ac are obtained by multiplying every entry of A on the left or on the right by c. If the scalars are commutative, then cA = Ac. More generally, any bilinear form on a finite-dimensional vector space can be expressed as a matrix product.

Surprisingly, the O(n³) complexity of the straightforward algorithm is not optimal, as Volker Strassen showed in 1969 with an algorithm, now called Strassen's algorithm, of complexity O(n^{\log_2 7}) ≈ O(n^{2.8074}).[14] Strassen's algorithm can be parallelized to further improve performance.
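The non-commutativity of the 2×2 product above can be checked directly. A minimal sketch in plain Python (the function name `mat_mul` is our own, not from the text):

```python
def mat_mul(A, B):
    """Naive O(n^3) product: entry (i, j) is the dot product of
    row i of A with column j of B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[3, 2], [1, 4]]

print(mat_mul(A, B))  # [[5, 10], [13, 22]]
print(mat_mul(B, A))  # [[9, 14], [13, 18]]  (so AB != BA)
```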

As of December 2020, the best matrix multiplication algorithm is due to Josh Alman and Virginia Vassilevska Williams and has complexity O(n^{2.3728596}).[15] It is not known whether matrix multiplication can be performed in n^{2+o(1)} time. That would be optimal, because one must read the n² entries of a matrix to multiply it by another matrix.

The order in which a product of several matrices is parenthesized can greatly affect the cost. For example, if A, B, and C are matrices of respective sizes 10×30, 30×5, and 5×60, computing (AB)C requires 10×30×5 + 10×5×60 = 4,500 scalar multiplications, while computing A(BC) requires 30×5×60 + 10×30×60 = 27,000. In recent years there has been a great deal of work on matrix multiplication algorithms, as the operation has found applications in many areas.

If A, B, and C are three matrices, the associative property of matrix multiplication states that (AB)C = A(BC).

If a vector space has a finite basis, each of its vectors is uniquely represented by a finite sequence of scalars, called a coordinate vector, whose elements are the coordinates of the vector with respect to the basis. These coordinate vectors form another vector space, isomorphic to the original one. A coordinate vector is usually organized as a column matrix (also called a column vector), which is a matrix with a single column. A column vector thus represents both a coordinate vector and a vector of the original vector space.
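The parenthesization costs above follow from the fact that multiplying an m × n matrix by an n × p matrix takes m·n·p scalar multiplications. A quick check of the arithmetic:

```python
# A is 10x30, B is 30x5, C is 5x60.
# (AB) costs 10*30*5, and the result (10x5) times C costs 10*5*60.
cost_AB_then_C = 10 * 30 * 5 + 10 * 5 * 60   # 1500 + 3000 = 4500
# (BC) costs 30*5*60, and A times the result (30x60) costs 10*30*60.
cost_BC_then_A = 30 * 5 * 60 + 10 * 30 * 60  # 9000 + 18000 = 27000

print(cost_AB_then_C, cost_BC_then_A)  # 4500 27000
```

Choosing the cheapest parenthesization in general is the classic matrix chain ordering problem, solvable by dynamic programming.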

Consider two matrices M1 and M2 of order m1 × n1 and m2 × n2, respectively. The product C = M1M2 is defined only if n1 = m2, and C then has order m1 × n2. A product of several matrices is defined and does not depend on the order in which the multiplications are performed, provided the order of the matrices themselves is fixed.

The dot product of two vectors with the same number of entries is the sum of the products of corresponding entries:

[x \cdot y = \begin{pmatrix} 1 & -5 & 4 \end{pmatrix} \cdot \begin{pmatrix} 4 & -2 & 5 \end{pmatrix} = 1 \cdot 4 + (-5) \cdot (-2) + 4 \cdot 5 = 4 + 10 + 20 = 34]

In M_n(R), the product is defined for every pair of matrices. This makes M_n(R) a ring, with the identity matrix I (the matrix whose diagonal entries are equal to 1 and all other entries are 0) as its identity element. This ring is also an associative R-algebra.

To multiply a matrix by a scalar, also called scalar multiplication, multiply each entry of the matrix by the scalar.

Each entry of a matrix product is the dot product of a row of the first factor with a column of the second. For example,

[C \cdot D = \begin{pmatrix} 3 & -9 & -8 \\ 2 & 4 & 3 \end{pmatrix} \cdot \begin{pmatrix} 7 & -3 \\ -2 & 3 \\ 6 & 2 \end{pmatrix} = \begin{pmatrix} 3 \cdot 7 + (-9)(-2) + (-8) \cdot 6 & 3 \cdot (-3) + (-9) \cdot 3 + (-8) \cdot 2 \\ 2 \cdot 7 + 4 \cdot (-2) + 3 \cdot 6 & 2 \cdot (-3) + 4 \cdot 3 + 3 \cdot 2 \end{pmatrix} = \begin{pmatrix} -9 & -52 \\ 24 & 12 \end{pmatrix}]

Matrix multiplication shares some properties with ordinary multiplication. However, the matrix product is not defined if the number of columns of the first factor differs from the number of rows of the second factor, and it is not commutative,[10] even when the product remains defined after changing the order of the factors.[11][12] Matrix algebra comprises operations on matrices such as addition, subtraction, multiplication, etc.
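The dot product and the worked product C·D above can be verified with a short sketch in plain Python (the helper names `dot` and `mat_mul` are our own, not from the text):

```python
def dot(x, y):
    """Sum of products of corresponding entries."""
    return sum(a * b for a, b in zip(x, y))

print(dot((1, -5, 4), (4, -2, 5)))  # 34

def mat_mul(A, B):
    """Entry (i, j) is the dot product of row i of A with column j of B."""
    return [[dot(row, col) for col in zip(*B)] for row in A]

C = [[3, -9, -8], [2, 4, 3]]
D = [[7, -3], [-2, 3], [6, 2]]
print(mat_mul(C, D))  # [[-9, -52], [24, 12]]
```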

Although matrices have many applications, matrix multiplication is, at its core, an operation of linear algebra.