Mathematics · Algebra & Calculus · Linear Algebra
Matrix Multiplication Calculator
Computes the product of two matrices (up to 3×3) by applying the standard row-by-column dot product rule.
Calculator
Formula
The element at row i and column j of the product matrix AB equals the sum of products of the i-th row of A with the j-th column of B: (AB)ij = Σ Aik · Bkj. Here A is an m×n matrix, B is an n×p matrix, and the result AB is an m×p matrix. The index k runs from 1 to n, the shared inner dimension.
Source: Gilbert Strang, Introduction to Linear Algebra, 5th Edition, Wellesley-Cambridge Press, 2016.
How it works
Matrix multiplication is defined only when the number of columns in the first matrix equals the number of rows in the second matrix. For two 3×3 matrices A and B, this condition is automatically satisfied, and the result C is also a 3×3 matrix. The operation combines the rows of A with the columns of B through a sequence of dot products, capturing how one linear transformation is followed by another.
The defining formula is (AB)ij = Σ Aik · Bkj, summed over k from 1 to n. In plain terms: to find the element in row i and column j of the result, you take each element of row i from matrix A, multiply it by the corresponding element of column j from matrix B, and add all those products together. For 3×3 matrices this means three multiplications and two additions per output element, giving nine output values in total. Each output element therefore depends on an entire row of A and an entire column of B — this is what makes matrix multiplication fundamentally different from element-wise multiplication.
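The definition translates directly into three nested loops. The sketch below is illustrative (it is not the calculator's actual implementation) and works for any compatible m×n and n×p pair, not just 3×3:

```python
def matmul(A, B):
    """Compute C = AB via the row-by-column rule: C[i][j] = sum_k A[i][k] * B[k][j]."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must equal rows of B"
    C = [[0] * p for _ in range(m)]
    for i in range(m):          # row of A
        for j in range(p):      # column of B
            for k in range(n):  # shared inner dimension
                C[i][j] += A[i][k] * B[k][j]
    return C
```

Note that the innermost loop runs n times per output element, matching the "three multiplications and two additions" count for the 3×3 case.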
In practice, matrix multiplication encodes the composition of linear transformations. If A rotates a vector and B scales it, then AB first scales and then rotates. This composition property makes matrix multiplication central to 3D rendering pipelines (where translation, rotation, and projection matrices are multiplied together), to solving linear systems via LU decomposition, and to training neural networks via forward and backward propagation. The non-commutativity of matrix multiplication — AB ≠ BA in general — is one of its most important and often surprising properties.
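The composition property can be checked directly. In this sketch (illustrative matrices chosen for the example), A rotates a 2D vector 90° counterclockwise and B scales it by 2; applying B then A step by step gives the same result as applying the single combined matrix AB:

```python
def matvec(M, v):
    # Apply matrix M to column vector v.
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0, -1], [1, 0]]   # rotate 90 degrees counterclockwise
B = [[2, 0], [0, 2]]    # scale by 2
x = [1, 0]

step_by_step = matvec(A, matvec(B, x))   # scale first, then rotate
combined     = matvec(matmul(A, B), x)   # one application of the combined matrix
print(step_by_step, combined)            # both give [0, 2]
```

Because (AB)x = A(Bx), the matrix written closest to the vector acts first, which is why AB means "scale, then rotate" here.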
Worked example
Let matrix A and matrix B be defined as follows:
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] and B = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]
To find C11 (row 1 of A dotted with column 1 of B):
(1×9) + (2×6) + (3×3) = 9 + 12 + 9 = 30
To find C12 (row 1 of A dotted with column 2 of B):
(1×8) + (2×5) + (3×2) = 8 + 10 + 6 = 24
To find C13 (row 1 of A dotted with column 3 of B):
(1×7) + (2×4) + (3×1) = 7 + 8 + 3 = 18
To find C21 (row 2 of A dotted with column 1 of B):
(4×9) + (5×6) + (6×3) = 36 + 30 + 18 = 84
To find C22:
(4×8) + (5×5) + (6×2) = 32 + 25 + 12 = 69
To find C23:
(4×7) + (5×4) + (6×1) = 28 + 20 + 6 = 54
To find C31:
(7×9) + (8×6) + (9×3) = 63 + 48 + 27 = 138
To find C32:
(7×8) + (8×5) + (9×2) = 56 + 40 + 18 = 114
To find C33:
(7×7) + (8×4) + (9×1) = 49 + 32 + 9 = 90
The complete result is: C = [[30, 24, 18], [84, 69, 54], [138, 114, 90]]
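The worked example can be reproduced in a few lines; this snippet is only a check on the hand computation above, using the same triple-sum definition:

```python
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]
n = 3

# C[i][j] = sum over k of A[i][k] * B[k][j]
C = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
print(C)  # [[30, 24, 18], [84, 69, 54], [138, 114, 90]]
```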
Limitations & notes
This calculator is fixed at 3×3 matrices. Real-world applications frequently require multiplication of non-square matrices (e.g., a 4×3 matrix multiplied by a 3×5 matrix) or the very large matrices used in machine learning models with thousands of dimensions; for those cases, dedicated software such as NumPy, MATLAB, or a CAS is more appropriate. The calculator assumes all inputs are finite real numbers; complex-valued matrices (used in quantum mechanics and signal processing) are not supported. It also does not resize dynamically: to multiply smaller matrices (e.g., 2×2), pad the unused rows and columns with zeros, in which case the top-left 2×2 block of the 3×3 result equals the desired 2×2 product and the padded rows and columns contribute only zeros. Numerical precision is limited to standard IEEE 754 double-precision floating point, which may introduce small rounding errors for very large or very small input values. Finally, this tool computes AB only; if you need BA, you must swap the inputs manually, a reminder of the non-commutative nature of matrix multiplication.
Frequently asked questions
Why is matrix multiplication defined as a dot product of rows and columns rather than element-wise?
The row-column definition arises from the need to represent the composition of linear transformations. When you apply transformation B followed by transformation A to a vector, the combined effect is captured exactly by the product AB computed via dot products. Element-wise multiplication (the Hadamard product) exists but represents a different, less general operation with fewer algebraic properties.
Is matrix multiplication commutative? Does AB always equal BA?
No — matrix multiplication is generally not commutative. For most matrices A and B, AB ≠ BA. For example, with A = [[1,2],[3,4]] and B = [[0,1],[1,0]], AB = [[2,1],[4,3]] while BA = [[3,4],[1,2]]. Commutativity holds only for special cases, such as when one matrix is a scalar multiple of the identity matrix.
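The 2×2 example in this answer can be verified directly; the helper below is an illustrative sketch, not library code:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]  # permutation matrix: swaps rows (from the left) or columns (from the right)

print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]
```

Note that B swaps the columns of A when multiplied on the right, but swaps the rows of A when multiplied on the left, which is exactly why the two products differ.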
What is the dimensional requirement for two matrices to be multipliable?
Matrix A must have the same number of columns as matrix B has rows. If A is m×n and B is n×p, then AB is defined and produces an m×p matrix. If A is 3×2 and B is 2×4, the result is 3×4. If the inner dimensions do not match — for example A is 3×2 and B is 3×4 — multiplication is undefined.
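The compatibility rule reduces to a single comparison. A minimal sketch (the function name and example matrices are illustrative):

```python
def can_multiply(A, B):
    """AB is defined exactly when A's column count equals B's row count."""
    return len(A[0]) == len(B)

A_3x2 = [[1, 2], [3, 4], [5, 6]]
B_2x4 = [[1, 0, 0, 0], [0, 1, 0, 0]]
B_3x4 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]

print(can_multiply(A_3x2, B_2x4))  # True  (result would be 3x4)
print(can_multiply(A_3x2, B_3x4))  # False (inner dimensions 2 and 3 differ)
```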
How is matrix multiplication used in computer graphics and 3D rendering?
In 3D graphics, objects are transformed by multiplying their vertex coordinate vectors by transformation matrices. Rotation, scaling, translation (using homogeneous coordinates), and projection are each represented as 4×4 matrices. Composing these transformations — for example rotating then translating — is achieved by multiplying their matrices together first, then applying the single combined matrix to all vertices, which is computationally efficient.
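A small sketch of the idea, using homogeneous coordinates as described above. The matrices here are illustrative (a 90° rotation about the z-axis and a translation by one unit along x), not part of any real rendering API:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

# 4x4 translation by (1, 0, 0); the last column carries the offset
T = [[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# 4x4 rotation by 90 degrees about the z-axis (rounded to avoid float noise)
c = round(math.cos(math.pi / 2), 10)
s = round(math.sin(math.pi / 2), 10)
R = [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

M = matmul(T, R)   # combined transform: rotate first, then translate
p = [1, 0, 0, 1]   # point (1, 0, 0) in homogeneous form
print(matvec(M, p))  # (1,0,0) rotates to (0,1,0), then translates to (1,1,0)
```

Computing M once and applying it to every vertex replaces two matrix-vector products per vertex with one, which is the efficiency gain the answer refers to.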
What is the computational complexity of multiplying two n×n matrices?
The naive algorithm using the standard dot-product definition runs in O(n³) time, requiring n³ scalar multiplications and n²(n−1) additions. For 3×3 matrices this means 27 multiplications, which is trivial. For large n, such as in deep learning with n in the thousands, this becomes expensive. The Strassen algorithm reduces the complexity to approximately O(n^2.807), and modern research has pushed the exponent lower still, though the naive method remains standard for small matrices due to its lower overhead.
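The operation count is easy to confirm by instrumenting the triple loop. This sketch only counts multiplications; it does not perform an actual product:

```python
def count_mults(n):
    """Count scalar multiplications performed by the naive n x n algorithm."""
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                count += 1  # one multiplication A[i][k] * B[k][j] per innermost step
    return count

print(count_mults(3))   # 27
print(count_mults(10))  # 1000
```

The cubic growth is visible immediately: doubling n multiplies the work by eight, which is why n in the thousands pushes practitioners toward Strassen-style algorithms and heavily optimized BLAS libraries.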
Last updated: 2025-01-15 · Formula verified against primary sources.