Matrix Norm — Definition, Formula & Examples
A matrix norm is a function that assigns a non-negative real number to a matrix, measuring its 'size' or 'magnitude' in a way that satisfies specific properties analogous to vector norms.
A matrix norm is a function $\|\cdot\|$ satisfying: (1) $\|A\| \ge 0$, with equality iff $A = 0$, (2) $\|cA\| = |c|\,\|A\|$ for all scalars $c$, and (3) $\|A + B\| \le \|A\| + \|B\|$ (triangle inequality). A submultiplicative norm additionally satisfies $\|AB\| \le \|A\|\,\|B\|$.
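These axioms can be checked numerically. The sketch below (pure standard library; the helper names `frob` and `matmul` are illustrative, not from any particular library) verifies the three norm axioms and submultiplicativity for the Frobenius norm on sample matrices:

```python
import math

def frob(A):
    """Frobenius norm: square root of the sum of squared entries."""
    return math.sqrt(sum(x * x for row in A for x in row))

def matmul(A, B):
    """Plain triple-loop product for 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, -1.0], [1.0, 0.0]]
c = -3.0

# (1) non-negativity and (2) absolute homogeneity
assert frob(A) >= 0
assert math.isclose(frob([[c * x for x in row] for row in A]), abs(c) * frob(A))

# (3) triangle inequality on A + B
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
assert frob(S) <= frob(A) + frob(B)

# Submultiplicativity: ||AB|| <= ||A|| * ||B||
assert frob(matmul(A, B)) <= frob(A) * frob(B)
```

A numeric check like this is not a proof, of course, but it is a quick sanity test when implementing a norm by hand.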
Key Formula
$$\|A\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2}$$
Where:
- $A$ = An $m \times n$ matrix with entries $a_{ij}$
- $\|A\|_F$ = The Frobenius norm of $A$
- $a_{ij}$ = The entry in row $i$, column $j$ of $A$
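The formula translates directly into code. As a sketch (the matrix values here are illustrative), applying it entrywise should agree with NumPy's built-in Frobenius norm:

```python
import numpy as np

# An illustrative 2x3 matrix
A = np.array([[1.0, -2.0, 3.0],
              [0.0,  4.0, -1.0]])

# Apply the formula directly: sqrt of the sum of squared entry magnitudes
by_formula = np.sqrt(np.sum(np.abs(A) ** 2))

# NumPy's built-in Frobenius norm should agree
built_in = np.linalg.norm(A, ord='fro')

assert np.isclose(by_formula, built_in)
print(by_formula)  # sqrt(1 + 4 + 9 + 0 + 16 + 1) = sqrt(31)
```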
How It Works
Several matrix norms are commonly used. The Frobenius norm treats the matrix entries as a single vector and computes the Euclidean length. The induced (operator) norm measures the maximum factor by which the matrix can stretch a vector. For example, the induced 2-norm equals the largest singular value of the matrix. Your choice of norm depends on the application — the Frobenius norm is easy to compute, while operator norms connect directly to how the matrix acts on vectors.
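The claim that the induced 2-norm equals the largest singular value, and that it bounds how much the matrix can stretch any vector, can be illustrated numerically (the matrix here is a made-up example):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])  # illustrative diagonal matrix

# Induced 2-norm (spectral norm) via NumPy
spectral = np.linalg.norm(A, ord=2)

# Largest singular value from the SVD
sigma_max = np.linalg.svd(A, compute_uv=False)[0]
assert np.isclose(spectral, sigma_max)

# Empirically: ||Ax|| <= ||A||_2 * ||x|| for any x
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(2)
    assert np.linalg.norm(A @ x) <= spectral * np.linalg.norm(x) + 1e-12
```

For this diagonal matrix the stretch factor is largest along the first axis, which is exactly what the singular value 2 captures.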
Worked Example
Problem: Find the Frobenius norm of the matrix $A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$.
Step 1: Square each entry of the matrix: $1^2 = 1$, $2^2 = 4$, $2^2 = 4$, $4^2 = 16$.
Step 2: Sum all the squared entries: $1 + 4 + 4 + 16 = 25$.
Step 3: Take the square root of the sum: $\sqrt{25} = 5$.
Answer: $\|A\|_F = 5$.
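The three steps can be reproduced in a few lines of Python (using an illustrative sample $2 \times 2$ matrix):

```python
import math

# Sample 2x2 matrix (illustrative values)
A = [[1, 2],
     [2, 4]]

squared = [x ** 2 for row in A for x in row]  # Step 1: square each entry
total = sum(squared)                          # Step 2: 1 + 4 + 4 + 16 = 25
norm = math.sqrt(total)                       # Step 3: take the square root
print(norm)  # 5.0
```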
Why It Matters
Matrix norms are essential for analyzing the stability of numerical algorithms, such as solving linear systems and computing eigenvalues. In fields like data science and control engineering, condition numbers — defined as ratios of matrix norms — tell you how sensitive a solution is to small changes in input data.
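A minimal sketch of this sensitivity, using a made-up nearly singular system: the condition number predicts that a tiny perturbation of the right-hand side can move the solution by a relatively enormous amount.

```python
import numpy as np

# A nearly singular matrix has a large condition number,
# so small input perturbations can cause large solution changes.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

kappa = np.linalg.cond(A)   # condition number in the 2-norm
x = np.linalg.solve(A, b)   # solution is [1, 1]

# Perturb b by 1e-5 in one component and compare solutions
x_pert = np.linalg.solve(A, b + np.array([0.0, 1e-5]))
rel_change = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
print(kappa, rel_change)  # kappa ~ 4e4; the solution shifts by ~10%
```

A perturbation of size $10^{-5}$ in the data produces roughly a 10% change in the solution, consistent with the condition number of about $4 \times 10^4$ amplifying the relative error.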
Common Mistakes
Mistake: Confusing the Frobenius norm with the induced 2-norm (spectral norm).
Correction: The Frobenius norm sums the squares of all entries, while the induced 2-norm equals the largest singular value. They generally give different values for the same matrix.
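A quick way to see the difference is the identity matrix, where the two norms disagree as soon as the dimension exceeds 1:

```python
import numpy as np

A = np.eye(3)  # the 3x3 identity matrix

frobenius = np.linalg.norm(A, ord='fro')  # sqrt(1 + 1 + 1) = sqrt(3)
spectral = np.linalg.norm(A, ord=2)       # largest singular value = 1
print(frobenius, spectral)

# In general ||A||_2 <= ||A||_F <= sqrt(rank(A)) * ||A||_2,
# with equality on the left for rank-1 matrices.
assert spectral <= frobenius
```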
