Orthogonal Matrix — Definition, Formula & Examples
An orthogonal matrix is a square matrix whose columns (and rows) are orthonormal vectors, which means its transpose is also its inverse. Multiplying an orthogonal matrix by its transpose gives the identity matrix.
A real square matrix $Q$ is orthogonal if and only if $Q^T Q = I$, where $Q^T$ denotes the transpose and $I$ is the identity matrix. Equivalently, $Q^{-1} = Q^T$, and $Q Q^T = I$.
Key Formula

$$Q^T Q = Q Q^T = I$$

Where:
- $Q$ = An $n \times n$ real matrix
- $Q^T$ = The transpose of $Q$
- $I$ = The $n \times n$ identity matrix
How It Works
To check whether a matrix $Q$ is orthogonal, compute $Q^T Q$ and see if you get the identity matrix. This amounts to verifying two things: each column has length 1, and every pair of distinct columns has a dot product of 0. Because the inverse of an orthogonal matrix is simply its transpose, solving systems or reversing transformations becomes computationally cheap. Orthogonal matrices represent geometric operations — rotations and reflections — that preserve lengths and angles.
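The check described above can be sketched in a few lines of NumPy. The helper name `is_orthogonal` and the tolerance are illustrative choices, not a standard API; a tolerance is needed because floating-point products of real rotations are rarely exactly the identity.

```python
import numpy as np

def is_orthogonal(Q, tol=1e-10):
    """Return True if Q^T Q equals the identity within a tolerance."""
    Q = np.asarray(Q, dtype=float)
    if Q.ndim != 2 or Q.shape[0] != Q.shape[1]:
        return False  # orthogonal matrices are square
    return np.allclose(Q.T @ Q, np.eye(Q.shape[0]), atol=tol)

# A rotation matrix is orthogonal; a scaled copy is not,
# because its columns no longer have unit length.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(is_orthogonal(R))      # True
print(is_orthogonal(2 * R))  # False
```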
Worked Example
Problem: Determine whether the matrix $Q = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ is orthogonal.

Step 1: Compute the transpose of $Q$: $Q^T = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$.

Step 2: Multiply and check if the result is the identity matrix: $Q^T Q = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I$.

Step 3: Verify the determinant as a quick sanity check: $\det(Q) = (0)(0) - (-1)(1) = 1$.

Answer: Since $Q^T Q = I$ and $\det(Q) = 1$, the matrix is orthogonal. It represents a 90° counterclockwise rotation.
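The worked example's three steps can be replayed numerically. This sketch uses the 90° counterclockwise rotation matrix named in the answer and checks the same three facts:

```python
import numpy as np

# The 90-degree counterclockwise rotation from the worked example.
Q = np.array([[0., -1.],
              [1.,  0.]])

# Steps 1-2: Q^T Q should be the identity.
print(np.allclose(Q.T @ Q, np.eye(2)))         # True

# Step 3: determinant is +1 (a rotation; -1 would be a reflection).
print(np.isclose(np.linalg.det(Q), 1.0))       # True

# Bonus: the inverse really is just the transpose.
print(np.allclose(np.linalg.inv(Q), Q.T))      # True
```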
Why It Matters
Orthogonal matrices are central to the QR decomposition, singular value decomposition (SVD), and principal component analysis (PCA). In computer graphics, they encode rotations of 3D objects efficiently. Their numerical stability makes them essential in scientific computing: multiplying by an orthogonal matrix preserves vector norms, so it does not amplify rounding errors.
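Two of these claims are easy to demonstrate: the $Q$ factor returned by a QR decomposition is orthogonal, and multiplying by it leaves vector norms unchanged. A short sketch (the random matrix and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# QR factors A into an orthogonal Q and an upper-triangular R.
Q, R = np.linalg.qr(A)
print(np.allclose(Q.T @ Q, np.eye(4)))  # True: Q is orthogonal
print(np.allclose(Q @ R, A))            # True: the factors reproduce A

# Norm preservation: ||Qx|| == ||x|| for any vector x.
x = rng.standard_normal(4)
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # True
```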
Common Mistakes
Mistake: Assuming any matrix with orthogonal columns is an orthogonal matrix.
Correction: The columns must be orthonormal — both mutually perpendicular and each of unit length. A matrix with orthogonal but non-unit columns satisfies $Q^T Q = D$ for some diagonal matrix $D$, not $Q^T Q = I$.
