Permutation Matrix — Definition, Formula & Examples
A permutation matrix is a square matrix obtained by rearranging the rows (or columns) of the identity matrix. Each row and each column contains exactly one entry equal to 1, with all other entries equal to 0.
A permutation matrix of order $n$ is an $n \times n$ matrix in which every row and every column has precisely one nonzero entry, that entry being 1. Equivalently, $P$ is obtained from the identity matrix $I_n$ by applying a permutation $\sigma$ of $\{1, 2, \ldots, n\}$ to its rows. Permutation matrices are orthogonal: $P^{\top}P = PP^{\top} = I$, and $P^{-1} = P^{\top}$.
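To make the definition concrete, here is a short NumPy sketch (the helper name `perm_matrix` is illustrative, not a library function) that builds a permutation matrix by reordering the rows of the identity and checks the orthogonality identities:

```python
import numpy as np

def perm_matrix(sigma):
    """Permutation matrix whose row i is row sigma[i] of the identity."""
    n = len(sigma)
    return np.eye(n)[list(sigma)]  # reorder the rows of I_n

P = perm_matrix([1, 2, 0])  # row 0 <- row 1, row 1 <- row 2, row 2 <- row 0
print(P)
# P^T P = P P^T = I, so P^{-1} = P^T
assert np.array_equal(P.T @ P, np.eye(3))
assert np.array_equal(P @ P.T, np.eye(3))
```

Because each row and column holds a single 1, the dot products in $P^{\top}P$ are exact even in floating point, so the equality checks above hold with no tolerance.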
How It Works
Multiplying a matrix $A$ on the left by a permutation matrix $P$ rearranges the rows of $A$. Multiplying on the right by $P$ rearranges the columns instead. Because $P$ is orthogonal, its inverse is simply its transpose, so undoing the permutation costs nothing extra. Permutation matrices appear prominently in LU decomposition with partial pivoting, where the factorization takes the form $PA = LU$.
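A quick NumPy sketch of the left/right distinction (the matrices here are my own illustrations, not taken from the text):

```python
import numpy as np

P = np.eye(3, dtype=int)[[1, 2, 0]]  # permutation matrix from the identity
A = np.arange(1, 10).reshape(3, 3)   # [[1 2 3] [4 5 6] [7 8 9]]

print(P @ A)   # left multiplication rearranges the ROWS of A
# [[4 5 6]
#  [7 8 9]
#  [1 2 3]]
print(A @ P)   # right multiplication rearranges the COLUMNS of A

# Undoing the permutation costs only a transpose:
assert np.array_equal(P.T @ (P @ A), A)
```

For the $PA = LU$ connection, SciPy's `scipy.linalg.lu(A)` returns a triple `p, l, u`; note that SciPy places the permutation on the other side, so it satisfies `A = p @ l @ u`.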
Worked Example
Problem: Let $P = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}$ and let $A$ be any $3 \times 3$ matrix with rows $r_1, r_2, r_3$. Compute $PA$.
Identify the row permutation: Row 1 of $P$ has its 1 in column 2, row 2 in column 3, and row 3 in column 1. So $P$ sends row 2 of $A$ to row 1, row 3 to row 2, and row 1 to row 3.
Multiply: Apply the permutation to the rows of $A$; each row of $PA$ is the row of $A$ picked out by the position of the 1 in the corresponding row of $P$.
Answer: $PA$ stacks row 2, row 3, and row 1 of $A$, in that order. The rows of $A$ have been cyclically shifted upward.
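The hand computation can be checked in NumPy. The permutation matrix below is the one described above (1s in columns 2, 3, and 1); the sample $A$ is my own choice, since any $3 \times 3$ matrix exhibits the same cyclic shift:

```python
import numpy as np

P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

print(P @ A)
# [[4 5 6]
#  [7 8 9]
#  [1 2 3]]   <- rows of A shifted cyclically upward
```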
Why It Matters
Permutation matrices are central to numerical linear algebra. In Gaussian elimination with partial pivoting, the algorithm records row swaps as a permutation matrix so the factorization remains stable. They also encode symmetries in group theory and combinatorics.
Common Mistakes
Mistake: Confusing left-multiplication and right-multiplication by a permutation matrix.
Correction: $PA$ permutes the rows of $A$, while $AP$ permutes the columns. The effects are different, so the side of multiplication matters.
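The mistake is easy to demonstrate numerically (matrices below are my own illustrations):

```python
import numpy as np

P = np.eye(3, dtype=int)[[1, 2, 0]]
A = np.arange(1, 10).reshape(3, 3)

rows_permuted = P @ A   # left multiplication: rows move
cols_permuted = A @ P   # right multiplication: columns move
assert not np.array_equal(rows_permuted, cols_permuted)

# To apply the SAME index mapping to columns that P @ A applies to rows,
# right-multiply by the transpose: A @ P.T reorders columns by [1, 2, 0].
assert np.array_equal(A @ P.T, A[:, [1, 2, 0]])
```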
