Vector Space Projection — Definition, Formula & Examples
A vector space projection is the operation of mapping a vector onto a subspace (such as a line or plane) so that the difference between the original vector and its image is orthogonal to that subspace. The result is the component of the vector that lies entirely within the subspace.
Given a subspace W of an inner product space V and a vector v, the orthogonal projection of v onto W is the unique vector w in W such that v − w is orthogonal to every vector in W. When W is the span of a single vector u, this reduces to projection onto a single vector.
Key Formula
proj_u(v) = ((v · u) / (u · u)) u
Where:
- v = The vector being projected
- u = The vector (or direction) onto which you project
- v · u = The dot product of v and u
How It Works
To project a vector v onto a vector u, you compute the scalar ratio of their dot product v · u to the dot product u · u of u with itself, then scale u by that ratio. The resulting vector points in the direction of u and represents how much of v lies along u. The leftover part, v − proj_u(v), is perpendicular to u. For projection onto a higher-dimensional subspace with basis columns in a matrix A, you use the projection matrix P = A(AᵀA)⁻¹Aᵀ.
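Both cases can be sketched in a few lines of NumPy; the function names here are illustrative, not from any particular library:

```python
import numpy as np

def project_onto_vector(v, u):
    """Orthogonal projection of v onto the line spanned by u."""
    v, u = np.asarray(v, dtype=float), np.asarray(u, dtype=float)
    return (v @ u) / (u @ u) * u

def projection_matrix(A):
    """Projection matrix P = A (A^T A)^(-1) A^T onto the column space of A."""
    A = np.asarray(A, dtype=float)
    return A @ np.linalg.inv(A.T @ A) @ A.T

v = np.array([3.0, 4.0])
u = np.array([1.0, 0.0])

p = project_onto_vector(v, u)   # the component of v along u
residual = v - p                # perpendicular to u: residual @ u == 0

# A projection matrix is idempotent: projecting twice changes nothing.
P = projection_matrix(np.array([[1.0], [0.0]]))
```

Note the idempotence check at the end: P @ P equals P, which is the defining property of a projection matrix.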
Worked Example
Problem: Project the vector v = (3, 4) onto the vector u = (1, 0).
Compute the dot products: v · u = 3·1 + 4·0 = 3 and u · u = 1·1 + 0·0 = 1.
Apply the formula: proj_u(v) = (3/1)(1, 0) = (3, 0).
Verify orthogonality: the residual v − proj_u(v) = (0, 4) satisfies (0, 4) · (1, 0) = 0, so it is perpendicular to u.
Answer: The projection of (3, 4) onto (1, 0) is (3, 0).
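The same arithmetic can be checked step by step in plain Python, mirroring the steps above:

```python
v = (3.0, 4.0)
u = (1.0, 0.0)

# Step 1: the two dot products.
vu = v[0] * u[0] + v[1] * u[1]   # v · u = 3
uu = u[0] * u[0] + u[1] * u[1]   # u · u = 1

# Step 2: scale u by the ratio.
c = vu / uu
proj = (c * u[0], c * u[1])      # (3.0, 0.0)

# Step 3: the residual should be perpendicular to u.
resid = (v[0] - proj[0], v[1] - proj[1])
check = resid[0] * u[0] + resid[1] * u[1]   # 0.0
```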
Why It Matters
Projection is the engine behind least-squares regression: when a system Ax = b has no exact solution, you project b onto the column space of A to find the best approximate solution. It also appears in computer graphics for lighting calculations and in signal processing for decomposing signals into orthogonal components.
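As a sketch of the least-squares connection (the data points here are made up for illustration): fitting a line to three points that don't lie on any single line amounts to projecting b onto the column space of A via the normal equations.

```python
import numpy as np

# Three points (x, y) with no exact line through them.
x = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 2.0])

# Design matrix for y = c0 + c1*x; columns are a basis of the subspace.
A = np.column_stack([np.ones_like(x), x])

# Normal equations A^T A c = A^T b give the best-fit coefficients.
c = np.linalg.solve(A.T @ A, A.T @ b)

p = A @ c    # projection of b onto the column space of A
r = b - p    # residual, orthogonal to every column of A
```

The orthogonality of the residual to the columns of A is exactly what "best approximate solution" means here.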
Common Mistakes
Mistake: Dividing by v · v instead of u · u in the denominator.
Correction: The denominator must be the dot product of the vector you are projecting onto with itself (u · u), not the vector being projected.
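A quick numerical check makes this mistake visible: only the correct denominator leaves a residual that is orthogonal to u.

```python
import numpy as np

v = np.array([3.0, 4.0])
u = np.array([1.0, 0.0])

right = (v @ u) / (u @ u) * u   # correct: divide by u · u
wrong = (v @ u) / (v @ v) * u   # mistake: divide by v · v

# Orthogonality test: residual dotted with u should be zero.
ok = (v - right) @ u    # 0.0
bad = (v - wrong) @ u   # nonzero, so 'wrong' is not a projection onto u
```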
