Kernel — Definition, Formula & Examples
The kernel of a linear transformation is the set of all input vectors that get mapped to the zero vector. For a matrix A, the kernel is the set of all vectors x satisfying Ax = 0.
Given a linear transformation T: V → W between vector spaces, the kernel of T is defined as ker(T) = { v ∈ V : T(v) = 0 }. For an m × n matrix A, this coincides with the null space N(A) = { x ∈ ℝⁿ : Ax = 0 }, which is always a subspace of ℝⁿ.
Key Formula
ker(A) = { x ∈ ℝⁿ : Ax = 0 }
Where:
- A = An m × n matrix
- x = A vector in ℝⁿ
- 0 = The zero vector in ℝᵐ
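Checking whether a given vector belongs to the kernel is just a matrix-vector multiply. As a quick sketch (the matrix and vectors here are illustrative assumptions, not from the article):

```python
import numpy as np

# Hypothetical 2 x 4 matrix, used purely for illustration.
A = np.array([[1, 0, 2, -1],
              [0, 1, 3, 4]])

def in_kernel(A, x, tol=1e-12):
    """Return True if A @ x is (numerically) the zero vector."""
    return np.allclose(A @ x, np.zeros(A.shape[0]), atol=tol)

x = np.array([-2, -3, 1, 0])   # A @ x = (0, 0), so x is in ker(A)
y = np.array([1, 0, 0, 0])     # A @ y = (1, 0) != 0, so y is not

print(in_kernel(A, x))  # True
print(in_kernel(A, y))  # False
```

Note that membership is tested in the domain ℝⁿ (vectors of length 4 here), while the zero being compared against lives in the codomain ℝᵐ.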
How It Works
To find the kernel of a matrix A, set up the homogeneous system Ax = 0 and row-reduce the augmented matrix [A | 0] to reduced row echelon form. Free variables in the solution correspond to basis vectors of the kernel. The dimension of the kernel is called the nullity of A. By the Rank-Nullity Theorem, rank(A) + nullity(A) = n, where n is the number of columns of A.
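The procedure above can be sketched with SymPy, which exposes row reduction and null space computation directly (the matrix below is an assumed example, not one from the article):

```python
from sympy import Matrix

# Hypothetical 2 x 4 matrix for illustration.
A = Matrix([[1, 2, 1, 3],
            [2, 4, 0, 2]])

rref, pivot_cols = A.rref()   # reduced row echelon form and pivot columns
basis = A.nullspace()         # basis vectors of ker(A)

nullity = len(basis)
print(nullity)                          # 2
print(A.rank() + nullity == A.cols)     # True: Rank-Nullity Theorem
```

Columns without a pivot correspond to free variables, and each free variable contributes one basis vector to the kernel.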
Worked Example
Problem: Find the kernel of the matrix A = [1 1 5 3; 0 1 3 4] (a 2 × 4 matrix).
Step 1: Set up the homogeneous system Ax = 0 and row-reduce. Subtracting row 2 from row 1 gives the reduced row echelon form [1 0 2 -1; 0 1 3 4].
Step 2: Read off the solution. With x₃ and x₄ as free variables, the first equation gives x₁ = -2x₃ + x₄ and the second gives x₂ = -3x₃ - 4x₄.
Answer: The kernel is a 2-dimensional subspace of ℝ⁴ spanned by (-2, -3, 1, 0) and (1, -4, 0, 1). The nullity is 2.
Why It Matters
The kernel tells you exactly when a linear system has non-unique solutions and measures how much information a transformation loses. In differential equations, the kernel of a differential operator gives the space of homogeneous solutions. Understanding the kernel is also essential in data science applications like principal component analysis and least-squares fitting.
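The non-uniqueness claim can be made concrete: if x_p solves Ax = b, then x_p plus any kernel vector is another solution, since A(x_p + k) = Ax_p + Ak = b + 0. A small sketch (the matrix, right-hand side, and kernel vector are assumptions for illustration):

```python
import numpy as np

# Illustrative consistent underdetermined system.
A = np.array([[1.0, 1.0, 5.0, 3.0],
              [0.0, 1.0, 3.0, 4.0]])
b = np.array([2.0, 1.0])

x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
k = np.array([-2.0, -3.0, 1.0, 0.0])         # a vector in ker(A)

# Shifting by a kernel vector preserves the solution.
print(np.allclose(A @ x_p, b))        # True
print(np.allclose(A @ (x_p + k), b))  # True
```

A nontrivial kernel therefore means infinitely many solutions whenever at least one exists.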
Common Mistakes
Mistake: Confusing the kernel with the set of outputs (image/column space) of the transformation.
Correction: The kernel lives in the domain (input space) and consists of vectors mapped to zero. The image lives in the codomain and consists of all possible outputs.
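The domain/codomain distinction shows up directly in the dimensions of the basis vectors. A sketch with an assumed 2 × 4 matrix:

```python
from sympy import Matrix

# Illustrative 2 x 4 matrix (an assumption, not from the article).
A = Matrix([[1, 1, 5, 3],
            [0, 1, 3, 4]])

kernel_basis = A.nullspace()     # lives in R^4 (the domain)
image_basis = A.columnspace()    # lives in R^2 (the codomain)

print([len(v) for v in kernel_basis])  # [4, 4]: kernel vectors have 4 entries
print([len(v) for v in image_basis])   # [2, 2]: image vectors have 2 entries
```

If the two bases have vectors of different lengths, as here, confusing the kernel with the image is impossible to miss.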
