**Linear Algebra Primer: Part 6 - Rank-Nullity Theorem**

This tutorial will cover range, rank, null space, nullity, the rank-nullity theorem, invertibility, and isomorphisms. A typeset version of the tutorial is attached below:

Linear_Algebra_Part6_Tutorial.pdf


**Rank**

The range of a linear transformation, denoted R(T), is defined as the subset of the codomain that T maps onto. It can be determined from the matrix representation of T by finding a largest set of linearly independent column vectors of the matrix. The size of that set is called the rank, and it gives the dimension of the space that R(T) spans. For example, if A is a 3x3 matrix over the real numbers and rank(A) = 2, then there are two linearly independent column vectors in A. This means that the span of the columns of A is a plane in **R**^{3}, or that the set of column vectors in A describes a two-dimensional subspace of **R**^{3}.

Let's look at a couple of examples. Let A be the following matrix. Here, rank(A) = 2, as (1, 3, 4) and (2, 6, 8) are linearly dependent (the latter is twice the former). Thus, there are two linearly independent column vectors in A, and the range of a linear transformation represented by the matrix A is R(T) = span( {(1, 3, 4), (2, 5, 6)} ).

| 1  2  2 |
| 3  5  6 |
| 4  6  8 |
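If you want to check a rank numerically, NumPy can do it directly (a quick sketch; NumPy is an assumption here, not something the tutorial requires):

```python
import numpy as np

# The example matrix A from above.
A = np.array([[1, 2, 2],
              [3, 5, 6],
              [4, 6, 8]])

# The third column is twice the first, so only two columns are
# linearly independent.
print(np.linalg.matrix_rank(A))  # 2
```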

Consider the matrix B defined below. Here, the column vectors are all in **R**^{2}. Thus, by the linear independence tests, any set of vectors from **R**^{2} with more than two elements is linearly dependent. As none of these vectors are multiples of each other, any subset of the column vectors with at most two elements is linearly independent. Thus, rank(B) = 2. The range of a linear transformation represented by B is R(T) = span( {(1, 5), (2, 6)} ) = **R**^{2}, though any two column vectors of B could be used to define the range.

| 1  2  3  4 |
| 5  6  7  8 |

**Nullity**

The null space of a linear transformation, denoted N(T), is defined as the set of all vectors **v** in the domain such that T(**v**) = **0**. The nullity is defined as the dimension of the null space. Note that N(T) always contains the zero vector, since T(**0**) = **0**. If the column vectors of the transformation's matrix are linearly independent, then N(T) = { **0** } and nullity(T) = 0. If the column vectors are linearly dependent, then the null space contains nonzero vectors as well, and the nullity is positive.

In order to find the null space of a matrix or linear transformation, it is helpful to row-reduce the matrix, then solve the matrix equation A**x** = **0**, where A is the reduced matrix and **x** is an element of the null space. For the purposes of this tutorial, familiarity with row-reduction techniques is assumed.

Let's look at some examples. Let the linear transformation T be represented by the matrix A defined below. Here, N(T) = { **0** } and nullity(T) = 0, as the only vector that T sends to **0** is **0** itself. Another way of looking at it is that A's single column vector forms a linearly independent set on its own.

| 1 |
| 2 |
| 3 |

Consider another example. Let the linear transformation T be represented by the matrix B defined below. Here, there is only one linearly independent column vector, so rank(T) = 1. Row-reducing this matrix leaves only the top row, i.e. the single equation x + 3y - 4z = 0. Solving it gives the null space N(T) = span( {(4, 0, 1), (-3, 1, 0)} ), with nullity(T) = 2.

| 1  3  -4 |
| 5 15 -20 |
| 3  9 -12 |
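For larger matrices, a null space basis can be computed numerically. One way (a sketch, assuming NumPy) uses the SVD: the right singular vectors beyond the rank span N(T):

```python
import numpy as np

# The matrix B from the example; every row is a multiple of (1, 3, -4).
B = np.array([[1,  3,  -4],
              [5, 15, -20],
              [3,  9, -12]], dtype=float)

rank = np.linalg.matrix_rank(B)   # 1
_, _, vh = np.linalg.svd(B)
null_basis = vh[rank:]            # rows spanning N(T)

print(len(null_basis))                   # nullity = 2
print(np.allclose(B @ null_basis.T, 0))  # True: each basis vector maps to 0
```

The basis vectors returned this way are orthonormal, so they will be scaled versions of combinations of (4, 0, 1) and (-3, 1, 0), spanning the same plane.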

Note how in both cases, the rank and nullity add up to the dimension of the domain of T. This is called the Rank-Nullity Theorem. It states that for a linear transformation T: V -> W between vector spaces V and W, rank(T) + nullity(T) = dim(V). This theorem provides a lot of information about a linear transformation and makes it significantly easier to determine the nullity of a linear transformation given its rank.
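The theorem is easy to sanity-check numerically on the two example matrices above (a sketch assuming NumPy; dim(V) is the number of columns):

```python
import numpy as np

# The two example matrices from the Nullity section.
examples = [
    np.array([[1., 2., 2.],
              [3., 5., 6.],
              [4., 6., 8.]]),      # rank 2
    np.array([[1.,  3.,  -4.],
              [5., 15., -20.],
              [3.,  9., -12.]]),   # rank 1
]

for M in examples:
    rank = np.linalg.matrix_rank(M)
    # Count near-zero singular values to get the nullity independently.
    s = np.linalg.svd(M, compute_uv=False)
    tol = max(M.shape) * np.finfo(float).eps * s[0]
    nullity = M.shape[1] - np.count_nonzero(s > tol)
    print(rank, nullity, rank + nullity == M.shape[1])  # rank + nullity = dim(V)
```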

**Function Properties of Linear Transformations**

At this point, a strong familiarity with functions is assumed. This section will discuss strategies for determining whether linear transformations are one-to-one, onto, or bijective. When dealing with functions between finite sets, these properties are established by counting arguments. While certain vector spaces (like the edge space of a graph) are over finite or countable fields (like the integers modulo 2), vector spaces are often over fields that are uncountably infinite, such as the real or complex numbers. The properties of linear transformations allow dimension to be used instead when determining whether a linear transformation is one-to-one, onto, or bijective.

When dealing with functions in general, a function can be one-to-one only when |Domain| <= |Codomain|. Applying this to a linear transformation T: V -> W, if dim(V) <= dim(W), then T has the potential to be one-to-one. Consider a T that maps **R**^{2} into **R**^{3}. Since **R**^{2} is a plane, it can be mapped into **R**^{3} without collapsing any vectors. For example, T(x, y) = (x, y, 0) is a one-to-one linear transformation.
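In matrix terms, T(x, y) = (x, y, 0) has a 3x2 matrix, and being one-to-one is equivalent to having full column rank (nullity 0). A quick check, assuming NumPy:

```python
import numpy as np

# Matrix of T(x, y) = (x, y, 0), mapping R^2 into R^3.
T = np.array([[1, 0],
              [0, 1],
              [0, 0]])

# T is one-to-one exactly when rank(T) = dim(domain), i.e. nullity(T) = 0.
print(np.linalg.matrix_rank(T) == T.shape[1])  # True
```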

Similarly, a function can be onto only when |Domain| >= |Codomain|. Thus, an onto linear transformation T: V -> W requires that dim(V) >= dim(W). Consider the mapping T: P_{3}(**R**) -> P_{2}(**R**) given by T(f(x)) = f'(x). Clearly this isn't a one-to-one mapping, as T(2x + 3) = T(2x + 5) = 2. However, every polynomial in P_{2}(**R**) is the derivative of some polynomial in P_{3}(**R**), so the transformation covers the entire codomain. Thus, T is onto.
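Writing the derivative map in coordinates makes the onto-but-not-one-to-one claim concrete. In the bases {1, x, x^2, x^3} for P_3(R) and {1, x, x^2} for P_2(R), T has the 3x4 matrix below (a sketch assuming NumPy):

```python
import numpy as np

# T(f) = f' sends a + bx + cx^2 + dx^3 to b + 2cx + 3dx^2.
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])

rank = np.linalg.matrix_rank(D)
print(rank == 3)          # True: rank = dim P_2(R) = 3, so T is onto
print(D.shape[1] - rank)  # 1: nullity = 1 (the constants), so T is not one-to-one
```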

Now having looked at both one-to-one and onto linear transformations, it stands to reason that a bijective linear transformation requires dim(V) = dim(W). This logic is the same as having |Domain| = |Codomain| for a function like f(x) = 2x. An example of a bijective linear transformation is T: **R**^{3} -> P_{2}(**R**) given by T(a, b, c) = ax^{2} + bx + c. As a final note, bijections between vector spaces where dim(V) != dim(W) can occur; however, such functions are non-linear.

**Invertibility**

This section will discuss the conditions under which a linear transformation is invertible. As when dealing with functions, a linear transformation is invertible only when it is bijective. This ensures that every element of both the domain and the codomain has a unique mapping. In fact, the Rank-Nullity Theorem can be used when testing whether a linear transformation is invertible. Remember that the theorem states that for any linear transformation T: V -> W, rank(T) + nullity(T) = dim(V). If T is bijective, then nullity(T) = 0, and so rank(T) = dim(V).

Let's unpack this intuition. Suppose nullity(T) != 0 and T is bijective. This implies that there are vectors other than **0** such that T(**v**) = **0**. It follows that the transformation matrix's column vectors are not linearly independent, which leads to the conclusion that T maps V onto a proper subspace of W. Thus, T is not bijective (even if dim(V) = dim(W)), which is a contradiction. So in order for T to be invertible, rank(T) = dim(V), which is equivalent to saying that T is a bijective transformation.

Just as linear transformations can be described with transformation matrices, so can their inverse transformations. Remember that functions have a property that states f(f^{-1}(x)) = f^{-1}(f(x)) = x, which is called the identity property. As linear transformations are functions, they share this property, as do their transformation matrices. Thus, if T is represented by the matrix A, then T^{-1} is represented by A^{-1}.
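Concretely, inverting the matrix inverts the transformation. A small sketch (the matrix A below is an arbitrary invertible example, not one from the tutorial; assumes NumPy):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [0., 1., 0.],
              [1., 0., 1.]])
# rank(A) = dim(V) = 3, so the transformation is invertible.
print(np.linalg.matrix_rank(A))  # 3

A_inv = np.linalg.inv(A)
# T(T^{-1}(v)) = T^{-1}(T(v)) = v: both products give the identity matrix.
print(np.allclose(A @ A_inv, np.eye(3)))  # True
print(np.allclose(A_inv @ A, np.eye(3)))  # True
```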

**Isomorphisms**

Isomorphisms are a bit abstract, but provide a lot of power. They allow vector spaces to be represented in different, but equivalent, formats. For example, representing a 2x2 matrix over a field **F** as a vector in **F**^{4}, without loss of generality, is an isomorphism. Similarly, vectors (a, b, c) in **R**^{3} are isomorphic to the quadratics ax^{2} + bx + c.
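The 2x2-matrix-to-**F**^{4} isomorphism is exactly what flattening a matrix does. A quick demonstration over the reals (assuming NumPy):

```python
import numpy as np

M = np.array([[1., 2.],
              [3., 4.]])
v = M.reshape(4)  # M viewed as the vector (1, 2, 3, 4) in R^4
print(np.array_equal(v.reshape(2, 2), M))  # True: the round trip recovers M

# The map is linear: flattening a sum or scalar multiple gives the
# sum or scalar multiple of the flattened vectors.
N = np.array([[5., 6.],
              [7., 8.]])
print(np.array_equal((M + 2 * N).reshape(4), v + 2 * N.reshape(4)))  # True
```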

Let's put a little more definition to isomorphisms. Two finite-dimensional vector spaces over the same field are isomorphic if and only if they have the same dimension. A linear transformation is considered an isomorphism if and only if it is bijective. Isomorphisms are used to denote equivalence, and are commonly seen in Abstract Algebra and Graph Theory as well. In fact, "is isomorphic to" is an equivalence relation: it is reflexive, symmetric, and transitive.


Isomorphisms are incredibly powerful in answering a number of questions. One such question asks: if V_{0} is a subspace of V, and T: V -> W is an isomorphism, show that T(V_{0}) is a subspace of W. The proof is short.

Since T is an isomorphism, V and W are equivalent spaces (which is different from saying they are equal; they are equivalent, but not the same space). To show T(V_{0}) is a subspace of W: first, T(**0**) = **0**, so **0** is in T(V_{0}). Next, take any T(**u**) and T(**v**) in T(V_{0}), with **u**, **v** in V_{0}, and any scalar c. By linearity, T(**u**) + T(**v**) = T(**u** + **v**) and cT(**u**) = T(c**u**); since V_{0} is a subspace, **u** + **v** and c**u** are in V_{0}, so both results lie in T(V_{0}). Thus T(V_{0}) is closed under addition and scalar multiplication, and is a subspace of W. QED.

Another question that isomorphisms answer well: if B is a basis for V and T: V -> W is an isomorphism, show that T(B) is a basis for W.

Since T is an isomorphism, dim(V) = dim(W), which implies that their bases have the same cardinality. Since T is a bijection, each vector in B maps to a distinct vector in T(B), so T(B) has dim(W) elements. Moreover, T(B) is linearly independent: if a linear combination of the vectors of T(B) equals **0**, then by linearity T sends the corresponding combination of the vectors of B to **0**; since T is one-to-one, that combination must be **0**, and since B is linearly independent, every coefficient is zero. A linearly independent set of dim(W) vectors in W is a basis, so T(B) is a basis for W. QED.
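Numerically, this says the columns of an invertible matrix (the images of the standard basis vectors) are themselves a basis. A sketch with an arbitrary invertible matrix (my own example, not from the tutorial; assumes NumPy):

```python
import numpy as np

T = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
print(np.linalg.matrix_rank(T))  # 3: T is bijective, an isomorphism of R^3

# The images of the standard basis vectors are the columns of T.
images = T @ np.eye(3)
# Full rank: the images are linearly independent, hence a basis for R^3.
print(np.linalg.matrix_rank(images))  # 3
```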