**Linear Algebra Primer: Part 7 - Introduction to Eigentheory**

This tutorial will introduce the basics of eigentheory, which is the study of eigenvalues, eigenvectors, and the characteristic polynomial. Eigentheory is renowned for its usefulness in topics such as dynamical systems. This tutorial will introduce solving for eigenvalues and eigenvectors, describing their multiplicities, and diagonalizing a matrix.

Attached is a typeset copy of this tutorial.

Eigentheory.pdf


**Eigenvalues and Eigenvectors**

Only square matrices are defined to have eigenvalues and eigenvectors. In order to solve for the eigenvalues, it is first necessary to derive the characteristic polynomial f(x) = det(A - xI) (lambda is the conventional symbol, but for the sake of this tutorial, x will be used). The eigenvalues are the roots of the characteristic polynomial. Their associated eigenvectors satisfy the equation A**v** = x**v**, where **v** is a non-zero vector. This property extends to powers of A, so A^{n}**v** = x^{n}**v**.

Let's look at an example. Consider the matrix below:

|3 0 1|
|1 4 5|
|2 2 2|

Its characteristic polynomial is derived from the determinant of A - xI. So f(x) = (3-x)[(4-x)(2-x) - 10] + (-6 + 2x) = -x^{3} + 9x^{2} - 14x - 12. The eigenvalues are x = 3 +- sqrt(13) and x = 3.
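As a numerical check, these roots can be computed directly; a minimal sketch using NumPy (the choice of library is my assumption, not part of the tutorial):

```python
import numpy as np

# The example matrix from the tutorial
A = np.array([[3.0, 0.0, 1.0],
              [1.0, 4.0, 5.0],
              [2.0, 2.0, 2.0]])

# The eigenvalues are the roots of det(A - xI); NumPy finds them numerically
eigenvalues = np.sort(np.real(np.linalg.eigvals(A)))

# Expected roots: 3 - sqrt(13), 3, and 3 + sqrt(13)
expected = np.sort([3 - np.sqrt(13), 3.0, 3 + np.sqrt(13)])
assert np.allclose(eigenvalues, expected)
```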

Now let's solve for the eigenvectors. To do this, solve (A - xI)**v** = **0**. Note that this is equivalent to solving A**v** = x**v**. Plug in an eigenvalue for x and solve for a non-zero vector **v** that satisfies the above equation. This vector is an eigenvector associated with that eigenvalue.

Doing this yields the following eigenpairs (eigenvalue, eigenvector):

- x = 3 +- sqrt(13): (+-sqrt(13)/13, 1/26 * (13 +- 11sqrt(13)), 1), with signs taken together
- x = 3: (1, -1, 0)
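These eigenpairs can be verified numerically by checking A**v** = x**v** for each pair; a sketch, again assuming NumPy:

```python
import numpy as np

A = np.array([[3.0, 0.0, 1.0],
              [1.0, 4.0, 5.0],
              [2.0, 2.0, 2.0]])
s = np.sqrt(13)

# (eigenvalue, eigenvector) pairs; signs in the first two are taken together
pairs = [
    (3 + s, np.array([ s / 13, (13 + 11 * s) / 26, 1.0])),
    (3 - s, np.array([-s / 13, (13 - 11 * s) / 26, 1.0])),
    (3.0,   np.array([1.0, -1.0, 0.0])),
]

# Each pair must satisfy A v = x v
for x, v in pairs:
    assert np.allclose(A @ v, x * v)
```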

The number of times an eigenvalue occurs as a root of the characteristic polynomial is called its algebraic multiplicity. The number of linearly independent eigenvectors associated with each eigenvalue (the dimension of its eigenspace) is called the geometric multiplicity of the eigenvalue. So in the example, each eigenvalue has an algebraic and geometric multiplicity of 1.

Deriving eigenpairs for linear transformations consists simply of deriving a transformation matrix and solving for the eigenpairs as described above. Just as with matrices, if **v** is an eigenvector for a linear transformation T with eigenvalue x, then T(**v**) = x**v** and T^{n}(**v**) = x^{n}**v**. So x^{n} is an eigenvalue of T^{n}, with **v** as an associated eigenvector. This property can be shown by induction on the integer n:

Consider the base case of n = 1. Here, T(**v**) = x**v** by definition.

I will assume that T^{n}(**v**) = x^{n}**v** holds true from n = 1 up to an arbitrary integer k. I will now prove true for the k+1 case.

Consider T^{k+1}(**v**) = T^{k}(T(**v**)) = T^{k}(x**v**) by the inductive hypothesis.

By linearity, T^{k}(x**v**) = x(T^{k}(**v**))

= x(x^{k}**v**), also by the inductive hypothesis

= x^{k+1}**v** by rules of exponents.

I have assumed true from n = 1 up to k and proven true for the k+1 case. Therefore, T^{n}(**v**) = x^{n}**v** for all integers n >= 1, where x^{n} is an eigenvalue of T^{n} and **v** is the associated eigenvector.

Note that an analogous theorem holds true for matrices, and the proof is similar.
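The matrix version of the theorem is easy to check numerically for small powers; a sketch with NumPy, using the eigenpair x = 3, **v** = (1, -1, 0) of the example matrix:

```python
import numpy as np

A = np.array([[3.0, 0.0, 1.0],
              [1.0, 4.0, 5.0],
              [2.0, 2.0, 2.0]])
x = 3.0                          # an eigenvalue of A
v = np.array([1.0, -1.0, 0.0])   # its associated eigenvector

# A^n v should equal x^n v for every n >= 1
for n in range(1, 6):
    assert np.allclose(np.linalg.matrix_power(A, n) @ v, x**n * v)
```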

Now let's interpret eigenvalues and eigenvectors. Each eigenvalue is a solution to the equation det(A - xI) = 0. By the determinant test for linear independence, each eigenvalue makes the columns of A - xI linearly dependent. Similarly, an eigenvector is in the null space of A - xI when x takes on the value of a specific eigenvalue. So (1, -1, 0) is in the null space of A - 3I in the example above.

Furthermore, if the algebraic and geometric multiplicities are equal for each distinct eigenvalue, then the eigenvectors of the matrix span the entire space. Since the characteristic polynomial is of degree n, it follows that there are n eigenvalues, counted with multiplicity. Eigenvectors associated with distinct eigenvalues are linearly independent, so two distinct eigenvalues can never share an eigenvector. Looking at it differently, each eigenspace is a subspace of the vector space on which the matrix acts, and the dimension of each eigenspace is the geometric multiplicity of the corresponding eigenvalue. Thus, if the geometric multiplicities sum to n, the set of all basis eigenvectors forms a basis for the vector space.

So suppose there are n eigenvalues x_{1}, x_{2}, ..., x_{n}, with the algebraic multiplicity of each x_{i} equal to its geometric multiplicity. Then the eigenspace associated with each x_{i} has a basis B_{i} consisting exactly of eigenvectors associated with the given eigenvalue x_{i}, and a basis for the original vector space is B = B_{1} U B_{2} U ... U B_{n}.
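For the example matrix, which has three distinct eigenvalues, the union of the eigenspace bases is a basis for R^3; a sketch of this check (tooling is my choice, not the tutorial's):

```python
import numpy as np

A = np.array([[3.0, 0.0, 1.0],
              [1.0, 4.0, 5.0],
              [2.0, 2.0, 2.0]])

# Columns of `vecs` are eigenvectors of A, one per eigenvalue
vals, vecs = np.linalg.eig(A)

# All three eigenvalues are distinct...
assert len(set(np.round(np.real(vals), 8))) == 3
# ...so the eigenvectors are linearly independent and form a basis of R^3:
# the matrix whose columns are the eigenvectors is invertible
assert abs(np.linalg.det(vecs)) > 1e-8
```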

**Diagonalization**

Using eigenvectors as bases provides applications for diagonalization. Essentially, diagonalization allows a matrix A to be broken up based on its eigenvalues and eigenvectors. It is in the form A = QDQ^{-1}, where D is the matrix whose diagonal entries are the eigenvalues of A, and Q is the matrix whose column vectors are the eigenvectors corresponding to those eigenvalues, in the same order. So if D_{11} = x_{1}, then the eigenvector associated with x_{1} is the first column of Q.

In order to diagonalize a matrix, two conditions must be met. The first condition states that the characteristic polynomial must split over the field. This means that the characteristic polynomial can be written as a product of linear factors in the form below, where c, a_{1}, a_{2}, a_{3}, etc., are all constants in the field.

f(x) = c(x - a_{1})(x - a_{2})(x - a_{3})...

Consider the characteristic polynomial f(t) = t^{2} + 1. This does not split over the real numbers, as it cannot be factored into real linear terms; however, it does split over the complex numbers: t^{2} + 1 = (t + i)(t - i).
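Numerically, the complex roots of t^{2} + 1 can be recovered with a polynomial root finder; a sketch assuming NumPy:

```python
import numpy as np

# Roots of t^2 + 1 (coefficients [1, 0, 1], highest degree first)
roots = np.roots([1.0, 0.0, 1.0])

# No real roots, but two complex roots: i and -i
roots = sorted(roots, key=lambda z: z.imag)
assert np.allclose(roots, [-1j, 1j])
```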

The second condition for diagonalization states that each eigenvalue must have the same algebraic and geometric multiplicity. This point was addressed to a certain extent in the previous section. If there exists an eigenvalue with a geometric multiplicity smaller than its algebraic multiplicity, then the matrix Q will not be a square matrix, as there will be fewer than n linearly independent eigenvectors (given that the original matrix is n x n). Thus, Q is not invertible.

The equation A = QDQ^{-1} can be thought of as a change of basis formula as well. If A is written in terms of a basis B, then Q^{-1} is a matrix that changes coordinates from basis B to a basis C of eigenvectors. The matrix D acts in terms of the basis C. Finally, the matrix Q changes coordinates back from C to B. Oftentimes, working in a different basis can provide insights or make it easier to solve certain problems.

Let's look at a couple examples of diagonalizing matrices. Consider the matrix A below:

|6 2 0|
|0 6 0|
|0 0 8|

It has the characteristic polynomial f(x) = (x-8)(x-6)^{2}, so it splits over the real numbers. The eigenvector associated with x = 8 is (0, 0, 1). The only eigenvector (up to scaling) associated with x = 6 is (1, 0, 0). Thus, the algebraic multiplicity of x = 6 is greater than its geometric multiplicity, so this matrix is not diagonalizable.
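The geometric multiplicity of each eigenvalue can be computed as the dimension of the null space of A - xI, i.e. n minus the rank; a sketch with NumPy confirming that x = 6 falls short:

```python
import numpy as np

A = np.array([[6.0, 2.0, 0.0],
              [0.0, 6.0, 0.0],
              [0.0, 0.0, 8.0]])
n = A.shape[0]

# Geometric multiplicity of x = dim null(A - xI) = n - rank(A - xI)
geo_mult_6 = n - np.linalg.matrix_rank(A - 6 * np.eye(n))
geo_mult_8 = n - np.linalg.matrix_rank(A - 8 * np.eye(n))

# x = 6 has algebraic multiplicity 2 but geometric multiplicity only 1,
# so A is not diagonalizable; x = 8 has both multiplicities equal to 1
assert geo_mult_6 == 1
assert geo_mult_8 == 1
```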

Consider a second example, with the matrix B below:

|0 -4|
|2 6|

Here, the characteristic polynomial is f(x) = (x-4)(x-2), with the eigenvectors (-1, 1) for x = 4 and (-2, 1) for x = 2. Thus, B is diagonalizable, since its characteristic polynomial splits and the algebraic and geometric multiplicities are the same for each eigenvalue.

So B can be written as B = QDQ^{-1}, with:

Q =
|-1 -2|
| 1 1|

D =
|4 0|
|0 2|

Q^{-1} =
| 1 2|
|-1 -1|
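This factorization can be verified by multiplying the three matrices back together; a sketch with NumPy:

```python
import numpy as np

B = np.array([[0.0, -4.0],
              [2.0,  6.0]])
Q = np.array([[-1.0, -2.0],
              [ 1.0,  1.0]])   # columns: eigenvectors (-1, 1) and (-2, 1)
D = np.diag([4.0, 2.0])        # eigenvalues 4 and 2 on the diagonal

# B should factor as Q D Q^{-1}
assert np.allclose(Q @ D @ np.linalg.inv(Q), B)
```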