Linear Algebra Primer: Part 7- Intro to Eigentheory

Posted 26 July 2013 - 09:31 PM

This tutorial will introduce the basics of eigentheory, which is the study of eigenvalues, eigenvectors, and the characteristic polynomial. Eigentheory is renowned for its usefulness in topics such as dynamical systems. This tutorial will introduce solving for eigenvalues and eigenvectors, describing their multiplicities, and diagonalizing a matrix.

Attached is a typeset copy of this tutorial.
Attached File: Eigentheory.pdf (109.73K)

Eigenvalues and Eigenvectors
Only square matrices are defined to have eigenvalues and eigenvectors. In order to solve for the eigenvalues, it is first necessary to derive the characteristic polynomial: f(x) = det(A - xI) (lambda is the conventional symbol, but for the sake of this tutorial, x will be used). The eigenvalues are the roots of the characteristic polynomial. Their associated eigenvectors satisfy the equation Av = xv, where v is a non-zero vector. This property extends to powers of the matrix, so A^n v = x^n v.

Let's look at an example. Consider the matrix below. Its characteristic polynomial is derived from the determinant of A - xI. So f(x) = (3-x)[(4-x)(2-x) - 10] + (2x - 6) = -x^3 + 9x^2 - 14x - 12. The eigenvalues are x = 3 ± sqrt(13) and x = 3.
|3 0 1|
|1 4 5|
|2 2 2|
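
As a quick numerical check (this snippet is my own addition, not part of the typeset tutorial), NumPy can recover both the characteristic polynomial and the eigenvalues. Note that numpy.poly returns the coefficients of det(xI - A), which differs from det(A - xI) only by a sign when n is odd; the roots are the same.

import numpy as np

A = np.array([[3.0, 0.0, 1.0],
              [1.0, 4.0, 5.0],
              [2.0, 2.0, 2.0]])

# Coefficients of det(xI - A): x^3 - 9x^2 + 14x + 12
print(np.poly(A))

# Numerical eigenvalues (in some order): approximately 6.606, -0.606, 3
print(np.linalg.eigvals(A))
print(3 + np.sqrt(13), 3 - np.sqrt(13), 3)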



Now let's solve for the eigenvectors. To do this, solve the equation (A - xI)v = 0. Note that this is equivalent to solving Av = xv. Plug in an eigenvalue for x and solve for a non-zero vector v that satisfies the above equation. This vector is an eigenvector associated with the eigenvalue.

Doing this yields the following eigenpairs (eigenvalue, eigenvector):
  • x = 3 ± sqrt(13): (±sqrt(13)/13, (13 ± 11sqrt(13))/26, 1), taking the signs together
  • x = 3: (1, -1, 0)
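
To verify an eigenpair numerically (again, a sketch of my own, not from the tutorial), just check Av = xv directly:

import numpy as np

A = np.array([[3.0, 0.0, 1.0],
              [1.0, 4.0, 5.0],
              [2.0, 2.0, 2.0]])

# Eigenpair for x = 3
x = 3.0
v = np.array([1.0, -1.0, 0.0])
print(A @ v)                      # [ 3. -3.  0.], which is x*v
print(np.allclose(A @ v, x * v))  # True

# Eigenpair for x = 3 + sqrt(13)
x = 3 + np.sqrt(13)
v = np.array([np.sqrt(13)/13, (13 + 11*np.sqrt(13))/26, 1.0])
print(np.allclose(A @ v, x * v))  # True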


The number of times an eigenvalue occurs as a root of the characteristic polynomial is called its algebraic multiplicity. The number of linearly independent eigenvectors associated with an eigenvalue (the dimension of its eigenspace) is called its geometric multiplicity. So in the example, each eigenvalue has an algebraic and geometric multiplicity of 1.

Deriving eigenpairs for linear transformations consists simply of deriving a transformation matrix and solving for the eigenpairs as described above. Just as with matrices, if v is an eigenvector for a linear transformation T with eigenvalue x, then T(v) = xv and T^n(v) = x^n v. So x^n is an eigenvalue for T^n. This property can be shown by induction on the integer n:

Consider the base case of n = 1. Here, T(v) = xv by definition.

I will assume that T^n(v) = x^n v holds true from n = 1 up to an arbitrary integer k. I will now prove the k+1 case.

Consider T^(k+1)(v) = T^k(T(v)) = T^k(xv), since T(v) = xv by the definition of an eigenvector.

By linearity, T^k(xv) = x(T^k(v))
= x(x^k v) by the inductive hypothesis
= x^(k+1) v by rules of exponents.

I have assumed true from n = 1 up to k and proven true for the k+1 case. Therefore, T^n(v) = x^n v for all integers n >= 1, where x^n is the eigenvalue of T^n associated with the eigenvector v.

Note that an analogous theorem holds true for matrices, and the proof is similar.
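
The power property is also easy to sanity-check numerically. A short sketch (my addition) using the eigenpair x = 3, v = (1, -1, 0) from the example:

import numpy as np

A = np.array([[3.0, 0.0, 1.0],
              [1.0, 4.0, 5.0],
              [2.0, 2.0, 2.0]])
x = 3.0
v = np.array([1.0, -1.0, 0.0])

# A^5 v should equal x^5 v = 243 * v
n = 5
Anv = np.linalg.matrix_power(A, n) @ v
print(Anv)                         # [ 243. -243.    0.]
print(np.allclose(Anv, x**n * v))  # True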


Now let's interpret eigenvalues and eigenvectors. Each eigenvalue is a solution to the equation det(A - xI) = 0. By the determinant test for linear independence, each eigenvalue x makes the columns of A - xI linearly dependent. Similarly, an eigenvector is in the null space of A - xI when x takes on the value of a specific eigenvalue. So (1, -1, 0) is in the null space of A - 3I in the example above.

Furthermore, since the characteristic polynomial of an n x n matrix has degree n, there are n eigenvalues, counted with algebraic multiplicity. Eigenvectors associated with distinct eigenvalues are linearly independent. Looking at it differently, each eigenspace is a subspace of the vector space on which the matrix acts, and its dimension is the geometric multiplicity of the corresponding eigenvalue. Thus, if the algebraic and geometric multiplicities are equal for each distinct eigenvalue, then the set of all eigenvectors forms a basis for the vector space.

So suppose there are n eigenvalues x1, x2, ..., xn, with the algebraic multiplicity of each xi equal to its geometric multiplicity. Then the eigenspace associated with each xi has a basis Bi consisting exactly of the eigenvectors associated with xi, and a basis for the original vector space is B = B1 U B2 U ... U Bn.
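
For the running example, it is easy to check that the three eigenvectors really do form a basis for R^3: stack them as columns and confirm the determinant is non-zero. (Again, a sketch of my own, not part of the typeset tutorial.)

import numpy as np

# The three eigenvectors from the example above, as columns
v1 = np.array([np.sqrt(13)/13, (13 + 11*np.sqrt(13))/26, 1.0])   # x = 3 + sqrt(13)
v2 = np.array([-np.sqrt(13)/13, (13 - 11*np.sqrt(13))/26, 1.0])  # x = 3 - sqrt(13)
v3 = np.array([1.0, -1.0, 0.0])                                  # x = 3
Q = np.column_stack([v1, v2, v3])

# A non-zero determinant means the columns are linearly independent,
# so the eigenvectors form a basis for R^3.
print(np.linalg.det(Q))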


Diagonalization
Using eigenvectors as a basis leads to diagonalization. Essentially, diagonalization allows a matrix A to be broken up based on its eigenvalues and eigenvectors. It is of the form A = QDQ^-1 (equivalently, D = Q^-1AQ), where D is the diagonal matrix whose diagonal entries are the eigenvalues of A, and Q is the matrix whose column vectors are the eigenvectors corresponding to those eigenvalues. So if D11 = x1, then the eigenvector associated with x1 is the first column of Q.

In order to diagonalize a matrix, two conditions must be met. The first condition states that the characteristic polynomial must split over the field. This means that the characteristic polynomial can be written as a product of linear terms in the form below, where c, a1, a2, a3, etc., are all constants in the field.
f(x) = c(x-a1)(x-a2)(x-a3)...



Consider the characteristic polynomial f(t) = t^2 + 1. This does not split over the real numbers, as it cannot be factored into linear terms; however, it does split over the complex numbers: t^2 + 1 = (t + i)(t - i).
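
NumPy can illustrate this as well (my addition): numpy.roots takes polynomial coefficients and returns the roots, complex ones included.

import numpy as np

# t^2 + 1 has coefficients [1, 0, 1]; its roots are +/- i,
# so it splits over C but not over R.
print(np.roots([1, 0, 1]))   # approximately [0.+1.j, 0.-1.j]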

The second condition for diagonalization states that each eigenvalue must have the same algebraic and geometric multiplicity. This point was addressed to a certain extent in the previous section. If there exists an eigenvalue with a geometric multiplicity smaller than its algebraic multiplicity, then the matrix Q will not be a square matrix, as there will be fewer than n eigenvectors (given that the original matrix is n x n). Thus, Q is not invertible.

The equation A = QDQ^-1 can be thought of as a change of basis formula as well. If A is written in terms of a basis B, then Q^-1 changes the basis from B to C. The matrix D is in terms of the basis C. Finally, the matrix Q changes the basis back from C to B. Oftentimes, dealing with a different basis can provide insights or make it easier to solve certain problems.

Let's look at a couple examples of diagonalizing matrices. Consider the matrix A below:
|6 2 0|
|0 6 0|
|0 0 8|



It has the characteristic polynomial f(x) = (8-x)(6-x)^2, so it splits over the real numbers. The eigenvector associated with x = 8 is (0, 0, 1). The eigenvectors associated with x = 6 are all multiples of (1, 0, 0), so its geometric multiplicity is 1 while its algebraic multiplicity is 2. Thus, the algebraic multiplicity for x = 6 is greater than its geometric multiplicity, so this matrix is not diagonalizable.
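
This failure shows up numerically as well. In the sketch below (my addition), numpy.linalg.eig cannot produce three independent eigenvectors for this matrix, so the matrix of eigenvectors is rank-deficient:

import numpy as np

A = np.array([[6.0, 2.0, 0.0],
              [0.0, 6.0, 0.0],
              [0.0, 0.0, 8.0]])

vals, vecs = np.linalg.eig(A)
print(vals)   # eigenvalues 6, 6, 8 (order may vary)

# The repeated eigenvalue 6 yields only one independent eigenvector, so
# the columns of vecs span only a 2-dimensional space; this should print 2.
print(np.linalg.matrix_rank(vecs))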


Consider a second example, with the matrix B below:
|0 -4|
|2  6|



Here, the characteristic polynomial is f(x) = (x-4)(x-2), with the eigenvectors (-1, 1) for x = 4, and (-2, 1) for x = 2. Thus, B is diagonalizable: its characteristic polynomial splits, and each eigenvalue has equal algebraic and geometric multiplicity.

So B can be written as:
   B   =    Q      D     Q^-1
|0 -4| = |-1 -2| |4 0| |1   2|
|2 6 |   | 1  1| |0 2| |-1 -1|
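
Finally, the factorization itself can be verified with a few lines of NumPy (my addition, not part of the original post):

import numpy as np

B = np.array([[0.0, -4.0],
              [2.0,  6.0]])
Q = np.array([[-1.0, -2.0],
              [ 1.0,  1.0]])
D = np.diag([4.0, 2.0])

# Q D Q^-1 should reproduce B exactly
print(Q @ D @ np.linalg.inv(Q))                  # [[ 0. -4.] [ 2.  6.]]
print(np.allclose(Q @ D @ np.linalg.inv(Q), B))  # True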


