Linear Algebra Primer: Part 8- Determinants and Limits

This tutorial will cover matrix determinants, limits, and applications of limits with dynamical systems.

Attached is a typeset version of this tutorial: Eigentheory_Part2.pdf (187.33 KB)

Determinants
The determinant can be studied further through diagonalization and eigentheory. The determinant of a matrix describes the scalar factor by which the corresponding transformation changes area or volume (or the analogue in higher dimensions), and its sign denotes whether the orientation of the object is preserved or reversed.

The determinant has a few properties that are important to introduce before proceeding; a quick numerical check follows the list. Let A and B be n x n matrices. Then:
  • det(kA) = k^n det(A), where k is a scalar constant
  • det(A) = 1/det(A^-1), provided A is invertible
  • det(AB) = det(A)det(B)
  • det(A) = det(A^T), where A^T denotes the transpose of A.
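As a quick sanity check, here is a minimal Python sketch, assuming NumPy, that confirms all four properties on random matrices:

import numpy as np

# Check the four determinant properties on random n x n matrices.
rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
k = 2.5

print(np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A)))              # det(kA) = k^n det(A)
print(np.isclose(np.linalg.det(A), 1 / np.linalg.det(np.linalg.inv(A))))      # det(A) = 1/det(A^-1)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # det(AB) = det(A)det(B)
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))                       # det(A) = det(A^T)

All four checks print True; a random matrix is invertible with probability 1, so the inverse in the second check exists.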


Consider A to be a diagonalizable matrix. Then A can be written as A = Q^-1DQ.

So consider det(A) = det(Q^-1DQ). By the determinant properties, det(A) = det(Q^-1)det(D)det(Q). However, since det(Q) = 1/det(Q^-1), and multiplication over a field (the reals, complexes, or rationals) is commutative, det(A) = det(D). By Laplace expansion, det(D) is simply the product of its diagonal entries, so det(A) is the product of the eigenvalues of A.

Notice that D looks like a transformation matrix for scaling. In fact, each eigenvalue can be associated with an axis, describing the amount the component associated with the axis is scaled. Remember that the eigenvectors form a basis, and basis vectors can be thought of as coordinate axes.

Similarly, diagonalization provides insight into the first determinant property: det(kA) = k^n det(A), where k is a constant. Consider kA = k(Q^-1DQ) = Q^-1(kD)Q, since scalars commute with matrix multiplication. Thus, det(kA) = det(Q^-1)det(kD)det(Q) = det(kD). As det(D) is the product of the diagonal entries (the eigenvalues), and kD multiplies each of the n eigenvalues by k, det(kD) = k^n det(D) = k^n det(A).

The determinant also answers questions regarding invertibility. To start, suppose A is an invertible square matrix. Then det(A)det(A^-1) = det(AA^-1) = det(I) = 1, so det(A) != 0 and det(A^-1) = 1/det(A). Conversely, if det(A) = 0, then no matrix A^-1 could satisfy det(A)det(A^-1) = 1, so A is not invertible. Hence, det(A) != 0 if and only if A is invertible. Similarly, since det(A) != 0 is a test for linear independence, the columns of an invertible square matrix are linearly independent. Looking at the determinant and invertibility in terms of eigentheory, an eigenvalue of 0 is an equivalent way of saying that A is not invertible: since det(A) = det(D), and det(D) is the product of the eigenvalues, det(A) = 0 exactly when some eigenvalue is 0.
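These facts are easy to check numerically. A minimal sketch, assuming NumPy:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(np.linalg.eigvals(A))          # eigenvalues 3 and 1 (order may vary)
print(np.isclose(np.prod(np.linalg.eigvals(A)), np.linalg.det(A)))   # True: det(A) = 3 * 1

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])           # second row is a multiple of the first
print(np.linalg.det(S))              # 0.0, so S is not invertible
print(np.linalg.eigvals(S))          # eigenvalues 0 and 5: a zero eigenvalue appears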


Limits
Eigentheory and diagonalization both play a major role in efficiently exponentiating a matrix, including calculating lim(n-->infinity) A^n.

Consider A to be a diagonalizable matrix, so A = Q^-1DQ. Thus, A^2 = (Q^-1DQ)(Q^-1DQ). Since matrix multiplication is associative, A^2 = Q^-1D(QQ^-1)DQ = Q^-1D^2Q. Similarly, A^3 = Q^-1D^3Q. More generally, A^n = Q^-1D^nQ. It is much easier to exponentiate the n eigenvalues in the diagonal matrix D than it is to calculate A^n strictly by multiplying A by itself repeatedly.
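This translates directly into code. A minimal sketch, assuming NumPy (np.linalg.eig returns a matrix whose columns are eigenvectors, which plays the role of Q^-1 above):

import numpy as np

def matrix_power_diag(A, n):
    # A^n via diagonalization: only the eigenvalues get exponentiated.
    # Assumes A is diagonalizable, so the eigenvector matrix is invertible.
    eigenvalues, P = np.linalg.eig(A)
    return P @ np.diag(eigenvalues ** n) @ np.linalg.inv(P)

A = np.array([[0.3, 0.35],
              [0.7, 0.65]])
print(matrix_power_diag(A, 10))       # agrees with repeated multiplication:
print(np.linalg.matrix_power(A, 10))

Here only the two eigenvalues are raised to the 10th power, instead of performing nine full matrix multiplications.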

In order for lim(n-->infinity) A^n to exist, two conditions have to be met. The first condition states that each eigenvalue has to be in the interval (-1, 1]; that is, for each eigenvalue x_i, -1 < x_i <= 1. Since the eigenvalues are exponentiated when calculating the limit, each sequence of powers x_i^n must converge (to 0 if |x_i| < 1, or to 1 if x_i = 1) in order for lim(n-->infinity) A^n to converge.

The second condition states that if there is an eigenvalue x_i = 1, then dim(E_1), the dimension of the eigenspace associated with x_i = 1 (the geometric multiplicity), must equal the algebraic multiplicity of that eigenvalue. In other words, the eigenvalue 1 must not be defective; in the examples below, this amounts to A being diagonalizable.

Let's look at an example. Consider the matrix A below:
|1  1|
|0  1|



It has the characteristic polynomial f(x) = (x-1)^2, so x = 1 is an eigenvalue with algebraic multiplicity 2. The only eigenvector (up to scaling) is (1, 0), so the geometric multiplicity of x = 1 is 1. Thus, dim(E_1) = 1 != 2, so A is not diagonalizable. Therefore, the matrix L = lim(n-->infinity) A^n does not exist.
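The divergence is easy to observe: the powers of this matrix are A^n = [[1, n], [0, 1]], so the upper-right entry grows without bound. A quick sketch, assuming NumPy:

import numpy as np

A = np.array([[1, 1],
              [0, 1]])
for n in (1, 10, 100, 1000):
    print(np.linalg.matrix_power(A, n)[0, 1])   # prints 1, 10, 100, 1000: no limit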


Now consider the matrix B:
|0.3  0.35|
|0.7  0.65|



It has the characteristic polynomial f(x) = (x-1)(x + 0.05), so the eigenvalues are x = 1 and x = -0.05, with eigenvectors (-.45, -.89) for x = 1 and (-.71, .71) for x = -0.05.

Thus, B can be diagonalized as B = Q^-1DQ:
|-.45  -.71| |1   0   | |-.75  -.75|
|-.89   .71| |0  -0.05| |-.94   .47|



Applying the limit to the diagonal matrix produces:
|1 0|
|0 0|



And thus, Q^-1 (lim(n-->infinity) D^n) Q produces the final matrix for lim(n-->infinity) B^n:
|1/3  1/3|
|2/3  2/3|
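Numerically, the powers of B settle onto this matrix very quickly, since the powers of the second eigenvalue -0.05 die off geometrically. A quick check, assuming NumPy:

import numpy as np

B = np.array([[0.3, 0.35],
              [0.7, 0.65]])
print(np.linalg.matrix_power(B, 50))
# [[0.33333333 0.33333333]
#  [0.66666667 0.66666667]]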




Applications of Limits- Dynamical Systems
Using diagonalization to calculate the limit of a matrix has widespread applications in a number of different areas where dynamical systems are useful, such as solving systems of differential and difference equations, economics, biology, and graph dynamical systems (such as finite state machines and Markov chains). This section will introduce graph dynamical systems.

First, let's define a dynamical system: a model of a system whose state changes over time. A Graph is a relation amongst objects, where the objects are modeled by vertices (or nodes), and a relation between two objects is denoted by a line connecting their vertices. These lines are known as edges. A Graph Dynamical System models changes in these objects, driven by the relations between them.

Consider a system consisting of three restaurants and 1000 customers. If a customer has a poor experience, he may decide to try another restaurant the next time he eats out. A Graph Dynamical System can be used to model the number of customers who patronize a given restaurant on a given iteration (i.e., at t = 0, Bill's Barbeque has 100 patrons, but at t = 1, it only has 90 patrons).

Let's flesh out this example a little more with the graph below. Here, there are three restaurants: Bill's BBQ, Yard-house Milkshakes, and Love Shack. At t = 0, Bill's BBQ has 500 customers, Yard-house Milkshakes has 350 customers, and Love Shack has 150 customers. The directed edges denote the ratios or percentages of customers that either stay at the current restaurant or move to a different restaurant. So for example, the edge from Bill's BBQ to itself (this is called a loop) denotes that 65% of its current customers at a given time unit will stay at the restaurant on the next iteration of eating out. So at t = 1, 65% of the original 500 customers at Bill's BBQ will eat there again. Similarly, the directed edge from Bill's BBQ to Love Shack denotes that 20% of the customers at Bill's will eat at Love Shack next time, and the directed edge from Love Shack to Bill's BBQ denotes that 20% of Love Shack's customers will eat at Bill's BBQ next time.

[Figure: directed graph of the three restaurants, with loops and directed edges labeled by the transition percentages.]



To model this problem, a cost matrix representation will be used. Each cell models a directed adjacency from one vertex to another. If an adjacency exists (i.e., there is a directed edge from one vertex to another), the cell takes on the value of the cost. So if there is an edge A -> B, that cell gets the cost of transitioning from A -> B. However, if no directed edge exists from B -> A, then that cost is 0. Similarly, the cost from B -> A may be different from the cost from A -> B, as is shown in the graph above.

So let A = Love Shack, B = Bill's BBQ, and C = Yard-house Milkshakes. The cells are denoted M_ij, where i represents the row index and j represents the column index. So M_12 represents the directed edge from A -> B. Consider the cost matrix (call it M) below for the graph:
  |  A   |  B   |  C
A | 0.70 | 0.20 | 0.10
B | 0.20 | 0.65 | 0.15
C | 0.25 | 0.25 | 0.50



Diagonalizing this matrix yields the eigenpairs:
  • x = 1: (-0.577, -0.577, -0.577)
  • x = 0.481: (-0.704, 0.692, 0.159)
  • x = 0.369: (0.031, -0.488, 0.872)


Thus, the matrix M can be written as M = Q^-1DQ:
| -0.577  -0.704   0.031 |   | 1  0      0     |   | -0.727  -0.661  -0.344 |
| -0.577   0.692  -0.488 | * | 0  0.481  0     | * | -0.839   0.518   0.320 |
| -0.577   0.159   0.872 |   | 0  0      0.369 |   | -0.329  -0.532   0.861 |



Taking lim(n-->infinity) M^n produces the steady-state matrix:
|0.42  0.382  0.20|
|0.42  0.382  0.20|
|0.42  0.382  0.20|



Since M_ij gives the fraction of restaurant i's customers who move to restaurant j, the customer counts evolve as a row vector: x_(t+1) = x_t * M. Thus, applying the steady-state matrix to the initial distribution gives the long-run counts: (150, 500, 350) * lim(n-->infinity) M^n ≈ (419.8, 381.7, 198.5). In the long run, about 420 customers eat at Love Shack, about 382 at Bill's BBQ, and about 199 at Yard-house Milkshakes on a given iteration. Since the rows of the steady-state matrix are identical, this holds for any initial split of the 1000 customers.
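Here is a minimal sketch of the whole computation, assuming NumPy; it finds the long-run counts both by exponentiating M directly and by extracting the eigenvector of M^T for eigenvalue 1:

import numpy as np

# M[i][j] = fraction of restaurant i's customers who move to restaurant j
M = np.array([[0.70, 0.20, 0.10],    # A = Love Shack
              [0.20, 0.65, 0.15],    # B = Bill's BBQ
              [0.25, 0.25, 0.50]])   # C = Yard-house Milkshakes
x0 = np.array([150.0, 500.0, 350.0]) # initial customer counts

# Long-run counts: row vector of counts times the steady-state matrix
print(x0 @ np.linalg.matrix_power(M, 100))   # approx [419.8 381.7 198.5]

# Equivalently: the left eigenvector of M for eigenvalue 1, scaled to 1000 customers
w, V = np.linalg.eig(M.T)
pi = np.real(V[:, np.isclose(w, 1)]).ravel()
print(pi / pi.sum() * 1000)                  # same long-run distribution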
