2 Replies - 5644 Views - Last Post: 29 June 2013 - 06:21 AM

#1 macosxnerd101

Linear Algebra Primer: Part 5- Linear Transformations

Posted 15 June 2013 - 07:12 PM


Linear Transformations are at the heart of many applications of Linear Algebra, most notably game and graphics programming. The CSS specification even describes Linear Transformations for many common formatting techniques, like italicizing or scaling fonts. This tutorial will introduce the concept of Linear Transformations, as well as using matrices to represent transformations. In support of these concepts, defining vectors in terms of bases will also be discussed.

Attached is a typeset copy:
Attached File  Linear_Algebra_Part5_Tutorial.pdf (111.31K)
Number of downloads: 137

Defining a Linear Transformation
A Linear Transformation is a function T: V -> W between vector spaces over the same field, with the two following properties:
  • T(x + y) = T(x) + T(y)
  • T(cx) = cT(x) (where c is any scalar in the field)


Thus, to prove a function is a linear transformation, it suffices to show that the two properties above hold. Consider two familiar linear transformations: the derivative and the integral.

Let T: P3(R) -> P3(R) be the transformation T(f(x)) = f'(x). Since a derivative is a limit, (f(x) + g(x))' = f'(x) + g'(x) by the limit rules. Similarly, scalar constants can be pulled out front by the limit rules, so (kf(x))' = kf'(x). Thus, T is a linear transformation. The proof that the integral is a linear transformation works in essentially the same manner.
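
As a quick sanity check (a minimal NumPy sketch, not a substitute for the proof), the two properties can be verified on concrete polynomials in P3(R):

import numpy as np

# Coefficients are listed from the highest power down
f = np.poly1d([1, -2, 0, 5])   # x^3 - 2x^2 + 5
g = np.poly1d([3, 1, 4, 1])    # 3x^3 + x^2 + 4x + 1
k = 7

# Additivity: (f + g)' == f' + g'
print(np.polyder(f + g) == np.polyder(f) + np.polyder(g))  # True

# Homogeneity: (k * f)' == k * f'
print(np.polyder(k * f) == k * np.polyder(f))              # True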

In contrast, it is easy to show that T: R2 -> R2 defined by T(a, b) = (3a, 3b) + (1, 1) is not a linear transformation, as T(a + c, b + d) = (3(a + c), 3(b + d)) + (1, 1), which is not equal to T(a, b) + T(c, d) = (3(a + c), 3(b + d)) + (2, 2). So any transformation that adds a nonzero constant vector to the resultant vector fails to be a linear transformation; indeed, such a transformation does not even map the zero vector to the zero vector.
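
The failure is also easy to see numerically (a minimal NumPy sketch of the transformation above):

import numpy as np

def T(v):
    # Scale by 3, then tack on the constant vector (1, 1)
    return 3 * np.asarray(v) + np.array([1, 1])

x = np.array([1, 2])
y = np.array([3, 4])
print(T(x + y))     # [13 19]
print(T(x) + T(y))  # [14 20] -- the extra (1, 1) breaks additivity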


Vectors as Basis Coordinates
Before proceeding to evaluating linear transformations as matrices, it is first important to understand how to treat vectors as coordinates with respect to bases. First, let's introduce the term ordered basis. An ordered basis simply orders the basis elements, which is important in treating a vector as a coordinate. Consider the (x, y) plane. In (x, y) coordinates, the x-coordinate comes first and the y-coordinate comes second. So an ordered basis for R2 would be {(1, 0), (0, 1)}. This means that an (x, y) coordinate represents the linear combination x * (1, 0) + y * (0, 1).

Similarly, if the mapping of R2 were done using (y, x) coordinates, then the ordered basis would be {(0, 1), (1, 0)}, with the vectors written in terms of y * (0, 1) + x * (1, 0).

Consider one final example with a non-standard basis for R2: B = {(2, 1), (1, 3)}. Now let's write (5, 4) in terms of B: (5, 4) = x * (2, 1) + y * (1, 3).

This equation can also be written in the form:
|2 1| |x| = |5|
|1 3| |y|   |4|



Solving this system (2x + y = 5 and x + 3y = 4) gives x = 11/5 and y = 3/5, so the coordinate vector is 1/5 * (11, 3). That is, (5, 4) is expressed as 1/5 * (11, 3) when using the basis B rather than the standard basis for R2.
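
Finding coordinates with respect to a basis is just solving a linear system, so a linear solver reproduces the result; a minimal NumPy sketch:

import numpy as np

# Columns of B are the basis vectors (2, 1) and (1, 3)
B = np.array([[2, 1],
              [1, 3]])
v = np.array([5, 4])

coords = np.linalg.solve(B, v)
print(coords)  # [2.2 0.6], i.e., 1/5 * (11, 3)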

Linear Transformations as Matrices
Linear Transformations (in the form T(v) = x) can be represented as matrices in the form Av = x. So through matrix multiplication, v is mapped to x. This representation provides a number of tools for analyzing linear transformations with regard to topics like invertibility, isomorphisms, and eigentheory, which will be covered in later tutorials.

The first step, however, is to discuss how to represent a linear transformation as a matrix. For any vector space, there are often infinitely many bases, and the matrix representation depends on the bases used. Specifically, the matrix takes vectors written in terms of the domain's basis and transforms them into vectors written in terms of the codomain's basis. Remember that basis vectors are analogous to coordinate axes, so think of the transformation matrix as mapping points measured in one system of units (say, the U.S. customary system) to points measured in another (say, the metric system). The basis vectors just provide the units of measurement.

This matrix can be determined quite algorithmically. The first step is to transform each basis vector: find T(di) for each vector di in the domain basis D. Next, write each T(di), a vector in the codomain, in terms of the codomain basis; these coordinate vectors become the columns of the matrix. So for bases D (domain) and C (codomain), the transformation matrix is written [T]DC.


Let's look at an example. Consider the transformation T: R2 -> R3 by T(a, b) = (a + 2b, b, 2a + b) with the bases D (Domain) = {(1, 3), (2, 1)} and C (Codomain) = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.

The first step is to transform the first basis vector in D, which gives T(1, 3) = (7, 3, 5). As C is the standard basis for R3, the resultant of the transformation will not need to be rewritten.

The second basis vector in D, when transformed, gives T(2, 1) = (4, 1, 5). Thus, A = [T]DC can be written as follows:
|7  4|
|3  1|
|5  5|



Notice that A * (1, 0) = (7, 3, 5), which is the result of T(1, 3). The reason that (1, 0) is used is that it is written in terms of the basis vectors for D. Thus, (1, 0) is a point along the axes (basis vectors) of D. Another way of saying this is that (1, 0) contains the coefficients of the linear combination 1 * (1, 3) + 0 * (2, 1). This is an important point to remember when both developing and using transformation matrices. When the standard basis is not used, vectors have to be rewritten in terms of "units" of the basis vectors used. Thus, (1, 0) is one unit of (1, 3) and zero units of (2, 1).
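
The whole construction is compact enough to sketch in NumPy (assuming, as in this example, that C is the standard basis, so the rewriting step is trivial):

import numpy as np

def T(v):
    # The transformation from the example: T(a, b) = (a + 2b, b, 2a + b)
    a, b = v
    return np.array([a + 2*b, b, 2*a + b])

D = [np.array([1, 3]), np.array([2, 1])]  # ordered basis for the domain

# Each transformed basis vector becomes a column of A = [T]DC
A = np.column_stack([T(d) for d in D])
print(A)
# [[7 4]
#  [3 1]
#  [5 5]]

# (1, 0) in D-coordinates means 1 unit of (1, 3) and 0 units of (2, 1)
print(A @ np.array([1, 0]))  # [7 3 5], i.e., T(1, 3)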


Composite Transformations
Oftentimes, composite transformations are necessary to achieve a desired result. Throughout prior algebra and calculus classes, it is common to see composite functions such as f(g(x)). The notation is the same with linear transformations. It also follows that since matrices can be used to represent linear transformations, matrix multiplication can be used to achieve composite transformations. Since matrix multiplication is not commutative, the order of the matrices matters in the matrix representation of a composite transformation. The rule is that the inner function's matrix goes to the right of the outer function's matrix. So T(U(v)) is represented by [T][U], where [T] and [U] are the matrix representations of the respective linear transformations.

Consider scaling in two dimensions. The transformation would look something like the following:
|a b| * |x|
|c d|   |y|



What would the final matrix look like if the intent were to scale the x-coordinate by 2 and the y-coordinate by 3? Let's look at these as two distinct transformations: scaling the x-coordinate by 2, and scaling the y-coordinate by 3.

Consider the transformation T(x, y) = (2x, y), which scales the x-coordinate by 2. Using the standard basis for R2 (which is {(1, 0), (0, 1)}), the matrix for T can be derived by plugging in the basis vectors. So:
T(1, 0) = (2, 0)
T(0, 1) = (0, 1)

Thus, the matrix is:
|2 0|
|0 1|



Similarly, the matrix for scaling the y-coordinate by 3 would look like:
|1 0|
|0 3|



Multiplying these two matrices produces a transformation matrix that scales the x-coordinate by 2 and the y-coordinate by 3. Both the intermediate and final matrices are written in terms of the standard basis for R2. So the final matrix is:
|2 0|
|0 3|



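A quick NumPy check of the composition (here the order of the factors happens not to matter, but only because diagonal matrices commute):

import numpy as np

scale_x = np.array([[2, 0],
                    [0, 1]])
scale_y = np.array([[1, 0],
                    [0, 3]])

composite = scale_y @ scale_x   # the inner transformation's matrix on the right
print(composite)
# [[2 0]
#  [0 3]]

print(composite @ np.array([4, 5]))  # [ 8 15]
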
Applying this logic to a more general case, the Aii entry (the ith diagonal entry) of the transformation matrix scales the ith component of the vector. This has clear and direct applications to scaling images.

Another type of transformation is the shear (or skew) transformation. Visually, this transformation changes a square or rectangle into a parallelogram. A horizontal shear U leaves the y-coordinate alone and slides each point horizontally in proportion to its height, so U(x, y) = (x + ky, y) for some shear factor k, giving the matrix:
|1  k|
|0  1|

An application of this transformation would be italicizing fonts. So now linear algebra provides toolsets for scaling and italicizing text.
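
A minimal sketch of a horizontal shear applied to the corners of the unit square (the shear factor k = 0.25 is an arbitrary choice for illustration):

import numpy as np

k = 0.25  # shear factor; larger k gives a stronger slant
shear = np.array([[1, k],
                  [0, 1]])

# Corners of the unit square, one corner per column
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]])

print(shear @ square)
# [[0.   1.   1.25 0.25]
#  [0.   0.   1.   1.  ]]
# The top edge slides right, turning the square into a parallelogram.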

Finally, consider rotation in R2. The rotation matrix is in the form:
|cos(t)  -sin(t)|
|sin(t)   cos(t)|



From basic trig, it is known that x = cos(t) and y = sin(t) on the unit circle. Thus, apply the transformation matrix to (1, 0), the first vector in the standard basis for R2:
|a  b| * |1| = |cos(t)|
|c  d|   |0|   |sin(t)|



This forces a = cos(t) and c = sin(t). Applying the transformation matrix to (0, 1), the second vector in the standard basis for R2, yields b = -sin(t) and d = cos(t). This produces exactly the rotation matrix given above.
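
A short NumPy sketch of this rotation matrix in action, rotating the standard basis vectors by 90 degrees:

import numpy as np

def rotation(t):
    # Rotation by angle t (in radians), counterclockwise
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

R = rotation(np.pi / 2)
print(np.round(R @ np.array([1, 0])))  # [0. 1.]
print(np.round(R @ np.array([0, 1])))  # [-1. 0.]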


Replies To: Linear Algebra Primer: Part 5- Linear Transformations

#2 IceHot

Re: Linear Algebra Primer: Part 5- Linear Transformations

Posted 29 June 2013 - 02:32 AM

Quote

(1, 0) is 1 unit of (1, 3) and 0 units of (2, 1)

So does it then follow that (m, n) is m units of (1, 3) and n units of (2, 1)?

#3 macosxnerd101

Re: Linear Algebra Primer: Part 5- Linear Transformations

Posted 29 June 2013 - 06:21 AM

Yes. That is correct.
