#1 macosxnerd101

Linear Algebra Primer: Part 3- Basis, Span, and Dimension

Posted 31 May 2013 - 05:05 PM


This tutorial will focus on the concepts of span, basis, and dimension. Graph theory analogues will also be discussed in the context of basis.

Attached is a LaTeX typeset version for those who would prefer to view this tutorial in that format.
Attached File  Linear_Algebra_Part3_Tutorial.pdf (111.13K)

Span
In my last tutorial, the concept of span was briefly introduced. Abstractly, given a set of vectors S, span(S) is the set of all vectors formed from linear combinations of the vectors in S. There are two predominant questions related to the span of a vector set S:
  • What is the span of S?
  • Does S span the vector space V?


The span of S depends on the linearly independent vectors in S. For example, if S = {(1, 0), (0, 1)}, then span(S) = R2: the two vectors are linearly independent, and every vector (x, y) in R2 can be written as x(1, 0) + y(0, 1).

Similarly, if S = {(1, 0, 0), (0, 0, 1)}, then span(S) is the x-z plane in R3. The vectors in S point along the x and z axes.
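
As a quick computational aside, here is a minimal sketch in Python with NumPy (the helper name in_span is illustrative and not part of this series): a vector b lies in span(S) exactly when appending b as a column to the matrix whose columns are the vectors of S does not increase its rank.

import numpy as np

def in_span(vectors, b):
    # b lies in span(vectors) exactly when appending b as a column
    # does not increase the rank of the matrix formed by the vectors.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

S = [(1, 0, 0), (0, 0, 1)]
print(in_span(S, np.array([4, 0, -2])))  # True: 4(1,0,0) - 2(0,0,1) lies in the x-z plane
print(in_span(S, np.array([4, 1, -2])))  # False: it has a y-component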

What about span(P2(R)), which is the span of the quadratics (the polynomials of degree at most 2)? Since P2(R) is itself the Vector Space, span(P2(R)) = P2(R). Because linear combinations of vectors in a Vector Space belong to that Vector Space, span(V) = V for any Vector Space V.

Span is determined in a similar way to Linear Independence. However, one of the conditions for the Linear Independence of a set of vectors is that the set contain at most n vectors when the dimension of the Vector Space is n. In order for a set of vectors to span the Vector Space, the set must contain at least n linearly independent vectors.

Let's unpack this intuition a little bit. Think of linearly independent vectors as coordinate axes. In R2, there are two coordinate axes: the x and y-axes. Thus, two independent vectors span the space. Adding more vectors alongside those coordinate axes won't change the fact that the axes already span the space. Conversely, if we start with a single vector in R2, adding a vector that depends on it won't make the set span. If only the x-axis is given, there is no way to obtain the y-coordinates, so the x-axis alone cannot span the space. A more geometric argument is that R2 is a plane, and each axis is a line. A line is a subspace of the plane, but it does not span the plane.
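
To make the "at least n independent vectors" condition concrete, here is a small sketch in Python with NumPy (the helper name spans is just for illustration, not from this series) that tests whether a set of vectors spans R^n by checking the rank of the matrix they form.

import numpy as np

def spans(vectors, n):
    # The vectors span R^n exactly when the matrix whose columns are
    # the vectors has rank n, i.e., contains n independent columns.
    return np.linalg.matrix_rank(np.column_stack(vectors)) == n

print(spans([(1, 0), (0, 1), (3, 7)], 2))  # True: extra vectors don't break spanning
print(spans([(1, 0)], 2))                  # False: a single axis can't span the plane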


Basis
A basis is a minimal set of vectors that spans a Vector Space; equivalently, it is a maximal linearly independent set in the space. From the last tutorial, if the dimension of the Vector Space is n, then a vector set can have no more than n vectors and still be linearly independent. Similarly, to span the Vector Space, that set of vectors must have at least n vectors. Thus, it makes sense that a basis has exactly n vectors. Looking at this from a Matroid perspective, it makes sense that all bases have the same cardinality. The third axiom of Matroids states that all maximally independent subsets of the ground set have the same cardinality. Thus, bases are exactly the maximally independent subsets of a Vector Space.

Bases can also be thought of by analogy with spanning trees. A spanning tree on a graph is an acyclic (independent) subgraph that contains all the vertices in the graph and is connected. Not all spanning trees are isomorphic; in fact, there are usually multiple spanning trees for a given graph. Let's look at some examples. Consider the graphs below, with G as the original graph. The forest F, which is a set of trees, is clearly independent, as there are no cycles. However, since F is not connected, it cannot be thought of as a basis or a spanning tree on the graph. Both T1 and T2 are spanning trees, but they are different. Notice how there is an edge used in T1 that is not present in T2; and similarly, there is an edge used in T2 that is not used in T1. To get from T1 to T2, these edges are swapped. This is the premise for the basis substitution property.
Attached Image
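
To tie the analogy to code, here is a small sketch in plain Python (the example graph is made up, not the one pictured above) that grows a spanning tree by depth-first search, adding an edge only when it reaches a new vertex, much like adding a vector only when it is independent of the vectors already chosen.

def spanning_tree(graph, start):
    # graph: dict mapping each vertex to the list of its neighbors.
    # Returns the edges of one spanning tree of the component containing start.
    visited = {start}
    tree_edges = []
    stack = [start]
    while stack:
        u = stack.pop()
        for v in graph[u]:
            if v not in visited:        # edge (u, v) reaches a new vertex,
                visited.add(v)          # so adding it cannot create a cycle
                tree_edges.append((u, v))
                stack.append(v)
    return tree_edges

# A 4-cycle with a chord: several different spanning trees exist.
G = {1: [2, 4], 2: [1, 3, 4], 3: [2, 4], 4: [1, 2, 3]}
print(spanning_tree(G, 1))  # one spanning tree; a different traversal can give another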

Before unpacking the substitution property, let's determine the conditions that need to hold to determine if a vector set is a basis for the Vector Space V.
  • Cardinality-Dimension Test: If the dimension of V is n, then the set must have exactly n vectors. If it does not have n vectors, then it is not a basis.
  • Linear Independence: The multiples test is a quick way to catch obvious dependence: if one vector is a multiple of another, the set is dependent. If the multiples test finds no such pair, then the determinant test should be used. Remember from the last tutorial to construct the matrix whose column vectors are the vectors in S; the vector set is linearly independent if and only if the determinant of that matrix is non-zero. A code sketch of both tests follows this list.
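
Here is a minimal sketch of both tests in Python with NumPy for vectors in R^n (the helper name is_basis is illustrative, not from this series); the last two calls preview the substitution examples discussed below.

import numpy as np

def is_basis(vectors, n):
    # Cardinality-Dimension Test: exactly n vectors are required.
    if len(vectors) != n:
        return False
    # Determinant Test: the vectors form the columns of a square matrix;
    # they are linearly independent iff the determinant is non-zero.
    return not np.isclose(np.linalg.det(np.column_stack(vectors)), 0)

print(is_basis([(1, 0), (0, 1)], 2))  # True: the standard basis
print(is_basis([(1, 0), (3, 7)], 2))  # True: det = 7
print(is_basis([(0, 5), (0, 1)], 2))  # False: one vector is a multiple of the other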


Now let's discuss the substitution principle for bases a little more. Essentially, as long as the vector set retains the same number of linearly independent elements, vectors can be substituted, and the vector set will remain a basis. Consider the standard basis for R2: S = {(1, 0), (0, 1)}. It's easy to see that this is linearly independent, and the vectors represent the standard x and y-axes. Now let's replace (0, 1) with (3, 7), giving S = {(1, 0), (3, 7)}. By the determinant test, (1)(7) - (0)(3) = 7, which is non-zero, so S is still linearly independent and still has two vectors; thus, it is still a basis for R2.

Consider again S = {(1, 0), (0, 1)}. Replacing (1, 0) with (0, 5) is not an acceptable substitution if S is to remain a basis, as the resulting set {(0, 5), (0, 1)} is clearly linearly dependent by the multiples test: (0, 5) = 5(0, 1).


Dimension
So far, dimension has been referenced a lot, but not clearly defined. The dimension of a Vector Space, denoted dim(V), is the number of vectors in a basis for V. For the most part, dimension is pretty easy to see. We have dim(R2) = 2, and dim(Pn(F)) = n + 1, as a polynomial of degree at most n has n + 1 coefficients. Similarly, the space of m x n Matrices has dimension mn. Think about it this way: each vector in the standard basis has a single 1, and the rest of its elements are 0's, so there needs to be one basis vector with the 1 in each position. Thus, it is clear why {(1, 0), (0, 1)} is the standard basis for R2, and it is now easier to see why the m x n Matrices have dimension mn.
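
As a concrete count, here is a sketch in Python with NumPy (using 2 x 3 matrices as an assumed example): the standard basis for the m x n Matrices consists of the matrices with a single 1 and 0's everywhere else, and there are mn of them.

import numpy as np

m, n = 2, 3
standard_basis = []
for i in range(m):
    for j in range(n):
        E = np.zeros((m, n))
        E[i, j] = 1          # a single 1 in position (i, j), 0's elsewhere
        standard_basis.append(E)

print(len(standard_basis))  # 6 = m * n, so the 2 x 3 Matrices have dimension mn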
