
#1 macosxnerd101

Linear Algebra Primer: Part 2- Linear Independence and Matroids

Posted 25 May 2013 - 06:51 PM

This tutorial will introduce the concept of Linear Independence. It will also briefly introduce Matroid Theory, which branches into the realms of graph theory and abstract algebra as well.

In Graph Theory, Abstract Algebra, and Linear Algebra, there is a concept known as (in)dependence. The idea behind independence is whether a structure is acyclic. In Graph Theory, that is pretty easy to see. Any forest or tree is independent, as it is acyclic. Another way to look at independence from a Graph Theory perspective is that there exists a unique path between any two vertices. If there is a cycle, there are at least two paths between some pair of vertices. Consider the smallest cycle, C3, with vertices labeled A, B, and C. From vertex A to vertex C, there are two paths: A-C and A-B-C. There are likewise two paths from A to B and from B to C. Consider the examples below of independent and dependent graphs. It is clear that the first graph is independent, as it has no cycles. The second graph is dependent, as it contains a cycle as a subgraph. The last graph is a cycle itself, so it is clearly dependent.

[Attached image: examples of independent and dependent graphs]
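The graph-theoretic notion of independence above can be checked mechanically: an edge set is independent exactly when it contains no cycle. Here is a minimal sketch (not part of the original tutorial; the function names are illustrative) using the union-find technique, which catches the first edge that would close a cycle:

```python
def find(parent, v):
    """Follow parent links to the root of v's component."""
    while parent[v] != v:
        parent[v] = parent[parent[v]]  # path halving
        v = parent[v]
    return v

def is_acyclic(vertices, edges):
    """Return True if the edge set contains no cycle (is independent)."""
    parent = {v: v for v in vertices}
    for a, b in edges:
        ra, rb = find(parent, a), find(parent, b)
        if ra == rb:          # a and b are already connected, so
            return False      # adding edge (a, b) closes a cycle
        parent[ra] = rb       # merge the two components
    return True

# The path A-B, B-C is a tree, hence independent; the triangle C3 is not.
print(is_acyclic("ABC", [("A", "B"), ("B", "C")]))              # True
print(is_acyclic("ABC", [("A", "B"), ("B", "C"), ("A", "C")]))  # False
```

On the C3 example from the text, the first two edges build one connected component, so the third edge A-C is rejected as the edge completing the cycle.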

Let's now discuss Linear Independence, the form of independence that pertains to Linear Algebra. A set of vectors S is defined to be linearly independent when no vector in the set can be formed from a linear combination of the other vectors. Another way to describe linear independence is in terms of span. If S is linearly independent, then every vector in span(S) (the span of S is the set of all vectors formed from linear combinations of the vectors in S) is formed from a unique linear combination of the vectors in S. A set of vectors that is not linearly independent is called linearly dependent.

This concept of linear independence is a little abstract, so let's decompose it. Consider the following vector sets:
  • S = {(1, 2, 3), (4, 5, 6)}: Here, S is linearly independent. There is no way to form (4, 5, 6) from multiples of (1, 2, 3).
  • S = {(1, 2, 3), (2, 4, 6), (3, 4, 5)}: Here S is linearly dependent. It is clear that (2, 4, 6) = 2(1, 2, 3) + 0(3, 4, 5), so a vector in S is formed from a linear combination of the other two vectors.
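The dependence in the second example can be verified numerically. This small sketch (not part of the original tutorial; `linear_combination` is an illustrative helper name) computes 2(1, 2, 3) + 0(3, 4, 5) componentwise:

```python
def linear_combination(coeffs, vectors):
    """Return the componentwise sum of coeff * vector."""
    return tuple(sum(c * x for c, x in zip(coeffs, col))
                 for col in zip(*vectors))

# 2*(1, 2, 3) + 0*(3, 4, 5) reproduces the third vector of S:
v = linear_combination([2, 0], [(1, 2, 3), (3, 4, 5)])
print(v)  # (2, 4, 6), so S = {(1,2,3), (2,4,6), (3,4,5)} is dependent
```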

Let's back up and revisit the graph theory intuition. A structure is graphically independent if there exists a unique path for all pairs of vertices. Regarding linear independence, if there is a unique linear combination for each vector in span(S), that could be thought of as a unique path. Similarly, if S is linearly dependent, then the linear combination to form an arbitrary vector v in S could be substituted in for v to create two linear combinations. Thus, there are two paths to the same end result.

It is easy to determine independence for small sets. It gets trickier to eyeball and construct linear combinations for larger vector sets, especially when the Vector Space has more than 2-3 dimensions. Let's talk about some heuristics to use:
  • The Multiples Test: If there are two vectors in the set, a and b, such that ka = b, for some constant k, then the set of vectors is linearly dependent.
  • Determinant Test: If the Vector Space is of dimension n and |S| = n, then the determinant test can be used. Consider a matrix M whose column vectors are the vectors in S. The set S is linearly independent if and only if det(M) != 0.
  • Dimension Test: If the Vector Space is of dimension n and |S| > n, then S is linearly dependent. Let's explore the intuition behind this a little more. If there are n dimensions, then there are n independent coordinate axes. So a subset of independent axes (or vectors) is independent. However, any extra axes produce a dependence. It isn't necessary to have two y-axes when only one will do.
  • Linear Combinations Test: When all else fails, this is a good test to fall back on. Consider a matrix M whose column vectors are the vectors in S, and solve the equation Mx = 0, where x and 0 are vectors. Row reduction can help expedite the process. If the only solution is the zero vector x = 0, then S is linearly independent. If any nonzero solution exists, then S is linearly dependent.
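The tests above boil down to one computation: S is linearly independent exactly when the matrix of its vectors has rank |S|. Here is a minimal sketch (not from the original tutorial; the helper names are illustrative) using plain Gaussian elimination over exact `Fraction`s to avoid floating-point rounding:

```python
from fractions import Fraction

def rank(vectors):
    """Rank of the matrix whose rows are the given vectors."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0  # number of pivots found so far
    for col in range(len(rows[0]) if rows else 0):
        # Find a row at or below r with a nonzero entry in this column.
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # Eliminate this column from every other row.
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def is_independent(vectors):
    return rank(vectors) == len(vectors)

print(is_independent([(1, 2, 3), (4, 5, 6)]))             # True
print(is_independent([(1, 2, 3), (2, 4, 6), (3, 4, 5)]))  # False
```

Note how this subsumes the other heuristics: the Dimension Test holds because rank can never exceed the number of coordinates, and for a square matrix, full rank is equivalent to det(M) != 0.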

Introduction to Matroid Theory
Matroids are structures that encapsulate this concept of independence found in graph theory, abstract algebra, and linear algebra. A Matroid is constructed from a ground set G. From this ground set, a second set I is constructed that contains all the independent subsets of the ground set. Thus, if the ground set itself is independent, then I = P(G), where P(G) is the power set of G.
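This construction can be made concrete for the graphic matroid, where the ground set G is a graph's edge set and I collects every acyclic subset. The sketch below (not part of the original tutorial; names are illustrative) enumerates I for the triangle C3:

```python
from itertools import combinations

def is_acyclic(edges):
    """True if the edge set forms no cycle, via union-find."""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            v = parent[v]
        return v
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:       # edge closes a cycle
            return False
        parent[ra] = rb
    return True

G = [("A", "B"), ("B", "C"), ("A", "C")]   # ground set: edges of C3
I = [s for k in range(len(G) + 1)
       for s in combinations(G, k)
       if is_acyclic(s)]
print(len(I))  # 7: every subset except the full triangle is independent
```

Had G been independent (say, the edges of a tree), I would contain all 2^|G| subsets, i.e. I = P(G), matching the remark above.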

Matroids allow for isomorphisms between Linear Algebra and Graph Theory. When dealing with Graphs, Matroids are constructed from the edge sets. Thus, the intuition developed above regarding graph theory and independence is more than just intuition. Matroids are a tool to answer graph questions using linear algebra, and linear algebra questions using graph theory. Some of these applications include path finding, matchings, scheduling, spanning trees, and planarity.

Matroids have three fundamental properties, or axioms. The first property states that no proper subset of a circuit is a circuit; equivalently, every subset of an independent set is independent. Think about it this way. If a set of vectors is linearly independent, then no vector in the set can be formed from a linear combination of the other vectors, and removing a vector from the set won't change this fact. From a graph theory perspective, a circuit is formed by adding edges, not removing them. Thus, the first property makes sense.

The second property states that the empty set is independent, which follows from the first property. The empty set has no elements and thus contains no circuits.

The final property states that all maximally independent subsets of the ground set have the same cardinality. Consider the cycle graph Cn. Clearly, removing one edge from the graph leaves a spanning tree (a path), which is independent. Any arbitrary edge can be removed for the same result- a maximally independent subgraph with n-1 edges. The same argument can be made for a set of vectors.
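This third property can be checked by brute force on a small example. The sketch below (not from the original tutorial; names are illustrative) enumerates every acyclic edge subset of C4 and confirms that the maximal ones, i.e. its spanning trees, all have 3 edges:

```python
from itertools import combinations

def is_acyclic(edges):
    """True if the edge set forms no cycle, via union-find."""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            v = parent[v]
        return v
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
    return True

C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
independent = [s for k in range(len(C4) + 1)
                 for s in combinations(C4, k) if is_acyclic(s)]
# A set is maximal if no independent set strictly contains it.
maximal = [s for s in independent
           if not any(set(s) < set(t) for t in independent)]
print({len(s) for s in maximal})  # {3}: every maximal set has 3 edges
```

There are four such maximal sets, one per removable edge, and each is a spanning tree of C4, exactly as the argument above predicts.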

I hope that this tutorial has been helpful in introducing the concept of Linear Independence. The introduction of Matroids is but the beginning of where Linear Algebra overlaps with Graph Theory. There will be future tutorials on Algebraic Graph Theory, which utilizes Linear Algebra to analyze graphs.
