Adjacency lists relate vertices to each other based on the edges that connect them. Each vertex is related to its adjacent vertices through a table and linked-list structure. For example, given the graph:
A ----- B
| \
|  \
C---D
The adjacency lists for it would be:
- Vertex A is adjacent to Vertices: B, C, D
- Vertex B is adjacent to Vertex: A
- Vertex C is adjacent to Vertices: A, D
- Vertex D is adjacent to Vertices: A, C
Adjacency lists provide efficient look-up of a vertex's adjacencies, and they allow developers to ignore non-adjacent vertices entirely. For this reason, adjacency lists are commonly used in implementations of certain graph algorithms, like Dijkstra's algorithm. Hash tables are generally used to store the adjacency lists.
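As a sketch, the lists above can be stored in a hash table, here a Python dict mapping each vertex to the set of its neighbors (the helper name `neighbors` is illustrative, not from any library):

```python
# Adjacency list for the example graph: each vertex maps to a set of
# its neighbors. Sets give O(1) average-time membership checks.
graph = {
    "A": {"B", "C", "D"},
    "B": {"A"},
    "C": {"A", "D"},
    "D": {"A", "C"},
}

def neighbors(g, v):
    """Return the vertices adjacent to v."""
    return g[v]

print(sorted(neighbors(graph, "A")))  # ['B', 'C', 'D']
print("B" in graph["A"])              # True: A and B are adjacent
```

Because only actual neighbors are stored, iterating over `graph[v]` never touches non-adjacent vertices.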
Adjacency matrices are another common way of representing graphs, dealing solely with the vertices. An adjacency matrix sets up the vertices on both the vertical and horizontal axes. Vertices that are connected by an edge are marked with a 1. Non-adjacent vertices are marked with a 0. In simple graphs, a vertex is not adjacent to itself. So using the same example above, the adjacency matrix would be:
    A   B   C   D
A | 0 | 1 | 1 | 1 |
B | 1 | 0 | 0 | 0 |
C | 1 | 0 | 0 | 1 |
D | 1 | 0 | 1 | 0 |
Notice how the adjacency matrix is symmetric, with 0's along the diagonal. A vertex with a loop would be marked with a 1 on the diagonal instead. Directed graphs can also produce an asymmetric adjacency matrix. However, the matrix of a simple, undirected graph is always symmetric with 0's along the diagonal.
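The matrix can be represented directly as a nested list; a small sketch that checks the symmetry and zero-diagonal properties just described (vertex ordering A, B, C, D is an assumption of this example):

```python
# The example graph's adjacency matrix, vertices ordered A, B, C, D.
V = ["A", "B", "C", "D"]
adj = [
    [0, 1, 1, 1],  # A
    [1, 0, 0, 0],  # B
    [1, 0, 0, 1],  # C
    [1, 0, 1, 0],  # D
]

# Undirected simple graph: symmetric, with zeros on the diagonal.
assert all(adj[i][j] == adj[j][i] for i in range(4) for j in range(4))
assert all(adj[i][i] == 0 for i in range(4))

# Adjacency test: are A and C connected by an edge?
print(adj[V.index("A")][V.index("C")])  # 1
```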
Unlike adjacency lists, adjacency matrices are not ideal for path-finding because they relate all the vertices to each other, even non-adjacent vertices. For this reason, there is more checking to do with path-finding, and code can get more cumbersome.
Adjacency matrices also have another useful property: counting the number of paths between two vertices. To accomplish this, it is necessary to go back to linear algebra for matrix exponentiation. When the adjacency matrix is raised to the nth power, the cell (i, j) of the resultant matrix gives the number of paths of length n between vertices i and j. (Strictly speaking, these are walks, since vertices and edges may repeat along the way.)
Given our example matrix above, when raised to the 3rd power, the resultant matrix is:
    A   B   C   D
A | 2 | 3 | 4 | 4 |
B | 3 | 0 | 1 | 1 |
C | 4 | 1 | 2 | 3 |
D | 4 | 1 | 3 | 2 |
So from A to B, there are three paths of length three. If we trace these paths out, we get:
A - B - A - B (there are four vertices noted, but the starting point does not count toward the length)
A - C - A - B
A - D - A - B
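The path-counting idea can be sketched in plain Python with a small matrix-multiply helper (no library assumed; `matmul` and `mat_pow` are illustrative names):

```python
def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(M, p):
    """Raise a square matrix to the p-th power (p >= 1)."""
    result = M
    for _ in range(p - 1):
        result = matmul(result, M)
    return result

# Adjacency matrix of the example graph (A, B, C, D).
adj = [
    [0, 1, 1, 1],  # A
    [1, 0, 0, 0],  # B
    [1, 0, 0, 1],  # C
    [1, 0, 1, 0],  # D
]

cubed = mat_pow(adj, 3)
print(cubed[0][1])  # 3 -> three walks of length 3 from A to B
```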
Distance matrices are a variant of adjacency matrices, recording the distance or weight of the edge between two vertices instead of simply noting whether or not an adjacency exists. This makes distance matrices better suited to path-finding algorithms than adjacency matrices, though still not as well suited as adjacency lists.
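A sketch of a distance matrix for a weighted version of the example graph. The original graph is unweighted, so the weights below are purely illustrative; a common convention, used here, is to mark non-adjacent pairs with infinity:

```python
# Distance matrix with invented weights for the example graph.
# INF marks "no edge"; the diagonal is 0 (distance from a vertex to itself).
INF = float("inf")
V = ["A", "B", "C", "D"]
dist = [
    #  A    B    C    D
    [  0,   5,   2,   4],  # A
    [  5,   0, INF, INF],  # B
    [  2, INF,   0,   1],  # C
    [  4, INF,   1,   0],  # D
]

print(dist[V.index("A")][V.index("C")])  # 2   -> edge A-C has weight 2
print(dist[V.index("B")][V.index("C")])  # inf -> B and C are not adjacent
```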
Incidence lists are similar to adjacency lists. However, rather than relating adjacent vertices, incidence lists relate vertices to edges. Because they provide more direct access to edges than adjacency lists and matrices do, incidence lists are better suited to constructs such as Euler circuits, which are circuits that traverse every edge exactly once and start and end at the same vertex.
Given our example graph again:
    1
A ----- B
| \
2|  \ 3
C---D
  4
Our incidence lists would look like:
- Vertex A is incident to Edges: 1, 2, 3
- Vertex B is incident to Edge: 1
- Vertex C is incident to Edges: 2, 4
- Vertex D is incident to Edges: 3, 4
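The incidence lists above can be sketched as a dict mapping each vertex to the set of edge labels it touches:

```python
# Incidence list for the numbered example graph: vertex -> incident edges.
incidence = {
    "A": {1, 2, 3},
    "B": {1},
    "C": {2, 4},
    "D": {3, 4},
}

# Direct access to edges makes edge-centric questions easy; for example,
# a vertex's degree is simply the number of edges incident to it.
degree = {v: len(edges) for v, edges in incidence.items()}
print(degree["A"])  # 3
```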
Incidence matrices are similar to adjacency matrices, relating vertices to edges like incidence lists. In the incidence matrix, the rows correspond to the vertices and the columns correspond to the edges. Similar to an adjacency matrix, a 0 indicates that a vertex is not incident to an edge and a 1 indicates incidence.
Using the example graph again:
    1
A ----- B
| \
2|  \ 3
C---D
  4
The incidence matrix would look like:
    1   2   3   4
A | 1 | 1 | 1 | 0 |
B | 1 | 0 | 0 | 0 |
C | 0 | 1 | 0 | 1 |
D | 0 | 0 | 1 | 1 |
This specific incidence matrix is for a simple graph, meaning each edge is incident to exactly two vertices. In the case of a hypergraph, where an edge may be incident to more than two vertices, a column may have more than two 1's marked.
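A small sketch checking the simple-graph property just mentioned: every column of the incidence matrix sums to exactly 2, since each edge touches exactly two vertices.

```python
# Incidence matrix of the example graph: rows are vertices A-D,
# columns are edges 1-4.
inc = [
    [1, 1, 1, 0],  # A
    [1, 0, 0, 0],  # B
    [0, 1, 0, 1],  # C
    [0, 0, 1, 1],  # D
]

# In a simple graph, each column (edge) sums to 2.
col_sums = [sum(inc[row][col] for row in range(4)) for col in range(4)]
print(col_sums)  # [2, 2, 2, 2]
```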
Incidence matrices can also be transformed into adjacency matrices: specifically, into the adjacency matrix of the graph's line graph, in which two edges count as adjacent when they share a vertex. Going back to linear algebra again, multiply the transpose of the incidence matrix by the incidence matrix itself. The product is a square, edge-by-edge matrix, and in a simple graph every diagonal entry equals 2 because each edge is incident to exactly two vertices. Subtracting 2 times the identity matrix of the matching dimension therefore zeroes the diagonal, leaving the line graph's adjacency matrix. As a formula:

Adjacency(line graph) = Incidenceᵀ × Incidence − 2 × Identity
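A sketch verifying this formula on the example incidence matrix, in plain Python with no library assumed (`transpose` and `matmul` are illustrative helpers):

```python
def transpose(M):
    """Transpose a matrix given as a list of rows."""
    return [list(col) for col in zip(*M)]

def matmul(X, Y):
    """Multiply an (n x m) matrix by an (m x p) matrix."""
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Incidence matrix: rows are vertices A-D, columns are edges 1-4.
inc = [
    [1, 1, 1, 0],  # A touches edges 1, 2, 3
    [1, 0, 0, 0],  # B touches edge 1
    [0, 1, 0, 1],  # C touches edges 2, 4
    [0, 0, 1, 1],  # D touches edges 3, 4
]

# Incidence^T * Incidence is an edge-by-edge matrix with 2's on the
# diagonal; subtracting 2 * Identity leaves the line graph's adjacency.
prod = matmul(transpose(inc), inc)
line_adj = [[prod[i][j] - (2 if i == j else 0) for j in range(4)]
            for i in range(4)]
print(line_adj)
# [[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]]
```

Reading the result: edge 1 (A-B) is adjacent to edges 2 and 3 (they share vertex A) but not edge 4 (C-D), with which it shares no vertex.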