The Representability of Graphic Matroids as Vector Matroids

Edward Wibowo

Matroids are mathematical structures that encapsulate the property of independence. Using a set of defining axioms, matroids can describe concepts across both linear algebra and graph theory. Due to the abstract nature of matroids, it is possible to equivalently describe the matroid of a graph as a matroid of vectors. The main theorem of this paper is that any graphic matroid is representable as a vector matroid over any field $\mathbb{F}$. We then use this theorem to prove the total unimodularity of a graph's incidence matrix.

1. Introduction

There are many equivalent ways to define a matroid. One such way is by defining it as a pair of sets $(E, \mathcal{I})$ where $E$ is a finite set known as the ground set and $\mathcal{I}$ is a collection of independent subsets of $E$. Additionally, the following properties must be satisfied [2]:

  1. The empty set is independent: $\emptyset \in \mathcal{I}$.
  2. Hereditary: any subset of an independent set is also independent. More formally, for all $S^\prime \subset S \subset E$, if $S \in \mathcal{I}$, then $S^\prime \in \mathcal{I}$.
  3. Augmentation: for all $S, S^\prime \in \mathcal{I}$ where $|S| > |S^\prime|$, there exists $x \in S \backslash S^\prime$ such that $S^\prime \cup \{x\} \in \mathcal{I}$.
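
Because the ground set is finite, these axioms can be checked by brute force on small examples. The following Python sketch (a hypothetical helper written against the definitions above, not drawn from any library) tests all three axioms directly:

```python
from itertools import combinations

def is_matroid(E, I):
    """Brute-force check of the three matroid axioms.

    E: finite ground set; I: collection of independent sets as frozensets.
    """
    I = set(I)
    # Every independent set must be a subset of the ground set.
    if any(not S <= frozenset(E) for S in I):
        return False
    # Axiom 1: the empty set is independent.
    if frozenset() not in I:
        return False
    # Axiom 2 (hereditary): every subset of an independent set is independent.
    for S in I:
        if any(frozenset(T) not in I
               for k in range(len(S)) for T in combinations(S, k)):
            return False
    # Axiom 3 (augmentation): a larger independent set lends some element.
    for S in I:
        for S2 in I:
            if len(S) > len(S2) and not any(S2 | {x} in I for x in S - S2):
                return False
    return True

# Example: the uniform matroid U_{2,3}, whose independent sets are the
# subsets of {1, 2, 3} with at most two elements.
E = {1, 2, 3}
I = [frozenset(T) for k in range(3) for T in combinations(E, k)]
print(is_matroid(E, I))  # True
```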

Using this definition, a matroid can be defined relative to different fields of mathematics. For example, borrowing concepts from graph theory, we define a graphic matroid from an undirected graph, with the independent sets being the forests of the graph (forests will be defined later). Alternatively, a vector matroid can be defined from linear algebra: we can take a matrix $A$ and let the independent sets be the sets of columns of $A$ that are linearly independent.

We say that a matroid $M$ is $\mathbb{F}$-representable if it is isomorphic to a vector matroid $M(A)$ where $A$ is a matrix over a field $\mathbb{F}$. In this paper, we will prove that every graphic matroid is representable over any field $\mathbb{F}$.

Theorem A. All graphic matroids are representable over any field $\mathbb{F}$.

Example 1.1. To motivate the main theorem, we start with an example of a graphic matroid and its corresponding vector matroid representation. We denote the graphic matroid of $G$ as $M(G)$. Furthermore, let $D(G)$ be an arbitrary orientation of $G$. Both $G$ and $D(G)$ are illustrated below in Figure 1.

Figure 1. A graph $G$ and a corresponding arbitrary orientation $D(G)$.

Based on $D(G)$, we can define a matrix $A$ where each column encodes an edge and each row encodes a vertex. The method by which we define this matrix will be given later in Definition 2.5.

A = \begin{bmatrix} 1 & 1 & 0 & 0 & 1 & -1 \\ -1 & -1 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}.

It can be observed that any linearly independent subset of columns of $A$ corresponds to a subset of edges that forms a forest of $G$, and vice versa. So, each independent set in the matroid of $A$ (written $M(A)$) has a corresponding independent set in $M(G)$. Hence, we say that $M(G)$ is isomorphic to $M(A)$.
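
This correspondence can be verified computationally on Example 1.1. In the sketch below, the edge list is read off from the columns of $A$ as (tail, head) pairs; placing the loop at vertex 3 is our own assumption, since a loop's column is zero no matter which vertex carries it:

```python
from itertools import combinations
import numpy as np

# The incidence matrix A from Example 1.1 and its edge list, read off
# column by column as (tail, head) pairs; the all-zero fourth column is
# the loop, placed here at vertex 3 (an assumption).
A = np.array([
    [ 1,  1,  0, 0,  1, -1],
    [-1, -1, -1, 0,  0,  0],
    [ 0,  0,  0, 0, -1,  0],
    [ 0,  0,  0, 0,  0,  1],
    [ 0,  0,  1, 0,  0,  0],
])
edges = [(1, 2), (1, 2), (5, 2), (3, 3), (1, 3), (4, 1)]

def is_forest(edge_list):
    """An edge list is a forest iff no edge closes a cycle (union-find)."""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in edge_list:
        ru, rv = find(u), find(v)
        if ru == rv:              # u and v already connected: a cycle forms
            return False
        parent[ru] = rv
    return True

# Linearly independent column subsets coincide exactly with forests.
for k in range(len(edges) + 1):
    for cols in combinations(range(len(edges)), k):
        independent = np.linalg.matrix_rank(A[:, list(cols)]) == k
        assert independent == is_forest([edges[i] for i in cols])
```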

Matroids were independently discovered by Takeo Nakasawa and Hassler Whitney [5]. In 1935, Whitney introduced the concept of a matroid to describe independence across both linear algebra and graph theory. Whitney discovered that describing graphs and matrices by their hereditary and augmentative properties results in an object that can be interpreted both geometrically and algebraically. Through this, Whitney related the concepts of bases, ranks, and circuits to both linear algebra and graph theory, effectively uniting the two mathematical fields [5].

In 1958, Tutte classified matroids according to whether they met certain conditions. Namely, Tutte related the condition of a matroid being graphic to that of a matroid being regular, meaning it is representable over any field $\mathbb{F}$ [4].

This paper is structured to build up towards the main Theorem A and explore its applications. Section 2 introduces the terminology, concepts, and propositions needed to prove the main theorem; in particular, it is verified that both vector matroids and graphic matroids are indeed matroids. In Section 3, Theorem A is proved. Section 4 covers a corollary of Theorem A: the incidence matrix of a graph is totally unimodular. Lastly, Section 5 provides questions to motivate additional study of matroids.

2. Background

This paper assumes the reader is familiar with fundamental linear algebra concepts. Namely, the paper requires prerequisite knowledge of linear independence and matrices as defined in [1, 2.17 and 3.30].

Proposition 2.1. Let $A$ be a matrix. If we set the ground set $E$ as the set of columns of $A$ and $\mathcal{I}$ as the set of linearly independent column subsets, then $M(A) = (E, \mathcal{I})$ is a matroid. A matroid defined in this manner is known as a vector matroid.

Proof. We can prove that $M(A)$ is a matroid by showing that it satisfies the three properties outlined in the introduction. Throughout this proof, let $A$ be a matrix over an arbitrary field $\mathbb{F}$.

Firstly, we know that $\emptyset \in \mathcal{I}$ because the empty set is vacuously linearly independent. Thus, the first property is satisfied.

Secondly, the hereditary property is satisfied if any subset of a linearly independent set is also linearly independent. Let $S \in \mathcal{I}$ and $S^\prime \subset S$. We can write $S$ and $S^\prime$ as follows:

S = \{v_1, \dots, v_k, \dots, v_n\} \\ S^\prime = \{v_1, \dots, v_k\}.

Next, let $\lambda_i \in \mathbb{F}$ be scalars such that a linear combination of $S^\prime$ equals $0$:

\lambda_1 v_1 + \cdots + \lambda_k v_k = 0 \\ \underbrace{\lambda_1 v_1 + \cdots + \lambda_k v_k + 0 \cdot v_{k + 1} + \cdots + 0 \cdot v_n}_{\text{linear combination of $S$}} = 0.

Since $S \in \mathcal{I}$, $S$ is linearly independent, so if a linear combination of $S$ equals $0$, then all of its scalars must equal $0$. Thus, it must be the case that $\lambda_1 = \cdots = \lambda_k = 0$. Hence, $S^\prime$ is linearly independent, which means $S^\prime \in \mathcal{I}$. So, if $S^\prime \subset S \subset E$ and $S \in \mathcal{I}$, then $S^\prime \in \mathcal{I}$. Therefore, the hereditary property is satisfied.

Thirdly and lastly, we can show that the augmentation property is satisfied using proof by contradiction. Let $S$ and $S^\prime$ be linearly independent subsets of the columns of $A$ such that $S, S^\prime \in \mathcal{I}$, and let $|S| > |S^\prime|$. Let $V$ be the vector space spanned by $S \cup S^\prime$ over the field $\mathbb{F}$; since $S$ is linearly independent, $\dim V \geq |S|$. Assume that for all $x \in S \backslash S^\prime$, $S^\prime \cup \{x\}$ is linearly dependent. Then every $x \in S \backslash S^\prime$ lies in the span of $S^\prime$, so $V$ is spanned by $S^\prime$ alone, which means $\dim V \leq |S^\prime|$. So, we have the inequality

|S| \leq \dim V \leq |S^\prime|.

However, we reach a contradiction because $|S| > |S^\prime|$, so the above inequality cannot hold. Thus, for $S, S^\prime \in \mathcal{I}$ with $|S| > |S^\prime|$, there must exist an $x \in S \backslash S^\prime$ such that $S^\prime \cup \{x\} \in \mathcal{I}$. Hence, the augmentation property holds.

Therefore, since $\emptyset \in \mathcal{I}$ and both the hereditary and augmentation properties are satisfied, $M(A)$ is a matroid.

Furthermore, since this paper deals with graphic matroids, it is important to define the fundamental graph theory concepts needed to understand how a graphic matroid operates. A graph is a structure consisting of vertices and edges, where each edge connects a pair of vertices. A graph may be either directed or undirected depending on whether the edges of the graph have an orientation. It should be noted that graphic matroids strictly represent undirected graphs; however, an arbitrary orientation of a graph may be considered to construct its corresponding vector matroid representation. Furthermore, a cycle is a sequence of distinct edges that connects some vertex back to itself. A tree is an undirected graph in which any two vertices are connected by exactly one path, meaning that it is both connected and acyclic.

Definition 2.2. A forest is a disjoint union of trees; each connected component of the forest is a tree.

Example 2.3. Suppose $G$ is the connected graph illustrated in Figure 2.

Figure 2. Illustration of the connected graph $G$.

Now, we can define $F$, an arbitrary forest of $G$, illustrated in Figure 3.

Figure 3. Illustration of the forest $F$.

We can observe that $F$ is a forest containing 3 trees:

  1. a tree connecting vertices 1, 2, and 3,
  2. a tree connecting vertices 4 and 6,
  3. and a tree consisting of just a single remaining vertex.

Proposition 2.4. Let $G$ be a graph. Let the ground set $E$ be the set of edges of $G$ and let $\mathcal{I}$ be the set of subsets of $E$ that form a forest of $G$. Then $M(G) = (E, \mathcal{I})$ is a matroid. A matroid defined in this manner is known as a graphic matroid.

Proof. Similar to how we proved a vector matroid is indeed a matroid, we can prove that $M(G)$ is a matroid by showing that it too satisfies the three properties.

Firstly, we know that $\emptyset \in \mathcal{I}$ because a graph consisting of no edges contains no cycles.

Secondly, we can show that the hereditary property is satisfied by $M(G)$. Let $S^\prime \subset S \subset E$ and $S \in \mathcal{I}$. Additionally, let $|S^\prime|$ and $|S|$ denote the number of edges in $S^\prime$ and $S$ respectively. We can consider the following two cases:

  1. If $|S^\prime| = |S|$, then $S^\prime = S$ since $S^\prime$ is a subset of $S$. Since $S$ is independent, it follows that $S^\prime$ is also independent. Thus, $S^\prime \in \mathcal{I}$.
  2. If $|S^\prime| < |S|$, then $S^\prime$ is obtained from $S$ by removing edges, and removing edges from an acyclic graph cannot create a cycle. Thus, $S^\prime$ represents a forest because it contains no cycles; hence, $S^\prime \in \mathcal{I}$.

Therefore, $M(G)$ satisfies the hereditary property.

Thirdly and lastly, we can show that $M(G)$ satisfies the augmentation property using proof by contradiction. Let $S, S^\prime \in \mathcal{I}$ represent forests of $G$ where $|S| > |S^\prime|$. Assume there does not exist an edge $x \in S$ that connects two distinct trees of the forest $S^\prime$. Then every edge of $S$ would lie within a single tree of $S^\prime$, so each tree of $S$ would be a subcomponent of a tree of $S^\prime$. Since an acyclic subgraph on the vertex set of a tree has at most as many edges as the tree itself, $S$ would have at most $|S^\prime|$ edges. So, we reach a contradiction because it was defined earlier that $|S| > |S^\prime|$. Thus, there exists an edge $x \in S$ connecting two distinct trees of $S^\prime$, and adding it to $S^\prime$ still results in a forest. More expressly, $S^\prime \cup \{x\} \in \mathcal{I}$. Hence, $M(G)$ satisfies the augmentation property.

Therefore, since $\emptyset \in \mathcal{I}$ and both the hereditary and augmentation properties are satisfied by $M(G)$, we can conclude that $M(G)$ is a matroid.

Definition 2.5. Let $G$ be a graph and $D(G)$ be an arbitrary orientation of $G$. The incidence matrix $A$ of $G$ can be defined using the following piecewise definition:

A_{i, j} = \begin{cases} 1, & \text{vertex $i$ is the tail of non-loop edge $j$;} \\ -1, & \text{vertex $i$ is the head of non-loop edge $j$;} \\ 0, & \text{otherwise.} \end{cases}

Later, it will be shown that the matroid $M(G)$ is isomorphic to the matroid $M(A)$.
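
As a concrete companion to Definition 2.5, the sketch below builds the incidence matrix from an oriented edge list (the function name and the (tail, head) edge representation are our own conventions):

```python
import numpy as np

def incidence_matrix(num_vertices, edges):
    """Incidence matrix of Definition 2.5 for an oriented graph.

    edges: list of (tail, head) pairs over 1-indexed vertices;
    a loop (tail == head) yields an all-zero column.
    """
    A = np.zeros((num_vertices, len(edges)), dtype=int)
    for j, (tail, head) in enumerate(edges):
        if tail != head:           # loops contribute no nonzero entries
            A[tail - 1, j] = 1     # vertex i is the tail of edge j
            A[head - 1, j] = -1    # vertex i is the head of edge j
    return A
```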

Example 2.6. We can use Definition 2.5 to construct an incidence matrix $A$ from an arbitrary graph $G$. To create $A$, we must have some arbitrary orientation of $G$; let $D(G)$ denote such an orientation. $D(G)$ is illustrated in Figure 4.

Figure 4. Illustration of $D(G)$: an arbitrary orientation of $G$.

Since $G$ has 3 vertices and 4 edges, the resulting incidence matrix will be 3-by-4.

A = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 1 & 0 & 1 & -1 \\ 0 & 0 & -1 & 1 \end{bmatrix}.

Notably, the column encoding edge $b$ is the zero vector because it is a loop: a single edge that connects a vertex directly to itself.
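
Reading the orientations of the edges off of the columns of $A$ above, the builder sketched after Definition 2.5 reproduces this matrix (the loop $b$ is placed at vertex 1 purely for illustration, since its column is zero regardless):

```python
# (tail, head) pairs inferred from the columns of A in Example 2.6;
# the loop b is assumed to sit at vertex 1.
edges = [(2, 1), (1, 1), (2, 3), (3, 2)]
print(incidence_matrix(3, edges))
# [[-1  0  0  0]
#  [ 1  0  1 -1]
#  [ 0  0 -1  1]]
```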

Definition 2.7. A circuit of a matroid is a minimally dependent subset of the ground set. More formally, let $M = (E, \mathcal{I})$ be an arbitrary matroid and let $C = \{c_1, \dots, c_n\} \notin \mathcal{I}$ be a circuit of $M$ where each $c_i \in E$. Since $C$ is a circuit, for any $c_k \in C$ it holds that $C \backslash \{c_k\} \in \mathcal{I}$. Also, by the hereditary property of matroids, any proper subset of $C$ is a member of $\mathcal{I}$.

Furthermore, to prove the main theorem, we must rely on the following lemma.

Lemma 2.8. Let $V$ be a vector space over $\mathbb{F}$. Let $S = \{v_1, \dots, v_n\}$ be a minimally linearly dependent set where each $v_i \in V$. If a nontrivial (meaning at least one of the scalar coefficients is nonzero) linear combination of vectors in $S$ equals $0$, then all the scalar coefficients must be nonzero.

Proof. Firstly, we can express a linear combination of the vectors in $S$ with scalars $a_i \in \mathbb{F}$:

a_1 v_1 + \cdots + a_n v_n.

Since $S$ is linearly dependent, we can choose the scalars such that some $a_i \neq 0$ and

a_1 v_1 + \cdots + a_n v_n = 0. \quad\quad\quad (1)

Suppose $a_j = 0$ for some $j$. Since $S$ is minimally dependent, any proper subset of $S$ is linearly independent. So,

a_1 v_1 + \cdots + a_{j-1} v_{j-1} + a_{j+1} v_{j+1} + \cdots + a_n v_n = 0 \\ \implies \forall i \in \{1, \dots, j - 1, j + 1, \dots, n\}, \; a_i = 0.

Thus, we find that if some scalar coefficient featured in (1) equals zero, then all of the remaining coefficients must equal zero, making the combination trivial. Since $S = \{v_1, \dots, v_n\}$ is linearly dependent, there must be a nontrivial linear combination of the $v_i$ equal to $0$. For this to be the case, all of the $a_i$ must be nonzero.

3. Main Theorem

Having defined both vector and graphic matroids and outlined some supporting definitions, we can now prove Theorem A. To do so, we must prove that every graphic matroid has a corresponding vector matroid representation over an arbitrary field $\mathbb{F}$; that is, the graphic matroid and the vector matroid must be isomorphic.

Theorem A. All graphic matroids are representable over any field $\mathbb{F}$.

Proof. Let $G$ be an arbitrary undirected graph and $M(G)$ be the matroid of $G$. Let $A$ be the matrix defined by the piecewise equation in Definition 2.5 over the field $\mathbb{F}$, and let $M(A)$ denote the matroid of $A$. Note that to prove that a graphic matroid is representable over any field, it must in particular be representable over the field $\mathbb{F}_2$. Hence, any mention of $-1$ in the proof should be interpreted as the additive inverse of $1$, which is simply $1$ in the case of $\mathbb{F}_2$.

To prove that $M(G)$ is isomorphic to $M(A)$, it suffices to prove that the correspondence between edges and columns maps each independent set in $M(G)$ to an independent set in $M(A)$ and vice versa. Since a subset of a ground set is always either independent or dependent, we can equivalently prove that dependent sets in $M(G)$ correspond to dependent sets in $M(A)$ and vice versa. Hence, to prove the main theorem, we shall prove:

  1. A dependent set in $M(G)$ is also dependent in $M(A)$.
  2. A dependent set in $M(A)$ is also dependent in $M(G)$.

Firstly, we can start by proving that a dependent set in $M(G)$ is correspondingly dependent in $M(A)$. Let $X$ be a dependent set in $M(G)$. We can consider the following two exhaustive cases:

  1. $X$ contains a loop. The piecewise equation given in Definition 2.5 only assigns a nonzero value when the corresponding edge is not a loop. So, if $X$ contains a loop, one of the corresponding columns of $A$ is the zero vector. Consequently, any set of columns containing this column is linearly dependent, so $X$ is dependent in $M(A)$ if it contains a loop.
  2. $X$ does not contain a loop. Since $X$ is dependent in $M(G)$ and contains no loops, $X$ must contain at least one cycle. Let $C \subset X$ be a cycle. We can represent $C$ as a sequence of edges, where each $e_i \in E$ and $E$ is the ground set of $M(G)$:

     e_1 \rightarrow e_2 \rightarrow \cdots \rightarrow e_n.

     Let $v_1, \dots, v_n$ be the corresponding columns of the matrix $A$, where each $v_i$ encodes the edge $e_i$. While traversing the cycle from $e_1$ to $e_n$, we multiply each $v_i$ by a scalar $\lambda_i$ defined as follows:

     \lambda_i = \begin{cases} 1, & \text{direction of traversal agrees with orientation;} \\ -1, & \text{otherwise.} \end{cases}

     Hence, we end up with a linear combination of the $v_i$:

     \lambda_1 v_1 + \cdots + \lambda_n v_n.

     Next, we can justify that this linear combination equals $0$. After scaling by $\lambda_i$, each edge of the cycle contributes $+1$ in the row of the vertex the traversal leaves and $-1$ in the row of the vertex it enters. At every vertex visited by the cycle, each arrival is paired with a departure, so the entries in that vertex's row sum to $0$. Hence,

     \lambda_1 v_1 + \cdots + \lambda_n v_n = 0.

     Since every $\lambda_i \neq 0$, the columns $v_1, \dots, v_n$ are linearly dependent, so $C$ is dependent in $M(A)$. Since $C \subset X$ and any superset of a linearly dependent set is linearly dependent, $X$ must be dependent in $M(A)$ too (a numerical instance of this cancellation is sketched after this list).
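
The cancellation in case 2 can be checked numerically. The following sketch assumes a triangle graph with our own choice of orientation (edges $1 \to 2$, $2 \to 3$, and $1 \to 3$) and traverses the cycle $1 \to 2 \to 3 \to 1$:

```python
import numpy as np

# Incidence matrix of an assumed oriented triangle on vertices 1, 2, 3
# with edges 1 -> 2, 2 -> 3, and 1 -> 3 (one column per edge).
A = np.array([
    [ 1,  0,  1],   # vertex 1
    [-1,  1,  0],   # vertex 2
    [ 0, -1, -1],   # vertex 3
])

# Traversing 1 -> 2 -> 3 -> 1: the first two edges agree with the
# orientation (lambda = +1); the third is traversed against it (lambda = -1).
lambdas = np.array([1, 1, -1])
print(A @ lambdas)  # [0 0 0]: the signed columns cancel, a linear dependence
```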

Secondly, we can prove that a dependent set in $M(A)$ is correspondingly dependent in $M(G)$. Let $X$ be a dependent set in $M(A)$, and let $X^\prime \subset X$ denote a minimally dependent set (a circuit) as in Definition 2.7. We can describe $X^\prime$ as a subset of the columns $v_1, \dots, v_n$ of $A$ such that

X^\prime = \{v_1, \dots, v_n\}.

If $X^\prime$ contains a zero vector, it must be the case that $n = 1$ for $X^\prime$ to be minimally linearly dependent. A zero column encodes a loop edge, and a loop is a cycle, so the corresponding edge set in $M(G)$ is also dependent. Since $X^\prime$ is dependent in $M(G)$, $X$ is also dependent in $M(G)$.

Now, we can consider the case where $X^\prime$ does not contain the zero vector. Since $X^\prime$ is minimally linearly dependent, Lemma 2.8 provides scalars $a_i \in \mathbb{F}$, all nonzero, such that

a_1 v_1 + \cdots + a_n v_n = 0.

In other words, we have a linear combination of the vectors $v_1, \dots, v_n$, with all scalar coefficients nonzero, that equals $0$. This means every nonzero row of the matrix $\begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix}$ must have at least two nonzero entries: if a row had exactly one nonzero entry, the corresponding coefficient would be forced to $0$. Since each row encodes a single vertex, every vertex incident to these edges has at least two incident edges. Furthermore, an acyclic graph on $m$ vertices has at most $m - 1$ edges, so its degrees sum to at most $2(m - 1)$; but if each of the $m$ vertices has degree at least two, the degrees sum to at least $2m$. Hence the edge set corresponding to $X^\prime$ must contain a cycle, and $X^\prime$ is dependent in $M(G)$. Consequently, $X$ is also dependent in $M(G)$.

In conclusion, we have shown both that a dependent set in $M(G)$ is dependent in $M(A)$ and that a dependent set in $M(A)$ is dependent in $M(G)$. Hence, the correspondence between edges and columns matches the dependent sets (and therefore the independent sets) of $M(G)$ with those of $M(A)$, so $M(G)$ is isomorphic to $M(A)$. Therefore, any graphic matroid can be represented as a vector matroid, so all graphic matroids are representable over any field $\mathbb{F}$.

4. Applications

Having proven Theorem A, we know that any theorem pertaining to matroids representable over every field also applies to graphic matroids. One such theorem is the equivalence between a matroid $M$ being representable over every field and $M$ being representable over $\mathbb{R}$ by a totally unimodular matrix [2, 5.16]. Consistent with this equivalence, we prove directly that the incidence matrix of any graph is totally unimodular (Corollary 4.2).

Definition 4.1. ([2, 5.16]) A matrix is called totally unimodular if the determinant of any of its square submatrices is either $-1$, $1$, or $0$.

Corollary 4.2. Let $G$ be a finite graph and $M(G)$ the corresponding graphic matroid. Let $A$ be the incidence matrix of $G$. Then $A$ must be a totally unimodular matrix.

Proof. To prove Corollary 4.2, we use induction, showing that every square submatrix of $A$ has determinant either $1$, $-1$, or $0$.

The base case is when the square submatrix is 1-by-1. Let $A^\prime$ be a 1-by-1 square submatrix of $A$. From Definition 2.5, we know that all entries in $A$ are either $1$, $-1$, or $0$. Hence, we know

A^\prime = \begin{bmatrix} 1 \end{bmatrix} \text{ or } \begin{bmatrix} -1 \end{bmatrix} \text{ or } \begin{bmatrix} 0 \end{bmatrix} \\ \implies \det A^\prime = 1 \text{ or } -1 \text{ or } 0.

Therefore, we know $\det A^\prime \in \{1, -1, 0\}$. Thus, the base case holds.

Next, assume that the determinant of any $k$-by-$k$ square submatrix of $A$ is $1$, $-1$, or $0$. We shall prove that any $(k+1)$-by-$(k+1)$ square submatrix then also has determinant $1$, $-1$, or $0$. Let $A^\prime$ be a $(k+1)$-by-$(k+1)$ square submatrix of $A$. Since every column of $A$ has at most two nonzero entries, $A^\prime$ falls into one of the following exhaustive cases:

  1. One of the columns in $A^\prime$ is the zero vector. Then the columns of $A^\prime$ are linearly dependent and cannot form a basis. Hence, $\det A^\prime = 0$, which means $\det A^\prime \in \{1, -1, 0\}$.
  2. One of the columns in $A^\prime$ has exactly one nonzero entry. Let the $i$th column of $A^\prime$ be such a column. We can compute the determinant of $A^\prime$ by cofactor expansion along the $i$th column:

     \det A^\prime = \sum_{j = 1}^{k+1} (-1)^{i+j} A^\prime_{j, i} \det\left(A^\prime_{\hat{j}, \hat{i}}\right).

     Since there exists only one $j$ such that $A^\prime_{j, i} \neq 0$, only one summand of the above sum can be nonzero. Let $\ell$ be the row containing the only nonzero entry of the $i$th column. Then $\det A^\prime$ can be expressed as

     \det A^\prime = (-1)^{i + \ell} A^\prime_{\ell, i} \det\left(A^\prime_{\hat{\ell},\hat{i}}\right).

     Since $A^\prime_{\ell, i}$ is $1$ or $-1$, and $\det(A^\prime_{\hat{\ell}, \hat{i}})$ is $1$, $-1$, or $0$ by the inductive hypothesis, we have $\det A^\prime = 1$ or $-1$ or $0$. Hence, $\det A^\prime \in \{1, -1, 0\}$.
  3. All columns in $A^\prime$ have exactly two nonzero entries. Then each column contains a single $1$ and a single $-1$, so each column sums to $0$. Let $r_1, \dots, r_{k+1}$ denote the rows of $A^\prime$. We then have:

     r_1 + \cdots + r_{k+1} = 0.

     Hence, there exists a linear combination of the rows of $A^\prime$ that equals $0$ with not all scalar coefficients zero. So $r_1, \dots, r_{k+1}$ are linearly dependent and cannot form a basis. Therefore, $\det A^\prime = 0$.

Thus, in every case, $\det A^\prime \in \{1, -1, 0\}$. Therefore, by induction, every square submatrix of $A$ has a determinant of either $1$, $-1$, or $0$. Equivalently, $A$ is therefore totally unimodular.
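
Corollary 4.2 can also be checked mechanically on small matrices. The brute-force sketch below (our own helper; it enumerates every square submatrix, so it is exponential and only suited to small examples) confirms the claim for the incidence matrix of Example 2.6:

```python
from itertools import combinations
import numpy as np

def is_totally_unimodular(A, tol=1e-9):
    """Check Definition 4.1 directly: every square submatrix of A must
    have determinant -1, 0, or 1."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                if round(d) not in (-1, 0, 1) or abs(d - round(d)) > tol:
                    return False
    return True

# The incidence matrix from Example 2.6 passes the check.
A = np.array([[-1, 0, 0, 0], [1, 0, 1, -1], [0, 0, -1, 1]])
print(is_totally_unimodular(A))  # True
```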

5. Future Directions

Questions for future study will be outlined in this section.

Question 5.1. How can a vector matroid representation of a graphic matroid be used to computationally solve combinatorial optimization problems?

A combinatorial problem represented as a matroid can be solved using a greedy algorithm [3]. A greedy algorithm is a procedure that finds an optimal solution by choosing a locally optimal option at each stage. For example, by assigning weights to each element of a matroid's ground set, a greedy algorithm can be used to find a base (a maximal independent set) of minimum total weight. This is known as the minimum matroid base problem.
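
As a sketch of the idea (under our own naming, assuming an independence oracle is available), the routine below implements the matroid greedy algorithm; instantiated with the forest test of a graphic matroid and edge weights, it recovers Kruskal's minimum spanning tree algorithm:

```python
def greedy_min_base(E, weight, is_independent):
    """Matroid greedy algorithm: scan the ground set in order of
    increasing weight, keeping each element that preserves independence
    (see [3] for the generality of this approach)."""
    base = []
    for x in sorted(E, key=weight):
        if is_independent(base + [x]):
            base.append(x)
    return base
```

For example, calling `greedy_min_base(edges, weight, is_forest)` with the union-find forest test sketched in Section 1 returns a minimum-weight spanning forest of the graph.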

Question 5.2. Although graphic matroids are representable over any field $\mathbb{F}$, what are some examples of matroids that are not representable?

References

[1] S. Axler. Linear Algebra Done Right. Undergraduate Texts in Mathematics. Springer International Publishing, Cham, 3rd edition, 2015.

[2] J. Oxley. What is a Matroid? 2014.

[3] L. Turner, M. Ehrgott, and H. W. Hamacher. On the generality of the greedy algorithm for solving matroid base problems. Discrete Applied Mathematics, 195:114–128, 2015.

[4] W. T. Tutte. Matroids and graphs. Transactions of the American Mathematical Society, 90(3):527–552, 1959.

[5] H. Whitney. On the abstract properties of linear dependence. American Journal of Mathematics, 57(3):509, 1935.