# The Representability of Graphic Matroids as Vector Matroids

Edward Wibowo

Matroids are mathematical structures that encapsulate the notion of independence. Using a set of defining axioms, matroids describe concepts across both linear algebra and graph theory. Due to the abstract nature of matroids, it is possible to equivalently describe the matroid of a graph as a matroid of vectors. The main theorem of this paper is that any graphic matroid is representable as a vector matroid over any field $\mathbb{F}$. We then use this theorem to establish the total unimodularity of a graph's incidence matrix.

## 1. Introduction

There are many equivalent ways to define a matroid. One such way is by defining it as a pair of sets $(E, \mathcal{I})$ where $E$ is a finite set known as a ground set and $\mathcal{I}$ is a set of independent subsets of $E$. Additionally, the following properties must be satisfied [2]:

- The empty set is independent: $\emptyset \in \mathcal{I}$.
- Hereditary: any subset of an independent set is also independent. More formally, for all $S^\prime \subset S \subset E$, if we have $S \in \mathcal{I}$, then $S^\prime \in \mathcal{I}$.
- Augmentation: for all $S, S^\prime \in \mathcal{I}$ where $|S| > |S^\prime|$, there exists $x \in S \backslash S^\prime$ such that $S^\prime \cup \{x\} \in \mathcal{I}$.
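These three axioms can be checked mechanically on small examples. The sketch below is a brute-force verifier (exponential in the size of the ground set, so suitable only for toy instances); the uniform matroid $U_{2,3}$ used in the demo is an illustrative choice, not an example from this paper.

```python
from itertools import chain, combinations

def powerset(iterable):
    """All subsets of an iterable, as frozensets."""
    s = list(iterable)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def is_matroid(E, I):
    """Brute-force check of the three matroid axioms.

    E: ground-set elements; I: collection of independent sets (frozensets).
    """
    I = {frozenset(s) for s in I}
    # Axiom 1: the empty set is independent.
    if frozenset() not in I:
        return False
    # Axiom 2 (hereditary): every subset of an independent set is independent.
    for S in I:
        if any(frozenset(sub) not in I
               for r in range(len(S))
               for sub in combinations(S, r)):
            return False
    # Axiom 3 (augmentation): |S| > |S'| gives x in S \ S' with S' + {x} independent.
    for S in I:
        for Sp in I:
            if len(S) > len(Sp) and not any(Sp | {x} in I for x in S - Sp):
                return False
    return True

# Uniform matroid U_{2,3}: all subsets of size <= 2 are independent.
E = {1, 2, 3}
I = [s for s in powerset(E) if len(s) <= 2]
print(is_matroid(E, I))                                       # True
# Dropping a singleton breaks the hereditary axiom.
print(is_matroid(E, [s for s in I if s != frozenset({1})]))   # False
```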

Using this definition, a matroid can be instantiated in different fields of mathematics. For example, borrowing concepts from graph theory, we define a graphic matroid from an undirected graph by letting the independent sets be the forests of the graph (which will be defined later). Alternatively, a vector matroid can be defined from linear algebra: we take a matrix $A$ and let the independent sets be the sets of columns that are linearly independent.

We say that a matroid $M$ is $\mathbb{F}$-representable if it is isomorphic to a vector matroid $M(A)$ where $A$ is a matrix over a field $\mathbb{F}$. In this paper, we will prove that every graphic matroid is representable over any field $\mathbb{F}$.

**Theorem A.** All graphic matroids are representable over any field $\mathbb{F}$.

**Example 1.1.**
To motivate the main theorem, we start with an example of a graphic matroid and its corresponding vector matroid representation.
We denote the graphic matroid of $G$ as $M(G)$.
Furthermore, let $D(G)$ be an arbitrary orientation of $G$.
Both $G$ and $D(G)$ are illustrated below in Figure 1.

**Figure 1.** A graph of $G$ and a corresponding arbitrary orientation $D(G)$.

Based on $D(G)$, we can define a matrix $A$ where each column encodes an edge and each row encodes a vertex. The construction of this matrix is given later in Definition 2.5.

It can be observed that any linearly independent subset of columns of $A$ corresponds to a subset of edges that forms a forest of $G$, and vice versa. So, each independent set in the matroid of $A$ (written $M(A)$) has a corresponding independent set in $M(G)$. Hence, we say that $M(G)$ is isomorphic to $M(A)$.

Matroids were independently discovered by Takeo Nakasawa and Hassler Whitney [5]. In 1935, Whitney introduced the concept of a matroid to describe independence across both linear algebra and graph theory. Whitney discovered that describing graphs and matrices by their hereditary and augmentative properties results in an object that can be interpreted both geometrically and algebraically. Through this, Whitney related the concept of bases, ranks, and circuits to both linear algebra and graph theory, effectively uniting the two mathematical fields [5].

In 1958, Tutte classified matroids depending on whether they met certain conditions. Namely, Tutte related the condition of a matroid being graphic with a matroid being regular; meaning it is representable over any field $\mathbb{F}$ [4].

This paper is structured to build up towards the main Theorem A and explore its applications. Section 2 will introduce the necessary terminology, concepts, and propositions to prove the main theorem. Particularly, the validity of both vector matroids and graphic matroids will be proved. In Section 3, Theorem A will be proved. Section 4 covers a corollary that proposes the matrix representing a graph of a graphic matroid is totally unimodular as a result of Theorem A. Lastly, Section 5 provides questions to motivate additional study of matroids.

## 2. Background

This paper assumes the reader is familiar with fundamental linear algebra concepts. Namely, the paper requires prerequisite knowledge of linear independence and matrices as defined in [1, 2.17 and 3.30].

**Proposition 2.1.**
Let $A$ be a matrix.
If we set the ground set $E$ as the set of columns of $A$ and $\mathcal{I}$ as the set of linearly independent column subsets, then $M(A) = (E, \mathcal{I})$ is a matroid.
A matroid defined in this manner is known as a vector matroid.

**Proof.**
We can prove that $M(A)$ is a matroid by showing that it satisfies the three properties outlined in the introduction.
Throughout this proof, let $A$ be a matrix over an arbitrary field $\mathbb{F}$.

Firstly, we know that $\emptyset \in \mathcal{I}$ because the empty set is vacuously linearly independent. Thus, the first property is satisfied.

Secondly, the hereditary property is satisfied if any subset of a linearly independent set is also linearly independent. Let $S \in \mathcal{I}$ and $S^\prime \subset S$. Reordering if necessary, we can write $S^\prime$ and $S$ as follows:

$$S = \{v_1, \dots, v_m\}, \qquad S^\prime = \{v_1, \dots, v_k\}, \quad k \leq m.$$

Next, let $\lambda_i \in \mathbb{F}$ such that we can equate a linear combination of $S^\prime$ to $0$:

$$\lambda_1 v_1 + \cdots + \lambda_k v_k = 0.$$

Since $S \in \mathcal{I}$, it is linearly independent: if a linear combination of $S$ (here, with the coefficients of $v_{k+1}, \dots, v_m$ set to $0$) is equal to $0$, then all scalars must equal $0$. Thus, it must be the case that $\lambda_1 = \cdots = \lambda_k = 0$. Hence, $S^\prime$ is linearly independent, which means $S^\prime \in \mathcal{I}$. So, if $S^\prime \subset S \subset E$ and $S \in \mathcal{I}$, then $S^\prime \in \mathcal{I}$. Therefore, the hereditary property is satisfied.

Thirdly and lastly, we can show that the augmentation property is satisfied using proof by contradiction. Let $S$ and $S^\prime$ be linearly independent subsets of the columns of $A$ such that $S, S^\prime \in \mathcal{I}$, and let $|S| > |S^\prime|$. Let $V$ be the vector space spanned by $S \cup S^\prime$ over the field $\mathbb{F}$. Since $S$ is a linearly independent subset of $V$, we have $\dim V \geq |S|$. Assume that for all $x \in S \backslash S^\prime$, the set $S^\prime \cup \{x\}$ is linearly dependent. Then every vector of $S$ lies in the span of $S^\prime$, so $V$ is spanned by $S^\prime$ alone, which means $\dim V \leq |S^\prime|$. So, we have the inequality

$$|S| \leq \dim V \leq |S^\prime|.$$
However, we reach a contradiction because $|S| > |S^\prime|$, which means the above inequality cannot be true. Thus, for $S, S^\prime \in \mathcal{I}$ and $|S| > |S^\prime|$, there must exist an $x \in S \backslash S^\prime$ such that $S^\prime \cup \{x\} \in \mathcal{I}$. Hence, the augmentation property holds.

Therefore, since $\emptyset \in \mathcal{I}$ and both the hereditary and augmentation properties are satisfied, $M(A)$ is a matroid.

Furthermore, since this paper deals with graphic matroids, it is important to define fundamental graph theory concepts. A graph is a structure consisting of vertices connected by edges. A graph is either directed or undirected depending on whether its edges have an orientation. It should be noted that graphic matroids strictly represent undirected graphs; however, an arbitrary orientation of a graph may be chosen to construct its corresponding vector matroid representation. Furthermore, a cycle is a sequence of distinct edges that connects some vertex to itself. A tree is a connected, acyclic undirected graph; equivalently, any two of its vertices are joined by exactly one path.

**Definition 2.2.**
A forest is a disjoint union of trees; each connected component of the forest is a tree.

**Example 2.3.**
Suppose $G$ is a connected graph illustrated in Figure 2.

**Figure 2.** Illustration of connected graph $G$.

Now, we can define $F$: an arbitrary forest of $G$ illustrated in Figure 3.

**Figure 3.** Illustration of forest $F$.

We can observe that $F$ is a forest containing $3$ trees:

- a tree connecting vertices $1$, $2$, and $3$,
- a tree connecting vertices $4$ and $6$,
- and a tree consisting of just vertex $5$.

**Proposition 2.4.**
Let $G$ be a graph.
Let the ground set $E$ be the set of edges of $G$ and let $\mathcal{I}$ be the set of subsets of $E$ that form a forest of $G$.
Thus, $M(G) = (E, \mathcal{I})$ is a matroid.
A matroid defined in this manner is known as a graphic matroid.

**Proof.**
Similar to how we proved a vector matroid is indeed a matroid, we can prove that $M(G)$ is a matroid by showing it too satisfies the three properties.

Firstly, we know that $\emptyset \in \mathcal{I}$ because a graph consisting of no edges contains no cycles.

Secondly, we can show that the hereditary property is satisfied by $M(G)$. Let $S^\prime \subset S \subset E$ and $S \in \mathcal{I}$. Additionally, let $|S^\prime|$ and $|S|$ denote the number of edges in $S^\prime$ and $S$ respectively. We can consider the following two cases:

- If $|S^\prime| = |S|$, then we know that $S = S^\prime$ since $S^\prime$ is a subset of $S$. Since we know that $S$ is independent, then it follows that $S^\prime$ is also independent. Thus, $S^\prime \in \mathcal{I}$.
- If $|S^\prime| < |S|$, note that removing edges from a graph can never create a cycle. Thus $S^\prime$, obtained from the forest $S$ by removing edges, contains no cycles and is itself a forest; hence, $S^\prime \in \mathcal{I}$.

Therefore, $M(G)$ satisfies the hereditary property.

Thirdly and lastly, we can show that $M(G)$ satisfies the augmentation property using proof by contradiction. Let $S, S^\prime \in \mathcal{I}$ represent forests of $G$ where $|S| > |S^\prime|$. Assume there does not exist an edge $x \in S \backslash S^\prime$ that connects two distinct trees of the forest $S^\prime$. Then every edge of $S$ has both endpoints inside a single tree of $S^\prime$. A tree of $S^\prime$ on $m$ vertices contains exactly $m - 1$ edges of $S^\prime$, while the edges of $S$ inside that tree form a forest on those $m$ vertices and thus number at most $m - 1$; summing over the trees of $S^\prime$ gives $|S| \leq |S^\prime|$. So, we reach a contradiction because it was defined earlier that $|S| > |S^\prime|$. Thus, there exists an edge $x \in S \backslash S^\prime$ connecting two distinct trees of $S^\prime$, and adding it to $S^\prime$ still results in a forest. More expressly, $S^\prime \cup \{x\} \in \mathcal{I}$. Hence, $M(G)$ satisfies the augmentation property.

Therefore, since $\emptyset \in \mathcal{I}$ and both the hereditary and augmentation properties are satisfied by $M(G)$, we can conclude $M(G)$ is a matroid.
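The independence test for a graphic matroid, deciding whether a set of edges forms a forest, can be sketched with a union-find structure: adding an edge whose endpoints already lie in the same component would close a cycle. The example graph below is hypothetical.

```python
def is_forest(edges):
    """Return True if the edge list contains no cycle (i.e. is a forest).

    edges: list of (u, v) pairs; a loop (u, u) immediately creates a cycle.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:          # endpoints already connected (covers loops): cycle
            return False
        parent[ru] = rv       # union the two trees
    return True

# Hypothetical example: a path 1-2-3 is a forest; adding edge (3, 1) closes a cycle.
print(is_forest([(1, 2), (2, 3)]))          # True
print(is_forest([(1, 2), (2, 3), (3, 1)]))  # False
```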

**Definition 2.5.**
Let $G$ be a graph and $D(G)$ be an arbitrary orientation of $G$.
The incidence matrix $A$ of $G$ has one row per vertex and one column per edge, with entries given by the following piecewise definition:

$$A_{v, e} = \begin{cases} 1, & \text{if } v \text{ is the tail of the non-loop edge } e \text{ in } D(G); \\ -1, & \text{if } v \text{ is the head of the non-loop edge } e \text{ in } D(G); \\ 0, & \text{otherwise.} \end{cases}$$

Later, it will be shown that the matroid $M(G)$ is isomorphic to the matroid $M(A)$.

**Example 2.6.**
We can use Definition $2.5$ to construct an incidence matrix $A$ from an arbitrary graph $G$.
To create $A$, we must have some arbitrary orientation of $G$.
Let $D(G)$ denote an arbitrary orientation of $G$.
$D(G)$ is illustrated in Figure $4$.

**Figure 4.** Illustration of $D(G)$: an arbitrary orientation of $G$.

Since $G$ has $3$ vertices and $4$ edges, the resulting incidence matrix will be $3$-by-$4$.

Notably, the column encoding edge $b$ is the zero vector because it is a loop: a single edge that connects a vertex directly to itself.
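A minimal sketch of the construction in Definition 2.5, using the sign convention of $+1$ at the tail and $-1$ at the head of each arc (so a loop contributes a zero column). The vertex and edge labels mirror Example 2.6, but the orientation chosen here is hypothetical, since Figure 4 specifies it pictorially.

```python
def incidence_matrix(vertices, arcs):
    """Signed incidence matrix of an oriented graph.

    vertices: ordered list of vertex labels (rows).
    arcs: ordered list of (tail, head) pairs (columns).
    Convention: +1 at the tail, -1 at the head, 0 elsewhere;
    a loop (v, v) yields the zero column.
    """
    index = {v: i for i, v in enumerate(vertices)}
    A = [[0] * len(arcs) for _ in vertices]
    for j, (tail, head) in enumerate(arcs):
        if tail != head:              # loops contribute nothing
            A[index[tail]][j] += 1
            A[index[head]][j] -= 1
    return A

# Hypothetical orientation with a loop b = (2, 2), as in Example 2.6.
vertices = [1, 2, 3]
arcs = [(1, 2), (2, 2), (2, 3), (3, 1)]   # edges a, b, c, d
for row in incidence_matrix(vertices, arcs):
    print(row)
# The second column (edge b) is all zeros because b is a loop.
```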

**Definition 2.7.**
A circuit of a matroid is a minimally dependent subset of the ground set.
More formally, let $M = (E, \mathcal{I})$ be an arbitrary matroid.
Let $C = \{c_1, \dots, c_n\} \notin \mathcal{I}$ be a circuit of $M$ where $c_i \in E$.
Since $C$ is a circuit, then for any $c_k \in C$, it holds that $C \backslash \{c_k\} \in \mathcal{I}$.
Also, using the hereditary property of matroids, we know any proper subset of $C$ is a member of $\mathcal{I}$.

Furthermore, to prove the main theorem, we rely on the following lemma.

**Lemma 2.8.**
Let $V$ be a vector space over $\mathbb{F}$.
Let $S$ be a minimally linearly dependent set $\{v_1, \dots, v_n\}$ where all $v_i \in V$.
If a nontrivial (meaning at least one of the scalar coefficients is nonzero) linear combination of vectors in $S$ equates to $0$, then all the scalar coefficients must be nonzero.

**Proof.**
Firstly, we can express a linear combination of the vectors in $S$ with scalars $a_i \in \mathbb{F}$. Since $S$ is linearly dependent, we can choose the $a_i$, not all zero, such that

$$a_1 v_1 + a_2 v_2 + \cdots + a_n v_n = 0. \qquad (1)$$

Suppose some coefficient $a_j = 0$. Since $S$ is minimally dependent, every proper subset of $S$, in particular $S \backslash \{v_j\}$, is linearly independent. So,

$$\sum_{i \neq j} a_i v_i = 0 \quad \implies \quad a_i = 0 \text{ for all } i \neq j.$$

Thus, we find that if some scalar coefficient featured in $(1)$ is equal to zero, then the rest of the scalar coefficients must be equal to zero, contradicting the choice of the $a_i$ as not all zero. Therefore, all the $a_i$ must be nonzero.

## 3. Main Theorem

Having defined both vector and graphic matroids and outlined some supporting definitions, we can now prove Theorem A. To do so, we must prove that every graphic matroid has a corresponding vector matroid representation over an arbitrary field $\mathbb{F}$; that is, the graphic matroid and its vector matroid must be isomorphic.

**Theorem A.**
All graphic matroids are representable over any field $\mathbb{F}$.

**Proof.**
Let $G$ be an arbitrary undirected graph and $M(G)$ be the matroid of $G$.
Let $A$ be a matrix defined by the piecewise equation in Definition 2.5 over the field $\mathbb{F}$.
Let $M(A)$ denote the matroid of $A$.
Note that to prove that a graphic matroid is representable over any field, it also must be the case that it is representable over the field $\mathbb{F}_2$.
Hence, any mention of $-1$ in the proof should be interpreted as the additive inverse of $1$, which is simply $1$ in the case of $\mathbb{F}_2$.

To prove that $M(G)$ is isomorphic to $M(A)$, it suffices to show that the bijection taking each edge of $G$ to its column in $A$ carries independent sets to independent sets. Since a subset of a ground set is always either independent or dependent, we can equivalently show that this map carries dependent sets to dependent sets in both directions. Hence, to prove the main theorem, we shall prove:

- A dependent set in $M(G)$ is also dependent in $M(A)$.
- A dependent set in $M(A)$ is also dependent in $M(G)$.

Firstly, we can start by proving that a dependent set in $M(G)$ is correspondingly dependent in $M(A)$. Let $X$ be a dependent set in $M(G)$. Hence, we can consider the following two exhaustive cases:

- $X$ contains a loop. The piecewise equation given in Definition 2.5 only assigns a nonzero value when the corresponding edge is a non-loop. So, if $X$ were to contain a loop, then one of the corresponding columns of the matrix $A$ would be the zero vector. Consequently, any subset containing this column would be linearly dependent, so $X$ would be dependent in $M(A)$ if it contains a loop.
- $X$ does not contain a loop. Since we know $X$ is dependent in $M(G)$ and there are no loops, $X$ must contain at least one cycle.
  Let $C \subset X$ be a cycle.
  We can represent $C$ as a sequence of edges $e_i \in E$, where $E$ is the ground set of $M(G)$:
  $$e_1 \rightarrow e_2 \rightarrow \cdots \rightarrow e_n.$$
  Let $v_1, \dots, v_n$ be the corresponding columns of the matrix $A$, where each $v_i$ encodes the edge $e_i$. While traversing from $e_1$ to $e_n$, we multiply each $v_i$ by $\lambda_i$ defined as follows:
  $$\lambda_i = \begin{cases} 1, & \text{direction of traversal agrees with orientation;} \\ -1, & \text{otherwise.} \end{cases}$$
  Hence, we end up with a linear combination of the $v_i$:
  $$\lambda_1 v_1 + \cdots + \lambda_n v_n.$$
  Next, we can justify that this linear combination equates to $0$. Consider any vertex $u$ visited by the cycle: exactly two of the cycle's edges are incident to $u$, and the choice of $\lambda_i$ ensures their contributions to row $u$ cancel, since the traversal enters $u$ along one edge and leaves along the other. As this holds for every row, we have
  $$\lambda_1 v_1 + \cdots + \lambda_n v_n = 0.$$
  Hence, since each $\lambda_i \neq 0$, the columns $v_1, \dots, v_n$ are linearly dependent. Thus, $C$ is dependent in $M(A)$. Since $C \subset X$ and any superset of a linearly dependent set is linearly dependent, $X$ must be dependent in $M(A)$ too.
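The cancellation argument above can be checked numerically on a small hypothetical example: an arbitrarily oriented triangle, where the signed sum of the cycle's columns vanishes coordinate by coordinate.

```python
# Signed incidence columns of a hypothetically oriented triangle.
# Columns encode edges e1 = 1->2, e2 = 3->2, e3 = 1->3; rows encode vertices 1, 2, 3.
cols = {
    "e1": [1, -1, 0],   # 1 -> 2
    "e2": [0, -1, 1],   # 3 -> 2
    "e3": [1, 0, -1],   # 1 -> 3
}
# Traverse the cycle 1 -> 2 -> 3 -> 1: e1 agrees with its orientation
# (lambda = +1), while e2 and e3 are traversed against it (lambda = -1).
lambdas = {"e1": 1, "e2": -1, "e3": -1}

combo = [sum(lambdas[e] * cols[e][i] for e in cols) for i in range(3)]
print(combo)  # [0, 0, 0]: the signed columns of a cycle are linearly dependent
```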

Secondly, we can prove that a dependent set in $M(A)$ is correspondingly dependent in $M(G)$. Let $X$ be a dependent set in $M(A)$. Let $X^\prime \subset X$ denote a minimally dependent set (a circuit) as declared in Definition 2.7, consisting of columns $v_1, \dots, v_n$ of $A$ such that

$$X^\prime = \{v_1, \dots, v_n\}.$$
If $X^\prime$ contains a zero vector, it must be the case that $n = 1$ for $X^\prime$ to be minimally linearly dependent. A zero column encodes a loop edge, so the corresponding edge set in $M(G)$ contains a loop and is also dependent. Since $X^\prime$ is dependent in $M(G)$, $X$ is also dependent in $M(G)$.

Now, we can consider the case when $X^\prime$ does not contain the zero vector. Since it is minimally linearly dependent, we can use Lemma 2.8 to obtain scalars $a_i \in \mathbb{F}$, all nonzero, such that

$$a_1 v_1 + \cdots + a_n v_n = 0.$$
In other words, we have a linear combination of the vectors $v_1, \dots, v_n$, with all scalar coefficients nonzero, that equates to $0$. This means every nonzero row of the matrix $\begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix}$ has at least two nonzero entries: a row with exactly one nonzero entry would contribute a nonzero coordinate to the sum. Since each row encodes a vertex, every vertex touched by these edges has at least two of them incident to it. Now let $m$ be the number of touched vertices. For a graph on $m$ vertices to be acyclic, it can have at most $m - 1$ edges; but if each of the $m$ vertices has at least two incident edges, then counting edge endpoints gives at least $2m/2 = m$ edges. Therefore, the edge set of $X^\prime$ contains a cycle, so $X^\prime$ is dependent in $M(G)$. Consequently, $X$ is also correspondingly dependent in $M(G)$.

In conclusion, we have shown that a dependent set in $M(G)$ is dependent in $M(A)$ and that a dependent set in $M(A)$ is dependent in $M(G)$. Hence, the correspondence between edges and columns carries dependent sets to dependent sets (and therefore independent sets to independent sets) in both directions, so $M(G)$ is isomorphic to $M(A)$. Therefore, any graphic matroid can be represented as a vector matroid, so all graphic matroids are representable over any field $\mathbb{F}$.
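Theorem A can be spot-checked computationally over $\mathbb{F}_2$, where the incidence matrix loses its signs: for a small hypothetical graph, every edge subset is a forest exactly when the corresponding mod-2 columns are linearly independent. The sketch below encodes columns as bitmasks and uses an XOR basis for the independence test.

```python
from itertools import combinations

def independent_gf2(vectors):
    """Linear independence over GF(2); vectors are integer bitmasks."""
    basis = {}  # pivot bit -> reduced vector
    for v in vectors:
        while v:
            p = v.bit_length() - 1
            if p not in basis:
                basis[p] = v
                break
            v ^= basis[p]
        if v == 0:
            return False  # v reduced to zero: dependent
    return True

# Hypothetical graph: a triangle 0-1-2 plus a pendant edge 2-3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]

def col(u, v):
    """Mod-2 incidence column of edge (u, v) as a bitmask of its endpoints."""
    return (1 << u) | (1 << v)

def is_forest(sub):
    """Union-find cycle check on 4 vertices."""
    parent = list(range(4))
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for u, v in sub:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

# Every edge subset is a forest iff its mod-2 columns are independent.
ok = all(
    independent_gf2([col(u, v) for (u, v) in sub]) == is_forest(sub)
    for r in range(len(edges) + 1)
    for sub in combinations(edges, r)
)
print(ok)  # True
```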

## 4. Applications

Having proven Theorem A, we know any theorem pertaining to $\mathbb{F}$-representable matroids also applies to graphic matroids. One such theorem is the equivalence between a matroid $M$ being representable over every field and $M$ being representable over $\mathbb{R}$ by a totally unimodular matrix [2, 5.16]. Consistent with Theorem A, Corollary 4.2 verifies this directly for any graphic matroid.

**Definition 4.1. ([2, 5.16])**
A matrix is called totally unimodular if the determinant of any of its square submatrices is either $-1$, $1$, or $0$.

**Corollary 4.2.**
Let $G$ be a finite graph and $M(G)$ the corresponding graphic matroid.
Let $A$ be the incidence matrix of $G$.
Then, $A$ must be a totally unimodular matrix.

**Proof.**
To prove Corollary 4.2, we use induction on the size of the submatrix, showing that every square submatrix has determinant either $1$, $-1$, or $0$.

The base case is when the square submatrix is $1$-by-$1$. Let $A^\prime$ be a $1$-by-$1$ square submatrix of $A$. From Definition 2.5, we know that all entries of $A$ are either $1$, $-1$, or $0$. Hence,

$$\det A^\prime = A^\prime_{1, 1} \in \{1, -1, 0\}.$$

Thus, the base case holds.

Next, assume that the determinant of every $k$-by-$k$ square submatrix is $1$, $-1$, or $0$. We shall prove that any $(k+1)$-by-$(k+1)$ square submatrix also has determinant $1$, $-1$, or $0$. Let $A^\prime$ be a $(k+1)$-by-$(k+1)$ square submatrix of $A$. Since every column of $A$ has at most two nonzero entries, $A^\prime$ falls into one of the following exhaustive cases:

- One of the columns in $A^\prime$ is the zero vector. Then the columns of $A^\prime$ are linearly dependent and cannot form a basis. Hence, $\det A^\prime = 0$, which means $\det A^\prime \in \{1, -1, 0\}$.
- One of the columns in $A^\prime$ has a single nonzero entry.
  Let the $i$th column be a column of $A^\prime$ with a single nonzero entry.
  We can write the determinant of $A^\prime$ by conducting cofactor expansion along the $i$th column:
  $$\det A^\prime = \sum_{j = 1}^{k+1} (-1)^{i+j} A^\prime_{j, i} \det\left(A^\prime_{\hat{j}, \hat{i}}\right),$$
  where $A^\prime_{\hat{j}, \hat{i}}$ denotes $A^\prime$ with row $j$ and column $i$ removed. Since there exists only one $j$ such that $A^\prime_{j, i} \neq 0$, only one of the summands can be nonzero. Let $r$ be the row containing the only nonzero entry of the $i$th column. Then $\det A^\prime$ can be expressed as
  $$\det A^\prime = (-1)^{i + r} A^\prime_{r, i} \det\left(A^\prime_{\hat{r}, \hat{i}}\right).$$
  Since $A^\prime_{r, i}$ is $1$ or $-1$, and $\det(A^\prime_{\hat{r}, \hat{i}})$ is $1$, $-1$, or $0$ by the inductive hypothesis, it follows that $\det A^\prime \in \{1, -1, 0\}$.
- All columns in $A^\prime$ have exactly two nonzero entries.
  Then each column has a single $1$ and a single $-1$, one for each endpoint of its edge.
  Let $r_1, \dots, r_{k+1}$ denote the rows of $A^\prime$.
  Since each column contributes exactly one $1$ and one $-1$, the rows sum to zero:
  $$r_1 + \cdots + r_{k+1} = 0.$$
  Hence, there exists a nontrivial linear combination of the rows of $A^\prime$ (all coefficients equal to $1$) that equates to $0$. So, the rows $r_1, \dots, r_{k+1}$ are linearly dependent and cannot form a basis. Therefore, $\det A^\prime = 0$.

Thus, in every case, $\det A^\prime \in \{1, -1, 0\}$. Therefore, by induction, every square submatrix of $A$ has a determinant of either $1$, $-1$, or $0$. Equivalently, $A$ is therefore totally unimodular.
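Corollary 4.2 can be verified by brute force on small incidence matrices: enumerate every square submatrix and check its determinant. The sketch below uses an oriented triangle as a hypothetical input; the determinant routine is naive cofactor expansion, adequate only for tiny matrices.

```python
from itertools import combinations

def det(M):
    """Integer determinant by cofactor expansion (fine for small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def is_totally_unimodular(A):
    """Check every square submatrix has determinant in {-1, 0, 1}.

    Exponential brute force: only for small matrices.
    """
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

# Incidence matrix of an oriented triangle (rows: vertices, columns: edges).
A = [[ 1,  0, -1],
     [-1,  1,  0],
     [ 0, -1,  1]]
print(is_totally_unimodular(A))                  # True
# A generic integer matrix typically fails.
print(is_totally_unimodular([[1, 1], [1, -1]]))  # False (a 2x2 minor has det -2)
```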

## 5. Future Directions

This section outlines questions that motivate further study of matroids.

**Question 5.1.**
How can a vector matroid representation of a graphic matroid be used to computationally solve combinatorial optimization problems?

A combinatorial problem represented as a matroid can be solved using a greedy algorithm [3].
A greedy algorithm is a procedure that finds an optimal solution by choosing a locally optimal solution at each stage.
For example, by assigning weights to each element of a matroid’s ground set, a greedy algorithm can be used to find the base (maximally independent set) with minimum weight.
This is known as a *minimum matroid base problem*.

**Question 5.2.**
Although graphic matroids are representable over any field $\mathbb{F}$, what are some examples of matroids that are not representable?

## References

[1] S. Axler. Linear Algebra Done Right. Undergraduate Texts in Mathematics. Springer, Cham, 3rd edition, 2015.

[2] J. Oxley. What is a Matroid? 2014.

[3] L. Turner, M. Ehrgott, and H. W. Hamacher. On the generality of the greedy algorithm for solving matroid base problems. Discrete Applied Mathematics, 195:114–128, 2015.

[4] W. T. Tutte. Matroids and graphs. Transactions of the American Mathematical Society, 90(3):527–552, 1959.

[5] H. Whitney. On the abstract properties of linear dependence. American Journal of Mathematics, 57(3):509, 1935.