
The Cartesian Product of Two Graphs

(Ed. Note: A pdf version of this article is attached at the end of the post for offline reading.)

Introduction and Preliminaries

Graphs are objects like any other, mathematically speaking. We can define operations on two graphs to make a new graph. We’ll focus in particular on one type of graph product, the Cartesian product, and its elegant connection with matrix operations.

We mathematically define a graph G to be a set of vertices coupled with a set of edges that connect those vertices. We write G = (V_{G}, E_{G}). As an example, the left graph in Figure 1 has three vertices V_{G} = \{v_{1}, v_{2}, v_{3}\}, and two edges E_{G} = \{v_{1}v_{2}, v_{2}v_{3}\}. The order of G is the number of vertices, and the size of G is the number of edges. So our graph G has order n=3 and size m=2. This particular graph has a special name, P_{3}, because it is a path on 3 vertices. Two vertices that are connected by an edge are adjacent, denoted with the symbol \sim.
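If it helps to see this concretely, here is a minimal sketch (in Python, which the original post does not use) of P_{3} encoded as a vertex set and an edge set, mirroring the notation above:

# P3 as a vertex set and an edge set, mirroring G = (V_G, E_G) above.
V_G = {"v1", "v2", "v3"}
E_G = {frozenset({"v1", "v2"}), frozenset({"v2", "v3"})}  # undirected edges

order = len(V_G)  # n = 3
size = len(E_G)   # m = 2

def adjacent(u, v, edges=E_G):
    """True if u ~ v, i.e. u and v share an edge."""
    return frozenset({u, v}) in edges

print(order, size, adjacent("v1", "v2"), adjacent("v1", "v3"))  # 3 2 True False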

Cartesian Product of Graphs

Now that we’ve dispensed with the necessary terminology, we shall turn our attention to performing operations on two graphs to make a new graph; in particular, a type of graph multiplication called the Cartesian product. Suppose we have two graphs, G and H, with orders n_{G} and n_{H} respectively. Formally, we define the Cartesian product G \times H to be the graph with vertex set V_{G} \times V_{H}. Pause here to note what we mean by this. The vertex set of the Cartesian product graph is the Cartesian product of the vertex sets of the two graphs: V_{G \times H} = V_{G} \times V_{H}. This means that G \times H has n_{G}n_{H} vertices. As an example, P_{3} \times P_{2} has 3\cdot 2 = 6 vertices. The vertices of G \times H are ordered pairs formed by V_{G} and V_{H}. For P_{3} \times P_{2} we have,

V_{P_{3} \times P_{2}} = \{(v_{1}, x_{1}),(v_{1}, x_{2}),(v_{2}, x_{1}),(v_{2}, x_{2}), (v_{3}, x_{1}),(v_{3}, x_{2}) \}

The edge set of G \times H is defined as follows (a short code sketch after this list makes the two conditions concrete): (v_{i}, x_{k}) is adjacent to (v_{j}, x_{l}) if

  1. v_{i} = v_{j} and x_{k} \sim x_{l}, or
  2. x_{k} = x_{l} and v_{i} \sim v_{j}
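Here is that sketch: a small Python function (my own, not from the original post; the helper name cartesian_product is hypothetical) that builds G \times H directly from this definition. The labels v1, v2, v3, x1, x2 match the example above.

from itertools import product

def cartesian_product(V_G, E_G, V_H, E_H):
    """Cartesian product of two graphs given as (vertex set, set of frozenset edges)."""
    def adj(u, v, E):
        return frozenset({u, v}) in E

    V = set(product(V_G, V_H))  # vertices are ordered pairs (g, h)
    E = set()
    for (g1, h1), (g2, h2) in product(V, V):
        if (g1, h1) == (g2, h2):
            continue
        # condition (1): same first coordinate, second coordinates adjacent in H
        # condition (2): same second coordinate, first coordinates adjacent in G
        if (g1 == g2 and adj(h1, h2, E_H)) or (h1 == h2 and adj(g1, g2, E_G)):
            E.add(frozenset({(g1, h1), (g2, h2)}))
    return V, E

# P3 and P2 with the labels used in the article
V_P3 = {"v1", "v2", "v3"}
E_P3 = {frozenset({"v1", "v2"}), frozenset({"v2", "v3"})}
V_P2 = {"x1", "x2"}
E_P2 = {frozenset({"x1", "x2"})}

V, E = cartesian_product(V_P3, E_P3, V_P2, E_P2)
print(len(V), len(E))  # 6 vertices, 7 edges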

 

Let’s create P_{3} \times P_{2} as an example:

 

The red edges are due to condition (1) above, and the remaining edges to condition (2). Interestingly enough, this operation is commutative up to isomorphism (a relabeling of vertices that maintains the graph structure). This will be examined in a later section.

Graphs and Matrices

We can also operate on graphs using matrices. The pictures above are one way to represent a graph. An adjacency matrix is another. The adjacency matrix A_{G} of a graph G of order n is an n \times n square matrix whose entries a_{ij} are given by 

a_{ij} = \begin{cases} 1, & v_{i} \sim v_{j} \\0, & i=j \text{ or } v_{i} \not\sim v_{j}\end{cases}.
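As a quick illustration (a sketch of my own; numpy is assumed and is not part of the original post), an adjacency matrix can be built from an edge list like so:

import numpy as np

def adjacency_matrix(vertices, edges):
    """Adjacency matrix with rows/columns ordered by the given vertex list."""
    index = {v: i for i, v in enumerate(vertices)}
    A = np.zeros((len(vertices), len(vertices)), dtype=int)
    for u, v in edges:
        A[index[u], index[v]] = 1
        A[index[v], index[u]] = 1
    return A

# P3: v1 - v2 - v3, and P2: x1 - x2
A_P3 = adjacency_matrix(["v1", "v2", "v3"], [("v1", "v2"), ("v2", "v3")])
A_P2 = adjacency_matrix(["x1", "x2"], [("x1", "x2")])
print(A_P3)  # matches the 3x3 matrix below
print(A_P2)  # matches the 2x2 matrix below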

The adjacency matrices for P_{3} and P_{2}, respectively, are

A_{P_{3}} = \begin{bmatrix} 0 & 1 & 0\\ 1 & 0 & 1\\0 & 1 & 0\end{bmatrix} \qquad A_{P_{2}} = \begin{bmatrix} 0 & 1\\1 & 0\end{bmatrix}

Note that a relabeling of vertices simply permutes the rows and columns of the adjacency matrix. 
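We can check this numerically: if P is the permutation matrix of the relabeling, the relabeled adjacency matrix is P A P^{T}. A small sketch (assuming numpy, with a hypothetical relabeling that swaps v_{1} and v_{2}):

import numpy as np

A_P3 = np.array([[0, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0]])

# Permutation matrix swapping the labels v1 and v2
P = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])

A_relabeled = P @ A_P3 @ P.T  # rows and columns 1 and 2 are swapped
print(A_relabeled)
# [[0 1 1]
#  [1 0 0]
#  [1 0 0]]  -- the same path, with the center vertex now labeled v1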

Adjacency Matrix of the Cartesian product of Graphs

What is the adjacency matrix of G \times H? If G and H have orders n_{G} and n_{H} respectively, we can show that

A_{G \times H} = A_{G} \otimes I_{n_{H}} + I_{n_{G}} \otimes A_{H}

where \otimes is the Kronecker product, and I_{m} is the m\times m identity matrix. Recall that the Kronecker product of two matrices A_{m \times n} \otimes B_{k \times l} is the mk \times nl matrix given by 

A \otimes B = \begin{bmatrix} a_{11}B & a_{12}B &\ldots & a_{1n}B \\ a_{21}B & a_{22}B & \ldots & a_{2n}B \\ \vdots & & \ddots& \vdots \\ a_{m1}B & a_{m2}B & \ldots & a_{mn}B\end{bmatrix}

In general, the Kronecker product is not commutative, so A \otimes B \neq B \otimes A. 
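A two-line check of that non-commutativity (numpy's kron is assumed here; it is my addition, not the post's):

import numpy as np

A_P3 = np.array([[0, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0]])
I2 = np.eye(2, dtype=int)

# The two Kronecker products have the same size but different entries
print(np.array_equal(np.kron(A_P3, I2), np.kron(I2, A_P3)))  # False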

We can prove the formula for A_{G \times H} above using standard matrix theory, but this really wouldn’t be that illuminating. We’d see that the statement is indeed true, but we would gain very little insight into how this affects the graph and why. We shall break apart the formula to see why the Kronecker product is needed, and what the addition is doing. 

Let us return to finding P_{3} \times P_{2}. By our formula, A_{P_{3} \times P_{2}} = A_{P_{3}} \otimes I_{2} + I_{3} \otimes A_{P_{2}}. Notice first why the dimensions of the identity matrices “cross” each other; that is, we use the identity matrix of order n_{P_{2}} in the Kronecker product with A_{P_{3}}, and the identity matrix of order n_{P_{3}} with A_{P_{2}}. This ensures we get the correct dimension for the adjacency matrix of P_{3} \times P_{2}, which is 6 \times 6.

We’ll walk through the computation one term at a time. First,

\begin{aligned}A_{P_{3}} \otimes I_{2} &= \begin{bmatrix} 0 & 1 & 0\\ 1 & 0 & 1\\0 & 1 & 0\\\end{bmatrix} \otimes \begin{bmatrix} 1 & 0\\0 & 1\end{bmatrix} \\&= \begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0\\\end{bmatrix}\end{aligned}

This matrix is the adjacency matrix of a 6-vertex graph given below:

Notice that we now have two copies of P_{3} in one graph. Similarly, I_{3} \otimes A_{P_{2}} is

\begin{aligned}I_{3} \otimes A_{P_{2}} &= \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 1 & 0\end{bmatrix}\end{aligned}

 

Here the second Kronecker product term has made three copies of P_{2}. The last step is to add these two matrices together to obtain the adjacency matrix for P_{3} \times P_{2}. Graphically, what’s happening? This addition term overlays our 3 copies of P_{2} (usually written 3P_{2} or P_{2} \cup P_{2} \cup P_{2}) onto the respectively labeled vertices of our two copies of P_{3} (written 2P_{3} or P_{3} \cup P_{3}). That is,

\begin{aligned}A_{P_{3} \times P_{2}} &= A_{P_{3}} \otimes I_{2} + I_{3} \otimes A_{P_{2}}\\&= \begin{bmatrix} 0 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & 0 \\1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 1\\ 0 & 0 & 0 & 1 & 1 & 0\end{bmatrix}\end{aligned}
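As a sanity check on the formula, here is a short script (a sketch assuming numpy and networkx, neither of which appears in the original post) that compares the Kronecker-product construction against networkx's own Cartesian product. Listing the product vertices as pairs (g, h) with h varying fastest matches the Kronecker ordering.

import numpy as np
import networkx as nx

G = nx.path_graph(3)   # P3, nodes 0, 1, 2
H = nx.path_graph(2)   # P2, nodes 0, 1

A_G = nx.to_numpy_array(G, dtype=int)
A_H = nx.to_numpy_array(H, dtype=int)

# Kronecker-product formula for the adjacency matrix of the Cartesian product
A_formula = np.kron(A_G, np.eye(2, dtype=int)) + np.kron(np.eye(3, dtype=int), A_H)

# networkx's Cartesian product, with vertices ordered as (g, h) pairs, h fastest
GH = nx.cartesian_product(G, H)
order = [(g, h) for g in G.nodes for h in H.nodes]
A_direct = nx.to_numpy_array(GH, nodelist=order, dtype=int)

print(np.array_equal(A_formula, A_direct))  # True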

which is the adjacency matrix of our originally obtained graph

 

Note that here the vertex labels aren’t ordered pairs. That’s OK. Technically we shouldn’t have labeled the vertices of the two graphs identically, but the goal was to illustrate the action of the Cartesian product. We can always relabel the vertices. Formally, the overlaying of 3P_{2} onto the vertices of 2P_{3} forms those ordered-pair vertices.

Commutativity of the Graph Cartesian Product

A natural question for any operation is whether or not it possesses the property of commutativity. An operation is commutative if the order in which we take it produces the same result. That is, if \square is an operation, a\square b = b\square a.

Does G \times H yield the same graph as H \times G? In a way, yes. Let’s examine what happens if we switch the order of the graph Cartesian product and take P_{2} \times P_{3}. 

We won’t go quite as thoroughly through each step, and instead give the results. 

We know the adjacency matrix of P_{2} \times P_{3} is given by 

A_{P_{2} \times P_{3}} = A_{P_{2}} \otimes I_{3} + I_{2} \otimes A_{P_{3}}

Computing the first term,

\begin{aligned}A_{P_{2}} \otimes I_{3} &= \begin{bmatrix}0 & 0 & 0 & 1 & 0 & 0\\0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0\end{bmatrix}\end{aligned}

Notice again we have 3 copies of P_{2}, but the labeling and drawing are now different. If we create a mapping \phi that relabels v_{4} as v_{2}, v_{6} as v_{4}, and v_{2} as v_{6}, then we’d get back the graph we drew from the adjacency matrix formed by I_{3} \otimes A_{P_{2}}. Thus, the two graphs are structurally equivalent; they just look different due to labeling and redrawing. The mapping \phi is called an isomorphism because it transforms one graph into the other without losing any structural properties, and the two graphs are isomorphic.

This implies that A_{P_{2}} \otimes I_{3} \neq I_{3} \otimes A_{P_{2}}, but if we permute rows and columns, we can transform one matrix into the other. If the labelings on the vertices don’t matter, then the operation is commutative up to isomorphism. If the labelings do matter (more common in engineering), then we do not get the “same” graph by switching the order of the Kronecker product.

We’ll do the same thing and examine I_{2} \otimes A_{P_{3}} and compare it to A_{P_{3}}\otimes I_{2}:

\begin{aligned}I_{2} \otimes A_{P_{3}} &= \begin{bmatrix}0 & 1 & 0 & 0 & 0 & 0\\1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0\end{bmatrix}\end{aligned}

The corresponding graph is 

Notice again that here we have two copies of P_{3} just as when we created the graph formed by the matrix A_{P_{3}}\otimes I_{2}, but the labels are again permuted. We have generated an isomorphic graph. 

Finally, we add the two matrices together (or follow the definition to create the last step of the Cartesian product by overlaying vertices and fusing them together). 

\begin{aligned}A_{P_{2}} \otimes I_{3} + I_{2} \otimes A_{P_{3}} &= \begin{bmatrix}0 & 1 & 0 & 1 & 0 & 0\\1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 0 \end{bmatrix}\end{aligned}

This is isomorphic to P_{3} \times P_{2}. We can show this by finding a mapping \phi that “untangles” P_{2} \times P_{3} by relabeling vertices. We can also redraw the ladder structure we saw with P_{3} \times P_{2} and label the vertices so that we get the structure of P_{2} \times P_{3}.
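A quick way to confirm the “same up to isomorphism” claim without hunting for \phi by hand (again a sketch using networkx, which is my addition):

import networkx as nx

P3 = nx.path_graph(3)
P2 = nx.path_graph(2)

G1 = nx.cartesian_product(P3, P2)   # P3 x P2
G2 = nx.cartesian_product(P2, P3)   # P2 x P3

# Different labeled graphs (the vertex pairs are reversed)...
print(set(G1.nodes) == set(G2.nodes))   # False
# ...but the same graph up to isomorphism
print(nx.is_isomorphic(G1, G2))         # True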

Our conclusion is that the graph Cartesian product is “sort of” commutative as an operation. If the vertices are unlabeled or the labels don’t matter, then the operation is commutative because we can always relabel vertices or “untangle” the structure. If the vertices are labeled and the labels matter, then we don’t get the same graph back when we switch the order in the Cartesian product. Thus, we can say that the graph Cartesian product is commutative up to isomorphism.

Continuation

A natural question after looking through all of this is the following:

Given an adjacency matrix, can we decompose it into a Cartesian product of two graphs?

The next article will address this.


Using Boolean Algebra to Find all Maximal Independent Sets in a Graph

Graph theory may be one of the most widely applicable topics I’ve seen in mathematics. It’s used in chemistry, coding theory, operations research, electrical and network engineering, and so many other places. The subject is mainly credited as having begun with the famous Seven Bridges of Königsberg problem, solved by Leonhard Euler in 1736. Frank Harary should also be credited for his massive work in bringing applications of graph theory to the sciences and engineering with his famous textbook written in 1969.

My own research forced me to stumble into this area once my  research partner, Jason Hathcock, suggested we explore the idea of viewing dependency relations in the sequences of variables we were studying as digraphs. Since then, I’ve been buried in graph theory texts, finding a wealth of fascinating topics to explore.

This article’s particular interest is finding all maximal independent sets in a graph using Boolean algebra.

What’s a maximal independent set?

Firstly, what’s an independent set? 


Definition (Independent Set): A set of vertices of a graph is independent if no two vertices in the set are adjacent. 



If we take the digraph above (from our paper on vertical dependence) and look at the underlying graph1, then \{1,6,11\} is an independent set, for example. There are lots more, of varying sizes. Of particular interest here are maximal independent sets.


Definition (Maximal Independent Set): An independent set to which no other vertex of the graph can be added while preserving the independence property.


An example from the graph above is \{2,3,4,5,13\}. If we added any other vertex to that set, it would be adjacent to some vertex already in there. 

A few notes:

(1) There are many maximal independent sets in a graph, and they may not all have the same cardinality. 

(2) Maximal and maximum are not the same thing. An independent set may be a maximal independent set without being the largest independent set in the graph. The largest cardinality among all the maximal independent sets is called the independence number of the graph and is denoted \beta(G).

Why do we care about maximal independent sets?

Of the many applications that arise, one in particular is in coding theory. We want to find the largest error correcting codes we can, particularly in internet transmissions that can lose packets. A paper discussing this can be found here. (Paywall warning). We’ve discussed some basics of coding theory on this site as well. Finding error correcting codes with desirable properties is equivalent to solving the problem of finding maximal independent sets. The purpose of this article isn’t to discuss the applications here, but I’ve learned long ago that no one will keep reading unless I mention at least one application. 

Finding a maximal independent set

Dependency Graph for a monotonic example

Finding a maximal independent set is relatively simple. Start with any vertex v \in V(G). Add another vertex u that is not adjacent to v. Continue adding vertices that are not adjacent to any already in the set. For a finite graph2, this process will terminate and the result will be a maximal independent set.
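Here is a sketch of that greedy procedure (Python with networkx, my own additions; the cycle C_{5} is just a stand-in example, not a graph from this post):

import networkx as nx

def greedy_maximal_independent_set(G, start):
    """Grow an independent set from `start` by adding any vertex that is
    not adjacent to anything already chosen (the procedure described above)."""
    chosen = {start}
    for v in G.nodes:
        if v in chosen:
            continue
        if all(not G.has_edge(v, u) for u in chosen):
            chosen.add(v)
    return chosen

# Example on the 5-cycle: every maximal independent set here has size 2.
C5 = nx.cycle_graph(5)
print(greedy_maximal_independent_set(C5, 0))  # {0, 2}

If I recall correctly, networkx also ships a maximal_independent_set routine that does essentially this with a randomized vertex order, if you would rather not roll your own.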

Will it be one of largest cardinality? Not necessarily. 

For example, using one more of our dependency graphs generated by \alpha(n) = \sqrt{n}, we can take the order to be 24 as shown, and build a maximal independent set starting with vertex 3. Note that none of vertices 9-15 or 1 can be in the set, since they’re all adjacent to vertex 3. Vertex 2 is not adjacent to vertex 3, so we add it into our set: V = \{2,3\}. Now, the next vertex we add can’t be adjacent to either 2 or 3, so that rules out 1, 9-15, and 4-8. Grab vertex 16. Now V = \{2,3,16\}. Notice that none of the remaining vertices are adjacent to any of the previous vertices. Continuing this process, we’ll get that V = \{2,3,16,17,18,19,20,21,22,23,24\}. Notice that if we add any other vertices to this set, they’ll be adjacent to something already in it. 

Finding all Maximal Independent Sets

We’re rarely interested in finding just one maximal independent set. We’d prefer to find them all, and doing it by inspection is not very palatable. The heart of this article is an admittedly non-optimal but still interesting way to find all maximal independent sets for reasonably small graphs.

Image credit: https://www.geeksforgeeks.org/mathematics-graph-theory-basics/

We’ll illustrate the method on the 6-node graph above. 

Getting started

First, we’ll assign a Boolean variable to each vertex according to its inclusion in a maximal independent set. For example A = 1 implies A is in the maximal independent set. Recall from Boolean algebra that 

x+y = \left\{\begin{array}{lr}1, & x = 1 \text{ or } y = 1 \text{ or } (x=y=1)\\0,&x=0 \text{ and } y=0\end{array}\right.

Remark: x+y is just another way of writing a union. This isn’t addition mod 2 here. 

xy=\left\{\begin{array}{lr}1, & x = 1 =y\\0,&\text{ otherwise}\end{array}\right.

What we’ve done here is set up inclusion into our maximal independent sets in a Boolean fashion. So x+y = 1 corresponds to the inclusion of either vertex x OR vertex y OR both vertices x and y. Similarly, xy = 1 corresponds to the inclusion of both vertices x and y.

Now, we can express an edge of a graph as a Boolean product xy, where x and y are the vertices at either end of the edge. 

Finally, set up the sum of all edges and call it \phi:

\phi = \sum xy \text{ for all edges } xy \in E_{G}

For our graph above,

\phi = AB + AD + AE + BC + CE + CF + DE + EF

Why did we do this?

For a vertex to be in an independent set, it can’t be adjacent to any other vertices in the set. Put another way, for each edge, we can only have at most one of the vertices that make it up. If we include A in the independent set V, then B cannot be in there. 

Returning to our \phi, note that its value under Boolean algebra can only be 0 or 1. If \phi = 1, then at least one edge has both of its vertices “on”. This means only those combinations of A, B, C, D, E, F that yield \phi = 0 correspond to independent sets; the maximal independent sets are the maximal such combinations.
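To see \phi acting as an indicator, here is a small check (Python, my own sketch; the edge list is simply the one appearing in \phi above):

# Edges of the example graph, one pair per Boolean product in phi
edges = [("A", "B"), ("A", "D"), ("A", "E"), ("B", "C"),
         ("C", "E"), ("C", "F"), ("D", "E"), ("E", "F")]

def phi(included):
    """Boolean sum over edges of the product xy: 1 if any edge has both
    endpoints included, 0 otherwise."""
    return int(any(x in included and y in included for x, y in edges))

print(phi({"B", "D", "F"}))  # 0: an independent set
print(phi({"A", "B"}))       # 1: A and B are adjacent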

Solving the problem

Our goal now is to find all combinations of our Boolean vertex variables that yield \phi = 0. As it turns out, solving this directly is pretty annoying3. If we want \phi = 0, that’s logically equivalent to seeking \phi^{c} = 1, where \phi^{c} is the Boolean complement (or negation) of \phi

Recall from Boolean algebra the following4:

\begin{aligned}(xy)^{c}&=x^{c}+y^{c}\\(x+y)^{c} &= x^{c}y^{c}\end{aligned}

So, if we take \phi^{c} for our graph above, 

\begin{aligned}\phi^{c}&=(A^{c}+B^{c})(A^{c}+D^{c})(A^{c}+E^{c})(B^{c}+C^{c})(C^{c}+E^{c})\\&\quad(C^{c}+F^{c})(D^{c}+E^{c})(E^{c}+F^{c})\end{aligned}

What does the negation here actually mean? By taking the complement, instead of finding vertices to include, we’re now finding vertices to exclude. When we multiply this expression out, we’ll get a sum of terms, where each term is a product of complements of our original Boolean variables. To get \phi^{c} = 1, all we need is for one of those terms to be 1. To get a term to be 1, all members of the product must themselves be 1, meaning each term gives us a set of variables to exclude. Excluding these variables gives us one maximal independent set for each term, so this gives us all the maximal independent sets.

The nice thing about dealing with Boolean arithmetic is that we can program a computer to do this for us. Any time we can invoke a relationship with Boolean algebra, we can enlist a friendly helpful computer. 
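Here is one way to hand the tedium off to a computer: a sketch (plain Python, my own and not from the post) that multiplies out \phi^{c} as a product of sums, applying the absorption rule x + xy = x as it goes. Each surviving term is a set of vertices to exclude, and its complement is a maximal independent set.

from itertools import combinations  # not strictly needed; kept for experimentation

# Edges of the example graph; each edge (x, y) contributes the factor (x^c + y^c)
edges = [("A", "B"), ("A", "D"), ("A", "E"), ("B", "C"),
         ("C", "E"), ("C", "F"), ("D", "E"), ("E", "F")]
vertices = {"A", "B", "C", "D", "E", "F"}

# Multiply out the product of sums, keeping each term as a frozenset of
# complemented variables, and apply absorption (drop supersets) at each step.
terms = {frozenset()}
for x, y in edges:
    expanded = {t | {x} for t in terms} | {t | {y} for t in terms}
    terms = {t for t in expanded
             if not any(s < t for s in expanded)}  # keep only minimal terms

# Each surviving term lists the vertices to EXCLUDE; its complement is a
# maximal independent set.
maximal_independent_sets = [sorted(vertices - t) for t in terms]
for s in sorted(maximal_independent_sets):
    print(s)

Running this on the edge list above prints the five sets \{A,C\}, \{A,F\}, \{B,D,F\}, \{B,E\}, \{C,D\}, matching the hand computation below.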

Finishing the example

We’ll do it by hand here, because I’m old-school like that. For larger graphs, obviously one would want to enlist some computational help, or just be very patient. We’ll remember a few other rules for Boolean algebra before we finish5:

\begin{aligned}xx &=x\\x+x &=x\\x +xy&=x\end{aligned}

After an insane amount of tedious Boolean algebra,

\phi^{c} = A^{c}C^{c}E^{c}+A^{c}B^{c}E^{c}F^{c}+A^{c}C^{c}D^{c}F^{c}+B^{c}C^{c}D^{c}E^{c}+B^{c}D^{c}E^{c}F^{c}

Recall that each term now tells us which set of vertices to exclude to obtain a maximal independent set. We negated the question logically. That means we have 5 maximal independent sets:

\{B,D,F\}, \{C,D\}, \{B,E\}, \{A,F\}, \{A,C\}

We can actually say what the independence number is as well, since we just have to find the maximum cardinality among the sets listed. For this graph, \beta(G) = 3.

Conclusion

I happened to find this interesting, and ended up obsessed with it for a day, much to the chagrin of my daily planner, which expected me to be working on writing my research monograph. I tried several different ways of solving this beyond the one given. I tried using the direct equation \phi, and tried using regular arithmetic on just \{0,1\}, setting up a sort-of structure function similar to the reliability block diagrams detailed here. 

I always hesitate to blame the method rather than my own arithmetic errors, but I didn’t have much luck with the structure-function approach, though I may try again to see whether it’s an equivalent method. I believe it should be.

Looking at \phi^{c} makes more sense after playing with this problem for some hours. The sum/union is quite nice, because it neatly separates out the various sets to exclude. It’s a better exploitation of Boolean algebra than trying to work with \phi but aiming for a sum of 0. I still think it should be possible to work with it directly, even if not advisable. If I decide to torture myself with it further, and end up with something to write about, perhaps I’ll append here.

I always end up ending my articles with some takeaway. I don’t have much of one here, except that it was a curiosity worth sharing. Perhaps a decent takeaway is to reveal a bit of the tedium and dead-ends mathematicians can run into when exploring something. That’s just part of research and understanding. It’s entirely possible to spend hours, days, weeks on something and all you conclude is that the original method you saw is definitely superior to the one you were trying to develop.