## Quantum Gravity Seminar

### Week 8 - Vector Networks and the Four-Color Theorem

#### November 20, 2000

The Wizard thought a while and then said, "Let's take a break from our usual Track 2 stuff. I want to keep talking about spin networks.

Just for fun, let's see what we can do with spin networks where all the edges are labelled by the same spin. Labelling them all with spin 0 is boring -- every spin network like this gives an identity operator. We can't label them all with spin 1/2, since this would violate the admissibility condition at the vertices. So the simplest interesting case is to label all the edges with spin 1.

We start by introducing a simplified notation for these spin networks. Since all the edges are labelled by spin 1, we leave off the labels on edges. Also, it will be convenient to change our normalization of the trivalent vertex. We define a vertex with a little "x" on it:

```
 \     /
  \   /
   \ /
    x
    |
    |
    |
```
to be sqrt(-2) times the one we have been working with so far:
```
 \     /                  1\     /1
  \   /                     \   /
   \ /                       \ /
    x       =   sqrt(-2)      |
    |                         |
    |                         |
    |                        1|
```
Just as spin-1/2 particles are called "spinors" in physics, spin-1 particles are called "vectors", for reasons that will soon become clear. Thus we call a spin network with all edges labelled by spin 1 and all vertices labelled by a little "x" a "vector network".

Now let's derive the basic identities satisfied by vector networks. Since the spin-1 representation has an orthogonal structure on it, we have

```
 |     |              |         |
 |     |              |         |
 |     |              |         |
  \   /               |         |
   \ /       =        |         |
    /                 |         |
   / \                |         |
  /   \               |         |
 |     |               \       /
  \   /                 \     /
   \_/                   \___/
```
Also, a closed loop gives the dimension of the spin-1 rep:
```
     _______
    /       \
   /         \
  |           |
  |           |
  |           |     =  3
  |           |
  |           |
   \         /
    \_______/
```
Furthermore, the vertex is antisymmetric:
```
 \   /               |     |
  \ /                |     |
   /                 |     |
  / \                |     |
 /   \     =   -     |     |
 \   /                \   /
  \ /                  \ /
   x                    x
   |                    |
   |                    |
```
This follows from the homework problem I just assigned in Track 1:
```
 j1 \   / j2                           j1 |     | j2
     \ /                                  |     |
      /                                   |     |
     / \                                  |     |
    /   \     =   (-1)^{j1 + j2 - j3}     |     |
    \   /                                  \   /
     \ /                                    \ /
      |                                      |
      |                                      |
   j3 |                                   j3 |
```
The antisymmetry of the vertex together with our ability to remove twists implies that
```
       _               ___             _
      / \             /   \           / \
     /   \            \   /          /   \
    /     \            \ /          /     \
    \     /             /           \     /
     \   /             / \           \   /
      \ /     =   -   /   \   =  -    \ /
       x              \   /            x
       |               \ /             |
       |                x              |
       |                |              |
```
so
```
        _
       / \
      /   \
     /     \
     \     /
      \   /
       \ /
        x          =   0
        |
        |
        |
```
It follows that whenever a vector network has an edge connecting a vertex to itself, this network evaluates to zero.

The really interesting identity, however, is this:

```
 \   /             |      |         \     /
  \ /              |      |          \   /
   x               |      |           \ /
   |        =      |      |     -      /
   x               |      |           / \
  / \              |      |          /   \
 /   \             |      |         /     \
```
We call this the "spin-1 skein relation". It should remind you a bit of the binor identity. It's different, but it was also discovered by Penrose. To prove it, we go back to the definitions:
```
 \   /             1\   /1
  \ /                \ /
   x                  |
   |      = (-2)     1|
   x                  |
  / \                / \
 /   \             1/   \1
```
and then expand the right hand side, switching momentarily to our usual convention where unlabelled edges have spin 1/2:
```
      1\       /1             1\       /1           1\        /1
      -----  -----            -----  -----          -----   -----
       \ \__/ /                \ \__/ /              \ \___/ /
        \    /                  \    /                \     /
         |  |                    |  |                  \   /
 (-2)   ------        =    -     |  |        -          \ /
         |  |                    |  |                    /
         |  |                    |  |                   /  \
        / __ \                  / __ \                 / __ \
       / /  \ \                / /  \ \               / /  \ \
      -----  -----            -----   -----          -----  -----
      1/      \1              1/       \1           1/        \1
```
Then we use the binor identity on both terms. With a little help from the symmetrizers, we get
```
    1\          /1             1\           /1
    -----    -----             -----     -----
     \ \     / /                \ \      / /
      \ \   / /                  \ \    / /
       | \ / |                    | |  | |
  -    |  /  |          +         | |  | |
       | / \ |                    | |  | |
      / /   \ \                  / /    \ \
     / /     \ \                / /      \ \
    -----     -----            -----     -----
    1/          \1             1/          \1

    1\          /1               1\        /1
    -----    -----               -----   -----
     \ \     / /                  \ \   / /
      \ \   / /                    \ \ / /
       | \ /  |                     \ / /
  +    |  /   |         +              / /
       | / \  |                     / / \
      / /   \ \                    / / \ \
     / /     \ \                  / /   \ \
    -----     -----              -----   -----
    1/          \1               1/        \1
```
Two of the terms cancel, and the remaining two are just what we need to show that
```
 \   /             |      |         \     /
  \ /              |      |          \   /
   x               |      |           \ /
   |        =      |      |     -      /
   x               |      |           / \
  / \              |      |          /   \
 /   \             |      |         /     \
```
Here are two other nice identities:
```
    |              |
    |              |
    x              |
   / \             |
  /   \     =  2   |
  \   /            |
   \ /             |
    x              |
    |              |
    |              |
```
and
```
  \         /                  \         /
   \       /                    \       /
    x-----x                      \     /
     \   /           =            \   /
      \ /                          \ /
       x                            x
       |                            |
       |                            |
       |                            |
```
They are both great for simplifying vector networks. How do we prove them? The trick is just to find a place to apply the spin-1 skein relation, and apply it! We may need to bend things around a bit first, but the topological invariance of our rules says that's okay.

We prove the first identity as follows:

```
    |                |                   |                  |
    |             __ |              __   |             __   |
    x            /  \|             /  \  |            /  \  |
   / \           |    x            |    | |           /    \ /
  /   \   =      |    |     =      |    | |    -      |      /
  \   /          |    |            |    | |           \    / \
   \ /           |    x            |    | |            \__/  |
    x            \__/|             \__/  |                  |
    |                |                   |                  |
    |                |                   |                  |

                     |                   |
                     |                   |
                     |                   |
          =     3    |         -         |
                     |                   |
                     |                   |
                     |                   |

                |
                |
                |
          =  2  |
                |
                |
                |
```
I leave the other as an exercise.

And now let me reveal a secret that I've been hiding all along -- while still dropping small clues, so you could have caught on if you were really paying attention. What we are actually doing here is good old-fashioned vector algebra: the study of the dot product and cross product!"

Toby grinned as he realized what the trick was.

"We usually think of these as operations involving vectors in R^3, but we can use the same formulas to define them as operations involving vectors in C^3, which is the spin-1 rep of SU(2). The dot product corresponds to this vector network:

```
 \       /
  \     /
   \___/
```
while the cross product corresponds to this one:
```
 \     /
  \   /
   \ /
    x
    |
    |
```
(That's why I used a little "x" for it.)

One can show this either by direct computation or by a little group representation theory; we leave this as an exercise. By the way, you may have wondered which square root of -2 we used in our new normalization of the trivalent vertex. It doesn't really matter: the two choices correspond to using either a left-hand rule or a right-hand rule for the cross product!
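
Both facts are easy to spot-check numerically. Here is a quick sketch in Python with NumPy, using real coordinates (the basis vectors i, j, k have real entries, so nothing is lost):

```python
import numpy as np

# The standard basis i, j, k of C^3 (real entries suffice here)
i, j, k = np.eye(3)

# The dot product is the "cup" network pairing two inputs...
assert np.dot(i, i) == 1 and np.dot(i, j) == 0

# ...and the cross product is the trivalent vertex "x".
# With the right-hand rule:  i x j = k,  j x k = i,  k x i = j.
assert np.allclose(np.cross(i, j), k)
assert np.allclose(np.cross(j, k), i)
assert np.allclose(np.cross(k, i), j)

# Choosing the other square root of -2 flips the sign of the vertex,
# which is the same as switching to the left-hand rule:
assert np.allclose(-np.cross(i, j), np.cross(j, i))
```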

The classical identities of vector algebra are easily proved using diagrammatic methods. We have already seen the commutativity of the dot product and anticommutativity of the cross product:

```
 |     |              |        |
 |     |              |        |
  \   /               |        |
   \ /       =        |        |
    /                 |        |
   / \                |        |
  /   \               |        |
 |     |               \      /
  \   /                 \    /
   \_/                   \__/

               a.b  =  b.a

 \   /               |     |
  \ /                |     |
   /                 |     |
  / \                |     |
 /   \     =   -     |     |
 \   /                \   /
  \ /                  \ /
   x                    x
   |                    |
   |                    |

             a x b  =  - b x a
```
This identity:
```
 \     /     /        \     \     /
  \   /     /          \     \   /
   \ /     /      =     \     \ /
    x     /              \     x
     \___/                \___/

 (a x b).c     =     a.(b x c)
```
is true thanks to topology. If we take the spin-1 skein relation and convert one output to an input, we get:
```
 \     /     /        \     \     /         \     /    |
  \   /     /          \     \   /           \   /     |
   \ /     /      =     \     \ /       -     \_/      |
    x     /              \     \                       |
     \   /                \___/ \                      |
      \ /                        |                     |
       x                         |                     |
       |                         |                     |
       |                         |                     |

  a x (b x c)   =     (a.c) b        -    (a.b) c
```
If we convert both outputs to inputs, we get:
```
 \   /    \   /       \     \   /     /      \    \    /    /
  \ /      \ /         \     \ /     /        \    \__/    /
   x        x     =     \     /     /    -     \          /
   |        |            \   / \   /            \        /
    \______/              \_/   \_/              \______/

 (a x b).(c x d)  =    (a.c) (b.d)      -     (a.d) (b.c)
```
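
These identities are all easy to spot-check numerically on random vectors; since each side is a polynomial in the coordinates, agreement on random inputs is a strong sanity test. A short sketch using NumPy:

```python
import numpy as np

# Random test vectors; real coordinates suffice for checking the identities
rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 3))

# (a x b).c = a.(b x c): the triple product only cares about the topology
assert np.isclose(np.dot(np.cross(a, b), c), np.dot(a, np.cross(b, c)))

# The skein relation with one output bent up:
#   a x (b x c) = (a.c) b - (a.b) c
assert np.allclose(np.cross(a, np.cross(b, c)),
                   np.dot(a, c) * b - np.dot(a, b) * c)

# The skein relation with both outputs bent up:
#   (a x b).(c x d) = (a.c)(b.d) - (a.d)(b.c)
assert np.isclose(np.dot(np.cross(a, b), np.cross(c, d)),
                  np.dot(a, c) * np.dot(b, d) - np.dot(a, d) * np.dot(b, c))
```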
In short, spin networks know all about vector algebra!

Now, it's not hard to show that we can evaluate any closed vector network using just the identities we have derived so far. The resulting number is always an integer, since our identities only involve integers. This was the reason for changing our normalization of the trivalent vertex. We have also seen that this integer is zero if our vector network has an edge connecting a vertex to itself -- or in the language of graph theory, an "edge-loop".

That much is obvious. The following result is less obvious. We say a vector network is "planar" if it can be drawn in the plane without any edges crossing each other.

Theorem: Any planar vector network without edge-loops evaluates to a nonzero integer.

This is one of the hardest theorems in all of mathematics. So far, the only known proofs involve reducing it to thousands of special cases, and then checking these one by one on a computer. The reason is that this result is equivalent to the four-color theorem!"

The class gasped, and the Wizard began to really ham it up.

"The story of this theorem began one rainy day in October, 1852, when a fellow named Francis Guthrie was coloring a map of England. He wondered whether it was always possible to color any map with only 4 colors, in such a way that no two countries (or counties!) touching with a common stretch of boundary were given the same color. Guthrie's brother passed the question on to the famous mathematician De Morgan, who passed it on to his students and other mathematicians. In 1878 Cayley publicized it in the Proceedings of the London Mathematical Society. In just one year, the mathematician Kempe proved it. End of story.

Whoops! In 1890, Heawood found an error in Kempe's proof. And then the real fun began....

I won't tell the rest of the tale, leading up to how Appel and Haken finally proved it in 1976, with the help of a computer calculation involving 10^10 operations and taking 1200 hours. Instead, let me explain how the four-color theorem is equivalent to the above result about vector networks.

First, note that to prove the 4-color theorem, it suffices to consider the case where only three countries meet at any "corner," since if more meet there, say four:

```
       |
       |
       |
--------------
       |
       |
       |
```
we can stick in a little country at the corner:
```
        |
        |
       / \
  -----    ------
       \ /
        |
        |
```
so that now only three meet at each corner. If we can color the resulting map, it's easy to check that the same coloring with the little countries deleted gives a coloring of the original map.

Let's use the language of graph theory, calling the map a "graph," the countries "faces," their borders "edges," and the corners "vertices." The graph coming from our map never has an edge-loop:

```
      _
     / \
    /   \
   /     \
   \     /
    \   /
     \ /
      |
      |
      |
```
So we've shown that to prove the four-color theorem, it suffices to consider trivalent planar graphs without edge-loops!

Now, to four-color the faces of a graph like this, it turns out to be enough to three-color the edges. In other words, it's enough to label its edges with the letters i, j and k in such a way that no two edges labelled by the same letter meet at a vertex. For example:

```
   ___i______j____
  |\       /     /\
  | j     k     i  \
  |  \   /     /    k
  |   \ /     /      \
  k    \     / \     |
  |     i   k   j   /
   \     \ /     \ /
    \     |       /
     \    j      i
      \   |     /
       \__|____/
```
How does this give a four-coloring of the faces? Can anyone guess?"

He paused, but nobody guessed.

"Well, the trick is to use this profound relationship between the numbers 3 and 4: there are 3 ways to divide the set of 4 colors into two pairs:

```
    R ---- G              R      G               R     G
                          |      |                \   /
                          |      |                  /
                          |      |                /   \
    B ---- Y              B      Y               B     Y
```
We call these 3 ways "i", "j" and "k". To get a four-coloring of our graph, we start by coloring one face arbitrarily, say red:
```
   ___i______j____
  |\       /     /\
  | j     k     i  \
  |  \   /     /    k
  |   \ /     /      \
  k    \     / \     |
  |     i   k   j   /
   \     \ /     \ /
    \     |       /
     \    j  R   i
      \   |     /
       \__|____/
```
Then we color the rest using this rule: whenever we cross an edge, the face color switches to the other color in the pair corresponding to the letter on the edge:
```
       i                   j                    k

    R --- G             R     G              R     G
                        |     |               \   /
                        |     |                 /
                        |     |               /   \
    B --- Y             B     Y              B     Y

```
In the above example we get this four-coloring:
```
                   G

        ___i______j____
       |\       /     /\
       | j  R  k     i  \
       |  \   /     /    k
       |   \ /  Y  /  B   \
       k    \     / \     |
       |     i   k   j   /
        \     \ /     \ /
   G     \  B  |       /      G
          \    j  R   i
           \   |     /
            \__|____/
```
I'll leave it as a puzzle to check that this rule always gives a well-defined four-coloring. In particular, you should check that as you march around a vertex following this rule, the 3 countries you go through have different colors, and by the time you get back where you started, you get back to the same color. This isn't hard.
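
The vertex check, at least, can be verified mechanically. Here is a sketch in Python: the three swaps are the non-identity elements of the Klein four-group, and the particular assignment of letters to partitions below is one consistent convention (it matches the example above):

```python
from itertools import permutations

# One concrete choice of the three pair-partitions of {R, G, B, Y};
# which partition is called "i", "j" or "k" is purely conventional.
PARTNER = {
    'i': {'R': 'G', 'G': 'R', 'B': 'Y', 'Y': 'B'},
    'j': {'R': 'B', 'B': 'R', 'G': 'Y', 'Y': 'G'},
    'k': {'R': 'Y', 'Y': 'R', 'G': 'B', 'B': 'G'},
}

def cross_edge(color, letter):
    """Crossing an edge labelled `letter` swaps the face color with its
    partner in the corresponding pair."""
    return PARTNER[letter][color]

# At each vertex the three edges carry the letters i, j, k in some order,
# so marching once around a vertex must (1) pass through three distinct
# colors and (2) return to the starting color.
for start in 'RGBY':
    for order in permutations('ijk'):
        trail = [start]
        for letter in order:
            trail.append(cross_edge(trail[-1], letter))
        assert trail[-1] == start          # well-defined: back to start
        assert len(set(trail[:-1])) == 3   # three distinct face colors
```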

Even better, we can run this rule backwards. If we start with a four-coloring of the faces, it uniquely determines a way to label the edges by i, j, k so that no two edges meeting at a vertex get the same letter.

In short, the 4-color theorem is equivalent to this result:

Theorem: For any trivalent planar graph without edge-loops, there exists a way to label the edges by the letters i, j and k so that no two edges meeting at a vertex are labelled by the same letter.
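
For small graphs this can be checked by brute force. A sketch in Python for the smallest case, the complete graph K4 (which is trivalent and planar, with no edge-loops):

```python
from itertools import product

# Edges of K4, the smallest trivalent planar graph without edge-loops
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def proper(colors):
    """No two edges sharing a vertex may carry the same letter."""
    return all(colors[e] != colors[f]
               for e in range(len(edges))
               for f in range(e + 1, len(edges))
               if set(edges[e]) & set(edges[f]))

colorings = [c for c in product('ijk', repeat=len(edges)) if proper(c)]
assert colorings  # K4 can be three-edge-colored, as the theorem predicts
```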

Next, let's see why this theorem is equivalent to the fact that any planar vector network without edge-loops evaluates to a nonzero integer.

Think of i, j and k as the usual basis for C^3, and imagine evaluating a vector network by contracting a lot of tensors, one for each vertex. Since the vertex of a vector network is just the cross product in disguise, the tensor sitting at each vertex is

```
 epsilon_abc  =   1    if (a,b,c) is an even permutation of (1,2,3) as we
                       march counterclockwise around the vertex
                 -1    if (a,b,c) is an odd permutation of (1,2,3) as we
                       march counterclockwise around the vertex
                  0    otherwise
```
The spin network is thus given by a sum over all ways of labelling the edges by i, j, and k, where each term in this sum is a product over all vertices of numbers 1, -1, or 0, computed using the above rule.

If there is no way to three-color the edges of our graph, all the terms in this sum will vanish, so our network will evaluate to zero. Thus the only thing left to check is the converse: if the network evaluates to zero, all the terms in the sum must vanish -- so there are no ways to three-color its edges! I'll leave the proof of this as an exercise for the truly courageous among you. In Penrose's paper on the subject he omits this proof, saying there are lots of ways to do it, but none of them very snappy.
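
The evaluation procedure just described is easy to carry out for small networks. Here is a sketch with NumPy, contracting one epsilon tensor per vertex (indices 0, 1, 2 standing for i, j, k); it reproduces the identities we derived diagrammatically:

```python
import numpy as np

# The totally antisymmetric tensor: the trivalent vertex of a vector network
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1, -1

delta = np.eye(3)

# A closed loop is the trace of the identity on C^3:
assert np.trace(delta) == 3

# The spin-1 skein relation:
#   sum_e eps_abe eps_cde  =  delta_ac delta_bd - delta_ad delta_bc
assert np.allclose(np.einsum('abe,cde->abcd', eps, eps),
                   np.einsum('ac,bd->abcd', delta, delta)
                   - np.einsum('ad,bc->abcd', delta, delta))

# The bubble identity: joining two vertices by two edges gives 2
assert np.allclose(np.einsum('abe,cbe->ac', eps, eps), 2 * delta)

# An edge-loop at a vertex forces a repeated index, so every term vanishes
assert np.allclose(np.einsum('abb->a', eps), 0)
```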

Anyway: this means that the four-color theorem is really a theorem about vector networks! We can also reformulate it purely in terms of the vector cross product, as follows:

Theorem: Given two parenthesizations of the iterated cross product
```v1 x v2 x ... x vn
```
there is a way of choosing each vector v1, ..., vn from the set of basis vectors {i,j,k} so that both parenthesizations give the same nonzero result.
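
For n = 3 this can be confirmed by exhaustive search. A small Python sketch comparing the two bracketings of v1 x v2 x v3:

```python
import numpy as np
from itertools import product

basis = {'i': np.array([1.0, 0.0, 0.0]),
         'j': np.array([0.0, 1.0, 0.0]),
         'k': np.array([0.0, 0.0, 1.0])}

def left(a, b, c):   # (v1 x v2) x v3
    return np.cross(np.cross(a, b), c)

def right(a, b, c):  # v1 x (v2 x v3)
    return np.cross(a, np.cross(b, c))

# Search all 27 choices of basis vectors for agreeing nonzero results
witnesses = []
for names in product('ijk', repeat=3):
    a, b, c = (basis[n] for n in names)
    if np.allclose(left(a, b, c), right(a, b, c)) and left(a, b, c).any():
        witnesses.append(names)

assert ('i', 'j', 'i') in witnesses  # both bracketings give j
```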

Who'd have guessed the cross product held such deep mysteries?

Can we use any of these reformulations to find a better proof of the four-color theorem? Nobody knows! If any of you can do it, you'll immediately be made a wizard, without needing to pass through the usual arduous apprenticeship."