Quantum Gravity Seminar

Week 2, Track 1

Toby Bartels

October 9, 2000

The Wizard surveyed his audience. It was smaller. Yet some of the Track 1 people remained -- good. By the magic of computer storytelling, Oz was there too.

"I want to begin with one note." said the Wizard "I don't remember if I got it wrong last time, or if I didn't say it at all, but I'm saying it now. When you form duals, you don't reflect across a horizontal line. Rather, you rotate 180 degrees. It makes a difference.".

"O," said Toby the Acolyte "so if you're working with infinite dimensional vector spaces, where V** isn't the same as V, then rotating 360 degrees isn't the same as no rotation.".

The Wizard gave the Acolyte a funny look and went on. "All right, now, if you think you know these diagrams, tell me what this means:

         |
         |
       V v
         |
         |
        / \
       | T |
        \_/
Any guesses?".

Toby the Acolyte's hand shot up.

"Toby can't answer this," said the Wizard "because Toby is an expert on the number 0, so of course he knows the answer right away.".

[Actually, I'm an expert on the empty set. -- Ed.]

"Let me give you a hint." said the Wizard "You remember that

            |
            |
          U v
            |
            |
           / \
          | T |
           \_/
           / \
          /   \
       V v   W v
         |     |
         |     |
is a linear map from U to V (x) W. There are 2 arrows, V and W, coming out at the bottom, so we take the tensor product of these 2 spaces. But now we have 0 arrows coming out at the bottom. What's the tensor product of 0 spaces?".

"0?" asked Oz.

Foom!

"Think about that." said the Wizard, as Oz smoldered, "If I tensor 2 spaces, V and W, and then tensor in 2 more spaces X and Y, the result is the tensor product V (x) W (x) X (x) Y of all 4 spaces. But you're telling me that if I tensor 0 spaces and then tensor in 2 more space X and Y, the result is 0 (x) X (x) Y = 0? Since I have 2 spaces, X and Y, in all, the result should be their tensor product, X (x) Y.".

"So, the answer should be something that doesn't change X when you tensor X with it?" said Oz "Something like C?".

Foosh!

"Hey," said Miguel the Acolyte "that's not fair. Oz was correct!".

"If you listen carefully," said the Wizard "you'll see that I didn't hit him with a fireball. Instead I covered him with flame resistant spray, which will protect him from 1 future fireball, as a reward for thinking so well.

Anyway, Oz is correct. That's just C coming out of the bottom. So, T is a linear map from V to C, or an element of V*, if you like.
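
[If you'd like to see a machine agree with Oz, here's a little sketch in Python with NumPy -- my addition, not the Wizard's, and the names are mine. The 1-by-1 matrix plays the role of C: it's the unit for the tensor (Kronecker) product, so the empty tensor product had better be C, not 0. -- Ed.]

    import numpy as np
    from functools import reduce

    # Tensor product in a basis = Kronecker product.
    X = np.random.rand(2, 2)
    Y = np.random.rand(3, 3)

    # The tensor product of 0 spaces: reducing over no factors needs a
    # unit, and that unit is the 1x1 matrix -- a copy of C, not 0.
    unit = np.array([[1.0]])
    empty_product = reduce(np.kron, [], unit)

    # Tensoring in X and Y afterwards gives just X (x) Y, as it should.
    assert np.allclose(reduce(np.kron, [X, Y], empty_product), np.kron(X, Y))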

Now, what do you suppose this is:".

         _
        / \
       | T |
        \_/
         |
         |
       V v
         |
         |

"A map from C to V!" said Oz, proud of himself.

"Right," said the Wizard "but I'm not going to reward you yet, because I want to know what a map from C to V really is.".

"It's an element of V." said Richard, the Acolyte the Wizard Forgot to Introduce me to Earlier.

"Exactly." said the Wizard "Do you see why, Oz?".

"No." said Oz "How can a map to V be an element in V?".

"Well, let's look at the map T: C -> V. Since T is linear, we can say, if c in C, that Tc = cT1. So, T is determined entirely by the value T1 that it assigns to 1 in C. And of course T1 is determined entirely by T. So, they really are essentially the same thing. But T1 could be any element of V! So, a linear map from C to V is essentially just an element of V. Not any function, but a linear map.

OK, now how about this picture:

         _
        / \
       | T |
        \_/

What's T in this case?".

"A linear map from C to C." said everybody.

"Or just a complex number." said Jay the Acolyte.

"A linear map from C to C is just multiplication by some constant." said Toby the Acolyte "That constant is the complex number.".

"Right." said the Wizard "I'm glad you're all so good at these examples.

Now let's move on to something new. There's a diagram that we can call the "cup", which looks like this:

         |           |
         |           |
       V ^         V v
          \         /
           \_______/
I haven't told you what this means yet, but you can tell me what spaces it maps between: V* (x) V up top to C down on the bottom.

Now, since there's no circle in the middle, this must be a natural map from V* (x) V to C, and indeed there is a very natural map of this sort. We call it the dual pairing and denote it by "eV", and we define it by eV(L,v) = Lv for L in V* and v in V.

Notice that the symbol "eV" appears only in symbolic notation, not in diagrammatic language; the diagram just has the cup shape. It's the same thing with the identity map; the symbol "1_V" appears in symbols on a line, but the diagram just has an unbroken arrow.".

"Couldn't we call the cup the "trace"?" asked Toby the Acolyte "After all, V* (x) V is secretly the linear maps from V to V.".

"Actually," said the Wizard "I'd rather say that the linear maps from V to V are V (x) V*. Of course, you and I know that V* (x) V and V (x) V* are the same thing. But our diagrams don't know that -- at least, not yet. It's all a matter of convention, of course, but, if you make V* (x) V your space of linear maps, then you have to make eV go from V (x) V* to C -- you'll see why later -- so eV is never the trace.

Now, there have been some requests for index notation, so let me explain how to define eV in a basis. First of all, choose a basis {e_i} for V. Then there's a dual basis {e^i} for V*. If v in V, its components v^i are e^i(v), and v = v^i e_i, where I'm implicitly summing over the index i (Einstein convention). If L in V*, its components L_i are L(e_i), and L = L_i e^i. In particular, e^i(e_j) = delta^i_j, the Kronecker delta. Of course, all this works only in finite dimensions; in infinite dimensions, the {e^i} wouldn't span V*. Anyway, eV(L,v) = eV(L_i e^i, v^j e_j) = L_i v^j e^i(e_j) = L_i v^j delta^i_j = L_i v^i, just as you might expect.
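
[In NumPy, einsum makes the Einstein summation convention explicit; this little check is mine, not part of the lecture. -- Ed.]

    import numpy as np

    n = 4
    L = np.random.rand(n)   # components L_i of some L in V*
    v = np.random.rand(n)   # components v^i of some v in V

    # The dual pairing in components: eV(L,v) = L_i v^i, with the
    # implicit sum over i spelled out by einsum.
    assert np.isclose(np.einsum('i,i->', L, v), L @ v)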

The next concept to consider is the cap:

            _______
           /       \
          /         \
       V v         V ^
         |           |
         |           |
This is a natural map iV: C -> V (x) V*. If V (x) V* is the space of linear maps from V to V, then this natural map is easy to guess: it maps every c in C to c times the identity map! It's easy to see how this works in a basis too. We simply map c in C to c e_i (x) e^i in V (x) V*. And, if we think of a map from C to V (x) V* as an element of V (x) V*, then iV *is* the identity map on V, pure and simple.
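
[Here's the cap in a basis, as a NumPy sketch of my own devising. -- Ed.]

    import numpy as np

    n = 3
    e = np.eye(n)   # rows e[i] are the basis vectors e_i (and, read as
                    # coordinate functionals, the dual basis e^i)

    def iV(c):
        # The cap sends c in C to c times the identity: c e_i (x) e^i.
        return c * np.eye(n)

    # Summing e_i (x) e^i over i really does give the identity matrix.
    assert np.allclose(iV(1.0), sum(np.outer(e[i], e[i]) for i in range(n)))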

But maybe I'd better convince you that V (x) V* really is the space of all linear maps from V to V. Remember our bases {e_i} and {e^i} for V and V*. Every element of V (x) V* can be written as T^i_j e_i (x) e^j for some matrix of coefficients T^i_j. To act this on e_k in V, we use the dual pairing: make e^j act on e_k to produce delta^j_k, and we get T^i_j e_i delta^j_k = T^i_k e_i, which is another element of V, as it should be. Conversely, if T is a linear map from V to V, then T has a matrix T^i_j with respect to the basis {e_i}. This matrix is defined by T(e_k) = T^i_k e_i. But that's just what you get when you act T^i_j e_i (x) e^j on e_k, so T must be the same thing as T^i_j e_i (x) e^j.
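
[And a numerical check, mine again, that acting T^i_j e_i (x) e^j on e_k really is matrix-times-vector. -- Ed.]

    import numpy as np

    n = 3
    T = np.random.rand(n, n)   # coefficients T^i_j of T^i_j e_i (x) e^j
    e = np.eye(n)              # rows e[k] are the basis vectors e_k

    for k in range(n):
        # Acting on e_k: the dual pairing gives e^j(e_k) = delta^j_k,
        # leaving T^i_k e_i -- the k-th column of the matrix.
        acted = np.einsum('ij,j->i', T, e[k])
        assert np.allclose(acted, T @ e[k])
        assert np.allclose(acted, T[:, k])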

It all works out, just like I said, even without changing the order of tensor products (which, as I said, our diagrams can't do yet). You can even see that the conventions that matrices belong to V (x) V* while eV maps from V* (x) V both come from the convention of writing operators on the left, so you can't switch one without switching the other. You'll see another reason later.

Now, what if we combine the cup and the cap like this:

                                 |
                                 |
                                 |
            _______              |
           /       \           V v
          /         \            |
         |         V ^           |
         |            \         /
       V v             \_______/
         |
         |
         |
         |
This is some kind of natural map from V to V. What could it be? Well, I claim that it's just the identity map
         |
         |
       V v
         |
         |

To prove that I'm right, just consider what's happening at each stage. To begin with, we have an element v in V. But V is the same thing as C (x) V, so we can think of this as 1 (x) v in C (x) V. Then the cap turns 1 into 1_V in V (x) V*, so we have 1_V (x) v in V (x) V* (x) V. You'd expect this to become 1_V(v) = v in V, and you'd be right, but that's really 2 steps. To break it up, write 1_V as e_i (x) e^i, so we have e_i (x) e^i (x) v in V (x) V* (x) V. Then the cup turns e^i (x) v into e^i(v) = v^i, so we have e_i (x) v^i in V (x) C. But v^i is just a complex number, which we can move wherever we want in the tensor product, so e_i (x) v^i = v^i e_i (x) 1 = v (x) 1. Interpreted as a member of V, this is just v.

We can even do it in components the whole way through. Write v as v^i e_i to begin with. Then we start with 1 (x) v^i e_i = v^i (1 (x) e_i) in C (x) V. Then 1 becomes e_j (x) e^j, so we have v^i (e_j (x) e^j (x) e_i). Then e^j (x) e_i becomes e^j(e_i) = delta^j_i, so we have v^i (e_j (x) delta^j_i) = v^i (e_i (x) 1). But e_i (x) 1 in V (x) C corresponds to just e_i in V, so the final result is v^i e_i = v.
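
[The whole zig-zag, cap then cup, as one more NumPy check of my own; the variable names are made up. -- Ed.]

    import numpy as np

    n = 4
    v = np.random.rand(n)

    # The cap inserts e_j (x) e^j on the left, giving a tensor in
    # V (x) V* (x) V with components delta^j_i v^k.
    cap = np.eye(n)
    big = np.einsum('ji,k->jik', cap, v)

    # The cup then pairs the V* strand with the rightmost V strand,
    # contracting e^i against v to leave v^j e_j = v.
    straightened = np.einsum('jii->j', big)

    assert np.allclose(straightened, v)   # the zig-zag pulls straight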

However, now that we've proved that this curved diagram is the same thing as the identity map, we no longer have to do this calculation, with or without components. Just think diagrammatically and pull the curve straight!

Now, how about this diagram:

         |
         |
         |
       V ^              _______
         |             /       \
         |            /         \
         |         V v           |
          \         /            |
           \_______/             |
                               V ^
                                 |
                                 |
                                 |
Can you pull this straight too, to get 1_V*? Of course the answer is yes, but your homework is to prove it! Once you prove it, you never have to calculate it again, but you do have to prove it first -- this is mathematics, after all.

OK, we've seen how to do vector spaces, including tensor products and linear maps, so the next question is, can we do linear algebras? Well, we can think of an algebra as a vector space A along with an identity map i: C -> A and a multiplication map m: A (x) A -> A. I know, usually we think of the identity of an algebra as an element in the algebra, not as a map from C to the algebra, but we already know that these are secretly the same thing. Let's draw these diagrammatically:

         |     |
         |     |
       A v   A v
          \   /
           \_/
           / \
          | m |
           \_/
            |
            |
          A v
            |
            |

         _
        / \
       | i |
        \_/
         |
         |
       A v
         |
         |

Now, m and i have to satisfy a couple of nice properties. Not just any old m and i will do! m needs to satisfy the associativity law:

         |     |     |           |     |     |
         |     |     |           |     |     |
       A v   A v     |           |   A v   A v
          \   /      |           |      \   /
           \_/       |           |       \_/
           / \       |           |       / \
          | m |    A v         A v      | m |
           \_/       |           |       \_/
             \       |     =     |       /
              \      |           |      /
             A v     |           |   A v
                \   /             \   /
                 \_/               \_/
                 / \               / \
                | m |             | m |
                 \_/               \_/

and there are left and right unit laws:
         _                                     _
        / \    |             |           |    / \
       | i |   |             |           |   | i |
        \_/    |             |           |    \_/
         |   A v             |         A v     |
         |     |             |           |     |
       A v     |             |           |   A v
          \   /              |            \   /
           \_/       =     A v     =       \_/
           / \               |             / \
          | m |              |            | m |
           \_/               |             \_/
            |                |              |
            |                |              |
          A v                |            A v
            |                |              |
            |                |              |
Now, I claim that the linear transformations from V to V are in fact an algebra -- good old matrix multiplication. Let's see how this works diagrammatically!

Now, A is V (x) V*, as I've mentioned from time to time before. So, is there a natural map m: V (x) V* (x) V (x) V* -> V (x) V*? Of course there is; just apply the cup to the V* (x) V in the middle. How do we draw this in a diagram? Like so:

         | |       | |
         | |       | |
       V v ^ V   V v ^ V
          \ \     / /
           \ \___/ /
            \     /
             \   /
            V v ^ V
              | |
              | |
Incidentally, for the physicists," -- Jay the Acolyte's head popped up -- "notice that V (x) V* is like having a particle next to an antiparticle. That's how a meson works -- a quark combined with an antiquark. 2 mesons can combine to form 1 meson, and in just this fashion: the antiquark and the quark in the middle mutually annihilate. Anyway, this is meson multiplication -- I mean matrix multiplication.
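
[For the computationally inclined: cupping the middle V* (x) V pair is exactly index contraction, as this sketch of mine shows. -- Ed.]

    import numpy as np

    n = 3
    A = np.random.rand(n, n)   # a "meson" in V (x) V*
    B = np.random.rand(n, n)   # another one

    # The multiplication m: cup the middle V* (x) V pair. The contracted
    # index j is the antiquark/quark pair annihilating in the middle.
    product = np.einsum('ij,jk->ik', A, B)
    assert np.allclose(product, A @ B)   # good old matrix multiplication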

And what about i: C -> V (x) V*? That's easy -- it's just the cap:

         ___
        /   \
        \   /
       V v ^ V
         | |
         | |
In terms of mesons, this is production of a meson out of nothing. In real life, that violates energy conservation, but we haven't told our diagrams about energy conservation yet.

Now, I claim that matrix multiplication is associative. And here's where you see the true value of diagrammatic reasoning. Remember the proof of associativity you saw in linear algebra? Wasn't it a mess? Well, look at this:

         | |       | |       | |           | |       | |       | |
         | |       | |       | |           | |       | |       | |
       V v ^ V   V v ^ V   V v ^ V       V v ^ V   V v ^ V   V v ^ V
          \ \     / /       / /             \ \       \ \     / /
           \ \___/ /       / /               \ \       \ \___/ /
            \     /       / /                 \ \       \     /
             \   /       / /                   \ \       \   /
            V v ^ V   V v ^ V        =        V v ^ V   V v ^ V
               \ \     / /                       \ \     / /
                \ \___/ /                         \ \___/ /
                 \     /                           \     /
                  \   /                             \   /
                 V v ^ V                           V v ^ V
                   | |                               | |
                   | |                               | |
It's obvious! Just slide the top connection to the other side of the bottom one.

Now, technically, we have something to prove here, but there isn't really anything new to do -- no more calculations, whether messy or clean. It's just the old thing from last week about how it doesn't matter in which order you perform operations that lie on opposite sides of a tensor product. We've got 2 cups here; they can go in either order, since one is on the left side and the other is on the right.
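
[One last NumPy check of mine: the two orders of cupping agree. -- Ed.]

    import numpy as np

    n = 3
    a, b, c = (np.random.rand(n, n) for _ in range(3))

    # The 2 cups act on opposite sides of the tensor product, so the
    # order in which we apply them can't matter.
    left_first  = np.einsum('ij,jk->ik', np.einsum('ij,jk->ik', a, b), c)
    right_first = np.einsum('ij,jk->ik', a, np.einsum('ij,jk->ik', b, c))
    assert np.allclose(left_first, right_first)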

OK, now to see if i is really an identity. We know it should be, since it's the identity matrix, but can we prove it? That is, can we prove:

         ___                                             ___
        /   \      | |           | |           | |      /   \
        \   /      | |           | |           | |      \   /
       V v ^ V   V v ^ V       V v ^ V       V v ^ V   V v ^ V
          \ \     / /            | |            \ \     / /
           \ \___/ /       =     | |     =       \ \___/ /
            \     /              | |              \     /
             \   /               | |               \   /
            V v ^ V            V v ^ V            V v ^ V
              | |                | |                | |
              | |                | |                | |

OK, let's make one of you do it. Yes, one of you Track 1 students just sitting there, never saying a word. Right there -- YOU!".

Nervously, the hapless student the Wizard pointed to got out of her seat and came over to the board ....

[I leave this as an exercise for you, the hapless reader. -- Ed.]


toby@math.ucr.edu

© 2000 Toby Bartels
