"I want to begin with one note." said the Wizard "I don't remember if I got it wrong last time, or if I didn't say it at all, but I'm saying it now. When you form duals, you don't reflect across a horizontal line. Rather, you rotate 180 degrees. It makes a difference.".

"O," said Toby the Acolyte "so if you're working with infinite dimensional vector spaces, where V** isn't the same as V, then rotating 360 degrees isn't the same as no rotation.".

The Wizard gave the Acolyte a funny look and went on. "All right, now, if you think you know these diagrams, tell me what this means:

  |
  |
V v
  |
  |
 / \
| T |
 \_/

Any guesses?".

Toby the Acolyte's hand shot up.

"Toby can't answer this," said the Wizard "because Toby is an expert on the number 0, so of course he knows the answer right away.".

[Actually, I'm an expert on the empty set. -- Ed.]

"Let me give you a hint." said the Wizard "You remember that

   |
   |
 U v
   |
   |
  / \
 | T |
  \_/
  / \
 /   \
V v   W v
 |     |
 |     |

is a linear map from U to V (x) W. There are 2 wires coming out the bottom of this one, but our mystery diagram has 0 wires coming out the bottom. So T must be a linear map from V to the tensor product of 0 vector spaces -- and what do you suppose that is?".

"0?" asked Oz.

Foom!

"Think about that." said the Wizard, as Oz smoldered, "If I tensor 2 spaces, V and W, and then tensor in 2 more spaces X and Y, the result is the tensor product V (x) W (x) X (x) Y of all 4 spaces. But you're telling me that if I tensor 0 spaces and then tensor in 2 more spaces X and Y, the result is 0 (x) X (x) Y = 0? Since I have 2 spaces, X and Y, in all, the result should be their tensor product, X (x) Y.".

"So, the answer should be something that doesn't change X
when you tensor X with it?" said Oz "Something like **C**?".

Foosh!

"Hey," said Miguel the Acolyte "that's not fair. Oz was correct!".

"If you listen carefully," said the Wizard "you'll see that I didn't hit him with a fireball. Instead I covered him with flame-resistant spray, which will protect him from 1 future fireball, as a reward for thinking so well.

Anyway, Oz is correct. That's just **C** coming out of the bottom.
So, T is a linear map from V to **C**, or an element of V*, if you like.

Now, what do you suppose this is:".

  _
 / \
| T |
 \_/
  |
  |
V v
  |
  |

"A map from **C** to V." said Oz.

"Right," said the Wizard "but I'm not going to reward you yet,
because I want to know what a map from **C** to V really is.".

"It's an element of V." said Richard, the Acolyte the Wizard Forgot to Introduce me to Earlier.

"Exactly." said the Wizard "Do you see why, Oz?".

"No." said Oz "How can a map *to* V be an element *in* V?".

"Well, let's look at the map T: **C** -> V.
Since T is linear, we can say, if c in **C**, that Tc = cT1.
So, T is determined entirely by the value T1 that it assigns to 1 in **C**.
And of course T1 is determined entirely by T.
So, they really are essentially the same thing.
But T1 could be any element of V!
So, a linear map from **C** to V is essentially just an element of V.
Not *any* function, but a *linear* map.
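
[For those following along at home, here's a sketch of this point in numpy -- the language and the particular vectors are my own illustration, not the Wizard's. A linear map T: **C** -> V is completely pinned down by the single vector T(1). -- Ed.]

```python
# Editor's illustration: a linear map T: C -> V is completely
# determined by the vector T(1), so such maps are "the same" as
# elements of V.  (Real entries, for readability.)
import numpy as np

v = np.array([2.0, -1.0, 3.0])   # an element of V

def T(c):
    # Linearity forces T(c) = c * T(1); here we decree T(1) = v.
    return c * v

assert np.allclose(T(1), v)      # T determines the element T(1) = v
assert np.allclose(T(5), 5 * v)  # and T(1) determines all of T
```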

OK, now how about this picture:

  _
 / \
| T |
 \_/

What's T in this case?".

"A linear map from **C** to **C**." said everybody.

"Or just a complex number." said Jay the Acolyte.

"A linear map from **C** to **C** is just multiplication by some constant."
said Toby the Acolyte "That constant is the complex number.".

"Right." said the Wizard "I'm glad you're all so good at these examples.

Now let's move on to something new. There's a diagram that we can call the "cup", which looks like this:

  |         |
  |         |
V ^       V v
   \       /
    \_____/

I haven't told you what this means yet, but you can tell me what spaces it maps between: V* (x) V up top to **C** down below.

Now, since there's no circle in the middle,
this must be a natural map from V* (x) V to **C**,
and indeed there is a very natural map of this sort.
We call it the dual pairing and denote it by "e_{V}",
and we define it by e_{V}(L,v) = Lv for L in V* and v in V.

Notice that the symbol "e_{V}" appears only in symbols on a line,
not in the diagrammatic language; the diagram just has the cup shape.
It's the same thing with the identity map;
the symbol "1_V" appears in symbols on a line,
but the diagram just has an unbroken arrow.".

"Couldn't we call the cup the "trace"?" asked Toby the Acolyte "After all, V* (x) V is secretly the linear maps from V to V.".

"Actually," said the Wizard "I'd rather say that
the linear maps from V to V are V (x) V*.
Of course, you and I know that V* (x) V and V (x) V* are the same thing.
But our diagrams don't know that -- at least, not yet.
It's all a matter of convention, of course,
but, if you make V* (x) V your space of linear maps,
then you have to make e_{V} go from V (x) V* to **C** --
you'll see why later -- so e_{V} is never the trace.

Now, there have been some requests for index notation,
so let me explain how to define e_{V} in a basis.
First of all, choose a basis {e_{i}} for V.
Then there's a dual basis {e^{i}} for V*.
If v in V, its components v^{i} are e^{i}(v), and v = v^{i} e_{i},
where I'm implicitly summing over the index i (Einstein convention).
If L in V*, its components L_{i} are L(e_{i}), and L = L_{i} e^{i}.
In particular, e^{i}(e_{j}) = delta^{i}_{j}, the Kronecker delta.
Of course, all this works only in finite dimensions;
in infinite dimensions, the {e^{i}} wouldn't span V*.
Anyway, e_{V}(L,v) = e_{V}(L_{i} e^{i},v^{j} e_{j}) = L_{i} v^{j} e^{i}(e_{j})
= L_{i} v^{j} delta^{i}_{j} = L_{i} v^{i}, just as you might expect.
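
[In numpy, which even spares you writing the summation sign, the dual pairing in components looks like this -- an editorial aside, with made-up numbers. -- Ed.]

```python
# Editor's illustration: the dual pairing e_V in components,
# e_V(L, v) = L_i v^i (Einstein summation over the repeated index).
import numpy as np

L = np.array([1.0, 0.0, 2.0])        # covector components L_i
v = np.array([3.0, 4.0, 5.0])        # vector components v^i

pairing = np.einsum('i,i->', L, v)   # contract lower index with upper
assert pairing == 13.0               # 1*3 + 0*4 + 2*5
```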

The next concept to consider is the cap:

    _______
   /       \
  /         \
V v        V ^
  |          |
  |          |

This is a natural map from **C** to V (x) V* -- in other words, a natural element of V (x) V*. And since V (x) V* is secretly the space of linear maps from V to V, you might guess that this natural element is the identity map 1_V -- and you'd be right.

But maybe I'd better convince you that V (x) V*
really is the space of all linear maps from V to V.
Remember our bases {e_{i}} and {e^{i}} for V and V*.
Every element of V (x) V* can be written as
T^{i}_{j} e_{i} (x) e^{j} for some matrix of coefficients T^{i}_{j}.
To act this on e_{k} in V, we use the dual pairing:
make e^{j} act on e_{k} to produce delta^{j}_{k},
and we get T^{i}_{j} e_{i} delta^{j}_{k} = T^{i}_{k} e_{i},
which is another element of V, as it should be.
Conversely, if T is a linear map from V to V,
then T has a matrix T^{i}_{j} with respect to the basis {e_{i}}.
This matrix is defined by T(e_{k}) = T^{i}_{k} e_{i}.
But that's just what you get when you act T^{i}_{j} e_{i} (x) e^{j} on e_{k},
so T must be the same thing as T^{i}_{j} e_{i} (x) e^{j}.
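
[Here's the identification in numpy, with a made-up 2x2 example of mine. Acting T^{i}_{j} e_{i} (x) e^{j} on e_{k} is just ordinary matrix action. -- Ed.]

```python
# Editor's illustration: an element T^i_j e_i (x) e^j of V (x) V*
# acts on a basis vector e_k by pairing e^j against e_k, which
# picks out the column T^i_k -- ordinary matrix action.
import numpy as np

T = np.array([[1.0, 2.0],
              [3.0, 4.0]])             # coefficients T^i_j
e_1 = np.array([0.0, 1.0])             # the basis vector e_1 (k = 1)

result = np.einsum('ij,j->i', T, e_1)  # contract j against e_1
assert np.allclose(result, T[:, 1])    # the column T^i_1, as derived
```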

It all works out, just like I said,
even without changing the order of tensor products
(which, as I said, our diagrams can't do yet).
You can even see that the conventions that
matrices belong to V (x) V* while e_{V} maps from V* (x) V
both come from the convention of writing operators on the left,
so you can't switch one without switching the other.
You'll see another reason later.

Now, what if we combine the cup and the cap like this:

              |
              |
              |
   _______    |
  /       \  V v
 /         \  |
 |       V ^  |
 |        \   /
 |         \_/
V v
 |
 |

This is some kind of natural map from V to V. What could it be? Well, I claim that it's just the identity map:

  |
  |
V v
  |
  |

To prove that I'm right, just consider what's happening at each stage. To begin with, we have an element v in V. But V is the same thing as **C** (x) V, so we can think of this as the element 1 (x) v. Then the cap turns the 1 into an element of V (x) V*, so now we have an element of V (x) V* (x) V. Next the cup pairs the V* with the original copy of V, leaving us in V (x) **C** -- and that's the same thing as V again.

We can even do it in components the whole way through.
Write v as v^{i} e_{i} to begin with.
Then we start with 1 (x) v^{i} e_{i} = v^{i} (1 (x) e_{i}) in **C** (x) V.
Then 1 becomes e_{j} (x) e^{j}, so we have v^{i} (e_{j} (x) e^{j} (x) e_{i}).
Then e^{j} (x) e_{i} becomes e^{j}(e_{i}) = delta^{j}_{i},
so we have v^{i} (e_{j} (x) delta^{j}_{i}) = v^{i} (e_{i} (x) 1).
But e_{i} (x) 1 in V (x) **C** corresponds to just e_{i} in V,
so the final result is v^{i} e_{i} = v.
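
[The component calculation, transcribed into numpy by me: the cap inserts e_{j} (x) e^{j}, whose components form the identity matrix, and the cup contracts it against v. -- Ed.]

```python
# Editor's illustration: cap then cup, in components.  The cap
# contributes sum_j e_j (x) e^j (the identity matrix delta^j_i),
# and the cup contracts e^j against v^i, returning v unchanged.
import numpy as np

v = np.array([1.0, -2.0, 0.5])
cap = np.eye(3)                        # components of sum_j e_j (x) e^j

result = np.einsum('ji,i->j', cap, v)  # cup pairs e^j with v^i
assert np.allclose(result, v)          # the curved diagram is 1_V
```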

However, now that we've proved that this curved diagram is the same thing as the identity map, we no longer have to do this calculation, with or without components. Just think diagrammatically and pull the curve straight!

Now, how about this diagram:

|
|
|
V ^    _______
 |    /       \
 |   /         \
 |   V v        |
  \   /         |
   \_/          |
                |
             V ^
              |
              |

Can you pull this straight too, to get 1_V*? Of course the answer is yes, but your homework is to prove it! Once you prove it, you never have to calculate it again, but you do have to prove it first -- this is mathematics, after all.

OK, we've seen how to do vector spaces,
including tensor products and linear maps,
so the next question is, can we do linear algebras?
Well, we can think of an algebra as a vector space A
along with an identity map i: **C** -> A
and a multiplication map m: A (x) A -> A.
I know, usually we think of the identity of an algebra
as an element *in* the algebra, not as a map from **C** *to*
the algebra,
but we already know that these are secretly the same thing.
Let's draw these diagrammatically:

 |     |
 |     |
A v   A v
  \   /
   \_/
   / \
  | m |
   \_/
    |
    |
  A v
    |
    |

  _
 / \
| i |
 \_/
  |
  |
A v
  |
  |

Now, m and i have to satisfy a couple of nice properties. Not just any old m and i will do! m needs to satisfy the associativity law:

|    |    |          |    |    |
|    |    |          |    |    |
A v  A v  |          |  A v  A v
  \   /   |          |    \   /
   \_/    |          |     \_/
   / \    |          |     / \
  | m |  A v        A v   | m |
   \_/    |          |     \_/
    \     |          |     /
   A v    |    =     |   A v
     \    |          |    /
      \   /          \   /
       \_/            \_/
       / \            / \
      | m |          | m |
       \_/            \_/

and there are left and right unit laws:

  _                                   _
 / \    |        |         |         / \
| i |   |        |         |        | i |
 \_/    |        |         |         \_/
  |    A v      A v       A v         |
 A v    |        |         |         A v
  \     /        |          \        /
   \   /         |           \      /
    \_/    =    A v    =      \_/
    / \          |            / \
   | m |         |           | m |
    \_/          |            \_/
     |           |             |
     |           |             |
    A v         A v           A v
     |           |             |
     |           |             |

Now, I claim that the linear transformations from V to V are in fact an algebra -- good old matrix multiplication. Let's see how this works diagrammatically!

Now, A is V (x) V*, as I've mentioned from time to time before. So, is there a natural map m: V (x) V* (x) V (x) V* -> V (x) V*? Of course there is; just apply the cup to the V* (x) V in the middle. How do we draw this in a diagram? Like so:

|     |     |     |
|     |     |     |
V v  ^ V   V v  ^ V
 \     \    /     /
  \     \__/     /
   \            /
    \          /
    V v      ^ V
     |        |
     |        |

Incidentally, for the physicists," -- Jay the Acolyte's head popped up -- "notice that V (x) V* is like having a particle next to an antiparticle. That's how a meson works -- a quark combined with an antiquark. 2 mesons can combine to form 1 meson, and in just this fashion: the antiquark and the quark in the middle mutually annihilate. Anyway, this is meson multiplication -- I mean matrix multiplication.
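
[In numpy terms -- my aside, not the Wizard's -- "apply the cup to the middle V* (x) V" means "contract the inner pair of indices", and that is precisely matrix multiplication. -- Ed.]

```python
# Editor's illustration: applying the cup to the middle V* (x) V
# contracts the inner pair of indices -- exactly matrix multiplication.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # an element of V (x) V*
B = np.array([[0.0, 1.0], [1.0, 0.0]])   # another one

m_AB = np.einsum('ij,jk->ik', A, B)      # cup eats the middle indices
assert np.allclose(m_AB, A @ B)          # same as matrix multiplication
```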

And what about i: **C** -> V (x) V*? That's easy -- it's just the cap:

   ___
  /   \
 /     \
V v   ^ V
 |      |
 |      |

In terms of mesons, this is production of a meson out of nothing. In real life, that violates energy conservation, but we haven't told our diagrams about energy conservation yet.

Now, I claim that matrix multiplication is associative. And here's where you see the true value of diagrammatic reasoning. Remember the proof of associativity you saw in linear algebra? Wasn't it a mess? Well, look at this:

|    |    |    |    |    |            |    |    |    |    |    |
|    |    |    |    |    |            |    |    |    |    |    |
V v  ^ V  V v  ^ V  V v  ^ V         V v  ^ V  V v  ^ V  V v  ^ V
 \    \    /    /    /    /           \    \    \    \    /    /
  \    \__/    /    /    /             \    \    \    \__/    /
   \          /    /    /               \    \    \          /
    \        /    /    /                 \    \    \        /
    V v    ^ V  V v  ^ V       =         V v  ^ V  V v    ^ V
      \      \    /    /                   \    \    /     /
       \      \__/    /                     \    \__/     /
        \            /                       \           /
         \          /                         \         /
         V v      ^ V                         V v     ^ V
          |         |                          |        |
          |         |                          |        |

It's obvious! Just slide the top connection to the other side of the bottom one.

Now, technically, we have something to prove here, but there isn't really anything new to do -- no more calculations, whether messy or clean. It's just the old thing from last week about how it doesn't matter in which order you perform operations which lie on the other side of a tensor product. We've got 2 cups here; they can go in either order, since one is on the left side and the other is on the right.
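
[The same point numerically, in my own numpy rendering: the two cups contract disjoint index pairs, so it doesn't matter which acts first. -- Ed.]

```python
# Editor's illustration: the two cups touch disjoint index pairs,
# so the order of contraction doesn't matter -- associativity.
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.random((3, 4, 4))          # three random 4x4 "mesons"

left  = np.einsum('ij,jk->ik', np.einsum('ij,jk->ik', A, B), C)  # (AB)C
right = np.einsum('ij,jk->ik', A, np.einsum('ij,jk->ik', B, C))  # A(BC)
assert np.allclose(left, right)
```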

OK, now to see if i is really an identity. We know it should be, since it's the identity matrix, but can we prove it? That is, can we prove:

  ___                                              ___
 /   \    |    |       |    |       |    |        /   \
/     \   |    |       |    |       |    |       /     \
V v  ^ V  V v  ^ V    V v  ^ V     V v  ^ V     V v  ^ V
 \    \    /    /      |    |       \    \       /    /
  \    \__/    /   =   |    |   =    \    \_____/    /
   \          /        |    |         \             /
    \        /         |    |          \           /
    V v    ^ V        V v  ^ V         V v       ^ V
     |      |          |    |           |          |
     |      |          |    |           |          |

OK, let's make one of you do it. Yes, one of you track 1 students just sitting there, never saying a word. Right there -- YOU!".

Nervously, the hapless student the Wizard pointed to got out of her seat and came over to the board ....

[I leave this as an exercise for you, the hapless reader. -- Ed.]
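
[If you want to cheat a little before doing the exercise, here's a numerical spot-check of mine -- which is emphatically not the diagrammatic proof the Wizard asked for. -- Ed.]

```python
# Editor's note: a numerical spot-check only, not the requested proof!
# The cap gives the identity matrix, which is a unit for matrix
# multiplication on both sides.
import numpy as np

I = np.eye(3)                         # the cap: sum_j e_j (x) e^j
A = np.arange(9.0).reshape(3, 3)      # any element of V (x) V*

assert np.allclose(np.einsum('ij,jk->ik', I, A), A)   # left unit law
assert np.allclose(np.einsum('ij,jk->ik', A, I), A)   # right unit law
```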

toby@math.ucr.edu