Back in the 1990s, James Dolan got me interested in homotopy theory by explaining how it offers many important clues to n-categories. We spent a bunch of time trying to learn this fascinating subject. Since trying to explain something is often the best way to learn it, I wrote a quick tour of basic concepts in homotopy theory in my series This Week's Finds, starting with "week115" and going on to "week121". Here is that tour.
I'll only talk about the most basic constructions with simplicial sets, with a quick explanation of how they link up to n-categories, but at least it should get you started.
Let's call a topological space simply a "space", and call a continuous function between these simply a "map". Two maps f,g: X → Y are "homotopic" if one can be continuously deformed to the other, or in other words, if there is a "homotopy" between them: a continuous function F: [0,1] x X → Y with
F(0,x) = f(x)
and
F(1,x) = g(x).
Also, two spaces X and Y are "homotopy equivalent" if there are functions f: X → Y and g: Y → X for which fg and gf are homotopic to the identity. Thus, for example, a circle, an annulus, and a solid torus are all homotopy equivalent.
Homotopy theorists want to classify spaces up to homotopy equivalence. And given two spaces X and Y, they want to understand the set [X,Y] of homotopy classes of maps from X to Y. However, these are very hard problems! To solve them, one needs high-powered machinery.
There are roughly two sides to homotopy theory: building machines, and using them to do computations. Of course these are fundamentally inseparable, but people usually tend to prefer either one or the other activity. Since I am a mathematical physicist, always on the lookout for more tools for my own work, I'm more interested in the nice shiny machines homotopy theorists have built than in the terrifying uses to which they are put.
What follows will strongly reflect this bias: I'll concentrate on a bunch of elegant concepts lying on the interface between homotopy theory and category theory. This realm could be called "homotopical algebra". Ideas from this realm can be applied, not only to topology, but to many other realms. Indeed, two of its most famous practitioners, James Stasheff and Graeme Segal, have spent the last decade or so using it in string theory! I'll eventually try to say a bit about how that works, too.
Okay.... now I'll start listing concepts and tools, starting with the more fundamental ones and then working my way up. This will probably only make sense if you've got plenty of that commodity known as "mathematical sophistication". So put on some Coltrane, make yourself a cafe macchiato, kick back, and read on. If at any point you feel a certain lack of sophistication, you might want to reread "the tale of n-categories", starting with "week73", where a bunch of the basic terms are defined.
Given a category C, a "presheaf" on C is a contravariant functor F: C → Set. The original example of this is where C is the category whose objects are open subsets of a topological space X, with a single morphism f: U → V whenever the open set U is contained in the open set V. For example, there is the presheaf of continuous real-valued functions, for which F(U) is the set of all continuous real functions on U, and for any inclusion f: U → V, F(f): F(V) → F(U) is the "restriction" map which assigns to any continuous function on V its restriction to U. This is a great way of studying functions on all the open subsets of X at once.
However, I'm bringing up this subject for a different reason, related to a different kind of example. Suppose that C is a category whose objects are "shapes" of some kind, with morphisms f: x → y corresponding to ways the shape x can be included as a "piece" of the shape y. Then a presheaf on C can be thought of as a geometrical structure built by gluing together these shapes along their common pieces.
For example, suppose we want to describe directed graphs as presheaves. A directed graph is a bunch of vertices and edges, where the edges have a direction specified. Since directed graphs are made of two "shapes", the vertex and the edge, we'll cook up a little category C with two objects, V and E. There are two ways a vertex can be included as a piece of an edge, either as its "source" or its "target". Our category C, therefore, has two morphisms, S: V → E and T: V → E. These are the only morphisms except for identity morphisms - which correspond to how the edge is part of itself, and the vertex is part of itself! Omitting identity morphisms, our little category C looks like this:
        S
    --------->
  V             E
    --------->
        T

Now let's work out what a presheaf on C is. It's a contravariant functor F: C → Set. What does this amount to? Well, it amounts to a set F(V) called the "set of vertices", a set F(E) called the "set of edges", a function F(S): F(E) → F(V) assigning to each edge its source, and a function F(T): F(E) → F(V) assigning to each edge its target. That's just a directed graph!
Note the role played by contravariance here: if a little shape V is included as a piece of a big shape E, our category gets a morphism S: V → E, and then in our presheaf we get a function F(S): F(E) → F(V) going the other way, which describes how each big shape has a bunch of little shapes as pieces.
Given any category C there is actually a category of presheaves on C. Given presheaves F,G: C → Set, a morphism M from F to G is just a natural transformation M: F ⇒ G. This is a beautifully efficient way of saying quite a lot. For example, if C is the little category described above, so that F and G are directed graphs, a natural transformation M: F ⇒ G is the same as:
• a map M(V) sending each vertex of the graph F to a vertex of the graph G, and
• a map M(E) sending each edge of the graph F to an edge of the graph G, such that
• M(V) of the source of any edge e of F equals the source of M(E) of e, and
• M(V) of the target of any edge e of F equals the target of M(E) of e.

Whew! Easier just to say M is a natural transformation between functors!
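If you like seeing such things concretely, here is a little Python sketch of this example. The encoding is my own, nothing canonical: a presheaf on our two-object category is just a pair of sets with two functions between them, and checking naturality is exactly checking the two conditions above.

    # A directed graph F, i.e. a presheaf on the little category above:
    # two sets F(V), F(E) and two functions F(S), F(T): F(E) -> F(V),
    # encoded here as dicts from edges to vertices.
    F_V = {"x", "y"}
    F_E = {"e"}
    F_S = {"e": "x"}              # source of each edge
    F_T = {"e": "y"}              # target of each edge

    # A second graph G.
    G_V = {"u", "v", "w"}
    G_E = {"a", "b"}
    G_S = {"a": "u", "b": "v"}
    G_T = {"a": "v", "b": "w"}

    # A candidate natural transformation M: F => G is a pair of maps.
    M_V = {"x": "u", "y": "v"}
    M_E = {"e": "a"}

    # Naturality is exactly the source/target conditions listed above.
    def is_graph_morphism(M_V, M_E):
        return all(M_V[F_S[e]] == G_S[M_E[e]] and
                   M_V[F_T[e]] == G_T[M_E[e]] for e in F_E)

    print(is_graph_morphism(M_V, M_E))   # True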
For more on presheaves, try:
Next comes a very important example of a category whose objects are shapes - namely, simplices - and whose morphisms correspond to the ways one shape is a piece of another: the category of simplices, Δ. The objects of Δ are called 1, 2, 3, etc., corresponding to the simplex with 1 vertex (the point), the simplex with 2 vertices (the interval), the simplex with 3 vertices (the triangle), and so on. There are a bunch of ways for a lower-dimensional simplex to be a face of a higher-dimensional simplex, which give morphisms in Δ. More subtly, there are also a bunch of ways to map a higher-dimensional simplex down into a lower-dimensional one, called "degeneracies". For example, we can map a tetrahedron down into a triangle in a way that carries the vertices {0,1,2,3} of the tetrahedron into the vertices {0,1,2} of the triangle as follows:
    0 -> 0
    1 -> 0
    2 -> 1
    3 -> 2

These degeneracies also give morphisms in Δ.
We could list all the morphisms and the rules for composing them explicitly, but there is a much slicker way to describe them. Let's use the old trick of thinking of the natural number n as being the totally ordered n-element set {0,1,2,...,n-1} of all natural numbers less than n. Thus for example we think of the object 4 in Δ, the tetrahedron, as the totally ordered set {0,1,2,3}. These correspond to the 4 vertices of the tetrahedron. Then the morphisms in Δ are just all order-preserving maps between these totally ordered sets. So for example there is a morphism f: {0,1,2,3} → {0,1,2} given by the order-preserving map with
    f(0) = 0
    f(1) = 0
    f(2) = 1
    f(3) = 2

The rule for composing morphisms is obvious: just compose the maps! Slick, eh?
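In code (again, an encoding of my own devising), a morphism in Δ is just the tuple of values of an order-preserving map, and composition really is just composition:

    # A morphism from n to m in Δ is an order-preserving map
    # {0,...,n-1} -> {0,...,m-1}; we store the tuple of its values.
    def is_monotone(f):
        return all(f[i] <= f[i + 1] for i in range(len(f) - 1))

    def compose(g, f):
        """g after f, as maps: (g . f)(i) = g[f[i]]."""
        return tuple(g[i] for i in f)

    # The morphism f: {0,1,2,3} -> {0,1,2} from the text:
    f = (0, 0, 1, 2)
    # A face inclusion g: {0,1,2} -> {0,1,2,3} that skips the value 1:
    g = (0, 2, 3)

    assert is_monotone(f) and is_monotone(g)
    print(compose(g, f))   # (0, 0, 2, 3), another order-preserving map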
We can be slicker if we are willing to work with a category equivalent to Δ (in the technical sense described in "week76"), namely, the category of all finite nonempty totally ordered sets, with order-preserving maps as morphisms. This has a lot more objects than just {0}, {0,1}, {0,1,2}, etc., but all of its objects are isomorphic to one of these. In category theory, equivalent categories are the same for all practical purposes - so we brazenly call this category Δ, too. If we do so, we have the following incredibly slick description of the category of simplices: it's just the category of finite nonempty totally ordered sets!
If you are a true mathematician, you will wonder "why not use the empty set, too?" Generally it's bad to leave out the empty set. It may seem like "nothing", but "nothing" is usually very important. Here it corresponds to the "empty simplex", with no vertices! Topologists often leave this one out, but sometimes regret it later and put it back in (the buzzword is "augmentation"). True category theorists, like Mac Lane, never leave it out. They define Δ to be the category of all totally ordered finite sets. For a beautiful introduction to this approach, try:
Of course, not everyone prefers the austere joys of algebra to the earthy pleasures of geometry. Algebraic topologists thrill to categories, functors and natural transformations, while geometric topologists like drawing pictures of hideously deformed multi-holed doughnuts in 4 dimensional space. It's all a matter of taste. Personally, I like both!
More generally, a "simplicial object" in a category is a contravariant functor from Δ to that category. So, for example, a "simplicial abelian group" is a simplicial object in the category of abelian groups. Just as we may associate to any set X the free abelian group on X, we may associate to any simplicial set X the free simplicial abelian group on X. In fact, it's more than an analogy: the latter construction is a spinoff of the former! There is a functor
L: Set → Ab
assigning to any set the free abelian group on that set (see "week77"). Given a simplicial set
X: Δ → Set
we may compose with L to obtain a simplicial abelian group
XL: Δ → Ab
(where I'm writing composition in the funny order that I'm going to use in these notes: XL means do first X, then L). This is the free simplicial abelian group on the simplicial set X!
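Here's a quick Python sketch of the functor L (the representation of formal sums as dicts is my own choice): an element of the free abelian group on a set is a formal integer combination of its elements, and L(f) pushes such combinations forward along a function f. Applying this level by level to a simplicial set gives the free simplicial abelian group.

    # An element of the free abelian group on a set: a dict sending
    # finitely many generators to nonzero integer coefficients.
    # L(f) pushes a formal sum forward along a function f.
    def L_map(f):
        def Lf(u):
            out = {}
            for x, c in u.items():
                y = f(x)
                out[y] = out.get(y, 0) + c
            return {y: c for y, c in out.items() if c != 0}
        return Lf

    # Example: push 2a + b - c forward along a -> p, b -> p, c -> q.
    u = {"a": 2, "b": 1, "c": -1}
    f = {"a": "p", "b": "p", "c": "q"}
    print(L_map(lambda x: f[x])(u))   # {'p': 3, 'q': -1}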
Later I'll talk about how to compute the homology groups of a simplicial abelian group. Combined with the above trick, this will give a very elegant way to define the homology groups of a simplicial set. Homology groups are a very popular sort of invariant in algebraic topology; we will get them with an absolute minimum of sweat.
Just as a good firework show ends with lots of explosions going off simultaneously, leaving the audience stunned, deafened, and content, I should end with a blast of abstraction, just for the hell of it. Those of you who remember my discussion of "theories" in "week53" can easily check that there is a category called the "theory of abelian groups". This allows us to define an "abelian group object" in any category with finite limits. In particular, since the category of simplicial sets has finite limits (any presheaf category has all limits), we can define an abelian group object in the category of simplicial sets. And now for a very pretty result: abelian group objects in the category of simplicial sets are the same as simplicial abelian groups! In other words, an abstract "abelian group" living in the world of simplicial sets is the same as an abstract "simplicial set" living in the world of abelian groups. I'm very fond of this kind of "commutativity of abstraction".
Now let's see how this bridge actually works. In section C we learned about simplicial sets. A simplicial set is a presheaf on the category Δ. Intuitively, it's a purely combinatorial way of describing a bunch of abstract simplices glued together along their faces. We want a process that turns such things into actual topological spaces, and also a process that turns topological spaces back into simplicial sets. Let's start with the first one.
The process that turns a simplicial set into a space is called "geometric realization". It gives a functor

| |: SimpSet → Top

from the category of simplicial sets, SimpSet, to the category of topological spaces, Top.
It's straightforward to fill in the details. But if we want to be slick, we can define geometric realization using the magic of adjoint functors — see below.
Going back the other way, there is a functor that turns a space into a simplicial set, called its "singular simplicial set":

Sing: Top → SimpSet.

We make this precise as follows.
By thinking of simplices as spaces in the obvious way, we can associate a space to any object of Δ, and also a continuous map to any morphism in Δ. Thus there's a functor
i: Δ → Top.
For any space X we define
Sing(X): Δ → Set
by
Sing(X)(-) = hom(i(-),X)
where the blank slot indicates how Sing(X) is waiting to eat a simplex and spit out the set of all ways of mapping it — thought of as a space! — into the space X. The blank slot also indicates how Sing(X) is waiting to eat a morphism between simplices and spit out a function between sets.
Having said what Sing does to spaces, what does it do to maps? The same formula works: for any map f: X → Y between topological spaces, we define
Sing(f)(-) = hom(i(-),f).
It may take some headscratching to understand this, but if you work it out, you'll see it works out fine. If you feel like you are drowning under a tidal wave of objects, morphisms, categories, and functors, don't worry! Medical research has determined that people actually grow new neurons when learning category theory.
In fact, even though it might not seem like it, I'm being incredibly pedagogical and nurturing. If I were really trying to show off, I would have compressed the last couple of paragraphs into the following one line:
Sing(--)(-) = hom(i(-),--).
where Sing becomes a functor using the fact that for any category C there's a functor
hom: Cop x C → Set
where Cop denotes the opposite of C, that is, C with all its arrows turned around. (See "week78" for an explanation of this.)
Or I could have said this: form the composite
              i x 1                hom
    Δop x Top ------> Topop x Top -----> Set

and dualize this to obtain

Sing: Top → SimpSet.

These are all different ways of saying the same thing. Forming the singular simplicial set of a space is not really an "inverse" to geometric realization, since if we take a simplicial set X, form its geometric realization, and then form the singular simplicial set of that, we get something much bigger than X. However, if you think about it, there's an obvious map from X into Sing(|X|). Similarly, if we start with a topological space X, there's an obvious map from |Sing(X)| down to X.
What this means is that Sing is the right adjoint of | |, or in other words, | | is the left adjoint of Sing. Thus if we want to be slick, we can just define geometric realization to be the left adjoint of Sing. (See "week77"-"week79" for an exposition of adjoint functors.)
A "chain complex" C is a sequence of abelian groups and "boundary" homomorphisms like this:
         d1      d2      d3
    C0 <--- C1 <--- C2 <--- C3 <--- ...

satisfying the magic equation
    di di+1 = 0

This equation says that the image of di+1 is contained in the kernel of di, so we may define the "homology groups" to be the quotients
    Hi(C) = ker(di) / im(di+1)

The study of this stuff is called "homological algebra". You can read about it in such classics as these:
or
But if you want something a bit more user-friendly, try:
The main reason chain complexes are interesting is that they are similar to topological spaces, but simpler. In "singular homology theory", we use a certain functor to convert topological spaces into chain complexes, thus reducing topology problems to simpler algebra problems. This is usually one of the first things people study when they study algebraic topology. In sections G and H below, I'll remind you how this goes.
Though singular homology is very useful, not everybody gets around to learning the deep reason why! In fact, chain complexes are really just another way of talking about a certain especially simple class of topological spaces, called "topological abelian groups". Such spaces are basically just products of building-blocks, one in each dimension, called "Eilenberg-Mac Lane spaces". Thus, topological phenomena in different dimensions interact in a particularly trivial way. Singular homology thus amounts to neglecting the subtler interactions between topology in different dimensions. This is what makes it so easy to work with — yet ultimately so limited.
Before I keep rambling on I should describe the category of chain complexes, which I'll call Chain. The objects are just chain complexes, and given two of them, say C and C', a morphism f: C → C' is a sequence of group homomorphisms
    fi: Ci → Ci'

making the following big diagram commute:
         d1      d2      d3
    C0 <--- C1 <--- C2 <--- C3 <--- ...
     |       |       |       |
     f0      f1      f2      f3
     |       |       |       |
     V       V       V       V
    C0' <--- C1' <--- C2' <--- C3' <--- ...
         d1'     d2'      d3'

The reason Chain gets to be so much like the category Top of topological spaces is that we can define homotopies between morphisms of chain complexes by copying the definition of homotopies between continuous maps. First, there is a chain complex called I that's analogous to the unit interval. It looks like this:
          d1     d2     d3     d4
    Z+Z <--- Z <--- 0 <--- 0 <--- ...

The only nonzero boundary homomorphism is d1, which is given by
    d1(x) = (x,-x)

(Why? We take I1 = Z and I0 = Z+Z because the interval is built out of one 1-dimensional thing, namely itself, and two 0-dimensional things, namely its endpoints. We define d1 the way we do since the boundary of an oriented interval consists of two points: its initial endpoint, which is positively oriented, and its final endpoint, which is negatively oriented. This remark is bound to be obscure to anyone who hasn't already mastered the mystical analogies between algebra and topology that underlie homology theory!)
There is a way to define a "tensor product" C ⊗ C' of chain complexes C and C', which is analogous to the product of topological spaces. And there are morphisms
    i,j: C → I ⊗ C

analogous to the two maps from a space into its product with the unit interval:
    i, j: X → [0,1] × X
    i(x) = (0,x),   j(x) = (1,x)

Using these analogies we can define a "chain homotopy" between chain complex morphisms f,g: C → C' in a way that's completely analogous to a homotopy between maps. Namely, it's a morphism F: I ⊗ C → C' for which the composite
        i           F
    C ----> I ⊗ C ----> C'

equals f, and the composite
        j           F
    C ----> I ⊗ C ----> C'

equals g. Here we are using the basic principle of category theory: when you've got a good idea, write it out using commutative diagrams and then generalize the bejeezus out of it!
The nice thing about all this is that a morphism of chain complexes f: C → C' gives rise to homomorphisms of homology groups,
    Hn(f): Hn(C) → Hn(C').

In fact, we've got a functor
    Hn: Chain → Ab.

And even better, if f: C → C' and g: C → C' are chain homotopic, then Hn(f) and Hn(g) are equal. So we say: "homology is homotopy-invariant".
Now suppose C is a simplicial abelian group. Writing Cn for the abelian group it assigns to the simplex with n vertices, we get homomorphisms

    ∂0, ...., ∂n-1: Cn → Cn-1

coming from all the ways the simplex with n-1 vertices can be a face of the simplex with n vertices. We can thus make C into a chain complex by defining dn: Cn → Cn-1 as follows:
    dn = ∑ (-1)^i ∂i

The thing to check is that
    dn dn+1 = 0

The alternating signs make everything cancel out! In the immortal words of the physicist John Wheeler, "the boundary of a boundary is zero".
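To see the machine actually run, here is a self-contained Python sketch (the example and encoding are mine): the simplicial chain complex of a hollow triangle, with d1 built from the face maps with alternating signs, and its homology ranks (Betti numbers) computed over the rationals.

    from fractions import Fraction

    def rank(matrix):
        """Rank of a matrix over the rationals, by Gaussian elimination."""
        m = [[Fraction(x) for x in row] for row in matrix]
        r = 0
        for col in range(len(m[0]) if m else 0):
            pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
            if pivot is None:
                continue
            m[r], m[pivot] = m[pivot], m[r]
            for i in range(len(m)):
                if i != r and m[i][col] != 0:
                    factor = m[i][col] / m[r][col]
                    m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
            r += 1
        return r

    # Hollow triangle: three vertices, three edges, no 2-cells.
    vertices = [0, 1, 2]
    edges = [(0, 1), (0, 2), (1, 2)]

    # d1 = ∂0 - ∂1: the edge (a,b) goes to b - a.  Column j of the
    # matrix is the boundary of edge j, written in the vertex basis.
    d1 = [[0] * len(edges) for _ in vertices]
    for j, (a, b) in enumerate(edges):
        d1[b][j] += 1     # ∂0 picks out b, with sign +
        d1[a][j] -= 1     # ∂1 picks out a, with sign -

    # dim H0 = dim C0 - rank d1,  dim H1 = dim ker d1 - rank d2,
    # and here d2 = 0 since there are no 2-cells.
    r1 = rank(d1)
    print(len(vertices) - r1, len(edges) - r1)   # 1 1: connected, one loop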
Unsurprisingly, this gives a functor from simplicial abelian groups to chain complexes. Let's call it
Ch: SimpAb → Chain
More surprisingly, this is an equivalence of categories! I leave you to show this — if you give up, look at May's book cited in section C above. What this means is that simplicial abelian groups are just another way of thinking about chain complexes... or vice versa. Thus, if I were being ultra-sophisticated, I could have skipped the chain complexes and talked only about simplicial abelian groups! This would have saved time, but more people know about chain complexes, so I wanted to mention them.
Now let's put the pieces together. Take the "singular simplicial set" functor:

Sing: Top → SimpSet,
the "free simplicial abelian group on a simplicial set" functor:
L: SimpSet → SimpAb,
and the "chain complex of a simplicial abelian group" functor:
Ch: SimpAb → Chain,
and compose them! We get the "singular chain complex" functor
C: Top → Chain
that takes a topological space and distills a chain complex out of it. We can then take the homology groups of our chain complex and get the "singular homology" of our space. Better yet, the functor C: Top → Chain takes homotopies between maps and sends them to homotopies between morphisms of chain complexes! It follows that homotopic maps between spaces give the same homomorphisms between the singular homology groups of these spaces. Thus homotopy-equivalent spaces will have isomorphic homology groups... so we have gotten our hands on a nice tool for studying spaces up to homotopy equivalence.
Now that we've got our hands on singular homology, we could easily spend a long time using it to solve all sorts of interesting problems. I won't go into that here; you can read about it in all sorts of textbooks, like this free one:
or this one:
which uses cubes rather than simplices! (Cubes are nice because the product of two cubes is a cube, and there's a whole theory of "cubical sets", which are presheaves on some category of cubes.)
But the point I'm trying to emphasize here is that singular homology is a composite of functors that are interesting in their own right. I'll explore their uses a bit more soon.
As you can see, the key is to have lots of functors at your disposal, so you can take a problem in any given context — or more precisely, any given category! — and move it to other contexts where it may be easier to solve. Eventually you'll learn what all these categories have in common: they are all "model categories". Once you understand that, you'll be able to see more deeply what's going on.
But I just want to describe a few more tricks for turning one thing into another. Recall from "week115" that there's a category Δ whose objects 0,1,2,... are the simplices, with n corresponding to the simplex with n vertices — the simplex with 0 vertices being the "empty simplex". We can also define Δ in a purely algebraic way as the category of finite totally ordered sets, with n corresponding to the totally ordered set {0,1,....,n-1}. The morphisms in Δ are then the order-preserving maps. Using this algebraic definition we can do some cool stuff.
For example, we can take any category C and turn it into a simplicial set called its "nerve", Nerve(C). The 0-simplices of Nerve(C) are just the objects of C, which look like this:

    x

The 1-simplices of Nerve(C) are just the morphisms, which look like this:
    x ---f--> y

The 2-simplices of Nerve(C) are just the commutative diagrams that look like this:
            y
           / \
          f   g
         /     \
        x ---h--> z

where f: x → y, g: y → z, and h: x → z. And so on. In general, the n-simplices of Nerve(C) are just the commutative diagrams in C that look like n-simplices!
When I first heard of this idea I cracked up. It seemed like an insane sort of joke. Turning a category into a kind of geometrical object built of simplices? What nerve! What use could this possibly be?
Well, for an application of this idea to computer science, see "week70". We'll soon see lots of applications within topology. But first, let me give a slick abstract description of this "nerve" process that turns categories into simplicial sets. It's really a functor
Nerve: Cat → SimpSet
going from the category of categories to the category of simplicial sets.
First, a remark on Cat. This has categories as objects and functors as morphisms. Since the "category of all categories" is a bit creepy, we really want the objects of Cat to be all the "small" categories, i.e., those having a mere set of objects. This prevents Russell's paradox from raising its ugly head and disturbing our fun and games.
Next, note that any partially ordered set can be thought of as a category whose objects are just the elements of our set, and where we say there's a single morphism from x to y if x ≤ y. Composition of morphisms works out automatically, thanks to the transitivity of "less than or equal to". We thus obtain a functor
i: Δ → Cat
taking each finite totally ordered set to its corresponding category, and each order-preserving map to its corresponding functor.
Now we can copy the trick we played in section F. For any category C we define the simplicial set Nerve(C) by
Nerve(C)(-) = hom(i(-),C)
Think about it! If you put the simplex n in the blank slot, we get hom(i(n),C), which is the set of all functors from that simplex, regarded as a category, to the category C. This is just the set of all diagrams in C shaped like the simplex n, as desired!
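Here's a little Python sketch of the nerve of a particularly simple kind of category, a poset (the example is mine). An n-simplex of Nerve(C) is a functor from the ordinal {0 < 1 < ... < n} to C, which for a poset is just a weakly increasing chain:

    from itertools import product

    # The poset {a < b, a < c}, viewed as a category: one morphism
    # from x to y whenever x <= y.
    objects = ["a", "b", "c"]
    leq = {("a","a"), ("b","b"), ("c","c"), ("a","b"), ("a","c")}

    def simplices(n):
        """All n-simplices of the nerve: chains x0 <= x1 <= ... <= xn."""
        return [chain for chain in product(objects, repeat=n + 1)
                if all((chain[i], chain[i+1]) in leq for i in range(n))]

    print(simplices(0))       # the objects
    print(simplices(1))       # the morphisms, identities included
    print(len(simplices(2)))  # 7 commutative triangles, mostly degenerate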
We can say all this even more slickly as follows: take
              i x 1                hom
    Δop × Cat ------> Catop × Cat -----> Set

and dualize it to obtain

Nerve: Cat → SimpSet.

I should also point out that topologists usually do this stuff with the topologist's version of Δ, which does not include the "empty simplex". This makes a huge difference sometimes, so be careful.
If we compose the "nerve" functor

Nerve: Cat → SimpSet
with the "geometric realization" functor
| |: SimpSet → Top
defined in section E, we get a way to turn a category into a space, called its "classifying space". This trick was first used by Graeme Segal, the homotopy theorist who later became the guru of conformal field theory. He invented this trick here:
• Graeme B. Segal, Classifying spaces and spectral sequences, Publ. Math. Inst. des Hautes Études Scient. 34 (1968), 105-112.
As it turns out, every reasonable space is the classifying space of some category! More precisely, every space that's the geometric realization of some simplicial set is homeomorphic to the classifying space of some category. To see this, suppose the space X is the geometric realization of the simplicial set S. Take the set of all simplices in S and partially order them by saying x ≤ y if x is a face of y. Here by "face" I don't just mean a face of one dimension less than that of y; I'm allowing faces of any dimension less than or equal to that of y. We obtain a partially ordered set. Now think of this as a category, C. Then Nerve(C) is the "barycentric subdivision" of S. In other words, it's a new simplicial set formed by chopping up the simplices of S into smaller pieces by putting a new vertex in the center of each one. It follows that the geometric realization of Nerve(C) is homeomorphic to that of S.
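You can watch the barycentric subdivision happen in a few lines of Python (my own example, using a solid triangle and all its faces). The nondegenerate simplices of the nerve of the face poset are the strictly increasing chains of faces:

    from itertools import combinations

    triangle = (0, 1, 2)
    faces = [frozenset(f) for n in range(1, 4)
             for f in combinations(triangle, n)]  # 3 vertices, 3 edges, 1 triangle

    def chains(n):
        """Strictly increasing chains f0 < f1 < ... < fn of faces."""
        result = [[f] for f in faces]
        for _ in range(n):
            result = [c + [g] for c in result for g in faces
                      if c[-1] < g]               # < is proper subset
        return result

    # The subdivided triangle has one vertex per face of the original,
    # and its edges and triangles are the chains of length 2 and 3:
    print(len(chains(0)), len(chains(1)), len(chains(2)))   # 7 12 6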
There are lots of interesting special sorts of categories, like groupoids, or monoids, or groups (see "week74"). These give special cases of the "classifying space" construction, some of which were actually discovered before the general case. You can read more about them in classifying spaces made easy.
Also sometimes people take categories that they happen to be interested in, which may have no obvious relation to topology, and study them by studying their classifying spaces. This gives surprising ways to apply topology to all sorts of subjects. A good example is "algebraic K-theory", where we start with some sort of category of modules over a ring.
Now, the "microcosm principle" says that algebraic gadgets often like to live inside categorified versions of themselves. It's a bit like the "homunculus theory", where I have a little copy of myself sitting in my head who looks out through my eyes and thinks all my thoughts for me. But unlike that theory, it's true!
For example, we can define a "monoid object" in any monoidal category. Given a monoidal category A with tensor product x and unit object 1, we define a monoid object a in A to be an object equipped with a "product"
m: a x a → a
and a "unit"
i: 1 → a
which satisfy associativity and the left and right unit laws (written out as commutative diagrams). A monoid object in Set is just a monoid, but a monoid object in Vect is an algebra, and I gave some very different examples of monoid objects in "week89".
Now let's consider the "free monoidal category on a monoid object". In other words, consider a monoidal category A with a monoid object a in it, and assume that A has no objects and no morphisms, and satisfies no equations, other than those required by the definitions of "monoidal category" and "monoid object".
Thus the only objects of A are the unit object together with a and its tensor powers. Similarly, all the morphisms of A are built up by composing and tensoring the morphisms m and i. So A looks like this:
                      1xi                1x1xi
                  ---------->        ----------->
                      ix1                1xix1
                  ---------->        ----------->
            i                            ix1x1
       1 -----> a            a x a   ----------->   a x a x a   ...
                  <----------        <-----------
                       m                 mx1
                                     <-----------
                                         1xm

Here I haven't drawn all the morphisms, just enough so that every morphism in A is a composite of morphisms of this sort.
What is this category? It's just Δ! The nth tensor power of a corresponds to the simplex with n vertices. The morphisms going to the right describe the ways the simplex with n vertices can be a face of the simplex with n+1 vertices. The morphisms going to the left correspond to "degeneracies" — ways of squashing a simplex with n+1 vertices down into one with n vertices.
So: in addition to its other descriptions, we can define Δ as the free monoidal category on a monoid object! Next time we'll see how this is fundamental to homological algebra.
So now we need a trick for turning all sorts of gadgets into simplicial sets: groups, rings, algebras, Lie algebras, you name it! That's what we'll talk about next.
Concretely, a simplicial object in a category amounts to a bunch of objects x0, x1, x2,... together with morphisms like this:
         ---i0-->        ---i0-->        ---i0-->
                         ---i1-->        ---i1-->
                                         ---i2-->
      x0 <--d0---   x1 <--d0---     x2 <--d0---    x3  ...
         <--d1---      <--d1---        <--d1---
                       <--d2---        <--d2---
                                       <--d3---

The morphisms dj are called "face maps" and the morphisms ij are called "degeneracies". They are required to satisfy some equations which I won't bother writing down here, since you can figure them out yourself if you read section B and think about it.
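For the record, here is the standard list of those equations, the "simplicial identities", written the way they usually appear in textbooks (this list is my addition, so check it against your favorite reference; I'll write sj for the degeneracies to avoid overworking the letter i):

    dj dk = dk-1 dj          if j < k
    sj sk = sk+1 sj          if j ≤ k
    dj sk = sk-1 dj          if j < k
    dk sk = 1 = dk+1 sk
    dj sk = sk dj-1          if j > k+1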
Now, suppose we have an adjunction, that is, a pair of adjoint functors:
        ---L-->
      C         D
        <--R---

This means we have natural transformations
e: LR ⇒ 1D
i: 1C ⇒ RL
satisfying a couple of equations, which I again won't write down, since I explained them in "week79" and "week83".
Then an object d in the category D automatically gives a simplicial object as follows:
                       --L.i.R-->           --L.i.RLR-->
                                            --LRL.i.R-->
      d <--e--  LR(d)  <--e.LR--   LRLR(d)  <--e.LRLR--   LRLRLR(d)  ...
                       <--LR.e--            <--LR.e.LR--
                                            <--LRLR.e--

where . denotes horizontal composition of functors and natural transformations.
For example, if Gp is the category of groups, we have an adjunction

          ---L-->
      Set         Gp
          <--R---

where L assigns to each set the free group on that set, and R assigns to each group its underlying set. Thus given a group, the above trick gives us a simplicial object in Gp — or in other words, a simplicial group. This has an underlying simplicial set, and from this we can cook up a chain complex as in section H. This lets us study groups using homology theory! One can define the homology (and cohomology) of lots of other algebraic gadgets in exactly the same way.
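Here's a Python sketch of the first couple of levels of this construction, using monoids instead of groups so the code stays short (the whole setup is my own illustration). L is the free monoid functor, with words encoded as tuples; the counit e multiplies a word out; and the two face maps LRLR(d) → LR(d) are e applied "outside" and "inside":

    # A sample monoid d: the integers mod 4 under multiplication.
    def mul(x, y):
        return (x * y) % 4

    # Counit e: LR(d) -> d multiplies out a word in the monoid d.
    def e(word):
        result = 1                        # the unit of d
        for x in word:
            result = mul(result, x)
        return result

    # An element of LRLR(d): a word whose letters are words in d.
    w = ((2, 3), (1,), (3, 3))

    # Face maps LRLR(d) -> LR(d):
    # e.LR multiplies in the free monoid LR(d), i.e. concatenates;
    # LR.e multiplies each inner word out in d.
    def face_outer(ww):
        return tuple(x for word in ww for x in word)

    def face_inner(ww):
        return tuple(e(word) for word in ww)

    # Degeneracy L.i.R: LR(d) -> LRLR(d) wraps each letter in a word.
    def degeneracy(word):
        return tuple((x,) for x in word)

    print(face_outer(w))               # (2, 3, 1, 3, 3)
    print(face_inner(w))               # (2, 1, 1)
    print(e(face_outer(w)), e(face_inner(w)))   # 2 2: both faces agree after e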
Note: I didn't explain why the equations in the definition of adjoint functors — which I didn't write down — imply the equations in the definition of a simplicial object — which I also didn't write down!
The point is, there's a more conceptual approach to understanding why this stuff works. Remember from section K that Δ is "the free monoidal category on a monoid object". This implies that whenever we have a monoid object in a monoidal category M, we get a monoidal functor
F: Δ → M.
This gives a functor
G: Δop → Mop
So: a monoid object in M gives a simplicial object in Mop.
Actually, if M is a monoidal category, Mop becomes one too, with the same tensor product and unit object. So it's also true that a monoid object in Mop gives a simplicial object in M!
Another name for a monoid object in Mop is a "comonoid object in M". Remember, Mop is just like M but with all the arrows turned around. So if we've got a monoid object in Mop, it gives us a similar gadget in M, but with all the arrows turned around. More precisely, a comonoid object in M is an object, say m, with "coproduct"
c: m → m x m
and "counit"
e: m → 1
morphisms, satisfying "coassociativity" and the left and right "counit laws". You get these laws by taking associativity and the left/right unit laws, writing them out as commutative diagrams, and turning all the arrows around.
So: a comonoid object in a monoidal category M gives a simplicial object in M. Now let's see how this is related to adjoint functors. Suppose we have an adjunction, so we have some functors
        ---L-->
      C         D
        <--R---

and natural transformations
e: LR ⇒ 1D
i: 1C ⇒ RL
satisfying the same equations I didn't write before.
Let hom(C,C) be the category whose objects are functors from C to itself and whose morphisms are natural transformations between such functors. This is a monoidal category, since we can compose functors from C to itself. In "week92" I showed that hom(C,C) has a monoid object in it, namely RL. The product for this monoid object is
R.e.L: RLRL ⇒ RL
and the unit is
i: 1C ⇒ RL
Folks often call this sort of thing a "monad".
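If you know some programming, you've probably met this monad already. For the free monoid adjunction between Set and the category of monoids, RL is the "list" monad: RL(S) is the set of lists of elements of S, the unit i is the singleton list, and the product R.e.L flattens a list of lists. A tiny Python sketch (illustration mine):

    # unit i: S -> RL(S)
    def unit(x):
        return [x]

    # product R.e.L: RLRL(S) -> RL(S), flattening a list of lists
    def mult(xss):
        return [x for xs in xss for x in xs]

    # The monoid axioms for (RL, mult, unit) are associativity and the
    # unit laws for flattening:
    xsss = [[[1], [2, 3]], [[4]]]
    assert mult(mult(xsss)) == mult([mult(xss) for xss in xsss])
    assert mult([unit(x) for x in [1, 2]]) == [1, 2] == mult(unit([1, 2]))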
Similarly, hom(D,D) is a monoidal category containing a comonoid object, namely LR. The coproduct for this comonoid object is
L.i.R: LR ⇒ LRLR
and the counit is
e: LR ⇒ 1D
People call this thing a "comonad". But what matters here is that we've seen this comonoid object automatically gives us a simplicial object in hom(D,D)! If we pick any object d of D, we get a functor
hom(D,D) → D
by taking
hom(D,D) x D → D
and plugging in d in the second argument. This functor lets us push our simplicial object in hom(D,D) forwards to a simplicial object in D. Voilà!
So, what's categorification? This tongue-twisting term, invented by Louis Crane, refers to the process of finding category-theoretic analogs of ideas phrased in the language of set theory, using the following analogy between set theory and category theory:
    SET THEORY                       CATEGORY THEORY

    elements                         objects
    equations between elements       isomorphisms between objects
    sets                             categories
    functions                        functors
    equations between functions      natural isomorphisms between functors

Just as sets have elements, categories have objects. Just as there are functions between sets, there are functors between categories. Interestingly, the proper analog of an equation between elements is not an equation between objects, but an isomorphism. More generally, the analog of an equation between functions is a natural isomorphism between functors.
For example, the category FinSet, whose objects are finite sets and whose morphisms are functions, is a categorification of the set N of natural numbers. The disjoint union and Cartesian product of finite sets correspond to the sum and product in N, respectively. Note that while addition and multiplication in N satisfy various equational laws such as commutativity, associativity and distributivity, disjoint union and Cartesian product satisfy such laws only up to natural isomorphism. This is a good example of how equations between functions get replaced by natural isomorphisms when we categorify.
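Decategorification here is literally just "taking cardinalities", which you can watch in a few lines of Python (example mine):

    # Cardinality turns disjoint union and Cartesian product of finite
    # sets into + and * of natural numbers.
    A, B = {"x", "y"}, {0, 1, 2}
    disjoint_union = {(0, a) for a in A} | {(1, b) for b in B}
    cartesian_product = {(a, b) for a in A for b in B}
    assert len(disjoint_union) == len(A) + len(B)        # 2 + 3
    assert len(cartesian_product) == len(A) * len(B)     # 2 * 3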
If one studies categorification one soon discovers an amazing fact: many deep-sounding results in mathematics are just categorifications of facts we learned in high school! There is a good reason for this. All along, we have been unwittingly "decategorifying" mathematics by pretending that categories are just sets. We "decategorify" a category by forgetting about the morphisms and pretending that isomorphic objects are equal. We are left with a mere set: the set of isomorphism classes of objects.
To understand this, the following parable may be useful. Long ago, when shepherds wanted to see if two herds of sheep were isomorphic, they would look for an explicit isomorphism. In other words, they would line up both herds and try to match each sheep in one herd with a sheep in the other. But one day, along came a shepherd who invented decategorification. She realized one could take each herd and "count" it, setting up an isomorphism between it and some set of "numbers", which were nonsense words like "one, two, three, ..." specially designed for this purpose. By comparing the resulting numbers, she could show that two herds were isomorphic without explicitly establishing an isomorphism! In short, by decategorifying the category of finite sets, the set of natural numbers was invented.
According to this parable, decategorification started out as a stroke of mathematical genius. Only later did it become a matter of dumb habit, which we are now struggling to overcome by means of categorification. While the historical reality is far more complicated, categorification really has led to tremendous progress in mathematics during the 20th century. For example, Noether revolutionized algebraic topology by emphasizing the importance of homology groups. Previous work had focused on Betti numbers, which are just the dimensions of the rational homology groups. As with taking the cardinality of a set, taking the dimension of a vector space is a process of decategorification, since two vector spaces are isomorphic if and only if they have the same dimension. Noether noted that if we work with homology groups rather than Betti numbers, we can solve more problems, because we obtain invariants not only of spaces, but also of maps.
In modern lingo, the nth rational homology is a functor defined on the category of topological spaces, while the nth Betti number is a mere function, defined on the set of isomorphism classes of topological spaces. Of course, this way of stating Noether's insight is anachronistic, since it came before category theory. Indeed, it was in Eilenberg and Mac Lane's subsequent work on homology that category theory was born!
Decategorification is a straightforward process which typically destroys information about the situation at hand. Categorification, being an attempt to recover this lost information, is inevitably fraught with difficulties. One reason is that when categorifying, one does not merely replace equations by isomorphisms. One also demands that these isomorphisms satisfy some new equations of their own, called "coherence laws". Finding the right coherence laws for a given situation is perhaps the trickiest aspect of categorification.
For example, a monoid is a set with a product satisfying the associative law and a unit element satisfying the left and right unit laws. The categorified version of a monoid is a 'monoidal category'. This is a category C with a product
⊗: C × C → C
and unit object 1. If we naively impose associativity and the left and right unit laws as equational laws, we obtain the definition of a 'strict' monoidal category. However, the philosophy of categorification suggests instead that we impose them only up to natural isomorphism. Thus, as part of the structure of a 'weak' monoidal category, we specify a natural isomorphism
ax,y,z: (x ⊗ y) ⊗ z → x ⊗ (y ⊗ z)
called the "associator", together with natural isomorphisms
lx: 1 ⊗ x → x,
rx: x ⊗ 1 → x.
Using the associator one can construct isomorphisms between any two parenthesized versions of the tensor product of several objects. However, we really want a unique isomorphism. For example, there are 5 ways to parenthesize the tensor product of 4 objects, which are related by the associator as follows:
    ((x ⊗ y) ⊗ z) ⊗ w ------> (x ⊗ (y ⊗ z)) ⊗ w
            |                          |
            |                          |
            V                          |
    (x ⊗ y) ⊗ (z ⊗ w)                  |
            |                          |
            |                          V
            V
    x ⊗ (y ⊗ (z ⊗ w)) <------ x ⊗ ((y ⊗ z) ⊗ w)

In the definition of a weak monoidal category we impose a coherence law, called the 'pentagon identity', saying that this diagram commutes. Similarly, we impose a coherence law saying that the following diagram built using a, l and r commutes:
    (1 ⊗ x) ⊗ 1 ------> 1 ⊗ (x ⊗ 1)
         |                   |
         |                   |
         V                   V
       x ⊗ 1 ----> x <---- 1 ⊗ x

This definition raises an obvious question: how do we know we have found all the right coherence laws? Indeed, what does 'right' even mean in this context? Mac Lane's coherence theorem gives one answer to this question: the above coherence laws imply that any two isomorphisms built using a, l and r and having the same source and target must be equal.
Further work along these lines allows us to make more precise the sense in which N is a decategorification of FinSet. For example, just as N forms a monoid under either addition or multiplication, FinSet becomes a monoidal category under either disjoint union or Cartesian product if we choose the isomorphisms a, l, and r sensibly. In fact, just as N is a 'rig', satisfying all the ring axioms except those involving additive inverses, FinSet is what one might call a 'rig category'. In other words, it satisfies the rig axioms up to natural isomorphisms satisfying the coherence laws discovered by Kelly and Laplaza, who proved a coherence theorem in this context.
Just as the decategorification of a monoidal category is a monoid, the decategorification of any rig category is a rig. In particular, decategorifying the rig category FinSet gives the rig N. This idea is especially important in combinatorics, where the best proof of an identity involving natural numbers is often a 'bijective proof': one that actually establishes an isomorphism between finite sets.
While coherence laws can sometimes be justified retrospectively by coherence theorems, certain puzzles point to the need for a deeper understanding of the origin of coherence laws. For example, suppose we want to categorify the notion of 'commutative monoid'. The strictest possible approach, where we take a strict monoidal category and impose an equational law of the form x ⊗ y = y ⊗ x, is almost completely uninteresting. It is much better to start with a weak monoidal category equipped with a natural isomorphism
Bx,y: x ⊗ y → y ⊗ x
called the 'braiding', and then impose coherence laws called 'hexagon identities' saying that the following two diagrams built from the braiding and the associator commute:
    x ⊗ (y ⊗ z) ----------→ (y ⊗ z) ⊗ x
         |                       ^
         |                       |
         V                       |
    (x ⊗ y) ⊗ z             y ⊗ (z ⊗ x)
         |                       ^
         |                       |
         V                       |
    (y ⊗ x) ⊗ z ----------→ y ⊗ (x ⊗ z)

    (x ⊗ y) ⊗ z ----------→ z ⊗ (x ⊗ y)
         |                       ^
         |                       |
         V                       |
    x ⊗ (y ⊗ z)             (z ⊗ x) ⊗ y
         |                       ^
         |                       |
         V                       |
    x ⊗ (z ⊗ y) ----------→ (x ⊗ z) ⊗ y

This gives the definition of a weak "braided monoidal category". If we impose an additional coherence law saying that Bx,y is the inverse of By,x, we obtain the definition of a "symmetric monoidal category". Both of these concepts are very important; which one is "right" depends on the context. However, neither implies that every pair of parallel morphisms built using the braiding is equal. A good theory of coherence laws must naturally account for these facts.
The deepest insights into such puzzles have traditionally come from topology. In homotopy theory it causes problems to work with spaces equipped with algebraic structures satisfying equational laws, because one cannot transport such structures along homotopy equivalences. It is better to impose laws only up to homotopy, with these homotopies satisfying certain coherence laws, but again only up to homotopy, with these higher homotopies satisfying their own higher coherence laws, and so on. Coherence laws thus arise naturally in infinite sequences. For example, Stasheff discovered the pentagon identity and a sequence of higher coherence laws for associativity when studying the algebraic structure possessed by a space that is homotopy equivalent to a "loop space": that is, the space of based loops in some space with a basepoint. Similarly, the hexagon identities arise as part of a sequence of coherence laws for a space that's homotopy equivalent to a "double loop space": the loop space of a loop space! And the extra coherence law for symmetric monoidal categories arises as part of a sequence for spaces homotopy equivalent to triple loop spaces!
The higher coherence laws in these sequences turn out to be crucial when we try to iterate the process of categorification. To iterate the process of categorification, we need a concept of "n-category" — roughly, an algebraic structure consisting of a collection of objects (or "0-morphisms"), morphisms between objects (or "1-morphisms"), 2-morphisms between morphisms, and so on up to n-morphisms. There are various ways of making this precise, and right now there is a lot of work going on devoted to relating these different approaches. But the basic thing to keep in mind is that the concept of "(n+1)-category" is a categorification of the concept of "n-category". What were equational laws between n-morphisms in an n-category are replaced by natural (n+1)-isomorphisms, which need to satisfy certain coherence laws of their own.
To get a feeling for how these coherence laws are related to homotopy theory, it's good to think about certain special kinds of n-category. If we have an (n+k)-category that's trivial up to but not including the k-morphism level, we can turn it into an n-category by a simple reindexing trick: just think of its j-morphisms as (j-k)-morphisms! We call the n-categories we get this way "k-tuply monoidal n-categories". Here is a little chart of what they amount to for various low values of n and k:
k-tuply monoidal n-categories:

               n = 0          n = 1          n = 2

    k = 0      sets           categories     2-categories

    k = 1      monoids        monoidal       monoidal
                              categories     2-categories

    k = 2      commutative    braided        braided
               monoids        monoidal       monoidal
                              categories     2-categories

    k = 3      " "            symmetric      weakly involutory
                              monoidal       monoidal
                              categories     2-categories

    k = 4      " "            " "            strongly involutory
                                             monoidal
                                             2-categories

    k = 5      " "            " "            " "

One reason James Dolan and I got so interested in this chart is the "tangle hypothesis". Roughly speaking, this says that n-dimensional surfaces embedded in (n+k)-dimensional space can be described purely algebraically using a certain special "k-tuply monoidal n-category with duals". If true, this reduces lots of differential topology to pure algebra! It also helps you understand the parameters n and k: you should think of n as "dimension" and k as "codimension".
For example, take n = 1 and k = 2. Knots, links and tangles in 3-dimensional space can be described algebraically using a certain "braided monoidal category with duals". This was the first interesting piece of evidence for the tangle hypothesis. It has spawned a whole branch of math called "quantum topology", which people are trying to generalize to higher dimensions.
More recently, Laurel Langford tackled the case n = 2, k = 2. She proved that 2-dimensional knotted surfaces in 4-dimensional space can be described algebraically using a certain "braided monoidal 2-category with duals". These so-called "2-tangles" are particularly interesting to me because of their relation to spin foam models of quantum gravity, which are also all about surfaces in 4-space. For references, see "week103". But if you want to learn more about this, you couldn't do better than to start with:
This is a magnificently illustrated book which will really get you able to see 2-dimensional surfaces knotted in 4d space. At the end it sketches the statement of Langford's result.
Another interesting thing about the above chart is that k-tuply monoidal n-categories keep getting "more commutative" as k increases, until one reaches k = n+2, at which point things stabilize. There is a lot of evidence suggesting that this "stabilization hypothesis" is true for all n. Assuming it's true, it makes sense to call a k-tuply monoidal n-category with k ≥ n+2 a "stable n-category".
Now, where does homotopy theory come in? Well, here you need to look at n-categories where all the j-morphisms are invertible for all j. These are called "n-groupoids". Using these, one can develop a translation dictionary between n-category theory and homotopy theory, which looks like this:
    omega-groupoids                    homotopy types
    n-groupoids                        homotopy n-types
    k-tuply groupal omega-groupoids    homotopy types of k-fold loop spaces
    k-tuply groupal n-groupoids        homotopy n-types of k-fold loop spaces
    k-tuply monoidal omega-groupoids   homotopy types of E_k spaces
    k-tuply monoidal n-groupoids       homotopy n-types of E_k spaces
    stable omega-groupoids             homotopy types of infinite loop spaces
    stable n-groupoids                 homotopy n-types of infinite loop spaces
    Z-groupoids                        homotopy types of spectra

The entries on the left-hand side are very natural from an algebraic viewpoint; the entries on the right-hand side are things topologists already study. We explain what all these terms mean in the paper, but maybe I should say something about the first two rows, which are the most basic in a way. A homotopy type is roughly a topological space "up to homotopy equivalence", and an omega-groupoid is a kind of limiting case of an n-groupoid as n goes to infinity. If infinity is too scary, you can work with homotopy n-types, which are basically homotopy types with no interesting topology above dimension n. These should correspond to n-groupoids.
Using these basic correspondences we can then relate various special kinds of homotopy types to various special kinds of omega-groupoids, giving the rest of the rows of the chart. Homotopy theorists know a lot about the right-hand column, so we can use this to get a lot of information about the left-hand column. In particular, we can work out the coherence laws for n-groupoids, and — this is the best part, but the least understood — we can then guess a lot of stuff about the coherence laws for general n-categories. In short, we are using homotopy theory to get our foot in the door of n-category theory.
I should emphasize, though, that this translation dictionary is partially conjectural. It gets pretty technical to say what exactly is and is not known, especially since there's pretty rapid progress going on. Even in the last few months there have been some interesting developments. For example, Breen has come out with a paper relating k-tuply monoidal n-categories to Postnikov towers and various far-out kinds of homological algebra:
Also, the following folks have developed a notion of 'iterated monoidal category' whose nerve gives the homotopy type of a k-fold loop space, just as the nerve of a category gives an arbitrary homotopy type:
Anyway, in addition to explaining the relationship between n-category theory and homotopy theory, Dolan's and my paper discusses iterated categorifications of the very simplest algebraic structures: the natural numbers and the integers. The natural numbers are the free monoid on one generator; the integers are the free group on one generator. We believe this is just the tip of the following iceberg:
    algebraic structures                 the free such structure on one generator

    sets                                 the one-element set
    monoids                              the natural numbers
    groups                               the integers
    k-tuply monoidal n-categories        the braid n-groupoid in codimension k
    k-tuply monoidal omega-categories    the braid omega-groupoid in codimension k
    stable n-categories                  the braid n-groupoid in infinite codimension
    stable omega-categories              the braid omega-groupoid in infinite codimension
    k-tuply monoidal n-categories        the n-category of framed n-tangles
      with duals                         in n+k dimensions
    stable n-categories with duals       the framed cobordism n-category
    k-tuply groupal n-groupoids          the homotopy n-type of the kth loop space of S^k
    k-tuply groupal omega-groupoids      the homotopy type of the kth loop space of S^k
    stable omega-groupoids               the homotopy type of Loop^infinity S^infinity
    Z-groupoids                          the sphere spectrum

You may or may not know the guys on the right-hand side, but some of them are very interesting and complicated, so it's really exciting that they are all in some sense categorified and/or stabilized versions of the integers and natural numbers.
Whew! There is more to say, but I'm worn out. For more on homotopy theory try classifying spaces made easy, and for more on n-categories try "the tale of n-categories", starting in week73 and continuing on for many weeks after that.