Mathematical retrodiction and prediction
In this discussion over on Urs Schreiber's blog, The String Coffee Table, Greg Kuperberg writes:
Consider, for example, how category theory might guide you to define quantum groups. As a first try, you might consider group objects in the contravariant category of non-commutative algebras. This is a natural proposal, but it isn’t the most fruitful one. A much better idea is to use Hopf algebras. Hopf algebras are a non-cocommutative generalization of group algebras, but they are not group objects. They have their own category-theoretic motivation (in terms of monoidal categories), but it takes hindsight to see the way in which category theory is important here.

John Baez in some recent lectures shows how (involutory) Hopf algebras can be generated from groups (see p. 53), providing a category theoretic account of their nature. Kuperberg's point, however, is that this is reconstruction rather than construction. He also admits that category theory can play a suggestive role, providing targets for future research.
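As a quick gloss on Kuperberg's example, using only standard definitions rather than anything from the thread: the group algebra is the motivating cocommutative case, and the quantum groups he has in mind are the Hopf algebras that fail both commutativity and cocommutativity.

```latex
% Hopf algebra in brief: (H, m, u, \Delta, \varepsilon, S), with the antipode S
% the convolution inverse of the identity map:
\[
  m \circ (S \otimes \mathrm{id}) \circ \Delta
    \;=\; u \circ \varepsilon
    \;=\; m \circ (\mathrm{id} \otimes S) \circ \Delta .
\]
% Prototype: the group algebra k[G], with structure maps on basis elements g \in G:
\[
  \Delta(g) = g \otimes g, \qquad \varepsilon(g) = 1, \qquad S(g) = g^{-1}.
\]
% k[G] is cocommutative (\tau \circ \Delta = \Delta, where \tau is the flip map);
% the quantum groups of interest, such as the deformations U_q(\mathfrak{sl}_2),
% are Hopf algebras that are neither commutative nor cocommutative.
```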
I was reminded during The String Coffee Table discussion of a debate which raged for a time in the philosophy of science as to the credit a theory deserves for retrospectively accounting for observations - retrodiction - as compared to the prediction of unseen phenomena. The consensus seems to admit that, to some extent, you should think your theory more likely to be accurate in the future if it has retrodicted an observation, so long as that observation wasn't used in the construction of the theory, by, say, fixing values of parameters. But there were sharp disagreements on what one could say about permissible uses of the observation. One of these days, I shall write up what statistical learning theory has to say about this topic. The key idea is that if you start with a class of hypotheses that isn't too rich, then relying only on training data you can say how probably accurate a selected hypothesis is. And richness isn't measured by the number of parameters.
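Very roughly, the learning-theoretic point can be put in a single inequality. The bound below is one standard textbook statement (a Vapnik-style VC bound), quoted only as an illustration of how 'richness' enters; nothing in it is specific to the discussion above.

```latex
% A standard VC-style generalization bound, stated loosely: if the hypothesis
% class \mathcal{H} has VC dimension d, then with probability at least 1 - \delta
% over n training samples, for all h \in \mathcal{H},
\[
  R(h) \;\le\; \widehat{R}_n(h)
      \;+\; \sqrt{\frac{d\left(\ln\frac{2n}{d} + 1\right) + \ln\frac{4}{\delta}}{n}},
\]
% where R(h) is the true error and \widehat{R}_n(h) the training error. The key
% point: d need not track the number of parameters. The one-parameter family
% \{x \mapsto \mathrm{sign}(\sin(\omega x))\} has infinite VC dimension, while a
% class with many parameters can have small d.
```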
All things considered, though, there's nothing like a bold new prediction. Now, higher-category theory performs something like this task too. Say you are interested in a certain construction, such as Hopf algebras. You then formulate the construction in a way that suggests features of a 'categorified' version, i.e., sets become categories, categories become bicategories, functions become functors, functors become natural transformations, product operations on elements become monoidal products on objects, and so on. Or perhaps you look to internalise constructions within a category, by, say, representing a group as a diagram, and realising that diagram within a category. You might end up then with a good idea of what your categorified entity, say, a Hopf 2-algebra, should look like. You predict that there should be an important concept which possesses specified properties. This is higher category theory in its guiding, predictive role, pushing you in promising directions. But there is no magic bullet. There's still very likely a huge amount of work to be done in getting the construction right, and in finding rich examples of it. Sometimes these examples already exist, but recognising them as cases of concepts which fit into some hierarchical pattern can be extremely important.
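To see what 'representing a group as a diagram' amounts to, here is the textbook notion of a group object, sketched in standard terms (none of this is specific to the post): write the group axioms purely in terms of maps, then interpret those maps in any category with finite products.

```latex
% A group object in a category C with finite products (and terminal object 1)
% is an object G equipped with morphisms
\[
  m \colon G \times G \to G, \qquad e \colon 1 \to G, \qquad i \colon G \to G
\]
% satisfying the group laws as equations between composites, e.g.
\[
  m \circ (m \times \mathrm{id}_G) = m \circ (\mathrm{id}_G \times m), \qquad
  m \circ (\mathrm{id}_G \times i) \circ \Delta_G = e \circ\, !_G ,
\]
% where \Delta_G is the diagonal and !_G : G \to 1 the unique map to the terminal
% object. Interpreting the same diagrams in Set, Top or Diff yields ordinary
% groups, topological groups and Lie groups respectively.
```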
3 Comments:
David writes:
All things considered, though, there's nothing like a bold new prediction. Now, higher category theory performs something like this task too. Say you are interested in a certain construction, such as Hopf algebras. You then formulate the construction in a way that suggests features of a 'categorified' version....
This is a great description of what some of us have been trying to do for the last decade!
By now we have lots of these predictions lined up, and we could really use more people to jump in and start firming things up. But this probably won't happen until a lot of "foundational" work on n-categories is done, so that grad students can pick up a book, learn "the definition of an n-category" (not knowing there were once 10 competing definitions), learn the basic theorems about them, and start using them to do stuff.
We haven't even settled on a handy definition of the omega-category of omega-categories, much less shown that omega-groupoids are "the same" as homotopy types, or proven the Stabilization Hypothesis, or the Tangle Hypothesis. Without these basic tools, work is slow and difficult. Personally I don't have the patience for a lot of what needs to be done.
Luckily Joyal's book on quasicategories will eventually come out, providing a complete tool kit for at least this special class of omega-categories: namely, those where all j-morphisms are weakly invertible for j > 1. "Quasicategories for the Working Mathematician", to all intents and purposes.
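For readers who haven't met them: the usual simplicial-set definition of a quasicategory can be stated in a line. This is background, not something from the comment itself.

```latex
% A quasicategory is a simplicial set X with fillers for all inner horns:
\[
  \text{for all } 0 < k < n: \quad
  \text{every map } \Lambda^n_k \to X
  \text{ extends along } \Lambda^n_k \hookrightarrow \Delta^n
  \text{ to a map } \Delta^n \to X .
\]
% Fillers are not required to be unique, so composition is defined only up to
% coherent homotopy; this is why quasicategories model omega-categories in which
% all j-morphisms are weakly invertible for j > 1.
```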
It will be great when things are up and running. We already understand a lot about the deformations of simple Lie algebras into Lie n-algebras for all n. For n = 2 we know that these "simple Lie 2-algebras" have corresponding Lie 2-groups, and we know a fair amount about how these Lie 2-groups show up in string theory - enough to see that Lie n-groups should arise in theories of branes whose worldsheets are n-dimensional. This is also evident from the theory of 2-connections on 2-bundles. But, there are so many details left to be worked out - and surely so many exciting discoveries to be made when we do this... that it's really tantalizing, downright frustrating! It's like looking at a beautiful landscape through the tiny slit of a prison window. Past n = 2 or so, it will be quite hard to make this stuff precise without a good general theory of n-categories. We just happen to know enough about 2-categories to do things "by hand" in the n = 2 case.
David writes:
John Baez in some recent lectures shows how (involutory) Hopf algebras can be generated from groups - providing a category theoretic account of their nature.
To be more precise, I was showing how the definition of involutory Hopf algebra can be generated in a purely mechanical way from the definition of group.
However, this is not really so great. The fact that a mechanical "quantization" procedure applied to the concept of group gives some concept of Hopf algebra does partially explain the term "quantum groups"... but the fact that it gives the concept of involutory Hopf algebra means it misses the really interesting examples of Hopf algebras - namely, the ones everyone calls quantum groups.
The really good way to use category theory to understand quantum groups has been around for much longer and everyone already knows it. It's the Tannaka-Krein reconstruction philosophy. Putting extra structure on an algebra corresponds to putting extra structure on its category of representations. Making the category monoidal means making the algebra into a bialgebra. Giving the monoidal category duals for objects means making the bialgebra into a Hopf algebra. And so on.
This is the best way to understand the definition of "Hopf algebra".
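Spelled out a little, for readers meeting this dictionary for the first time (these are standard representation-theoretic formulas, not something claimed in the comment): the extra structure on the algebra A is exactly what is needed to define the corresponding structure on its category of modules.

```latex
% If A is a bialgebra with comultiplication \Delta and counit \varepsilon, then
% Rep(A) is monoidal: A acts on a tensor product of modules through \Delta,
\[
  a \cdot (v \otimes w) \;=\; \Delta(a)(v \otimes w)
     \;=\; \textstyle\sum a_{(1)} v \otimes a_{(2)} w,
\]
% and \varepsilon makes the ground field k the unit object (a \cdot \lambda = \varepsilon(a)\lambda).
% If A also has an antipode S (i.e. is a Hopf algebra), each module V acquires a
% dual V^* in Rep(A), with action
\[
  (a \cdot f)(v) \;=\; f\big(S(a)\, v\big), \qquad f \in V^*,\ v \in V .
\]
% Tannaka-Krein reconstruction runs this dictionary backwards, recovering the
% bialgebra or Hopf algebra from its monoidal category of representations.
```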
David Kazhdan gave a great course on this stuff at Harvard back around 1992, when everyone and his brother was trying to understand quantum groups. Too bad he never published the course notes.
Actually, it turns out that the mechanical procedure I was describing generates the definition of cocommutative Hopf algebra when you give it the definition of "group".
The cocommutative law implies the involutory law.
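For completeness, the standard calculation behind that last sentence runs as follows (a textbook argument in Sweedler notation, added here rather than taken from the comment).

```latex
% In any Hopf algebra the antipode S is the convolution inverse of the identity,
%   \sum S(x_{(1)})\, x_{(2)} = \varepsilon(x)\,1 = \sum x_{(1)}\, S(x_{(2)}),
% and S is an algebra antihomomorphism: S(a)S(b) = S(ba). If the Hopf algebra is
% cocommutative, \sum x_{(1)} \otimes x_{(2)} = \sum x_{(2)} \otimes x_{(1)}, so
\[
  \sum S^2(x_{(1)})\, S(x_{(2)})
    \;=\; S\Big(\sum x_{(2)}\, S(x_{(1)})\Big)
    \;=\; S\Big(\sum x_{(1)}\, S(x_{(2)})\Big)
    \;=\; S\big(\varepsilon(x)1\big)
    \;=\; \varepsilon(x)1 .
\]
% Hence S^2 is a convolution inverse of S; but id is also a convolution inverse
% of S, and inverses in the convolution monoid are unique, so S^2 = id:
% cocommutative implies involutory.
```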