Today I want to say a bit about physics, and then a bit about logic, since that's what I was studying for the last month in Marseille. But first, the astronomy pictures of the week:
1) Endurance crater's dazzling dunes, NASA/JPL, available at http://marsrovers.jpl.nasa.gov/gallery/press/opportunity/20040806a.html
On August 4, 2004, the Mars rover called Opportunity took these pictures of dunes as it entered Endurance Crater. The red one is what we'd actually see; the blue one is a false-color image designed to bring out certain details.
Both images show tendrils of sand less than 1 meter high stretching from the big dunes toward the rover, and some rocks in the foreground. The false-color image emphasizes accumulations of millimeter-sized spheres called "blueberries" on the flat parts of the dunes. Here's what they look like close up:
2) Mineral in Mars "berries" adds to water story, NASA/JPL, available at http://marsrover.nasa.gov/newsroom/pressreleases/20040318a.html
Thanks to a Mössbauer spectrometer aboard the rover, which studies rocks by firing γ rays at them, we know these blueberries contain a lot of hematite.
Hematite is made of ferric oxide, Fe2O3, otherwise known as "rust". It's usually formed in the presence of water. For this and other reasons, it's believed that the blueberries in Endurance Crater were formed when Mars was wetter than today. An interesting puzzle is whether they were formed by groundwater leaching ferric oxide from rocks, or deposited in standing water - for example, a lake.
It's amazing how much we can learn from unmanned space probes. And it's amazing to me that some people want to spend billions on manned missions. To read more about what SF writers Larry Niven, Joe Haldeman, Greg Bear and other folks including me think about the merits of manned versus unmanned space missions, try this:
3) Meme Therapy: Life from a science fiction point of view, http://memetherapy.blogspot.com/2006/03/brain-parade-feature-where-we-pester.html
But enough space stuff... now for some physics!
A strange thing happened around the 1980s. Before that, theorists had been making rapid and revolutionary progress in understanding the fundamental laws of physics for almost a century. They kept predicting shocking new effects that were soon found in actual experiments: radio waves, nuclear chain reactions, black holes, lasers, antimatter, neutrinos, quarks,... up to and including the W and Z bosons. The power of human thought never seemed greater.
Since the 1980s, most of the new discoveries in fundamental physics have come from unexpected observations in astronomy. These observations were mostly not predicted by theorists. The key examples are: dark matter, dark energy, and neutrino oscillations. The only serious counterexamples that come to mind are the top quark, discovered in 1995, and Alan Guth's inflationary cosmology, dreamt up around 1979 and partially confirmed by recent data - though the jury is still out.
Theorists are publishing more than ever, but most of their theories are either not yet testable (string theory, loop quantum gravity) or seem to have been disproved by experiment (grand unified theory predictions of proton decay).
It's interesting to meditate on why this change has happened, what it means, and what will happen next. I spoke about this in Marseille shortly before coming home to Riverside. You can see my slides here:
4) John Baez, Fundamental physics: where we stand today, lecture at the Faculty of Sciences, Luminy, February 27th, 2006, available at http://math.ucr.edu/home/baez/where_we_stand/
This talk was for nonphysicists, so it's not very technical. The first part is a gentle introduction to the laws of physics as we know them.
For more info on dark matter and dark energy, try this:
5) Varun Sahni, Dark matter and dark energy, available as astro-ph/0403324.
If you read this, you'll learn about the "cuspy core problem" - existing cold dark matter models produce galaxies with sharp spikes of high density near their cores, sharper than observed. You'll learn about "quintessence", a kind of hypothetical field that some people use to model dark energy, thus "explaining" the accelerating expansion of the universe. You'll learn about the "Chaplygin gas", a hypothetical substance whose properties interpolate between those of cold dark matter and dark energy. And, you'll learn about "phantom energy" models of dark energy, which fit the accelerating expansion of the universe quite nicely now but predict a "Big Rip", in which the expansion rate eventually becomes infinite.
In short, you'll see how people are flailing around trying to understand dark matter and dark energy.
For more on neutrino oscillations, try this:
6) K. M. Heeger, Evidence for neutrino mass: a decade of discovery, available as hep-ex/0412032.
I had a great time in Marseille. The area around there is perfect for mathematicians. Algebraists can visit the beautiful nearby city of Aix - pronounced "x". Logicians will enjoy the dry, dusty island of If - pronounced "eef", just like a French logician would say it. And everyone will enjoy the medieval hill town of Les Baux, which looks like something out of Escher.
Actually the Château d'If, on the island of the same name, is where Edmond Dantès was imprisoned in Alexandre Dumas' novel "The Count of Monte Cristo". It's in this formidable fortress that the wise old Abbé Faria tells Dantès the location of the treasure that later made him rich.
I guess everyone except me read this story as a kid - I'm just reading it now. But how many of you remember that Faria spent his time in prison studying the works of Aristotle? There's a great scene where Dantès asks Faria where he learned so much about logic, and Faria replies: "If - and only If!"
That Dumas guy sure was a joker.
Luckily I didn't need to be locked up on a deserted island to learn some logic in Marseille. There were lots of great talks on this topic at the conference I attended:
7) Geometry of Computation 2006 (Geocal06), http://iml.univ-mrs.fr/geocal06/
For example, Yves Lafont gave a category-theoretic approach to Boolean logic gates which explains their relation to Feynman diagrams:
8) Yves Lafont, Towards an algebraic theory of Boolean circuits, Journal of Pure and Applied Algebra 184 (2003), 257-310. Also available at http://iml.univ-mrs.fr/~lafont/publications.html
and together with Yves Guiraud, Francois Metayer and Albert Burroni, he gave a detailed introduction to the homology of n-categories and its application to rewrite rules. The idea is to study any sort of algebraic gadget (like a group) by creating an n-category where the objects are "expressions" for elements in the gadget, the morphisms are "ways of rewriting expressions" by applying the rules at hand, the 2-morphisms are "ways of passing from one way of rewriting expressions to another" by applying certain "meta-rules", and so on. Then one can use ideas from algebraic topology to study this n-category and prove stuff about the original gadget!
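To make the rewriting picture concrete, here's a tiny Haskell sketch of my own (the names and the example rule are my inventions, not anything from Lafont's papers). We present a monoid by string rewrite rules; each successful application of "step" is one of the morphisms described above, a "way of rewriting expressions", and "normalize" follows a path of such morphisms to a normal form:

    import Data.List (isPrefixOf)

    -- A rewrite rule: replace the left-hand string by the right-hand one.
    type Rule = (String, String)

    -- Example presentation: the free commutative monoid on a and b,
    -- with the single rule ba -> ab.
    rules :: [Rule]
    rules = [("ba", "ab")]

    -- Apply one rule at the leftmost possible position, if any.
    step :: [Rule] -> String -> Maybe String
    step rs = go
      where
        go [] = Nothing
        go t@(c : cs) =
          case [ rhs ++ drop (length lhs) t
               | (lhs, rhs) <- rs, lhs `isPrefixOf` t ] of
            (t' : _) -> Just t'
            []       -> (c :) <$> go cs

    -- Follow steps until no rule applies (this terminates because the
    -- example rule strictly decreases the number of b's before a's).
    normalize :: [Rule] -> String -> String
    normalize rs s = maybe s (normalize rs) (step rs s)

    main :: IO ()
    main = print (normalize rules "bbaa")  -- prints "aabb"

Different orders of applying the rules give different rewrite paths with the same endpoints - and comparing such paths is exactly where the 2-morphisms come in.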
To understand how this actually works, it's best to start with Craig Squier's work on the word problem for monoids. I explained this pretty carefully back in "week70" when I first heard Lafont lecture on this topic - it made a big impression on me! You can read more here:
9) Yves Lafont and A. Proute, Church-Rosser property and homology of monoids, Mathematical Structures in Computer Science, Cambridge U. Press, 1991, pp. 297-326. Also available at http://iml.univ-mrs.fr/~lafont/publications.html
10) Yves Lafont, A new finiteness condition for monoids presented by complete rewriting systems (after Craig C. Squier), Journal of Pure and Applied Algebra 98 (1995), 229-244. Also available at http://iml.univ-mrs.fr/~lafont/publications.html
Then you can go on to the higher-dimensional stuff:
11) Albert Burroni, Higher dimensional word problem with application to equational logic, Theoretical Computer Science 115 (1993), 43-62. Also available at http://www.math.jussieu.fr/~burroni/
12) Yves Guiraud, The three dimensions of proofs, Annals of Pure and Applied Logic (in press). Also available at http://iml.univ-mrs.fr/%7Eguiraud/recherche/cos1.pdf
13) Francois Metayer, Resolutions by polygraphs, Theory and Applications of Categories 11 (2003), 148-184. Available online at http://www.tac.mta.ca/tac/volumes/11/7/11-07abs.html
I was also lucky to get some personal tutoring from folks including Laurent Regnier, Peter Selinger and especially Phil Scott. Ever since "week40", I've been trying to understand something called "linear logic", which was invented by Jean-Yves Girard, who teaches in Marseille. Thanks to all this tutoring, I think I finally get it!
To get a taste of what Phil Scott told me, you should read this:
14) Philip J. Scott, Some aspects of categories in computer science, Handbook of Algebra, Vol. 2, ed. M. Hazewinkel, Elsevier, New York, 2000. Available as http://www.site.uottawa.ca/~phil/papers/
Right now, I'm only up to explaining a microscopic portion of this stuff. But since the typical reader of This Week's Finds may know more about physics than logic, maybe that's good. In fact, I'll use this as an excuse to simplify everything tremendously, leaving out all sorts of details that a real logician would want.
Logic can be divided into two parts: SYNTAX and SEMANTICS. Roughly speaking, syntax concerns the symbols you scribble on the page, while semantics concerns what these symbols mean.
A bit more precisely, imagine some kind of logical system where you write down some theory - like the axioms for a group, say - and use it to prove theorems.
In the realm of syntax, we focus on the form our theory is allowed to have, and how we can deduce new sentences from old ones. So, one of the key concepts is that of a PROOF. The details will vary depending on the kind of logical system we're studying.
In the realm of semantics, we are interested in gadgets that actually satisfy the axioms in our theory - for example, actual groups, if we're thinking about the theory of groups. Such a gadget is called a MODEL of the theory. Again, the details vary immensely.
In the realm of syntax, we say a list of axioms X "implies" a sentence P if we can prove P from X using some deduction rules, and we write this as
X |- P
In the realm of semantics, we say a list of axioms X "entails" a sentence P if every model of X is also a model of P, and we write this as
X |= P
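For classical propositional logic, the semantic side is simple enough to check by brute force. Here's a minimal Haskell sketch of my own (not from any of the references here): a "model" of the list of axioms X is just a truth-value assignment making all of X true, and X |= P says every such assignment makes P true as well:

    import Data.Maybe (fromMaybe)

    -- Formulas of propositional logic.
    data Formula = Var String
                 | Not Formula
                 | And Formula Formula
                 | Or  Formula Formula
      deriving Show

    -- A model, for this toy purpose, is a truth-value assignment.
    type Model = [(String, Bool)]

    eval :: Model -> Formula -> Bool
    eval m (Var x)   = fromMaybe False (lookup x m)
    eval m (Not p)   = not (eval m p)
    eval m (And p q) = eval m p && eval m q
    eval m (Or  p q) = eval m p || eval m q

    vars :: Formula -> [String]
    vars (Var x)   = [x]
    vars (Not p)   = vars p
    vars (And p q) = vars p ++ vars q
    vars (Or  p q) = vars p ++ vars q

    -- X |= P: every assignment satisfying all of X satisfies P.
    entails :: [Formula] -> Formula -> Bool
    entails xs p = and [ eval m p | m <- models ]
      where
        vs = foldr (\v acc -> if v `elem` acc then acc else v : acc) []
                   (concatMap vars (p : xs))
        assignments []         = [[]]
        assignments (v : rest) = [ (v, b) : m | b <- [True, False]
                                              , m <- assignments rest ]
        models = [ m | m <- assignments vs, all (eval m) xs ]

    main :: IO ()
    main = do
      print (entails [And (Var "a") (Var "b")] (Var "a"))  -- True
      print (entails [Or  (Var "a") (Var "b")] (Var "a"))  -- False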
Syntax and semantics are "dual" in a certain sense - a sense that can be made very precise if one fixes a specific class of logical systems. This duality is akin to the usual relation between vector spaces and their duals, or more generally groups and their categories of representations. The idea is that given a theory T you can figure out its models, which form a category Mod(T) - and conversely, given the category of models Mod(T), perhaps with a little extra information, you can reconstruct T.
A little extra information? Well, in some cases a model of T will be a set with some extra structure - for example, if T is the theory of groups, a model of T will be a group, which is a set equipped with some operations. So, in these cases there's a functor
U: Mod(T) → Set
assigning each model its underlying set. And, you can easily reconstruct T from Mod(T) together with this functor.
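Here's a toy Haskell rendering of this setup, using monoids rather than groups to keep it short - all names here are my own invention. A model of the theory of monoids is a carrier type equipped with the operations, a homomorphism is a function between carriers preserving them (a property the types can't enforce), and the forgetful functor just drops the operations:

    -- A model of the theory of monoids: a carrier type together with
    -- interpretations of the operations.  (That they satisfy the monoid
    -- axioms is a promise the types don't enforce.)
    data MonoidModel a = MonoidModel
      { unit :: a
      , mult :: a -> a -> a
      }

    -- Two models:
    additiveInts :: MonoidModel Int
    additiveInts = MonoidModel { unit = 0, mult = (+) }

    boolLists :: MonoidModel [Bool]
    boolLists = MonoidModel { unit = [], mult = (++) }

    -- A homomorphism of models: a function between the carriers that
    -- preserves the operations (again a property, not enforced here).
    newtype Hom a b = Hom { underlying :: a -> b }

    -- The forgetful functor U: Mod(T) -> Set sends a model to its
    -- carrier and a homomorphism to its underlying function:
    forgetHom :: Hom a b -> (a -> b)
    forgetHom = underlying

    -- Example: length is a homomorphism from boolLists to additiveInts.
    lengthHom :: Hom [Bool] Int
    lengthHom = Hom length

    main :: IO ()
    main = print (forgetHom lengthHom (mult boolLists [True] [False, True]))
      -- prints 3 = length ([True] ++ [False, True])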
This idea was worked out by Lawvere for a class of logical systems called algebraic theories, which I discussed in "week200". But, the same idea goes by the name of "Tannaka-Krein duality" in a different context: a Hopf algebra H has a category of comodules Rep(H), which comes equipped with a functor
U: Rep(H) → Vect
assigning each comodule its underlying vector space. And, you can reconstruct H from Rep(H) together with this functor. The proof is even very similar to Lawvere's proof for algebraic theories!
I gave a bunch of talks in Marseille about algebraic theories, some related logical systems called PROPs and PROs, and their relation to quantum theory, especially Feynman diagrams:
15) John Baez, Universal algebra and diagrammatic reasoning, available as http://math.ucr.edu/home/baez/universal/
I came mighty close to explaining how to compute the cohomology of an algebraic theory... and you can read more about that here:
16) Mamuka Jibladze and Teimuraz Pirashvili, Cohomology of algebraic theories, Journal of Algebra 137 (1991), 253-296.
Mamuka Jibladze and Teimuraz Pirashvili, Quillen cohomology and Baues-Wirsching cohomology of algebraic theories, Max-Planck-Institut für Mathematik, preprint series 86 (2005).
But alas, I didn't get around to talking about the duality between syntax and semantics. For that Lawvere's original thesis is a good place to go:
17) F. William Lawvere, Functorial Semantics of Algebraic Theories, Ph.D. thesis, Columbia University, 1963. Also available at http://www.tac.mta.ca/tac/reprints/articles/5/tr5abs.html
Anyway, the stuff Phil Scott told me about was mainly over on the syntax side. Here categories show up in another way. Oversimplifying as usual, the idea is to create a category where an object P is a sentence - or maybe a list of sentences - and a morphism
f: P → Q
is a proof of Q from P - or maybe an equivalence class of proofs.
We can compose proofs in a more or less obvious way, so with any luck this gives a category! And, different kinds of logical system give us different kinds of categories.
Quite famously, the multiplicative fragment of intuitionistic logic gives cartesian closed categories. (The "multiplicative fragment" is the portion that deals with "and" and "implies" but leaves out "or" and "not". I'm calling it this because "and" acts like multiplication, while "or" acts like addition.) Similarly, the multiplicative fragment of linear logic gives *-autonomous categories. Full-fledged intuitionistic logic gives cartesian closed categories with finite coproducts, and full-fledged linear logic gives us even fancier kinds of categories! If you want to learn about these examples, read the handbook article by Phil Scott mentioned above.
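Haskell's types form (roughly) a cartesian closed category, so you can watch this correspondence in action: "and" becomes a product type, "implies" becomes a function type, and a proof of a sequent becomes a program. Here's a little illustration of my own:

    -- "and" is a product type, "implies" is the function arrow.

    -- A proof of P & Q |- P:
    proj1 :: (p, q) -> p
    proj1 (x, _) = x

    -- A proof of P |- Q => (P & Q):
    pairUp :: p -> (q -> (p, q))
    pairUp x y = (x, y)

    -- The rule taking proofs of P |- Q and P |- Q' to a proof of
    -- P |- Q & Q' (the first Gentzen rule below) becomes:
    pairRule :: (p -> q) -> (p -> q') -> (p -> (q, q'))
    pairRule f g x = (f x, g x)

    -- Currying relates P & Q |- R to P |- Q => R:
    curryProof :: ((p, q) -> r) -> (p -> (q -> r))
    curryProof f x y = f (x, y)

    main :: IO ()
    main = print (pairRule (+ 1) (* 2) (5 :: Int))  -- prints (6,10)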
One thing that intrigues me is the equivalence relation we need to get a category whose morphisms are equivalence classes of proofs. In Gentzen's "natural deduction" approach to logic, there are various deduction rules. Here's one:
    P |- Q   P |- Q'
   ------------------
      P |- Q & Q'

This says that if P implies Q and it also implies Q', then it implies Q & Q'.
Here's another:
    P |- Q => R
   --------------
    P and Q |- R

And here's a very important one, called the "cut rule":
    P |- Q   Q |- R
   -----------------
        P |- R

If P implies Q and Q implies R, then P implies R!
There are a bunch more... and to get the game rolling we need to start with this:
    P |- P

In this setup, a proof f: P → Q looks vaguely like this:
    f-crud
    f-crud
    f-crud
    f-crud
    f-crud
    f-crud
   ---------
    P |- Q

The stuff I'm calling "f-crud" is a bunch of steps which use the deduction rules to get to P |- Q.
Suppose we also have a proof
g: Q → R
There's a way to stick f and g together to get a proof
fg: P → R
This proof consists of setting the proofs f and g side by side and then using the cut rule to finish the job. So, fg looks like this:
    f-crud    g-crud
    f-crud    g-crud
    f-crud    g-crud
    f-crud    g-crud
   --------  --------
    P |- Q    Q |- R
   ------------------
         P |- R

Now let's see if composition is associative. Suppose we also have a proof
h: R → S
We can form proofs
(fg)h: P → S
and
f(gh): P → S
Are they equal? No! The first one looks like this:
    f-crud    g-crud
    f-crud    g-crud
    f-crud    g-crud     h-crud
    f-crud    g-crud     h-crud
   --------  --------    h-crud
    P |- Q    Q |- R     h-crud
   ------------------   --------
         P |- R          R |- S
   ------------------------------
              P |- S

while the second one looks like this:
              g-crud    h-crud
              g-crud    h-crud
    f-crud    g-crud    h-crud
    f-crud    g-crud    h-crud
    f-crud   --------  --------
    f-crud    Q |- R    R |- S
   --------  ------------------
    P |- Q         Q |- S
   ------------------------------
            P |- S

So, they're not quite equal! This is one reason we need an equivalence relation on proofs to get a category. Both proofs resemble trees, but the first looks more like this:
    \ /  /
     \/  /
      \ /
       |

while the second looks more like this:
    \  \ /
     \  \/
      \ /
       |

So, we need an equivalence relation that identifies these proofs if we want composition to be associative!
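Here's how small the failure is, in a toy Haskell sketch of my own devising: if we represent proofs as bare syntax trees and compose them with the cut rule, (fg)h and f(gh) come out as distinct trees:

    -- A toy syntax of proofs: an unanalyzed derivation ("crud"), or a
    -- use of the cut rule gluing two proofs together.
    data Proof = Crud String
               | Cut Proof Proof
      deriving (Eq, Show)

    f, g, h :: Proof
    f = Crud "f"   -- a proof of P |- Q
    g = Crud "g"   -- a proof of Q |- R
    h = Crud "h"   -- a proof of R |- S

    main :: IO ()
    main = do
      let fg_h = Cut (Cut f g) h   -- (fg)h
          f_gh = Cut f (Cut g h)   -- f(gh)
      print fg_h                   -- Cut (Cut (Crud "f") (Crud "g")) (Crud "h")
      print f_gh                   -- Cut (Crud "f") (Cut (Crud "g") (Crud "h"))
      print (fg_h == f_gh)         -- False: not associative on the nose!

An equivalence relation identifying the two trees restores associativity - or we can keep them distinct and add a 2-morphism between them, as described next.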
This sort of idea, including this "tree" business, is very familiar from homotopy theory, where we need a similar equivalence relation if we want composition of paths to be associative. But in homotopy theory, people have learned that it's often better NOT to impose an equivalence relation on paths! Instead, it's better to form a weak 2-category of paths, where there's a 2-morphism going from this sort of composite:
    \ /  /
     \/  /
      \ /
       |

to this one:
    \  \ /
     \  \/
      \ /
       |

This is called the "associator". In our logic context, we can think of the associator as a way to transform one proof into another.
The associator should satisfy an equation called the "pentagon identity", which I explained back in "week144". However, it will only do this if we let 2-morphisms be equivalence classes of proof transformations.
So, there's a kind of infinite regress here. To deal with this, it would be best to work with a "weak ω-category" with
sentences (or sequences of sentences) as objects,
proofs as morphisms,
proof transformations as 2-morphisms,
transformations of proof transformations as 3-morphisms,...
and so on. With this, we would never need any equivalence relations: we keep track of all transformations explicitly. This is almost beyond what mathematicians are capable of at present, but it's clearly a good thing to strive toward.
So far, it seems Seely has gone the furthest in this direction. In his thesis, way back in 1977, he studied what one might call "weak cartesian closed 2-categories" arising from proof theory. You can read an account of this work here:
18) R.A.G. Seely, Weak adjointness in proof theory, in Proc. Durham Conf. on Applications of Sheaves, Springer Lecture Notes in Mathematics 753, Springer, Berlin, 1979, pp. 697-701. Also available at http://www.math.mcgill.ca/rags/WkAdj/adj.pdf
R.A.G. Seely, Modeling computations: a 2-categorical framework, in Proc. Symposium on Logic in Computer Science 1987, Computer Society of the IEEE, pp. 65-71. Also available at http://www.math.mcgill.ca/rags/WkAdj/LICS.pdf
Can we go all the way and cook up some sort of ω-category of proofs? Interestingly, while the logicians at Geocal06 were talking about n-categories and the geometry of proofs, the mathematician Vladimir Voevodsky was giving some talks at Stanford about something that sounds pretty similar:
19) Vladimir Voevodsky, lectures on homotopy λ calculus, notice at http://math.stanford.edu/distinguished_voevodsky.htm
Voevodsky has thought hard about n-categories, and he won the Fields medal for his applications of homotopy theory to algebraic geometry.
The typed λ calculus is another way of thinking about intuitionistic logic - or in other words, cartesian closed categories of proofs. The "homotopy λ calculus" should thus be something similar, but where we keep track of transformations between proofs, transformations between transformations between proofs... and so on ad infinitum.
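To illustrate just the first layer of this - a toy of my own, not anything from Voevodsky's lectures - recall that under the Curry-Howard correspondence a typed λ-term is a proof, and a β-reduction step is a transformation between proofs, the sort of thing a homotopy λ calculus would presumably record rather than quotient away:

    -- Two proofs of P |- P.  Under Curry-Howard they are the lambda
    -- terms  \x -> x  and  \x -> (\y -> x) x.
    directProof :: p -> p
    directProof x = x

    roundaboutProof :: p -> p
    roundaboutProof x = (\_y -> x) x

    -- Beta-reducing (\y -> x) x to x transforms the second proof into
    -- the first: a 2-morphism between proofs.  Haskell silently
    -- identifies the two; a homotopy lambda calculus would instead keep
    -- the reduction, and reductions between reductions, and so on.

    main :: IO ()
    main = print (directProof 'a' == roundaboutProof 'a')  -- True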
But that's just my guess about what the homotopy λ calculus might be! Is this what Voevodsky is talking about??? I haven't managed to get anyone to tell me. Maybe I'll email him and ask.
There were a lot of other cool talks at Geocal06, like Girard's talk on applications of von Neumann algebras (especially the hyperfinite type II1 factor!) in logic, and Peter Selinger's talk on the category of completely positive maps, diagrammatic methods for dealing with these maps, and their applications to quantum logic:
20) Peter Selinger, Dagger compact closed categories and completely positive maps, available at http://www.mscs.dal.ca/~selinger/papers.html
But, I want to finish writing this and go out and have some waffles for my Sunday brunch. So, I'll stop here!
An anonymous correspondent had this to say about the "homotopy λ calculus":
Several years ago, Kontsevich explained to me an idea he had about "homotopy proof theory" (or model theory, or logic, ...). As soon as I saw Voevodsky's abstract it reminded me of what Kontsevich said; perhaps it's a well-known idea in the Russian-Fields-medallist club. Somewhere I have notes from what Kontsevich said, but as far as I remember it went roughly like this.

In certain set-ups (such as Martin-Löf type theory) every statement carries a proof of itself. Of course, a statement may have many proofs. If we imagine that all the statements are of the form "A = B", then what we're saying is that every equals sign carries with it a reason for equality, or proof of equality. If I remember rightly, Kontsevich's idea was to do a topological analogue, so that every term (like A and B) is assigned a point in some fixed space, and equalities of terms induce paths between points. There was more, pushing the idea further, but I forget what.
© 2006 John Baez
baez@math.removethis.ucr.andthis.edu