I spent this Christmas in Greenwich, England. Over repeated visits to England I have discovered many fascinating things of which many Americans are unaware. For example: while in traffic one must drive on the left side of the road, on escalators one must stand on the right. You flip switches down to turn on lights. Camels and zebras have escaped from the Royal Zoo and mated, and their hybrids roam the English countryside. On the roadside you will occasionally see signs for "humped zebra crossings". Also, the Royal Observatory in Greenwich fires a powerful green laser each night to mark the Prime Meridian - zero degrees longitude.
Four of the last five sentences are true. In particular, you really can see a green laser beam shining due north from the Royal Observatory, across the Thames, past the Citigroup Building and out into the night. And speaking of longitude, the day before Christmas I visited this observatory and had a wonderful time learning how John Harrison solved the longitude problem.
The longitude problem? Ah, how soon we forget! It's pretty easy to tell your latitude by looking at the sun or the stars. However, it's pretty hard to tell your longitude, unless you have a clock that keeps good time. After all, if you know what time it is in a fixed place, like Greenwich, you can figure out how far east or west you've gone by comparing your local solar time - say, the moment the sun is highest in the sky - with the time back in Greenwich at that moment. Unfortunately, until the late 1700s, pendulum clocks didn't work well at sea, due to the rocking of the waves. This was a real problem! Ships would lose track of their longitude, go astray, and sometimes even run aground, killing hundreds of sailors.
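If you like, here's the arithmetic in a few lines of Python - a toy sketch of my own, with made-up numbers, not something from Sobel's book. The earth turns 360 degrees in 24 hours, so every hour of difference between local solar time and Greenwich time is worth 15 degrees of longitude.

    # Longitude from a clock: a minimal sketch.
    # The earth rotates 360 degrees in 24 hours, so local solar time runs
    # 15 degrees of longitude per hour ahead of (east) or behind (west) Greenwich.

    def longitude_from_clocks(local_solar_hour, greenwich_hour):
        """Degrees of longitude: positive means east of Greenwich, negative means west."""
        return 15.0 * (local_solar_hour - greenwich_hour)

    # Example: the sun is at its highest point (local noon = 12.0) while the
    # chronometer, still set to Greenwich time, reads 14:30 (= 14.5).
    # Local time lags Greenwich by 2.5 hours, so the ship is 37.5 degrees west.
    print(longitude_from_clocks(12.0, 14.5))   # prints -37.5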
Since England was a big maritime power, in 1714 they set up the Board of Longitude, which offered a prize of 20,000 pounds to anyone who could solve this problem. Newton and Halley favored a solution which involved measuring the angle between the moon and nearby stars and then consulting a bunch of tables. This was a complicated system that could only work with the help of an accurate star atlas and a detailed understanding of the motion of the moon. Newton set to work on the necessary calculations. John Flamsteed had already been made England's first Astronomer Royal, and he set to work on the star atlas. He moved into the Royal Observatory, and stayed up each night making observations with the help of his wife.
However, before this "lunar distance method" came online, the watchmaker John Harrison invented the first of a series of ingenious clocks that worked well despite rocking waves and fluctuations of temperature. All these can still be seen at the Royal Observatory - they're very beautiful! In the process, Harrison developed a whole bunch of cool technology like ball bearings and the bimetallic strip used in thermostats.
Alas, the Board refused to pay up even when Harrison built a clock that was accurate to within .06 seconds a day, which was certainly good enough. Finally King George III persuaded the Board to give Harrison the prize - but by then Harrison was an old man. Luckily, I get the feeling Harrison was really more interested in building clocks than winning the prize money. He loved his work... one of the keys to a happy life.
Here's a book that tells his story in more detail:
1) Dava Sobel, Longitude, Fourth Estate Ltd., London, 1996.
I found it in the gift shop of the Observatory. It's a fun read, but for the technical reader it's frustratingly vague about how Harrison's clocks actually work.
I also bought this book there:
2) E. G. Richards, Mapping Time: The Calendar and its History, Oxford U. Press, Oxford, 1998.
Since it's almost New Year's Day, let me tell you a bit about what I learned about calendars!
Mathematical physics has deep roots in astronomy, which may have been the first exact science. Thanks to astrology, the ancient theocratic states put a lot of resources into precisely tracking and predicting the motion of the sun, moon and planets. For example, by 700 BC the Babylonians had measured the length of the year to be 365.24579 days, with an error of only .00344 days. Two hundred years later, they had measured the length of the month to be 29.53014 days - an error of only 2.6 seconds.
If there were 360 days in a year, 30 days in a month, and 12 months in a year, the ancients would have been happy, since they loved numbers with lots of divisors. But alas, there aren't! These whole numbers come tantalizingly close, but not close enough, so the need for accurate calendars, balanced by the desire for simplicity, kept pushing the development of mathematics and astronomy forward.
There are also lots of complications I haven't mentioned. I've been talking about the "mean solar day", the "mean synodic month" and the "tropical year", but in fact the lengths of the day and the month vary substantially due to the tilt of the earth's axis, the tilt of the moon's orbit, and other effects - so actually there are several different definitions of day, month and year. This was enough to keep the astronomer-priests in business for centuries. For more on the physics of it all, try:
3) John Baez, The wobbling of the earth and other curiosities, http://math.ucr.edu/home/baez/wobble.html
Unfortunately, the Romans, whose calendar we inherit, were real goofballs when it came to calendrics. Their system was run by a body of "pontifices" headed by the Pontifex Maximus. In 450 BC these guys adopted a calendar in which odd-numbered years had 12 months and 355 days, while even-numbered years had 13 months and alternated between 377 and 378 days. The extra month, called Mercedonius, was stuck smack in the middle of February. Even worse, this system gave an average of 366 and 1/4 days per year - one too many - so it kept drifting out of kilter with the seasons. The pontifices were authorized to fix things on an ad hoc basis as needed, but power corrupts, so they started taking bribes to suddenly advance or postpone the start of the year.
As a result, by the time Julius Caesar became dictator, the calendar was three months in advance of the seasons! After consulting with the Alexandrian astronomer Sosigenes, he decided to institute reforms. To straighten things out, the year 46 BC was made 445 days long. This was known as the Last Year of Confusion. It featured an extra long Mercedonius as well as two extra months after December, called Undecimber and Duodecimber.
The new so-called "Julian calendar" featured 12 months and 365 days, with an extra day in February every fourth year. The months alternated nicely between 31 and 30 days, except for February, which only had 30 on leap years. Unfortunately, Caesar was assassinated in 44 BC before this system fully took hold. The pontifices ineptly interpreted his orders and stuck in an extra day every third year. This didn't get fixed until 9 BC, when Augustus stopped this practice and decreed that the next 3 leap years be skipped to make up for the extra ones the pontifices had inserted.
From then on, things went more smoothly, except for a lot of name-grabbing. When Julius Caesar was assassinated, the Senate took the month of Quintilis and renamed it "Iulius" in his honor, giving us July. Augustus followed suit, naming the month of Sextilis after himself - giving us August. More annoyingly, he stole the last day from February and stuck it on his own month to make it 31 days long, and did some extra reshuffling so the months next to his had only 30 - giving us our current messy setup.
The Senate offered to name a month after the next emperor, Tiberius, but he modestly declined. The next one, Caligula, was not so modest: he renamed June after his father Germanicus. Then Claudius renamed May after himself, and Nero grabbed April. Later, Domitian took October and Antonius took September. The vile Commodus tried to rename all twelve months, but that didn't stick. Then Tacitus snatched September away from Antonius... but luckily, all these later developments have been forgotten!
This is only a tiny fraction of the fascinating lore in Richards' book. Ever wonder why there are 7 days in a week? That's pretty easy: they're named after the 7 planets - in the old sense of "planets", meaning heavenly bodies visible by eye that don't move with the stars. But here's a harder puzzle! Why are the 7 planets listed in this order?
    Sun      (Sunday - Dies Solis)
    Moon     (Monday - Dies Lunae)
    Mars     (Tuesday - Dies Martis)
    Mercury  (Wednesday - Dies Mercurii)
    Jupiter  (Thursday - Dies Iovis)
    Venus    (Friday - Dies Veneris)
    Saturn   (Saturday - Dies Saturni)

There's actually a nice explanation. However, I won't give it away here. Can you guess it?
Since ancient science was closely tied to numerology, I can't resist mentioning some fun facts relating the calendar and the deck of cards. As you probably know, playing cards come in 4 suits of 13 cards each, for a total of 52. 52 is also the number of weeks in a year. The 4 suits correspond to the 4 seasons, so there are 13 weeks in each season, just as there are 13 cards in each suit.
Even better, if we add up the face values of all the cards in the deck, counting an ace as 1, a deuce as 2, and so on up to 13, we get
(1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 + 11 + 12 + 13) x 4 = 364,
which is one less than the number of days in a year! The remaining day corresponds to the "joker", a card which does not belong to any suit.
Many calendars contain "epagomenal days" not included in any month. For example, the Egyptians had 5 epagomenal days, leaving 360 which they could split up neatly into 12 months. In a system with one epagomenal day - the "joker" - the remaining 364 days can be divided not only as
(30 + 30 + 31) x 4,
which allows for two 30-day calendar months and one 31-day calendar month per season, but also as
13 x 28
which allows for 13 anomalistic months of 28 days each - where an "anomalistic month" is the time it takes for the moon to come round to its perigee, where it's as close to the earth as possible.
Putting it all together, we see that the number 364 factors as
13 x 4 x 7,
which corresponds to 13 months, each containing 4 weeks, each containing 7 days - or alternatively to 4 seasons, each containing 13 weeks, each containing 7 days - or to 4 suits, each containing 13 cards, with an average face value of 7.
Cute, eh? I'm not sure how much of this stuff is coincidence and how much was planned out by the mysterious mystics who invented playing cards. Of course we can't take these whole numbers too seriously - for example, the anomalistic month is actually 27.55455 days long, not 28. However, a 364-day year is mentioned in the Book of Enoch, a pseudepigraphical Hebrew text which was found, among other places, in the Dead Sea Scrolls. In fact, a year of this length was used in Iceland as late as 1940. The idea of having one epagomenal day and dividing each season into months with 30, 30 and 31 days has also been favored by many advocates of calendar reform.
Of course, numerology should always be left to competent mathematicians who don't actually believe in it.
Here's another nice book:
4) Alain Connes, Andre Lichnerowicz and Marcel Paul Schutzenberger, A Triangle of Thoughts, AMS, Providence, 2000.
This consists of polished-up transcripts of dialogues (or should I say trialogues?) among these mathematicians. I wish more good scientists would write this sort of thing; it's much less strenuous to learn stuff by listening to people talk than by reading textbooks! It's true that textbooks are necessary when you want to master the details, but for the all-important "big picture", conversations can be much better.
This book focuses on mathematical logic and physics, with a strong touch of philosophy... but it wanders all over the map in a pleasant way - from Bernoulli numbers to game theory! The conversation is dominated by Connes, whose name appears on the title in bigger letters than the other two authors', perhaps because the others are now dead.
There is only one mistake in this book that I would like to complain about. Following Roger Penrose, Connes takes quasicrystals as evidence for some mysterious uncomputability in the laws of nature. The idea is that since there's no algorithm for deciding when a patch of Penrose tiles can be extended to a tiling of the whole plane, nature must do something uncomputable to produce quasicrystals of this symmetry. The flaw in this reasoning seems obvious: when nature gets stuck, it feels free to insert a defect in the quasicrystal. Quasicrystals do not need to be perfect to produce the characteristic diffraction patterns by which we recognize them.
But that's a minor nitpick: the book is wonderful! Read it!
In case you don't know: Alain Connes is a Fields medalist, who won the prize mainly for two things: his work on von Neumann algebras, and his work on noncommutative geometry. Now I'll talk a bit about von Neumann algebras, since you'll need to understand a bit about them to follow the rest of my description of the paper by Michael Mueger that I have been slowly explaining throughout "week173" and "week174".
So: what's a von Neumann algebra? Before I get technical and you all leave, I should just say that von Neumann designed these algebras to be good "algebras of observables" in quantum theory. The simplest example consists of all n x n complex matrices: these become an algebra if you add and multiply them the usual way. So, the subject of von Neumann algebras is really just a grand generalization of the theory of matrix multiplication.
But enough beating around the bush! For starters, a von Neumann algebra is a *-algebra of bounded operators on some Hilbert space of countable dimension - that is, a bunch of bounded operators closed under addition, multiplication, scalar multiplication, and taking adjoints: that's the * business. However, to be a von Neumann algebra, our *-algebra needs one extra property! This extra property is cleverly chosen so that we can apply functions to observables and get new observables, which is something we do all the time in physics.
More precisely, given any self-adjoint operator A in our von Neumann algebra and any measurable function f: R → R, we want there to be a self-adjoint operator f(A) that again lies in our von Neumann algebra. To make sure this works, we need our von Neumann algebra to be "closed" in a certain sense. The nice thing is that we can state this closure property either algebraically or topologically.
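Before getting to those two formulations, here's what "applying a function to an observable" looks like for plain matrices - a little numpy sketch of my own, not taken from any of the references: diagonalize the self-adjoint matrix and apply f to its eigenvalues.

    import numpy as np

    def apply_function(A, f):
        """Apply a real function f to a self-adjoint matrix A: f(A) = U f(D) U*."""
        eigvals, U = np.linalg.eigh(A)              # A = U diag(eigvals) U*
        return U @ np.diag(f(eigvals)) @ U.conj().T

    # A self-adjoint 2 x 2 matrix (an "observable"):
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # For f(x) = x^2, f(A) agrees with the matrix product A A:
    print(np.allclose(apply_function(A, lambda x: x**2), A @ A))        # True

    # A more interesting f: the indicator function of [2.5, infinity),
    # which yields the spectral projection onto eigenvalues >= 2.5.
    P = apply_function(A, lambda x: (x >= 2.5).astype(float))
    print(np.allclose(P @ P, P), np.allclose(P, P.conj().T))            # True True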
In the algebraic approach, we define the "commutant" of a bunch of operators to be the set of operators that commute with all of them. We then say a von Neumann algebra is a *-algebra of operators that's the commutant of its commutant.
In the topological approach, we say a bunch of operators T_i converges "weakly" to an operator T if their expectation values converge to those of T in every state, that is,

    <ψ, T_i ψ> → <ψ, T ψ>

for all unit vectors ψ in the Hilbert space. We then say a von Neumann algebra is a *-algebra of operators that is closed in the weak topology.
It's a nontrivial theorem - von Neumann's "double commutant theorem" - that these two definitions agree!
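In finite dimensions the closure is automatic, but the double commutant is still fun to compute. Here's a small numpy sketch of my own (not from the references): the condition XA = AX is linear in X, so the commutant of a finite set of matrices is just a null space, and computing it twice shows that the double commutant of a projection is the 2-dimensional algebra it generates.

    import numpy as np

    def commutant_basis(mats, n):
        """Basis of {X : XA = AX for all A in mats}, for real n x n matrices."""
        # With numpy's row-major flattening, vec(AX - XA) = (A kron I - I kron A^T) vec(X),
        # so stack these maps for every A and take the null space via the SVD.
        I = np.eye(n)
        M = np.vstack([np.kron(A, I) - np.kron(I, A.T) for A in mats])
        _, s, Vh = np.linalg.svd(M)
        return [Vh[i].reshape(n, n) for i in range(len(s)) if s[i] < 1e-10]

    # A single projection P on C^3 (rank 2).  Together with the identity it
    # generates a 2-dimensional *-algebra: span{P, 1-P}.
    P = np.diag([1.0, 1.0, 0.0])

    first  = commutant_basis([P], 3)        # commutant of {P}: block matrices M_2 + M_1
    second = commutant_basis(first, 3)      # commutant of the commutant

    print(len(first), len(second))          # 5 2 - the double commutant is 2-dimensional,
                                            # exactly the algebra generated by P and 1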
While classifying all *-algebras of operators is an utterly hopeless task, classifying von Neumann algebras is almost within reach - close enough to be tantalizing, anyway. Every von Neumann algebra can be built from so-called "simple" ones as a direct sum, or more generally a "direct integral", which is a kind of continuous version of a direct sum. As usual in algebra, the "simple" von Neumann algebras are defined to be those without any nontrivial ideals. This turns out to be equivalent to saying that only scalar multiples of the identity commute with everything in the von Neumann algebra.
People call simple von Neumann algebras "factors" for short. Anyway, the point is that we just need to classify the factors: the process of sticking these together to get the other von Neumann algebras is not tricky.
The first step in classifying factors was done by von Neumann and Murray, who divided them into types I, II, and III. This classification involves the concept of a "trace", which is a generalization of the usual trace of a matrix.
Here's the definition of a trace on a von Neumann algebra. First, we say an element of a von Neumann algebra is "nonnegative" if it's of the form xx* for some element x. The nonnegative elements form a "cone": they are closed under addition and under multiplication by nonnegative scalars. Let P be the cone of nonnegative elements. Then a "trace" is a function
    tr: P → [0, +∞]

which is linear in the obvious sense and satisfies
tr(xy) = tr(yx)
whenever both xy and yx are nonnegative.
Note: we allow the trace to be infinite, since the interesting von Neumann algebras are infinite-dimensional. This is why we define the trace only on nonnegative elements; otherwise we get "∞ minus ∞" problems. The same thing shows up in measure theory, where we start by integrating nonnegative functions, possibly getting the answer +∞, and worry later about other functions.
Indeed, a trace is very much like an integral, so we're really studying a noncommutative version of the theory of integration. On the other hand, in the matrix case, the trace of a projection operator is just the dimension of the space it's the projection onto. We can define a "projection" in any von Neumann algebra to be an operator with p* = p and p^2 = p. If we study the trace of such a thing, we're studying a generalization of the concept of dimension. It turns out this can be infinite, or even nonintegral!
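In the matrix case all of this is easy to play with in numpy - a quick sketch of my own:

    import numpy as np

    # A projection onto a 2-dimensional subspace of C^4:
    p = np.diag([1.0, 1.0, 0.0, 0.0])
    print(np.allclose(p @ p, p), np.trace(p))      # True 2.0 - trace = dimension of the range

    # The trace is cyclic, tr(xy) = tr(yx), even when xy and yx differ:
    x = np.random.rand(4, 4)
    y = np.random.rand(4, 4)
    print(np.isclose(np.trace(x @ y), np.trace(y @ x)))    # True

    # Nonnegative elements are those of the form x x*; their traces are >= 0:
    print(np.trace(x @ x.T) >= 0)                  # True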
We say a factor is type I if it admits a nonzero trace for which the trace of a projection lies in the set {0,1,2,...,+∞}. We say it's type I_n if we can normalize the trace so we get the values {0,1,...,n}. Otherwise, we say it's type I_∞, and we can normalize the trace to get all the values {0,1,2,...,+∞}.

It turns out that every type I_n factor is isomorphic to the algebra of n x n matrices. Also, every type I_∞ factor is isomorphic to the algebra of all bounded operators on a Hilbert space of countably infinite dimension.
Type I factors are the algebras of observables that we learn to love in quantum mechanics. So, the real achievement of von Neumann was to begin exploring the other factors, which turned out to be important in quantum field theory.
We say a factor is type II_1 if it admits a trace whose values on projections are all the numbers in the unit interval [0,1]. We say it is type II_∞ if it admits a trace whose values on projections are all the numbers in [0,+∞].
Playing with type II factors amounts to letting dimension be a continuous rather than discrete parameter!
Weird as this seems, it's easy to construct a type II_1 factor. Start with the algebra of 1 x 1 matrices, and stuff it into the algebra of 2 x 2 matrices as follows:

    x |->  ( x 0 )
           ( 0 x )

This doubles the trace, so define a new trace on the algebra of 2 x 2 matrices which is half the usual one. Now keep doing this, doubling the dimension each time, using the above formula to define a map from the 2^n x 2^n matrices into the 2^(n+1) x 2^(n+1) matrices, and normalizing the trace on each of these matrix algebras so that all the maps are trace-preserving. Then take the union of all these algebras... and finally, with a little work, complete this and get a von Neumann algebra!

One can show this von Neumann algebra is a factor. It's pretty obvious that the trace of a projection can be any fraction in the interval [0,1] whose denominator is a power of two. But actually, any number from 0 to 1 is the trace of some projection in this algebra - so we've got our paws on a type II_1 factor.
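Here's a finite-stage numpy sketch of the construction - my own toy version, of course, since a computer only ever sees finitely many stages. At stage n we sit inside the 2^n x 2^n matrices with the trace normalized by 1/2^n; the doubling map x |-> diag(x,x) preserves this normalized trace, and projections already realize every dyadic rational in [0,1] as a trace.

    import numpy as np

    def embed(x):
        """The doubling map x |-> diag(x, x) into matrices twice the size."""
        zero = np.zeros_like(x)
        return np.block([[x, zero], [zero, x]])

    def normalized_trace(x):
        """Trace divided by the matrix size, so the identity has trace 1 at every stage."""
        return np.trace(x) / x.shape[0]

    # The normalized trace is preserved by the embedding:
    x = np.random.rand(4, 4)
    print(np.isclose(normalized_trace(x), normalized_trace(embed(x))))   # True

    # At stage n, the projection onto the first k coordinates has trace k / 2^n,
    # so every dyadic rational in [0,1] is the trace of some projection:
    n, k = 3, 5                                   # inside the 8 x 8 matrices
    p = np.diag([1.0]*k + [0.0]*(2**n - k))
    print(normalized_trace(p))                    # 0.625 = 5/8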
This isn't the only II_1 factor, but it's the only one that contains a sequence of finite-dimensional von Neumann algebras whose union is dense in the weak topology. A von Neumann algebra like that is called "hyperfinite", so this guy is called "the hyperfinite II_1 factor".

It may sound like something out of bad science fiction, but the hyperfinite II_1 factor shows up all over the place in physics!
First of all, the algebra of 2^n x 2^n matrices is a Clifford algebra, so the hyperfinite II_1 factor is a kind of infinite-dimensional Clifford algebra. But the Clifford algebra of 2^n x 2^n matrices is secretly just another name for the algebra generated by creation and annihilation operators on the fermionic Fock space over C^n. Pondering this a bit, you can show that the hyperfinite II_1 factor is the smallest von Neumann algebra containing the creation and annihilation operators on a fermionic Fock space of countably infinite dimension.
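One concrete way to see the 2^n x 2^n matrices as generated by creation and annihilation operators is the Jordan-Wigner construction. Here's a little numpy sketch of my own along those lines: build n annihilation operators as tensor products of 2 x 2 matrices and check the canonical anticommutation relations.

    import numpy as np

    def annihilators(n):
        """Jordan-Wigner: n fermionic annihilation operators as 2^n x 2^n matrices."""
        a  = np.array([[0.0, 1.0], [0.0, 0.0]])   # one-mode annihilator
        z  = np.diag([1.0, -1.0])                 # sign operator
        I2 = np.eye(2)
        ops = []
        for i in range(n):
            factors = [z]*i + [a] + [I2]*(n - i - 1)
            op = factors[0]
            for f in factors[1:]:
                op = np.kron(op, f)
            ops.append(op)
        return ops

    def anticomm(x, y):
        return x @ y + y @ x

    a1, a2, a3 = annihilators(3)                  # three modes, so 8 x 8 matrices
    print(np.allclose(anticomm(a1, a2), 0))                    # {a_i, a_j}  = 0
    print(np.allclose(anticomm(a1, a1.conj().T), np.eye(8)))   # {a_i, a_i*} = 1
    print(np.allclose(anticomm(a1, a2.conj().T), 0))           # {a_i, a_j*} = 0 for i != j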
In less technical lingo - I'm afraid I'm starting to assume you know quantum field theory! - the hyperfinite II_1 factor is the right algebra of observables for a free quantum field theory with only fermions. For bosons, you want the type I_∞ factor.

There is more than one type II_∞ factor, but again there is only one that is hyperfinite. You can get this by tensoring the type I_∞ factor and the hyperfinite II_1 factor. Physically, this means that the hyperfinite II_∞ factor is the right algebra of observables for a free quantum field theory with both bosons and fermions.
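Here's a finite-dimensional toy version of that tensoring, just to see how the trace values combine - again my own illustration: traces multiply under tensor products, so an integer-valued trace on one side times a [0,1]-valued normalized trace on the other sweeps out arbitrarily large values.

    import numpy as np

    p = np.diag([1.0, 1.0, 1.0, 0.0])     # projection with ordinary trace 3 (the "type I" side)
    q = np.diag([1.0, 0.0])               # projection with normalized trace 1/2 (the "II_1" side)

    trace_p = np.trace(p)                 # 3.0
    tau_q   = np.trace(q) / q.shape[0]    # 0.5
    tensor  = np.kron(p, q)               # still a projection

    # The natural trace on the tensor product multiplies the two traces:
    print(np.trace(tensor) / q.shape[0], trace_p * tau_q)     # 1.5 1.5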
The most mysterious factors are those of type III. These can be simply defined as "none of the above"! Equivalently, they are factors for which any nonzero trace takes values in {0,∞}. In a type III factor, all projections other than 0 have infinite trace. In other words, the trace is a useless concept for these guys.
As far as I'm concerned, the easiest way to construct a type III factor uses physics. Now, I said that free quantum field theories had different kinds of type I or type II factors as their algebras of observables. This is true if you consider the algebra of all observables. However, if you consider a free quantum field theory on (say) Minkowski spacetime, and look only at the observables that you can cook up from the field operators on some bounded open set, you get a subalgebra of observables which turns out to be a type III factor!
In fact, this isn't just true for free field theories. According to a theorem of axiomatic quantum field theory, pretty much all the usual field theories on Minkowski spacetime have type III factors as their algebras of "local observables" - observables that can be measured in a bounded open set.
Okay, so much for the crash course on von Neumann algebras! Next time I'll hook this up to Mueger's work on 2-categories.
In the meantime, here are some references on von Neumann algebras in case you want to dig deeper. For the math, try these:
5) Masamichi Takesaki, Theory of Operator Algebras I, Springer, Berlin, 1979.
6) Richard V. Kadison and John Ringrose, Fundamentals of the Theory of Operator Algebras, 4 volumes, Academic Press, New York, 1983-1992.
7) Shoichiro Sakai, C*-algebras and W*-algebras, Springer, Berlin, 1971.
A W*-algebra is basically just a von Neumann algebra, but defined "intrinsically", in a way that doesn't refer to a particular representation as operators on a Hilbert space.
For applications to physics, try these:
8) Gerard G. Emch, Algebraic Methods in Statistical Mechanics and Quantum Field Theory, Wiley-Interscience, New York, 1972.
9) Rudolf Haag, Local Quantum Physics: Fields, Particles, Algebras, Springer, Berlin, 1992.
10) Ola Bratteli and Derek W. Robinson, Operator Algebras and Quantum Statistical Mechanics, 2 volumes, Springer, Berlin, 1987-1997.
Postscript:
For more about the measurement of time, Theo Buehler recommends this lecture:
11) John B. Conway, http://www.math.utk.edu/~conway/Time.html
For technical information on John Harrison's clocks, Nigel Seeley recommends this book, which also has a bunch of nice pictures:
12) William J. H. Andrewes, editor, The Quest for Longitude: The Proceedings of the Longitude Symposium, Harvard University, Cambridge, Massachusetts, November 4-6, 1993. Harvard University Collection of Historical Scientific Instruments, Cambridge Massachusetts, 1996.
Nigel Seeley and Julian Gilbey also recommend the following book on calendrics:
13) Edward M. Reingold and Nachum Dershowitz, Calendrical Calculations: The Millennium Edition, Oxford U. Press, Oxford, 1997. 268 pages.
Finally, here's a correction and the answer to the puzzle I gave above:
Derek Wise wrote:

>JB wrote:
>> ....[Augustus] stole the last day from February and stuck it on his
>> own month to make it 31 days long, and did some extra reshuffling so the
>> months next to his had only 30 - giving us our current messy setup.
>
>In the modern calendar, July has 31 days and is adjacent to August.

Yeah - I only remembered that a few days ago, after writing that issue of This Week's Finds. As a kid I refused to remember how many days were in each month, since it seemed hopelessly arbitrary and ugly - an all-too-human invention, rather than something intrinsic to the universe. Also, I was never fond of the mnemonic
    Thirty days hath September
    All the rest I don't remember....

mainly because so many months end in "-ember" that this mnemonic would need a mnemonic of its own for me to recall it. It was only much later that I learned the "knuckles and spaces" method for keeping track of this information. For some reason I tried this a few days ago, and then I said "Hey! There's a month with 31 days next to August! What gives?" I meant to look up the facts in Richards' book Mapping Time, but I forgot. Thanks for reminding me!
Anyway, here's the deal: the calendar reform of Julius Caesar gave the months these numbers of days:
    Januarius    31
    Februarius   29/30
    Martius      31
    Aprilis      30
    Maius        31
    Iunius       30
    Iulius       31
    Sextilis     30
    September    31
    October      30
    November     31
    December     30

A nice systematic alternation, though you might wonder why February gets picked on; this is because the earlier Roman calendar had a short February, and a month called Mercedonius stuck in the middle of February now and then.
Augustus screwed it up as follows:
    Januarius    31
    Februarius   28/29
    Martius      31
    Aprilis      30
    Maius        31
    Iunius       30
    Iulius       31
    Augustus     31
    September    30
    October      31
    November     30
    December     31

In short: he took the month of Sextilis, renamed it after himself, gave it an extra day, and switched the alternating pattern of 30 and 31 after that month.
By the way, Richard Bullock gave the "right" answer to my puzzle about why the 7 planets are listed in the order they are as names of days of the week. By this I mean he gives the same answer that Richards does in Mapping Time. Astrologers like to list the planets in order of decreasing orbital period, counting the sun as having period 365 days, and the moon as period 29 days:
    Saturn   (29 years)
    Jupiter  (12 years)
    Mars     (687 days)
    Sun      (365 days)
    Venus    (224 days)
    Mercury  (88 days)
    Moon     (29.5 days)

For the purposes of astrology they wanted to assign a planet to each hour of each day of the week. They did this in a reasonable way: they assigned Saturn to the first hour of the first day, Jupiter to the second hour of the first day, and so on, cycling through the list of planets over and over, until each of the 7 x 24 = 168 hours was assigned a planet. Each day was then named after the planet assigned to its first hour. Since 24 mod 7 equals 3, this amounts to taking the above list and reading off every third planet in it (mod 7), getting:
    Saturn   (Saturday)
    Sun      (Sunday)
    Moon     (Monday)
    Mars     (Tuesday)
    Mercury  (Wednesday)
    Jupiter  (Thursday)
    Venus    (Friday)

I don't think anyone is sure that this is how the days got the names they did; the earliest reference for this scheme is the Roman historian Dion Cassius (AD 150-235), who came long after the days were named. However, Dion says the scheme goes back to Egypt. In the Moralia of Plutarch (AD 46-120) there was an essay entitled "Why are the days named after the planets reckoned in a different order from the actual order?" Unfortunately this essay has been lost and only the title is known.
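If you want to check the scheme, it fits in a couple of lines of Python - my own little verification:

    # Planets in decreasing order of (apparent) orbital period:
    planets = ["Saturn", "Jupiter", "Mars", "Sun", "Venus", "Mercury", "Moon"]

    # Assign a planet to each of the 7 x 24 = 168 hours of the week, cycling through
    # the list, then name each day after the planet ruling its first hour.
    day_rulers = [planets[(24 * day) % 7] for day in range(7)]
    print(day_rulers)
    # ['Saturn', 'Sun', 'Moon', 'Mars', 'Mercury', 'Jupiter', 'Venus']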
To bring the subject back to physics: we should see all these attempts to bring order to time as part of a gradual process of developing ever more precise and logical coordinate systems for the spacetime manifold we call our universe. We may laugh at how the Roman pontifices took bribes to start the year a day early; our descendants may laugh at how we add or subtract leap seconds from Coordinated Universal Time (UTC) to keep it in step with the irregular rotation of that lumpy ball of rock we call Earth (or more precisely, the time system called UT2, based on the Earth's rotation). How precise will we get? Will we someday be worrying about leap attoseconds? Leap Planck times?