
A team of astronomers has found evidence that a dwarf galaxy near the Milky Way is surrounded by an enormous halo of dark matter, which may be 200 times heavier than all the stars in the galaxy itself:
1) Jan T. Kleyna, Mark I. Wilkinson, N. Wyn Evans and Gerard Gilmore, First clear signature of an extended dark matter halo in the Draco dwarf spheroidal, Astrophysical Journal Letters 563 (2001), L115-L118. Also available as astro-ph/0111329.
This just emphasizes a well-known fact: "dark matter" is one of the biggest mysteries in physics today. Unless we're mixed up, which is always possible, most of the energy density of the universe is made of some invisible stuff about which we know almost nothing! To add insult to injury, after dark matter the second biggest constituent of the mass/energy appears to be "dark energy". All other forms of matter, mainly hydrogen, come a distant third.
Perhaps I should say a word about the difference between dark matter and dark energy, since this is awfully confusing to the uninitiated.
The main reason people believe in "dark matter" is that galaxies and clusters of galaxies seem to have a lot more mass than can be accounted for by all the stuff we understand: stars, gas, and so forth. It's fairly easy to measure this mass using gravity, by seeing how fast things orbit around each other: stars around galaxies, or galaxies around each other. The hard part is guessing how much stuff is in the galaxies. Could there be lots of faint stars we don't see? Black holes, maybe? People have thought about all sorts of possibilities, but they just don't seem to add up. So, people postulate mysterious extra stuff: "dark matter".
"Dark energy", on the other hand, is basically just a fashionable name for the cosmological constant: that is, the built-in energy density of the vacuum. Einstein noticed that you can tinker with general relativity by making this nonzero, but only by also making the pressure nonzero, with the opposite sign and exactly the same magnitude in units where c = G = 1. This is very different from normal matter, or even dark matter as far as we can tell, where both the energy density and pressure are positive.
This is important because the expansion of the universe is governed both by energy density and pressure. More precisely, a calculation using general relativity shows that the expansion of the universe decelerates at a rate proportional to the energy density plus 3 times the pressure. (In case you're wondering, the number 3 comes from the fact that space is 3-dimensional.)
If you think about what I've told you, this means that normal matter makes the expansion decelerate, but a positive cosmological constant makes the expansion accelerate, since the effects of negative pressure dominate those of positive energy density, thanks to that factor of 3.
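In symbols, the statement above is the second Friedmann equation, where a(t) is the scale factor of the universe, ρ the energy density, and p the pressure, in units with c = 1:

```latex
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,\left(\rho + 3p\right)
```

For vacuum energy we have p = -ρ, so ρ + 3p = -2ρ is negative, the right-hand side becomes positive, and the expansion accelerates.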
Starting around 1995, convincing evidence started to build up that the expansion of the universe is accelerating. The simplest way to explain this is to posit a positive cosmological constant, or in other words, dark energy!
In case you're dreaming up alternative theories as I speak, let me assure you that hundreds of papers have been written about this subject, probing all sorts of possibilities. Perhaps the cosmological constant isn't really constant: maybe the negative pressure is due to a new form of matter called "quintessence". Perhaps general relativity is wrong: that's what people working on "modified Newtonian dynamics" believe. I don't have the energy or expertise to talk about all these ideas, so I'm just telling you the current conventional wisdom.
But if dark matter really exists, what could it be? There are lots of options. It could be an excess of familiar stuff that's somehow slipped through our bookkeeping, or MACHOs (massive compact halo objects), or WIMPs (weakly interacting massive particles), or... something else!
If you're trying to figure out the mystery of dark matter, you should first study all the hoops your theory must successfully jump through. Besides getting galaxies and clusters to rotate faster than they otherwise would, dark matter should collapse under its own gravity early in the history of the universe. Why? Otherwise, people seem unable to explain why galaxies formed as soon as they did! In the early universe, the ordinary matter was very hot gas. The hotter a ball of gas is, the bigger it must be before it collapses under its own gravity, since this happens when the escape velocity exceeds the average speed of the atoms. Without something to help it out, it seems that ordinary matter in the early universe could not collapse under its own gravity to form galaxy-sized lumps, but only much bigger lumps. But it seems galaxies formed quite early! This dilemma would go away if there were "cold dark matter" which clumped up under its own gravitation early on, seeding galaxy formation.
The new observation of this dwarf galaxy is further evidence that cold dark matter is real and plays an important role in galaxy formation. There are in fact 9 "dwarf spheroidal galaxies" near the Milky Way; the one studied is about 250,000 light years away from us in the constellation of Draco. Many astronomers believe that big galaxies like ours were formed from the accretion of such dwarfs.
Physicists are actually doing experiments to look for dark matter. Galaxy formation and everything else would work quite nicely if dark matter consisted of some sort of weakly interacting massive particle with a mass of about 100 GeV. The dark matter density near us seems to be roughly 5 x 10^{-25} grams per cubic centimeter, which would mean about 3 WIMPs per thousand cubic centimeters. That's not much, but since these WIMPs would be moving in random orbits in the gravitational potential well of the galaxy, they should be zipping past us at an average of 300 kilometers per second. This gives a flux of about 10^{5} WIMPs per square centimeter per second!
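The arithmetic above is easy to check. Here is a back-of-envelope sketch using the rough numbers just quoted; all the inputs are order-of-magnitude assumptions, not measurements:

```python
# Back-of-envelope WIMP number density and flux estimate.
GEV_TO_GRAMS = 1.783e-24      # 1 GeV/c^2 expressed in grams
rho = 5e-25                   # assumed local dark matter density, g/cm^3
m_wimp = 100 * GEV_TO_GRAMS   # assumed WIMP mass: 100 GeV in grams

n = rho / m_wimp              # number density of WIMPs, per cm^3
v = 300 * 1e5                 # typical galactic speed: 300 km/s in cm/s
flux = n * v                  # WIMPs crossing 1 cm^2 per second

print(f"about {n * 1000:.1f} WIMPs per thousand cubic centimeters")
print(f"flux of roughly {flux:.0e} WIMPs per cm^2 per second")
```

Running this gives about 3 WIMPs per thousand cubic centimeters and a flux within a factor of 2 of 10^{5} per square centimeter per second, matching the figures in the text.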
The problem is that, like neutrinos, most of these guys would pass through matter undetected. If you pick some specific theory concerning these WIMPs, for example that they're some sort of "neutralino" in the minimal supersymmetric extension of the Standard Model, and make some plausible assumptions about various numbers, you'd guess that about 10 WIMPs per year would interact with a 1-kilogram lump of matter. Of course the actual number could easily be many orders of magnitude different, but the point is: this is within the realm of what we might actually detect!
One way to go about it is to use sodium iodide crystals as scintillation detectors. When a WIMP smacks into one of these, it should emit a flash of light. The problem is to eliminate other causes such as cosmic rays and natural radioactivity from the surroundings. To get away from cosmic rays, it's good to go down into a mine. To get away from radioactivity it's good to use shielding made from high-purity copper or aged lead. The UK Dark Matter Collaboration has done just this, placing several 1-10 kilogram sodium iodide crystals 1100 meters below ground in the Boulby salt mine in Yorkshire. They've been taking data since 1997, and they've seen a number of anomalous events:
2) UK Dark Matter Collaboration (UKDMC) homepage, http://hepwww.rl.ac.uk//UKDMC/
The DAMA group (that's short for "dark matter") has found even more fascinating results. This collaboration involves Italian and Chinese physicists who are using nine 9.7-kilogram sodium iodide crystals in a laboratory 1400 meters below ground, off of a tunnel on a highway near Rome. The idea behind this experiment is not just to detect WIMPs, but to look for seasonal variations in the rate of their detection!
This may sound crazy, but it's based on sound logic. The sun orbits the galaxy at 232 kilometers/second, but the earth also orbits the sun at 30 kilometers/second in a plane that lies at a 60-degree angle to the galactic plane. As a result the earth is going through the galaxy faster when these motions add up, in June, than when they point in opposite directions, in December. So, if WIMPs are more or less randomly orbiting the galaxy in all directions, we should see a higher flux of WIMPs through the earth in summer than in winter!
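The size of this seasonal effect follows from the velocities just quoted. A quick sketch, using only the numbers in the text:

```python
import math

# June/December speeds of the earth through the galactic WIMP "wind".
v_sun = 232.0    # km/s: sun's orbital speed around the galaxy
v_earth = 30.0   # km/s: earth's orbital speed around the sun
tilt = math.radians(60)  # angle between earth's orbit and the galactic plane

# Only the component of the earth's velocity along the sun's galactic
# motion adds or subtracts:
v_along = v_earth * math.cos(tilt)

v_june = v_sun + v_along      # motions add up
v_december = v_sun - v_along  # motions partly cancel

print(f"June: {v_june:.0f} km/s, December: {v_december:.0f} km/s")
print(f"modulation: about {100 * v_along / v_sun:.0f}% either way")
```

So the earth's speed through the WIMP halo swings between roughly 247 and 217 kilometers per second over the year, a modulation of a few percent, which is exactly the kind of signal DAMA is hunting for.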
The DAMA group has been collecting data for four years, and claims to have actually seen such an "annual modulation signature". You can see a graph of their data here:
3) DAMA collaboration, Searching for the WIMP annual signature by the ~100 kg NaI(Tl) setup, http://www.lngs.infn.it/lngs/htexts/dama/dama39.html
For more information, try their homepage:
4) Dark Matter (DAMA) experiment home page, http://www.lngs.infn.it/lngs/htexts/dama/welcome.html
Unfortunately, their result is controversial, because the Cryogenic Dark Matter Search (CDMS) was unable to replicate it. This experiment works a different way: it uses germanium and silicon crystals cooled to a hundredth of a degree above absolute zero. The idea is to detect the phonons  that is, quantized sound waves  produced when a WIMP smacks into an atomic nucleus.
The original CDMS experiment was done at Stanford, only 10 meters below the ground; this meant it had to distinguish WIMPs from a background of cosmic rays. Now they are redoing the experiment in an abandoned mine in Minnesota, which should give more accurate results.
For more on the CDMS experiment, try:
5) Cryogenic Dark Matter Search (CDMS) home page, http://cdms.berkeley.edu/
In short, the situation is still murky. Luckily, a bunch more dark matter detectors are coming online as we speak, which should help straighten things out. You can find websites for these dark matter experiments, and also conferences, here:
6) Frederic Mayet, Dark Matter Portal, http://isnwww.in2p3.fr/ams/fred/dm.html
Finally, here are some things to read if you want to learn more. First, some general introductions to cosmology, in roughly increasing order of difficulty:
7) Edward R. Harrison, Cosmology, the Science of the Universe, Cambridge University Press, Cambridge, 1981.
8) M. Berry, Cosmology and Gravitation, Adam Hilger, Bristol, 1986.
9) John A. Peacock, Cosmological Physics, Cambridge University Press, Cambridge, 1999.
Second, a nice easy review article on dark matter:
10) Shaaban Khalil and Carlos Munoz, The enigma of the dark matter, to appear in Contemp. Phys., also available as hep-ph/0110122.
Third, two articles surveying candidates for what dark matter might be: neutralinos, axions, axinos, gravitinos, MACHOs, you name it!
11) Leszek Roszkowski, Non-baryonic dark matter, available as hep-ph/0102327.
12) B. J. Carr, Recent developments in the search for baryonic dark matter, available as astro-ph/0102389.
Okay, now on to something more mathematical....
I've been having fun lately learning about "teleparallel" theories of gravity from Simon Clark, Chris Hillman and Stephen Speicher on sci.physics.research. This is a good introduction:
13) V. C. de Andrade, L. C. T. Guillen and J. G. Pereira, Teleparallel gravity: an overview, available as gr-qc/0011087.
In ordinary general relativity, you describe the gravitational field using a "metric": a field that lets you measure times, distances and angles. In teleparallel gravity, you instead use a field that allows you to decide whether two vectors at two points of spacetime are "the same". This notion of unambiguously comparing vectors at different points of spacetime is called "distant parallelism", hence the term "teleparallel".
At first the idea of distant parallelism seems antithetical to general relativity. After all, in the usual formalism of general relativity, you can only compare vectors at different points of a curved spacetime after you pick a path from one to the other! The wonderful thing is that you can formulate theories of teleparallel gravity that are equivalent to general relativity for all practical purposes. The philosophy is completely different: for example, in general relativity you shouldn't think of gravity as a "force" that "accelerates" particles, but in teleparallel gravity you can. However, the physical predictions are the same for a huge class of situations.
Here's a sketch of how it works. I'm afraid I'll have to turn on the differential geometry now.
It's easiest to start with the so-called Palatini formulation of general relativity. Here we take spacetime to be an orientable smooth 4-manifold M and pick a vector bundle T that is isomorphic to the tangent bundle TM. We equip T with a Lorentzian metric and orientation. A good name for T would be the "fake tangent bundle", but physicists usually call its fiber the "internal space". The trick is then to describe a Lorentzian metric on M by means of a vector bundle map
e: TM → T

which we call a "coframe field". We can use this to pull the metric on T back to the tangent bundle. If e is an isomorphism, this gives a Lorentzian metric on M. If it's not, we get something like a metric, but with degenerate directions. You can think of the Palatini formulation as extending general relativity to allow such "degenerate metrics", and this becomes really important in quantum gravity, but for now let's only consider the case where e is an isomorphism.
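In index notation, pulling the internal metric η_{ab} back along the coframe gives the usual tetrad formula for the metric on M:

```latex
g_{\mu\nu} = e^{a}{}_{\mu}\, e^{b}{}_{\nu}\, \eta_{ab}
```

This is nondegenerate exactly when the matrix e^{a}{}_{μ} is invertible, that is, when e is an isomorphism.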
The coframe field is one of the two basic fields in the Palatini formulation. The other is a metric-compatible connection on T. This connection is usually denoted A and called a "Lorentz connection". Its curvature is denoted F.
The Lagrangian for the Palatini formulation of general relativity looks like this:
tr(e ^ e ^ *F)

This takes a bit of explaining! First of all, the curvature F is an End(T)-valued 2-form, but using the metric on T we get an isomorphism between T and its dual, so we can also think of the curvature as a 2-form taking values in T tensor T. However, if we do this, the fact that A is metric-compatible means that F is skew-symmetric: it takes values in the second exterior power of T, Λ^{2}(T).
Since T has a metric and orientation, we can define a Hodge star operator on the exterior algebra Λ(T) just as we normally do for differential forms on a manifold with metric and orientation. We call this the "internal" Hodge star operator. Using this we can define *F, which is again a 2-form taking values in Λ^{2}(T).
Whew! It takes some work making sense of that terse formula above! We're not done yet, either. Of course, all these verbal descriptions can be avoided by writing down formulas packed with indices. That's what working physicists do. And when they've got two different vector bundles around, like T and the tangent bundle TM, they use two different fonts for their indices: for example, Latin letters for the "internal indices" associated to T, and Greek letters for the "spacetime indices" associated to TM. Once you get used to this, it's really efficient. It's only mathematicians who would rather read a paragraph of complicated verbiage than a fancy equation. The equation helps you compute, but the verbiage helps you understand, at least if you follow it! If you don't know enough geometry, the verbiage probably seems more confusing than helpful.
Okay. Next, note that the coframe field e can be thought of as a T-valued 1-form. This allows us to define the wedge product e ^ e as a Λ^{2}(T)-valued 2-form. Note that this is the same sort of gadget as the curvature F and its internal Hodge dual *F. This means we can take the wedge product of the differential form parts of e ^ e and *F while using the metric on T to pair together their Λ^{2}(T) parts and get a number. The result is a plain old 4-form, which we call tr(e ^ e ^ *F). This is our Lagrangian!
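Written with internal indices, and using the fact that the internal Hodge star on Λ^{2}(T) is just contraction with the internal volume form ε_{abcd}, the Lagrangian becomes (up to a constant factor, depending on conventions) the familiar Palatini expression:

```latex
\mathrm{tr}(e \wedge e \wedge *F) \;=\; \tfrac{1}{2}\,\epsilon_{abcd}\; e^{a} \wedge e^{b} \wedge F^{cd}
```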
If you work out the equations of motion coming from this Lagrangian, they say that A pulls back via e to a torsion-free metric-compatible connection on the tangent bundle: the Levi-Civita connection! It follows that F pulls back to the curvature of the Levi-Civita connection: the Riemann tensor! Finally, it turns out that tr(e ^ e ^ *F) is just the Ricci scalar curvature times the volume form on M... so we were doing general relativity all along!
This may seem convoluted, but one advantage of this approach is that it describes gravity as a kind of gauge theory. From the viewpoint of field theory, the metric is a rather curious beast: it's a section of a bundle, but it's required to satisfy inequalities saying that it is nondegenerate and has a certain signature. Here we have tamed this beast, or at least locked it up safely inside the formalism of differential forms and connections. As a spin-off, we don't get those nasty factors of "the square root of the determinant of the metric" which plague the old-fashioned approach to general relativity. The reason is that the coframe field acts like a "square root" of the metric.
Physicists have spent a lot of time trying to recast gravity as a gauge theory. If you read old journals, you'll see endless arguments about what gauge group to use. It turns out there are a lot of right answers. The gauge group for the Palatini formulation of general relativity is the Lorentz group, but we can also cook up formulations where the gauge group is the Poincare group or the translation group R^{4}. I'd known about the Poincare group version (I'll explain that in a minute), but I hadn't known you could get away with using just the translation group! That's where teleparallel gravity comes in. It all fits together in a beautiful big picture....
The Poincare group is the semidirect product of the Lorentz group SO(3,1) and the translation group R^{4}. This means that a Poincare group connection can be written as a Lorentz group connection plus a part related to the translation group. We know the Palatini formalism involves a Lorentz connection. What about the other part? This is just the coframe field! To see this, note that since each fiber of T looks just like Minkowski spacetime, we can use T to create a principal bundle over M whose gauge group is the Poincare group. A connection on this principal bundle works out to be exactly the same as a Lorentz connection A together with a T-valued 1-form e.
So, without lifting a finger, we can reinterpret the Palatini formalism as a theory in which the only field is a Poincare group connection. Like the Poincare group itself, the curvature of this connection can be chopped into two pieces. The Lorentz group part is our old friend, the Λ^{2}(T)-valued 2-form
F = dA + A ^ A.

The translation group part is a T-valued 2-form:
t = de + A ^ e.

Using e: TM → T we can pull all this stuff back to the tangent bundle, where its meaning becomes evident. The metric on T pulls back to a metric on the tangent bundle, A pulls back to a metric-compatible connection on the tangent bundle, F pulls back to the curvature of this connection, and t pulls back to the torsion of this connection! As already hinted, one of the equations of motion says that t vanishes, so A really pulls back to a torsion-free metric-compatible connection: the Levi-Civita connection.
Finally, let's see how to get rid of the Lorentz connection A and formulate gravity using just the coframe field e, which we'll interpret as a translation group connection. It seems the teleparallel gravity crowd only knows how to pull this stunt when the tangent bundle of M is trivializable. But this is not as bad as it sounds: every orientable 3-manifold S has a trivializable tangent bundle, so the same is true of every orientable 4-manifold of the form R x S.
So: suppose M is a 4-manifold with trivializable tangent bundle. This means we can take T to be the trivial bundle M x R^{4}. The usual Minkowski metric on R^{4} puts a Lorentzian metric on T, and the trivialization gives this bundle a flat metric-compatible connection A.
We've seen a connection like this A before, but this time it won't be one of the dynamical fields in our theory: it'll be a "fixed background structure", cast in iron. It's so boring it looks just like "0" when we do calculations using our trivialization of T, but I prefer to give a name to it nonetheless.
The only dynamical field in teleparallel gravity is the coframe field e. We can think of this as a T-valued 1-form, or if you prefer, a "translation group connection": a connection on the bundle T regarded as a principal bundle with gauge group R^{4}. The curvature of this connection is a T-valued 2-form which we'll again call t. As before we have
t = de + A ^ e

but using our trivialization of T this formula boils down to
t = de.

As before, we can use e to pull stuff from T back to the tangent bundle TM. The metric on T pulls back to a metric on TM, the connection A pulls back to a metric-compatible connection W on TM, and t pulls back to a TM-valued 2-form which is just the torsion of W. In this setup there's no reason for t to vanish, so the connection W will have torsion. On the other hand, A has no curvature, so neither will W.
Folks call W the "Weitzenboeck connection". Of course when e is an isomorphism there's another connection on TM, too: the Levi-Civita connection, L. Both of these are metric-compatible, but they're very different. The Weitzenboeck connection has torsion but no curvature; the Levi-Civita connection has curvature but no torsion!
Andrade and company give a nice explanation for what's going on here.
According to general relativity, curvature is used to geometrize spacetime, and in this way successfully describe the gravitational interaction. Teleparallelism, on the other hand, attributes gravitation to torsion, but in this case torsion accounts for gravitation not by geometrizing the interaction, but by acting as a force. This means that, in the teleparallel equivalent of general relativity, there are no geodesics, but force equations quite analogous to the Lorentz force equation of electrodynamics. Thus, we can say that the gravitational interaction can be described alternatively in terms of curvature, as is usually done in general relativity, or in terms of torsion, in which case we have the so-called teleparallel gravity. Whether gravity requires a curved or torsioned spacetime, therefore, turns out to be a matter of convention.

The difference of the Weitzenboeck and Levi-Civita connections,

K = W - L,

goes by the charming name of the "contorsion", since it says how much the coframe field twists around as measured by the Levi-Civita connection.
The review article by Andrade et al gives a nice formula for the contorsion in terms of the torsion of the Weitzenboeck connection. This means we can express the Levi-Civita connection completely in terms of the Weitzenboeck connection and its torsion. And that means we can express the Ricci scalar curvature in terms of the Weitzenboeck connection and its torsion. Great: so we can write down the Lagrangian for general relativity in this new lingo! Ultimately, we can express it purely in terms of the coframe field e.
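For the record, in one common sign convention (conventions for the sign and index order of the torsion vary from paper to paper), the contorsion is expressed in terms of the torsion T of the Weitzenboeck connection as:

```latex
K^{\rho}{}_{\mu\nu} = \tfrac{1}{2}\left( T_{\mu}{}^{\rho}{}_{\nu} + T_{\nu}{}^{\rho}{}_{\mu} - T^{\rho}{}_{\mu\nu} \right)
```

with indices raised and lowered using the metric pulled back along e.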
Unfortunately, I haven't smoothed down the calculations to the point where you'd actually want to see them here. The prettiest formula for the Lagrangian shows up in this paper:
14) Yakov Itin, Energy-momentum current for coframe gravity, available as gr-qc/0111036.
Up to a constant factor, it looks like this:
2(e^{i} ^ de_{i}) ^ *(e^{j} ^ de_{j}) - (e^{i} ^ de^{j}) ^ *(e_{i} ^ de_{j})

where i and j are internal indices, but * is the usual "spacetime" Hodge star operator.
By now I've probably lost everyone except people who understand this stuff already, so I'll stop here. If you read the references, you'll find a nice equation for how a freely falling particle moves in teleparallel gravity, a nice formula for the gravitational energy-momentum pseudotensor in teleparallel gravity, and so on. Itin's paper also considers versions of teleparallel gravity with more general Lagrangians built from the coframe field, which are not necessarily equivalent to general relativity.
Now for something completely different! Here's the final episode of my description of this paper by Michael Mueger:
15) From subfactors to categories and topology I: Frobenius algebras in and Morita equivalence of tensor categories, available as math.CT/0111204.
In "week174" I talked about Frobenius algebras and 2-categories; in "week175" I said a bit about subfactors; now it's time for me to say something about how Mueger puts these together! This will be very sketchy, I'm afraid.
First, it's worth noting that lots of mathematical gadgets form not just categories but also 2-categories. For example, we all know the category of groups, where the objects are groups and the morphisms are homomorphisms. But there is also a 2-category lurking around here! Between any morphisms
f,f': G → H

we can define a 2-morphism
a: f => f'

to be an element of H with the property that
af(g) = f'(g)a for all g in G.

This just says that f' is f conjugated by an element of H, so we could call these 2-morphisms "conjugations".
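The defining equation is easy to check in a concrete case. Here is a toy example of my own (not from Mueger's paper), using S3, the group of permutations of {0,1,2}, as both G and H:

```python
from itertools import permutations

# Check the 2-morphism condition a f(g) = f'(g) a in the group S3,
# with permutations represented as tuples.

def compose(p, q):
    # composite permutation: (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

def f(g):
    return g  # the identity homomorphism G -> H

a = (1, 0, 2)  # a transposition: our candidate 2-morphism

def f_prime(g):
    return compose(compose(a, g), inverse(a))  # f conjugated by a

# the defining equation of a conjugation 2-morphism, for every g:
assert all(compose(a, f(g)) == compose(f_prime(g), a) for g in S3)
print("a: f => f' is a valid conjugation 2-morphism")
```

Here f' is f conjugated by a, so the equation holds automatically: f'(g) composed with a equals a composed with g, which is a composed with f(g).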
This definition may seem forced, but it's actually quite natural if you remember that a group is a special sort of category with one object and with all morphisms being invertible. Functors between these special categories are just group homomorphisms, and natural transformations between these functors are just conjugations! If you don't follow this, check out "week73", where you'll see the above equation is just a special case of the definition of "natural transformation".
For fans of group theory, one nice thing about this 2-category is that it explains where "inner automorphisms" fit into the grand n-categorical scheme of things. It also explains why conjugations become important in algebraic topology when you're playing around with the "fundamental group": this is actually a 2-functor from the 2-category of
spaces with basepoint, basepoint-preserving maps, and not-necessarily-basepoint-preserving homotopies

to the 2-category of
groups, group homomorphisms, and conjugations.

We can also cook up a 2-category of rings which works in a similar way; the objects are rings, the morphisms are ring homomorphisms, and the 2-morphisms are conjugations, defined by the same formula as above.
Mueger's work uses this 2-category, or more precisely, a sub-2-category where we use not all rings, but only certain specially nice type III factors, and not all homomorphisms, but only certain specially nice *-homomorphisms. He gives a nice simple condition for a morphism in this 2-category to have a "two-sided adjoint", meaning precisely that it's part of what I called an "ambidextrous adjunction" in "week174". And as we saw back then, any ambidextrous adjunction gives a Frobenius object! So, he gets lots of Frobenius objects from the theory of factors. But more importantly, he shows that a whole lot of concepts beloved by folks who study von Neumann algebras are really concepts from 2-category theory, applied to this situation!
This is cool, because there are already deep connections between n-categories and quantum theory; see "week78" for an introduction to these ideas. Since von Neumann algebras are the basic "algebras of observables" in quantum theory, we should expect them to be deeply n-categorical in nature. And now, thanks to the work of Mueger, it's becoming a lot clearer just how. But I don't think we're anywhere near the bottom of it yet; at least, not me!
By the way, it's taken me so long to explain Mueger's last paper that he's already written another:
16) Michael Mueger, On the structure of modular categories, available as math.CT/0201017.
© 2002 John Baez
baez@math.removethis.ucr.andthis.edu
