While for the most part a FAQ covers the answers to frequently asked questions whose answers are known, in physics there are also plenty of simple and interesting questions whose answers are not known. Here we list some of these. We could have called this article Frequently Unanswered Questions, but the resulting acronym would have been rather rude.
Before you set about answering these questions on your own, it's worth noting that while nobody knows what the answers are, a great deal of work has already been done on most of these subjects. So, do plenty of research and ask around before you try to cook up a theory that'll answer one of these and win you the Nobel prize! You'll probably need to really know physics inside and out before you make any progress on these.
The following partial list of open questions is divided into five groups:
There are many other interesting and fundamental questions in other fields, and many more in these fields besides those listed here. Their omission is not a judgement about importance, but merely a decision about the scope of this article.
Since this article was last updated in 1997, a lot of progress has been made in answering some big open questions in physics. We include references on some of these questions. There is also a lot to read about the other open questions - especially the last one, which we call The Big Question. But, we haven't had the energy to list it all.
For more details, try this:
- David Knapp, Sonoluminescence: an Introduction.
To learn more about superconductivity, see this webpage and its many links:
This is more of a question of mathematical physics than physics per se - but it's related to the previous question, since (one might argue) how can we deeply understand turbulence if we don't even know that the equations for fluid motion have solutions? At the turn of the millennium, the Clay Mathematics Institute offered a $1,000,000 prize for solving this problem. For details, see:
- Clay Mathematics Institute, Navier-Stokes Equation.
Many physicists think these issues are settled, at least for most practical purposes. However, some still think the last word has not been heard. Asking about this topic in a roomful of physicists is the best way to start an argument, unless they all say "Oh no, not that again!". There are many books to read on this subject, but most of them disagree.
This question is to some extent bound up with the previous one, but it also has a strong engineering aspect to it. Some physicists think quantum computers are impossible in principle; more think they are possible in principle, but are still unsure if they will ever be practical.
Here are some ways to learn more about quantum computation:
- Centre for Quantum Computation homepage.
- John Preskill, course notes on quantum computation.
- Michael A. Nielsen and Isaac L. Chuang, Quantum Computation and Quantum Information, Cambridge U. Press, Cambridge, 2000. Errata, table of contents and Chapter 1 available here.
We still don't know, but in 2003 some important work was done on this issue:
- Neil J. Cornish, David N. Spergel, Glenn D. Starkman and Eiichiro Komatsu, Constraining the Topology of the Universe.
Briefly, the Wilkinson Microwave Anisotropy Probe (WMAP) was used to rule out nontrivial topology within a distance of 24 billion parsecs - at least for a large class of models.
For more details, you should read the article. But here's one question that naturally comes to mind. 24 billion parsecs is about 78 billion light years. But since the universe is only about 14 billion years old, it's commonly said that the radius of the observable universe is 14 billion light years. So, how is the above paper making claims about even larger distances?
The reason is that the universe is expanding! If we look at the very farthest objects we can see and ask how far from us they are now, the answer is about 78 billion light years.
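Here's a quick sanity check on that figure, using the standard conversion of about 3.26 light years per parsec:

```python
# Convert the WMAP topology bound from parsecs to light years.
PC_TO_LY = 3.2616           # light years per parsec (standard conversion)

distance_pc = 24e9          # 24 billion parsecs
distance_ly = distance_pc * PC_TO_LY

print(f"{distance_ly / 1e9:.0f} billion light years")  # about 78 billion
```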
This sort of thing means one should be careful when talking about extremely large distances. Astronomers usually speak not of distances but of redshifts, which are what we directly measure.
Here are two pieces of required reading for anyone interested in this tough question:
- Huw Price, Time's Arrow and Archimedes' Point: New Directions for a Physics of Time, Oxford University Press, Oxford, 1996.
- H. D. Zeh, The Physical Basis of the Direction of Time, second edition, Springer Verlag, Berlin, 1992.
There's been some progress on this one recently. Starting in the late 1990s, a bunch of evidence has accumulated suggesting that the universe is not slowing down enough to recollapse in a so-called "big crunch". In fact, it seems that some form of "dark energy" is making the expansion speed up! We know very little about dark energy; it's really just a name for any invisible stuff that has enough negative pressure compared to its energy density that it tends to make the expansion of the universe accelerate, rather than slow down. (In general relativity, energy density tends to make the expansion slow down, but negative pressure has the opposite effect.)
Einstein introduced dark energy to physics under the name of "the cosmological constant" when he was trying to explain how a static universe could fail to collapse. This constant simply said what the density of dark energy was supposed to be, without providing any explanation for its origin. When Hubble observed the redshift of light from distant galaxies, and people concluded the universe was expanding, the idea of a cosmological constant fell out of fashion and Einstein called it his "greatest blunder". But now that the expansion of the universe seems to be accelerating, a cosmological constant or some other form of dark energy seems plausible.
For an examination of what an ever-accelerating expansion might mean for our universe, see:
- John Baez, The End of the Universe.

But, we still can't be sure the universe will expand forever, because the possibility remains that at some point the dark energy will go away, switch sign, or get bigger! Here's a respectable paper suggesting that the dark energy will change sign and make the universe recollapse in a big crunch:

- Renata Kallosh and Andrei Linde, Dark Energy and the Fate of the Universe.
But here's a respectable paper suggesting the exact opposite: that the dark energy will get bigger and tear apart the universe in a "big rip":
- Robert R. Caldwell, Marc Kamionkowski, and Nevin N. Weinberg, Phantom Energy and Cosmic Doomsday.
In short, the ultimate fate of the universe remains an open question!
But, before you launch into wild speculations, it's worth emphasizing that the late 1990s and early 2000s have seen a real revolution in experimental cosmology, which answered many open questions (for example: "how long ago was the Big Bang?") in shockingly precise ways (about 13.7 billion years). For a good introduction to this material, try:
Our evidence concerning the expansion of the universe, dark energy, and dark matter now comes from a wide variety of sources, and what makes us confident we're on the right track is how nicely all this data agrees. People are getting this data from various sources including:
- Distant Supernovae. See especially these two experimental groups:
- The High-z Supernova Search Team.
See also their big paper.
- The Supernova Cosmology Project.
See also their big paper.
- The Cosmic Microwave Background (CMB). There have been lots of great experiments measuring little ripples in the background radiation left over from the Big Bang. For example:
- Balloon Observations of Millimetric Extragalactic Radiation and Geophysics (BOOMERanG)
- The Millimeter Anisotropy Experiment Imaging Array (MAXIMA)
- The Wilkinson Microwave Anisotropy Probe (WMAP).
- The Degree Angular Scale Interferometer (DASI).
- The Cosmic Background Imager (CBI).
- Large-Scale Structure. Detailed studies of galactic clustering and how it changes with time give us lots of information about dark matter. Here's the 800-pound gorilla in this field:
As mentioned above, evidence has been coming in that suggests the universe is full of some sort of "dark energy" with negative pressure. For example, an analysis of data from the Wilkinson Microwave Anisotropy Probe in 2003 suggested that 73% of the energy density of the universe is in this form! But even if this is right and dark energy exists, we're still in the dark about what it is.
The simplest model is a cosmological constant, meaning that so-called "empty" space actually has a negative pressure and positive energy density, with the pressure exactly equal to minus the energy density in units where the speed of light is 1. However, nobody has had much luck explaining why empty space should be like this, especially with an energy density as small as what we seem to be observing: about 6 x 10^-30 grams per cubic centimeter if we use Einstein's E = mc^2 to convert it into a mass density. Other widely studied possibilities for dark energy include various forms of "quintessence". But, this term means little more than "some mysterious field with negative pressure", and there's little understanding of why such a field should exist.
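As a rough plausibility check on that tiny number, one can compute the critical density of the universe from the Hubble constant and take the dark-energy fraction of it. Here's a sketch, assuming WMAP-era values of roughly 71 km/s/Mpc for the Hubble constant and a 73% dark-energy share:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22          # meters per megaparsec

H0 = 71e3 / MPC          # Hubble constant, ~71 km/s/Mpc, in 1/s (assumed value)

# Critical density: rho_c = 3 H0^2 / (8 pi G), in kg/m^3
rho_crit = 3 * H0**2 / (8 * math.pi * G)

# Dark energy at ~73% of critical, converted from kg/m^3 to g/cm^3
rho_de = 0.73 * rho_crit * 1e3 / 1e6

print(f"{rho_de:.1e} g/cm^3")  # about 7 x 10^-30, the same order as the figure quoted above
```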
For more details, try these:

- Ned Wright, Vacuum Energy Density, or: How Can Nothing Weigh Something?
- John Baez, What's the Energy Density of the Vacuum?
- Sean Carroll, The Cosmological Constant.

The third is the most detailed, and it has lots of good references for further study.
Since the late 1990s, a consensus has emerged that some sort of "cold dark matter" is needed to explain all sorts of things we see. For example, in 2003 an analysis of data from the Wilkinson Microwave Anisotropy Probe suggested that the energy density of the universe consists of about 23% cold dark matter, as compared to only 4% ordinary matter. (The rest is dark energy.)
Unfortunately nobody knows what this cold dark matter is! It probably can't be ordinary matter we've neglected, or neutrinos, since these wouldn't have been sufficiently "cold" in the early universe to collapse into the lumps needed for galaxy formation. There are many theories about what it might be. There's also still a possibility that we are very confused about something, like our theory of gravity.
For details, try these:
The second of these is the most detailed, and it has lots of references for further study.
In 2003 the case for inflation was bolstered by the Wilkinson Microwave Anisotropy Probe, which made detailed measurements of "anisotropies" (slight deviations from perfect evenness) in the cosmic microwave background radiation. The resulting "cosmic microwave background power spectrum" shows peaks and troughs whose precise features should be sensitive to many details of the very early history of the Universe. Models that include inflation seem to fit this data very well, while those that don't, don't.
However, the mechanism behind inflation remains somewhat mysterious. Inflation can be nicely explained using quantum field theory by positing the existence of a special particle called the "inflaton", which gave rise to extremely high negative pressure before it decayed into other particles. This may sound wacky, but it's really not. The only problem is that nobody has any idea how this particle fits into known physics. For example, it's not part of the Standard Model.
For details, try:
As of 2004 this was quite a hot topic in astrophysics. See for example:
- Volker Bromm and Richard B. Larson, The First Stars.
Gamma ray bursters (GRBs) appear as bursts of gamma rays coming from points randomly scattered in the sky. These bursts are very brief, lasting from a few milliseconds to a few hundred seconds. For a long time there were hundreds of theories about what caused them, but very little evidence for any of these theories, since nothing was ever seen at the location where one of these bursts occurred. Their random distribution eventually made a convincing case that they occurred not within our solar system or within our galaxy, but much farther away. Given this, it was clear that they must be extraordinarily powerful.
Starting in the late 1990s, astronomers made a concerted effort to catch gamma ray bursters in the act, focusing powerful telescopes to observe them in the visible and ultraviolet spectrum moments after a burst was detected. These efforts paid off in 1999 when one was seen to emit visible light for as long as a day after the burst occurred. A redshift measurement of z = 1.6 indicated that the gamma ray burster was about 10 billion light years away. If the burst of gamma rays was omnidirectional, this would mean that its power was about 10^16 times that of our sun - for a very short time. For details on this discovery, see:
- Burst and Transient Source Experiment (BATSE), GOTCHA! The Big One That Didn't Get Away, Gamma Ray Burst Headlines, January 27, 1999.
A more detailed observation of a burst on March 29, 2003 convinced many astrophysicists that at least some gamma-ray bursters are so-called "hypernovae". A hypernova is an exceptionally large supernova formed by the nearly instantaneous collapse of the core of a very large star, at least 10 times the mass of the sun, which has already blown off most of its hydrogen. Such stars are called Wolf-Rayet stars. The collapse of such a star need not be spherically symmetric, so the gamma ray burst could be directional, reducing the total power needed to explain the brightness we see here (if the burst happened to point towards us). For more, try:
- European Southern Observatory (ESO), Cosmological Gamma-Ray Bursts and Hypernovae Conclusively Linked, Press Release, June 18, 2003.
It's hard to resist quoting the theory described here:
Here is the complete story about GRB 030329, as the astronomers now read it.
Thousands of years prior to this explosion, a very massive star, running out of hydrogen fuel, let loose much of its outer envelope, transforming itself into a bluish Wolf-Rayet star. The remains of the star contained about 10 solar masses worth of helium, oxygen and heavier elements.
In the years before the explosion, the Wolf-Rayet star rapidly depleted its remaining fuel. At some moment, this suddenly triggered the hypernova/gamma-ray burst event. The core collapsed, without the outer part of the star knowing. A black hole formed inside, surrounded by a disk of accreting matter. Within a few seconds, a jet of matter was launched away from that black hole.
The jet passed through the outer shell of the star and, in conjunction with vigorous winds of newly formed radioactive nickel-56 blowing off the disk inside, shattered the star. This shattering, the hypernova, shines brightly because of the presence of nickel. Meanwhile, the jet plowed into material in the vicinity of the star, and created the gamma-ray burst which was recorded some 2,650 million years later by the astronomers on Earth. The detailed mechanism for the production of gamma rays is still a matter of debate but it is either linked to interactions between the jet and matter previously ejected from the star, or to internal collisions inside the jet itself.
This scenario represents the "collapsar" model, introduced by American astronomer Stan Woosley (University of California, Santa Cruz) in 1993 and a member of the current team, and best explains the observations of GRB 030329.
"This does not mean that the gamma-ray burst mystery is now solved", says Woosley. "We are confident now that long bursts involve a core collapse and a hypernova, likely creating a black hole. We have convinced most skeptics. We cannot reach any conclusion yet, however, on what causes the short gamma-ray bursts, those under two seconds long."
Indeed, there seem to be at least two kinds of gamma-ray bursters, the "long" and "short" ones. Nobody has caught the short ones in time to see their afterglows, so they are more mysterious. For more information, try these:
- NASA, Gamma Ray Bursts.
- Edo Berger, Gamma-ray Burst FAQ.
- Wikipedia, Gamma Ray Bursters.
- Peter Mészáros, Gamma-ray Burst Physics.
At the time this was written, NASA was scheduled to launch a satellite called "Swift", specially devoted to gamma-ray burst detection, in September 2004. For details, see:
Cosmic rays are high-energy particles, mainly protons and alpha particles, which come from outer space and hit the Earth's atmosphere producing a shower of other particles. Most of these are believed to have picked up their energy by interacting with shock waves in the interstellar medium. But, the highest-energy ones remain mysterious - nobody knows how they could have acquired such high energies.
The record is a 1994 event detected by the Fly's Eye in Utah, which recorded a shower of particles produced by a cosmic ray of about 300 EeV. A similar event has been detected by the Japanese scintillation array AGASA. An EeV is an "exa-electron-volt", which is the energy an electron picks up going through a potential of 10^18 volts. 300 EeV is about 50 joules - the energy of a one-kilogram mass moving at 10 meters/second, presumably all packed into one particle!
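The arithmetic behind that comparison is easy to check:

```python
EV_TO_J = 1.602e-19              # joules per electron volt

energy_J = 300e18 * EV_TO_J      # 300 EeV = 3 x 10^20 eV, about 48 joules

# Kinetic energy of a 1 kg mass moving at 10 m/s: (1/2) m v^2
ke = 0.5 * 1.0 * 10.0**2         # 50 joules

print(energy_J, ke)
```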
Nobody knows how such high energies are attained - perhaps as a side-effect of the shock made by a supernova or gamma-ray burster? The puzzle is especially acute because particles with energies like these are expected to interact with the cosmic microwave background radiation and lose energy after travelling only moderate extragalactic distances, say 30 megaparsecs. This effect is called the Greisen-Zatsepin-Kuz'min (or GZK) cutoff. So, either our understanding of the GZK cutoff is mistaken, or ultra-high-energy cosmic rays come from relatively nearby - in cosmological terms, that is.
Right now the data is confusing, because two major experiments on ultra-high-energy cosmic rays have yielded conflicting results. The Fly's Eye seems to see a sharp dropoff in the number of cosmic rays above 100 EeV, while the AGASA detector does not. People hope that the Pierre Auger cosmic ray observatory, being built in western Argentina, will settle the question.
For more, try these:
- HiRes - the High Resolution Fly's Eye experiment.
- AGASA - the Akeno Giant Air Shower Array.
- Pierre Auger Observatory.
- A. A. Watson, Observations of Ultra-High Energy Cosmic Rays.
- D. J. Bird et al, Detection of a Cosmic Ray with Measured Energy Well Beyond the Expected Spectral Cutoff due to Cosmic Microwave Radiation.
The Pioneer 10 and Pioneer 11 spacecraft are leaving the Solar System. Pioneer 10 sent back radio information about its location until January 2003, when it was about 80 times farther from the Sun than the Earth is. Pioneer 11 sent back signals until September 1995, when its distance from the Sun was about 45 times the Earth's.
The Pioneer missions have yielded the most precise information we have about navigation in deep space. However, analysis of their radio tracking data indicates a small unexplained acceleration towards the Sun! The magnitude of this acceleration is roughly 10^-9 meters per second per second. This is known as the "Pioneer anomaly".
This anomaly has also been seen in the Ulysses spacecraft, and possibly also in the Galileo spacecraft, though the data is much more noisy, since these were Jupiter probes, hence much closer to the Sun, where there is a lot more pressure from solar radiation. The Viking mission to Mars did not detect the Pioneer anomaly - and it would have, had an acceleration of this magnitude been present, because its radio tracking was accurate to about 12 meters.
Many physicists and astronomers have tried to explain the Pioneer anomaly using conventional physics, but so far nobody seems to have succeeded. There are many proposals that try to explain the anomaly using new physics - in particular, modified theories of gravity. But there is no consensus that any of these explanations are right, either. For example, explaining the Pioneer anomaly using dark matter would require more than .0003 solar masses of dark matter within 50 astronomical units of the Sun (an astronomical unit is roughly the distance between the Sun and the Earth). But, this is in conflict with our calculations of planetary orbits.
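Here's a sketch of where a figure like 0.0003 solar masses comes from, assuming the anomalous acceleration is about 8.7 x 10^-10 m/s^2 (a commonly quoted value) and attributing it entirely to Newtonian gravity from unseen matter inside the spacecraft's orbit:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11        # meters per astronomical unit
M_SUN = 1.989e30     # solar mass in kg

a_anom = 8.7e-10     # anomalous sunward acceleration, m/s^2 (assumed value)
r = 50 * AU          # distance from the Sun

# Newtonian gravity: a = G M / r^2, so the mass needed is M = a r^2 / G
M = a_anom * r**2 / G

print(f"{M / M_SUN:.1e} solar masses")  # a few times 10^-4
```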
For more information, see:
Perhaps the most ambitious physics experiments of our age are the attempts to detect gravitational waves. Right now the largest detector is LIGO - the Laser Interferometer Gravitational-Wave Observatory. This consists of two facilities: one in Livingston, Louisiana, and one in Hanford, Washington. Each facility consists of laser beams bouncing back and forth along two 4-kilometer-long tubes arranged in an L shape. As a gravitational wave passes by, the tubes should alternately stretch and squash - very slightly, but hopefully enough to be detected via changing interference patterns in the laser beam.
LIGO is coming into operation in stages. The first stage, called LIGO I, is supposed to allow detection of gravitational waves made by binary neutron stars within 20 megaparsecs of us. These binaries emit lots of gravitational radiation, spiral into each other, and eventually merge. In the last few minutes of this process you've got two objects heavier than the sun whipping around each other about 100 times a second, faster and faster, and they should emit a "chirp" of gravitational waves increasing in amplitude and frequency until the final merger. It's these "chirps" that LIGO is optimized for detecting. Later, in LIGO II, they'll try to boost the sensitivity to allow detection of inspiralling binary neutron stars within 300 megaparsecs of us.
To give you an idea of what these distances are like: the radius of the Milky Way is about 15 kiloparsecs. The distance to the Andromeda galaxy is about 700 kiloparsecs. The radius of the "Local Group" consisting of three dozen nearby galaxies is about 2 megaparsecs. The distance to the "Virgo Cluster", the nearest large cluster of galaxies, is about 15 megaparsecs. The radius of the observable universe is roughly 3000 megaparsecs. So, if everything works as planned, we'll be able to see quite far with gravitational waves.
However, binary neutron stars don't merge very often! The current best guess is that with LIGO I we will be able to see such an event somewhere between once every 3000 years and once every 3 years. I know, that's not a very precise estimate! Luckily, the volume of space we survey grows as the cube of the distance we can see out to, so LIGO II should see between 1 and 1000 events per year.
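The cube-law scaling behind those numbers is simple to check:

```python
range_ligo1 = 20.0    # megaparsecs
range_ligo2 = 300.0   # megaparsecs

# The surveyed volume grows as the cube of the range:
volume_ratio = (range_ligo2 / range_ligo1) ** 3   # 3375

# Scale the LIGO I rate estimates (one event per 3000 years up to
# one event per 3 years):
low_rate = volume_ratio / 3000    # about 1 event per year
high_rate = volume_ratio / 3      # about 1000 events per year

print(low_rate, high_rate)
```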
The really scary thing is how good LIGO needs to be to work as planned. Roughly speaking, LIGO I aims to detect gravitational waves that distort distances by about 1 part in 10^21. Since the laser bounces back and forth between the mirrors about 50 times, the effective length of the detector is 200 kilometers. Multiply this by 10^-21 and you get 2 x 10^-16 meters. By comparison, the radius of a proton is 8 x 10^-16 meters! So, we're talking about measuring distances to within a quarter of a proton radius! And that's just LIGO I. LIGO II aims to detect waves that distort distances by a mere 2 parts in 10^23, so it needs to do 50 times better.
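The arithmetic here is worth spelling out:

```python
arm_length = 4e3          # meters
bounces = 50              # laser round trips between the mirrors
strain_ligo1 = 1e-21      # fractional distortion LIGO I aims to detect

effective_length = arm_length * bounces           # 200 km
displacement = strain_ligo1 * effective_length    # 2 x 10^-16 m

proton_radius = 8e-16     # meters, the figure quoted above
print(displacement / proton_radius)               # 0.25
```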
Actually all this is a bit misleading. The goal is not really to measure distances, but rather to measure vibrations at a given frequency. However, it will still be an amazing feat... if it works.
Getting LIGO to work has been a heroic endeavor: so far two earthquakes have caused damage to the equipment, and problems from tree logging in Livingston to wind-blown tumbleweeds in Hanford have made life more difficult than expected. To keep up with the latest news, try the "LIGO Web Newsletter" here:
- Laser Interferometer Gravitational Wave Observatory (LIGO) home page.
LIGO is working in collaboration with the British/German GEO 600 detector in Hannover, Germany, a smaller detector that tests out lots of new technology. Other gravitational wave detectors include the French/Italian collaboration VIRGO, the Japanese TAMA 300 project, and ACIGA in Australia. For information on these and others try:
But, the coolest gravitational wave detector of all - if it gets funded and gets off the ground - will be LISA, the Laser Interferometric Space Antenna:
The idea is to orbit 3 satellites in an equilateral triangle with sides 5 million kilometers long, and constantly measure the distance between them to an accuracy of a tenth of an angstrom (10^-11 meters) using laser interferometry. The big distances would make it possible to detect gravitational waves with frequencies of .0001 to .1 hertz, much lower than the frequencies for which the ground-based detectors are optimized. The plan involves a really neat technical trick to keep the satellites from being pushed around by solar wind and the like: each satellite will have a free-falling metal cube floating inside it, and if the satellite gets pushed to one side relative to this mass, sensors will detect this and thrusters will push the satellite back on course.
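A tenth of an angstrom over 5 million kilometers corresponds to a strain of about 2 x 10^-21 - comparable to what the ground-based detectors aim for, but at much lower frequencies:

```python
arm = 5e9           # 5 million kilometers, in meters
precision = 1e-11   # a tenth of an angstrom, in meters

strain = precision / arm
print(f"{strain:.0e}")  # 2e-21
```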
For more details on what people hope to see with all these detectors, try this:
- Curt Cutler and Kip Thorne, An Overview of Gravitational-Wave Sources.
Proving the Cosmic Censorship Hypothesis is a matter of mathematical physics rather than physics per se, but doing so would increase our understanding of general relativity. There are actually at least two versions: Penrose formulated the "Strong Cosmic Censorship Conjecture" in 1986, and the "Weak Cosmic Censorship Hypothesis" in 1988. A fairly precise mathematical version of the former one states:

"Every maximal Hausdorff development of generic initial data for Einstein's equations, compact or asymptotically flat, is globally hyperbolic."

That's quite a mouthful, but roughly speaking, "globally hyperbolic" spacetimes are those for which causality is well-behaved, in the sense that there are no closed timelike curves or other pathologies. Thus this conjecture states that for generic initial conditions, Einstein's equations lead to a spacetime in which causality is well-behaved.
The conjecture has not been proved, but there are a lot of interesting partial results so far. For a nice review of this work see:
- Piotr Chrusciel, On the uniqueness in the large of solutions of Einstein's equations ("Strong cosmic censorship"), in Mathematical Aspects of Classical Field Theory, Contemp. Math. 132, American Mathematical Society, 1992.
Besides the particles that carry forces (the photon, W and Z boson, and gluons), all elementary particles we have seen so far fit neatly into three "generations" of particles called leptons and quarks. The first generation consists of:
- the electron
- the electron neutrino
- the up quark
- the down quark
The second consists of:
- the muon
- the muon neutrino
- the charm quark
- the strange quark
and the third consists of:
- the tau
- the tau neutrino
- the top quark
- the bottom quark
How do we know there aren't more?
Ever since particle accelerators achieved the ability to create Z bosons in 1983, our best estimates on the number of generations have come from measuring the rate at which Z bosons decay into completely invisible stuff. The underlying assumption is that when this happens, the Z boson is decaying into a neutrino-antineutrino pair as predicted by the Standard Model. Each of the three known generations contains a neutrino which is very light. If this pattern holds up, the total rate of "decay into invisible stuff" should be proportional to the number of generations!
Experiments like this keep indicating there are three generations of this sort. So, most physicists feel sure there are exactly three generations of quarks and leptons. The question then becomes "why?" - and so far we haven't a clue!
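To see how the counting works, here's the arithmetic with approximate LEP-era numbers (both widths below are assumptions for illustration: the Standard Model predicts roughly 167 MeV of Z width per light neutrino species, and the measured invisible width is roughly 499 MeV):

```python
gamma_invisible = 499.0   # measured invisible width of the Z, in MeV (approximate)
gamma_per_nu = 167.2      # Standard Model width per light neutrino, in MeV (approximate)

n_generations = gamma_invisible / gamma_per_nu
print(f"{n_generations:.2f}")  # very close to 3
```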
For details see:
- D. Karlen, The number of neutrino types from collider experiments, revised August 2001.
Honesty compels us to point out a slight amount of wiggle room in the remarks above. Conservation of energy prevents the Z from decaying into a neutrino-antineutrino pair if the neutrino in question is of a sort that has more than half the mass of the Z. So, if there were a fourth generation with a very heavy neutrino, we couldn't detect it by studying the decay of Z bosons. However, all three known neutrinos have a mass less than 1/3000 times the Z mass, so a fourth neutrino would have to be much heavier than the rest to escape detection this way.
Another bit of wiggle room lurks in the phrase "decaying into a neutrino-antineutrino pair in the manner predicted by the Standard Model". If there were a fourth generation with a neutrino that didn't act like the other three, or no neutrino at all, we might not see it. However, in this case it would be stretching language a bit to speak of a "fourth generation", since the marvelous thing about the three known generations is how they're completely identical except for the values of certain constants like masses.
If you're familiar with particle physics, you'll know it goes much deeper than this: the Standard Model says every generation of particles has precisely the same mathematical structure except for some numbers that describe Higgs couplings. We don't know any reason for this structure, although the requirement of "anomaly cancellation" puts some limits on what it can be.
If you're not an expert on particle physics, perhaps these introductions to the Standard Model will help explain things:
The second is much more detailed and technical than the first.
The Standard Model predicts the existence of a spin-0 particle called the Higgs boson, which comes in two isospin states, one with charge +1 and one neutral. (It also predicts that this particle has an antiparticle.) According to the Standard Model, the interaction of the Higgs boson with the electroweak force is responsible for a "spontaneous symmetry breaking" process that makes this force act like two very different forces: the electromagnetic force and the weak force. Moreover, it is primarily the interaction of the Higgs boson with the other particles in the Standard Model that endows them with their masses! The Higgs boson is very mysterious, because in addition to doing all these important things, it stands alone, very different from all the other particles. For example, it is the only spin-0 particle in the Standard Model. To add to the mystery, it is the only particle in the Standard Model that has not yet been directly detected!
As of 2004, we are waiting in suspense for the Large Hadron Collider (LHC) to come into operation and pick up the search for the Higgs where the Large Electron Positron Collider (LEP) left off. Both are particle accelerators based at the CERN laboratory in Geneva, Switzerland. The LHC is being built in the tunnel formerly used by LEP, but will be more powerful. To make way for it, LEP was closed down in November 2000 shortly after seeing tantalizing traces of evidence for the Higgs at a mass of around 114.9 GeV/c^2.
It's also possible that the physicists at Fermilab in Chicago will beat the folks at CERN when it comes to finding - or not finding - the Higgs.
Some particle physicists hope that the Higgs boson, when seen, will work a bit differently than the Standard Model predicts. For example, some variants of the Standard Model predict more than one type of Higgs boson. LHC may also discover other new phenomena when it starts colliding particles at energies higher than ever before explored. For example, it could find evidence for supersymmetry, providing indirect support for superstring theory.
So, stay tuned. But meanwhile, try these:
Starting in the 1990s, our understanding of neutrinos has dramatically improved, and the puzzle of why we see about 1/3 as many electron neutrinos coming from the sun as naively expected has pretty much been answered: the different neutrinos can turn into each other via a process called "oscillation". But, there are still lots of loose ends. For details, try:
- The Neutrino Oscillation Industry.
- John Baez, Neutrinos and the Mysterious Maki-Nakagawa-Sakata Matrix.
- Paul Langacker, Implications of Neutrino Mass.
- Boris Kayser, Neutrino Mass: Where Do We Stand, and Where Are We Going?
The first of these has lots of links to the webpages of research groups doing experiments on neutrinos. It's indeed a big industry!
Most physicists believe the answers to all these questions are "yes". There are currently a number of experiments going on to produce and detect a quark-gluon plasma. It's believed that producing such a plasma at low pressures requires a temperature of about 2 trillion kelvin. Since this is roughly 100,000 times hotter than the core of the Sun, and such extreme temperatures were last prevalent in our Universe only about 1 microsecond after the Big Bang, these experiments are lots of fun. The largest, the Relativistic Heavy Ion Collider (RHIC) on Long Island, New York, began operation in 2000. It works by slamming gold nuclei together at outrageous speeds. For details, see:
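As a quick sanity check on the temperature comparison above (a back-of-the-envelope sketch using assumed round figures, not official experimental numbers):

```python
# How much hotter is a quark-gluon plasma than the Sun's core?
# Assumed round figures: plasma ~2e12 K, solar core ~1.5e7 K.
t_plasma = 2.0e12    # kelvin: rough temperature needed for a quark-gluon plasma
t_sun_core = 1.5e7   # kelvin: approximate temperature at the Sun's core

ratio = t_plasma / t_sun_core
print(f"about {ratio:.0e} times hotter")   # roughly 1e5, i.e. 100,000 times
```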
These are questions of mathematical physics rather than physics per se, but they are important. At the turn of the millennium, the Clay Mathematics Institute offered a $1,000,000 prize for providing a mathematically rigorous foundation for the quantum version of Yang-Mills theory in four spacetime dimensions, and proving that there's a "mass gap" - meaning that the lightest particle in this theory has nonzero mass. For details see:
- Clay Mathematics Institute, Yang-Mills and Mass Gap.
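In symbols, the "mass gap" part of the problem asks for a proof that the Hamiltonian H of the quantized theory has no spectrum between the vacuum energy (set to 0) and some positive constant (this is a paraphrase, not the official problem statement):

```latex
\operatorname{spec}(H) \;\subset\; \{0\} \,\cup\, [\Delta, \infty)
\qquad \text{for some } \Delta > 0 .
```

Here the gap Delta is the mass of the lightest particle in the theory.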
Most "grand unified theories" (GUTs) predict that the proton decays, but so far experiments have (for the most part) only put lower limits on the proton lifetime. As of 2002, the lower limit on the mean life of the proton was somewhere between 10^31 and 10^33 years, depending on the presumed mode of decay, or 1.6 x 10^25 years regardless of the mode of decay.
Proton decay experiments are heroic undertakings, involving some truly huge apparatus. Right now the biggest one is "Super-Kamiokande". This was built in 1995, a kilometer underground in the Mozumi mine in Japan. This experiment is mainly designed to study neutrinos, but it doubles as a proton decay detector. It consists of a tank holding 50,000 tons of pure water, lined with 11,200 photomultiplier tubes which can detect very small flashes of light. Usually these flashes are produced by neutrinos and various less interesting things (the tank is deep underground to minimize the effect of cosmic rays). But, flashes of light would also be produced by certain modes of proton decay, if this ever happens.
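To get a sense of scale, here is a rough back-of-the-envelope estimate (ours, not the experiment's; it ignores the smaller fiducial volume and detection efficiencies) of how many decays such a tank would see per year if the proton's mean life were 10^33 years:

```python
# Rough estimate: expected proton decays per year in a
# Super-Kamiokande-sized tank, assuming a mean proton life of 1e33 years.
# Illustrative round numbers only.

AVOGADRO = 6.022e23          # molecules per mole
water_mass_g = 5.0e10        # 50,000 metric tons of water, in grams
molar_mass_h2o = 18.0        # grams per mole of H2O
protons_per_molecule = 10    # 8 protons in oxygen + 2 hydrogen nuclei

n_protons = (water_mass_g / molar_mass_h2o) * AVOGADRO * protons_per_molecule
mean_life_years = 1e33

decays_per_year = n_protons / mean_life_years
print(f"protons in tank: {n_protons:.2e}")
print(f"expected decays per year: {decays_per_year:.1f}")
```

With a mean life of 10^33 years, even 10^34 protons yield only a dozen or so decays per year, which is why the tank has to be so enormous.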
Super-Kamiokande was beginning to give much improved lower bounds on the proton lifetime, and excellent information on neutrino oscillations, when a strange disaster happened on November 12, 2001. The tank was being refilled with water after some burnt-out photomultiplier tubes had been replaced. Workmen standing on styrofoam pads on top of some of the bottom tubes made small cracks in the neck of one of the tubes, causing that tube to implode. The resulting shock wave started a chain reaction in which about 7,000 of the photomultiplier tubes were destroyed! Luckily, after lots of hard work the experiment was rebuilt by December 2002.
In 2000, after about 20 years of operation, the Kolar Mine proton decay experiment claimed to have found proton decay, and their team of physicists gave an estimate of 10^31 years for the proton lifetime. Other teams are skeptical.
For more details, try these:
Of course their mass in kilograms depends on an arbitrary human choice of units, but their mass ratios are fundamental constants of nature. For example, the muon is about 206.76828 times as heavy as the electron. We have no explanation of this sort of number! We attribute the masses of the elementary particles to the strength of their interaction with the Higgs boson (see above), but we have no understanding of why these interactions are as strong as they are.
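For instance, the muon-to-electron ratio quoted above can be recovered from the measured rest energies of the two particles (CODATA-style values; treat the trailing digits as approximate):

```python
# Muon-to-electron mass ratio from measured rest energies
# (approximate CODATA values, quoted to the precision needed here).
m_mu_mev = 105.6583755   # muon rest energy in MeV/c^2
m_e_mev = 0.51099895     # electron rest energy in MeV/c^2

ratio = m_mu_mev / m_e_mev
print(f"m_mu / m_e = {ratio:.5f}")   # about 206.76828
```

The Standard Model takes this number as an input; nothing in the theory tells us why it is 206.76828 rather than something else.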
Particle masses and strengths of the fundamental forces constitute most of the 26 fundamental dimensionless constants of nature. Another one is the cosmological constant - assuming it's constant. Others govern the oscillation of neutrinos (see below). So, we can wrap a bunch of open questions into a bundle by asking: Why do these 26 dimensionless constants have the values they do?
Perhaps the answer involves the anthropic principle, but perhaps not. Right now, we have no way of knowing that this question has any answer at all!
For a list of these 26 dimensionless constants, try:
- John Baez, How Many Fundamental Constants Are There?
Very roughly speaking, the Anthropic Principle says that our universe must be approximately the way it is for intelligent life to exist, so that the mere fact we are asking certain questions constrains their answers. This might "explain" the values of fundamental constants of nature, and perhaps other aspects of the laws of physics as well. Or, it might not.
Different ways of making the Anthropic Principle precise, and a great deal of evidence concerning it, can be found in a book by Barrow and Tipler:
- John D. Barrow and Frank J. Tipler, The Anthropic Cosmological Principle, Oxford U. Press, Oxford, 1988.
This book started a heated debate on the merits of the Anthropic Principle, which continues to this day. Some people have argued the principle is vacuous. Others have argued that it distracts us from finding better explanations of the facts of nature, and is thus inherently unscientific. For one interesting view, see:
In 1994 Lee Smolin advocated an alternative but equally mind-boggling idea, namely that the parameters of the Universe are tuned, not to permit intelligent life, but to maximize black hole production! The mechanism he proposes for this is a kind of cosmic Darwinian evolution, based on the (unproven) theory that universes beget new baby universes via black holes. For details, see:
- Lee Smolin, The Life of the Cosmos, Crown Press, 1997.
- Lee Smolin, The Fate of Black Hole Singularities and the Parameters of the Standard Models of Particle Physics and Cosmology.
More recently, the string theorist Leonard Susskind has argued that the "string theory vacuum" which describes the laws of physics we see must be chosen using the Anthropic Principle:
- Edge: The Landscape - a talk with Leonard Susskind.
Despite a huge amount of work on string theory over the last few decades, it has still made no predictions that we can check with our particle accelerators: predictions whose failure would falsify the theory. The closest it comes so far is by predicting the existence of a "superpartner" for each of the observed types of particle. None of these superpartners have ever been seen. It is possible that the Large Hadron Collider will detect signs of the lightest superpartner. It's also possible that dark matter is made of superpartners! But, these remain open questions.
It's also interesting to see what string theorists regard as the biggest open questions in physics. At the turn of the millennium, the participants of the conference Strings 2000 voted on the ten most important physics problems. Here they are:
- Are all the (measurable) dimensionless parameters that characterize the physical universe calculable in principle or are some merely determined by historical or quantum mechanical accident and uncalculable?
- How can quantum gravity help explain the origin of the universe?
- What is the lifetime of the proton and how do we understand it?
- Is Nature supersymmetric, and if so, how is supersymmetry broken?
- Why does the universe appear to have one time and three space dimensions?
- Why does the cosmological constant have the value that it has, is it zero and is it really constant?
- What are the fundamental degrees of freedom of M-theory (the theory whose low-energy limit is eleven-dimensional supergravity and which subsumes the five consistent superstring theories) and does the theory describe Nature?
- What is the resolution of the black hole information paradox?
- What physics explains the enormous disparity between the gravitational scale and the typical mass scale of the elementary particles?
- Can we quantitatively understand quark and gluon confinement in Quantum Chromodynamics and the existence of a mass gap?
For details see:
- Strings 2000, Physics Problems for the Next Millennium.
This last question sits on the fence between cosmology and particle physics:
The answer to this question will necessarily rely upon, and at the same time may be a large part of, the answers to many of the other questions above.