Fine tuning and numerical coincidences July 1, 2008
Posted by dorigo in Blogroll, cosmology, games, internet, physics, science. Tags: crackpots, dark energy, fine tuning
The issue of fine tuning is a headache for today's theorists in particle physics. I reported here some time ago the brilliant and simple explanation of the problem with the Higgs boson mass given by Michelangelo Mangano. In a nutshell, the Higgs boson mass is expected to be light in the Standard Model, and yet it is very surprising that it should be so, given that there are a dozen very large contributions to its value, each of which could make the Higgs hugely massive: but altogether they magically cancel. They are "fine-tuned" to nullify one another like gin and vermouth around the olive in a perfectly crafted Martini.
A similar coincidence -and actually an even more striking one- happens with dark energy in cosmology. Dark energy has a density which is orders and orders of magnitude smaller than what one would expect from simple arguments, calling for an explanation which is still unavailable today. Of course, the fact that as of today there is no solid experimental evidence for either the Higgs boson or dark energy is no deterrent: these entities are quite hard to part with, if we insist that we have understood at least in rough terms what exists in the Universe and what causes electroweak symmetry breaking in particle physics. Yet, we should not forget that there might not be a problem after all.
I came across a brilliant discussion of fine tuning in this paper today by sheer chance -or rather, by that random process I entertain myself with every once in a while, called "checking the arXiv". For me, that simply means looking at recent hep-ph and hep-ex papers, browsing through every third page, and getting caught by the title of some other article quoted in the bibliography, then iterating the process until I remind myself I have to run for some errand.
So, take the two numbers 987654321 and 123456789: could you imagine a more random choice of two 9-digit integers ? Well then, what if I argued that it is by no means a random choice but an astute one, by showing that their ratio is 8.000000073, which deviates from a perfect integer by only nine parts in a billion!
Another more mundane and better known example is the 2000 US election: the final ballots in Florida revealed that the Republican party got 2,913,321 votes, while the Democratic votes were only 2,913,144: a difference of sixty parts in a million.
Numerical "coincidences" such as the first one above have always had a tremendous impact on the standard crackpot: a person enamoured with a discipline but missing, at least in part, the institutional background required to be regarded as an authoritative source. A crackpot physicist, if shown a similarly odd coincidence (imagine if those numbers represented two apparently uncorrelated measurements of different physical quantities), would certainly start to build a theory around it with the means at his or her disposal. This would be enough for him or her to be tagged as a true crackpot. But there is nothing wrong with trying to understand a numerical coincidence! The only difference is that acknowledged scientists only get interested when those coincidences are really, really, really odd.
Yes, the feeling of being fooled by Nature (the bitch, not the magazine) is what lies underneath. You study electroweak theory, figure that the Higgs boson cannot be much heavier than 100 GeV, and find out that for this to be so there has to be a highly unlikely numerical coincidence at work: this is enough for serious physicists to build new theories. And sometimes it works!
Johann Jakob Balmer got his name in all textbooks by discovering the ratio (in the Latin sense) behind the measured hydrogen emission lines. He was no crackpot, but in earnest all he did to become textbook famous was to find out that the wavelengths of the hydrogen lines in the visible part of its emission spectrum could be obtained with a simple formula involving an integer number n -none other than the principal quantum number of the hydrogen atom.
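For reference, the formula Balmer found -a standard textbook relation- can be written as

$\lambda = B\,\frac{n^2}{n^2-4}, \qquad n = 3, 4, 5, \dots$

with $B \simeq 364.5$ nm; n=3 gives the red H-alpha line at about 656 nm.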
So, is it a vacuous occupation to try and find out the underlying reason -the ratio- of the Koide mass formula or of other coincidences ? I think it only partly depends on the tools one uses; much more on the likelihood that these observed oddities are really random or not. And since a meaningful cut-off in the probability is impossible to determine, we should not laugh at the less compelling attempts.
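For the record, the Koide formula relates the three charged lepton masses as

$\frac{m_e + m_\mu + m_\tau}{\left(\sqrt{m_e}+\sqrt{m_\mu}+\sqrt{m_\tau}\right)^2} = \frac{2}{3},$

a relation the measured masses satisfy with remarkable precision, and for which no accepted explanation exists.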
As far as the numerical coincidence I quoted above is concerned, you might have guessed it: it is no coincidence! Greg Landsberg explains in a footnote to the paper I quoted above that one could in fact demonstrate, with some skill in algebra, that
“It turns out that in the base-N numerical system the ratio of the number composed of digits N through 1 in the decreasing order to the number obtained from the same digits, placed in the increasing order, is equal to N-2 with the precision asymptotically approaching $N^{3-N}$. Playing with a hexadecimal calculator could easily reveal this via the observation that the ratio of FEDCBA987654321 to 123456789ABCDEF is equal to 14.000000000000000183, i.e. 14 with the precision of $\sim 2\times 10^{-16}$.”
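One can convince oneself of the quoted claim by computing the ratio in a few bases; here is a minimal Python check of my own (not code from the paper), which also uses the exact identity behind the near-integer ratio:

    # Landsberg's observation: in base N, the number with digits (N-1)...1 in decreasing
    # order, divided by the number with the same digits in increasing order, is N-2 up
    # to a tiny correction. The exact identity is a = (N-2)*b + (N-1).
    def descending(N):
        """Integer with base-N digits N-1, N-2, ..., 1 (987654321 for N=10)."""
        return sum(d * N**(d - 1) for d in range(1, N))

    def ascending(N):
        """Integer with base-N digits 1, 2, ..., N-1 (123456789 for N=10)."""
        return sum(d * N**(N - 1 - d) for d in range(1, N))

    for N in (10, 16, 36):
        a, b = descending(N), ascending(N)
        assert a == (N - 2) * b + (N - 1)   # exact, for any base N >= 3
        print(N, a // b, (N - 1) / b)       # deviation from N-2 shrinks roughly like N**(3-N)

For N=10 the deviation is the 7.3e-8 quoted above (nine parts in a billion of 8), and for N=16 it is the 1.8e-16 of the hexadecimal example.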
Aptly, he concludes the note as follows:
“Whether the precision needed to fine-tune the SM [Standard Model] could be a result of a similarly hidden principle is yet to be found out.”
Ah, the beauty of Math! It is so reassuring to know the absolute truth about something… Alas, too bad about Gödel's incompleteness theorem. Whether one can demonstrate that the Florida elections were fixed, on the other hand, remains to be shown.
Three space telescopes May 22, 2008
Posted by dorigo in astronomy, cosmology, news, physics, science. Tags: adept, dark energy, destiny, snap, space telescope, supernova, weak lensing
This morning at PPC2008 the audience heard three different talks on proposed space missions to measure dark energy through supernova surveys at high redshift and weak lensing observations. I am going to give some highlights of the presentations below.
The first presentation was by Daniel Holz on “The SuperNova Acceleration Probe“, SNAP.
SNAP is all about dark energy (DE). Supernovae show that there is acceleration in the universe. However, to measure precisely the amount of DE in the universe one needs to determine the distance versus redshift relation of type Ia supernovae. These are exploding carbon-oxygen white dwarfs at the Chandrasekhar limit, i.e. with exactly the right amount of mass: they make a very well understood explosion when they die, and they can thus be used as standard candles to measure distance, while their redshift tells us how fast they are receding from us. The precision required to extract information on DE is at the level of a few %, which is very difficult to reach, so one needs great control of the systematics. One also wants to distinguish DE from modified gravity models, which accommodate some of the observed features of the universe by hypothesizing that the strength of gravity is not exactly the same as one goes to very large distance scales.
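To make the "distance versus redshift" idea concrete, here is a minimal Python sketch of my own (illustrative fiducial parameters, not SNAP code) showing how the luminosity distance of a standard candle depends on the dark energy equation of state w in a flat universe:

    # Luminosity distance and distance modulus vs redshift for a flat universe with
    # matter plus dark energy of constant equation of state w: the curve that a
    # type Ia supernova Hubble diagram actually constrains.
    import numpy as np
    from scipy.integrate import quad

    c = 299792.458   # speed of light, km/s
    H0 = 70.0        # Hubble constant, km/s/Mpc (illustrative)

    def lum_distance(z, Om=0.27, w=-1.0):
        """Luminosity distance in Mpc: d_L = (1+z) * (c/H0) * integral_0^z dz'/E(z')."""
        Ode = 1.0 - Om
        E = lambda zp: np.sqrt(Om * (1 + zp)**3 + Ode * (1 + zp)**(3 * (1 + w)))
        comoving, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
        return (1 + z) * (c / H0) * comoving

    def distance_modulus(z, **kw):
        """mu = m - M = 5 log10(d_L / 10 pc), the quantity plotted against redshift."""
        return 5.0 * np.log10(lum_distance(z, **kw) * 1e6 / 10.0)

    for w in (-0.9, -1.0, -1.1):
        print(w, round(distance_modulus(1.0, w=w), 3))

Changing w by 10% moves the distance modulus at z=1 by only a few hundredths of a magnitude, which is exactly why percent-level control of systematics is the whole game.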
Separating out the different models is not easy. Supernovae allow one to determine the integrated expansion -how much the universe has accelerated since its origin- but not the growth of structure in the universe. The way SNAP approaches this is by combining the SN measurements with weak lensing, i.e. the deflection of photons from a distant source as they pass through large concentrations of mass, such as a cluster of galaxies along the line of sight.
SNAP is a space telescope. Its strong point is the width of the field of view it can image at a time: 0.7 square degrees of sky, much larger than that of the Hubble Space Telescope. It also provides lots of color information: it extends into the infrared, and it has 9 filters to get spectral information on the observed objects.
SNAP aims at obtaining 2000 supernovae of type Ia at redshift z<1.7, and at performing a weak lensing survey over 4000 square degrees. It is a 1.8 m, diffraction-limited telescope. The bottom line for SNAP is to measure the dark energy parameters at the percent level: its primary target to 0.4%, and two further parameters to 1.6% and 9%, respectively.
After Holz, R. Kirshner described “Destiny, the Dark Energy Space Telescope“. Destiny is a similar concept to the previous one. Its science goal is to determine the expansion history of the universe to 1% accuracy.
It is a 1.65 m telescope, slightly smaller than SNAP (1.8 m). The program has to cost 600 million dollars or less, all costs included. It is receiving a green light from its funding agencies, NASA and DOE. It will operate from the same location as SNAP -a Lagrangian point called "L2", which is not much affected by gravitational perturbations from the Earth and the Moon. Kirshner made me laugh out loud when he added that although the location is the same as SNAP's, it will not be too crowded a place, since SNAP won't be there.
Kirshner explained that the project is very conservative. They do not need low-redshift SN measurements from space. From space one can work in the near infrared, something that can only be done there, and this complements ground-based telescopes well. A very distinctive difference from SNAP is that Destiny is an imaging spectrograph: it takes a spectrum of every object in the field, every time.
With SNAP, instead, one has to choose which objects to take a spectrum of. The resolution in wavelength is $\lambda/\Delta\lambda \simeq 75$, equivalent to having 75 filters sampling the spectral energy distribution.
The Destiny philosophy is to keep it simple, stupid. It is a satellite for which every piece has flown in space previously: nothing they do not already know how to do. It uses the minimal instrument required to do the job, and it does in space what must be done in space, without taking the job away from ground-based observations. Also, a point which was emphasized is that there will be no time-critical operations: it is a highly automated, fixed program, with no need for 24/7 crews on earth to decide which stars to take spectra of.
DE measurements require covering both the acceleration and the deceleration epochs, i.e. redshifts below and above about z=0.8. From the ground, supernovae are accessible at magnitudes below about 24; from space one reaches fainter objects, at z>0.8. To take a picture of the time derivative of the equation of state, one needs to measure across the jerk region of the distance-redshift diagram, where the curve changes from acceleration to deceleration, at z of about 1. That has to be done from space.
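For a flat Lambda-CDM universe, the transition in question corresponds to the redshift at which the deceleration parameter changes sign -a standard textbook relation:

$1 + z_t = \left(\frac{2\,\Omega_\Lambda}{\Omega_m}\right)^{1/3} \simeq 1.75 \quad \text{for } \Omega_m = 0.27,\ \Omega_\Lambda = 0.73,$

i.e. $z_t \simeq 0.75$, so the jerk region indeed sits at redshifts of order one, where the space-based infrared observations come in.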
Kirshner explained that the sky at night is dark at optical wavelengths, but in the infrared it is ten to a hundred times brighter; in space the sky background goes down by a factor of 100. There is also absorption in the atmosphere, mostly from water vapor: from the ground one can only work in narrow wavelength windows between the absorption bands, which in the near infrared lie around 1.4 and 1.85 micrometers, while in space one can look at the entire range.
Finally, Daniel Eisenstein discussed "The Advanced Dark Energy Physics Telescope" (ADEPT).
He started by explaining that baryon acoustic oscillations are a standard ruler we can use to measure cosmological distances: what we are seeing are sound waves coming from the early universe. Recombination happens at z of about 1000, some 400,000 years after the Big Bang. Before recombination the universe is ionized: photons provide an enormous pressure and restoring force, and density perturbations oscillate and propagate as acoustic (sound) waves. After recombination the universe becomes neutral, and the phase of each oscillation at recombination time is imprinted in its late-time amplitude.
An overdensity is an overpressure that launches a spherical sound wave, which travels at 57% of the speed of light. It travels outward until, at the time of recombination, the wave stalls, depositing the gas perturbation at about 150 Megaparsecs (Mpc). The overdensities in the shell (gas) and in the center both seed the formation of galaxies, so the aim is to look for a bump in the clustering of galaxies at distance scales of 150 Mpc: the acoustic signature is carried by pairs of galaxies separated by that distance. Nonlinearities push galaxies around by 3 to 10 Mpc, which broadens the peak and makes it harder to measure the scale. Some of the broadening can be recovered by measuring the large scale structure which acted to broaden the peak: it is a perturbation which can be corrected for.
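For orientation, the 150 Mpc figure can be recovered from a back-of-the-envelope integral of the sound speed over the pre-recombination expansion history. A minimal Python sketch of my own, with illustrative WMAP-era parameters (not numbers from the talk):

    # Comoving sound horizon at recombination: the ~150 Mpc standard ruler of BAO.
    import numpy as np
    from scipy.integrate import quad

    c = 299792.458            # km/s
    h = 0.70
    H0 = 100.0 * h            # km/s/Mpc
    Om, Ob = 0.27, 0.046      # total matter and baryon densities (illustrative)
    Og = 2.47e-5 / h**2       # photons (T_CMB = 2.725 K)
    Orad = 4.15e-5 / h**2     # photons + massless neutrinos
    OL = 1.0 - Om - Orad      # flat universe
    z_rec = 1090.0            # recombination redshift

    def H(z):
        """Hubble rate in km/s/Mpc, flat LCDM with radiation."""
        return H0 * np.sqrt(Om * (1 + z)**3 + Orad * (1 + z)**4 + OL)

    def cs(z):
        """Sound speed of the photon-baryon fluid; close to c/sqrt(3) at early times."""
        R = 0.75 * (Ob / Og) / (1 + z)   # 3 rho_b / (4 rho_gamma)
        return c / np.sqrt(3.0 * (1.0 + R))

    r_s, _ = quad(lambda z: cs(z) / H(z), z_rec, np.inf)
    print(round(r_s), "Mpc comoving")                 # roughly 145-150 Mpc
    print(round(cs(1e5) / c, 2), "c at early times")  # close to the 57% of c quoted above

This is only meant to show where the number comes from; the real analyses of course fit the full perturbation equations.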
The most serious concern is that the peak position itself could shift. It is a small effect: most of the nonlinear motion is random, and the shift is below 1%. One can run large volumes of universe in cosmological N-body simulations and find shifts of 0.25% to 0.5%; moreover, these shifts can be predicted and removed.
To measure the peak of baryon acoustic oscillations at low redshift there is one program, BOSS, the next phase of SDSS, which will run from 2008 to 2014 and provide a definitive study of acoustic oscillations at z<0.7.
Instead, ADEPT will survey three fourths of the sky at redshifts 1<z<2 from a 1.3 meter space telescope, with slitless IR spectroscopy of the H-alpha line, collecting 100 million redshifts. ADEPT is designed for maximum synergy with ground-based dark energy programs -a point Kirshner had also made for Destiny. It will measure the angular diameter distance from BAO as well as the expansion rate: a huge galaxy redshift survey.
Hearing these three talks left me with the impression that cosmologists have become a bit too careful in the design of their future endeavours. If we use technology that is old now to design experiments that will fly in five years, are we not going against the well-working paradigm of advancing technology through the push of needs from new, advanced experiments ? I do understand that space telescopes are not particle detectors, and if something breaks they cannot be taken apart and serviced at will; however, it is a sad thing to see so little will to be bold. A sign of the funding-poor times ?
Denny Marfatia’s talk on Neutrinos and Dark Energy May 22, 2008
Posted by dorigo in astronomy, cosmology, news, physics, science. Tags: dark energy, fine structure constant, neutrino, oklo, PPC2008
Denny spoke yesterday afternoon at PPC 2008. Below I summarize as well as I can those parts of his talk that were below a certain crypticity threshold (above which I cannot even take meaningful notes, it appears).
He started with a well-prepared joke: "Since at this conference the tendency is to ask questions early, any questions ?". This caused hilarity in the audience, but indeed, as I noted in another post, the setting is relaxed and informal, and the audience freely interrupts the speakers. Nobody seems to have complained so far…
Denny started by stating some observational facts: the dark energy (DE) density is of order $(2\times 10^{-3}\ {\rm eV})^4$, and the neutrino mass splitting scale $\sqrt{\Delta m^2}$ is of the same order of magnitude as $\rho_{\rm DE}^{1/4}$. This coincidence of scale might imply that neutrinos coupled to a light scalar could explain why $\Omega_{\rm DE}$ has a value similar to $\Omega_m$, i.e. why we observe a rather similar amount of dark energy and matter in the universe.
But he noted that there is more than just one coincidence problem: the DE density and the other energy densities have ratios which are not large numbers, and within a factor of ten all the components are equal.
Why not consider these coincidences to have some fundamental origin ? Perhaps neutrino and DE densities are related. It is easy to play with this hypothesis with neutrinos because we understand them the least!
We can make the neutrino mass a variable quantity. Imagine a fluid made of neutrinos plus a scalar, a quintessence-like scalar, with its potential. There is an ansatz to be made: the effective potential is stationary with respect to the neutrino mass.
So one gets a neutrino mass that decreases at the minimum of the potential, with a varying potential. Some consequences of the model follow from the resulting expression for the equation of state (see the relation sketched below): w can thus deviate from -1. It is like quintessence, but without a light scalar.
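In the standard mass-varying-neutrino construction (the Fardon-Nelson-Weiner scenario, which I take to be what is being described here), the relation in question reads, for non-relativistic neutrinos,

$V_{\rm eff}(m_\nu) = m_\nu n_\nu + V(m_\nu), \qquad \frac{\partial V_{\rm eff}}{\partial m_\nu} = 0 \;\Rightarrow\; 1 + w \simeq \frac{m_\nu n_\nu}{V_{\rm eff}},$

so the departure of w from -1 is controlled by the neutrino contribution to the dark fluid, which is why light or diluted neutrinos drive the system toward a cosmological constant, as noted below.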
Neutrino masses can be made to vary with their number density: if w is close to -1, the effective potential has to scale with the neutrino contribution. Neutrinos are then most massive in empty space, and they are lighter when they cluster.
This could create an intriguing conflict between cosmological and terrestrial probes of neutrino mass. The neutrino masses vary with matter density if the scalar induces couplings to matter. There should be new matter effects in neutrino oscillations. One could also see temporal variations in the fine structure constant.
If neutrinos are light, DE becomes a cosmological constant, w = -1, and we cannot distinguish it from other models. Also, light neutrinos do not cluster, so the local neutrino mass will be the same as the background value; and high redshift data and tritium beta decay will be consistent because neither will show evidence for neutrino mass.
So one can look at the evidence for variations in time of the fine structure constant $\alpha$. Measurements of transition frequencies in atomic clocks set a stringent limit on its present-day drift $\dot\alpha/\alpha$.
The abundance ratio of Sm-149 to Sm-147 at the Oklo natural reactor shows no variation of $\alpha$ over the last 1.7 billion years: the bound comes from the resonant energy for neutron capture, which is very sensitive to $\alpha$.
Meteoritic data constrain the beta decay rate of rhenium-187 back to the time of solar system formation, 4.6 billion years ago (corresponding to a redshift z<0.5), again limiting any variation of $\alpha$.
Going further back in time, transition lines in quasar (QSO) spectra indicate a nonzero shift $\Delta\alpha/\alpha$ at the part-per-million level, found to be varying at the 5-sigma level! The lines show an actual splitting which is different from the expected one. The result is not confirmed, and it depends on the magnesium isotopic ratio assumed: people argue that one is just measuring the chemical evolution of magnesium in these objects, since the three-isotope ratio might differ from what is found in our sun, and this would mess up the measurement.
Then there is a measurement from the CMB (at z=1100), which determines $\alpha$ from the temperature of decoupling, which in turn depends on the binding energy.
Also, primordial abundances from Big Bang nucleosynthesis (at z of about $10^{10}$) allow one to constrain $\Delta\alpha/\alpha$. Bounds on a variation of the fine structure constant at high redshift are thus very loose.
One can therefore hypothesize a phase transition in $\alpha$: it was done recently by Anchordoqui, Barger, Goldberg and Marfatia. The bottom line is to construct the model such that $\alpha$ changes when the neutrino density crosses a critical value as the universe expands.
The first assumption is the following: the neutrino mass function M has a unique stationary point. Additional stationary points are possible, but for nonrelativistic neutrinos with subcritical density there is only one minimum, which stays fixed, so there is no evolution.
For non-relativistic neutrinos, with supercritical density, there is a window of instability.
One expects the neutrinos to reach the critical density at some redshift. The instability can be avoided if the growth-slowing effects provided by cold dark matter dominate over the acceleron-neutrino coupling. Requiring no variation of $\alpha$ up to z=0.5, followed by a step in $\alpha$, and enforcing that the signal in the quasar spectra is reproduced, one gets results which are consistent with the CMB and BBN bounds.
Alexander Kusenko on sterile neutrinos, and the other afternoon talks May 20, 2008
Posted by dorigo in astronomy, cosmology, news, physics, science. Tags: cepheids, dark energy, pulsars, Sterile neutrinos, supernovae
Still many interesting talks in the afternoon session at PPC08 today, and I continued the unprecedented record of NOT dozing during the talks. Maybe I am growing old.
Alexander Kusenko was the first to speak, and he discussed “The dark side of light fermions: sterile neutrinos as dark matter“. I would love to be able to report in detail the contents of his talk, which was enlightening, but I would do a very poor job because I tried to understand it by following it with undivided attention rather than taking notes (core duo upgrade to grey matter, anyone?). I failed miserably, but what I did get and took notes about was, however, the fact that sterile neutrinos can explain the velocity distribution of pulsars.
Before I get to that, however, let me tell you how I found myself grinning during Kusenko's talk. He discussed at some point the fact that by making the Majorana mass large one can obtain small neutrino masses with Yukawa couplings of order unity. He pointed out that all quarks have y<<1 except the top quark: if these couplings come from string theory, they have a reason to be all of order unity, while in extra-dimension kinds of models they would typically be exponentially small (as a function of the scale of the extra dimension). So, concluded Alexander, "both options are equally likely". I found that sentence a daring stretch (and attached a "UN" before "likely" in my notes)! He appeared to be inferring some freedom in the phenomenology of neutrinos from the fact that string theory and large extra dimension models make conflicting predictions… Tsk tsk.
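For context, the point about large Majorana masses and order-unity Yukawa couplings is just the usual seesaw relation (standard material, not a formula from the slides):

$m_\nu \simeq \frac{y^2 v^2}{M},$

so with $y \sim 1$, $v \simeq 174$ GeV and $M \sim 10^{14}$-$10^{15}$ GeV one lands naturally on sub-eV neutrino masses.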
Anyway, about pulsars. Pulsars are rapidly rotating, magnetized neutron stars produced in supernova explosions. They have been known to have a velocity spectrum which far exceeds that of other stars -by about an order of magnitude. Proposed explanations of this phenomenon do not work: for example, the explosion itself does not seem to have enough asymmetry. There is a lot of energy in the explosion, but 99% of it escapes with neutrinos; if these escaped anisotropically, they could propel the pulsar. Through reactions with polarized electrons one can easily get 10% asymmetries in the neutrino production in the core of imploding supernovae, while one needs just 1% to explain the velocity distribution. The asymmetry, however, is lost as the neutrinos rescatter inside the star. If instead some of the neutrinos are sterile, they interact much more weakly than ordinary ones and escape without washing out their asymmetry: a 10% production asymmetry, with about 10% of the neutrinos going into sterile states, would be enough to explain the observed velocities.
Alexander pointed out that sterile neutrinos must have lifetimes larger than the age of the universe, but they can decay radiatively, producing photons of energy m/2. Concentrations of sterile neutrino dark matter therefore emit X-rays.
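The m/2 figure follows from simple two-body kinematics: in the radiative decay $\nu_s \to \nu\,\gamma$, with an essentially massless daughter neutrino, the photon carries away

$E_\gamma = \frac{m_s}{2},$

which for keV-scale sterile neutrino masses falls squarely in the X-ray band.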
In summary, we introduced additional degrees of freedom in our particle model when we discovered neutrino masses. This implies the need for right-handed neutrinos. Usually one sets a large Majorana mass and pushes these right-handed states to a very high mass scale, but one of them can remain at low mass, and it could provide a good dark matter candidate. It could play a role in the baryon asymmetry of the universe, and explain pulsar velocities. X-ray telescopes could discover it and, if it is discovered, the line from relic sterile neutrino decays can be used to map out the redshift distribution of dark matter. This could potentially be used, in an optimistic future, to study the structure and the expansion history of the Universe.
After Kusenko's talk, A. Riess discussed the "implications for dark energy of supernovae with redshift larger than one". With the Hubble Space Telescope they made a systematic search for supernovae in a small region of sky, and found 50 in three years, 25 of them with z>1. These allowed an improved constraint on the equation of state, w = -1.06 +- 0.10. A paper is in preparation. I am quite sorry to have no chance of reporting about the very interesting discussion Riess made of distance indicators like Cepheids, the Keplerian motion of masers around the central black hole in NGC 4258 -which anchors the distance of this galaxy and allows relative distances to become absolute measurements- and parallax measurements made on Cepheids. His talk was very instructive, but my brain does not work well tonight…
Dragan Huterer discussed "A decision tree for dark energy". In two words, he proposes to start from the simple Lambda-CDM model and to try more complex constructions incrementally, starting with the extensions that have more predictive power. He also discussed a generic figure of merit for dark energy measurements, basically defined as the inverse of the area of the 95% CL contour in the plane of the two parameters describing the dark energy equation of state (a sketch is given below). Apologies here too for having to cut this summary short…
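To make the figure of merit concrete, here is a small Python sketch of my own, using the common definition in terms of the covariance of the two equation-of-state parameters (w0, wa), with made-up forecast numbers:

    # Dark energy figure of merit: inverse of the area of the 95% CL ellipse
    # in the (w0, wa) plane, given the 2x2 parameter covariance matrix.
    import numpy as np

    def figure_of_merit(cov, delta_chi2=5.99):
        """The ellipse delta_chi2 <= 5.99 (95% CL, 2 parameters) has area
        pi * delta_chi2 * sqrt(det C); the figure of merit is its inverse."""
        return 1.0 / (np.pi * delta_chi2 * np.sqrt(np.linalg.det(np.asarray(cov))))

    # hypothetical forecast: sigma(w0) = 0.1, sigma(wa) = 0.4, correlation -0.8
    s0, sa, rho = 0.1, 0.4, -0.8
    cov = [[s0**2, rho * s0 * sa], [rho * s0 * sa, sa**2]]
    print(round(figure_of_merit(cov), 1))   # smaller parameter errors -> larger figure of merit

A survey that shrinks the area of the error ellipse raises the figure of merit, which is what makes it a convenient single number for comparing proposals.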
Mark Trodden concluded the afternoon talks by discussing "Cosmic Acceleration and Modified Gravity". Mark is a blogger and so I feel no shame in saying he should report about his talk rather than having me do it. However, I found very interesting a note he made on extrapolations of Newtonian mechanics working or not working. When the observed perturbations of Uranus' orbit pointed to the existence of an outer planet, Neptune, the prediction was on solid ground: as an effective theory, you expect Newtonian mechanics to keep working at larger radii. On the other hand, when Mercury's precession was hypothesized to come from the perturbing effects of an inner planet, the answer turned out to be wrong: there, Newtonian mechanics did break down, and general relativity showed its effect. The lesson for modified gravity models is clear.
The session was concluded by a panel led by Kusenko, where some of the speakers were asked a few questions. The first was: "If you had a billion dollars, how would you spend it on measuring cosmological parameters ?"
A. Riess said he would do the easiest things first. Before launching to space, one has to look at the things that are easy and less expensive, and he sees a lot of room there. Space has advantages (stability of conditions) but it is hard to get there. We want measurements at the few percent level on a few cosmological parameters: maybe we can do that now from the ground, or from instruments that are already in space. He stressed the necessity of being more creative in using the resources we already have: all of the small but significant improvements that are possible matter.
I gave my own answer by grabbing the microphone after a few of the speakers had their say. I said we do not need billions of dollars: there is, in fact, a lot we can do on the ground with much smaller budgets, without spending huge sums on space experiments. CP violation experiments can be improved with limited budgets, and low energy hadronic and nuclear cross sections are also quite important for the understanding of the early universe. The LHC did cost a few billion dollars, and it will maybe give us an answer about dark matter; but in general particle physics -even direct DM searches- requires smaller budgets, and the payoff may be large.