
Guest post: Rick Ryals – “The Anthropic Principle” June 23, 2008

Posted by dorigo in cosmology, physics, science.

Rick Ryals, a frequent visitor of this site, wrote a guest post here some time ago, on Dirac’s theory and the Einstein constant. He sent me today another text about his views on the Anthropic principle, which I am happy to host here. Of course, his views and mine need not be the same ;-) for me to find the following text fit for this site. -TD

Ever wonder why David Gross calls the inability of science to produce a “dynamical principle” that would “make the landscape go away”, the biggest failure of science in the last twenty years?

I would assert that it’s quite obviously because scientists can’t or won’t add one and one.

The anthropic principle is possibly the most misunderstood and misused observation in all of science, whose mere mention brings about the most extreme reactions from just about everyone who comes into contact with it. Creationists read in the hand of God, while some string theorists find hope for a real theory, but most others find only utter disgust and complete disdain, as very few actually get the point. If this post were about a “variant interpretation”, then it would be called “The Unpopular Anthropic Principle”, because that’s exactly what it will be, since it includes all of the dirty little truths that nobody on any highly motivated side of the popular issues really wants to know about.

The physics concerns the unexpected carbon-life orientation of certain structure defining features of our universe that do not concur with the cosmological projections of modern physics.

The pointed nature of the physics indicates the direction that one might look in for the as yet undefined dynamical structure mechanism that is normally expected to explain why the universe is configured the way that it is, rather than some other way. Brandon Carter called this “a line of reasoning that requires further development”. But the Anthropic Principle was originally formalized by Carter as an ideological statement against the dogmatic non-scientific prejudices that scientists commonly harbor, that cause them to consciously deny anthropic relevance in the physics, so they instead tend to be willfully ignorant of just enough pertinent facts to maintain an irrational cosmological bias that leads to absurd, “Copernican-like” projections of mediocrity that contradict what is actually observed.

Carter was talking about an equally extreme form of counter-reactionism to old historical beliefs about geocentrism that causes scientists to automatically dismiss evidence for anthropic “privilege” right out of the realm of the observed reality. I intend to put very heavy emphasis on this point, because people go to unbelievable lengths to distort what Carter said on that fateful day in Poland, in order to willfully ignore this point as it applies to modern physics speculations and variant interpretations, which are neither proven nor definitively justified theoretically.

Why do none of the popular definitions of the anthropic principle include what Carter actually said?
…a reaction against conscious and subconscious – anticentrist dogma.

This is the real problem for science.

Carter’s example was as follows:

Unfortunately, there has been a strong and not always subconscious tendency to extend this to a most questionable dogma to the effect that our situation cannot be privileged in any sense. This dogma (which in its most extreme form led to the “perfect cosmological principle” on which the steady state theory was based) is clearly untenable, as was pointed out by Dicke (Nature 192, 440, 1961).
-Brandon Carter

Carter expounded on the anthropic coincidence that Robert Dicke had deduced from Dirac’s Large Numbers Hypothesis. Dicke had noted that “the forces are not random, but are constrained by biological factors” that cause the universe to evolve contrarily to the standard cosmological prediction in a unique manner that favors carbon life. It is important to note that this evolving physics includes all carbon-based life, and this also limits life to a very narrow range of time in the history of the universe. But this feature also dictates that the same combination of “homeostatic” environmental balances that define the Goldilocks Enigma will occur on similarly developed planets in similarly developed galaxies that exist along the same fine “layer” or time/location “plane” that our galaxy evolved on, so there is absolutely no apparent reason to assume that the physics applies exclusively to only one planet, or to a single form of carbon-based life.



Circumstellar Habitable Zone – Ecobalance – Ecosphere

How Carter’s anti-political statement applies, including its strength, depends on the cosmological model that physics is being applied to, so Brandon Carter’s own “strong” multiverse interpretation differs from what is actually observed. Carter’s point was that unscientific ideological bias should be honestly weighed into consideration whenever a scientist is faced with anomalous features of the universe that are also relevant to our place in it, in order to serve as a counterbalancing constraint on their preconceived prejudices against evidence for “preference” or “specialness”. Unfortunately for science, this is rarely the case, as these words will fly right past the theoretical confidence of the “cutting-edge”.

Add to that the creation/evolution “debate” and you have all the makings of a very bad situation for science, where zealots will either embrace what physicists commonly call the “appearance of design” as being just that, or, on the other side of the fanatical coin, anti-zealots will altogether deny that there is any such implication for “specialness” in the physics whatsoever, while appealing to multiverses and quantum uncertainty in lieu of causality and first principles. This is done in order to “explain away” the evidence, rather than to honestly recognize and give credible time to the most readily apparent implication for a biocentric cosmological principle that is indicated by the “appearance of design”. The anticentrist’s tendency to deny the significance of the observation is an over-reaction to pressure from religious extremists and to ill-considered assumptions about human arrogance, which doesn’t even make sense if we’re spread out across the universe like bacteria on a thin slide of time. Unfortunately for science, it is also a perfectly true example of Carter’s point, as anticentrists typically and wrongly believe that such an admission constitutes evidence in favor of the religious fanatic’s argument, so willful ignorance takes the place of science when the argument is a culture war between zealots and their antifanatical counterparts.

But it is an unavoidable fact that the anthropic physics is directly observed to be uniquely related to the structuring of the universe in a way that defies the most natural expectation for the evolution of the universe, in a manner that is also highly pointed toward the production of carbon-based life at a specific time in its history (and over an equally specific, fine layer or region of the Goldilocks zone of the observed universe).

If you disallow unproven and speculative physics theory, then an evidentially supported implication does necessarily exist that carbon-based life is somehow intricately connected to the structure mechanism of the universe, and weak, multiverse interpretations do not supersede this fact, unless a multiverse is proven to be more than cutting-edge theoretical speculation.

That’s the “undeniable fact” that compels Richard Dawkins and Leonard Susskind to admit that the universe “appears designed” for life! There is no valid “weak” interpretation without a multiverse, because what is otherwise unexpectedly observed, without the admission of speculation, is most apparently geared toward the production of carbon-based life. Their confidence comes from the fact that their admissions are qualified by their shared “belief” in unproven multiverse theories, but their interpretation is strictly limited to equally non-evidenced “causes”, like supernatural forces and intelligent design.

These arguments do not erase the fact that the prevailing evidence still most apparently does indicate that we are somehow relevantly linked to the structure mechanism, until they prove it isn’t so, so we must remain open to evidence in support of this, or we are not honest scientists, and we are no better than those who would intentionally abuse the science. We certainly do not automatically dismiss the “appearance” by first looking for rationale around the most apparent implication of evidence.

That’s like pretending that your number one suspect doesn’t even exist! There can be nothing other than self-dishonesty and pre-conceived prejudicial anticipation of the meaning that motivates this approach, which often *automatically* elicits false, ill-considered, and, therefore, necessarily flawed assumptions, which in turn most often elicit equally false accusations of “geocentrism” and “creationism”. That’s not science, it’s irrational reactionary skepticism that is driven without justification by sheer disbelief and denial.

And then along came this highly inconvenient… WHOOPS! WHAT’S THIS SUPPORTING HERESY that we must only work to explain-away?!?!

Does the motion of the solar system affect the microwave sky? http://cerncourier.com/cws/article/cern/29210

Lawrence Krauss even talks about this direct observation:

THE ENERGY OF EMPTY SPACE THAT ISN’T ZERO But when you look at CMB map, you also see that the structure that is observed, is in fact, in a weird way, correlated with the plane of the earth around the sun. Is this Copernicus coming back to haunt us? That’s crazy. We’re looking out at the whole universe. There’s no way there should be a correlation of structure with our motion of the earth around the sun — the plane of the earth around the sun — the ecliptic. That would say we are truly the center of the universe.
-Lawrence Krauss

“That’s Crazy”… “There’s no way”… Really, Larry?… Are you sure that it isn’t more-like… willful ignorance and denial?

Or isn’t it actually compounded supporting evidence for the life-oriented cosmological structure principle that we already have theoretical precedent for?

The problem here isn’t that we don’t have evidence, (make that, compounded evidence, and/or independently supportive evidence), the problem is that nobody is looking into this from any perspective that isn’t aimed at refuting the significance of the evidence.

They have had some success at this, too, because it has been discovered that the correlation applies to a specific region of galaxies like ours, but they act like they don’t have a clue, (and I’m sure that they don’t), that this is exactly what the Goldilocks Enigma predicts will be found.

It isn’t a case of not having evidence, rather, it is a matter of unscientific interpretation and an unwillingness to look at the physics straight-up, without automatically dragging some abstract and unproven assumptions about quantum observers into it, to see if maybe something that we do quite naturally might make us entirely necessary to the energy-economy of the physical process.

If you take Brandon Carter’s statement and bring it with you to the “consensus of opinion”, then you might begin to understand why the problem doesn’t get resolved:

And it ain’t pretty:

http://arxiv.org/abs/0705.2462

GLAST launching NOW! June 11, 2008

Posted by dorigo in astronomy, cosmology, internet, news, science.

You can see here (click on the “media channel” button on the right panel) a streaming video of the launch of GLAST. It is going on now! Good luck to a mission that has the potential to discover gamma ray sources in the universe and shed light on dark matter and a number of other unsolved questions in astrophysics and cosmology…

UPDATE: The scheduled launch time is now 12.05 – still 15 minutes to go! The NASA site is showing an unnerving video of the rocket waiting to be ignited…

Keith Olive: Big bang nucleosynthesis – concordance of theory and observations May 26, 2008

Posted by dorigo in cosmology, physics, science.

In the file with my notes on the talks I heard in Albuquerque I have a few more summaries that can be of general interest. So let me offer to you today the following one.

Keith gave a very clear and instructive seminar (slides here) on the status of our understanding of big-bang nucleosynthesis (BBN), the theory which uses well-understood nuclear physics to predict how light elements were forged during a very short time interval after the big bang, when the universe was still dense enough that nuclear reactions were frequent, but no longer so hot that photons would destroy the nuclear bonds between protons and neutrons. The calculations of BBN are formidable evidence for the big bang theory, since the abundance of light elements that are present in the universe but cannot have been formed inside stars -and are thus called “primordial”- is explained to good accuracy.

BBN was in the past the primary vehicle to obtain an estimate of the baryon abundance in the universe, but now WMAP data on the cosmic microwave background -the radiation seeping through the universe in all directions, which originated when the universe became transparent to radiation- does that much more precisely. It is thus more important now to try and match up the abundances of individual elements. BBN has become a test: a definite prediction of the light element abundances. The prediction can be compared with observations to see whether any effect is not yet well understood.

Deuterium is now the best of the three important light isotopes to study (Helium-4 and Lithium-7 are the others). The uncertainty from BBN in Li-7 is larger, and the discrepancy with observations is also larger: there is currently an interesting problem to solve with this element.

WMAP observations now allow us to compute \Omega_B h^2 = 0.0227 \pm 0.0006, so the baryon-to-photon ratio is found to be \eta = (6.22 \pm 0.16) \times 10^{-10}. BBN is thus now a parameterless theory to be used to make comparisons.
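As a cross-check of those numbers, here is a minimal Python sketch (my own, not from the talk) that converts \Omega_B h^2 into the baryon-to-photon ratio, assuming the standard present-day photon temperature of 2.725 K and textbook values of the constants:

```python
# Convert the WMAP baryon density Omega_B h^2 into the baryon-to-photon ratio eta.
# Assumes T_CMB = 2.725 K and standard values of the physical constants.

import math

T_cmb = 2.725            # K, present photon temperature
k_B   = 1.380649e-16     # erg/K
hbar  = 1.054572e-27     # erg s
c     = 2.997925e10      # cm/s
m_p   = 1.672622e-24     # g, proton mass
G     = 6.674e-8         # cm^3 g^-1 s^-2

omega_b_h2 = 0.0227      # WMAP value quoted in the text

# Photon number density: n_gamma = (2 zeta(3)/pi^2) (k_B T / hbar c)^3
zeta3   = 1.2020569
n_gamma = 2.0 * zeta3 / math.pi**2 * (k_B * T_cmb / (hbar * c))**3   # cm^-3

# Critical density for h = 1: rho_crit = 3 H0^2 / (8 pi G), H0 = 100 km/s/Mpc
H0_h1    = 100.0 * 1e5 / 3.0857e24                 # s^-1 (for h = 1)
rho_crit = 3.0 * H0_h1**2 / (8.0 * math.pi * G)    # g/cm^3, per unit h^2

n_b = omega_b_h2 * rho_crit / m_p                  # baryon number density, cm^-3
eta = n_b / n_gamma

print(f"n_gamma = {n_gamma:.1f} cm^-3")   # ~411 photons per cm^3
print(f"eta     = {eta:.2e}")             # ~6.2e-10, matching the quoted value
```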

Keith pointed out that everything about the energy content of the universe is well determined. The equilibrium at high temperature is maintained by weak interactions. What is important for BBN is the neutron-to-proton conversion rate; this freezes out at a temperature of about 1 MeV, so one uses the ratio at that time to determine abundances.

Nucleosynthesis is delayed because of the so-called deuterium bottleneck: there is a delay in the onset of the forging of light elements because of the competition between production and destruction rates. As the temperature drops, energetic photons able to break the deuterium bond can still be found in the tail of the energy distribution, until we arrive at temperatures of a tenth of an MeV; then, once deuterium forms and is not immediately broken again, nucleosynthesis proceeds very rapidly. The helium abundance is about 25%, and BBN provides a very good estimate for it. D and He-3 are instead at the level of a few parts in a hundred thousand, and Li-7 at the level of a tenth of a billionth.
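To see roughly where the 25% helium figure comes from, here is a back-of-the-envelope sketch (my own illustration, not the full BBN network): the neutron-to-proton ratio is frozen at its equilibrium value, neutrons decay until the deuterium bottleneck breaks, and essentially all surviving neutrons end up in He-4. The effective freeze-out temperature and bottleneck time below are assumed round numbers.

```python
# Back-of-the-envelope estimate of the primordial helium mass fraction Y_p.
# Assumed round numbers: effective n/p freeze-out at T ~ 0.75 MeV (a bit below
# the 1 MeV weak freeze-out, since n<->p conversion lingers), deuterium
# bottleneck broken at t ~ 180 s, neutron lifetime 880 s.

import math

delta_m = 1.293   # MeV, neutron-proton mass difference
T_f     = 0.75    # MeV, effective freeze-out temperature (assumption)
tau_n   = 880.0   # s, neutron lifetime
t_nuc   = 180.0   # s, time when the deuterium bottleneck breaks (assumption)

# Equilibrium neutron-to-proton ratio at freeze-out
np_freeze = math.exp(-delta_m / T_f)

# Free neutron decay between freeze-out and the onset of nucleosynthesis
n = np_freeze * math.exp(-t_nuc / tau_n)
p = 1.0 + np_freeze * (1.0 - math.exp(-t_nuc / tau_n))
np_nuc = n / p

# If essentially all remaining neutrons end up in He-4:
Y_p = 2.0 * np_nuc / (1.0 + np_nuc)
print(f"n/p at freeze-out ~ {np_freeze:.2f}, at nucleosynthesis ~ {np_nuc:.2f}")
print(f"Y_p ~ {Y_p:.2f}")   # ~0.25, the 25% quoted above
```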

To understand nucleosynthesis well there are ten nuclear reactions which are important. Most of these have small uncertainties. Keith showed a plot of the helium mass fraction as a function of \eta_{10}, which indicates that there is a bit of scatter in the data, but overall the results are consistent. Uncertainties are small: of the order of 5% to 10% for the D/H and He-3/H abundance ratios, while it is still about 20% for Li-7/H.

Light elements are observed in different places in the universe. He-4 is seen in extragalactic H-II regions and small irregular galaxies, while Li-7 is observed in the atmospheres of dwarf halo stars: this is a small, 10^{-10} abundance. Deuterium abundance data comes instead from quasar absorption systems, but also from local measurements in meteorites. He-3 is also found in meteorites, but there is no site from which to get a primordial abundance of this element, and you need a model for its evolution.

D/H is all primordial, so every deuterium nucleus comes from the big bang. It is observed in Jupiter, in the interstellar medium, and in meteorites. The best observations come from quasar absorption systems. There is only a handful of good ones, though.

Keith then showed a graph of the D/H versus Si/H abundance ratios. The Si/H quantity is also called “metallicity”, and the graph is usually shown with logarithmic axes whose units are multiples of solar abundances. The two are supposed to be independent. On some scale there should be a trend. Dust clouds in the line of sight of quasars might cause a bias: deuterium absorbs light at slightly shifted wavelengths with respect to hydrogen, because of its different nuclear mass. The BBN prediction is 10^5 D/H = 2.74^{+0.26}_{-0.16}, while it is observed that on average 10^5 D/H = 2.82 \pm 0.21. This is in great agreement with WMAP results.

He-4 is also primordial. It is measured in low-metallicity extragalactic H-II regions, together with O/H and N/H, the relative oxygen and nitrogen abundances. One plots the He-4 abundance as a function of the O/H ratio, and then does a regression towards zero O/H: oxygen is not primordial, so you have to extrapolate the He-4 abundance to zero metallicity to obtain its primordial value. One thus gets the value 0.2421 \pm 0.0021. This ratio has a tiny error. It looks great: better than it should! Indeed, understanding the systematics is not easy. What was done to extract the He-4 abundance was to assume the intensity and equal width for the hydrogen and helium lines, then determine the hydrogen reddening and its underlying absorption. One needs to make corrections for those effects.
There are six helium lines that can be used to get the abundance, but one also needs to know the electron density, the temperature, and the underlying helium absorption. In the end one reduces the data and arrives at the value 0.2495 with a total error of 0.0092. The He-4 prediction from WMAP is instead 0.2494 \pm 0.0005.
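The extrapolation to zero metallicity described above is in essence a weighted linear regression; the sketch below illustrates the procedure with made-up data points (hypothetical, for illustration only), reading the primordial value off the intercept:

```python
# Illustration of the regression to zero O/H used to extract the primordial
# He-4 mass fraction. The data points below are hypothetical, for illustration
# only; real analyses use measured H-II region abundances with full systematics.

import numpy as np

OH = np.array([ 3.0,  5.0,  7.0,  9.0, 12.0]) * 1e-5    # O/H ratios (hypothetical)
Y  = np.array([0.2465, 0.2478, 0.2490, 0.2505, 0.2520])  # He mass fractions (hypothetical)
sY = np.full_like(Y, 0.003)                              # assumed uncertainties

# Weighted least-squares fit Y = Y_p + slope * (O/H); the intercept Y_p is the
# primordial helium abundance.
w = 1.0 / sY**2
A = np.vstack([np.ones_like(OH), OH]).T
coeff, *_ = np.linalg.lstsq(A * w[:, None]**0.5, Y * w**0.5, rcond=None)
Y_p, slope = coeff
print(f"extrapolated primordial Y_p ~ {Y_p:.4f}")
```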

Lithium, instead, is problematic. It is observed in dwarf stars with low metallicity. The iron abundance relative to the solar one, Fe/H, can be put on the x axis, and the lithium abundance on the y axis. Old Li/H determinations used to sit at 1.2 \times 10^{-10}, but a lot of measurements were carried out over the years, and they were typically all of the order of one to two, in units of 10^{-10}. As metallicity increases, time increases, so one plots Li/H vs Fe/H to see the evolution of the former, and at low metallicity one measures its primordial value.

Among the possible sources for the discrepancy observed, one is stellar depletion. One sees a lack of dispersion in the data: the dispersion is consistent with the observational error, which is quite small. It is very unlikely that all stars destroy lithium consistently within a factor of 3. You can model different stars, but there are a hundred measurements. One can also play with nuclear rates, but it is hard to do. The BBN predicted value is 4 \times 10^{-10}, while observations point to a smaller value, from one to two units of 10^{-10}.

One must note at this point that particles in the universe with lifetimes of the order of 1000 seconds can decrease the lithium-7 abundance, thus bringing it closer to the observed value, while they increase the Li-6 abundance. If the next-to-lightest supersymmetric particle were a stau decaying into a gravitino, the stau would form a bound state with helium-4, and change many things. From these studies one obtains “preferred” values of the SUSY parameter space, usually plotted in the gaugino versus scalar mass plane. High gaugino masses, around 2-3 TeV, are favored in this scenario. Gluino masses are of about 9 TeV for a 3 TeV gaugino mass. Scalar masses are not so heavy, but it would admittedly be very hard for the LHC to see a signal of these particles if nature had chosen this point of parameter space.

Going into more exotic alternatives, one may mention that a varying electromagnetic fine structure constant \alpha would also upset the balance for the determination of the freeze-out and the He-4 abundance. One can try varying all Yukawa couplings, including a dependence of \Lambda_{QCD} on \alpha, and consider the effects on the neutron to proton mass ratio and lifetime, and on the deuterium binding energy. Those variations can be tracked with variations of the fine structure constant.

Since helium has a large uncertainty, you can see how it varies with \Delta h/h. A positive variation brings Li-7 into agreement with observations: -1.8 \times 10^{-5} < \Delta h/h < 2.1 \times 10^{-5}. It needs to be pointed out that this value is not in agreement with the measured variation of the fine structure constant from quasars. These are different times of course, so you could still hypothesize an oscillating variation of \alpha with time. You can make models with a varying \alpha that has a positive variation at BBN times and a negative one at quasar times.

In conclusion, deuterium and helium abundances are in very good agreement with observed values, although there are issues to be resolved there too. With lithium there are two problems: the BBN prediction for Li-7 is high compared to observations, and the one for Li-6 is low (but it has a large uncertainty).

Liam Mc Allister: Inflation in String Theory May 23, 2008

Posted by dorigo in cosmology, news, physics.

Here we go with another report from PPC 2008. This one is on the talk given by Liam Mc Allister in yesterday’s afternoon session. In this case, I feel obliged to warn that my utter ignorance of the subject discussed makes it quite probable that my notes contain nonsensical statements. I apologize in advance, and hope that what I manage to put together is still of some use to you, dear reader.

The main idea discussed in Liam’s talk is the following: if we detect primordial tensor perturbations in the cosmic microwave background (CMB) we will know that the inflaton -the scalar particle responsible for the inflation epoch- moved more than a Planck distance in field space. Understanding such a system requires confronting true quantum gravity questions. String theory provides a tool to study this.

Inflation predicts scalar fluctuations in the CMB temperature. These evolve to create approximately scale-invariant fluctuations, which are also approximately gaussian. The goal we set for ourselves is to use cosmological observations to probe physics at the highest energy scales.

The scalar field \phi has a potential which drives acceleration. Acceleration is prolonged if V(\phi) is rather flat. How reasonable is that picture ? This is not a microscopic model. What is \phi ? The simplest inflation models often invoke smooth potentials over field ranges larger than the Planck mass. In an effective field theory with a cutoff \Lambda one writes the potential as a series in powers of the ratio \phi/\Lambda. Flatness is then imposed over distances \Delta \phi > \Lambda. But \Lambda must be smaller than the Planck mass, except in a theory of quantum gravity.
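To make the flatness requirement concrete, here is a small sketch (my own, not from the talk) of the standard slow-roll parameters for a simple quadratic potential V(\phi) = m^2 \phi^2 / 2; prolonged acceleration needs them to be small, which indeed pushes \phi above the Planck mass:

```python
# Slow-roll parameters for a quadratic inflaton potential V = (1/2) m^2 phi^2,
# in units of the reduced Planck mass M_p = 1. A sketch of the standard
# formulas, not a result from the talk.

def slow_roll(phi):
    # V'/V = 2/phi and V''/V = 2/phi^2 for the quadratic potential
    eps = 0.5 * (2.0 / phi) ** 2        # epsilon = (1/2) (V'/V)^2
    eta = 2.0 / phi ** 2                # eta = V''/V
    n_s = 1.0 - 6.0 * eps + 2.0 * eta   # scalar spectral index
    r   = 16.0 * eps                    # tensor-to-scalar ratio
    return eps, eta, n_s, r

for phi in (1.0, 5.0, 15.0):            # field value in Planck units
    eps, eta, n_s, r = slow_roll(phi)
    print(f"phi = {phi:5.1f} M_p: eps = {eps:.4f}, n_s = {n_s:.3f}, r = {r:.3f}")
# Only for phi well above M_p is eps small -- the super-planckian field
# range that makes the effective-field-theory treatment delicate.
```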

So one needs to assume something about quantum gravity to write a potential. It is too easy to write an inflation model, so it is not constrained enough to be predictive. We need to move to some more constrained scenario.

Allowing an arbitrary metric on the kinetic term, and an arbitrary number of fields in the lagrangian, the potential is very model-dependent. The kinetic term has higher derivative terms. One can write the kinetic term of the scalar fields with a metric tensor G. G is the metric on some manifold, and can well depend on the fields themselves. An important notion is that of the field range.

Liam noted that the prospects for excitement in theory and experiments are coupled. If the parameter n_s is smaller than 1, there are no tensors and no non-gaussianity, and in that case we may never get more clues about the inflaton sector than we have right now. We will have to be lucky, but the good thing is that if we are, we are both ways. Observationally non-minimal scenarios are often theoretically non-minimal: detectable tensors require a large field range, and this requires a high-energy input. If anything goes well it will do so both experimentally and theoretically.

String theory lives in 10 dimensions. To connect string theory to 4D reality, we compactify the 6 additional dimensions. The additional dimensions are small, otherwise we would not see a Newtonian law of gravity, since gravity would propagate too much away from our brane.

Moduli correspond to massless scalar fields in 4 dimensions: size and shape moduli of the Calabi-Yau manifold. Light scalars with gravitational-strength couplings absorb energy during inflation. They can spoil the pattern of big bang nucleosynthesis (BBN) and overclose the universe. The solution is therefore that sufficiently massive fields decay before BBN, so they are harmless for it (however, if they decay to gravitinos they may still be harmful).

The main technical extension is D-branes, introduced by Polchinski in 1995. If you take a D-brane and wrap it in the compact space, it takes energy, and that creates a potential for the moduli: the tension of the D-brane makes distorting the space cost energy, which makes the space rigid.

Any light scalars that do not couple to the SM are called moduli. Warped D-brane inflation implies warped throats: a Calabi-Yau space is distorted to make a throat. This is a Randall-Sundrum region; it is the way string theory realizes it. A D3-brane and an anti-D3-brane attract each other.

The tensor-to-scalar ratio is large only if the field moves over planckian distances, \Delta \phi/M_p > 1, where \Delta \phi is at most the diameter of the field space. It is ultraviolet-sensitive, but not too much so. In our framework, observable tensors in the CMB mean that there has been a trans-planckian field variation.

Can we compute (\Delta \phi /M_p)_{max} in a model of string inflation ? Liam says we can. Planckian distances can be computed in string theory using the geometry. The field \phi is the position in the throat, so \Delta \phi is the length of the throat, and the question is reduced to a problem in geometry. The field range is computed to be (\Delta \phi/M_p)^2 < 4/N, where N is the number of colors in the Yang-Mills theory associated to the throat region. N is at least a few hundred! So the parameter r_{CMB} is small with respect to the threshold for detection in the next decade, since r_{CMB}/0.009 < 4/N.
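A quick numerical look at that bound, taking the relations (\Delta \phi/M_p)^2 < 4/N and r_{CMB}/0.009 < 4/N quoted above at face value:

```python
# The geometric bound quoted above: (Delta phi / M_p)^2 < 4/N implies
# r_CMB < 0.009 * 4/N. A quick look at what that means for plausible N.

for N in (100, 300, 1000):
    dphi_max = (4.0 / N) ** 0.5        # maximal field range in Planck units
    r_max    = 0.009 * 4.0 / N         # corresponding bound on r_CMB
    print(f"N = {N:5d}: Delta phi < {dphi_max:.2f} M_p, r_CMB < {r_max:.1e}")
# For N of a few hundred, r is far below the ~0.01 sensitivity expected from
# CMB experiments in the next decade, as stated in the text.
```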

N has to be large for us to be able to use supergravity. You can conceive a configuration with N not large, but then we cannot compute it; it is not in the regime of honest physics, in that case. There are boundaries in the space of string parameters, so we are constraining ourselves to a region where we can make computations. It would be very interesting to find a string theory model that gives a large value of r.

Liam’s conclusions were that inflation in string theory is developing rapidly and believable predictions are starting to become available. In D-brane inflation, the computation of field range in Planck units shows that detectable tensors are virtually impossible.

Simona Murgia: Dark Matter searches with GLAST May 23, 2008

Posted by dorigo in astronomy, cosmology, physics, science.

Now linked by Peter Woit’s blog with appreciative words, I cannot escape my obligation to continue blogging on the talks I have been listening to at PPC2008. So please find below some notes from Simona’s talk on the GLAST mission and its relevance for dark matter (DM) searches.

GLAST will observe gamma rays in the energy range 20 MeV to 300 GeV, with a better flux sensitivity than earlier experiments such as EGRET and AGILE. It is a 5-year mission, with a final goal of 10 years. It will orbit at 565 km of altitude, with a 25.6° inclination with respect to the terrestrial equator. It carries a large area telescope (LAT) for pair conversions, which features a precision silicon-strip tracker with 18 xy tracking planes interleaved with tungsten layers. The tracker is followed by a small calorimeter with CsI crystals, and it is surrounded by an anti-coincidence detector made of 89 plastic scintillator tiles. The segmented design avoids self-veto problems. The total payload of GLAST is 2000 kg.

GLAST has four times the field of view of EGRET, and it covers the whole sky in two orbits (3 hours). The broad energy range has never been explored at this sensitivity. The energy resolution is about 10%, and the point-spread function is 7.2 arcminutes above 10 GeV. The sensitivity is more than 30 times better than previous searches below 10 GeV, and about 100 times better at higher energy.

EGRET cataloged 271 sources of gamma rays; GLAST expects to catalog thousands: active galactic nuclei, gamma ray bursts, supernova remnants, pulsars, galaxies, clusters, X-ray binaries. There is very little gamma ray attenuation below 10 GeV, so GLAST can probe cosmological distances.

Simona asked herself, what is the nature of DM? There are several models out there. GLAST will investigate the existence of weakly interacting massive particles (WIMPs) through two-photon annihilation. Not an easy task, for there are large uncertainties in the signal and in the background. The detection of a DM signal by GLAST would be complementary to other searches.

Gamma rays may come from neutral pions emitted in \chi \chi annihilation; these give a continuum spectrum. Direct annihilation to two photons is instead expected to have a branching ratio of 10^{-3} or less, but it would provide a line in the spectrum, a spectacular signal.

Other models provide an even more distinctive gamma spectrum. With the gravitino as the lightest supersymmetric particle, it would have a very long lifetime, and it could decay into a photon and a neutrino: this yields an enhanced line, and then a continuum spectrum at lower energy.

Instrumental backgrounds mostly come from charged particles (protons, electrons, positrons), but also from neutrons and earth albedo photons. These dominate the flux from cosmic photons, but less than one in a hundred thousand survives the photon selection. Above a few GeV, the background contamination is required to be less than 10% of the isotropic photon flux measured by EGRET.

Searches for WIMP annihilations can be done in the galactic center or, complementarily, in the galactic halo. In the latter case there is no source crowding, but there are significant uncertainties in the astrophysical backgrounds. The 3-sigma sensitivity on <\sigma v> as a function of the WIMP mass goes below 10^{-26} cm^3 s^{-1} with 5 years of exposure.
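For context on where the uncertainties enter, here is a schematic sketch of the usual annihilation-flux formula, \Phi = <\sigma v>/(8 \pi m_\chi^2) N_\gamma J; the J-factor and photon yield below are illustrative assumptions of mine, not numbers from the talk:

```python
# Schematic WIMP annihilation flux: Phi = <sigma v> / (8 pi m_chi^2) * N_gamma * J,
# where J = integral of rho_DM^2 along the line of sight and over the solid angle.
# The J-factor and photon yield below are illustrative assumptions, not numbers
# from the talk; they carry most of the astrophysical uncertainty.

import math

sigma_v = 3e-26      # cm^3/s, canonical thermal annihilation cross section
m_chi   = 100.0      # GeV, WIMP mass
N_gamma = 20.0       # photons per annihilation above threshold (assumption)
J       = 1e22       # GeV^2 cm^-5, line-of-sight integral of rho^2 (assumption)

phi = sigma_v / (8.0 * math.pi * m_chi**2) * N_gamma * J
print(f"expected gamma flux ~ {phi:.1e} photons cm^-2 s^-1")
# Changing the halo profile (hence J) by an order of magnitude changes the
# flux by the same factor -- the signal/background uncertainty mentioned above.
```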

Simona then mentioned that one can also search for DM satellites: simulations predict a substructure of DM in the galactic halo. The predicted annihilation spectrum is different from a power law, and the emission is expected to be constant in time. Considering a 100 GeV WIMP with \sigma v = 2.3 \times 10^{-26} cm^3 s^{-1}, annihilating into a b-quark pair, with extragalactic and diffuse galactic backgrounds, such a satellite is generically observable at the 5-sigma level in one year. To search for these, you first scan the sky, and then when you have found something you can concentrate on observing it.

Also, dwarf galaxies can be studied. The mass-to-light ratio there is high, and it is thus a promising place to look for an annihilation signal. The 3-sigma sensitivity of GLAST with 5 years of data goes down to 10^{-26} and below for WIMP masses in the tens of GeV range.

To search for lines in the spectrum, you look in an annulus between 20 and 35 degrees in galactic latitude, removing a 15° band around the galactic disk. It is a very distinctive spectral signature. A better sensitivity is achieved if the location of the line is known beforehand (if discovered by the LHC, for instance). A 200 GeV line can be seen at 5 sigma in 5 years.

GLAST can also look for cosmological WIMPs at all redshifts. There is a spectral distortion caused by the integration over redshift. The reach of GLAST is a bit higher here, 10^{-25}. One can do better if there is a high concentration of DM in substructures.

Three space telescopes May 22, 2008

Posted by dorigo in astronomy, cosmology, news, physics, science.

This morning at PPC2008 the audience heard three different talks on proposed space missions to measure dark energy through supernova surveys at high redshift and weak lensing observations. I am going to give some highlights of the presentations below.

The first presentation was by Daniel Holz on “The SuperNova Acceleration Probe“, SNAP.

SNAP is all about dark energy (DE). Supernovae show there is acceleration in the universe. However, to measure precisely the amount of DE in the universe one is required to determine the distance versus redshift relation of type Ia supernovae. These are collapsing carbon-oxygen white dwarfs at the Chandrasekhar limit, i.e. with exactly the right amount of mass: they make a very well understood explosion when they die, and they can thus be used as standard candles to measure distance, while their redshift tells us how fast they are receding from us. The precision required to extract information on DE is at the level of a few percent, which is very difficult to achieve, so one needs great control of systematics. One also wants to distinguish DE from modified gravity models, which accommodate some of the observed features of the universe by hypothesizing that the strength of gravity is not exactly the same as one goes to very large distance scales.
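Since the whole game is distance versus redshift, here is a minimal sketch (illustrative parameters, not SNAP’s analysis) of how a cosmological model is turned into the predicted distance modulus of a standard candle:

```python
# Distance modulus of a standard candle in a flat universe with matter and a
# cosmological-constant-like dark energy. A sketch of the measurement principle,
# with illustrative parameter values.

import math
from scipy.integrate import quad

c  = 299792.458          # km/s
H0 = 70.0                # km/s/Mpc (assumption)

def distance_modulus(z, omega_m=0.27, omega_de=0.73):
    E = lambda zp: math.sqrt(omega_m * (1 + zp)**3 + omega_de)
    comoving, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)   # dimensionless integral
    d_L = (1 + z) * (c / H0) * comoving                  # luminosity distance, Mpc
    return 5.0 * math.log10(d_L) + 25.0                  # mu = 5 log10(d_L / 10 pc)

for z in (0.5, 1.0, 1.7):
    mu_de = distance_modulus(z)
    mu_eds = distance_modulus(z, omega_m=1.0, omega_de=0.0)
    print(f"z = {z}: mu = {mu_de:.2f}  (Einstein-de Sitter: {mu_eds:.2f})")
# The tenths-of-a-magnitude differences between models at these redshifts are
# what the surveys measure; percent-level control of systematics is the hard part.
```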

Separating out the different models is not easy. Supernovae allow us to determine the integrated expansion -how much the universe has accelerated since its origin- but the growth of structure in the universe is not measured there. The way SNAP is approaching this is by combining SN measurements with weak lensing. Weak lensing is the deviation of photons from a distant source when they pass through large amounts of mass, like a cluster of nearby galaxies.

SNAP is a space telescope (see picture). Its strong point is the width of the field of view it can image at a time: 0.7 square degrees of sky, much larger than that of the Hubble space telescope. Also, it provides lots of color information: it goes into the infrared, and it has 9 filters to get spectral information from the observed objects.

SNAP aims at obtaining 2000 type Ia supernovae at redshift z<1.7, and at doing a weak lensing survey over 4000 square degrees. It is a 1.8m telescope, diffraction limited. The bottom line for SNAP is to measure \Omega_{DE} to 0.4%, and the parameters w_0 to 1.6% and w' to 9%.

After Holz, R. Kirshner described “Destiny, the Dark Energy Space Telescope“. Destiny is a concept similar to the previous one. Its science goal is to determine the expansion history of the universe to 1% accuracy. It is a 1.65m telescope, slightly smaller than SNAP (1.8m). The program has to cost 600 million dollars or less in total. It is receiving a green light from its funding agencies, NASA and DOE. It will operate from the same location as SNAP -a Lagrangian point called “L2″, which is not affected much by gravitational effects from the earth and the moon. Kirshner made me laugh out loud when he added that although the location is the same as that of SNAP, it will not be too crowded a place, since SNAP won’t be there.

Kirshner explained that the project is very conservative. They do not need low redshift SN measurements from space. From space one can work in the near IR, something that can only be done there. It complements ground-based telescopes well. A very distinctive difference with SNAP is that Destiny is an imaging spectrograph: it takes a spectrum of every object in the field every time, whereas for SNAP you need to choose which object to take a spectrum of. The resolution in wavelength is \lambda/\Delta \lambda = 75, equivalent to having 75 filters of spectral energy distribution.

The Destiny philosophy is to keep it simple, stupid. It is a satellite for which every piece has flown in space previously: nothing they do not already know how to do. It uses the minimal instrument required to do the job. And it does in space what must be done in space, without taking the job away from ground-based observations. Also, a point which was emphasized is that there will be no time-critical operations: it is a highly automated, fixed program. No need for 24/7 crews on earth to decide what star to take spectra of.

DE measurements require measuring both the acceleration and deceleration epochs, below and above z of about 0.8. On the ground the magnitudes accessible for SN are lower than 24; from space one can go fainter and reach z>0.8. To take a picture of the time derivative of the equation of state, you need to measure over the jerk region in the distance-to-redshift diagram, where the curve changes from acceleration to deceleration, at z of about 1. This you do from space.

Kirshner explained that the sky is dark at night in the optical wavelengths, but in the IR it is ten to a hundred times brighter; in space you go down by a factor of 100 in brightness. There is also absorption in the atmosphere, mostly from water vapor. From the ground you can work only in small intervals of wavelength, which in the infrared are bounded by the water absorption features at 1.4 micrometers and 1.85 micrometers. In space you can look at the entire range.

Finally, Daniel Eisenstein discussed “The Advanced Dark Energy Physics Telescope” (ADEPT).

He started by explaining that baryon acoustic oscillations are a standard ruler we can use to measure cosmological distances. We are seeing sound waves coming from the early universe. Recombination happens at z=1000, about 400,000 years after the big bang. After recombination, the universe becomes neutral, and the phase of oscillation at recombination time affects the late-time amplitude. Before recombination, the universe is ionized; photons provide enormous pressure and a restoring force, and perturbations oscillate and propagate as acoustic waves.

An overdensity is an overpressure that launches a spherical sound wave. This travels at 57% of the speed of light. It travels out until, at the time of recombination, the wave stalls and deposits the gas perturbation at 150 Mpc. The overdensity in the shell (gas) and the one in the center both seed the formation of galaxies. So the aim is to look for a bump of perturbation at distance scales of 150 Megaparsecs (Mpc). The acoustic signature is carried by pairs of galaxies separated by 150 Mpc. Nonlinearities push galaxies around by 3 to 10 Mpc; this broadens the peak, making it hard to measure the scale. Some of the broadening can be recovered by measuring the large scale structure which acted to broaden the peak: it is a perturbation which can be corrected for.
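A minimal sketch of where the 150 Mpc figure comes from: the comoving sound horizon at the drag epoch, r_s = \int c_s(z)/H(z) dz, with the sound speed reduced by baryon loading. The densities below are standard illustrative values, not numbers from the talk:

```python
# Comoving sound horizon at the drag epoch, r_s = int_{z_d}^inf c_s(z)/H(z) dz,
# with c_s = c / sqrt(3 (1 + R)) and R = 3 rho_b / (4 rho_gamma).
# Standard densities assumed; a sketch of the origin of the ~150 Mpc scale.

import math
from scipy.integrate import quad

c          = 299792.458    # km/s
h          = 0.70
H0         = 100.0 * h     # km/s/Mpc
omega_m    = 0.27
omega_b_h2 = 0.0227
omega_g_h2 = 2.47e-5       # photons
omega_r_h2 = 4.15e-5       # photons + 3 massless neutrinos
z_drag     = 1060.0        # approximate drag epoch

def H(z):
    return H0 * math.sqrt(omega_m * (1 + z)**3 + (omega_r_h2 / h**2) * (1 + z)**4)

def c_s(z):
    R = 0.75 * (omega_b_h2 / omega_g_h2) / (1 + z)   # baryon loading
    return c / math.sqrt(3.0 * (1.0 + R))

r_s, _ = quad(lambda z: c_s(z) / H(z), z_drag, 1e7, limit=200)
print(f"sound horizon r_s ~ {r_s:.0f} Mpc")   # ~150 Mpc, the BAO standard ruler
```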

The most serious concern is that the peak could shift. This is a small effect, less than 1%, since most of the motion is random. One can run large volumes of universe in cosmological n-body simulations, and find shifts of 0.25% to 0.5%. These shifts too can be predicted and removed.

To measure the peak of baryon acoustic oscillations there is one program, BOSS, the next phase of SDSS, which will run from 2008 to 2014, providing a definitive study of low-redshift (z<0.7) acoustic oscillations. ADEPT, instead, will survey three fourths of the sky at redshifts 1<z<2 from a 1.3 meter space telescope, with slitless IR spectroscopy of the H-alpha line. 100 million redshifts will be taken. ADEPT is designed for maximum synergy with ground-based dark energy programs -a point Kirshner had also made for Destiny. ADEPT will measure the angular diameter distance from BAO and the expansion rate too. It will be a huge galaxy redshift survey.

By hearing these three talks I was under the impression that cosmologists have become a bit too careful with the design of their future endeavours. If we use technology that is old now to design experiments that will fly in five years, are we not going against the well-working paradigm of advancing technology through the push of needs from new, advanced experiments ? I do understand that space telescopes are not particle detectors, and if something breaks they cannot be taken apart and serviced at will; however, it is a sad thing to see so little will to be bold. A sign of the funding-poor times ?

Vernon Barger: perspectives on neutrino physics May 22, 2008

Posted by dorigo in cosmology, news, physics, science.

Yesterday morning Barger gave the first talk at PPC 2008, discussing the status and the perspectives of research in physics and cosmology with neutrinos. I offer my notes of his talk below.

Neutrinos mix among each other and have mass. There is a matrix connecting flavor and mass eigenstates, and the matrix is parametrized by three angles and a phase. To these can be added two Majorana phases relevant for neutrinoless double beta decay: \phi_2 and \phi_3. Practically speaking these are unmeasurable.

What do we know about these parameters ? We are at the 10th anniversary of the groundbreaking discovery by SuperKamiokande. This was then confirmed by other neutrino experiments: MACRO, K2K, MINOS. The new result by MINOS has a 6.5 sigma significance in the low energy region. This allows one to measure the mass difference precisely. It is found that the atmospheric splitting is (\Delta m^2)_{32} = 2.4 \times 10^{-3} eV^2 at \sin^2 2\theta_{23} = 1.00. The mixing angle is maximal, but we do not really know because there is a substantial error on it.
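For reference, the two-flavor survival probability these experiments fit is P = 1 - \sin^2 2\theta \sin^2(1.27 \Delta m^2 L/E); a small sketch with the numbers quoted above and an illustrative MINOS-like baseline (my assumption):

```python
# Two-flavor muon-neutrino survival probability,
# P(nu_mu -> nu_mu) = 1 - sin^2(2 theta) * sin^2(1.27 * dm2 * L / E),
# with dm2 in eV^2, L in km, E in GeV. The splitting and maximal mixing follow
# the text; the 735 km baseline is an illustrative MINOS-like assumption.

import math

dm2      = 2.4e-3    # eV^2, atmospheric splitting quoted above
sin2_2th = 1.0       # maximal mixing, sin^2(2 theta_23) = 1.00 as quoted above
L        = 735.0     # km, illustrative baseline

def survival(E_GeV):
    return 1.0 - sin2_2th * math.sin(1.27 * dm2 * L / E_GeV) ** 2

for E in (1.0, 2.0, 3.0, 5.0):
    print(f"E = {E:.0f} GeV: P(survival) = {survival(E):.2f}")
# The dip at low energy (around 1.5 GeV for this L and dm2) is what pins down
# the mass splitting.
```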

Solar neutrino oscillations were a mystery for years. The flux of solar neutrinos was calculated by Bahcall, and there was a deficit. The deficit has an energy structure, as measured by the Gallium, Chlorine, SuperK and SNO experiments by looking at neutrinos coming from different reactions -because of the different energy thresholds of the detectors: pp interactions, Beryllium-7, and Boron-8 neutrinos. The interpretation of the data, which evolved over time, is now that the solar mixing angle is quite large, and what happens is that the high energy neutrinos sent here are created in the center of the sun, but they make an adiabatic transition in matter to a state \nu_2 which travels to the earth. This happens to the matter-dominated higher energy neutrinos; the vacuum-dominated ones at lower energy have a different phenomenology.

There is a new result from Borexino: they measured neutrinos from the Beryllium line, and they reported a result consistent with the others. Borexino is going to measure the deficit with 2% accuracy, and if KamLAND achieves enough purity it can also go down to about 5% accuracy.

KamLAND data provide a beautiful confirmation of the solution of the solar neutrino problem: the solar parameters are measured precisely in the \Delta m^2_{21} vs \tan^2 \theta_{12} plane. The survival probability as a function of distance over neutrino energy shows a beautiful oscillation. The result only assumes CPT invariance. The angle \theta_{12} is 34.4° with 1° accuracy, and (\Delta m^2)_{12} = 7.6 \times 10^{-5} eV^2.

There is one remaining angle to determine, \theta_{13}. In short baseline reactor experiments the electron neutrino disappearance probability is expected to vanish unless \theta_{13} is non-zero. Chooz set a limit at \theta_{13} < 11.5 degrees. There are experiments with sensitivities on \sin^2 \theta_{13} of 0.02 (Double Chooz, Daya Bay, RENO). The angle is crucial because it is a gateway to CP violation: if the angle is zero, CP violation is not accessible in this sector.

What do we expect theoretically on \theta_{13} ? There are a number of models to interpret the data. Predictions cluster around 0.01. Half of the models will be tested with the predicted accuracies of planned experiments.

There is a model called tri-bimaximal mixing: a composition analogous to the quark model basis of neutral pions, eta and eta’ mesons. An intriguing matrix: it could be a new symmetry of nature, possibly softly broken with a slightly non-zero value of the angle \theta_x. Or, it could well be an accident.

So, we need to find out what \theta_{13} is. It is zero in tri-bimaximal mixing. We then need to measure the mass hierarchy: is it normal (the third state much heavier than the other two) or inverted (the lightest much lighter than the others) ? Also, is CP violated ?

Neutrinoless double-beta decay can tell us if the neutrinos are Majorana particles. In the inverted hierarchy, you measure the average mass versus the sum. There is a lot of experimental activity going on here.

The cosmic microwave background has put a limit on the sum of neutrino masses at 0.6 eV. By doing 21cm tomography one could measure neutrino masses with a precision of 0.007 eV. If this is realizable, it could individually measure the masses of these fascinating particles.
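To see how that 0.6 eV bound compares with what oscillations require, here is a rough sketch (my own, using the splittings quoted in this post) of the minimal sum of masses in each hierarchy:

```python
# Minimal sum of neutrino masses implied by the oscillation splittings quoted
# in the text, compared with the ~0.6 eV cosmological bound. A rough sketch:
# the lightest mass is set to zero in each hierarchy.

import math

dm2_sol = 7.6e-5   # eV^2, solar splitting (KamLAND numbers above)
dm2_atm = 2.4e-3   # eV^2, atmospheric splitting (MINOS numbers above)

# Normal hierarchy: m1 ~ 0, m2 = sqrt(dm2_sol), m3 = sqrt(dm2_atm)
sum_normal = math.sqrt(dm2_sol) + math.sqrt(dm2_atm)

# Inverted hierarchy: m3 ~ 0, m1 ~ m2 ~ sqrt(dm2_atm)
sum_inverted = 2.0 * math.sqrt(dm2_atm)

print(f"minimal sum, normal hierarchy   ~ {sum_normal:.3f} eV")
print(f"minimal sum, inverted hierarchy ~ {sum_inverted:.3f} eV")
print("cosmological bound quoted above : 0.6 eV")
# A future 0.007 eV sensitivity (21cm tomography) would resolve even the
# minimal normal-hierarchy value.
```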

Barger then mentioned the idea of mapping the universe with neutrinos: the idea is that active galactic nuclei (AGN) produce hadronic interactions with pions decaying to neutrinos, and there is a whole range of experiments looking at this. You could study the neutrinos coming from AGN and their flavor composition. Another late-breaking development is that Auger has shown a correlation of ultra high-energy cosmic rays with AGNs in the sky: the cosmic rays seem to arrive directly from the nuclei of active galaxies. Auger found a strong correlation with sources within about 100 Mpc, falling off at larger distances. Cosmic rays are already probing the AGN, and this is very good news for neutrino experiments.

Then he discussed neutrinos from the sun: annihilation of weakly interacting massive particles (WIMPs), dark matter candidates, can give neutrinos from WW, ZZ, ttbar production. The idea is that the sun captured these particles gravitationally during its history, and the particles annihilate in the center of the sun, letting neutrinos escape with high energy. The IceCube discovery potential is high if the spin-dependent cross section for WIMP interactions in the sun is sufficiently large.

In conclusion, we have a rich panorama of experiments that all make use of neutrinos as probes of exotic phenomena, as well as of processes which we have to measure better to gain understanding of fundamental physics and to gather information about the universe.

Denny Marfatia’s talk on Neutrinos and Dark Energy May 22, 2008

Posted by dorigo in astronomy, cosmology, news, physics, science.

Denny spoke yesterday afternoon at PPC 2008. Below I summarize as well as I can those parts of his talk that were below a certain crypticity threshold (above which I cannot even take meaningful notes, it appears).

He started with a well-prepared joke: “Since at this conference the tendency is to ask questions early: any questions ?“. This caused hilarity in the audience, but indeed, as I noted in another post, the setting is relaxed and informal, and the audience freely interrupts the speakers. Nobody seemed to complain so far…

Denny started by stating some observational facts: the dark energy (DE) density is about (2.4 \times 10^{-3} eV)^4, and the neutrino mass splittings \sqrt{\Delta m^2} are of the same order of magnitude. This coincidence of scales might imply that neutrinos coupled to a light scalar could explain why \Omega_{DE} has a value similar to \Omega_M, i.e. why we observe a rather similar amount of dark energy and matter in the universe.
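A quick numerical look at the coincidence, assuming h of about 0.7 and \Omega_{DE} of about 0.73 (illustrative values, not from the talk):

```python
# The scale coincidence mentioned above: rho_DE^(1/4) vs the neutrino mass
# splittings. Assumes h ~ 0.7 and Omega_DE ~ 0.73 (illustrative values).

import math

h        = 0.70
omega_de = 0.73

# Critical density in natural units: rho_crit ~ 8.1e-11 h^2 eV^4
rho_crit_eV4 = 8.1e-11 * h**2
rho_de_eV4   = omega_de * rho_crit_eV4

de_scale = rho_de_eV4 ** 0.25        # eV
m_atm    = math.sqrt(2.4e-3)         # eV, sqrt of the atmospheric dm^2
m_sol    = math.sqrt(7.6e-5)         # eV, sqrt of the solar dm^2

print(f"rho_DE^(1/4)  ~ {de_scale*1e3:.1f} meV")
print(f"sqrt(dm2_atm) ~ {m_atm*1e3:.1f} meV")
print(f"sqrt(dm2_sol) ~ {m_sol*1e3:.1f} meV")
# All in the milli-eV ballpark -- the numerical coincidence that motivates
# coupling neutrinos to the dark-energy scalar.
```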

But he noted that there is more than just one coincidence problem. In fact the DE density and the other densities have ratios which are not far from unity: within a factor of 10 the components are equal. Why not consider these coincidences to have some fundamental origin ? Perhaps the neutrino and DE densities are related. It is easy to play with this hypothesis with neutrinos, because we understand them the least!

We can make the neutrino mass a variable quantity. Imagine a fluid, a quintessence scalar, with an effective potential m_\nu n_\nu + V(m_\nu). An ansatz is then made: the effective potential is stationary with respect to the neutrino mass.

So one makes a decreasing neutrino mass at the minimum of the potential, with a varying potential. One consequence of this model is given by the expression w = -1 + m_\nu n_\nu/V_{eff}: w can thus deviate from -1. It is like quintessence without a light scalar. Neutrino masses can be made to vary with their number density: if w is close to -1, the effective potential has to scale with the neutrino contribution. Neutrinos are then most massive in empty space, and they are lighter when they cluster. This could create an intriguing conflict between cosmological and terrestrial probes of neutrino mass. The neutrino masses vary with matter density if the scalar induces couplings to matter, so there should be new matter effects in neutrino oscillations. One could also see temporal variations in the fine structure constant.

If neutrinos are light, DE becomes a cosmological constant, w = -1, and we cannot distinguish it from other models. Also, light neutrinos do not cluster, so the local neutrino mass will be the same as the background value; and high redshift data and tritium beta decay will be consistent because neither will show evidence for neutrino mass.

So one can look at the evidence for variations in time of the fine structure constant. Measurements of transition frequencies in atomic clocks give the limit \delta \alpha/\alpha < 5 \times 10^{-15}.

The abundance ratio of Sm-149 to Sm-147 at the natural reactor in Oklo shows no variation in the last 1.7 billion years, with a limit \delta \alpha/\alpha < 10^{-7}; the constraint comes from the resonant energy for neutron capture.

Meteoritic data (at redshift z<0.5) constrain the beta decay rate of Rhenium-187 back to the time of solar system formation (4.6 Gy): \delta \alpha/ \alpha = (8 \pm 8) \times 10^{-7}.

Going back to 0.5<z<4, transition lines in quasar (QSO) spectra indicate a value \delta \alpha/ \alpha = (-0.57 \pm 0.10) \times 10^{-5}, found to be varying at the 5-sigma level! The lines have an actual splitting which is different. The result is not confirmed, and it depends on the magnesium isotopic ratio assumed: people say you are just measuring the chemical evolution of magnesium in these objects, as the three-isotope ratio might be different from what is found in our sun, and this would mess up the measurement.

Then, there is a measurement from the CMB (at z=1100) which determines \delta \alpha/\alpha<0.02 from the temperature of decoupling, which depends on the binding energy. Also primordial abundances from big-bang nucleosynthesis (at z of about 10^{10}) allow one to find \delta \alpha/\alpha<0.02. Bounds on a variation of the fine structure constant at high redshift are thus very loose.

One can therefore hypothesize a phase transition in \alpha: this was done by Anchordoqui, Barger, Goldberg, and Marfatia recently. The bottom line is to construct the model such that when the neutrino density crosses a critical value as the universe expands, \alpha will change.

The first assumption is the following: M has a unique stationary point. Additional stationary points are possible, but for nonrelativistic neutrinos with subcritical neutrino density there is only one minimum, which is fixed, and there is no evolution.

For non-relativistic neutrinos, with supercritical density, there is a window of instability.

You expect the neutrinos to cross the critical density at some redshift. The instability can be avoided if the growth-slowing effects provided by cold dark matter dominate over the acceleron-neutrino coupling. Requiring no variation of \alpha up to z=0.5, and then a step in \delta \alpha, and enforcing that the signal in quasar spectra is reproduced, one gets results which are consistent with the CMB and BBN.

A review of yesterday’s afternoon talks: non-thermal gravitino dark matter and non-standard cosmologies May 21, 2008

Posted by dorigo in cosmology, news, physics, science.

In the afternoon session at PPC2008 yesterday there were several quite interesting talks, although they were not easy for me to follow. I give a transcript of two of the presentations below, for my own record as well as for your convenience. The web site of the conference is however quite quick in putting the talk slides online, so you might want to check it if some of what is written below interests you.

Ryuichiro Kitano talked about “Non-thermal gravitino dark matter“. Please accept my apologies if you find the following transcript confusing: you are not alone. Despite my lack of understanding of some parts of it, I decided to put it online anyway, in the hope that I will have the time one day to read a couple of papers and understand some of the details discussed…

Ryuichiro started by discussing the standard scenario for SUSY dark matter, with a WIMP neutralino. This is a weakly interacting, massive, stable particle. In general, one has a mixing between the bino, the wino, and the higgsinos, and that is what we call the neutralino. In the early universe it follows a Boltzmann distribution; then there is a decoupling phase when the process inverse to production becomes negligible, so at freeze-out the number density of the neutralino scales with T^3. The evolution is governed by the Boltzmann equation \dot{n}_\chi + 3 H n_\chi = - <\sigma v> (n^2_\chi - n^2_{eq}), and the final abundance is computed by equating the annihilation and expansion rates at the time of decoupling.

In this mechanism there are some assumptions. The first is that the neutralino is the LSP: it is stable. The second assumption is that the universe is radiation-dominated at the time of decoupling. A third assumption says that there is no entropy production below T = O(100 GeV), otherwise the relative abundances would be modified. Are these assumptions reasonable ? Assumption one restricts us to gravity mediation. There is almost always a moduli problem, which is inconsistent with assumptions 2 and 3. If you take anomaly mediation instead, the LSP is a wino and it gives too small an abundance. We thus need a special circumstance for the standard mechanism to work.

The moduli/gravitino problem: in the gravity mediation scenario, there is always a singlet scalar field S which obtains a mass through SUSY breaking. S is a singlet under any symmetry, and it gives mass to the gaugino. Its potential cannot be stabilized, and it gets a mass only through SUSY breaking. Therefore, there exists a modulus field. We need to include it when considering the cosmological history, because it has implications.

During inflation, the S potential is deformed because S gets its mass only from SUSY breaking, so the initial value of the modulus will be modified. Once S-domination happens it is a cosmological disaster. If the gravitino is not the LSP, it decays with a lifetime of the order of one year, and it destroys the standard picture of big bang nucleosynthesis (BBN); if the decay is forbidden, it is S that has a lifetime of O(1 y), still a disaster. This is inconsistent with the neutralino DM scenario, or better, gravity mediation is inconsistent.

So we need some special inflation model which does not couple to the S field; a very low inflation scale such that the deformation of the S potential is small; and a lucky initial condition such that S-domination does not happen. Is there a good cosmological scenario that does not require such conditions ?

Non-thermal gravitino DM is a natural and simple solution to the problem. Gauge mediation offers the possibility. SUSY breaking needs to be fixed in the scenario; most of it has the same effective lagrangian. This implies two parameters in addition to the others: the height of the potential (describing how large the breaking is) and the curvature m^4/\Lambda^4. In this framework, the gravitino is the LSP.

In the non-thermal gravitino dark matter scenario, this mechanism can produce the DM candidate. After inflation, the S oscillation starts; we have a potential for it, with a quadratic term. The second step is decay. The S couplings to superparticles are proportional to their masses, while the S-gravitino coupling is suppressed. This gives a smaller branching ratio to gravitinos, which is good for the gravitino abundance. Also, the shorter lifetime as compared to gravity mediation is good news for BBN.

The decay of S to a bino pair must be forbidden to preserve BBN abundances. So S \to hh is the dominant decay mode if it is open. If we calculate the decay temperature, we find good matches with BBN and it is perfect for DM as far as its abundance is concerned.

There are two parameters: the height of the potential and the curvature. We have to explain the size of the gaugino mass, which fixes one of the parameters. The gravitino abundance is explained if the gravitino mass is about 1 GeV. The baryon abundance, however, has to be produced by other means.

Step three is gravitino cooling. Are the gravitinos cold ? They are produced by the decay of 100 GeV particles, so they are relativistic, but their distribution is non-thermal. They slow down by redshifting, and must be non-relativistic at the time of structure formation.

If we think of SUSY cosmology we should be careful about the consistency with the underlying model of gravity mediation. Gauge mediation provides a viable cosmology with non-thermal gravitino DM.

Next, Paolo Gondolo gave a thought-provoking talk on “Dark matter and non-standard cosmologies“. Again, I do not claim that the writeup below makes full sense - not to me, at least; maybe it does to you.

Paolo started by pointing out the motivations for his talk: they come directly from the previous talks, the problems with the gravitino and with the moduli. One might need to modify the usual cosmology before nucleosynthesis. Another motivation is more phenomenological. The standard results on neutralino DM are presented in the standard parameter space $M_0 – M_{1/2}$, and one gets a very narrow band due to the cosmological constraints on dark matter. These constraints come from primordial nucleosynthesis. They assume that neutralinos were produced thermally, decoupled at a later time and remained with a residual abundance. This might not be true, and if it isn’t, the whole parameter space might still be consistent with cosmological constraints.

[This made me frown: isn't the SUSY parameter space still large enough ? Do we really need to revitalize parts not belonging to the hypersurface allowed by WMAP and other constraints ?]

The above occurs just by changing the evolution of the universe before nucleosynthesis. By changing $\tan \beta$ you can span a wider chunk of parameter space, but that is only because you are looking at a projection: the cosmological constraints define an (n-1)-dimensional hypersurface, and one can move outside of it. But this comes at the price of more parameters. Don’t we have enough parameters already?

The cosmological density of neutralinos may differ from the usual thermal value because of non-thermal production or non-standard cosmologies. J. Barrow wrote in 1982 of massive particles as a probe of the early universe, so it is an old idea. It continued in 1990 with a paper by Kamionkowski and Turner: “Thermal relics: do we know their abundances ?”

So let us review the relic density in standard cosmology, and the effect of non-standard ones. In standard cosmology the Friedmann equation governs the evolution of the scale factor a, and the dominant dependence of \rho on a determines the expansion rate. Today we are matter-dominated, and we were radiation-dominated before, because \rho scales with different powers of the scale factor: now \rho \propto a^{-3}, but before it went as a^{-4}. Before radiation domination there was reheating, and before that, the inflationary era. At early times neutralinos are produced in e+e- and mu+mu- annihilations. Then production freezes out: here one usually says that neutralino annihilation ceases, but it really is the production which ceases. Annihilation continues at smaller rates until today, so that we can look for it, but it is production that stops. The number of neutralinos per photon is constant after freeze-out. The higher the annihilation rate, the lower the final density: there is an inverse proportionality.
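[The inverse proportionality is often quoted as the rule of thumb \Omega_\chi h^2 \approx 3 \times 10^{-27} cm^3 s^{-1} / <\sigma v>, valid for a thermal relic in standard cosmology; a two-line illustration of my own:]

# thermal-relic rule of thumb: Omega h^2 ~ 3e-27 cm^3/s divided by <sigma v>
def omega_h2(sigma_v):
    return 3e-27 / sigma_v

for sv in (1e-26, 3e-26, 1e-25):
    print(sv, "cm^3/s ->", round(omega_h2(sv), 3))
# the canonical <sigma v> ~ 3e-26 cm^3/s gives Omega h^2 ~ 0.1;
# a larger annihilation rate gives a lower final density.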

The freeze-out occurs during the radiation-dominated epoch, at T about 1/20th of the particle mass, so at a much higher temperature than in a matter-dominated universe. Freeze-out occurs before BBN, so we are making an assumption about the evolution of the universe before BBN. What can we do in non-standard scenarios ? We can decrease the density of particles by producing photons after freeze-out (entropy dilution): increasing the photons you get a lower final density. One can also increase the density of particles by creating them non-thermally, from decays. Another way is to make the universe expand faster during freeze-out, for instance in quintessence models.

The second mechanism works because a faster expansion at freeze-out leaves a higher relic density. What if instead we want to keep the standard abundance ? If we want to produce WIMPs through the thermal mechanism, we need a standard Hubble expansion rate down to T = m/23, i.e. down to freeze-out. A plot of m/T_{max} versus <\sigma v> shows that production is aborted at m/T > 23.

How can we produce entropy to decrease the density of neutralinos after freeze-out ? We add massive particles that decay or annihilate late, for example a next-to-LSP. We end up increasing the photon temperature and entropy, while the neutrino temperature is unaffected.
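[The dilution works on the conserved ratio n/s: if the late decays raise the entropy per comoving volume by a factor \Delta = S_{after}/S_{before}, any frozen-out abundance is reduced by the same factor. Trivial, but worth a line:]

# entropy dilution of a frozen-out relic: Omega -> Omega / Delta,
# with Delta the factor by which the comoving entropy grows
def diluted_omega(omega_freezeout, delta):
    return omega_freezeout / delta

print(diluted_omega(1.2, 10.0))   # a factor-10 entropy release turns 1.2 into 0.12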

We can increase the expansion rate at freeze-out by adding energy to the Universe, e.g. a scalar field, or by modifying the Friedmann equation with an extra dimension. Alternatively, one can produce more particles by decays.

In braneworld scenarios, matter is confined to the brane and gravitons propagate in the bulk. This gives extra terms in the Friedmann equation, proportional to the square of the density and to the Planck mass squared. We can get different densities. For example, in the plane of m_0 versus gravitino mass, the wino is usually not a good candidate for DM, but in Randall-Sundrum type II scenarios it is. We can resuscitate SUSY models that people think are ruled out by cosmology.
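[If I understand correctly, the extra term is the standard Randall-Sundrum type II modification of the Friedmann equation, usually written (neglecting a dark-radiation term and the 4D cosmological constant) as

H^2 = \frac{8 \pi G}{3} \rho \left( 1 + \frac{\rho}{2 \lambda} \right),

where \lambda is the brane tension, fixed by the 5-dimensional Planck mass M_5 (the exact relation depends on conventions). The \rho^2 piece is negligible today but dominates at high density, so the universe expands faster at freeze-out and the relic density comes out larger.]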

Antiprotons from WIMP annihilation in the galactic halo constrain the RS type II model. The constraints apply to the 5-dimensional Planck mass M_5: antiprotons give bounds M_5 > 1000 TeV.

Non-thermal production from gravitational acceleration: at the end of inflation the acceleration was so high that one could create massive particles. They can have the right DM density if their mass is of the order of the Hubble parameter. Non-thermal production from particle decays is another non-standard case which is not ruled out.

Then there is the possibility of neutralino production from a decaying scalar field. In string theories, the extra dimensions may be compactified as orbifolds or Calabi-Yau manifolds; the surface shown is a solution of an equation such as z_1^5 + z_2^5 = 1, with z_i complex numbers. The size and shape of the compactified dimensions are parametrized by moduli fields \phi_1, \phi_2, \phi_3 …, and the values of the moduli fields fix the coupling constants.

Two new parameters are needed to evade the cosmological constraints on SUSY models: the reheating temperature T_rh of the radiation when \phi decays, which must be > 5 MeV from BBN constraints, and the number of neutralinos produced per \phi decay divided by the \phi mass, b/m_\phi. b depends on the choice of the Kahler potential, the superpotential and the gauge kinetic function, hence on the high-energy theory: the compactification, the fluxes, etc. that you put in your model. By lowering the reheating temperature you can decrease the density of particles, while the higher b/m, the higher the density you can get. So you can get almost any DM density you want.

Neutralinos can be cold dark matter candidates anywhere in the MSSM parameter space, provided one allows these other parameters to vary.

If you work with non-standard cosmology, the constraints are transferred from the low-energy to the high-energy theory. The discovery of non-thermal neutralino DM may open an experimental window on string theory.

[And it goes without saying that I find this kind of syllogism a wide stretch!].

Short summary from an intense day at PPC08 – part 1 May 21, 2008

Posted by dorigo in cosmology, news, physics, science.
Tags: , , , ,
comments closed

Besides my talk, which opened the morning session, there were a number of interesting talks today at PPC 2008, the conference on the interconnection between particle physics and cosmology which is being held at the University of New Mexico in Albuquerque. I will give some quick highlights below from the morning session, reserving the right of providing more information on some of them later on, as I will have time to reorganize my notes and go back to the talks’ slides for a second look; a summary of selected afternoon talks will have to wait until tomorrow. In the meantime, you can find the slides of the talks at this link.

I. Sarcevic talked about “Ultra-high energy cosmic neutrinos“. Neutrinos are stable neutral particles, so cosmic neutrinos point back to their astrophysical sources and bring us information from processes that are otherwise obscured by material in the line of sight. Extragalactic neutrinos have large energies, and they can probe both particle physics and astrophysics, since they can escape from extreme environments and they point back to the sources.

Among the sources, active galactic nuclei (AGN) are the most powerful. The AGN flux is the largest below 10^10 GeV. There is an important correlation to discover between photons and neutrinos: if the photons come from the hadronic interaction p \gamma \to p \pi^0, with \pi^0 \to \gamma \gamma, they can be observed together with the neutrinos yielded by p \gamma \to n \pi^+, when the charged pions decay to neutrinos of electron and muon kind. A fraction of these may then also give tau neutrinos through oscillations. Pion-decay sources have a flavor ratio of 1:2:0 between electron, muon, and tau neutrinos, but neutrino oscillations change this ratio.
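[The effect of oscillations on the 1:2:0 ratio is easy to check explicitly: over astrophysical baselines the oscillations average out, and the flavor conversion probability reduces to P_{\alpha\beta} = \sum_i |U_{\alpha i}|^2 |U_{\beta i}|^2. With roughly current best-fit mixing angles (my numbers, and \delta_{CP} = 0 for simplicity) the arriving ratio is close to 1:1:1:]

# averaged oscillations: P(a->b) = sum_i |U_ai|^2 |U_bi|^2
import math

th12, th23, th13 = math.radians(33.5), math.radians(45.0), math.radians(8.5)
s12, c12 = math.sin(th12), math.cos(th12)
s23, c23 = math.sin(th23), math.cos(th23)
s13, c13 = math.sin(th13), math.cos(th13)

# real PMNS matrix (delta_CP = 0)
U = [[c12*c13,                   s12*c13,                   s13],
     [-s12*c23 - c12*s23*s13,    c12*c23 - s12*s23*s13,     s23*c13],
     [s12*s23 - c12*c23*s13,     -c12*s23 - s12*c23*s13,    c23*c13]]

P = [[sum(U[a][i]**2 * U[b][i]**2 for i in range(3)) for b in range(3)] for a in range(3)]

source = [1.0, 2.0, 0.0]        # nu_e : nu_mu : nu_tau at the source (pion decay)
arrived = [sum(source[a]*P[a][b] for a in range(3)) for b in range(3)]
print([round(x, 2) for x in arrived])   # roughly 1.05 : 0.99 : 0.96, i.e. close to 1:1:1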

Experiments are looking for muon tracks (ICECUBE, RICE), electromagnetic and hadronic showers (ICECUBE and others). To determine the energy flux that reaches the detector we need to consider propagation through earth and ice. Tau neutrinos will give different contributions from muon ones because of the short tau lifetime and the regeneration effect.

Nicolao Fornengo spoke on “Dark Matter direct and indirect detection”. We know that we have non-baryonic cold dark matter (DM), which points to extensions of the standard model (SM), i.e. new elementary particles. The evidence is manifold: the dynamics of galaxy clusters, rotation curves, weak lensing, structure formation, and the energy budget from cosmological determinations.

We are concerned with the dark matter inside galaxies. This can be made of a new class of particles, for instance WIMPs (weakly interacting massive particles). We need to know how these new particles are distributed: the halo of the galaxy carries uncertainties. It may have a thermal component with some roughly spherical distribution; judging from the velocity distribution it can be thermal or non-thermal (for instance related to the merging history of the galaxy).

We need to exploit specific signals to disentangle dark matter from backgrounds, and it is important to quantify the uncertainties on the signal if we want to do that. What we do is a multi-channel search for DM, since we have different possibilities. The first is direct detection, related to the fact that we sit inside the halo, so the particles can interact with the nuclei of a detector; the signal in this case is the nuclear recoil due to the scattering. There are specific signatures, such as the annual modulation or the directionality of the recoil, which correlate with the motion of the detector through the galaxy.

The other class of searches is usually called indirect: these rely on the self-annihilation of the particles, which produces anything allowed by the process: neutrinos, photons, or antimatter.

In annihilation to neutrinos, you can have a signal which you can correlate with the density profile of the galaxy, and spectral features to disentangle the signal from backgrounds. For photons you can produce a line, since the annihilating particles are essentially at rest; the line is very much suppressed (for neutralinos it arises at the one-loop level), but it would be a very good signature.
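[The reason the line is such a clean signature: halo particles move at v ~ 10^{-3} c, so the annihilation is essentially at rest and the photons are monochromatic,

E_\gamma = m_\chi \;\; (\chi\chi \to \gamma\gamma), \qquad E_\gamma = m_\chi \left(1 - \frac{m_Z^2}{4 m_\chi^2}\right) \;\; (\chi\chi \to \gamma Z),

which no astrophysical background mimics. These formulas are just standard kinematics, not something from the slides.]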

You can also annihilate into matter and antimatter, and produce cosmic rays. Also, you can have neutrino fluxes from the center of the Sun or of the Earth. Spectral features can also be used to discriminate these signals from the atmospheric neutrino background.

Let us start with direct detection. Upper limits are compared to some theoretical models. The latest CDMS upper bound is compared to results obtained for an isothermal sphere of DM. The exclusion depends on the uncertainty on the shape of the local density and on the velocity distribution function: a spherical halo with a non-isotropic velocity distribution can give a different limit.

For neutralino masses higher than 50 GeV you have a realization of supersymmetry (SUSY); on the lower mass side the model is a minimal SUSY model with an additional parameter implemented to loosen the LEP bounds on the neutralino mass. If instead we compare with the DAMA result, there are models for the neutralino and models for the sneutrino which have allowed solutions in the region where there is a signal.

The question which arises is the following: is there a candidate that agrees with both DAMA and CDMS ? One possibility is a DM candidate, the lightest sneutrino, which interacts with a nucleus producing a heavier state; the \chi_1 couples to the Z only off-diagonally. There is a kinematical bound, related to the threshold of the experiment, such that scattering is possible only if the mass difference is smaller than a function of the masses and of the velocity of the particle. For sneutrinos, the suppression factor for a germanium nucleus and the one for an iodine nucleus can be very different (germanium is the constituent of CDMS, iodine the target in DAMA), so the same point could satisfy the DAMA region and the CDMS bound.
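[The kinematical bound mentioned here is, I believe, the usual inelastic dark matter condition: scattering to the heavier state with mass splitting \delta is only possible if the kinetic energy available in the centre of mass is sufficient,

\delta \le \frac{1}{2} \mu v^2, \qquad \mu = \frac{m_\chi m_N}{m_\chi + m_N},

so a heavy target like iodine (large reduced mass \mu) can be above threshold while germanium is not, which is how DAMA and CDMS can be reconciled.]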

Neutrinos from the earth and the sun: the idea is that the particles accumulate by gravitational capture, then annihilate, and the emitted neutrinos are found in neutrino telescopes. Typically these calculations were done with fluxes of \nu_\mu, but the other flavors can be obtained by oscillation as well. The correction due to oscillations, for annihilations in the earth or in the sun, is large; for the sun, at very high energy there is also absorption.

How much is the signal of upgoing muons affected, depending on the mass ? If you produce Z or W bosons in the earth you are not much affected by propagation, while in the sun, for large masses, the flux is reduced by a large amount.

One can try to exploit the energy spectrum of atmospheric neutrinos versus that of DM annihilation.
Antimatter signals are due to the fact that the particles annihilate in the halo and produce, among other things, proton-antiproton pairs.

Backgrounds are produced in the disk. One can model diffusion and propagation in the galaxy and solve the diffusion equation, with a bunch of parameters to fix using cosmic-ray data. Then you can make a prediction for signal and background. The predicted energy spectra show a difference, although there are large uncertainties in the signal fluxes. Better data on cosmic rays will fix the parameters.

For antiprotons there is not much room for an excess over the background in the lowest energy bins, and not a big handle on the energy spectrum to distinguish signal from background. In this case one can only set constraints. The theoretical uncertainty on the SUSY cross sections is large, so a firm exclusion of points is not possible unless one makes strict assumptions on the propagation parameters.

One interesting signal is the antideuteron one: in the annihilation process you can produce antineutrons together with antiprotons, which in turn can form antideuterons. It is nice because you have a good handle to detect it with respect to backgrounds. The uncertainty on the background (anti-D produced by standard processes) comes from the nuclear processes, while for the signal the situation is the other way around: propagation gives a much wider uncertainty. Nevertheless, for antideuterons the signal and backgrounds have very different spectra, and at low energy there is good discrimination (the background is suppressed in that kinematical region). There has been no detection of antideuterons in space so far, but there are proposals: an experiment called GAPS could work on a balloon flight, and its expected sensitivity could access the signal. Taking a gaugino non-universal MSSM model, the one-event sensitivity coverage cuts into the space of models: models not excluded by antiproton searches can predict up to 100 events for a long balloon flight.

For cosmic-ray positrons, you have production through annihilation plus backgrounds. There are uncertainties at low energy, below 10 GeV, because of propagation. How much can you boost your signal because of clumps of DM in the halo ? For positrons you do not expect a very big enhancement.

In summary, as far as direct detection is concerned, DAMA observes an annual modulation in the rate, which can indeed be due to a DM particle. If interpreted that way, is it compatible with a SUSY candidate ? Yes: compatible with the neutralino, harder for minimal SUGRA, and also compatible with the sneutrino in models where the neutrino gets its mass through induced Majorana-like mass terms. On the other hand there are CDMS, XENON etc., which try to reduce the background and set upper bounds. If we compare the models, they exclude some configurations, but in one specific example the two sets of results could be accommodated at the same time.

For indirect detection we have many possibilities. Antideuterons would be the best chance: when GAPS flies, it could exploit a strong feature at low energy, with a good chance of finally having a signal detected. Antiprotons at the moment do not show a very good spectral feature, but they can give potentially strong bounds. Positrons may possess spectral features, but they typically require some overdensity to match the data. Gamma rays may have good spectral features, like lines, and GLAST will be able to test this. Neutrinos from the earth and the sun could be found through their energy and angular features; one possibility would be to find the tau neutrino component, a virtually background-free signal.

Maurizio Giannotti talked on “New physics and stellar evolution“. Stellar cooling can provide bounds that are much better than those of experiments in the near future.

The golden age for astrophysics and stellar evolution was the late 50′s, when the role of weak interactions was recognized. The reference paper for neutrino physics is the one by Feynman and Gell-Mann of 1957 on the V-A theory of weak interactions. Indeed, stars can provide a test of the theory of weak interactions: already in 1963, Bernstein and colleagues showed that the stellar bound on the neutrino magnetic moment was better than the experimental one at the time.

Can we use stars to test physics above the electroweak (EW) scale ? Yes: if there is new physics (NP) at the EW scale, it will show up in stars. Electroweak physics enters at the 4th power of G_F, while NP brings about a different scaling law. So we have to use other scales, energy scales such as the temperature of the stars or their masses, all of which are much smaller than the EW scale.

Orthopositronium experiments: orthopositronium is an electron-positron bound state of spin 1, so it cannot decay to 2 photons; the main decay is to 3 photons. Its lifetime is 150 ns, longer than that of the spin-0 state. It is a clean bound state of pure leptons, with no strong interactions, only electromagnetic ones (there is in fact a little bit of weak interaction, but its contribution is small). One can then hypothesize that one photon makes itself invisible by disappearing into extra dimensions. The goal is to measure the invisible width of orthopositronium at the level of one part in 10^8. For comparison, the partial width of orthopositronium to two neutrinos is less than 10^-17 of the three-photon mode.

For stars, the energy loss through photon decay into the extra dimensions would delay the ignition of helium in the core of a red giant; the new energy loss rate must not exceed the standard loss through plasmon decay by more than a factor of 2-3. The delay of the helium flash tells us that for one extra dimension the scale of the extra dimension must satisfy k > 10^{21} TeV, while for n=3 extra dimensions the bound is softer, k > 10^2 TeV.

To keep the scales in the model close to the EW scale one either needs a large number of extra dimensions, or very high scales.

A terrestrial experiment sensitive to invisible decay modes should reach the following sensitivity to provide bounds on k analogous to those from red giant stars: B < 2 \times 10^{-24+1.75n}.
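[Plugging numbers into that formula shows how hopeless the terrestrial route is for a small number of extra dimensions; this is just arithmetic on the quoted bound, nothing more:]

# required invisible branching ratio B < 2e-24 * 10^(1.75 n)
for n in range(1, 7):
    required = 2e-24 * 10**(1.75 * n)
    print(n, "extra dimensions ->", f"{required:.1e}")
# n=1 needs ~1e-22, n=4 needs ~2e-17, n=6 needs ~6e-14 -- all far below
# the ~1e-8 sensitivity goal mentioned above.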

In conclusion, stars offer a variety of interesting environments to test physics beyond the SM, and the bounds from astrophysics can be much better than the experimental ones. Models which try to confine the photon on the brane through gravity only are severely constrained by stellar evolution considerations; in this case, the sensitivity required of the orthopositronium experiment to provide the same bound is beyond any possibility in the near future. The result is that the number of extra compact dimensions must be 4 or greater, in order to keep the scale of the extra dimensions close to the electroweak scale.
