
Liam McAllister: Inflation in String Theory May 23, 2008

Posted by dorigo in cosmology, news, physics.

Here we go with another report from PPC 2008. This one is on the talk given by Liam McAllister in yesterday's afternoon session. In this case, I feel obliged to warn that my utter ignorance of the subject discussed makes it quite probable that my notes contain nonsensical statements. I apologize in advance, and hope that what I manage to put together is still of some use to you, dear reader.

The main idea discussed in Liam's talk is the following: if we detect primordial tensor perturbations in the cosmic microwave background (CMB), we will know that the inflaton (the scalar field responsible for the inflationary epoch) moved more than a Planck distance in field space. Understanding such a system requires confronting true quantum gravity questions, and string theory provides a tool to study this.

Inflation predicts scalar fluctuations in the CMB temperature. These evolve into approximately scale-invariant fluctuations, which are also approximately Gaussian. The goal we set for ourselves is to use cosmological observations to probe physics at the highest energy scales.

The scalar field \phi has a potential which drives the acceleration. Acceleration is prolonged if V(\phi) is rather flat. How reasonable is that picture? This is not a microscopic model: what is \phi? The simplest inflation models often invoke smooth potentials over field ranges larger than the Planck mass. In an effective field theory with a cutoff \Lambda one writes the potential as a series in powers of the ratio \phi/\Lambda. Flatness must then be imposed over distances \Delta \phi > \Lambda. But \Lambda must be smaller than the Planck mass, except in a theory of quantum gravity.
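To make this a bit more concrete, the kind of effective expansion being alluded to is, schematically (my notation, not taken from the slides), V(\phi) = V_0 + \frac{1}{2} m^2 \phi^2 + \sum_{n \geq 1} c_n \frac{\phi^{4+n}}{\Lambda^n}: once the field excursion \Delta \phi exceeds the cutoff \Lambda, the higher-dimension terms are no longer suppressed and generically spoil the flatness of the potential, unless the coefficients c_n are tuned to be small.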

So one needs to assume something about quantum gravity to write such a potential. It is too easy to write an inflation model: the framework is not constrained enough to be predictive. We need to move to a more constrained scenario.

Allowing an arbitrary metric on the kinetic term, and an arbitrary number of fields in the lagrangian, the potential is very model-dependent. The kinetic term can also contain higher-derivative terms. One can write the kinetic term of the scalar fields with a metric tensor G: G is the metric on some manifold, and can well depend on the fields themselves. An important notion is that of the field range.
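For reference, the lagrangian Liam had in mind is, schematically (again my notation), \mathcal{L} = -\frac{1}{2} G_{ij}(\phi) \partial_\mu \phi^i \partial^\mu \phi^j - V(\phi), where G_{ij} is the metric on the field-space manifold; the field range is then the geodesic distance covered in that manifold during inflation.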

Liam noted that the prospects for excitement in theory and in experiments are coupled. If all we ever measure is a parameter n_s smaller than 1, with no tensors and no non-gaussianity, then we may never get more clues about the inflaton sector than we have right now. We will have to be lucky, but the good thing is that if we are, we are lucky both ways: observationally non-minimal scenarios are often theoretically non-minimal, since detectable tensors require a large field range, and this requires high-energy input. If anything goes well, it will do so both experimentally and theoretically.

String theory lives in 10 dimensions. To connect string theory to our 4D reality, we compactify the 6 additional dimensions. The additional dimensions must be small, otherwise we would not see a Newtonian law of gravity, since gravity would leak too much away from our brane.

Moduli correspond to massless scalar fields in 4 dimensions: the size and shape moduli of the Calabi-Yau manifold. Light scalars with gravitational-strength couplings absorb energy during inflation; they can spoil the pattern of big bang nucleosynthesis (BBN) and overclose the universe. The solution is that sufficiently massive moduli decay before BBN, so they are harmless for it (however, if they decay to gravitinos they may still be harmful).

The main technical extension is D-branes, introduced by Polchinski in 1995. If you take a D-brane and wrap it in the compact space, it costs energy: the tension of the D-brane makes distorting the space cost energy, and this creates a potential for the moduli and makes the space rigid.

Any light scalars that do not couple to the Standard Model are called moduli. Warped D-brane inflation involves warped throats: a Calabi-Yau space is distorted to produce a throat, which is a Randall-Sundrum region; this is how string theory realizes such a region. In this setup a D3-brane and an anti-D3-brane attract each other.

The tensor-to-scalar ratio is large only if the field moves over Planckian distances, \Delta \phi \gtrsim M_p. The relevant quantity is the diameter of the field space, which is ultraviolet-sensitive, but not too much so. In this framework, observable tensors in the CMB mean that there has been trans-Planckian field variation.
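This is essentially the Lyth bound. In its standard form (my addition, it was not spelled out in my notes from the slides) it reads \Delta \phi / M_p \simeq \sqrt{r/8} \, N_e, so with N_e \sim 60 e-folds of inflation a detectable tensor-to-scalar ratio r \sim 0.01 already requires a field excursion of order the Planck mass.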

Can we compute (\Delta \phi /M_p)_{max} in a model of string inflation? Liam says we can.
Planckian distances can be computed in string theory using the geometry: the field \phi is the position of the brane in the throat, so \Delta \phi is the length of the throat, and the problem is reduced to one of geometry. The field range comes out to obey (\Delta \phi/M_p)^2 < 4/N, where N is the number of colors of the Yang-Mills theory associated with the throat region, and N is at least a few hundred!
So the parameter r_{CMB} is small compared with the threshold for detection in the next decade, since r_{CMB}/0.009 < 4/N.
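Just to get a feel for the numbers, here is a trivial back-of-the-envelope script of my own; the sample values of N and the detection threshold of r ~ 0.01 for next-decade experiments are illustrative assumptions, while the two inequalities are those quoted above.

# Rough numerical illustration of the warped D-brane inflation bounds
# quoted in the talk: (dphi/M_p)^2 < 4/N  and  r_CMB/0.009 < 4/N.
# The sample values of N and the r ~ 0.01 detection threshold are my
# own illustrative assumptions.

def max_field_range(N):
    """Upper bound on delta_phi / M_p from the throat geometry."""
    return (4.0 / N) ** 0.5

def max_tensor_ratio(N):
    """Upper bound on r_CMB implied by r_CMB / 0.009 < 4 / N."""
    return 0.009 * 4.0 / N

if __name__ == "__main__":
    for N in (100, 300, 1000):
        print(f"N = {N:5d}:  delta_phi/M_p < {max_field_range(N):.3f},"
              f"  r_CMB < {max_tensor_ratio(N):.1e}")
    # For N of a few hundred, r_CMB < ~1e-4, far below a plausible
    # detection threshold of r ~ 0.01 for the next decade.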

N has to be large for the supergravity approximation to hold. You can conceive of a configuration with N not large, but then we cannot compute it; it is not in the regime of honest physics, in that case. There are boundaries in the space of string parameters, so we are restricting ourselves to a region where we can make computations. It would be very interesting to find a string theory construction that gives a large value of r.

Liam's conclusions were that inflation in string theory is developing rapidly and believable predictions are starting to become available. In D-brane inflation, the computation of the field range in Planck units shows that detectable tensors are virtually impossible.

Highlights from the morning talks at PPC08 May 19, 2008

Posted by dorigo in astronomy, cosmology, news, physics, science.

The conference on the interconnections between particle physics and cosmology, PPC2008, started this morning on the campus of the University of New Mexico. The conference features a rather relaxed, informal setting where speakers get a democratic 30′ each (plus 5′ for questions), and nobody frowns at the repeated interruptions of the talks by questions from a self-forgiving audience.

This morning I listened to six talks, and I managed not to fall asleep during any of them. Quite a result, if you take into account the rather long trip I had yesterday and the barely 4 hours of sleep I managed last night. This is a sign that the talks were interesting to me. Or at least that I need to justify to myself having traveled 22 hours and spending a week in a remote, desert-like place (sorry Carl).

Here is a list of the talks, with very brief notes (which, my non-expert readers will excuse me, I cannot translate to less cryptic lingo due to lack of time):

  • The first talk was by Eiichiro Komatsu, from Austin, who discussed the “WMAP 5-year results and their implications for inflation”. Eiichiro reviewed the mass of information that can be extracted from WMAP data, and the results one can obtain on standard cosmology from the combination of WMAP constraints with other inputs from baryon acoustic oscillations (which one derives from the distribution of galaxies in the universe), supernovae, HST data and the like. He discussed the flatness of the universe (it is very flat, although not perfectly so), the level of non-gaussianity in the distribution of primordial fluctuations (things are about as Gaussian as they can be), the adiabaticity relation between radiation and matter (which can be tested by cross-correlations in the power spectrum), and scale invariance (n_s is found to be smaller than one at the 2-sigma level, and when combined with additional input on the baryon density the significance can grow to 3.4 sigma below 1).
  • Riccardo Cerulli then talked about the “Latest results from the DAMA-LIBRA collaboration”. I discussed these results in a post about a month ago, and Riccardo did not show anything I had not already seen, although his presentation was much, much better than the one I had listened to in Venice. In short, DAMA members believe their signal, which now stands out at 8.2 standard deviations, and they stand by it. Riccardo insisted on the model-independence of the result, while being confronted with several questions from an audience that would not be convinced of the solidity of the analysis, and even less so of the interpretation in terms of a dark matter candidate. DAMA has so far collected an exposure of 0.53 ton x years, and is still taking data. I wonder if they are after a day-night variation or what, since it does not make much sense to keep increasing a signal whose nature (this is sure by now) is systematic.
  • Rupak Mahapatra talked just after Riccardo about the “First 5-tower results from CDMS”, another direct search for dark matter candidates. I also discussed the results of their work in a recent post (I am surprised to be able to say that, and rather proud of it), so I will not indulge in the details here either. Basically, they can detect both the phonons from the nuclear recoil of a WIMP in their germanium detector and the charge signal. Their detectors are disks of germanium operated at 40 millikelvin. On the phonon side there are four quadrants of athermal phonon sensors, where a small energy release from the phonon disrupts Cooper pairs and the change in resistivity is easily detected. On the charge side, two concentric electrodes give an energy measurement and veto capability. The full shebang is well shielded, with exotic materials such as old lead from 100-year-old ships fished out of the ocean (old lead is not radioactive anymore). The experiment tunes the cuts defining the signal region so as to accept about half an event from backgrounds. They observed zero events, and set stringent limits on the mass versus cross-section plane of a WIMP candidate. They plan to upgrade their device to a 1000 kg detector, which will make many things easier on the construction side, but which will run into non-rejectable neutron backgrounds at some point.
  • Alexei Safonov talked about the “Early physics with CMS”. Alexei discussed the plans of the LHC for the years 2008 and 2009, and the results in terms of collected luminosity that we can expect for CMS and ATLAS, plus the expectations for analyses of SUSY and other searches. He was quite down-to-earth in his predictions, saying that the experiments are unlikely to produce very interesting results before the end of 2009. In 2008 we expect to collect 40 inverse picobarns of 10 TeV collisions, while in 2009, from seven months of running starting in June, the expectation is about 2.4 inverse femtobarns. It is therefore quite likely that the data collected by the end of 2009 will be insufficient even for a Standard Model Higgs boson discovery.
  • Teruki Kamon talked about “Measuring DM relic density at the LHC and perspectives on inflation”. He pointed to a recent paper at the beginning of his talk: hep-ph/0802.2968. Teruki took into consideration the coannihilation region of SUSY, where a small mass difference \Delta M between the neutralino and the stau makes the two likely to interact, creating a particular phenomenology. This region of the parameter space at high tan(beta) can be accessed by searches for tau pairs, which arise at the end of the gluino-squark decay chain. Through a measurement of tau-pair masses and endpoints, the masses of SUSY particles can be determined with an accuracy of about 20 GeV. In particular, the ratio of gluino to neutralino masses can be measured rather well. With just 10 inverse femtobarns of data, Teruki claims that one can get a very small error on the two parameters M_0 and M_{1/2}. A final plot showing the resulting constraints on \Omega_\chi versus \Delta M raised some eyebrows, because it showed an ellipse, while the model dependence on \Delta M is exponential (the suppression of the coannihilation goes as e^{-\Delta M/20}, as illustrated in the short sketch after this list), so one would expect a fancier contour of constraints. In any case, if nature has chosen this bizarre manifestation, LHC experiments are likely to measure things accurately with a relatively small bounty of data.
  • U. Oberlack was the last one to talk, discussing “Dark matter searches with the XENON experiment”, another setup for direct dark matter detection. Xenon as a detector medium is interesting because it has several isotopes which allow sensitivity to both spin-independent and spin-dependent interactions of WIMPs with nuclei. In principle, if one detected a signal, changing the isotope mixture would make the measurement sensitive to the details of the interaction. Liquid xenon has a high atomic number, so it is self-shielding against backgrounds. The experiment is located in the Gran Sasso laboratories in Italy, and it has taken data with a small “proof of principle” setup which nevertheless allowed them to obtain meaningful limits on the mass versus cross-section plane. They plan to build a much larger detector, with a ton of xenon: since they can detect the position of their signals and have a fiducial region which is basically free of backgrounds, scaling up the detector size is an obvious improvement, because the fiducial volume grows quickly. He showed a nice plot of the cross-section sensitivity of different experiments versus time, where one sees three main trends in the past, depending on the technology the experiments have been based on. Xenon as a medium appears to be producing a much better trend of sensitivity versus time, and one expects it will dominate the direct searches in the near future.
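As for the exponential dependence mentioned in Teruki's talk, here is a tiny sketch of my own showing why an elliptical contour looks surprising. Only the e^{-\Delta M/20} factor is taken from the talk; the central value and uncertainty of \Delta M below are purely hypothetical.

# Illustrative only: the coannihilation suppression quoted in the talk,
# exp(-DeltaM / 20 GeV). A symmetric uncertainty on DeltaM translates
# into an asymmetric uncertainty on the suppression factor.
import math

def suppression(delta_m_gev):
    """Relative coannihilation suppression factor, exp(-DeltaM/20)."""
    return math.exp(-delta_m_gev / 20.0)

if __name__ == "__main__":
    central, error = 10.0, 2.0   # hypothetical DeltaM = 10 +- 2 GeV
    print(f"central  ({central} GeV): {suppression(central):.3f}")
    print(f"-1 sigma ({central - error} GeV): {suppression(central - error):.3f}")
    print(f"+1 sigma ({central + error} GeV): {suppression(central + error):.3f}")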

I will complement the above quick writeup with links to the talk slides as they become available…