
Liam McAllister: Inflation in String Theory May 23, 2008

Posted by dorigo in cosmology, news, physics.
comments closed

Here we go with another report from PPC 2008. This one is on the talk by Liam McAllister in yesterday afternoon's session. In this case, I feel obliged to warn that my utter ignorance of the subject discussed makes it quite probable that my notes contain nonsensical statements. I apologize in advance, and hope that what I manage to put together is still of some use to you, dear reader.

The main idea discussed in Liam’s talk is the following: if we detect primordial tensor perturbations in the cosmic microwave background (CMB), we will know that the inflaton (the scalar field responsible for the inflationary epoch) moved more than a Planck distance in field space. Understanding such a system requires confronting true quantum gravity questions. String theory provides a tool to study this.

Inflation predicts scalar fluctuations in the CMB temperature. These evolve to create approximately scale-invariant fluctuations, which are also approximately gaussian. The goal we set for ourselves is to use cosmological observations to probe physics at the highest energy scales.

The scalar field \phi has a potential V(\phi) which drives the acceleration. Acceleration is prolonged if V(\phi) is rather flat. How reasonable is that picture ? This is not a microscopic model: what is \phi ? The simplest inflation models often invoke smooth potentials over field ranges larger than the Planck mass. In an effective field theory with a cutoff \Lambda one writes the potential as an expansion in powers of the ratio \phi/\Lambda, and flatness must then be imposed over distances \Delta \phi > \Lambda. But \Lambda can be at most the Planck mass, so controlling the potential over such distances requires input from a theory of quantum gravity.
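
To make the effective-field-theory statement concrete, here is the schematic expansion usually written down in this context (my own aside, not a formula from the talk, and assuming for simplicity a parity-even potential): the inflaton potential receives corrections from higher-dimension operators suppressed by the cutoff,

\Delta V(\phi) = \sum_{n \ge 1} c_n \frac{\phi^{4+2n}}{\Lambda^{2n}},

and unless the coefficients c_n are tuned, every term in the series becomes important once \phi approaches \Lambda. This is why flatness over \Delta \phi > \Lambda, let alone \Delta \phi > M_p, cannot be established within the effective theory alone.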

So one needs to assume something about quantum gravity to write such a potential. It is too easy to write an inflation model, so the framework is not constrained enough to be predictive. We need to move to some more constrained scenario.

Allowing an arbitrary metric on the kinetic term, and an arbitrary number of fields in the lagrangian, the potential is very model-dependent. The kinetic term can also contain higher-derivative pieces. One can write the kinetic term of the scalar fields with a metric tensor G: G is the metric on some manifold, and it can well depend on the fields themselves. An important notion is that of the field range.

Liam noted that the prospects for excitement in theory and experiment are coupled. If all we measure is a spectral index n_s smaller than 1, with no tensors and no non-gaussianity, we may never get more clues about the inflaton sector than we have right now. We will have to be lucky, but the good thing is that if we are, we are lucky both ways: observationally non-minimal scenarios are often theoretically non-minimal, since detectable tensors require a large field range, and this requires high-energy input. If anything goes well, it will do so both experimentally and theoretically.

String theory lives in 10 dimensions. To connect string theory to our 4D reality, we compactify the 6 additional dimensions. The additional dimensions must be small, otherwise we would not see a Newtonian law of gravity, since gravity would propagate too far away from our brane.

Moduli correspond to massless scalar fields in 4 dimensions: for instance, the size and shape moduli of the Calabi-Yau manifold. Light scalars with gravitational-strength couplings absorb energy during inflation. They can spoil the pattern of big bang nucleosynthesis (BBN) and overclose the universe. The solution is that sufficiently massive fields decay before BBN, so they are harmless for it (however, if they decay to gravitinos they may still be harmful).

The main technical extension is D-branes, introduced by Polchinski in 1995. If you take a D-brane and wrap it in the compact space, it costs energy: the tension of the D-brane makes distorting the space cost energy. This creates a potential for the moduli and makes the space rigid.

Any light scalars that do not couple to the SM are called moduli. Warped D-brane inflation implies warped throats: a Calabi-Yau space is distorted to make a throat. This is a Randall-Sundrum region, and it is the way string theory realizes such a geometry. In this setup a D3-brane and an anti-D3-brane attract each other.

The tensor-to-scalar ratio is large only if the field moves over Planckian distances, i.e. \Delta \phi/M_p of order one or more. The relevant quantity is the diameter of the field space. It is ultraviolet-sensitive, but not too much so.
In this framework, observable tensors in the CMB mean that there has been trans-Planckian field variation.

Can we compute the (\Delta \phi /M_p)_{max} in a model of string inflation ? Liam says we can.
Planckian distances can be computed in string theory using the geometry. The field \phi is the position of the brane in the throat, so \Delta \phi is the length of the throat, and the problem is reduced to one in geometry. The field range comes out as (\Delta \phi/M_p)^2 < 4/N, where N is the number of colors of the Yang-Mills theory associated with the throat region, and N is at least a few hundred!
So the parameter r_{CMB} is small with respect to the threshold for detection in the next decade, since r_{CMB}/0.009 < 4/N.
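
For concreteness, here is a trivial numerical illustration of that statement (my own back-of-the-envelope sketch in Python, using only the inequality quoted above and taking N = 300 as the “few hundred” benchmark; the numbers are assumptions, not Liam's):

# Upper bound on the tensor-to-scalar ratio implied by r_CMB/0.009 < 4/N,
# the inequality quoted above. N = 300 is an assumed benchmark value.
for N in (300, 1000, 10000):
    r_max = 0.009 * 4.0 / N
    print("N = %6d  ->  r_CMB < %.1e" % (N, r_max))
# For N = 300 this gives r_CMB < 1.2e-4, far below the r ~ 0.01 level
# referenced above as the detection threshold for the next decade.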

N has to be large for us to be able to use supergravity. You can conceive of a configuration with N not large, but then we cannot compute it: it is not in the regime of honest physics, in that case. There are boundaries in the space of string parameters, so we are restricting ourselves to a region where we can make computations. It would be very interesting to find a string theory construction that gives a large value of r.

Liam’s conclusions were that inflation in string theory is developing rapidly and believable predictions are starting to become available. In D-brane inflation, the computation of the field range in Planck units shows that detectable tensors are virtually impossible.

Simona Murgia: Dark Matter searches with GLAST May 23, 2008

Posted by dorigo in astronomy, cosmology, physics, science.
comments closed

Now that I am linked by Peter Woit’s blog with appreciative words, I cannot escape my obligation to continue blogging on the talks I have been listening to at PPC2008. So please find below some notes from Simona’s talk on the GLAST mission and its relevance for dark matter (DM) searches.

GLAST will observe gamma rays in the energy range from 20 MeV to 300 GeV, with a better flux sensitivity than earlier experiments such as EGRET and AGILE. It is a 5-year mission, with a final goal of 10 years. It will orbit at 565 km of altitude, with a 25.6° inclination with respect to the terrestrial equator. Its main instrument is the Large Area Telescope (LAT), which detects photons through pair conversion. It features a precision silicon-strip tracker, with 18 x-y tracking planes interleaved with tungsten converter layers. The tracker is followed by a small calorimeter made of CsI crystals, and it is surrounded by an anti-coincidence detector consisting of 89 plastic scintillator tiles; the segmented design avoids self-veto problems. The total payload of GLAST is 2000 kg.

GLAST has four times the field of view of EGRET, and it covers the whole sky in two orbits (3 hours). The broad energy range has never been explored at this sensitivity. The energy resolution is about 10%, and the point-spread function is 7.2 arcminutes above 10 GeV. The sensitivity is more than 30 times better than previous searches below 10 GeV, and 100 times better at higher energy.

EGRET cataloged 271 gamma-ray sources; GLAST expects to catalog thousands: active galactic nuclei, gamma-ray bursts, supernova remnants, pulsars, galaxies, clusters, X-ray binaries. There is very little gamma-ray attenuation below 10 GeV, so GLAST can probe cosmological distances.

Simona then asked herself, what is the nature of DM? There are several models around. GLAST will investigate the existence of weakly interacting massive particles (WIMPs) through two-photon annihilation. This is not an easy task, for there are large uncertainties in the signal and in the background. The detection of a DM signal by GLAST would be complementary to other searches.

Gamma rays may come from neutral pions emitted in \chi \chi annihilation; these give a continuum spectrum. Direct annihilation into two photons is instead expected to have a branching ratio of 10^{-3} or less, but it would provide a line in the spectrum, a spectacular signal.
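
As a reminder of why the line is such a clean signature (my own aside, not from Simona's slides): for non-relativistic WIMPs annihilating directly into two photons, each photon carries away essentially the WIMP rest energy,

E_\gamma \simeq m_\chi,

so the position of the line would be a direct measurement of the WIMP mass, sitting on top of an otherwise smooth continuum.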

Other models provide an even more distinctive gamma spectrum. With the gravitino as the lightest supersymmetric particle, it would have a very long lifetime, and it could decay into a photon and a neutrino: this yields an enhanced line, plus a continuum spectrum at lower energy.

Instrumental backgrounds mostly come from charged particles (protons, electrons, positrons), plus neutrons and Earth-albedo photons. These dominate the flux from cosmic photons, but less than one in a hundred thousand survives the photon selection. Above a few GeV, the background contamination is required to be less than 10% of the isotropic photon flux measured by EGRET.

Searches for WIMP annihilations can be done in the galactic center or, complementarily, in the galactic halo. In the latter case there is no source crowding, but there are significant uncertainties in the astrophysical backgrounds. The 3-sigma sensitivity on <\sigma v> as a function of the WIMP mass goes below 10^{-26} cm^3 s^{-1} with 5 years of exposure.

Simona then mentioned that one can also search for DM satellites: simulations predict a substructure of DM in the galactic halo. The predicted annihilation spectrum is different from a power law, and the emission is expected to be constant in time. Considering a 100 GeV WIMP with <\sigma v> = 2.3 \times 10^{-26} cm^3 s^{-1}, annihilating into a b-quark pair, with the extragalactic and diffuse galactic backgrounds included, such a satellite is generically observable at the 5-sigma level in one year. To search for these, you first scan the sky, and then, once you have found something, you can concentrate on observing it.

Dwarf galaxies can also be studied. Their mass-to-light ratio is high, so they are a promising place to look for an annihilation signal. The 3-sigma sensitivity of GLAST with 5 years of data goes down to 10^{-26} cm^3 s^{-1} and below for WIMP masses in the tens of GeV range.

To search for lines in the spectrum, one looks in an annulus between 20 and 35 degrees in galactic latitude, removing a 15° band around the galactic disk. It is a very distinctive spectral signature. Better sensitivity is achieved if the location of the line is known beforehand (if the particle were discovered at the LHC, for instance). A 200 GeV line can be seen at 5 sigma in 5 years.

GLAST can also look for cosmological WIMPs, at all redshifts. There is a spectral distortion caused by the integration over redshift. The reach of GLAST is a bit higher here, around 10^{-25} cm^3 s^{-1}. One can do better if there is a high concentration of DM in substructures.

Dinner with Gordie Kane May 23, 2008

Posted by dorigo in personal, physics, science.
comments closed

Yesterday evening the conference banquet of PPC 2008 was held at Yanni’s, a nice restaurant on Central Avenue in Albuquerque. I was lucky to sit at a table in the quite interesting company of several distinguished colleagues. Most notably, to my right sat Gordie Kane, with whom I had an interesting discussion about the expectations for Supersymmetry at the LHC and about the promise that String Theory may one day go as far as to explain really fundamental things such as why quarks have the masses we observe, why the CKM matrix elements are what they are, and why the other queue is always faster.

Gordie was really surprised by my $1000 bet against new physics discoveries at the LHC. He was willing to take it himself, but I said I am already exposed with Distler and Watts. He is positive that the LHC experiments will find Supersymmetry, and in general he has a very optimistic attitude which is infectious. I went as far as to say I would be willing to buy string theory if someone showed me there are prospects for really explaining things such as those I listed above, and after a further glass of wine I invited him to offer a guest post on this blog where he could discuss the matter, or more loosely the reasons to be optimistic about new physics being just around the corner. He said he would do it, although he is quite busy at the moment. So, expect a guest post by none less than Gordie Kane here within a month or two…

For the time being, I can just offer the following picture, taken by Mandeep Gill on my request during the banquet:

Denny Marfatia’s talk on Neutrinos and Dark Energy May 22, 2008

Posted by dorigo in astronomy, cosmology, news, physics, science.
comments closed

Denny spoke yesterday afternoon at PPC 2008. Below I summarize as well as I can those parts of his talk that were below a certain crypticity threshold (above which I cannot even take meaningful notes, it appears).

He started with a well-prepared joke: “Since at this conference the tendency is to ask questions early: any questions ?“. This caused hilarity in the audience, but indeed, as I noted in another post, the setting is relaxed and informal, and the audience freely interrupts the speakers. Nobody has seemed to complain so far…

Denny started by stating some observational facts: the dark energy (DE) density is about (2.4 \times 10^{-3} eV)^4, and the neutrino mass splitting \Delta m^2 is of the same order of magnitude (in eV^2). This coincidence of scales might imply that neutrinos coupled to a light scalar could explain why \Omega_{DE} has a value similar to \Omega_M, i.e. why we observe a rather similar amount of dark energy and matter in the universe.
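
To see where that number comes from, here is a quick numerical check I put together myself (it is not part of Denny's slides; the cosmological inputs are assumed, approximate 2008-vintage values), converting the measured dark energy density into natural units:

import math

h        = 0.70        # assumed H0 = 100 h km/s/Mpc
Omega_DE = 0.74        # assumed dark energy fraction

G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8        # speed of light, m/s
hbarc = 197.327e-9     # hbar*c in eV * m
eV    = 1.602e-19      # 1 eV in joules
Mpc   = 3.086e22       # 1 Mpc in meters

H0 = 100.0 * h * 1000.0 / Mpc                  # Hubble constant in s^-1
rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)   # critical density, kg/m^3
rho_DE   = Omega_DE * rho_crit * c**2 / eV     # DE energy density in eV/m^3
rho_DE_nat = rho_DE * hbarc**3                 # DE energy density in eV^4

print("rho_DE^(1/4) = %.2e eV" % rho_DE_nat**0.25)   # about 2.3e-3 eV
# Compare with the atmospheric neutrino splitting Delta m^2 ~ 2.4e-3 eV^2:
# the numerical closeness of the two "2.4e-3" figures (in eV and eV^2
# respectively) is the coincidence alluded to above.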

But he noted that there is more than just one coincidence problem: the DE density and the other energy densities have ratios which are not far from unity; within a factor of 10 the components are equal.
Why not consider these coincidences to have some fundamental origin ? Perhaps the neutrino and DE densities are related. It is easy to play with this hypothesis with neutrinos, because we understand them the least!

One can make the neutrino mass a variable quantity. Imagine a fluid made of the neutrinos and a quintessence-like scalar, with effective potential V_{eff} = m_\nu n_\nu + V(m_\nu). There is an ansatz to be made: the effective potential is stationary with respect to the neutrino mass.

So one obtains a neutrino mass that varies, tracking the minimum of a varying effective potential. One consequence of the model is the relation w = -1 + m_\nu n_\nu/V_{eff}: w can thus deviate from -1. It is like quintessence, but without a light scalar.
Neutrino masses vary with their number density: if w is close to -1, the effective potential has to track the neutrino contribution. Neutrinos are then most massive in empty space, and lighter where they cluster.
This could create an intriguing conflict between cosmological and terrestrial probes of the neutrino mass. The neutrino masses also vary with the matter density if the scalar induces couplings to matter, so there should be new matter effects in neutrino oscillations. One could also see temporal variations of the fine structure constant.
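
Putting the pieces of the last two paragraphs together in formulas (my own summary of the standard mass-varying-neutrino setup, as I understood it from the talk): the effective potential, the stationarity ansatz, and the resulting equation of state read

V_{eff} = m_\nu n_\nu + V(m_\nu), \qquad \frac{\partial V_{eff}}{\partial m_\nu} = n_\nu + \frac{\partial V}{\partial m_\nu} = 0, \qquad w = -1 + \frac{m_\nu n_\nu}{V_{eff}}.

The stationarity condition ties the neutrino mass to the neutrino number density, and the last relation shows that w is driven towards -1 when the neutrino term is negligible compared to V_{eff}.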

If neutrinos are light, DE becomes a cosmological constant, w = -1, and we cannot distinguish it from other models. Also, light neutrinos do not cluster, so the local neutrino mass will be the same as the background value; and high redshift data and tritium beta decay will be consistent because neither will show evidence for neutrino mass.

So one can look for evidence of a time variation of the fine structure constant. Measurements of transition frequencies in atomic clocks give the limit \delta \alpha/\alpha < 5 \times 10^{-15}.

The abundance ratio of Sm-149 to Sm-147 at the Oklo natural reactor shows no variation in the last 1.7 billion years, with a limit \delta \alpha/\alpha < 10^{-7}; the constraint comes from the resonant energy for neutron capture.

Meteoritic data (at redshift z<0.5) constrain the beta decay rate of Rhenium-187 back to the time of solar system formation (4.6 Gyr): \delta \alpha/ \alpha = (8 \pm 8) \times 10^{-7}.

Going back to 0.5<z<4, transition lines in quasar (QSO) spectra indicate a value \delta \alpha/ \alpha = (-0.57 \pm 0.10) \times 10^{-5}: a variation at the 5-sigma level! The observed line splittings differ from the laboratory ones. The result is not confirmed, and it depends on the magnesium isotopic ratio assumed: people say you are just measuring the chemical evolution of magnesium in these objects, since the three-isotope ratio might be different from what is found in our Sun, and this would mess up the measurement.

Then there is a measurement from the CMB (at z=1100), which determines \delta \alpha/\alpha<0.02 from the temperature of decoupling, which depends on the binding energy.
Primordial abundances from big bang nucleosynthesis (at z \sim 10^{10}) also allow one to find \delta \alpha/\alpha<0.02. Bounds on a variation of the fine structure constant at high redshift are thus very loose.

One can therefore hypothesize a phase transition in \alpha: this was done recently by Anchordoqui, Barger, Goldberg and Marfatia. The bottom line is to construct the model such that when the neutrino density crosses a critical value as the universe expands, \alpha changes.

The first assumption is that M has a unique stationary point. Additional stationary points are possible, but for nonrelativistic neutrinos with subcritical neutrino density there is only one minimum, it is fixed, and there is no evolution.

For non-relativistic neutrinos, with supercritical density, there is a window of instability.

One expects the neutrino density to cross the critical value at some redshift. The instability can be avoided if the growth-slowing effects provided by cold dark matter dominate over the acceleron-neutrino coupling. Requiring no variation of \alpha up to z=0.5, then a step in \delta \alpha, and enforcing that the signal in the quasar spectra is reproduced, one gets results which are consistent with the CMB and BBN.

A review of yesterday’s afternoon talks: non-thermal gravitino dark matter and non-standard cosmologies May 21, 2008

Posted by dorigo in cosmology, news, physics, science.
comments closed

In the afternoon session at PPC2008 yesterday there were several quite interesting talks, although they were not easy for me to follow. I give a transcript of two of the presentations below, for my own record as well as for your convenience. The web site of the conference is however quite quick in putting the talk slides online, so you might want to check it if some of what is written below interests you.

Ryuichiro Kitano talked about “Non-thermal gravitino dark matter“. Please accept my apologies if you find the following transcript confusing: you are not alone. Despite my lack of understanding of some parts of it, I decided to put it online anyway, in the hope that I will have the time one day to read a couple of papers and understand some of the details discussed…

Ryuichiro started by discussing the standard scenario for SUSY dark matter, with a WIMP neutralino: a weakly interacting, massive, stable particle. In general one has a mixture of bino, wino and higgsinos, and that is what we call the neutralino. In the early universe it follows a Boltzmann distribution; then there is a decoupling phase, when the production process (the inverse of annihilation) becomes negligible, so after freeze-out the comoving abundance is frozen and the neutralino number density simply scales as T^3. The final abundance is computed from the Boltzmann equation, \dot n_\chi + 3 H n_\chi = - <\sigma v> (n^2_\chi - n^2_{eq}), by equating the annihilation rate to the expansion rate at the time of decoupling.
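
Since this is the one calculation in the talk that is completely standard, here is a small numerical sketch of it that I add for illustration (my own code, not Ryuichiro's; the WIMP mass, cross section and number of degrees of freedom are assumed benchmark values). It integrates the freeze-out Boltzmann equation for the comoving yield Y = n/s as a function of x = m/T:

import numpy as np
from scipy.special import kn
from scipy.integrate import solve_ivp

m      = 100.0               # assumed WIMP mass in GeV
g_chi  = 2.0                 # assumed internal degrees of freedom of the WIMP
g_star = 90.0                # assumed relativistic d.o.f. at freeze-out
M_pl   = 1.22e19             # Planck mass in GeV
sigv   = 3.0e-26 / 1.17e-17  # assumed <sigma v> of 3e-26 cm^3/s, in GeV^-2

# dY/dx = -(lam/x^2) * (Y^2 - Yeq^2), with lam a dimensionless combination
lam = ((2.0 * np.pi**2 / 45.0) * g_star
       / np.sqrt(8.0 * np.pi**3 * g_star / 90.0) * m * M_pl * sigv)

def Yeq(x):
    # equilibrium yield of a non-relativistic species (Maxwell-Boltzmann)
    return 45.0 / (4.0 * np.pi**4) * (g_chi / g_star) * x**2 * kn(2, x)

def rhs(x, Y):
    return [-lam / x**2 * (Y[0]**2 - Yeq(x)**2)]

sol = solve_ivp(rhs, (5.0, 1000.0), [Yeq(5.0)], method="Radau",
                rtol=1e-8, atol=1e-30)
Y_inf = sol.y[0, -1]

# Omega h^2 = 2.74e8 * (m/GeV) * Y_inf, from today's entropy and critical densities
print("Y_inf = %.2e,  Omega h^2 = %.2f" % (Y_inf, 2.74e8 * m * Y_inf))
# Freeze-out happens around x = m/T ~ 20-25, and the result is close to the
# observed dark matter density Omega_DM h^2 ~ 0.1.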

This mechanism relies on some assumptions. The first is that the neutralino is the LSP, and hence stable. The second is that the universe is radiation-dominated at the time of decoupling. A third assumption is that there is no entropy production below T = O(100 GeV), otherwise the relative abundances would be modified. Are these assumptions reasonable ? Assumption one essentially restricts us to gravity mediation, where there is almost always a moduli problem, and this is inconsistent with assumptions 2 and 3. If you take anomaly mediation instead, the LSP is a wino and the abundance comes out too small. We thus need a special circumstance for the standard mechanism to work.

The moduli/gravitino problem: in the gravity mediation scenario there is always a singlet scalar field S which obtains a mass through SUSY breaking. S is a singlet under any symmetry, and it is what provides a mass term for the gauginos. Its potential cannot be stabilized, and it gets a mass only through SUSY breaking. Therefore, there exists a modulus field, and we need to include it when considering the cosmological history, because it has implications.

During inflation, the S potential is deformed, because S gets its mass only from SUSY breaking; so the initial value of the modulus will be modified. Once S-domination happens, it is a cosmological disaster. If the gravitino is not the LSP, it decays with a lifetime of the order of one year, and it destroys the standard picture of big bang nucleosynthesis (BBN). If that decay is forbidden, it is S that has a lifetime of O(1 yr), still a disaster. This is inconsistent with the neutralino DM scenario; or rather, gravity mediation is inconsistent.

So we need some special inflation model which does not couple to the S field; a very low-scale inflation, such that the deformation of the S potential is small; and a lucky initial condition, such that S-domination does not happen. Is there a good cosmological scenario that does not require such conditions ?

Non-thermal gravitino DM is a natural and simple solution to the problem, and gauge mediation offers the possibility. The SUSY breaking sector needs to be specified in this scenario, but most models share the same effective lagrangian. This implies two parameters in addition to the others: the height of the potential (describing how large the breaking is) and the curvature m^4/\Lambda^4. In this framework, the gravitino is the LSP.

In the non-thermal gravitino dark matter scenario, the mechanism can produce the DM candidate. After inflation, the S oscillation starts: we have a potential for it, with a quadratic term. The second step is its decay. The S couplings to superparticles are proportional to their masses, while the S-gravitino coupling is suppressed; this gives a smaller branching ratio to gravitinos, which is good for the gravitino abundance. Also, the shorter lifetime as compared to gravity mediation is good news for BBN.

The decay of S to a bino pair must be forbidden to preserve the BBN abundances, so S \to hh is the dominant decay mode if it is open. If we calculate the decay temperature, we find a good match with BBN, and it is perfect for DM as far as its abundance is concerned.

There are two parameters: the height of the potential and its curvature. We have to explain the size of the gaugino mass, which fixes one of the parameters. The gravitino abundance is explained if the gravitino mass is about 1 GeV. The baryon abundance, however, has to be produced by other means.

Step three is gravitino cooling: are the gravitinos cold ? They are produced relativistic in the decay of 100 GeV particles, and their distribution is non-thermal. They slow down with redshift, and they must be non-relativistic by the time of structure formation.

If we think about SUSY cosmology we should be careful about its consistency with the underlying model, as in the case of gravity mediation. Gauge mediation provides a viable cosmology with non-thermally produced gravitino DM.

Next, Paolo Gondolo gave a thought-provoking talk on “Dark matter and non-standard cosmologies“. Again, I do not claim that the writeup below makes full sense (not to me, at least; maybe it does to you).

Paolo started by pointing out the motivations for his talk: they come directly from the previous talks, namely the problems with the gravitino and with the moduli. One might need to modify the usual cosmology before nucleosynthesis. Another motivation is more phenomenological. The standard results on neutralino DM are presented in the standard M_0 - M_{1/2} parameter space, and one gets a very narrow band due to the dark matter constraints from cosmology. These constraints rest on assumptions about the universe before primordial nucleosynthesis: that neutralinos were produced thermally, decoupled at a later time, and remained with a residual abundance. This might not be true, and if it isn’t, the whole parameter space might still be consistent with the cosmological constraints.

[This made me frown: isn’t the SUSY parameter space still large enough ? Do we really need to revitalize parts not belonging to the hypersurface allowed by WMAP and other constraints ?]

The above occurs just by changing the evolution of the universe before nucleosynthesis. By changing \tan \beta you can span a wider chunk of parameter space, but that is only because you are looking at a projection: the cosmological constraints define an (n-1)-dimensional hypersurface, and one can move off of it. But this comes at the price of more parameters. Don’t we have enough parameters already?

The cosmological density of neutralinos may differ from the usual thermal value because of non-thermal production or non-standard cosmologies. J. Barrow, in 1982, wrote of massive particles as a probe of the early universe, so it is an old idea. It continued in 1990 with a paper by Kamionkowski and Turner, “Thermal relics: do we know their abundances?”.

So let us review the relic density in standard cosmology, and the effect of non-standard ones. In standard cosmology the Friedmann equation governs the evolution of the scale factor a, and the dominant dependence of \rho on a determines the expansion rate. Today we are matter-dominated, and we were radiation-dominated before, because \rho scales with different powers of the scale factor: now \rho \propto a^{-3}, while earlier it went as a^{-4}. Before radiation domination there was reheating, and before that, the inflationary era. At early times, neutralinos are produced in e+e- and mu+mu- annihilations. Then production freezes out: one usually says that neutralino annihilation ceases, but it really is the production that ceases; annihilation continues at smaller rates until today, so that we can look for it. The number of neutralinos per photon is constant after freeze-out. The higher the annihilation rate, the lower the final density: there is an inverse proportionality.
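
As a toy illustration of that statement (my own, with assumed present-day density fractions; it is not from Paolo's slides), one can see how the dominant power of the scale factor decides which component drives H^2 \propto \rho:

Omega_m, Omega_r = 0.26, 8.4e-5   # assumed present-day matter and radiation fractions

for a in (1e-6, 1e-4, 3e-4, 1e-2, 1.0):
    rho_m = Omega_m * a**-3          # matter scales as a^-3
    rho_r = Omega_r * a**-4          # radiation scales as a^-4
    who = "radiation" if rho_r > rho_m else "matter"
    print("a = %.0e:  H^2/H0^2 ~ %.3e  (%s dominated)" % (a, rho_m + rho_r, who))
# Radiation wins for a < Omega_r/Omega_m ~ 3e-4, i.e. at the early times
# relevant for freeze-out and nucleosynthesis.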

Freeze-out occurs during the radiation-dominated epoch, at T of about 1/20th of the particle mass, a much higher temperature than that of the matter-dominated universe. Freeze-out occurs before BBN, so we are making an assumption about the evolution of the universe before BBN. What can we do in non-standard scenarios ? We can decrease the density of particles by producing photons after freeze-out (entropy dilution): increasing the number of photons, you get a lower final density. One can also increase the density of particles by creating them non-thermally, from decays. Another way is to make the universe expand faster during freeze-out, for instance in quintessence models.

The expansion-rate mechanism works because a faster expansion makes freeze-out happen earlier, leaving a higher density. What if instead we want to keep the standard abundance ? To produce WIMPs thermally with the standard abundance, we need a standard Hubble expansion rate down to T = m/23, i.e. down to freeze-out. A plot of m/T_{max} versus <\sigma v> shows that production is aborted for m/T > 23.

How can we produce entropy to decrease the density of neutralinos after freeze-out ? We add massive particles that decay or annihilate late, for example a next-to-LSP. We end up increasing the photon temperature and entropy, while the neutrino temperature is unaffected.

We can increase the expansion rate at freeze-out by adding energy to the Universe, for example a scalar field, or by modifying the Friedmann equation with an extra dimension. Alternatively, one can produce more particles through decays.

In braneworld scenarios, matter is confined to the brane and gravitons propagate in the bulk. This gives an extra term in the Friedmann equation, proportional to the square of the energy density (with a coefficient set by the five-dimensional Planck mass). We can thus get different relic densities. For example, in the plane of m_0 versus gravitino mass, the wino is usually not a good candidate for DM, but it becomes one in Randall-Sundrum type II scenarios. We can resuscitate SUSY models that people think are ruled out by cosmology.
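
For reference, the modification Paolo was alluding to can be written, in one common convention for Randall-Sundrum type II (an equation I add here for clarity; the precise numerical factors depend on conventions), as

H^2 = \frac{8 \pi}{3 M_p^2} \rho \left( 1 + \frac{\rho}{2 \lambda} \right) + \dots

where \lambda is the brane tension, set by the 5-dimensional Planck mass. At high density the quadratic term dominates, the expansion at freeze-out is faster than in standard cosmology, and the relic abundance comes out larger, which is what rescues candidates like the wino.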

Antiprotons from WIMP annihilation in the galactic halo constrain the RS type II model. The 5-dimensional Planck mass M_5 is subject to various constraints; antiprotons give bounds M_5 > 1000 TeV.

Non-thermal production from gravitational acceleration: at the end of inflation the acceleration was so high that massive particles could be created. They can have the right density to be the DM if their mass is of the order of the Hubble parameter at that time. Non-thermal production from particle decays is another non-standard case which is not ruled out.

Then there is the possibility of neutralino production from a decaying scalar field. In string theories, the extra dimensions may be compactified as orbifolds or Calabi-Yau manifolds. The surface shown was a solution of an equation such as z_1^5 + z_2^5 = 1, with the z_i complex numbers. The size and shape of the compactified dimensions are parametrized by moduli fields \phi_1, \phi_2, \phi_3… The values of the moduli fields fix the coupling constants.

Two new parameters are needed to evade the cosmological constraints on SUSY models: the reheating temperature T_{rh} of the radiation when \phi decays, which must be above 5 MeV from BBN constraints, and the number of neutralinos produced per \phi decay divided by the \phi mass, b/m_\phi. b depends on the choice of the Kahler potential, the superpotential, and the gauge kinetic function, and hence on the high-energy theory: the compactification, the fluxes, etc. that you put in your model. By lowering the reheating temperature you can decrease the density of particles; the higher b/m_\phi, the higher the density you can get. So you can get almost any DM density you want.

Neutralinos can thus be cold dark matter candidates anywhere in the MSSM parameter space, provided one allows these other parameters to vary.

If you work with non-standard cosmology, the constraints are transferred from the low-energy to the high-energy theory. The discovery of non-thermal neutralino DM may open an experimental window on string theory.

[And it goes without saying that I find this kind of syllogism a wide stretch!].

My talk on new results from CDF May 20, 2008

Posted by dorigo in cosmology, news, personal, physics, science.
comments closed

This morning I gave my seminar at PPC08, and I was able to record it with my camera. So, rather than giving a transcript, I could in principle get away easily by pasting here a simple link to my presentation in .mpg format. However, I will only be able to do that once I get back home next week, since transferring 500 megabytes via wireless is not something I want to entertain myself with. I am thus going to put here a few of the slides, commenting on them as I did during my talk, and I will update the post next week to include the link to the video file. For now, you can get a pdf file with all the slides here.

Note: This post is dedicated to Louise Riofrio, who kindly mentions my talk in her wonderful blog today…

I started with the usual mention of the experimental apparata: the first is the Tevatron collider (see slide below), which has delivered 4 inverse femtobarns of 2-TeV proton-antiproton collisions to the CDF and D0 detectors so far. The inverse femtobarn is a unit of integrated luminosity L, which tells you how many events N are produced for a process with a given cross section \sigma: since the total proton-antiproton cross section is about 60 millibarns, 4 inverse femtobarns correspond to N = \sigma L = 0.06 b \times 4 \times 10^{15} b^{-1} = 2.4 \times 10^{14}, or a total of 240 trillion proton-antiproton collisions. I mentioned that the Tevatron expects to double the delivered luminosity if it is allowed to run through 2010, and I promised to show what that means for the precision of some critical measurements and searches.
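
The conversion above can be reproduced in a couple of lines (a trivial sketch of the arithmetic, using the same numbers quoted in the text):

sigma_total = 0.060               # total p-pbar cross section in barns (60 mb)
int_lumi    = 4.0 * 1.0e15        # 4 fb^-1 expressed in b^-1 (1 fb^-1 = 1e15 b^-1)
N = sigma_total * int_lumi        # number of collisions, N = sigma * L
print("N = %.1e proton-antiproton collisions" % N)   # 2.4e14, i.e. 240 trillion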

Next I discussed the CDF detector (see slide below). I pointed out that its original design dates back to the year 1980, and that it was constructed to discover the top quark, something it achieved in 1995, but it has produced an enormous number of excellent measurements in addition. CDF is a magnetic spectrometer where an inner tracking system made of 7 layers of silicon microstrip sensors is embedded in a large drift chamber, and the two are contained within a 1.4 Tesla solenoid. Outside the magnet are electromagnetic and hadronic calorimeters, surrounded by muon chambers.

I discussed only a few results on top physics. The top quark is a remarkable particle: the heaviest of all known elementary particles, it decays before having time to hadronize, since its natural width is an order of magnitude larger than the scale of quantum chromodynamical interactions. So we can study its properties free from the hassle of non-calculable soft QCD effects. The large mass of the top raises the questions: why is it so large ? Why is its Yukawa coupling so close to unity ? Is some form of new physics hiding in the phenomenology of top production and decay ?

Production of top quark pairs at the Tevatron occurs mainly by quark-antiquark annihilation, the diagram shown on the left in the slide below. Since each top almost always decays to a W boson and a b-quark, one can classify the final states according to the decay products of the two W bosons. In the upper right graph one can see how the possible final states break down in terms of relative rates: the most probable -and most background-ridden- is the all-hadronic final state, where both W bosons decay to quark pairs, and you get six hadronic jets from the top pairs. I also discussed single top production mechanisms, shown in the diagrams at the bottom: these have a comparable production rate, but they are much harder to extract from the data due to larger backgrounds.

I then showed a summary of top pair cross section measurements, mentioning that there are by now dozens of different determinations. The average has an uncertainty of 12%, and its agreement with NNLO predictions shows that the technology of perturbative QCD calculations is in good shape.

After discussing one measurement of top cross section in detail, I went on with mass determinations. The technology of these measurements has improved greatly in the past few years, and by now the top mass is measured by CDF with a 1.1% uncertainty. The average with D0 has been carried out on results obtained with 2/fb of data, and it produces M_t = 172.6 \pm 1.4  GeV, which is a 0.8% uncertainty. The top quark mass is important for the building of models of new physics beyond the standard model, and of course it provides a stringent constraint to electroweak fits of standard model observables. It is foreseen that the full Run II dataset in 2010 will allow the Tevatron to reach a 1 GeV precision on the top mass, or even slightly better than that.

I next showed the combined measurement of the single top production cross section. A very complicated and advanced technique, using evolved neural networks and genetic algorithms, allows one to optimize the measurement, and a 3.7-sigma evidence for the signal is obtained by CDF. This is an unlucky chance, since the expected sensitivity exceeded 5 sigma. But we have already collected enough data to grant the canonical “observation-level” significance in the near future, as the data gets analyzed. I also mentioned that future measurements will allow a determination of the V_{tb} matrix element with a precision of 7%.

As far as new results on B physics are concerned, I only showed a couple. One is the discovery of the exhilarating \Xi_b baryons, which are seen through their exclusive decay chain to J/\psi mesons and \Xi baryons, with the latter in turn producing two pions and a proton at two separate vertices. The other is the new CDF limit on the branching fraction of B_s mesons to muon pairs, a process heavily suppressed in the standard model which is enhanced in SUSY models. I discussed the latter result elsewhere recently.

I then presented two searches for supersymmetry recently performed by CDF: one for chargino-neutralino production in events with three leptons and missing transverse energy, and another for squark and gluino signatures in jets plus missing transverse energy. I also discussed these results in this blog, in a recent series of posts about dark matter searches at colliders.

I showed results on the W mass and width measurements, for which CDF has the best determinations in the world. In particular, the W mass measurement has been produced with only 200 inverse picobarns of data, a twentieth of the data we have collected thus far: CDF may eventually reach an accuracy below 20 MeV on the W mass.

Diboson production has been observed in all its manifestations by CDF: the latest ones were WZ and ZZ production, which may give rise to spectacularly clean events (see the event display shown in the slides below: a perfect WZ candidate). The measurements of cross section are in excellent agreement with SM predictions.

Finally, I discussed the current limits on Higgs boson production. I think I have discussed this particular topic frequently enough in this blog to allow myself to skip a description of my slides here. I concluded my talk by mentioning the string of successes of CDF in the recent past, and the prospects for precision SM measurements and the reach of Higgs searches. I pointed out that CDF is the longest-lasting physics experiment ever (yeah yeah, if we exclude the pitch drop experiment)…

There were several questions from the audience, most of them centered on Higgs boson limits and searches. I was of course happy to answer them, and in particular to show that the results have kept improving faster than the increase in luminosity they relied upon. In conclusion, it is always a great pleasure to present CDF results… A remarkable experiment indeed!

Highlights from the morning talks at PPC08 May 19, 2008

Posted by dorigo in astronomy, cosmology, news, physics, science.
comments closed

The conference on the Interconnections between particle physics and cosmology, PPC2008, started this morning in the campus of the University of New Mexico. The conference features a rather relaxed, informal setting where speakers get a democratic 30′ each (plus 5′ for questions), and they do not frown at the repeated interruptions to their talks by questions from a self-forgiving audience.

This morning I listened to six talks, and I managed not to fall asleep during any of them. Quite a result, if you take into account the rather long trip I had yesterday, and the barely 4 hours of sleep I could manage last night. This is a sign that the talks were interesting to me. Or at least that I need to justify to myself having traveled 22 hours and spending a week in a remote, desertic place (sorry Carl).

Here is a list of the talks, with very brief notes (which, my non-expert readers will excuse me, I cannot translate to less cryptic lingo due to lack of time):

  • The first talk was by Eiichiro Komatsu, from Austin, who discussed the “WMAP 5-year results and their implications for inflation“. Eiichiro reviewed the mass of information that can be extracted from WMAP data, and the results one can obtain on standard cosmology from the combination of WMAP constraints and other inputs from baryon acoustic oscillations (which one derives from the distribution of galaxies in the universe), supernovae, HST data and the like. He discussed the flatness of the universe (it is very flat, although not perfectly so), the level of non-gaussianity in the distribution of primordial fluctuations (things are about as gaussian as they can be), the adiabaticity relation between radiation and matter (which can be tested by cross-correlations in the power spectrum), and scale invariance (n_s is found to be smaller than one at the 2-sigma level, and when combined with additional input on the baryon density the deviation from 1 can reach the 3.4-sigma level).
  • Riccardo Cerulli then talked about the “Latest results from the DAMA-LIBRA collaboration“. I discussed these results in a post about a month ago, and Riccardo did not show anything I had not already seen, although his presentation was much, much better than the one I had listened to in Venice. In short, DAMA members believe their signal, which now stands out at 8.2 standard deviations, and they stand by it. Riccardo insisted on the model-independence of the result, while confronted with several questions by an audience that wouldn’t be convinced about the solidity of the analysis, and less so about the interpretation in terms of a dark matter candidate. DAMA has collected so far an exposure of 0.53 ton-years, and is still taking data. I wonder if they are after a day-night variation or what, since it does not make much sense to keep increasing a signal whose nature is (this much is sure by now) systematic.
  • Rupak Mahapatra talked just after Riccardo about the “First 5-tower results from CDMS”, another direct search for dark matter candidates. I also discussed the results of their work in a recent post (I am surprised to be able to say that, and rather proud of it), so I will not indulge in the details here either. Basically, they can detect both the phonons from the nuclear recoil of a WIMP in their germanium detector, and the charge signal. Their detectors are disks of germanium operated at 40 millikelvin. On the phonon side there are four quadrants of athermal phonon sensors, where a small energy release from the phonon disrupts Cooper pairs and the change in resistivity is easily detected. On the charge side, two concentric electrodes give an energy measurement and veto capability. The full shebang is well shielded, with exotic materials such as old lead from 100-year-old ships fished out of the ocean (old lead is not radioactive anymore). The experiment tunes the cuts defining the signal region so as to accept about half a background event. They observed zero events, and set stringent limits on the mass versus cross section plane of a WIMP candidate. They plan to upgrade their device to a 1000 kg detector, which will make many things easier on the construction side, but which will run into non-rejectable neutron backgrounds at some point.
  • Alexei Safonov talked about the “Early physics with CMS“. Alexei discussed the plans of the LHC for 2008 and 2009, the collected luminosity that we can expect for CMS and ATLAS, and the expectations for analyses of SUSY and other searches. He was quite down-to-earth in his predictions, saying that the experiments are unlikely to produce very interesting results before the end of 2009. In 2008 we expect to collect 40 inverse picobarns of 10 TeV collisions, while in 2009, from 7 months of running starting in June, the expectation is about 2.4 inverse femtobarns. It goes without saying that the data collected until the end of 2009 might well be insufficient even for a standard model Higgs boson discovery.
  • Teruki Kamon talked about “Measuring DM relic density at the LHC and perspectives on inflation“. He pointed to a recent paper at the beginning of his talk: arXiv:0802.2968 [hep-ph]. Teruki considered the coannihilation region of SUSY, where there is a small mass difference \Delta M between the neutralino and the stau, making the two likely to interact and creating a particular phenomenology. This region of parameter space at high \tan \beta can be accessed by searches for tau pairs, which arise at the end of the gluino-squark decay chain. Through a measurement of the tau pair masses and endpoints, the masses of SUSY particles can be determined with 20 GeV accuracy. In particular, the ratio of gluino to neutralino masses can be measured rather well. With just 10 inverse femtobarns of data Teruki claims that one can get a very small error on the two parameters M_0 and M_{1/2}. A final plot showing the resulting constraints on \Omega_\chi versus \Delta M raised some eyebrows, because it showed an ellipse while the model dependence on \Delta M is exponential (the suppression of the coannihilation goes as e^{-\Delta M/20}, as illustrated by the short numerical sketch after this list), and one would thus expect a fancier contour of constraints. In any case, if nature has chosen this bizarre manifestation, the LHC experiments are likely to measure things accurately with a relatively small bounty of data.
  • U. Oberlack was the last to talk, discussing “Dark matter searches with the XENON experiment“, another setup for direct dark matter detection. Xenon as a detector medium is interesting because it has several isotopes, which give sensitivity to both spin-independent and spin-dependent interactions of WIMPs with nuclei. In principle, if one detected a signal, changing the isotope mixture would make the measurement sensitive to the details of the interaction. Liquid xenon has a high atomic number, so it is self-shielding against backgrounds. The experiment is located in the Gran Sasso laboratory in Italy, and it has taken data with a small “proof of principle” setup which nevertheless allowed meaningful limits to be obtained on the mass versus cross section plane. They plan to build a much larger detector, with a ton of xenon: since they can detect the position of their signals, and have a fiducial region which is basically free of backgrounds, scaling up the detector size is an obvious improvement, since the fiducial region grows quickly. He showed a nice plot of the cross section sensitivity of different experiments versus time, where one sees three main trends in the past, depending on the technology the experiments have been based on. Xenon as a medium appears to be producing a much better trend of sensitivity versus time, and one expects it will dominate the direct searches in the near future.
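
As promised in the note on Teruki's talk above, here is the trivial numerical illustration of that exponential suppression (my own sketch, taking the quoted e^{-\Delta M/20} scaling at face value and assuming \Delta M is measured in GeV):

import math

# Relative coannihilation suppression factor exp(-Delta_M / 20), the scaling
# quoted in the talk summary above (Delta_M assumed to be in GeV).
for dM in (5.0, 10.0, 20.0, 40.0):
    print("Delta M = %5.1f GeV  ->  suppression ~ %.2f" % (dM, math.exp(-dM / 20.0)))
# The exponential dependence on Delta M is why a simple elliptical contour in
# the Omega_chi versus Delta M plane looked surprising.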

I will complement the above quick writeup with links to the talk slides as they become available…