
Post summary – April 2009 May 1, 2009

Posted by dorigo in astronomy, Blogroll, cosmology, internet, news, personal, physics, science, social life.

As the less distracted among you have already figured out, I have permanently moved my blogging activities to www.scientificblogging.com. The reasons for the move are explained here.

Since I know that this site continues to be visited -because the 1450 posts it contains draw traffic regardless of the inactivity- I am providing monthly updates here of the pieces I write at the new site. Below is a list of posts published last month there.

The Large Hadron Collider is Back Together – announcing the replacement of the last LHC magnets

Hera’s Intriguing Top Candidates – a discussion of a recent search for FCNC single top production in ep collisions

Source Code for the Greedy Bump Bias – a do-it-yourself guide to study the bias of bump fitting

Bump Hunting II: the Greedy Bump Bias – the second part of the post about bump hunting, and a discussion of a nagging bias in bump fitting

Rita Levi Montalcini: 100 Years and Still Going Strong – a tribute to Rita Levi Montalcini, Nobel prize for medicine

The Subtle Art of Bump Hunting – Part I – a discussion of some subtleties in the search for new particle signals

Save Children Burnt by Caustic Soda! – an invitation to donate to Emergency!

Gates Foundation to Chat with Bloggers About World Malaria Day – announcing a teleconference with bloggers

Dark Matter: a Critical Assessment of Recent Cosmic Ray Signals – a summary of Marco Cirelli’s illuminating talk at NeuTel 2009

A Fascinating New Higgs Boson Search by the DZERO Experiment – a discussion on a search for tth events recently published by the Tevatron experiment

A Banner Worth a Thousand Words – a comment on my new banner

Confirmed for WCSJ 2009 – my first post on the new site

Things I should have blogged on last week April 13, 2009

Posted by dorigo in cosmology, news, physics, science.

It rarely happens that four days pass without a new post on this site, but it is never because of a lack of things to report on: the world of experimental particle physics is wonderfully active and always entertaining. Usually hiatuses are due to a bout of laziness on my part. In this case, I can blame Vodafone, the provider of the wireless internet service I use when I am on vacation. From Padola (the place in the eastern Italian Alps where I spent the last few days) the service is horrible, and I sometimes lack the patience to find the moment of outburst when bytes flow freely.

Things I would have wanted to blog on during these days include:

  • The document describing the DZERO search for a CDF-like anomalous muon signal is finally public, about two weeks after the talk which announced the results at Moriond. Having had an unauthorized draft in my hands, I have a chance of comparing the polished with the unpolished version… Should be fun, but unfortunately unbloggable, since I owe some respect to my colleagues in DZERO. Still, the many issues I raised after the Moriond seminar deserve to be discussed in light of an official document.
  • DZERO also produced a very interesting search for t \bar t h production. This is the associated production of a Higgs boson and a pair of top quarks, a process whose rate is made significant by the large coupling of top quarks to Higgs bosons, by virtue of the large top quark mass. By searching for a top-antitop signature and the associated Higgs boson decay to a pair of b-quark jets, one can investigate the existence of Higgs bosons in the mass range where the b \bar b decay is most frequent -i.e., the region where all indirect evidence puts it. However, tth production is invisible at the Tevatron, and very hard at the LHC, so the DZERO search is really just a check that there is nothing sticking out which we might have missed by simply forgetting to look there. In any case, the signature is extremely rich and interesting to study (I had a PhD student doing this for CMS a couple of years ago), hence my interest.
  • I am still sitting on my notes for Day 4 of the NEUTEL2009 conference in Venice, which included a few interesting talks on gravitational waves, CMB anisotropies, the PAMELA results, and a talk by Marco Cirelli on dark matter searches. With some effort, I should be able to organize these notes in a post in a few days.
  • And new beautiful physics results are coming out of CDF. I cannot anticipate much, but I assure you there will be much to read about in the forthcoming weeks!

Neutrino Telescopes day 2 notes March 12, 2009

Posted by dorigo in astronomy, cosmology, news, physics, science.

The second day of the “Neutrino Conference XIII” in Venice was dedicated to, well, neutrino telescopes. I have written down in stenographical fashion some of the things I heard, and I offer them to those of you who are really interested in the topic, without much editing. Besides, making sense of my notes takes quite some time, more than I have of it tonight.

So, I apologize for spelling mistakes (the ones I myself recognize post-mortem), in addition to the more serious conceptual ones coming from missed sentences or errors caused by my poor understanding of English, of the matter, or of both. Also, I apologize to those of you who would have preferred a more succinct, readable account: as Blaise Pascal once put it, “I have made this letter longer than usual, because I lack the time to make it short”.

NOTE: the links to slides are not working yet – I expect that the conference organizers will fix the problem tomorrow morning.

Christian Spiering: Astroparticle Physics, the European strategy (slides here)

Spiering gave some information about two new European bodies: ApPEC and ASPERA. ApPEC has two committees which offer advice to national funding agencies and improve links and communication between the astroparticle physics community and the scientific programmes of organizations like CERN, ESA, etc. ASPERA was launched in 2006 to produce a roadmap for astroparticle physics in Europe, in close coordination with ASTRONET and with links to CERN strategy bodies.

Roadmapping means, first, a science case, an overview of the status, and some recommendations for convergence; second, a critical assessment of the plans and a calendar of milestones, coordinated with ASTRONET.

For dark matter and dark energy searches, Christian displayed a graph showing the WIMP cross section as a function of time, with the reach of present-day experiments. In 2015 we should reach cross sections of about 10^-10 picobarns; our sensitivity is now at some 10^-8. The reach depends on background, funding, and infrastructure. The idea is to go toward two-ton-scale, zero-background detectors. Projects: Zeplin, Xenon, others.

In an ideal scenario, LHC observations of new particles at the weak scale could place these observations in a well-confined particle physics context, and direct detection would be supported by indirect signatures. In case of a discovery, smoking-gun signatures of direct detection such as directionality and annual variations would be measured in detail.

Properties of neutrinos: direct mass measurement efforts are KATRIN and Troitzk. Double beta decay experiments are Cuoricino, NEMO-3, Gerda, Cuore, et al. The KKGH group claimed a signal corresponding to masses of a few tenths of eV, but a normal hierarchy implies a mass of order 10^-3 eV for the lightest neutrino. Experiments are in operation (Cuoricino, NEMO-3) or expected to start by 2010-2011; SuperNEMO will start in 2014.

A large infrastructure for proton decay is advised. For charged cosmic rays, depending on which part of the spectrum one looks at, there are different kinds of physics contributing and explorable.

The case for Auger-North is strong, high-statistics astronomy with reasonably fast data collection is needed.

For high-energy gamma rays, the situation has seen enormous progress over the last 15 years, mostly thanks to imaging atmospheric Cherenkov telescopes (IACTs): Whipple, Hegra, CAT, Cangaroo, Hess, Magic, Veritas; there are also wide-angle devices. Of the existing air Cherenkov telescopes, Hess and Magic are running, and very soon Magic will grow into Magic-II. Whipple runs a monitoring telescope.

There are new plans for MACE in India, something between Magic and Hess. CTA and AGIS are in their design phase.

Aspera’s recommendations: the priority of VHE gamma astrophysics is CTA. They recommend design and prototyping of CTA and selection of sites, and proceeding decidedly towards start of deployment in 2012.

For point neutrino sources, there has been tremendous progress in sensitivity over the last decade: a factor of 1000 in flux sensitivity within 15 years. IceCube will deliver what it has promised by 2012.

For gravitational waves, there are LISA and VIRGO. The frequency band probed by LISA is around 10^-2 Hz; VIRGO will go to 100-10000 Hz. The reach is of several to several hundred sources per year. The Einstein Telescope, an underground gravitational-wave detector, could access thousands of sources per year; its construction would start in 2017. The conclusions: Einstein is the long-term future project of ground-based gravitational-wave astronomy in Europe. A decision on funding will come after first detections with enhanced LIGO and VIRGO, most likely after collecting about a year of data.

In summary, the budget will increase by a factor of more than two in the next decade. Km3net, the megaton detector, CTA, and ET will be the experiments taking the largest share. We are moving into regions with a high discovery potential, with an accelerated increase of sensitivity in nearly all fields.

K. Hoffmann, Results from IceCube and Amanda, and prospects for the future (slides here)

IceCube will become the first instrumented cubic-kilometer neutrino telescope. Amanda-II consists of 677 optical modules embedded in the ice at depths of 1500-2000 m; it has been a testbed for IceCube and for deploying optical modules. IceCube has been under construction for the last several years: strings of PMTs have been deployed in the ice, and 59 of them are operating.

The rates: IC40 has 110 neutrino events per day. They are getting close to 100% live time, with 94% in January. IceCube has the largest effective area for muons, thanks to the long track length. The energy sensitivity extends through the TeV-PeV range.

Ice properties are important to understand. A dust logger measures dust concentration, which is connected to the attenuation length of light in ice. There is a thick layer of dust sitting at a depth of 2000m, clear ice above, and very clear ice below. They have to understand the light yield and propagation well.

Of course one of the most important parameters is the angular resolution. As the detector got larger, it improved. One of the more exciting things this year was to see the point spread function peak at less than one degree for muons with long track lengths.

For a telescope, seeing the Moon is always reassuring. They did it: a >4 sigma effect of the Moon's shadow on cosmic rays.

Thanks to the waveforms they have in IceCube, energy reconstruction is possible even for muons that are non-minimum-ionizing: they reconstruct the energy from the number of photons along the track. Some energy resolution can be achieved, and there is progress in understanding how to reconstruct energy.

First results from point-source searches (the 40-string configuration data will be analyzed soon). Point sources are sought with an unbinned likelihood search, which takes an energy variable into account, since point sources are expected to have a harder energy spectrum than atmospheric neutrinos. From 5114 neutrino candidates in 276 days, they found one hot spot in the sky, with a significance of about 2.2 sigma after accounting for the trial factor. Next year there will be variables less sensitive to the dust model, so they might be able to say more about that one soon.
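For the curious, the logic of such an unbinned likelihood search can be sketched in a few lines. The following one-dimensional toy is entirely illustrative -all numbers are mine, not IceCube's: each event is assigned a signal PDF S_i (the point-spread function around the tested source position) and a background PDF B_i (flat), and one maximizes the likelihood as a function of the number of signal events n_s.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "sky" in one coordinate: 950 background events uniform in [0, 10] deg,
# plus 50 signal events from a source at 5 deg, smeared by a 0.7-deg PSF.
bkg = rng.uniform(0.0, 10.0, 950)
sig = rng.normal(5.0, 0.7, 50)
x = np.concatenate([bkg, sig])
N = len(x)

def S(x):
    # Signal PDF: Gaussian point-spread function around the tested position
    return np.exp(-0.5 * ((x - 5.0) / 0.7) ** 2) / (0.7 * np.sqrt(2 * np.pi))

B = 1.0 / 10.0  # background PDF: uniform over the 10-degree window

# Scan n_s and maximize log L(n_s) = sum_i log[(n_s/N) S_i + (1 - n_s/N) B_i]
ns_grid = np.arange(0, 200)
logL = np.array([np.log((ns / N) * S(x) + (1 - ns / N) * B).sum() for ns in ns_grid])
ns_best = int(ns_grid[np.argmax(logL)])

# Test statistic against the background-only hypothesis (n_s = 0); the trial
# factor from scanning many sky positions is not included in this toy.
TS = 2 * (logL.max() - logL[0])
print(ns_best, round(float(TS), 1))
```

The fitted n_s comes out near the injected 50 events; in the real search the likelihood also weighs the energy variable, and the significance is then corrected for the trial factor of scanning the whole sky.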

With seven years of data -a 3.8-year livetime- the hottest spot has a significance of 3.3 sigma. With one year of data, IceCube-22 will already be more sensitive than Amanda. IceCube and Antares are complementary, since IceCube looks at northern declinations and Antares at southern ones. The point-source flux sensitivity is down to 10^-11 GeV cm^-2 s^-1.

For GRBs, one can use a triggered search, which is an advantage; the latest results give a limit from 400 bursts. From IceCube-22, an unbinned search similar to the point-source one gives an expected exclusion power down to 10^-1 GeV cm^-2 (in E^2 dN/dE units) in most of the energy range.

The naked-eye GRB of March 19, 2008, caught the detector in test mode, with only 9 of the 22 strings taking data. The flux predicted by Bahcall peaks at 10^6 GeV at the level of 10^-1 in the same units, but the limit found is 20 times higher.

Finally, they are looking for WIMPs. A search with 104 days of livetime was recently sent for publication by the 22-string IceCube; it can reach well down in cross section.

Atmospheric neutrinos are also a probe for violations of Lorentz invariance -possibly from quantum gravity effects. The survival probability depends on energy; assuming maximal mixing, their sensitivity goes down to a part in 10^28. They are looking for a change in what one would expect for flavor oscillation: atmospheric neutrinos traverse more or less of the core of the Earth depending on where they are produced, so one gets a neutrino beam with different baselines, and a difference in the neutrino oscillation probability would show up as an energy-dependent oscillation parameter.
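As a reminder of the physics at play, here is the standard two-flavor survival probability for muon neutrinos, evaluated for a baseline equal to the Earth's diameter (the formula is textbook material; the parameter values are typical atmospheric ones, chosen by me for illustration):

```python
import numpy as np

def survival(E_GeV, L_km, sin2_2theta=1.0, dm2_eV2=2.4e-3):
    # Standard two-flavor formula:
    # P(nu_mu -> nu_mu) = 1 - sin^2(2 theta) sin^2(1.27 dm^2[eV^2] L[km] / E[GeV])
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

L = 12742.0  # km: baseline for a neutrino crossing the full Earth diameter
E = np.logspace(0, 2, 400)  # energies from 1 to 100 GeV
P = survival(E, L)

# First oscillation minimum, where 1.27 dm^2 L / E = pi/2: around 25 GeV
E_min = 1.27 * 2.4e-3 * L / (np.pi / 2)
print(round(float(E_min), 1))
```

A Lorentz-violating term would add an extra energy dependence on top of this L/E pattern, which is what makes the high statistics and wide energy range of atmospheric neutrinos such a sensitive probe.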

In the future they would like to see a high-energy extension. Ice is the only medium where one can see a coherent radio signal, an optical one, and an acoustic one too. The past season was very successful, with the addition of 19 new strings. Many analyses of the 22-string configuration are complete. Analysis techniques are being refined to exploit the size, energy threshold, and technology used. Work is underway to develop the technology to build a GZK-scale neutrino detector after IceCube is complete.

Vincenzo Flaminio, Results from Antares (slides here)

Potential sources of galactic neutrinos can be SN remnants, pulsars, microquasars, and extragalactic ones are gamma-ray bursts and active galactic nuclei. A by-product of Antares is an indirect search for dark matter, results are not ready yet.

Neutrinos from supernovae: supernova remnants act as particle accelerators, and can give hadrons and gammas from neutral pion decays. Possible sources are those found by Auger, or for example the TeV photons which come from molecular clouds.

Antares is an array of photomultiplier tubes that look at the Cherenkov light produced by muons crossing the detector. The site is off the coast of southern France, and the galactic center is visible 75% of the time. The collaboration comprises 200 physicists from many European countries. The control room in Toulon is more comfortable than the Amanda site (and this wins the understatement prize of the conference).

The depth in water is 2500m. All strings are connected via cables on the seabed. 40km long electro-optical cable connects ashore. Time resolution monitored by LED beacon in each detector storey. A sketch of the detector is shown below.

Deployment started in 2005, and the first line was installed in 2006; construction finished one year ago. In addition there is an acoustic storey and several monitoring instruments. Biologists and oceanographers are interested in what is done, not just neutrino physicists.

The detector positioning is an important point, because lines move because of sea currents. Installed a large number of transmitters along the lines, use information to reconstruct minute-by-minute the precise position of the lines.

They collect triggers at a 10 Hz rate with 12 lines. They detected 19 million muons with the first 5 lines, 60 million with the full detector.

First physics analyses are going on. Up-going neutrinos are selected; the poor S/N ratio with respect to atmospheric muons is avoided this way. The rate is of the order of two per day using the multi-line configuration.

Conclusions: Antares has successfully reached the end of construction phase. Data taking is ongoing, analyses in progress on atmospheric muons and neutrinos, cosmic neutrino sources, dark matter, neutrino oscillations, magnetic monopoles, etcetera.

David Saltzberg, Overview of the Anita experiment (slides here)

Anita flies at 120,000 ft above the ice. It is the eyepiece of the telescope; the objective is the large amount of ice of Antarctica. The detection principle was tested at SLAC with 8 metric tons of ice: one observes radio pulses from the ice, a wake-field radio signal which goes up and down in less than a nanosecond, due to its Cherenkov nature. This is called the Askaryan effect. One can observe the number of particles in the shower, and the measured field does track that number. The signal is 100% linearly polarized. The wavelength is bigger than the size of the shower, so the emission is coherent. At a PeV there are more radio quanta emitted than optical ones.

They will use this at very high energy, looking for GZK-induced neutrinos: the GZK process converts protons into neutrinos within about 50 Mpc of the sources.

The energy is at the level of 10^18 eV or higher, proper time is 50 milliseconds, longest baseline neutrino experiment possible.

Anita has a GPS antenna for position and orientation, which needs a fraction-of-a-degree resolution. It is solar powered. The antennas are pointed down 10 degrees.

This 50-page document describes the instrument.

Lucky coincidences: 70% of world’s fresh water is in antarctica, and it is the most quiet radio place. The place selects itself, so to speak.

They made a flight with a live time of 17.3 days, but this one never flew above the thickest ice, which is where most of the signal should come from.

The Askaryan effect gets distorted by the antenna response, the electronics, and thermal noise. The triggering works like any multi-level trigger: L1 requires sufficient energy in one antenna, and the same for its neighbors; L2 does a coincidence between adjacent L1 triggers; L3 goes down to 5 Hz from a start of 150 kHz.

They put a transmitter underground to produce pulses to be detected. Cross-correlation between antennas allows interferometry, which gives the position of the source. The resolution obtained on elevation is an amazing 0.3 degrees, and 0.7 degrees for azimuth. The ground pulsers make even very small effects stand out: even a 0.2-degree tilt of the detector can be spotted by looking at errors in elevation as a function of azimuth.
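The interferometric idea boils down to cross-correlating the waveforms from pairs of antennas and reading off the delay at which the correlation peaks; the delays, combined with the known antenna geometry, give the source direction. A minimal sketch with made-up waveforms (not ANITA's actual code or numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1024
t = np.arange(n)
pulse = np.exp(-0.5 * ((t - 300) / 3.0) ** 2)  # narrow impulsive signal

# Two antennas see the same pulse with a relative delay, plus a little noise
true_delay = 17  # samples
ant1 = pulse + 0.005 * rng.standard_normal(n)
ant2 = np.roll(pulse, true_delay) + 0.005 * rng.standard_normal(n)

# The lag of the cross-correlation peak estimates the delay
xc = np.correlate(ant2, ant1, mode="full")
lag = int(np.argmax(xc)) - (n - 1)
print(lag)  # recovers the 17-sample delay
```

With several antenna pairs, each recovered delay constrains the arrival direction, which is how the 0.3-degree elevation resolution is obtained.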

First pass of analysis of the data: 8.2M hardware triggers, 20,000 of which point well to the ice. After requiring upward-coming plane waves, isolated from camps and other events, a few events remain; these could be some residual man-made noise. Background estimates: thermal noise, which is well simulated and gives less than one event after all cuts, and anthropogenic impulsive noise, like iridium phones, spark plugs, and discharges from structures.

Results: having seen zero vertically polarized events surviving the cuts, they set constraints on GZK production models -the best result to date in the energy range from 10^10 to 10^13 GeV.

Anita 2 collected 27 million triggers of better quality, over deeper ice, in 30 days afloat. These are still to be analyzed, while Anita 1 is doing a second-pass deep analysis of its data. Anita 2 has better data: expect a factor 5-10 more GZK sensitivity from it.

Sanshiro Enomoto, Using neutrinos to study the Earth: Geoneutrinos (slides here)

Geoneutrinos are generated by the beta decay chains of natural isotopes (U, Th, K), all of which yield antineutrinos. With an organic scintillator, they are detected by the inverse beta decay reaction, yielding a neutron and a positron. The threshold is at 1.8 MeV: Uranium and Thorium contribute in this energy range, while the Potassium yield is below it. Only U-238 can be seen.
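That 1.8 MeV threshold is pure kinematics, and easy to check from the particle masses: for inverse beta decay on a proton at rest, \bar \nu_e + p \to n + e^+, the minimum antineutrino energy is E_thr = ((m_n + m_e)^2 - m_p^2) / (2 m_p). A quick back-of-the-envelope check (my own, not from the talk):

```python
# Masses in MeV (standard values)
m_p = 938.272  # proton
m_n = 939.565  # neutron
m_e = 0.511    # electron

# Threshold antineutrino energy for anti-nu_e + p -> n + e+ on a proton at rest
E_thr = ((m_n + m_e) ** 2 - m_p ** 2) / (2 * m_p)
print(round(E_thr, 3))  # ~1.806 MeV, the quoted 1.8 MeV threshold
```

The threshold is what cuts off the Potassium antineutrinos, whose spectrum ends below it.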

Radiogenic heat dominates Earth energetics. The measured terrestrial heat flow is 44 TW, and the radiogenic heat is 3 TW. The only direct geochemical probes are limited: the deepest borehole reaches only 12 km, and rock samples reach down to 200 km underground. Heat release from the surface peaks off America in the Pacific and in the south Indian ocean. The estimate is of 20 TW from radioactive heat: 8 from U, 8 from Th, 3 from K. Core heat flow from solidification etc. is estimated at 5-15 TW, and secular cooling at 18±10 TW.

KamLAND has seen 25 events above backgrounds, consistent with expectations.

I did not take further notes of this talk, but was impressed by some awesome plots of Earth planispheres with all sources of neutrino backgrounds, meant to figure out which is the best place for a detector studying geo-neutrinos. Check the slides for them…

Michele Maltoni, Synergies between future atmospheric and long-baseline neutrino experiments (slides here)

A global six-parameter fit of neutrino parameters was shown, including solar, atmospheric, reactor, and accelerator neutrinos, but not SNO-III yet. There is a small preference for non-zero theta_13, coming completely from the solar sector; as pointed out by G. Fogli, we do not find a non-zero theta_13 angle from atmospheric data. All we can do is point out that there might be something interesting, and suggest that experiments do their own analyses fast.

The question is: in this picture, where many experiments contribute, is there space left for atmospheric neutrinos? What is the role of atmospheric neutrino measurements? Do we need them at all?

At first sight, there is not much left for atmospheric neutrinos. Mass determination is dominated by MINOS, theta_13 is dominated by CHOOZ, atmospheric data dominate in determination of mixing angle, atmospheric neutrino measurements have highest statistics, but with the coming of next generation this is going to change. There is symmetry in sensitivity shape of other experiments to some of the parameters. On the other hand, when you include atmospheric data, the symmetry is broken in theta_13, which distinguishes between normal and inverted hierarchy.

Then there is the determination of the octant in \sin^2 \theta_{23} and of \Delta m^2_{31}. Also, the introduction of atmospheric data introduces a modulation in the \delta_{CP} - \sin \theta_{13} plot. Will this usefulness continue in the future?

Sensitivity to theta_13: apart from hints mentioned so far, atmospheric neutrinos can observe theta_13 through matter effects, MSW. In practice, the sensitivity is limited by statistics: at E=6 GeV the ATM flux is already suppressed; background comes from \nu_e \to \nu_e events which strongly dilute the \nu_\mu \to \nu_e events. Also, the resonance occurs only for neutrinos OR antineutrinos, but not both.

As far as resolution goes, megaton detectors are still far in the future, but long-baseline experiments are starting now.

One concludes that the sensitivity to theta_13 is not competitive with dedicated LBL and reactor experiments.

Completely different is the issue with other properties, since the resonance can be exploited once theta_13 is measured: there is a resonant enhancement of neutrino (antineutrino) oscillations for a normal (inverted) hierarchy, mainly visible at high energy, >6 GeV. The effect can be observed if the detector can discriminate charge, or, if no charge discrimination is possible, if the number of neutrinos and antineutrinos differs.

Sensitivity to the hierarchy depends on charge discrimination for muon neutrinos. Sensitivity to the octant: in the low-energy region (E<1 GeV), for theta_13=0, there is an excess of \nu_e flux depending on which side of maximal theta_23 lies. Otherwise, there are lots of oscillations, but the effect persists on average, and it is present for both neutrinos and antineutrinos. At high energy, E>3 GeV, for non-zero theta_13 the MSW resonance produces an excess of electron-neutrino events; the resonance occurs only for one kind of neutrino (neutrino vs antineutrino).

So in summary one can detect many features with atmospheric neutrinos, but only with some particular characteristics of the detector (charge discrimination, energy resolution…).

Without atmospheric data, only K2K can say something on the neutrino hierarchy for low theta_13.

LBL experiments have poor sensitivity due to parameter degeneracies. Atmospheric neutrinos contribute in this case. The sensitivity to the octant is almost completely dominated by atmospheric data, with only minor contributions by LBL measurements.

One final comment: there might be hints of neutrino hierarchy in high-energy data. If theta_13 is really large, there can be some sensitivity to neutrino mass hierarchy. So the idea is to have a part of the detectors with increased photo-coverage, and use the rest of the mass as a veto: the goal is to lower the energy threshold as much as possible, to gain sensitivity to neutrino parameters with large statistics.

Atmospheric data are always present in any long-baseline neutrino detector: ATM and LBL provide complementary information on neutrino parameters, information in particular on hierarchy and octant degeneracy.

Stavros Katsanevas, Toward a European megaton neutrino observatory (slides here)

Underground science: interdisciplinary potential at all scales. Galactic supernova neutrinos, galactic neutrinos, SN relics, solar neutrinos, geo-neutrinos, dark matter, cosmology -dark energy and dark matter.

Laguna is aimed at defining and realizing this research programme in Europe. It includes a majority of the European physicists interested in the construction of very massive detectors realized in one of three technologies using liquids: water, liquid argon, and liquid scintillator.

Memphys, Lena, Glacier. Where could we put them? The muon flux goes down with the overburden, so one has to examine the sites by their depth. In Frejus there is a possibility to put a detector between the road and the train tracks. Frejus rock is neither hard nor soft; hard rock can become explosive because of stresses, and is not good. Another site is Pyhasalmi in Finland, but there the rock is hard.

Frejus is probably the only place where one can put water Cherenkov detectors. For liquid argon, we have ICARUS (hopefully starting data taking in May) and others (LANNDD, GLACIER, etc.). Glacier is a 70 m tank with several novel concepts: a safe LNG tank, developed over many years by the petrochemical industry. R&D includes readout systems and electronics, safety, HV systems, and LAr purification. One must think about getting an intermediate-scale detector.

The physics scope is a complementary program: a lot of reach in Memphys in searches for positron-pizero decays of protons, better reach for the kzero mode in liquid argon. Proton lifetime expectations are at 10^36 years.

By 2013-2014 we will know whether sin^2 theta_13 is larger than zero.

The European megaton detector community (3 liquids), in collaboration with its industrial partners, is currently addressing common issues (sites, safety, infrastructures, non-accelerator physics potential) in the context of LAGUNA (EUI FP8). Cost estimates will be ready by July 2010.

David Cowan, The physics potential of IceCube's deep core sub-array (slides here)

A new sub-array in IceCube, called deep core: ICDC. It was originally conceived as a way to improve the sensitivity to WIMPs: denser sub-arrays lower the energy threshold, giving an order of magnitude decrease in the low-energy reach. There are six special strings plus seven nearby IceCube strings. The vertical spacing is of 7 meters, with 72-meter horizontal inter-string spacing: a x10 density with respect to IceCube.

The effective scattering length in deep ice, which is very clear, is longer than 40 meters. This gives a better possibility to do a calorimetric measurement.

The deep core is at the bottom center. They take the top modules in each string as an active veto for backgrounds coming from muon events going down. On the sides, three layers of IC strings also provide a veto. These beat down the cosmic background a lot.

The ICDC veto algorithms: one runs online, and finds the event light intensity, the weighted center of gravity, and the time. They do a number of things and come up with a 1:1 S/N ratio. So ICDC improves the sensitivity to WIMPs, to neutrino sources in the southern sky, and to oscillations. For WIMPs, an annihilation can occur in the center of the Earth or of the Sun. Annihilations to b \bar b pairs or tau-tau pairs give soft neutrinos, while those into W boson pairs yield hard ones. This way, they extend the reach to masses of less than 100 GeV, at cross sections of 10^-40 cm^2.

In conclusion, ICDC can analyze data at lower neutrino energy than previously thought possible. It improves overlap with other experiments. It provides for a powerful background rejection, and it has sufficient energy resolution to do a lot of neutrino oscillation studies.

Kenneth Lande, Projects in the US: a megaton detector at Homestake (slides here)

DUSEL at Homestake, in South Dakota. There are four tanks of water Cherenkov in the design. Nearby there’s the old site of the chlorine experiment. Shafts a km apart.

DUSEL will be an array of 100-150 kT fiducial mass water Cherenkov detectors, at 1300 km distance from FNAL. The beam goes from 0.7 MW to 2.0 MW as the project goes along. Eventually 100 kT of argon will be added. A picture below shows a cutaway view of the facility.

Goals are accelerator-based theta_13, a look at the neutrino mass hierarchy, and CP violation through delta_CP. The non-accelerator program includes studies of proton decay, relic SN neutrinos, prompt SN neutrinos, atmospheric neutrinos, and solar neutrinos. They can build tanks up to 70 m wide, but settled on 50-60 m. The plan is to build three modules.

Physics-wise, the FNAL beam has oscillated and disappeared at an energy around 4 GeV. The rate is of 200,000 CC events per year assuming 2 MW power (no oscillation, raw events). Electron-neutrino appearance for nu and antinu as a function of energy gives the oscillation, and the mass hierarchy.

The reach in theta_13 is below 10^-2. For nucleon decay they are looking in the range of 10^34 years: 300 kT for 10 years means 10^35 proton-years. The search is also sensitive to the K-nu decay mode, at the level of 8×10^33 years.
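The quoted exposure is a one-line estimate. Counting only the free (hydrogen) protons in the water -the counting that reproduces the number above; including the protons bound in oxygen would give roughly five times more- one gets:

```python
N_A = 6.022e23           # Avogadro's number, per mole
mass_g = 300e9           # 300 kton of water in grams (1 kton = 10^9 g)
molecules = mass_g / 18.0 * N_A   # ~1.0e34 H2O molecules (18 g/mol)

free_protons = 2 * molecules      # two hydrogen nuclei per molecule
all_protons = 10 * molecules      # including the eight protons in oxygen

exposure = free_protons * 10      # proton-years in ten years of running
print(f"{exposure:.1e} proton-years")  # ~2e35, i.e. the 10^35 ballpark
```

With a handful of background events and no signal, such an exposure translates into lifetime limits in the quoted 10^34 range.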

DUSEL can choose the overburden. A deep option can go deeper than Sudbury.

US power reactors are far from Homestake. Typical distance is 500 km. The neutrino flux from reactors is 1/30 of that of SK.

For a SN in our galaxy they expect about 100,000 events in 10 seconds. For a SN in M31 they expect about 10-15 events in a few seconds.

Detector construction: excavation, installing a water-tight liner… The financial timetable is uncertain. At the moment water is being pumped down. Rock studies can start in September.

And that would be all for today… I heard many other talks, but cannot bring myself to comment on those. Please check the conference site, http://neutrino.pd.infn.it/NEUTEL09/, for the slides of the other talks!

No CHAMPS in CDF data January 12, 2009

Posted by dorigo in news, physics, science.

A recent search for long-lived charged massive particles in CDF data has found no signal in 1.0 inverse femtobarns of proton-antiproton collisions produced by the Tevatron collider at Fermilab.

Most subnuclear particles we know have very short lifetimes: they disintegrate into lighter bodies by the action of strong, electromagnetic, or weak interactions. In the first case the particle is by necessity a hadron -one composed of quarks and gluons- and the strength of the interaction that disintegrates it is evident from the fact that the life of the particle is extremely short: we are talking about a billionth of a trillionth of a second, or even less. In the second case, the electromagnetic decay takes longer, but still in most instances a ridiculously small time; the neutral pion, for instance, decays to two photons (\pi^\circ \to \gamma \gamma) in about 8 \times 10^{-17} seconds: eighty billionths of a billionth of a second. In the third case, however, the weakness of the interaction manifests itself in decay times that are typically long enough that the particle is indeed capable of traveling for a while.
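To put numbers on "traveling for a while": the mean distance a particle covers before decaying is \gamma \beta c \tau = (p/m) c \tau. A small sketch comparing a weak decay to the electromagnetic \pi^\circ decay quoted above (the lifetimes are standard textbook values):

```python
C = 2.998e8  # speed of light, m/s

def decay_length_m(p_GeV, m_GeV, tau_s):
    # Mean decay length = gamma * beta * c * tau = (p / m) * c * tau
    return (p_GeV / m_GeV) * C * tau_s

# Charged pion (weak decay, tau = 2.6e-8 s): hundreds of meters at 10 GeV
print(decay_length_m(10.0, 0.1396, 2.6e-8))

# Neutral pion (electromagnetic, tau = 8e-17 s): a couple of microns at 10 GeV
print(decay_length_m(10.0, 0.135, 8e-17))
```

Nine orders of magnitude in lifetime become nine orders of magnitude in flight distance, which is why weakly decaying particles leave measurable tracks while the others decay essentially on the spot.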

Currently, the longest-living subnuclear particle is the neutron, which lives about 15 minutes before undergoing the weak decay n \to p e \nu, the well-studied beta-decay process which is at the basis of a host of radioactive phenomena. The neutron is very lucky, however, because its long life is due not only to the weakness of virtual W-boson exchange, but also to the fact that this particle happens to have a mass just a tiny bit larger than the sum of the masses of the bodies it must decay into: this translates into a very, very small “phase space” for the decay products, and a small phase space means a small decay rate.
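The phase-space argument can be made quantitative with Sargent's rule, which says that the rate of a three-body weak decay grows roughly as the fifth power of the energy release Q. The toy comparison below is my own back-of-envelope, deliberately ignoring matrix elements and Cabibbo factors; it only illustrates how the neutron's tiny Q stretches its lifetime.

```python
# Sargent's rule: three-body weak decay rate ~ G_F^2 * Q^5.
# Matrix elements and Cabibbo suppression are ignored on purpose;
# this only shows how a small energy release Q stretches the lifetime.
Q_neutron = 0.782   # MeV: m_n - m_p - m_e, the energy release in beta decay
Q_muon    = 105.7   # MeV: essentially the muon mass
suppression = (Q_muon / Q_neutron) ** 5
print(f"phase-space suppression: {suppression:.1e}")  # ~4.5e10
```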

Of course, we have only discussed unstable particles so far: but the landscape of particle physics also includes stable particles, namely the proton, the electron, the photon, and (as far as we know) the neutrinos. We would be very surprised if this set included particles we have not discovered yet, but we should keep an open mind.

A stable, electrically-neutral massive particle would be less easy to detect than one might naively think. In fact, most dark-matter searches aimed at detecting a signal of a stable massive particle are tuned to be sensitive to very small signals: if a gas of neutralinos pervaded the universe, we might be unaware of their presence until we looked at rotation curves of galaxies and other non-trivial data; and even then, a direct signal in a detector would require extremely good sensitivity, since a stable neutral particle would typically be very weakly interacting, which means that swarms of such bodies could easily fly unscathed through whatever detector we cook up. Despite that, we are of course looking for such things, with CDMS, DAMA, and other dedicated dark-matter experiments.

The existence of a charged massive stable particle (CHAMP for friends), however, is harder to buy. An electrically-charged particle does not go unseen for long: its electromagnetic interaction is liable to betray it easily. However, there is no need to require that a CHAMP be THE explanation of the missing mass in the universe. These particles could be rare, or even non-existent in the Universe today, and in that case our only chance to see them would be in hadron-collision experiments, where we could produce them if the energy and collision rate are sufficient.

What would happen in the event of the creation of a CHAMP in a hadron collision is that the particle would slowly traverse the detector, leaving an ionization trail. A weakly-interacting CHAMP (and to some extent even a strongly-interacting one) would not interact much with the heavy layers of iron and lead making up the calorimeter systems with which collider experiments are equipped, and so it would be able to punch through, leaving a signal in the muon chambers before drifting away. What we could see, if we looked carefully, would be a muon track which ionizes the gas much more than muons usually do, because CHAMPs are heavy, and so they kick atoms around as they traverse the gas. Also, the low velocity of the particle (to be clear, here “low” means “only a few tenths of the speed of light”!) would manifest itself in a delay in the detector signals as the particle traverses them in succession.

CDF has searched for such evidence in its data, by selecting muon candidates and determining whether their crossing time and ionization are compatible with muon tracks or not. More specifically, by directly measuring the time needed for the track to cross the 1.5-meter radius of the inner tracker, together with the particle momentum, the mass of the particle can be inferred. That is easier said than done, however: a muon takes about 5 nanoseconds to traverse the 1.5 meters of the tracker, and to discern a particle moving half that fast, one is required to measure this time interval with a resolution better than a couple of nanoseconds.
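The kinematics behind the mass determination is simple relativity: in natural units m = p \sqrt{1/\beta^2 - 1}, with \beta measured from the crossing time. A sketch of the arithmetic (the function names and the 10 ns example are mine, for illustration only):

```python
import math

C = 0.299792458  # speed of light in m/ns

def beta_from_tof(path_m, time_ns):
    """Velocity in units of c, from the time taken to cross a known path."""
    return path_m / (C * time_ns)

def mass_from_p_beta(p_gev, beta):
    """m = p * sqrt(1/beta^2 - 1), in GeV (natural units)."""
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

# A 100 GeV/c track crossing the 1.5 m tracker radius in 10 ns (beta ~ 0.5)
beta = beta_from_tof(1.5, 10.0)
print(round(mass_from_p_beta(100.0, beta)))  # ~173 GeV: a heavy CHAMP candidate
```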

The CDF Time-Of-Flight system (TOF) is capable of doing that. One just needs to determine the production time with enough precision, and then the scintillation bars which are mounted just outside of the tracking chamber (the COT, for central outer tracker) will measure the time delay. The problem with this technique, however, is that the time resolution has a distinctly non-Gaussian behaviour, which may introduce large backgrounds when one selects tracks compatible with a long travel time. The redundancy of CDF comes to the rescue: one can measure the travel time of the particles through the tracker by looking at the residuals of the track fit. Let me explain.

A charged particle crossing the COT leaves an ionization trail. These ions are detected by 96 planes of sense wires along the path, and from the pattern of hit wires the trajectory can be reconstructed. However, each wire records the released charge at a different time, because the wires are located at different distances from the track, and the ions take some time to drift in the electric field before their signal is collected. The hit times are used in the fit that determines the particle trajectory: the residuals of these time measurements after the track is fit provide a measurement of the particle velocity. In fact, a particle moving slowly creates ionization signals that are progressively delayed as a function of radius; these residuals can be used to determine the travel time.
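In other words, the extra delay at each wire grows linearly with the wire's radius, with slope (1/v - 1/c), and fitting that slope yields the velocity. A toy version with invented numbers (the noise level, wire geometry, and random seed are all my own assumptions, not CDF's):

```python
import random

# Toy fit: for a slow track, hit-time residuals grow linearly with radius,
# with slope (1/v - 1/c). A least-squares fit of the slope recovers v.
C = 0.299792458                # m/ns
v_true = 0.5 * C               # a CHAMP-like track at half the speed of light
radii = [0.4 + 0.011 * i for i in range(96)]   # 96 wire planes, 0.4-1.45 m
random.seed(1)
residuals = [(1/v_true - 1/C) * r + random.gauss(0, 0.2) for r in radii]

# Least-squares slope of residual (ns) versus radius (m)
n = len(radii)
mean_r = sum(radii) / n
mean_t = sum(residuals) / n
slope = sum((r - mean_r) * (t - mean_t) for r, t in zip(radii, residuals)) \
        / sum((r - mean_r) ** 2 for r in radii)

v_fit = 1.0 / (slope + 1/C)
print(f"fitted beta = {v_fit / C:.2f}")  # close to the true value of 0.5
```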

The resulting measurement has a precision three times worse than that coming from the dedicated TOF system (fortunately, I would say, otherwise the TOF itself would be a rather useless tool); however, the uncertainty on the residual-based measurement has a much more Gaussian behaviour! This is an important asset: by requiring that the two time measurements are consistent with one another, one can effectively remove the non-Gaussian behavior of the TOF measurement.

By combining crossing time -i.e. velocity- and track momentum measurements, one may then derive a mass estimate for the particle. The distribution of reconstructed masses for CHAMP candidates is shown in the graph below, with the distribution expected for a 220-GeV CHAMP signal overlaid. It is easy to see that the mass resolution provided by the method is rather poor; despite that, a high-mass charged particle would be easy to spot if it were there.

One note of warning about this graph: the distribution shows masses ranging all the way from 0 to 100 GeV, but that does not mean that these tracks have such masses: the vast majority of them are real muons, for which the velocity is underestimated due to instrumental effects. In a sense, the very shape of the curve describes the resolution of the time measurement provided by the analysis.

The absence of tracks compatible with a mass larger than 120 GeV in the data allows one to place model-independent limits on the CHAMP mass. Weakly-interacting CHAMPs are excluded, in the kinematic region |\eta|<0.7 covered by the muon chambers and with P_T>40 GeV, if they are produced with a cross section larger than 10 fb. For strongly-interacting CHAMPs the search considers the case of a scalar top R-hadron, a particle predicted by supersymmetric theories when a stable stop squark binds with an ordinary quark. In that case, the 95% CL limit can be set at a mass of 249 GeV.

It is interesting to note that while this analysis does not use the magnitude of the ionization left by the track in the gas chamber (the so-called dE/dx, on which most past searches for CHAMPs have been based, e.g. in CDF Run I and ALEPH) to identify the CHAMP signal candidates, it still uses the dE/dx to infer the (background) particle species when determining the resolution of the time measurement from COT residuals. So the measurement shows once more how much collider detectors benefit from the high redundancy of their design!

[Post scriptum: I discuss in simple terms the ionization energy loss in the second half of this recent post.]

It only remains to congratulate the main authors of this search, Thomas Phillips (Duke University) and Rick Snider (Fermilab), on their nice result, which is being sent for publication as we speak. The public web page of the analysis, which contains more plots and an abstract, can be browsed here.

Some posts you might have missed in 2008 January 5, 2009

Posted by dorigo in cosmology, personal, physics, science.

To start 2009 with a tidy desk, I wish to put some order in the posts about particle physics I wrote in 2008. By collecting a few links here, I save from oblivion the most meaningful of them -or at least I make them just a bit more accessible. In due time, I will update the “physics made easy” page, but that is work for another free day.

The list below collects in reverse chronological order the posts from the first six months of 2008; tomorrow I will complete the list with the second half of the year. The list does not include guest posts or conference reports, which may be valuable but belong to a different list (and are linked from permanent pages above).

June 17: A description of a general search performed by CDF for events featuring photons and missing transverse energy along with b-quark jets – a signature which may arise from new physics processes.

June 6: This post reports on the observation of the decay of J/Psi mesons to three photons, a rare and beautiful signature found by CLEO-c.

June 4 and June 5 offer a riddle from a simple measurement of the muon lifetime. Readers are given a description of the experimental apparatus, and they have to figure out what they should expect as the result of the experiment.

May 29: A detailed discussion of the search performed by CDF for a MSSM Higgs boson in the two-tau-lepton decay. Since this final state provided a 2.1-sigma excess in 2007, the topic deserved a careful look, which is provided in the post.

May 20: Commented slides of my talk at PPC 2008, on new results from the CDF experiment.

May 17: A description of the search for dimuon decays of the B mesons in CDF, which provides exclusion limits for a chunk of SUSY parameter space.

May 02 : A description of the search for Higgs bosons in the 4-jet final state, which is dear to me because I worked at that signature in the past.

Apr 29: This post describes the method I am working on to correct the measurement of charged track momenta by the CMS detector.

Apr 23, Apr 28, and May 6: This is a lengthy but simple, general discussion of dark matter searches with hadron colliders, based on a seminar I gave to undergraduate students in Padova. In three parts.

Apr 6 and Apr 11: a detailed two-part description of the detectors of electromagnetic and hadronic showers, and the related physics.

Apr 05: a general discussion of the detectors for LHC and the reasons they are built the way they are.

Mar 29: A discussion of the recent Tevatron results on Higgs boson searches, with some considerations on the chances for the consistence of a light Higgs boson with the available data.

Mar 25: A detailed discussion on the possibility that more than three families of elementary fermions exist, and a description of the latest search by CDF for a fourth-generation quark.

Mar 17: A discussion of the excess of events featuring leptons of the same electric charge, seen by CDF and evidenced by a global search for new physics. Can be read alone or in combination with the former post on the same subject.

Mar 10: This is a discussion of the many measurements obtained by CDF and D0 on the top-quark mass, and their combination, which involves a few subtleties.

Mar 5: This is a discussion of the CDMS dark matter search results, and the implications for Supersymmetry and its parameter space.

Feb 19: This is a divulgative description of the ways by which the proton structure can be studied in hadron collisions, studying the parton distribution functions and how these affect the scattering measurements in proton-antiproton collisions.

Feb 13: A discussion of luminosity, cross sections, and rate of collisions at the LHC, with some easy calculations of the rate of multiple hard interactions.

Jan 31: A summary of the enlightening review talk on the standard model that Guido Altarelli gave in Perugia at a meeting of the Italian LHC community.

Jan 13: commented slides of the paper seminar given by Julien Donini on the measurement of the b-jet energy scale and the p \bar p \to Z X \to b \bar b X cross section, the latter measured for the first time ever at a hadron machine. This is the culmination of a twelve-year effort by me and my group.

Jan 4: An account of the CDF search for Randall-Sundrum gravitons in the ZZ \to eeee final state.

Arkani-Hamed: “Dark Forces, Smoking Guns, and Lepton Jets at the LHC” December 11, 2008

Posted by dorigo in news, physics, science.

As we’ve been waiting for the LHC to turn on and turn the world upside down, some interesting data has been coming out of astrophysics, and a lot of striking new signals could show up. This motivates theoretical investigations on the origins of dark matter and related issues, particularly in the field of Supersymmetry.

Nima said he wanted to tell the story with a top-down approach: what all the anomalies were, and what motivated his and his colleagues’ work. But instead, he offered a parable as a starter.

Imagine there are creatures made of dark matter: ok, dark matter does not clump, but anyway, leaving disbelief aside, let’s imagine there are these dark astrophysicists, who work hard, make measurements, and eventually find that 4% of the universe is dark to them: they can’t explain the matter budget of the universe. So they try to figure out what’s missing. A theorist comes out with a good idea: a single neutral fermion. This is quite economical, and the theory surely attracts a lot of subscribers. But another theorist envisions a totally unknown gauge theory, with a broken SU(2)xU(1) group, three generations of fermions, the whole shebang… It seems crazy, but this guy has the right answer!

So, we really do not know what’s in the dark sector. It could be more interesting than just a single neutral particle. Since this is going to be a top-down discussion, let us consider the next most complicated thing you might imagine: dark matter could be charged. If the gauge symmetry were exact, there would be some degenerate gauge bosons. How does this stuff make contact with the standard model?

Let us take a mass of a TeV: everything is normal about it, and the coupling that stuff from this dark U(1) group can have with our sector is a kinetic mixing between our SM gauge fields and the new ones, a term of the form \frac{1}{2} \epsilon F_{\mu \nu}^{dark} F^{\mu \nu} in the Lagrangian density.

In general, particles at any mass scale will induce such a mixing at one loop through their hypercharge above the weak scale. All SM particles then get a tiny charge under the new U(1)’, proportional to their electric charge, which can be written as a kinetic mixing term. The size of the coupling could be in the 10^-3 to 10^-4 range.
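A back-of-envelope loop estimate indeed lands in that range. The sketch below assumes a generic one-loop mixing \epsilon \sim g_1 g_2/(16\pi^2) \ln(M/\mu); the couplings and scales are illustrative choices of mine, not values from the talk:

```python
import math

# One-loop kinetic mixing estimate: eps ~ g1*g2/(16 pi^2) * ln(M/mu).
# Couplings and scales below are generic illustrative choices.
def eps_loop(g1, g2, M, mu):
    return g1 * g2 / (16 * math.pi ** 2) * math.log(M / mu)

# O(0.3) couplings, TeV-scale heavy states, GeV-scale probe
eps = eps_loop(0.3, 0.3, 1000.0, 1.0)
print(f"eps ~ {eps:.1e}")  # a few times 10^-3, in the quoted range
```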

This construct would mess up our picture of dark matter, and a lot of our cosmology. But if there are Higgses in this sector, we have the usual hierarchy problem, and we know the simplest solution to the hierarchy problem is SUSY. So we imagine supersymmetrizing the whole thing. There is then the MSSM in our sector, a whole SUSY dark sector, and a tiny kinetic mixing between the two. If the mixing is 10^-3, from symmetry breaking at a mass scale of about 100 GeV in our sector, the breaking induced in the dark-matter world would be of radiative origin, through loop diagrams, at a mass scale of a few GeV.

So the gauge interaction in the DM sector is broken at the GeV scale: a bunch of vectors, and other particles, right next door. These particles would couple to SM ones, proportionally to charge, at the 10^-3 – 10^-4 level. This is dangerous, since the suppression is not large. The best limits on such a scenario come from e+e- factories. It is really interesting to go back and look for these things in BaBar and other experiments: the data are already on tape. We might discover something there!

All the cosmological inputs have difficulty with the standard WIMP scenario. DAMA, PAMELA, and ATIC have recently evidenced anomalies that do not fit our simplest-minded picture, but they get framed nicely in this picture instead.

The scale of these new particles is more or less fixed at the GeV region. This has an impact on every way that you look at DM. As for the spectrum of the theory, there is a splitting in masses, given by the coupling constant \alpha in the DM sector times the mass in the DM sector: a scale of the order of \alpha M. It is radiative. There are thus MeV-like splittings between the states, and there are new bosons with GeV masses that couple to them. These vectors couple off-diagonally to the DM. This is a crucial fact, simply because if you have N states, their gauge interaction is a subpart of a rotation between them: the only possible interaction that these particles can have with the vector is off-diagonal. That gives a cross section comparable to the weak scale.

The particles annihilate into the new vectors, which eventually have to decay. They would be stable, but there is a non-zero coupling to our world, so what do they decay into? Not to proton-antiproton pairs, but to electron or muon pairs. These features are things that are hard to get with ordinary WIMPs.

And there is something else to expect: these particles move slowly, have long-range interactions and geometric cross sections, and they may go into excited states. Their splitting is of the order of an MeV, which is comparable to the kinetic energy in the galaxy. So, with the big geometric cross section they have, you expect them not to annihilate but to get excited. They decay back by emitting e+e- pairs. So that’s a source of low-energy electrons and positrons: that explains an excess of these particles in cosmic rays.

If they hit a nucleus, the nucleus has a charge, the vector is light, and thus the cross section is comparable to that of Z and Higgs exchange. But the collision is not elastic: it changes the nature of the particle. This changes the analysis you would do, and it makes it possible for DAMA to be consistent with the other experiments.

Of course, the picture drawn above is not the most minimal possible thing: to imagine that dark matter is charged and has gauge interactions is in fact quite far-fetched. But it can give you a correlated explanation of the cosmological inputs.

Now, why does this have the potential of making life so good at the LHC? Because we can actually probe this sector sitting next door, particularly in the SUSY picture. In fact, SUSY fits nicely in the picture, while being motivated elsewhere.

This new “hidden” sector has been studied by Strassler and friends in Hidden valley models. It is the leading way by means of which you can have a gauge sector talking to our standard model.

The particular sort of hidden valley model we have discussed is motivated if you take the hints from astrophysics seriously. Now, what does it do to the LHC? GeV particles unseen for thirty years… But that is because we have to pay a price: the tiny mixing.

Now, what happens with SUSY is nice: if you produce superpartners, you will always end up in this sector. The reason is simple: normally particles decay into the LSP, which is stable. But now it cannot be stable any longer, because the coupling will give a mixing between the gaugino in our sector and the photino in their sector. Thus, the LSP will decay to lighter particles in the other sector, producing other particles. These particles are light, so they come out very boosted. They make a Higgs boson in the other sector, which decays to a W pair, and the chain finally ends up with the lightest vector in the other sector: it ends up as an electron-positron pair in our sector.

There is a whole set of decays that gives lots of leptons, all soft in their sector but coming from the decay of a 100 GeV particle. The signature could be jets of leptons, and every SUSY event will contain two: two jets of leptons, with at least two, if not many more, leptons with high Pt but small opening angles and invariant masses. That is the smoking gun. As for lifetime, these leptons are typically prompt, although they might also be displaced; the preferred situation, however, is that they are prompt.

Predictions for SUSY particle masses! September 2, 2008

Posted by dorigo in cosmology, news, science.

Dear reader, if you are not a particle physicist you might find this post rather obscure. I apologize to you in that case, and I would rather direct you to some easier discussions of Supersymmetry than attempt to shed light for you on the highly technical information discussed below:

  • For an introduction, see here.
  • For dark matter searches at colliders, see a three-part post here and here and here.
  • Other dark matter searches and their implications for SUSY are discussed here.
  • For a discussion of the status of Higgs searches and the implications of SUSY see here and here.
  • For a discussion of the implications for supersymmetry of the g-2 measurement, see here;
  • A more detailed discussion can be found in a report of a seminar by Massimo Passera on the topic, here and here.
  • For B \to \mu \mu searches and their impact on SUSY parameter space, see here.
  • For other details on the subject, see this search result.
  • And for past rumors on MSSM Higgs signals found at the Tevatron, have a look at these links.

If you have some background in particle physics, instead, you should definitely take a look at this new paper, which appeared on August 29th on the arXiv. Like previous studies, it uses a wealth of experimental input coming from precision Standard Model electroweak observables, B physics measurements, and cosmological constraints to determine the allowed range of parameters within two constrained models of Supersymmetry -namely, the CMSSM and the NUHM1. However, the new study does much more than just turn a crank for you. Here is what you get in the package:

  1. direct -and more up-to-date- assessments of the amount of data which LHC will need to wipe these models off the board, if they are incorrect;
  2. a credible spectrum of the SUSY particle masses, for the parameters which provide the best agreement between experimental data and the two models considered;
  3. a description of how much will be known about these models as soon as a few discoveries are made (if they are), such as the observation of an edge in the dilepton mass distribution extracted by CMS and ATLAS data;
  4. a sizing up of the two models, CMSSM and NUHM1 -which are just special cases of the generic minimal supersymmetric extension of the standard model. Their relative merit in accommodating the current value of SM parameters is compared;
  5. most crucially, a highly informative plot showing just how much we are going to learn on the allowed space of SUSY parameters from future improvements in a few important observables.

So, if you want to know what is currently the best estimate of the gluino mass: it is very high, above 700 GeV in the CMSSM and a bit below 600 for the NUHM1. The lightest Higgs boson, instead, is -perhaps unsurprisingly- lying very close to the lower LEP II limit, in the 115 GeV ballpark (actually, even a bit lower than that, but that is a detail – read the paper if you want to know more about that). The LSP is instead firmly in the 100 GeV range. For instance, check the figure below, showing the best fit for the CMSSM (which, by the way, implies M_0 = 60 GeV, M_{1/2}=310 GeV, A_0 = 240 GeV, and \tan \beta =11).

The best plots are however the two I attach below: they represent a commendable effort to make things simpler for us -really a highly distilled result of the gazillions of CPU-intensive computations which went into the determination of the area of parameter space that current particle physics measurements allow. In them, you can read out the relative merit of future improvements in a few of the most crucial measurements in electroweak physics, B physics, and cosmology, as far as our knowledge of MSSM parameters is concerned. The allowed area in the space of two parameter pairs -m_0 vs m_{1/2} as well as m_0 vs \tan \beta, at 95% confidence level- is studied as a function of the variation in the total uncertainty on five quantities: the error on the anomalous magnetic moment of the muon, \Delta (g-2)_\mu; the uncertainty in the radiative decay b \to s \gamma; the uncertainty in the cold dark matter density \Omega h^2; the branching fraction of B \to \tau \nu decays; and the W boson mass M_W.

Extremely interesting stuff! One learns that future improvements in the measurement of the dark matter fraction will yield NO improvement in the constraints on the MSSM parameter space. In a way, dark matter does point to a sparticle candidate, but WMAP has already measured it too well!

Another point to make from the graphs above is that, of the observables listed, the W boson mass is the one whose uncertainty is going to be reduced sizably very soon -that is where we expect to improve matters most in the near future, of course if the LHC does not see SUSY before! Instead, the b \to s \gamma branching fraction uncertainty might actually turn out to need to be larger than assumed in the paper, making the allowed MSSM parameter space larger rather than smaller. As for the muon g-2, things can go in both directions there as well, as more detailed estimates of the current uncertainties are revised. These issues are discussed in detail in the paper, so I had better direct you to it rather than insert my own misunderstandings.

Finally, the current fits slightly favor the NUHM1 scenario (the single-parameter Non-Universal Higgs Model) over the CMSSM. The NUHM1 scenario includes one further parameter, governing the difference between the soft SUSY-breaking contributions to M_H^2 and to squark and slepton masses. The overall best-fit \chi^2 is better, which implies that the additional parameter is used successfully by the fitter. The lightest Higgs boson mass also comes out at a “non-excluded” value of 118 GeV, higher than for the best-fit point of the CMSSM.

Events with photons, b-jets, and missing Et June 17, 2008

Posted by dorigo in news, physics, science.

A recent analysis by CDF, based on 2 inverse femtobarns of data (approximately 160 trillion proton-antiproton collisions), has searched for events featuring a rare mixture of striking objects: high-energy photons, significant missing transverse energy, and energetic b-quark jets. Photons at a proton-antiproton collider are by themselves a sensitive probe of several new physics processes, and the same can be said of significant missing energy; the latter, in fact, is the single most important signature of supersymmetric decays, which usually feature a non-interacting neutral particle, as I had a chance to explain in a lot of detail in a three-part series on the searches for dark matter at colliders (see here for part 1, here for part 2, and here for part 3). Add b-quark jets to boot, and you are looking at a signature which is very rare within the standard model, but which may in fact be due to hypothetical exotic processes.

The idea of such a signature-based search is simple: verify whether the sum of standard model processes accounts for the events observed, without being led by any specific model for new physics. The results are then much easier to interpret in terms of models that theorists might not have cooked up yet. A specific process which could provide the three sought objects together is not hard to find, in any case: in supersymmetric models where a photino decays radiatively, emitting a photon and turning into a Higgsino -the lightest particle, which escapes the detector- one gets both photons and missing energy; the additional b-jet is then the result of the decay of an accompanying chargino.

If the above paragraph makes no sense to you, worry not. Just accept that there are possible models of new physics where such a trio of objects arise rather naturally in the final state.

However, there is another, much more intriguing, motivation for the search described below. So let me open a parenthesis.

In Run I, CDF observed a single striking, exceedingly rare event which contained two high-energy electrons, two high-energy photons, and significant missing transverse energy. An inexplicable event by all means! Below you can see a cut-away view of the calorimeter energy deposits: pink bars show electromagnetic energy (both electrons and photons leave their energy in the electromagnetic portion of the calorimeter), but photon candidates have no charged track pointing at them. The event possesses almost nothing else, except for the large transverse energy imbalance, as labeled.

The single event shown above was studied in unprecedented detail, and some doubts were cast on the nature of one of the two electron signals. Despite that, the event remained basically unexplained: known sources were conservatively estimated to total 1 \pm 1 millionth of an event! It was thought that a definitive answer would be provided by the larger dataset the Tevatron Run II would soon deliver. You can read a very thorough discussion of the characteristics of the infamous ee \gamma \gamma \not E_t event in a paper on diphoton events published in 1999 by CDF.

Closing the parenthesis, we can only say that events with photons and missing transverse energy are hot! So CDF looked at them with care, defining each object with simple cuts -such that theorists can understand them. No kidding: if an analysis makes complicated selections, a comparison with theoretical models after the fact becomes hard to achieve.

The cuts are indeed straightforward. A photon has to be identified with transverse energy above 25 GeV in the central calorimeter. Two jets are also required, with E_T>15 GeV and |\eta|<2.0. Pseudorapidity \eta is just a measure of how forward the jet is going; a pseudorapidity of 2.0 corresponds to about 15 degrees away from the beam line. Selecting these events leads to about 2 million candidates! These are dominated by strong interactions where a hadronic jet fakes the photon.
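The conversion between pseudorapidity and polar angle is \theta = 2 \arctan(e^{-\eta}), which is easy to check numerically (the small helper below is mine):

```python
import math

def polar_angle_deg(eta):
    """Polar angle in degrees from the beam line: theta = 2*atan(exp(-eta))."""
    return math.degrees(2.0 * math.atan(math.exp(-eta)))

print(round(polar_angle_deg(2.0), 1))  # 15.4 degrees from the beam line
print(round(polar_angle_deg(0.0), 1))  # 90.0: perpendicular to the beam
```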

The standard selection is tightened by requiring the presence of missing transverse energy above 25 GeV. Missing transverse energy is measured as the imbalance in the energy flowing in the plane transverse to the beam axis; 25 GeV is usually already a significant amount, which is hard to fake by jets whose energy has been under- or overestimated. The two jets are also required to be well separated from each other and from the photon, and this leads to 35,463 events: missing Et alone has killed about 98% of our original dataset. But missing Et is most of the time due to a jet fluctuation, even above 25 GeV: thus it is further required that it does not point along the direction of a jet in azimuthal angle (the angle describing the direction in the plane orthogonal to the beam, which is indeed defined for missing transverse energy). A cut \Delta \Phi >0.3 halves the sample, which now contains 18,128 events.

Finally, a b-tagging algorithm is used to search for the secondary vertices B mesons produce inside the jet cones. Only 617 events survive the requirement that at least one jet is b-tagged. These events constitute our “gold mine”, and they are interpreted as a sum of standard model processes, to the best of our knowledge.
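Putting the event counts quoted above into a little cut-flow table shows how selective each requirement is (the table itself is my own bookkeeping, using the numbers in the text):

```python
# Cut-flow bookkeeping with the event counts quoted in the text.
cutflow = [
    ("photon + two jets",         2_000_000),
    ("missing Et > 25 GeV",          35_463),
    ("dPhi(MEt, jet) > 0.3",         18_128),
    ("at least one b-tagged jet",       617),
]
for (name, n), (_, n_prev) in zip(cutflow[1:], cutflow[:-1]):
    print(f"{name:28s} keeps {100 * n / n_prev:6.2f}% of the previous step")
```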

One last detail is needed: not all b-tagged jets originate from real b-quarks! A sizable part of them is due to charm quarks, and even to lighter ones. To control the fraction of real b-quarks in the sample, one can study the invariant mass of the system of charged tracks fit together to a secondary vertex inside the jet cone. The invariant mass of the tracks is larger for b-jets, because b-quarks weigh much more than lighter quarks, and their decay products reflect that difference. Below, you can see the “vertex mass” for b-tagged jets in a loose control sample of data (containing photons and jets, with few further cuts): the fraction of b-jets is shown by the red histogram, while the blue and green ones are the charm and light-quark components. Please also note the very characteristic “step” at about 2 GeV, which is due to the maximum mass of charmed hadrons.

The vertex mass fit in the 617 selected events allows one to extract the fractions of events due to real photons accompanied by b-jets, c-jets, and fake b-tags (light-quark jets). In addition, one must account for fake photon events. Overall, the background prediction is extracted by a combination of methods, well tested by years of practice in CDF. The total prediction is 637 \pm 54 \pm 128 events (the uncertainties are statistical and systematic, respectively), in excellent agreement with the observed count. A study of the kinematics of the events, compared with the sum of predicted backgrounds, provides a clear indication that Standard Model processes account very well for their characteristics. No SUSY appears to be lurking!
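The quoted agreement is easy to check by combining the statistical and systematic uncertainties in quadrature, as is standard:

```python
import math

# 617 observed vs 637 +- 54 (stat) +- 128 (syst) predicted events.
observed = 617
predicted, stat, syst = 637.0, 54.0, 128.0

total_unc = math.hypot(stat, syst)          # quadrature sum
pull = (observed - predicted) / total_unc   # deviation in units of sigma
print(round(total_unc, 1), round(pull, 2))  # -> 138.9 -0.14
```

A deviation of 0.14 standard deviations is indeed excellent agreement; the systematic uncertainty dominates the total.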

Below you can see the missing transverse energy distribution for the data (black points) and a stack of backgrounds (with pink shading for the error bars on background prediction).

Below, a similar distribution for the invariant mass of the two jets.

A number of kinematic distributions such as those shown above are available in the paper describing the preliminary results. Interested readers can also check the public web site of the analysis.

Simona Murgia: Dark Matter searches with GLAST May 23, 2008

Posted by dorigo in astronomy, cosmology, physics, science.
comments closed

Now that I have been linked by Peter Woit’s blog with appreciative words, I cannot escape my obligation to continue blogging on the talks I have been listening to at PPC2008. So please find below some notes from Simona’s talk on the GLAST mission and its relevance for dark matter (DM) searches.

GLAST will observe gamma rays in the energy range from 20 MeV to 300 GeV, with a better flux sensitivity than earlier experiments such as EGRET and AGILE. It is a 5-year mission, with a final goal of 10 years. It will orbit at an altitude of 565 km, with a 25.6° inclination with respect to the Earth's equator. Its main instrument is the Large Area Telescope (LAT), which detects photons through pair conversion. It features a precision silicon-strip tracker with 18 x-y tracking planes interleaved with tungsten converter layers. The tracker is followed by a small calorimeter made of CsI crystals, and is surrounded by an anti-coincidence detector of 89 plastic scintillator tiles; the segmented design avoids self-veto problems. The total payload of GLAST is 2000 kg.

GLAST has four times the field of view of EGRET, and it covers the whole sky in two orbits (3 hours). The broad energy range has never been explored at this sensitivity. The energy resolution is about 10%, and the point-spread function is 7.2 arcminutes above 10 GeV. The sensitivity is more than 30 times better than previous searches below 10 GeV, and 100 times better at higher energy.

EGRET cataloged 271 sources of gamma rays; GLAST expects to catalog thousands: active galactic nuclei, gamma-ray bursts, supernova remnants, pulsars, galaxies, clusters, X-ray binaries. There is very little gamma-ray attenuation below 10 GeV, so GLAST can probe cosmological distances.

Simona then asked: what is the nature of DM? There are several models out there. GLAST will investigate the existence of weakly interacting massive particles (WIMPs) through two-photon annihilation. Not an easy task, for there are large uncertainties in the signal and in the background. The detection of a DM signal by GLAST would be complementary to other searches.

Gamma rays may come from neutral pions emitted in \chi \chi annihilation; these give a continuum spectrum. Direct annihilation to two photons is instead expected to have a branching ratio of 10^{-3} or less, but it would provide a line in the spectrum, a spectacular signal.

Other models predict an even more distinctive gamma spectrum. With the gravitino as the lightest supersymmetric particle, it would have a very long lifetime, and it could decay into a photon and a neutrino: this yields an enhanced line, and then a continuum spectrum at lower energy.

Instrumental backgrounds mostly come from charged particles (protons, electrons, positrons), plus neutrons and Earth-albedo photons. These dominate over the flux of cosmic photons, but less than one in a hundred thousand survives the photon selection. Above a few GeV, the background contamination is required to be less than 10% of the isotropic photon flux measured by EGRET.

Searches for WIMP annihilations can be done in the galactic center or, complementarily, in the galactic halo. In the latter case there is no source crowding, but there are significant uncertainties in the astrophysical backgrounds. The 3-sigma sensitivity on <\sigma v> as a function of the WIMP mass goes below 10^{-26} cm^3 s^{-1} with 5 years of exposure.

Simona then mentioned that one can also search for DM satellites: simulations predict a substructure of DM in the galactic halo. The predicted annihilation spectrum differs from a power law, and the emission is expected to be constant in time. Considering a 100 GeV WIMP with \sigma v = 2.3 \times 10^{-26} cm^3 s^{-1}, annihilating into a b-quark pair, on top of the extragalactic and diffuse galactic backgrounds, it is generically observable at the 5-sigma level in one year. To search for these objects, you first scan the sky, and once you have found something you can concentrate on observing it.

Dwarf galaxies can also be studied. Their mass-to-light ratio is high, which makes them a promising place to look for an annihilation signal. The 3-sigma sensitivity of GLAST with 5 years of data goes down to 10^{-26} cm^3 s^{-1} and below for WIMP masses in the tens-of-GeV range.

To search for lines in the spectrum, one looks in an annulus between 20 and 35 degrees in galactic latitude, removing a 15° band around the galactic disk. It is a very distinctive spectral signature. A better sensitivity is achieved if the location of the line is known beforehand (if discovered at the LHC, for instance). A 200 GeV line can be seen at 5 sigma in 5 years.

GLAST can also look for cosmological WIMPs at all redshifts. There is a spectral distortion caused by the integration over redshift. The reach of GLAST is a bit weaker here, about 10^{-25} cm^3 s^{-1}. One can do better if there is a high concentration of DM in substructures.

A review of yesterday’s afternoon talks: non-thermal gravitino dark matter and non-standard cosmologies May 21, 2008

Posted by dorigo in cosmology, news, physics, science.
comments closed

In the afternoon session at PPC2008 yesterday there were several quite interesting talks, although they were not easy for me to follow. I give a transcript of two of the presentations below, for my own record as well as for your convenience. The web site of the conference is however quite quick in putting the talk slides online, so you might want to check it if some of what is written below interests you.

Ryuichiro Kitano talked about “Non-thermal gravitino dark matter“. Please accept my apologies if you find the following transcript confusing: you are not alone. Despite my lack of understanding of some parts of it, I decided to put it online anyway, in the hope that I will have the time one day to read a couple of papers and understand some of the details discussed…

Ryuichiro started by discussing the standard scenario for SUSY dark matter, with a WIMP neutralino: a weakly interacting, massive, stable particle. In general, one has a mixture of bino, wino, and higgsinos, and that is what we call the neutralino. In the early universe it follows a Boltzmann distribution; then there is a decoupling phase, when production (the process inverse to annihilation) becomes negligible, and after freeze-out the number density of the neutralino just scales with T^3. The final abundance is computed by equating the annihilation rate to the expansion rate at the time of decoupling, starting from the Boltzmann equation dn_\chi/dt + 3 H n_\chi = - <\sigma v> (n_\chi^2 - n_{eq}^2).

This mechanism rests on some assumptions. The first is that the neutralino is the LSP: it is stable. The second is that the universe is radiation-dominated at the time of decoupling. A third assumption is that there is no entropy production below T = O(100 GeV), otherwise the relative abundances would be modified. Are these assumptions reasonable? Assumption one restricts us to gravity mediation, where there is almost always a moduli problem, which is inconsistent with assumptions 2 and 3. If you take instead anomaly mediation, the LSP is a wino, and it gives too small an abundance. We thus need special circumstances for the standard mechanism to work.

The moduli/gravitino problem: in the gravity-mediation scenario, there is always a singlet scalar field S which obtains a mass through SUSY breaking. S is a singlet under any symmetry, and it contributes to the gaugino masses. Its potential cannot be stabilized otherwise, and it gets a mass only through SUSY breaking. Therefore, there exists a modulus field, and we need to include it when considering the cosmological history, because it has implications.

During inflation, the S potential is deformed, because S gets its mass only from SUSY breaking. So the initial value of the modulus will be displaced. Once S-domination happens, it is a cosmological disaster. If the gravitino is not the LSP, it decays with a lifetime of the order of one year, and it destroys the standard picture of big-bang nucleosynthesis (BBN); if the decay is forbidden, it is S that has a lifetime of O(1 yr), still a disaster. This is inconsistent with the neutralino DM scenario, or rather, gravity mediation is inconsistent.

So we need a special inflation model which does not couple to the S field; a very low-scale inflation, such that the deformation of the S potential is small; and a lucky initial condition, such that S-domination does not happen. Is there a good cosmological scenario that does not require such conditions?

Non-thermal gravitino DM is a natural and simple solution to the problem, and gauge mediation offers the possibility. The SUSY-breaking sector needs to be fixed in the scenario, but most of it shares the same effective Lagrangian. This implies two parameters in addition to the others: the height of the potential (describing how large the breaking is) and its curvature, m^4/\Lambda^4. In this framework, the gravitino is the LSP.

In the non-thermal gravitino dark matter scenario, this mechanism can produce the DM candidate. After inflation, the S oscillation starts: we have a potential for it, with a quadratic term. The second step is its decay. The S couplings to superparticles are proportional to their masses, while the S-gravitino coupling is suppressed. This gives a smaller branching ratio to gravitinos, which is good for the gravitino abundance. Also, the shorter lifetime as compared to gravity mediation is good news for BBN.

The decay of S to a bino pair must be forbidden to preserve the BBN abundances, so S \to hh is the dominant decay mode if it is open. If we calculate the decay temperature, we find a good match with BBN, and it is perfect for DM as far as its abundance is concerned.

There are two parameters: the height of the potential and its curvature. We have to explain the size of the gaugino mass, which fixes one of the parameters. The gravitino abundance is explained if the gravitino mass is about 1 GeV. The baryon abundance, however, has to be produced by other means.

Step three is gravitino cooling. Are they cold? They are produced by the decay of 100 GeV particles, so they are relativistic, but their distribution is non-thermal. They slow down by redshifting, and must be non-relativistic by the time of structure formation.

If we think of SUSY cosmology we should be careful about consistency with the underlying model of gravity mediation. Gauge mediation provides a viable cosmology with non-thermal gravitino DM.

Next, Paolo Gondolo gave a thought-provoking talk on “Dark matter and non-standard cosmologies”. Again, I do not claim that the writeup below makes full sense (not to me, at least; maybe it does to you).

Paolo started by pointing out the motivations for his talk: they come directly from the previous talks, namely the problems with the gravitino and with the moduli. One might need to modify the usual cosmology before nucleosynthesis. Another motivation is more phenomenological. The standard results on neutralino DM are presented in the standard parameter space $M_0 - M_{1/2}$, and one gets a very narrow allowed band due to the dark-matter constraints from cosmology. These constraints come from primordial nucleosynthesis. They assume that neutralinos were produced thermally, decoupled at a later time, and remained with a residual abundance. This might not be true, and if it isn't, the whole parameter space might still be consistent with the cosmological constraints.

[This made me frown: isn't the SUSY parameter space still large enough? Do we really need to revitalize parts not belonging to the hypersurface allowed by WMAP and other constraints?]

The above occurs just by changing the evolution of the universe before nucleosynthesis. By changing $\tan \beta$ you can span a wider chunk of parameter space, but that is because you are looking at a projection. The cosmological constraints select an (n-1)-dimensional hypersurface; one can extend the allowed region outside of it, but this comes at the price of more parameters. Don't we have enough parameters already?

The cosmological density of neutralinos may differ from the usual thermal value because of non-thermal production or non-standard cosmologies. J. Barrow, in 1982, wrote of massive particles as a probe of the early universe, so it is an old idea. It continued in 1990 with a paper by Kamionkowski and Turner: thermal relics, do we know their abundances?

So let us review the relic density in standard cosmology, and the effect of non-standard ones. In standard cosmology the Friedmann equation governs the evolution of the scale factor a, and the dominant dependence of \rho on a determines the expansion rate. Today we are matter-dominated, and we were radiation-dominated before, because \rho scales with different powers of the scale factor: now \rho \propto a^{-3}, but before it went as a^{-4}. Before radiation domination there was reheating, and before that, the inflation era. At early times, neutralinos are produced in e+e- and mu+mu- annihilations. Then production freezes out; people usually say that neutralino annihilation ceases, but it really is the production which ceases. Annihilation continues at smaller rates until today, so that we can look for it, but it is production that stops. The number of neutralinos per photon is constant after freeze-out. The higher the annihilation rate, the lower the final density: there is an inverse proportionality.

The freeze-out occurs during the radiation-dominated epoch, at T of about 1/20th of the particle mass, so at a much higher temperature than that of the matter-dominated universe. Freeze-out occurs before BBN, so we are making an assumption about the evolution of the universe before BBN. What can we do in non-standard scenarios? We can decrease the density of particles by producing photons after freeze-out (entropy dilution): by increasing the number of photons you get a lower final density. One can also increase the density of particles by creating them non-thermally, from decays. Another way is to make the universe expand faster during freeze-out, for instance in quintessence models.

The second mechanism works because if we increase the expansion rate we get a higher density. What if instead we want to keep the standard abundance? We want to produce WIMPs by the thermal mechanism, so we need a standard Hubble expansion rate down to freeze-out, at T = m/23. A plot of m/T_{max} versus <\sigma v> shows that production is aborted for m/T > 23.

How can we produce entropy to decrease the density of neutralinos after freeze-out? We add massive particles that decay or annihilate late, for example a next-to-LSP. We end up increasing the photon temperature and entropy, while the neutrino temperature is unaffected.
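The dilution argument above is just the scaling of the comoving yield Y = n/s with the entropy density s ~ g_{*s} T^3; the numbers below are illustrative, not taken from a specific model.

```python
def diluted_yield(y_freezeout, t_before, t_after, g_before=10.75, g_after=10.75):
    """Entropy density scales as s ~ g_*s T^3, so a late decay that reheats
    the photon bath from t_before to t_after dilutes the relic yield by
    the ratio (g_after * t_after^3) / (g_before * t_before^3)."""
    entropy_ratio = (g_after * t_after ** 3) / (g_before * t_before ** 3)
    return y_freezeout / entropy_ratio

# A decay that raises the photon temperature by 70% dilutes the yield about
# fivefold, while the decoupled neutrinos stay untouched, as noted in the text.
print(round(diluted_yield(7.0e-12, 1.0, 1.7) / 7.0e-12, 3))  # -> 0.204
```

The same one-liner shows why the neutrino temperature falling behind the photon temperature after e+e- annihilation is the textbook example of this effect.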

We can increase the expansion rate at freeze-out by adding energy to the Universe, adding a scalar field, or modifying the Friedmann equation by adding an extra dimension. Alternatively, one can produce more particles through decays.

In braneworld scenarios, matter is confined to the brane, and gravitons propagate in the bulk. This gives an extra term in the Friedmann equation, proportional to the square of the density. We can get different densities: for example, in the plane of m_0 versus gravitino mass, the wino is usually not a good candidate for DM, but here it is, in Randall-Sundrum type II scenarios. We can resuscitate SUSY models that people think are ruled out by cosmology.
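A minimal sketch of that modified Friedmann equation, under the usual Randall-Sundrum type II parametrization H^2 = (8\pi/3 M_{pl}^2) \rho (1 + \rho/2\lambda), with \lambda the brane tension; the numbers and units below are illustrative placeholders.

```python
import math

M_PL = 1.22e19  # 4D Planck mass [GeV]

def hubble_squared(rho, brane_tension=None):
    """Expansion rate squared; brane_tension=None recovers standard cosmology,
    while a finite tension adds the term quadratic in the density."""
    h2 = (8.0 * math.pi / 3.0) * rho / M_PL ** 2
    if brane_tension is not None:
        h2 *= 1.0 + rho / (2.0 * brane_tension)
    return h2

# At densities well above the brane tension the expansion is much faster, so
# freeze-out happens earlier and the relic density comes out higher.
rho = 1.0e8
print(round(hubble_squared(rho, brane_tension=1.0e6) / hubble_squared(rho), 6))  # -> 51.0
```

At late times rho drops far below the tension and the correction switches off, which is why BBN and later cosmology are left intact.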

Antiprotons from WIMP annihilation in the galactic halo constrain the RS type II model. The 5-dimensional Planck mass M_5 is subject to different constraints; antiprotons give bounds above 1000 TeV.

Non-thermal production from gravitational acceleration: at the end of inflation the acceleration was so high that massive particles could be created. They can have the right density of DM if their mass is of the order of the Hubble parameter. Non-thermal production from particle decays is another non-standard case which is not ruled out.

Then there is the possibility of neutralino production from a decaying scalar field. In string theories, the extra dimensions may be compactified as orbifolds or Calabi-Yau manifolds. The surface shown is a solution of an equation such as z_1^5 + z_2^5 = 1, with the z_i complex numbers. The size and shape of the compactified dimensions are parametrized by moduli fields \phi_1, \phi_2, \phi_3 ..., and the values of the moduli fields fix the coupling constants.

Two new parameters are needed to evade the cosmological constraints on SUSY models. One is the reheating temperature T_{rh} of the radiation when \phi decays; it must be above 5 MeV from BBN constraints. The other is the number of neutralinos produced per \phi decay divided by the \phi mass, b/m_\phi. Here b depends on the choice of the Kahler potential, the superpotential, and the gauge kinetic function, i.e. on the high-energy theory: the compactification, the fluxes, etc. that you put in your model. By lowering the reheating temperature you can decrease the density of particles, and the higher b/m, the higher the density you can get. So you can get almost any DM density you want.

Neutralinos can be cold dark matter candidates anywhere in MSSM parameter space, provided one allows these other parameters to vary.

If you work with a non-standard cosmology, the constraints are transferred from the low-energy to the high-energy theory. The discovery of non-thermal neutralino DM may thus open an experimental window on string theory.

[And it goes without saying that I find this kind of syllogism quite a stretch!]
