
ICHEP blog July 12, 2010

Posted by dorigo in astronomy, Blogroll, cosmology, internet, news, physics, science.
comments closed

Just one line here to mention that since May there has been a new blog out there – a temporary blog that will cover the late-July event in Paris, the International Conference on High Energy Physics: how we get there, and the aftermath. The effort includes several well-known bloggers in high-energy physics, and is definitely worth following.

You can visit it here.

Post summary – April 2009 May 1, 2009

Posted by dorigo in astronomy, Blogroll, cosmology, internet, news, personal, physics, science, social life.
comments closed

As the less distracted among you have already figured out, I have permanently moved my blogging activities to www.scientificblogging.com. The reasons for the move are explained here.

Since I know that this site continues to be visited – because the 1450 posts it contains draw traffic regardless of the inactivity – I am providing monthly updates here of the pieces I write on my new blog. Below is a list of posts published last month at the new site.

The Large Hadron Collider is Back Together – announcing the replacement of the last LHC magnets

Hera’s Intriguing Top Candidates – a discussion of a recent search for FCNC single top production in ep collisions

Source Code for the Greedy Bump Bias – a do-it-yourself guide to study the bias of bump fitting

Bump Hunting II: the Greedy Bump Bias – the second part of the post about bump hunting, and a discussion of a nagging bias in bump fitting

Rita Levi Montalcini: 100 Years and Still Going Strong – a tribute to Rita Levi Montalcini, Nobel prize for medicine

The Subtle Art of Bump Hunting – Part I – a discussion of some subtleties in the search for new particle signals

Save Children Burnt by Caustic Soda! – an invitation to donate to Emergency!

Gates Foundation to Chat with Bloggers About World Malaria Day – announcing a teleconference with bloggers

Dark Matter: a Critical Assessment of Recent Cosmic Ray Signals – a summary of Marco Cirelli’s illuminating talk at NeuTel 2009

A Fascinating New Higgs Boson Search by the DZERO Experiment – a discussion on a search for tth events recently published by the Tevatron experiment

A Banner Worth a Thousand Words - a comment on my new banner

Confirmed for WCSJ 2009 – my first post on the new site

Things I should have blogged on last week April 13, 2009

Posted by dorigo in cosmology, news, physics, science.
comments closed

It rarely happens that four days pass without a new post on this site, and it is never because of a lack of things to report on: the world of experimental particle physics is wonderfully active and always entertaining. Usually hiatuses are due to a bout of laziness on my part. In this case, I can blame Vodafone, the provider of the wireless internet service I use when I am on vacation. From Padola (the place in the eastern Italian Alps where I spent the last few days) the service is horrible, and I sometimes lack the patience to wait for those bursts when bytes flow freely.

Things I would have wanted to blog on during these days include:

  • The document describing the DZERO search for a CDF-like anomalous muon signal is finally public, about two weeks after the talk which announced the results at Moriond. Having had an unauthorized draft in my hands, I have a chance to compare the polished with the unpolished version… It should be fun, but unfortunately unbloggable, since I owe some respect to my colleagues in DZERO. Still, the many issues I raised after the Moriond seminar should be discussed in light of an official document.
  • DZERO also produced a very interesting search for t \bar t h production. This is the associated production of a Higgs boson and a pair of top quarks, a process whose rate is made significant by the large coupling of top quarks to Higgs bosons, by virtue of the large top quark mass. By searching for a top-antitop signature and the associated Higgs boson decay to a pair of b-quark jets, one can investigate the existence of Higgs bosons in the mass range where the b \bar b decay is most frequent – i.e., the region where all indirect evidence puts it. However, tth production is invisible at the Tevatron, and very hard at the LHC, so the DZERO search is really just a check that nothing is sticking out which we might have missed by simply forgetting to look there. In any case, the signature is extremely rich and interesting to study (I had a PhD student doing this for CMS a couple of years ago), hence my interest.
  • I am still sitting on my notes for Day 4 of the NEUTEL2009 conference in Venice, which included a few interesting talks on gravitational waves, CMB anisotropies, the PAMELA results, and a talk by Marco Cirelli on dark matter searches. With some effort, I should be able to organize these notes in a post in a few days.
  • And new beautiful physics results are coming out of CDF. I cannot anticipate much, but I assure you there will be much to read about in the forthcoming weeks!

NeuTel 09: Oscar Blanch Bigas, update on Auger limits on the diffuse flux of neutrinos April 3, 2009

Posted by dorigo in astronomy, cosmology, news, physics, science.
comments closed

With this post I continue the series of short reports on the talks I heard at the Neutrino Telescopes 2009 conference, held three weeks ago in Venice.

The Pierre Auger Observatory is a huge (3000 km^2) hybrid detector of ultra-energetic cosmic rays – that is, those with an energy above 10^18 eV. The detector is located in Malargue, Argentina, at 1400 m above sea level.

There are four six-eyed fluorescence detectors: when the shower of particles created by a very energetic primary cosmic ray develops in the atmosphere, it excites nitrogen molecules, which emit fluorescence light that is collected by the telescopes. This is a calorimetric measurement of the shower, since the number of particles in the shower gives a measurement of the energy of the incident primary particle.

The main problem of the fluorescence detection method is statistics: fluorescence detectors have a reduced duty cycle, because they can only observe on moonless nights. That amounts to a 10% duty cycle, so they are complemented by a surface detector, which has a 100% duty cycle.

The surface detector is composed of water Cherenkov detectors on the ground, which detect light with photomultiplier tubes. The signal is sampled as a function of the distance from the shower core. The measurement depends on a Monte Carlo simulation, so some systematic uncertainties are present in the method.

The assembly includes 1600 surface detectors, surrounded by four fluorescence detector sites. These study the high-energy cosmic rays: their spectra, their arrival directions, and their composition. The detector also has some sensitivity to ultra-high-energy neutrinos. A standard cosmic ray interacts at the top of the atmosphere and yields an extensive air shower with an electromagnetic component at ground level; but if the arrival direction of the primary is tilted with respect to the vertical, the e.m. component is absorbed before reaching the ground, so the shower arriving there contains only muons. For neutrinos, which can penetrate deep into the atmosphere before interacting, the shower will instead have a significant e.m. component regardless of the angle of incidence.

The “footprint” is the pattern of firing detectors on the ground, and it encodes information on the angle of incidence. For tilted showers, the presence of an e.m. component is a strong indication of a neutrino-induced shower. Tilted showers show an elongated footprint and a wide time structure of the signal.

There is a second method to detect neutrinos, based on so-called “Earth-skimming” neutrinos: the Earth-skimming mechanism occurs when a neutrino interacts in the Earth, producing a charged lepton via a charged-current interaction. The lepton produces a shower that can be detected above the ground. This channel has a better sensitivity than that for neutrinos interacting in the atmosphere. It can be used for tau neutrinos, thanks to the early decay of the tau in the atmosphere. The relevant distance is 500 km for a muon neutrino, 50 km for a tau neutrino, and 10 km for electrons; these figures apply to 1 EeV primaries. If you are unfamiliar with these ultra-high energies, 1 EeV = 1000 PeV = 1,000,000 TeV: this is roughly equivalent to the energy drawn in a second by a handful of LEDs.
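
(A quick back-of-the-envelope check of that last claim – my own, not from the talk; the LED power used below is an assumed, typical value.)

```python
# Back-of-the-envelope check: 1 EeV expressed in joules, compared with a few small LEDs
# running for one second. The LED power is an assumed, typical value.
eV_to_joule = 1.602e-19
E_eev = 1e18 * eV_to_joule            # 1 EeV in joules, ~0.16 J

led_power_watt = 0.05                 # a small indicator LED draws a few tens of mW (assumed)
n_leds = 3
energy_leds = n_leds * led_power_watt * 1.0   # energy drawn in one second, in joules
print(f"1 EeV = {E_eev:.2f} J, three LEDs for one second = {energy_leds:.2f} J")
```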

Showers induced by emerging tau leptons start close to the detector, and are very inclined. So one asks for an elongated footprint, and a shower moving at the speed of light using the available timing information. The background to such a signature is of the order of one event every 10 years. The most important drawback of Earth-skimming neutrinos is the large systematic uncertainty associated with the measurement.

Ideally, one would like to produce a neutrino spectrum, or an energy-dependent limit on the flux, but no energy reconstruction is available: the observed energy depends on the height at which the shower develops, and since this is not known for penetrating particles such as neutrinos, one can only give an integrated flux limit. The limit lies in the range of energy where GZK neutrinos should peak, but its value is an order of magnitude above the expected flux of GZK neutrinos. A differential limit in energy has a much worse reach.

The result for the integrated flux of neutrinos obtained by the Pierre Auger Observatory in 2008 was shown, compared with other limits and with expectations for GZK neutrinos.

Variation found in a dimensionless constant! April 1, 2009

Posted by dorigo in cosmology, mathematics, news, physics, science.
comments closed

I urge you to read this preprint by R. Scherrer (from Vanderbilt University), which appeared yesterday on the arXiv. It is titled “Time variation of a fundamental dimensionless constant”, and I believe it might have profound implications for our understanding of cosmology, as well as for theoretical physics. I quote the incipit of the paper below:

“Physicists have long speculated that the fundamental constants might not, in fact, be constant, but instead might vary with time. Dirac was the first to suggest this possibility [1], and time variation of the fundamental constants has been investigated numerous times since then. Among the various possibilities, the fine structure constant and the gravitational constant have received the greatest attention, but work has also been done, for example, on constants related to the weak and strong interactions, the electron-proton mass ratio, and several others.”

Many thanks to Massimo Pietroni for pointing out the paper to me this morning. I am now collecting information about the study, and will update this post shortly.

Neutrino Telescopes day 2 notes March 12, 2009

Posted by dorigo in astronomy, cosmology, news, physics, science.
comments closed

The second day of “Neutrino Telescopes XIII” in Venice was dedicated to, well, neutrino telescopes. I have written down in stenographic fashion some of the things I heard, and I offer them to those of you who are really interested in the topic, without much editing. Besides, making sense of my notes takes quite some time, more than I have tonight.

So, I apologize for spelling mistakes (the ones I myself recognize post-mortem), in addition to the more serious conceptual ones coming from missed sentences or from errors caused by my poor understanding of English, of the matter, or of both. Also, I apologize to those of you who would have preferred a more succinct, readable account: as Blaise Pascal once put it, “I have made this letter longer than usual, because I lack the time to make it short”.

NOTE: the links to slides are not working yet – I expect that the conference organizers will fix the problem tomorrow morning.

Christian Spiering: Astroparticle Physics, the European strategy ( Slides here)

Spiering gave some information about two new European bodies, ApPEC and ASPERA. ApPEC has two committees offering advice to national funding agencies, improving links and communication between the astroparticle physics community and the scientific programmes of organizations like CERN, ESA, etc. ASPERA was launched in 2006 to produce a roadmap for astroparticle physics in Europe, in close coordination with ASTRONET and with links to the CERN strategy bodies.

The roadmapping covers the science case, an overview of the status, and some recommendations for convergence; then comes a critical assessment of the plans and a calendar of milestones, coordinated with ASTRONET.

For dark matter and dark energy searches, Christian displayed a graph showing the WIMP cross-section reach of present-day experiments as a function of time. In 2015 we should reach cross sections of about 10^-10 picobarns; we are now at a sensitivity of some 10^-8. The reach depends on background, funding and infrastructure. The idea is to go toward 2-ton-scale, zero-background detectors. Projects: ZEPLIN, XENON, and others.

In an ideal scenario, LHC observations of new particles at the weak scale would place these observations in a well-confined particle physics context, and direct detection would be supported by indirect signatures. In case of a discovery, smoking-gun signatures of direct detection, such as directionality and annual modulation, would be measured in detail.

Properties of neutrinos: the direct mass measurement efforts are KATRIN and Troitsk. Double beta decay experiments include Cuoricino, NEMO-3, GERDA, CUORE, et al. The KKGH group claimed a signal corresponding to masses of a few tenths of an eV, while the normal hierarchy implies a lightest neutrino mass of the order of 10^-3 eV. Experiments are either in operation (Cuoricino, NEMO-3) or expected to start by 2010-2011; SuperNEMO will start in 2014.

A large infrastructure for proton decay is advised. For charged cosmic rays, different kinds of physics contribute and can be explored, depending on which part of the spectrum one looks at.

The case for Auger-North is strong: high-statistics astronomy with reasonably fast data collection is needed.

For high-energy gamma rays, the situation has seen enormous progress over the last 15 years, mostly thanks to imaging atmospheric Cherenkov telescopes (IACTs): Whipple, HEGRA, CAT, CANGAROO, HESS, MAGIC, VERITAS. There are also wide-angle devices. Among existing air Cherenkov telescopes, HESS and MAGIC are running, and very soon MAGIC will be upgraded to MAGIC-II. Whipple runs a monitoring telescope.

There are new plans for MACE in India, something between MAGIC and HESS. CTA and AGIS are in their design phase.

ASPERA's recommendations: the priority of VHE gamma astrophysics is CTA. They recommend the design and prototyping of CTA and the selection of sites, proceeding decidedly towards the start of deployment in 2012.

For point neutrino sources, there has been tremendous progress over the last decade: a factor of 1000 in flux sensitivity within 15 years. IceCube will deliver what it has promised by 2012.

For gravitational waves, there are LISA and VIRGO. LISA probes frequencies around 10^-2 Hz, while VIRGO will go to 100-10000 Hz. The reach is of several to several hundred sources per year. The Einstein Telescope, an underground gravitational-wave detector, could access thousands of sources per year; its construction would start in 2017. The conclusions: Einstein is the long-term future project of ground-based gravitational-wave astronomy in Europe. A decision on funding will come after the first detections with enhanced LIGO and VIRGO, most likely after about a year of data has been collected.

In summary, the budget will increase by a factor of more than two in the next decade. KM3NeT, a megaton detector, CTA and ET will be the experiments taking the largest share. We are moving into regions with a high discovery potential, with an accelerated increase of sensitivity in nearly all fields.

K. Hoffman, Results from IceCube and AMANDA, and prospects for the future (slides here)

IceCube will become the first instrumented cubic-kilometer neutrino telescope. AMANDA-II consists of 677 optical modules embedded in the ice at depths of 1500-2000 m; it has been a testbed for IceCube and for deploying optical modules. IceCube has been under construction for the last several years: strings of photomultiplier tubes have been deployed in the ice, and 59 of them are operating.

The rates: IC40 collects 110 neutrino events per day. They are getting close to 100% live time – 94% in January. IceCube has the largest effective area for muons, thanks to the long track lengths. The energy sensitivity is in the TeV-PeV range.

Ice properties are important to understand. A dust logger measures dust concentration, which is connected to the attenuation length of light in ice. There is a thick layer of dust sitting at a depth of 2000m, clear ice above, and very clear ice below. They have to understand the light yield and propagation well.

Of course, one of the most important parameters is the angular resolution. As the detector got larger, it improved: one of the more exciting things this year was to see the point spread function peak at less than one degree for muons with long track lengths.

Seeing the Moon is always reassuring for a telescope. They did it: a more than 4 sigma observation of the Moon's shadow in cosmic rays.

With the waveforms available in IceCube, energy reconstruction exploits the fact that these muons are not minimum-ionizing: the energy is reconstructed from the number of photons along the track. Some energy resolution can be achieved, and there is progress in understanding how to reconstruct energy.

First results from point-source searches: the 40-string configuration data will be analyzed soon. Point sources are sought with an unbinned likelihood search which takes an energy variable into account, since point sources are expected to have a harder energy spectrum than atmospheric neutrinos. From 5114 neutrino candidates in 276 days, they found one hot spot in the sky, with a significance, after accounting for the trial factor, of about 2.2 sigma. Next year some variables will be less sensitive to the dust model, so they might be able to say more about that spot soon.
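
As an aside on what “significance after accounting for the trial factor” means, here is a minimal sketch of the standard correction, with made-up numbers (the pre-trial significance and the effective number of sky positions below are illustrative assumptions, not the IceCube values):

```python
# Sketch of a trial-factor (look-elsewhere) correction for a hottest-spot search.
# The pre-trial significance and the effective number of independent sky positions
# are made-up, illustrative numbers.
from scipy.stats import norm

pre_trial_sigma = 4.8        # local significance of the hottest spot (assumed)
n_trials = 10000             # effective number of independent positions searched (assumed)

p_pre = norm.sf(pre_trial_sigma)                 # one-sided local p-value
p_post = 1.0 - (1.0 - p_pre) ** n_trials         # probability of such a spot anywhere in the sky
post_trial_sigma = norm.isf(p_post)
print(f"local {pre_trial_sigma:.1f} sigma -> post-trial {post_trial_sigma:.2f} sigma")
```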

For seven years of data, with 3.8 years of livetime, the hottest spot has a significance of 3.3 sigma. With one year of data, IceCube-22 will already be more sensitive than AMANDA. IceCube and ANTARES are complementary, since IceCube looks at northern declinations while ANTARES looks at southern declinations. The point-source flux sensitivity is down to 10^-11 GeV cm^-2 s^-1.

For GRBs, one can use a triggered search, which is an advantage; the latest results give a limit from 400 bursts. From IceCube-22, an unbinned search similar to the point-source one gives an expected exclusion power of 10^-1 GeV cm^-2 (in E^2 dN/dE units) over most of the energy range.

During the naked-eye GRB of March 19, 2008, the detector was in test mode, with only 9 of 22 strings taking data. Bahcall predicted a flux peaking at 10^6 GeV at a level of 10^-1, but the limit found is 20 times higher.

Finally, they are looking for WIMPs. A search by the 22-string IceCube, using 104 days of livetime, was recently submitted for publication; it reaches well down in cross section.

Atmospheric neutrinos are also a probe of violations of Lorentz invariance, possibly arising from quantum gravity effects. The survival probability would depend on energy; assuming maximal mixing, their sensitivity to such effects reaches down to one part in 10^28. They are looking for a deviation from what one would expect for standard flavor oscillations: depending on where they are produced, atmospheric neutrinos traverse more or less of the Earth's core, so one gets a neutrino beam with different baselines, and a Lorentz-violating contribution would make the oscillation probability energy dependent in an anomalous way.

In the future they would like a high-energy extension. Ice is the only medium where one can see a coherent radio signal, an optical one, and an acoustic one as well. The past season was very successful, with the addition of 19 new strings. Many analyses of the 22-string configuration are complete, and analysis techniques are being refined to exploit the size, energy threshold, and technology used. Work is underway to develop the technology to build a GZK-scale neutrino detector after IceCube is complete.

Vincenzo Flaminio, Results from Antares ( slides here)

Potential sources of galactic neutrinos are SN remnants, pulsars, and microquasars; extragalactic ones are gamma-ray bursts and active galactic nuclei. A by-product of ANTARES is an indirect search for dark matter; results are not ready yet.

Neutrinos from supernova remnants: these act as particle accelerators and can yield hadrons, and gammas from neutral pion decays. Possible sources are those found by Auger, or for example the molecular clouds from which TeV photons are observed.

ANTARES is an array of photomultiplier tubes that look at the Cherenkov light produced by muons crossing the detector. The site is off the southern coast of France, and the galactic center is visible 75% of the time. The collaboration comprises 200 physicists from many European countries. The control room in Toulon is more comfortable than the AMANDA site (and this wins the understatement prize of the conference).

The depth in water is 2500 m. All strings are connected via cables on the seabed, and a 40-km-long electro-optical cable connects to shore. The time resolution is monitored by an LED beacon in each detector storey.

Deployment started in 2005, the first line was installed in 2006, and construction finished one year ago. In addition there is an acoustic storey and several monitoring instruments. Biologists and oceanographers, not just neutrino physicists, are interested in what is being done.

The detector positioning is an important point, because the lines move with the sea currents. A large number of transmitters are installed along the lines, and their information is used to reconstruct the precise position of the lines minute by minute.

They collect triggers at a 10 Hz rate with 12 lines. They detected 19 million muons with the first 5 lines, and 60 million with the full detector.

The first physics analyses are ongoing. They select up-going neutrinos; this way the low signal-to-noise ratio due to atmospheric muons is avoided. The rate is of the order of two neutrinos per day in the multi-line configuration.

Conclusions: ANTARES has successfully reached the end of its construction phase. Data taking is ongoing, with analyses in progress on atmospheric muons and neutrinos, cosmic neutrino sources, dark matter, neutrino oscillations, magnetic monopoles, etcetera.

David Saltzberg, Overview of the Anita experiment ( slides here)

ANITA flies at 120,000 ft above the ice. It is the eyepiece of the telescope; the objective is the large amount of ice of Antarctica. The detection principle was tested at SLAC with 8 metric tons of ice: one observes radio pulses from the ice. A wake-field radio signal is detected, which goes up and down in less than a nanosecond due to its Cherenkov nature; this is called the Askaryan effect. The measured field does track the number of particles in the shower, so one can measure the shower size. The signal is 100% linearly polarized. The wavelength is bigger than the size of the shower, so the emission is coherent: at a PeV, more radio quanta are emitted than optical ones.

They will use this at very high energy, looking for GZK-induced neutrinos: the GZK mechanism converts protons into neutrinos within about 50 Mpc of the sources.

The energy is at the level of 10^18 eV or higher, and the proper time is 50 milliseconds: the longest-baseline neutrino experiment possible.

ANITA has GPS antennas for position and orientation, which need a fraction-of-a-degree resolution. The payload is solar powered, and the antennas are pointed down by 10 degrees.

This 50-page document describes the instrument.

Lucky coincidences: 70% of the world's fresh water is in Antarctica, and it is the quietest place in terms of radio noise. The place selects itself, so to speak.

They made a flight with a live time of 17.3 days, but this one never flew above the thickest ice, which is where most of the signal should be coming from.

The Askaryan signal gets distorted by the antenna response, the electronics, and thermal noise. The triggering works like any multi-level trigger: sufficient energy in one antenna, the same for its neighbors. L2 requires a coincidence between adjacent L1 triggers, and L3 brings the rate down to 5 Hz from a starting 150 kHz.

They put a transmitter underground to produce pulses to be detected. Cross-correlation between antennas provides interferometry and gives the position of the source. The resolution obtained on the elevation is an amazing 0.3 degrees, and on the azimuth it is 0.7 degrees. The ground pulsers make even very small effects stand out: even a 0.2-degree tilt of the detector can be spotted by looking at the errors in elevation as a function of azimuth.
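
For the curious, here is a minimal sketch of the kind of cross-correlation timing that such interferometric pointing relies on; it is my own illustration with made-up waveforms, sampling rate and antenna spacing, not ANITA code:

```python
# Minimal sketch: estimate the arrival-time difference of a radio pulse between two
# antennas by cross-correlation, then convert it to an arrival angle.
# Waveforms, sampling rate and antenna separation are made-up illustrative values.
import numpy as np

fs = 2.6e9          # sampling rate [Hz] (assumed)
c = 3.0e8           # speed of light [m/s]
baseline = 1.0      # separation between the two antennas [m] (assumed)

t = np.arange(2048) / fs
pulse = np.exp(-((t - 4e-7) / 2e-9) ** 2)             # a ~ns-wide pulse
true_delay = 1.2e-9                                    # true arrival-time difference [s]
shift = int(round(true_delay * fs))
wave1 = pulse + 0.05 * np.random.randn(t.size)                   # antenna 1, with noise
wave2 = np.roll(pulse, shift) + 0.05 * np.random.randn(t.size)   # antenna 2, delayed copy

# Cross-correlate and find the lag of the maximum
corr = np.correlate(wave2, wave1, mode="full")
lag = np.argmax(corr) - (t.size - 1)
measured_delay = lag / fs

# For a plane wave, c * delay = baseline * sin(theta)
theta = np.degrees(np.arcsin(np.clip(c * measured_delay / baseline, -1, 1)))
print(f"measured delay = {measured_delay*1e9:.2f} ns, arrival angle = {theta:.1f} deg")
```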

First pass of the data analysis: 8.2M hardware triggers, 20,000 of which point well to the ice. After requiring up-coming plane waves, isolated from camps and other events, a few events remain; these could be some residual man-made noise. Background estimates: thermal noise, which is well simulated and gives less than one event after all cuts, and anthropogenic impulsive noise, like iridium phones, spark plugs, and discharges from structures.

Results: having seen zero vertically-polarized events surviving the cuts, they set constraints on GZK production models – the best result to date in the energy range from 10^10 to 10^13 GeV.

ANITA-2 collected 27 million better triggers, over deeper ice, during 30 days afloat; the data are still to be analyzed. ANITA-1 is doing a second-pass, deeper analysis of its data. ANITA-2 has better data, and a factor of 5-10 more GZK sensitivity is expected from it.

Sanshiro Enomoto, Using neutrinos to study the Earth: Geoneutrinos. ( slides here)

Geoneutrinos are generated in the beta decay chains of natural isotopes (U, Th, K), which all yield antineutrinos. With an organic scintillator, they are detected via the inverse beta decay reaction, which yields a neutron and a positron. The threshold is at 1.8 MeV: uranium and thorium contribute in this energy range, while the potassium yield is below it. Only U-238 can be seen.

Radiogenic heat dominates the Earth's energetics. The measured terrestrial heat flow is 44 TW, and the radiogenic heat is 3 TW. The only direct geochemical probes are limited: the deepest borehole reaches only 12 km, and rock samples go down to 200 km underground. The heat release from the surface peaks off the American coast in the Pacific and in the southern Indian Ocean. The estimate is 20 TW from radioactive heat: 8 from U, 8 from Th, and 3 from K. The core heat flow from solidification etc. is estimated at 5-15 TW, and secular cooling at 18±10 TW.

Kamland has seen 25 events above backgrounds, consistent with expectations.

I did not take further notes on this talk, but I was impressed by some awesome plots of Earth planispheres showing all the sources of neutrino backgrounds, meant to figure out the best place for a detector studying geo-neutrinos. Check the slides for them…

Michele Maltoni, synergies between future atmospheric and long-baseline neutrino experiments ( slides here)

A global six-parameter fit of neutrino oscillation parameters was shown, including solar, atmospheric, reactor, and accelerator neutrinos, but not yet SNO-III. There is a small preference for non-zero theta_13, coming entirely from the solar sector; as pointed out by G. Fogli, we do not find a non-zero theta_13 angle from atmospheric data. All we can do is point out that there might be something interesting, and suggest that experiments do their own analyses quickly.

The question is: in this picture, where many experiments contribute, is there any space left for atmospheric neutrinos to be relevant? What is the role of atmospheric neutrino measurements? Do we need them at all?

At first sight, there is not much left for atmospheric neutrinos: the mass determination is dominated by MINOS, theta_13 is dominated by CHOOZ, and while atmospheric data dominate the determination of the mixing angle and have the highest statistics, this is going to change with the coming of the next generation of experiments. There is a symmetry in the sensitivity shape of the other experiments to some of the parameters; when atmospheric data are included, that symmetry is broken in theta_13, which distinguishes between the normal and inverted hierarchy.

Atmospheric data also help in the determination of the octant of \sin^2 \theta_{23} and of \Delta m^2_{31}. In addition, they introduce a modulation in the \delta_{CP} - \sin \theta_{13} plane. Will this usefulness continue in the future?

Sensitivity to theta_13: apart from the hints mentioned so far, atmospheric neutrinos can observe theta_13 through MSW matter effects. In practice, the sensitivity is limited by statistics: at E = 6 GeV the atmospheric flux is already suppressed, and a background of \nu_e \to \nu_e events strongly dilutes the \nu_\mu \to \nu_e signal. Also, the resonance occurs only for neutrinos OR antineutrinos, not for both.

As far as resolution goes, megaton detectors are still far in the future, but long-baseline experiments are starting now.

One concludes that the sensitivity to theta_13 is not competitive with dedicated LBL and reactor experiments.

Completely different is the issue with other properties, since the resonance can be exploited once theta_13 has been measured: there is a resonant enhancement of neutrino (antineutrino) oscillations for a normal (inverted) hierarchy, mainly visible at high energy, above 6 GeV. The effect can be observed if the detector can discriminate charge or, if no charge discrimination is possible, if the numbers of neutrinos and antineutrinos are different.

The sensitivity to the hierarchy depends on charge discrimination for muon neutrinos. Sensitivity to the octant: in the low-energy region (E<1 GeV), for theta_13=0 there is an excess of the \nu_e flux for theta_23 in one or the other octant. Otherwise there are lots of oscillations, but the effect persists on average, and it is present for both neutrinos and antineutrinos. At high energy, E>3 GeV, for non-zero theta_13 the MSW resonance produces an excess of electron-neutrino events; the resonance occurs only for one kind of neutrino (neutrinos vs antineutrinos).

So, in summary, one can detect many features with atmospheric neutrinos, but only with some particular characteristics of the detector (charge discrimination, energy resolution…).

Without atmospheric data, only K2K can say something on the neutrino hierarchy for low theta_13.

LBL experiments have poor sensitivity due to parameter degeneracies. Atmospheric neutrinos contribute in this case. The sensitivity to the octant is almost completely dominated by atmospheric data, with only minor contributions by LBL measurements.

One final comment: there might be hints of the neutrino hierarchy in high-energy data. If theta_13 is really large, there can be some sensitivity to the neutrino mass hierarchy. So the idea is to equip part of the detector with increased photo-coverage and use the rest of the mass as a veto: the goal is to lower the energy threshold as much as possible, to gain sensitivity to the neutrino parameters with large statistics.

Atmospheric data are always present in any long-baseline neutrino detector: ATM and LBL provide complementary information on neutrino parameters, information in particular on hierarchy and octant degeneracy.

Stavros Katsanevas, Toward a European megaton neutrino observatory (slides here)

Underground science has interdisciplinary potential at all scales: galactic supernova neutrinos, galactic neutrinos, SN relics, solar neutrinos, geo-neutrinos, dark matter, and cosmology (dark energy and dark matter).

LAGUNA is aimed at defining and realizing this research programme in Europe. It includes a majority of the European physicists interested in the construction of very massive detectors based on one of three liquid technologies: water, liquid argon, and liquid scintillator.

The candidate detectors are MEMPHYS, LENA, and GLACIER. Where could we put them? The muon flux goes down with the overburden, so one has to examine the sites by their depth. In Frejus there is the possibility to put a detector between the road and the railway tunnels. The Frejus rock is neither hard nor too soft; hard rock can become explosive because of stresses, which is not good. Another site is Pyhasalmi in Finland, but there the rock is hard.

Frejus is probably the only place where one can put a water Cherenkov detector. For liquid argon, we have ICARUS (hopefully starting data taking in May) and others (LANNDD, GLACIER, etc.). GLACIER is a 70 m tank with several novel concepts: a safe LNG-type tank, of the kind developed for many years by the petrochemical industry. R&D includes readout systems and electronics, safety, HV systems, and LAr purification. One must also think about an intermediate-scale detector.

The physics scope is a complementary program: MEMPHYS has more reach in searches for positron-pizero decays of the proton, while liquid argon is better for kaon modes. Proton lifetime expectations are at 10^36 years.

By 2013-2014 we will know whether sin^2 theta_13 is larger than zero.

The European megaton detector community (three liquids), in collaboration with its industrial partners, is currently addressing common issues (sites, safety, infrastructure, non-accelerator physics potential) in the context of LAGUNA (EU FP7). Cost estimates will be ready by July 2010.

D. Cowen, The physics potential of IceCube's deep core sub-array (slides here)

A new sub-array in IceCube, called DeepCore (ICDC), was originally conceived as a way to improve the sensitivity to WIMPs. Denser sub-arrays lower the energy threshold, giving an order-of-magnitude improvement in the low-energy reach. There are six special strings plus seven nearby IceCube strings. The vertical spacing is 7 meters, with a 72-meter horizontal inter-string spacing: a density ten times that of IceCube.

The effective scattering length in deep ice, which is very clear, is longer than 40 meters. This gives a better possibility to do a calorimetric measurement.

The deep core sits at the bottom center of the array. The top modules in each string are used as an active veto against backgrounds from down-going muon events; on the sides, three layers of IceCube strings also provide a veto. These beat down the cosmic-ray background a lot.

The ICDC veto algorithms: one runs online and finds the event light intensity, the weighted center of gravity, and the time. They do a number of things and come up with a 1:1 S/N ratio. So ICDC improves the sensitivity to WIMPs, to neutrino sources in the southern sky, and to oscillations. For WIMPs, annihilation can occur in the center of the Earth or of the Sun: annihilations into b b-bar or tau-tau pairs give soft neutrinos, while those into W boson pairs yield hard ones. This way, they extend the reach to masses of less than 100 GeV, at cross sections of 10^-40 cm^2.

In conclusion, ICDC can analyze data at lower neutrino energy than previously thought possible. It improves overlap with other experiments. It provides for a powerful background rejection, and it has sufficient energy resolution to do a lot of neutrino oscillation studies.

Kenneth Lande, Projects in the US: a megaton detector at Homestake ( slides here)

DUSEL is at Homestake, in South Dakota. There are four water Cherenkov tanks in the design. Nearby is the old site of the chlorine experiment, with shafts a km apart.

DUSEL will be an array of 100-150 kT fiducial-mass water Cherenkov detectors, at a distance of 1300 km from FNAL. The beam power goes from 0.7 MW to 2.0 MW as the project goes along; eventually, 100 kT of argon would be added. A cutaway view of the facility was shown.

The goals are an accelerator-based theta_13 measurement, the neutrino mass hierarchy, and CP violation through delta_CP. On the non-accelerator side, the program includes studies of proton decay, relic SN neutrinos, prompt SN neutrinos, atmospheric neutrinos, and solar neutrinos. They can build tanks up to 70 m wide, but settled for 50-60 m. The plan is to build three modules.

Physics-wise, the FNAL beam has oscillated and disappeared at an energy around 4 GeV. The rate is 200,000 CC events per year assuming 2 MW of power (raw events, without oscillation). Electron-neutrino appearance for neutrinos and antineutrinos as a function of energy gives the oscillation parameters and the mass hierarchy.

The reach in theta_13 is below 10^-2. For nucleon decay, they are looking in the range of 10^34 years: 300 kT over 10 years means 10^35 proton-years. They are also sensitive to the K-nu decay mode, at the level of 8×10^33 years.

DUSEL can choose the overburden. A deep option can go deeper than Sudbury.

US power reactors are far from Homestake. Typical distance is 500 km. The neutrino flux from reactors is 1/30 of that of SK.

For a SN in our galaxy they expect about 100,000 events in 10 seconds. For a SN in M31 they expect about 10-15 events in a few seconds.

Detector construction: excavation, installation of a water-tight liner… The financial timetable is uncertain. At the moment the water level in the mine is being pumped down. Rock studies can start in September.

And that would be all for today… I heard many other talks, but cannot bring myself to comment on those. Please check the conference site, http://neutrino.pd.infn.it/NEUTEL09/, for the slides of the other talks!

Neutrino Telescopes Day 1 note March 11, 2009

Posted by dorigo in cosmology, news, physics, science.
comments closed

Below are some notes I collected today during the first day of the “Neutrino Telescopes” conference in Venice. I have to warn you, dear readers, that my superficial knowledge of most of the topics discussed today makes it very likely, if not certain, that I have inserted some inaccuracies, or even blatant mistakes, in this summary. I am totally responsible for the mistakes, and I apologize in advance for whatever I have failed to report correctly. Also, please note that because of the technical nature of this conference, and the specialized nature of the talks, I have decided not to even try to simplify the material: this is thus only useful for experts.

In general, the conference is always a pleasure to attend. The venue, Palazzo Franchetti, is located on the Canal Grande in Venice. To top that, today was a very nice and sunny day. I skipped the first few “commemorative” talks, and lazily walked to the conference hall in time for the coffee break. The notes I took refer to only some of the talks – those which I managed to follow closely.

Art McDonald, SNO and the new SNOLAB

This was a discussion of the SNO experiment and a description of the new experiments that will start to operate in the expansion of the SNO laboratory. SNO is an acrylic vessel, 12 m in diameter, containing 1000 tonnes of heavy water (D_2 O), with an additional 1700 tonnes of water for inner shielding and 5300 tonnes for outer shielding. 9500 photomultiplier tubes watch it, quick to record the faint neutrino signals.

The detector is located deep underground in the Creighton mine near Sudbury, Ontario, Canada. The depth makes for smaller cosmic-ray backgrounds than in other neutrino detectors – a depth at which muons from neutrino interactions start to compete with primary ones.

SNO was designed to observe neutrinos in three different reactions:

  1. In the charged-current weak interaction of a neutrino with a deuterium nucleus, the neutrino becomes an electron, emitting a W boson which turns the nucleus into a pair of protons. This reaction has an energy threshold of 1.4 MeV, and the electron can be measured through the Cherenkov light it yields in the liquid.
  2. Neutral-current interactions – where neutrinos interact with matter by exchanging virtual Z bosons – are possible for all kinds of neutrinos; their signature is a neutron and a proton freed from the nucleus, provided the incoming neutrino has an energy above 2.2 MeV.
  3. Finally, elastic scattering can occur between neutrinos and the electrons of the medium, both in light and in heavy water.

SNO uses three neutron detection methods, which are “systematically different”: they rely on different physical processes and thus have different measurement systematics. First of all, in pure heavy water one can detect neutrons through their capture on deuterium, with the emission of a 6.25 MeV photon.

Putting salt in the detector yields more gamma rays from neutron capture, because the sodium chloride allows neutron capture on 35Cl, and neutral-current events can then be separated from charged-current events using the event isotropy.

In phase III they inserted an array of long tubes filled with ultrapure Helium-3, observing neutron capture and measuring the neutral-current rate with an entirely different detection system.

The measurements showed that the CC and NC fluxes are not the same: their ratio is R(CC/NC) = 0.34 \pm 0.023^{+0.029}_{-0.031}.

Phase III consisted of inserting 40 strings on a 1-meter-spaced grid in the vessel, for a total of 440 meters of proportional counters filled with 3He. The signal collected in phase III amounts to 983 \pm 77 events.

Combined with the results of the KamLAND and Borexino experiments, the fit to SNO data constrains the angle \theta_{12} to 34.4 \pm 1.2 degrees, and \Delta m^2 = (7.59^{+0.19}_{-0.21}) \times 10^{-5} eV^2.
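
As a rough illustration of what these best-fit numbers imply (my own sketch, not from the talk), one can plug them into the two-flavor vacuum survival probability; the baseline and energy below are assumed, KamLAND-like illustrative values, and matter effects are ignored:

```python
# Minimal two-flavor vacuum oscillation sketch (illustrative only, not the SNO/KamLAND fit):
# P_ee = 1 - sin^2(2*theta12) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])
import numpy as np

theta12 = np.radians(34.4)   # best-fit mixing angle quoted above
dm2 = 7.59e-5                # eV^2, best-fit mass splitting quoted above
L = 180.0                    # km, a KamLAND-like average reactor baseline (assumed)
E = 0.004                    # GeV, i.e. a 4 MeV reactor antineutrino (assumed)

P_ee = 1 - np.sin(2 * theta12)**2 * np.sin(1.27 * dm2 * L / E)**2
print(f"Survival probability at L={L:.0f} km, E={E*1000:.0f} MeV: {P_ee:.2f}")
```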

The future for SNO is to fill it with liquid scintillator doped with neodymium for double beta decay studies. 150-Nd is one of the most favourable candidates for double beta decay, with a large phase space due to its high endpoint energy (3.37 MeV). The scintillator provides a long attenuation length and is stable for more than 2 years. For double beta decays they expect to reach a 0.1 eV sensitivity with a 1000-ton detector mass.

Atsuto Suzuki: KamLAND

Atsuto discussed the history and the results of the KamLAND experiment. There was a first proposal of the detector in 1994 and a full budget approval by Japan in 1997. In April 1998 the construction started, and in 1999 US-KamLAND was approved by the DOE. Data taking began in 2002. In August 2009 there will be a new budget proposal, for double beta decay studies.

KamLAND consists of a balloon filled with liquid scintillator, contained within an outer vessel of buffer oil. KamLAND detects neutrino oscillations over a >100 km baseline, exploiting the many nuclear reactors in Japan. The second goal is to search for geo-neutrinos, as well as for the antineutrinos that a hypothetical natural fission reactor at the center of the Earth would produce.

The many reactors provide the source of neutrinos: a total of 70 GW (12% of global nuclear power) at an average distance of 175 \pm 35 km from KamLAND. The largest systematics for reactor neutrino detection come from the knowledge of the fiducial volume (4.7%), the energy threshold (2.3%), and the antineutrino spectrum (2.5%), for a total of 6.5%.

The experiment observed reactor antineutrino disappearance and measured the parameters of neutrino oscillations; it also put an upper limit of 6.4 TW on the power of a hypothetical georeactor at the center of the Earth. Theoretical models, which predict the power at 3 TW, have not been excluded yet.

Gianluigi Fogli:  SNO, KamLAND and neutrino oscillations: theta_13.

Gianluigi started his talk with a flashback: four slides which had been shown at NO-VE 2008, the previous edition of this conference. That came after the KamLAND 2008 release, but before the SNO 2008 release of results.

What one would like to know is the hierarchy (normal or inverted), the CP asymmetry in the neutrino sector, and the \theta_{13} mixing. Some aspects of this picture are currently hidden below the 1-sigma level. A recent example is the slight preference for \sin^2 \theta_{13} = 0.01 from the combination of solar and reactor 2008 data: the individual datasets are consistent with zero, but their combination prefers a value one sigma away from it.

In the second slide from 2008, the reason was discussed: a disagreement between the solar data, which are SNO-dominated, and the KamLAND data at \theta_{13} = 0. The disagreement is reduced for \theta_{13} > 0: a choice of \sin^2 \theta_{13} = 0.03 (instead of zero) gives a better simultaneous fit of the two datasets. It is a tiny effect, but with some potential for improvement once the final SNO data and further KamLAND data become available.

The content of Fogli's talk was organized as a timeline of eight events, in two acts.

First: in May 2008 the effect was discussed independently by Balantekin and Yilmaz. Then, also in May, the SNO-III data were released. In June, our analysis giving \sin^2 \theta_{13} = 0.021 \pm 0.017 went to PRL, and an independent analysis of solar plus KamLAND data was given in August.

Concerning atmospheric and long-baseline neutrinos, our analysis yielded 0.016 \pm 0.010 from all data; then came comments on the atmospheric hint by Maltoni and Schwetz, then a new three-flavor atmospheric analysis from SK. Finally, just a month ago, we saw the first MINOS results on electron-neutrino appearance.

Act one: the solar and KamLAND hint for \theta_{13} > 0, as discussed by Balantekin and Yilmaz. The release of the SNO-III data brought a strong improvement: the result is a slightly lower CC/NC ratio, so a slightly lower value of \sin^2 \theta_{13} is preferred. Fogli noted here that the new data are fine from a model-independent viewpoint, that is, there is an internal consistency between SNO and SK; there is also consistency between the neutral-current measurements and the 2005 standard solar model. On the other hand, the KamLAND data have their own internal consistency: they reconstruct the oscillation pattern through one full period. The fact that the solar and KamLAND datasets are each fine, but disagree on theta_12 unless theta_13 > 0, is thus intriguing.

Event 3: the hints for theta_13 > 0 from the global analysis. The hint is plotted in the plane of the two mixing angles, sin^2 theta_13 vs sin^2 theta_12, and one sees that the solar and KamLAND regions agree only if \sin^2 \theta_{13} is larger than zero. When they are combined, the best fit is more than one sigma away from zero: 0.021 \pm 0.017. The reason for the different correlation of the two mixing angles lies in the relative sign of the mixings in the expression for the neutrino survival probability in SNO and KamLAND: at low energy, in vacuum, the survival probability features an anticorrelation of \sin^2 \theta_{12} and \sin^2 \theta_{13}; at high energy, in the adiabatic MSW regime (SNO), the sign is opposite.

Complementarity: solar and KamLAND data taken separately prefer theta_13 = 0; combined, they are 1.2 sigma away from zero.

Event 4 in the list given above was the analysis by Schwetz and Tortola and Valle: they also found a preference for \theta_{13}>0  at a slightly higher confidence level.

In conclusion, a weak preference for \theta_{13} > 0 is present at the 1.2-1.5 sigma level. Is this preference also supported by atmospheric and accelerator data? In Fogli's paper (0806.2649) they used, as independent support for a nonzero value of the angle, an older hint coming from their analysis of atmospheric data with CHOOZ and long-baseline results.

The complication comes out in Act 2. Event 5 is the older but persisting hint for \theta_{13} > 0 coming from the three-neutrino analysis of atmospheric, LBL, and CHOOZ data. There one has to go into detail, by considering what one means by an excess of electron events induced by three-neutrino sub-leading effects. The calculations are based on a numerical evolution of the Hamiltonian along the neutrino path in the atmosphere and in the known Earth layers; however, semianalytical approximations can be useful. An important observable is the excess of expected electron events compared to the no-oscillation case.

The excess is given by the formula N_e/N_0 - 1 = (P_{ee} - 1) + r P_{e\mu}, where P_{ee} and P_{e\mu} are the oscillation probabilities and r is the ratio of the muon to electron neutrino fluxes. The excess is zero when \theta_{13} and \delta m^2 are both zero, but can receive contributions otherwise.
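
A toy evaluation of that formula, with made-up probabilities (only r ≈ 2, the approximate sub-GeV muon-to-electron flux ratio, is a standard number), shows how the two terms can partially cancel and leave a small excess:

```python
# Toy evaluation of the electron-event excess formula N_e/N_0 - 1 = (P_ee - 1) + r * P_emu.
# The probabilities below are made-up illustrative numbers; r ~ 2 is the approximate
# muon-to-electron flux ratio for sub-GeV atmospheric neutrinos.
r = 2.0
P_ee, P_emu = 0.95, 0.03   # hypothetical sub-leading oscillation probabilities

excess = (P_ee - 1.0) + r * P_emu
print(f"relative electron excess: {excess:+.3f}")   # +0.010, i.e. a ~1% excess
```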

Two kinds of matter effects take place during propagation. If one assumes a constant-density approximation, with a normal hierarchy, three terms can be written down, in which one can distinguish the theta_13 term, the delta-m^2 term, and their interference term. Each of the three effects can singly dominate. The different terms help fit the small electron excess in the sub-GeV and multi-GeV data.

The atmospheric three-neutrino analyses by the SK Collaboration (hep-ex/0604011) and by Schwetz, Tortola, and Valle (0808.2016) cannot be directly compared with Fogli's, because they do not include the two sub-leading solar terms, as they make the assumption of one-mass-scale dominance.

Sticking to his own analysis, Fogli continued by taking the two hints, from solar+KamLAND results on one side and atmospheric+CHOOZ+LBL on the other: together they indicate a 1.6 sigma discrepancy of theta_13 from zero. Combining all data, sin^2 theta_13 = 0.016 \pm 0.010; this is the result of 0806.2649. The results for the two angles together show their anticorrelation in the two simultaneous determinations.

Event 6 is rather recent: in December of last year Maltoni and Schwetz published 0812.3161, which includes a discussion of the preliminary Super-Kamiokande-II data. Using SK-I data they find at most a 0.5 sigma effect from atmospheric neutrinos plus CHOOZ data. This is weaker than Fogli's 0.9 sigma, but shows similar qualitative features.

Event 7: a discussion of the data of SK-I, SK-II, and maybe SK-III, even if all these things are not yet published. There exist ongoing three-flavor analyses, reported in recent PhD theses using SK I+II data (Wendell, Takenaga). Unfortunately, none of the above analyses allows both theta_13 and \delta m^2 to be larger than zero at the same time, and thus they do not include the interference effects linear in theta_13, which may play some non-trivial role.

Concerning the sub-GeV electron excess, the effects persist in phases I and II, but a slight excess of up-going multi-GeV events is present in SK-I and not in SK-II. This downward fluctuation may disfavor a non-zero value of theta_13, as noted by Maltoni and Schwetz.

Two SK-III distributions presented at Neutrino 2008 by J. Raaf show that a slight excess of up-going multi-GeV events seems to be back, together with a persisting excess in the sub-GeV data.

So the question is: SK-III shows both effects – can this be interpreted as something more than statistical fluctuations? This requires a refined statistical analysis with a complete set of data from Super-Kamiokande.

Currently, there is an impressive number of bins in energy and angle, and 66 sources of systematics; these need to be handled carefully. Such a level of refinement is difficult to reproduce outside the collaboration. In other words, independent analyses of atmospheric data searching for small effects at the level of 1 sigma are harder to perform now. So it will be important to see the next official SK data release, and especially the SK oscillation analysis, hopefully including a complete treatment of three-flavor oscillations with both parameters allowed to be larger than zero.

In the meantime, Fogli noted that he does not have compelling reasons to revise his 0.9-sigma hint of theta_13 coming from published SK-I data.

Finally, Event 8: this last one is very recent, and concerns the first MINOS results on electron-neutrino appearance. These preliminary results have been released too recently, and it would be unfair to anticipate results and slides that will be shown later in this workshop, but Fogli could not help noticing that the MINOS best fit for theta_13 sits around the CHOOZ limit, and is away from zero at 90% C.L.

If we see the glass as half-full, then we might have two independent 90% C.L. hints in favor of theta_13 > 0: one coming from Fogli's global analysis of 2008, and one coming from MINOS, which can be roughly symmetrized and approximated in the form \sin^2 \theta_{13} = 0.05 \pm 0.03. A combination at face value gives 0.02 \pm 0.01, a 2-sigma indication of a non-zero value of this important angle. In other words, the odds against a null theta_13 are now 20 to 1.
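
As a sanity check of that face-value combination (my own sketch, not Fogli's calculation), an inverse-variance weighted average of the two quoted hints indeed lands near 0.02 ± 0.01:

```python
# Inverse-variance weighted average of the two quoted hints on sin^2(theta_13):
# the 2008 global analysis (0.016 +- 0.010) and the MINOS-based estimate (0.05 +- 0.03).
values = [0.016, 0.05]
errors = [0.010, 0.03]

weights = [1.0 / e**2 for e in errors]
mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
err = (1.0 / sum(weights)) ** 0.5
print(f"combined sin^2(theta_13) = {mean:.3f} +- {err:.3f}")  # ~0.019 +- 0.009
```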

G.Testera:  Borexino

Borexino is a liquid scintillator detector. The active volume is filled with about 270 tonnes of liquid scintillator contained in a thin nylon vessel. The light emitted is seen by photomultiplier tubes. The outer volume is filled with the same organic material, but with a quencher added in the buffer region. Water is used as a shield, and tubes looking at Cherenkov light in the water serve as an active muon veto. Borexino is a simple detector, but in practice the radiopurity requirements are tough to comply with.

The physics goal is a real-time measurement of the flux and spectrum of solar neutrinos in the MeV and sub-MeV range. Why measure low-energy solar neutrinos? The LMA-MSW model predicts a specific behavior for the survival probability of the various types of neutrinos emitted by the Sun; the shape of the prediction as a function of energy shows a larger survival probability at lower energy.

All data before Borexino measured higher energies, so Borexino wants to measure the shape of the survival probability as a function of energy, going lower. The measurement can constrain additional oscillation models. If we assume that neutrinos oscillate and take the measured survival probability, we get the absolute neutrino flux, and we might be able to measure the CNO component of the neutrino flux; this can help constrain the solar models.

Borexino can also see antineutrinos (geo-neutrinos); at Gran Sasso this will be relatively easy, because the background from reactor antineutrinos is small. They need statistics – several years – to collect significant data; the signal-to-noise ratio provided by the apparatus is 1.2. The detector also has sensitivity to supernova neutrinos, and Borexino is thus entering the SNEWS community.

The results of Borexino will be complementary to others. They have been taking data since mid-March 2007, with about 450 days of live time so far. The detection process is neutrino elastic scattering on electrons. The scintillator has a high light yield of 500 photoelectrons per MeV, giving a high energy resolution and a low threshold; there is, however, no information on the direction of the neutrinos. The scintillator is fast, so the position can be reconstructed with time measurements. The different response to alpha and beta particles allows the two to be distinguished. The shape of the energy spectrum is the only signature they can use to recognize the signal.

The story of the cleanliness of Borexino spans 15 years of work: careful selection of construction materials, special procedures for fluid procurement, and scintillator and buffer purification during filling. The background from U and Th is very small – smaller than the initial goals – and the purity of the liquid scintillator is very high.

If there were only a neutrino signal, the simulation shows that the Beryllium-7 neutrino signal would be very well distinguishable: it has a flat spectrum with an upper edge at 350 keV. 14C sits at lower energy. 11C at higher energy cannot be eliminated; it can be tagged to some extent, but not completely removed. At still higher energy there is the signal from Carbon-10.

In 192 days of livetime there is a big Polonium peak and the edge of the Beryllium region, together with a contribution from Krypton. The data also indicate the presence of Bismuth-210. The rate of neutrinos from 7Be is 49 counts per day per 100 tons. They see an oscillation effect in these neutrinos, because without oscillations they would see 75 \pm 4 counts. The no-oscillation hypothesis is rejected at the 4-sigma level. This is the first real-time measurement of the oscillation of 7Be neutrinos.
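
A rough reconstruction of that 4-sigma statement (my own sketch, not the official Borexino analysis; the uncertainty assigned to the measured rate is an assumed, illustrative value):

```python
# Rough significance estimate for the Borexino 7Be deficit.
# Rates are in counts/(day * 100 t); the uncertainty on the measured rate (~5)
# is an assumed, illustrative value.
measured, sigma_measured = 49.0, 5.0      # measured rate, with an assumed total uncertainty
expected, sigma_expected = 75.0, 4.0      # no-oscillation expectation quoted in the talk

deficit = expected - measured
sigma = (sigma_measured**2 + sigma_expected**2) ** 0.5
print(f"deficit = {deficit:.0f} counts/(day*100 t), about {deficit/sigma:.1f} sigma")  # ~4 sigma
```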

The largest errors come from the fiducial mass ratio and from the detector response function; these amount to 6% each.

Neutrino interactions in the Earth could lead to the regeneration of electron neutrinos: the solar neutrino flux would be higher at night than during the day, due to the geometry. In the energy region of 7Be they expect a very small effect; a larger effect would be expected for the LOW solution, now excluded.

A new preliminary result: the day-night asymmetry for 7Be solar neutrinos. 422 days of live time are used for this. In the region where neutrinos contribute, there is no asymmetry seen.

Flux of Boron-8 neutrinos with a low threshold: Borexino can go lower in energy threshold, down to 2-3 MeV. After subtracting the muon contribution they see the oscillation of 8B neutrinos. Putting these together with the 7Be result, more points can be added to the survival probability plot, and they describe well the curve as a function of energy.

In conclusion, Borexino claims the first real-time detection of the 7Be solar neutrino flux.

M.Nakahata: Superkamiokande results in neutrino astrophysics.

Kamiokande, which ran from 1983 to 1996, was a tank 16 m high and 15.6 m in diameter, with more than a thousand large photomultiplier tubes. SK started in 1996: a 50,000-ton water tank, with a 32,000-ton photosensitive volume.

After the accident they took data as SK-II; then in 2006 came SK-III with new electronics, and since September 2008 it has been SK-IV. The original purpose of Kamiokande was the search for proton decay: protons could be thought to decay to a positron plus a neutral pion, but they wanted to measure different branching ratios, so they made a detector with large coverage.

Used as a neutrino telescope, its first advantage is directionality, provided by the imaging Cherenkov technique; the large photocollection efficiency is also useful for detecting low-energy neutrinos. A second advantage is energy information: the number of Cherenkov photons is proportional to the energy of the particle. A third is particle identification: from the diffuseness of the ring pattern they can distinguish electron from muon events, with a misidentification probability below 1% - very important when discussing atmospheric neutrinos.

The first solar neutrino plot at Kamiokande came from 450 days of exposure, with a threshold of E > 9.3 MeV: they saw an excess of neutrinos coming from the direction of the Sun, but could not say much about its size. Super-Kamiokande collected a much larger sample, 22,400 solar neutrino events (about 14.5 per day), yielding a very precise flux measurement, with a statistical accuracy of 1% and a systematic one of about 4%. The SK data gave the 8B flux as well as the \nu_\mu and \nu_\tau fluxes.

SK will measure the survival probability of solar neutrinos as a function of energy, pushing the threshold down to about 4 MeV, and look for the distortion of their spectrum.

From supernova SN1987A, Kamiokande observed 11 events in 13 seconds; additional events were seen by IMB and Baksan. Should a new supernova explode at 10 kpc, SK could directly measure the energy spectrum of the interactions, and the event rate would discriminate among models.

Adding Gadolinium to the water can reduce backgrounds, because neutron capture on Gd yields a gamma-ray cascade of about 8 MeV, time-correlated with the prompt signal (a delay of a few tens of microseconds). If the invisible-muon background can be reduced by a factor of five using this neutron tagging, then with 10 years of SK data there would be 33 signal events over a background of 27 in the 10 to 30 MeV energy window: they could thus see supernova relic neutrinos. But they must first study the effect of the added Gadolinium on water transparency, corrosion of the tank, et cetera.
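
To see why 33 signal events on top of 27 background events would qualify as an observation, here is a minimal back-of-the-envelope sketch; the simple counting significance used below is my own illustration, not the statistical treatment presented in the talk.

    import math

    signal = 33.0       # expected relic-neutrino events in 10 years (10-30 MeV window)
    background = 27.0   # expected residual background after neutron tagging

    # Naive counting significance: excess over background in units of its fluctuation
    naive_sigma = signal / math.sqrt(background)

    # Slightly more conservative: fluctuation of the total observed count
    conservative_sigma = signal / math.sqrt(signal + background)

    print(f"naive S/sqrt(B)   = {naive_sigma:.1f} sigma")
    print(f"S/sqrt(S+B)       = {conservative_sigma:.1f} sigma")

Either way the excess would be a strong one, in the 4-6 sigma range, which is why the Gadolinium upgrade is so attractive.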

The atmospheric neutrino anomaly in Kamiokande: the first evidence came from the ratio of muon to electron events. Data from 1983 to 1985 allowed them to measure this ratio, which came out at about 60% of the expectation; a paper was published in 1988. In 1994 they obtained a zenith-angle distribution for multi-GeV events. Super-Kamiokande then produced much better results, with sub-GeV electron-like and muon-like samples.

The oscillation hypothesis agreed very well with the observed data. The latest two-flavor oscillation analysis gives \Delta m^2 = 2.1 \times 10^{-3} eV^2, with \sin^2 2\theta consistent with 1.0 (maximal mixing).
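
For reference, the two-flavor analysis fits the data with the standard vacuum oscillation formula P(\nu_\mu \to \nu_\mu) = 1 - \sin^2 2\theta \sin^2(1.27 \Delta m^2 L / E), with \Delta m^2 in eV^2, L in km and E in GeV. The little sketch below just plugs in the quoted best-fit values; the baselines and the 1 GeV energy are illustrative choices of mine, not numbers from the talk.

    import math

    dm2 = 2.1e-3        # eV^2, best-fit value quoted in the talk
    sin2_2theta = 1.0   # maximal mixing

    def survival_prob(L_km, E_GeV):
        """Two-flavor muon-neutrino survival probability in vacuum."""
        return 1.0 - sin2_2theta * math.sin(1.27 * dm2 * L_km / E_GeV) ** 2

    # Down-going atmospheric neutrino: short baseline, essentially no oscillation.
    print(f"down-going, L=15 km, E=1 GeV : P = {survival_prob(15.0, 1.0):.2f}")

    # Up-going neutrinos cross the Earth (~13000 km): the oscillations are so
    # rapid in L/E that they average to 1 - 0.5*sin^2(2 theta) = 0.5, which is
    # the deficit seen in the upward muon-like sample.
    print(f"up-going, averaged           : P = {1.0 - 0.5 * sin2_2theta:.2f}")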

And that is all for today!

Neutrino Telescopes XIII March 8, 2009

Posted by dorigo in astronomy, cosmology, news, personal, physics, science, travel.
Tags: , , , ,
comments closed

The conference “Neutrino Telescopes” has reached its XIII edition. It is a very nicely organized workshop, held in Venice every year towards the end of winter or the start of spring. For me it is especially pleasant to attend, since the venue, Palazzo Franchetti (see picture below), is a ten-minute walk from my home: a nice change from my usual hour-long commute to Padova by train.

This year the conference will start on Tuesday, March 10th, and will last until Friday. I will be blogging from there, hopefully describing some new results heard in the several interesting talks that have been scheduled. Let me mention only a few of the talks, pasted from the program:

  • D. Meloni (University of Roma Tre)
    CP Violation in Neutrino Physics and New Physics
  • K. Hoffman (University of Maryland)
    AMANDA and IceCube Results
  • S. Enomoto (Tohoku University)
    Using Neutrinos to study the Earth
  • D.F. Cowen (Penn State University)
    The Physics Potential of IceCube’s Deep Core Sub-Detector
  • S. Katsanevas (Université de Paris 7)
    Toward a European Megaton Neutrino Observatory
  • E. Lisi (INFN, Bari)
    Core-Collapse Supernovae: When Neutrinos get to know Each Other
  • G. Altarelli (University of Roma Tre & CERN)
    Recent Developments of Models of Neutrino Mixing
  • M. Mezzetto (INFN, Padova)
    Next Challenge in Neutrino Physics: the θ13 Angle
  • M. Cirelli (IPhT-CEA, Saclay)
    PAMELA, ATIC and Dark Matter

The conference will close with a round table: here are the participants:

Chair: N. Cabibbo (University of Roma “La Sapienza”)
B. Barish (CALTECH)
L. Maiani (CNR)
V.A. Matveev (INR of RAS, Moscow)
H. Minakata (Tokyo Metropolitan University)
P.J. Oddone (FNAL)
R. Petronzio (INFN, Roma)
C. Rubbia (CERN)
M. Spiro (CEA, Saclay)
A. Suzuki (KEK)

Needless to say, I look forward to a very interesting week!

What’s hot around February 10, 2009

Posted by dorigo in astronomy, Blogroll, cosmology, internet, italian blogs, mathematics, news, physics, science.
Tags: , , , , ,
comments closed

For lack of interesting topics to blog about, I refer you to a short list of bloggers who have produced readable material in the last few days:

  • The always witty Resonaances has produced an informative post on Quirks.
  • My friend David Orban describes the recently instituted Singularity University.
  • Stefan explains other types of singularities, those you can find in your kitchen!
  • Dmitry has an outstanding post out today about the physics of turbulence, with four mini-pieces on the Reynolds number, viscosity, universality and intermittency. Worth a visit, if even just for the pics!
  • Marco discusses the long winter of the LHC. Sorry, it is in Italian.
  • Peter discusses the same issue in English.
  • Marni points out a direct explanation of the Pioneer anomaly with the difference between atomic clock time and astronomical time. Or, if you will, a change of the speed of light with time!

CMS and extensive air showers: ideas for an experiment February 6, 2009

Posted by dorigo in astronomy, cosmology, physics, science.
Tags: , , , , , , ,
comments closed

The paper by Thomas Gehrmann and collaborators that I cited a few days ago has inspired me to take a closer look at the problem of understanding the features of extensive air showers – the localized streams of high-energy secondary particles that originate when a very energetic proton or light nucleus hits the upper atmosphere.

Layman facts about cosmic rays

While the topic of cosmic rays, their sources, and their study is largely terra incognita to me -I only know the very basic facts, having learned them like most of you from popularization magazines-, I do know that a few of their features are not too well understood as of yet. Let me mention only a few issues below, with no fear of being shown how ignorant on the topic I am:

  • The highest-energy cosmic rays have no clear explanation in terms of their origin. A few events with energy exceeding $10^{20} eV$ have been recorded by at least a couple of experiments, and they are the subject of an extensive investigation by the Pierre Auger observatory.
  • There are a number of anomalies in their composition, their energy spectrum, and the make-up of the showers they develop. The data from PAMELA and ATIC are just two recent examples of things we do not understand well, and which might have an exotic explanation.
  • While models of their formation assume that the flux of primary hadrons is composed only of light nuclei -iron at most-, some data (for instance this study by the Delphi collaboration) seem to imply otherwise.

The paper by Gehrmann addresses in particular the latter point. There appears to be a failure in our ability to describe the development of air showers producing very large numbers of muons, and this failure might be due to modeling uncertainties, heavy nuclei as primaries, or the creation of exotic particles with muonic decays, in decreasing order of likelihood. For sure, if an exotic particle like the 300 GeV one hypothesized in the interpretation paper produced by the authors of the CDF study of multi-muon events (see the tag cloud on the right column for an extensive review of that result) existed, the Tevatron would not be the only place to find it: high-energy cosmic rays would produce it in sizable amounts, and the multi-muon signature from its decay in the atmosphere might end up showing up in those air showers as well!

Mind you, large numbers of muons are by no means a surprising phenomenon in high-energy cosmic ray showers. What happens is that a hadronic collision between the primary hadron and a nucleus of nitrogen or oxygen in the upper atmosphere creates dozens of secondary light hadrons. These in turn hit other nuclei, and the developing hadronic shower progresses until the hadrons fall below the energy required to create more secondaries. The created hadrons then decay, and in particular K^+ \to \mu^+ \nu_{\mu}, \pi^+ \to \mu^+ \nu_{\mu} decays will create a lot of muons.

Muons have a lifetime of 2.2 microseconds at rest, but if they are energetic enough, time dilation lets them travel many kilometers and reach the ground, and whatever detector we set there. In addition, muons are very penetrating: a muon needs just 52 GeV of energy to make it 100 meters underground, through the rock lying on top of the CERN detectors. Of course, air showers include not just muons, but electrons, neutrinos, and photons, plus protons and other hadronic particles. But none of these particles, except neutrinos, can make it deep underground. And neutrinos pass through unseen…
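
Both statements are easy to check with a back-of-the-envelope calculation. The sketch below (Python) computes the relativistic decay length of a muon and the energy it loses by ionization crossing 100 m of standard rock; the rock density and the 2 MeV cm^2/g energy-loss figure are textbook values I am assuming, not numbers taken from any CMS document.

    C = 3.0e8            # speed of light, m/s
    TAU = 2.2e-6         # muon lifetime at rest, s
    M_MU = 0.106         # muon mass, GeV

    def decay_length_km(E_GeV):
        """Mean decay length of a muon of energy E: gamma * c * tau."""
        gamma = E_GeV / M_MU
        return gamma * C * TAU / 1000.0

    # Ionization loss through 100 m of standard rock (assumed values):
    DEDX = 2.0           # MeV per g/cm^2, minimum-ionizing particle
    RHO_ROCK = 2.65      # g/cm^3, standard rock density
    depth_cm = 100.0 * 100.0
    energy_loss_GeV = DEDX * RHO_ROCK * depth_cm / 1000.0

    print(f"Decay length of a 10 GeV muon : {decay_length_km(10.0):.0f} km")
    print(f"Energy lost in 100 m of rock  : {energy_loss_GeV:.0f} GeV")

The ionization loss comes out at about 53 GeV, in nice agreement with the 52 GeV figure quoted above, while a 10 GeV muon already has a decay length of some 60 km: decay in flight is not what stops it.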

Now, if one reads the Delphi publication, as well as information from other experiments which have studied high-multiplicity cosmic-ray showers, one learns a few interesting facts. Delphi found a large number of events with so many muon tracks that they could not even count them! In a few cases, they could just quote a lower limit on the number of muons crossing the detector volume. One such event is shown on the picture on the right: they infer that an air shower passed through the detector by observing voids in the distribution of hits!

The number of muons seen underground is an excellent estimator of the energy of the primary cosmic ray, as shown by the Kascade collaboration result on the left (the abscissa is the logarithm of the energy of the primary cosmic ray, and the y axis the number of muons per square meter measured by the detector). But to compute the energy and composition of cosmic rays from the characteristics we observe on the ground, we need detailed simulations of the mechanisms creating the shower -and these simulations require an understanding of the physical processes governing the production of secondaries, which are known only to a certain degree. I will get back to this point; here I just mean to point out that a detector measuring the number of muons gets an estimate of the energy of the primary nucleus. The energy, but not the species!

As I was mentioning, the Delphi data (and that of other experiments, too) showed that there are too many high-muon-multiplicity showers. The graph on the right shows the observed excess at very high muon multiplicities (the points on the very right of the graph). This is a 3-sigma effect, and it might be caused by modeling uncertainties, but it might also mean that we do not understand the composition of the primary cosmic rays: yes, because if a heavier nucleus has a given energy, it usually produces more muons than a lighter one.
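
A simple way to see why, as a back-of-the-envelope argument of mine rather than something taken from the paper, is the superposition model: a nucleus of mass number A and energy E is treated as A independent nucleons, each of energy E/A. In a proton shower the number of muons grows less than linearly with energy, roughly as N_\mu \propto E^\beta with \beta \simeq 0.9 (an approximate textbook exponent). Then

N_\mu(A,E) \simeq A (E/A)^\beta = A^{1-\beta} E^\beta,

so at fixed energy an iron primary (A=56) yields about 56^{0.1} \simeq 1.5 times more muons than a proton.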

The modeling uncertainties are due to the fact that the very forward production of hadrons in a nucleus-nucleus collision is governed by QCD at very small energy scales, where we cannot calculate the theory to a good approximation. So we cannot really compute, with the precision we would like, how likely it is that a 1,000,000-TeV proton, say, produces a forward-going 1-TeV proton in its collision with a nucleus of the atmosphere. The energy distribution of the forward-produced secondaries is not well known, that is, and this is reflected in the uncertainty on the shower composition.

Enter CMS

Now, what does CMS have to do with all the above? Well, for one thing, last summer the detector was turned on in the underground cavern at Point 5 of the LHC, and it collected 300 million cosmic-ray events. This is a huge data sample, made possible by the large size of the detector and by the beautiful performance of its muon chambers (which, by the way, were designed by physicists of Padova University!). Such a large dataset already includes very high-multiplicity muon showers, and some of my collaborators are busy analyzing that gold mine: measurements of cosmic-ray properties are ongoing.

One might hope that the collection of cosmic rays will continue even after the LHC is turned on. I believe it will, but only during the short periods when there is no beam circulating in the machine. The cosmic-ray data thus collected are typically used to keep the system “warm” while waiting for more proton-proton collisions, but they will not provide an orders-of-magnitude increase in statistics with respect to what was already collected last summer.

The CMS cosmic-ray data can indeed provide an estimate of several characteristics of the air showers, but it will not be capable of providing results qualitatively different from the findings of Delphi -although, of course, it might provide a confirmation of the simulations, disproving the excess observed by that experiment. The problem is that very energetic events are rare -so one must actively pursue them, rather than collecting cosmic rays only when the machine is not in collider mode. But there is one further important point: since only muons are detected, one cannot really tell whether the simulation is tuned correctly, and one misses a critical additional piece of information: the amount of energy that the shower produced in the form of electrons and photons.

The electron and photon component of the air shower is a good discriminant of the nucleus which produced the primary interaction, as the plot on the right shows. It is in fact crucial information if one wants to rule out the presence of nuclei heavier than iron, or to pin down the composition of the primaries in terms of light nuclei. Since the number of muons in high-multiplicity showers is connected to the nuclear species as well, by determining both quantities one would really be able to understand what is going on. [In the plot, the quantity Y is shown as a function of the primary cosmic-ray energy; Y is the ratio of the logarithms of the numbers of detected muons and electrons. You can observe that Y is higher for iron-induced showers (the full black squares).]
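
To make the discriminating power of Y concrete, here is a minimal sketch that evaluates it for a proton and an iron primary in the same superposition picture used above. The normalizations and growth exponents (roughly E^{0.9} for muons, roughly linear for electrons) are crude assumptions of mine, meant only to show the direction of the effect, not to reproduce the plot.

    import math

    # Crude superposition-model scalings (assumed, illustrative only):
    # a nucleus of mass A and energy E behaves like A proton showers of energy E/A.
    BETA_MU = 0.90   # muon number grows ~ E^0.9 in a proton shower
    BETA_E  = 1.00   # electron number grows roughly ~ E in a proton shower
    N_MU_REF = 1e5   # muons in a reference proton shower (arbitrary normalization)
    N_E_REF  = 1e7   # electrons in the same reference shower (arbitrary)

    def shower(A):
        """Muon and electron numbers for a primary of mass A at fixed total energy."""
        n_mu = A * N_MU_REF * (1.0 / A) ** BETA_MU   # = A^(1-beta_mu) * N_MU_REF
        n_e  = A * N_E_REF  * (1.0 / A) ** BETA_E    # = A^(1-beta_e)  * N_E_REF
        return n_mu, n_e

    for name, A in (("proton", 1), ("iron", 56)):
        n_mu, n_e = shower(A)
        Y = math.log10(n_mu) / math.log10(n_e)
        print(f"{name:6s}: N_mu = {n_mu:9.0f}, N_e = {n_e:9.0f}, Y = {Y:.3f}")

With any reasonable choice of the exponents, iron comes out with a larger Y than a proton of the same energy, which is exactly the separation the plot exploits.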

Idea for a new experiment

The idea is thus already there, if you can add one plus one. CMS is underground. We need a detector at ground level to be sensitive to the “soft” component of the air shower - the one due to electrons and photons, which cannot punch through more than a meter of rock. So we may take a certain number of scintillation counters, layered alternately with lead sheets, all sitting on top of a thicker set of lead bricks, underneath which we may place some drift tubes or, even better, resistive plate chambers.

We can build a 20- to 50-square-meter detector this way with a relatively small amount of money, since the technology is really simple and we can even scavenge material here and there (for instance, we can use spare chambers of the CMS experiment!). Then we just build a simple coincidence logic between the resistive plate chambers, requiring that several parts of our array fire together at the passage of many muons, and send the trigger signal 100 meters down, where CMS would receive an “auto-accept” to read out the event regardless of the presence of a collision in the detector.
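
Since the whole experiment is only a loose idea at this point, the sketch below is just a toy illustration of the kind of majority-coincidence logic I have in mind: the array is split into pads, a pad fires if it records more than a handful of hits in a short time window, and the surface trigger is issued when enough pads fire simultaneously. All thresholds and the pad geometry are made-up numbers.

    # Toy majority-coincidence trigger for the hypothetical surface array.
    # A "hit" is (pad_index, time_ns); all thresholds below are illustrative only.

    from collections import defaultdict

    N_PADS = 16              # array divided into 16 RPC pads (assumed)
    HITS_PER_PAD = 5         # a pad "fires" with at least 5 hits in the window
    PADS_REQUIRED = 6        # majority: at least 6 pads must fire
    COINCIDENCE_NS = 100     # coincidence window, nanoseconds (assumed)

    def surface_trigger(hits, t0):
        """Return True if enough pads fire within the window starting at t0."""
        counts = defaultdict(int)
        for pad, t in hits:
            if t0 <= t < t0 + COINCIDENCE_NS:
                counts[pad] += 1
        fired_pads = sum(1 for n in counts.values() if n >= HITS_PER_PAD)
        return fired_pads >= PADS_REQUIRED

    # Example: a dense muon bundle lighting up 8 pads with 6 hits each near t = 40 ns
    bundle = [(pad, 40 + i) for pad in range(8) for i in range(6)]
    print(surface_trigger(bundle, t0=0))   # True -> send the auto-accept to CMS

In hardware this would of course be a few coincidence units rather than software, but the logic is the same.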

The latter is the most complicated part of the whole idea: modifying existing things is always harder than creating new ones. But it should not be too hard to read out CMS parasitically, and to collect those high-multiplicity showers at very low frequency. The readout of the ground-based electromagnetic calorimeter would then provide an estimate of the (local) electron-to-muon ratio, which is what we need to determine the mass of the primary nucleus.

If the above sounds confusing, it is entirely my fault: I have dumped here some loose ideas, with the aim of coming back to them when I need them. After all, this is a log: a Web log, but still a log of my ideas… But I do wish to investigate the feasibility of this project further. Indeed, CMS will for sure pursue cosmic-ray measurements with the 300M events it has already collected. And CMS does have spare muon chambers. And CMS does have plans to store them at Point 5… Why not just power them up and build a poor man’s trigger? A calorimeter might come later…
