
NeuTel 09: Oscar Blanch Bigas, update on Auger limits on the diffuse flux of neutrinos April 3, 2009

Posted by dorigo in astronomy, cosmology, news, physics, science.

With this post I continue the series of short reports on the talks I heard at the Neutrino Telescopes 2009 conference, held three weeks ago in Venice.

The Pierre Auger Observatory is a huge (3000 km^2) hybrid detector of ultra-energetic cosmic rays, i.e. ones having an energy above 10^18 eV. The detector is located near Malargüe, Argentina, at 1400 m above sea level.

There are four six-eyed fluorescence detectors: when the shower of particles created by a very energetic primary cosmic ray develops in the atmosphere, it excites nitrogen molecules, which emit energy as fluorescence light, collected by the telescopes. This amounts to a calorimetric measurement of the shower, since the number of particles in the shower measures the energy of the incident primary particle.

The main problem of the fluorescence detection method is statistics: fluorescence detectors have a reduced duty cycle, because they can only observe on moonless nights. That amounts to a 10% duty cycle. So they are complemented by a surface detector, which has a 100% duty cycle.

The surface detector is composed of water Cherenkov detectors on the ground, which detect the light with photomultiplier tubes. The signal is sampled as a function of the distance from the shower core. The energy measurement depends on a Monte Carlo simulation, so the method carries some systematic uncertainties.

The assembly includes 1600 surface detectors (red points in the map above), surrounded by four fluorescence detectors (shown by green lines). These study the high-energy cosmics: their spectra, their arrival directions, and their composition. The detector also has some sensitivity to ultra-high-energy neutrinos. A standard cosmic ray interacts at the top of the atmosphere and yields an extensive air shower with an electromagnetic component at the ground; but if the arrival direction of the primary is tilted with respect to the vertical, the e.m. component is absorbed before reaching the ground, so the shower arrives containing only muons. Neutrinos, which can penetrate deep into the atmosphere before interacting, instead produce showers with a significant e.m. component regardless of the angle of incidence.

The “footprint” is the pattern of firing detectors on the ground, and it encodes information on the angle of incidence. For tilted showers, the presence of an e.m. component is a strong indication of a neutrino-induced shower: an elongated footprint and a wide time structure of the signal are what tilted showers look like.

There is a second method to detect neutrinos, based on the so-called “Earth-skimming” mechanism: a neutrino interacts in the Earth, producing a charged lepton via a charged-current interaction, and the lepton produces a shower that can be detected above the ground. This channel has better sensitivity than neutrinos interacting in the atmosphere. It can be used for tau neutrinos, thanks to the early tau decay in the atmosphere: the distance of interaction is 500 km for a muon neutrino, 50 km for a tau neutrino, and 10 km for an electron neutrino. These figures apply to 1 EeV primaries. If you are unfamiliar with these ultra-high energies, 1 EeV = 1000 PeV = 1,000,000 TeV: this is roughly equivalent to the energy drawn in a second by a handful of LEDs.
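Just to make that comparison concrete, here is a two-line check (my own numbers: I assume five small LEDs at about 60 mW each):

```python
# Back-of-envelope check of the "handful of LEDs" comparison (my own
# numbers, not from the talk): convert 1 EeV to joules and compare
# with the power drawn by a few ordinary indicator LEDs.

EV_TO_JOULE = 1.602e-19  # J per eV

e_primary = 1e18 * EV_TO_JOULE   # 1 EeV in joules
led_power = 5 * 0.06             # five LEDs at ~60 mW each, in watts

print(f"1 EeV = {e_primary:.2f} J")                 # ~0.16 J
print(f"5 LEDs draw {led_power:.2f} J per second")  # ~0.30 J
```

Same order of magnitude, so the analogy holds up.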

Showers induced by emerging tau leptons start close to the detector and are very inclined. So one asks for an elongated footprint and for a shower front moving at the speed of light, using the available timing information. The background to such a signature is of the order of one event every 10 years. The most important drawback of the Earth-skimming channel is the large systematic uncertainty associated with the measurement.

Ideally, one would like to produce a neutrino spectrum, or an energy-dependent limit on the flux, but no energy reconstruction is available: the observed energy depends on the height at which the shower develops, and since this is not known for penetrating particles such as neutrinos, one can only quote an integrated flux limit. The limit lies in the energy range where GZK neutrinos should peak, but its value is an order of magnitude above the expected GZK flux. A differential limit in energy has a much poorer reach.

The figure below shows the result for the integrated flux of neutrinos obtained by the Pierre Auger observatory in 2008 (red line), compared with other limits and with expectations for GZK neutrinos.

Neutrino Telescopes day 2 notes March 12, 2009

Posted by dorigo in astronomy, cosmology, news, physics, science.

The second day of the “Neutrino Telescopes XIII” conference in Venice was dedicated to, well, neutrino telescopes. I have written down in stenographic fashion some of the things I heard, and I offer them to those of you who are really interested in the topic, without much editing. Besides, making sense of my notes takes quite some time, more than I have of it tonight.

So, I apologize for spelling mistakes (the ones I myself recognize post-mortem), in addition to the more serious conceptual ones coming from missed sentences or errors caused by my poor understanding of English, of the matter, or of both. Also, I apologize to those of you who would have preferred a more succinct, readable account: as Blaise Pascal once put it, “I have made this letter longer than usual, because I lack the time to make it short“.

NOTE: the links to slides are not working yet – I expect that the conference organizers will fix the problem tomorrow morning.

Christian Spiering: Astroparticle Physics, the European strategy ( Slides here)

Spiering gave some information about two new European bodies: ApPEC and ASPERA. ApPEC has two committees, which offer advice to national funding agencies and improve links and communication between the astroparticle physics community and the scientific programmes of organizations like CERN, ESA, etc. ASPERA was launched in 2006 to produce a roadmap for astroparticle physics in Europe, in close coordination with ASTRONET and with links to CERN strategy bodies.

The roadmapping exercise covers the science case, an overview of the status of the field, and some recommendations for convergence; second, a critical assessment of the plans and a calendar of milestones, coordinated with ASTRONET.

For dark matter and dark energy searches, Christian displayed a graph showing the reach of present-day experiments in WIMP cross section as a function of time. We are now at a sensitivity of some 10^-8 picobarns, and in 2015 we should reach cross sections of about 10^-10 picobarns. The reach depends on background, funding, and infrastructure. The idea is to go toward 2-ton-scale, zero-background detectors. Projects: Zeplin, Xenon, others.

In an ideal scenario, LHC observations of new particles at the weak scale could place these observations in a well-confined particle physics context, and direct detection would be supported by indirect signatures. In case of a discovery, smoking-gun signatures of direct detection, such as directionality and annual variations, would be measured in detail.

Properties of neutrinos: the direct mass measurement efforts are KATRIN and Troitsk. Double beta decay experiments are Cuoricino, Nemo-3, Gerda, Cuore, and others. The Klapdor-Kleingrothaus group claimed a signal corresponding to masses of a few tenths of an eV, whereas the normal hierarchy implies a mass of order 10^-3 eV for the lightest neutrino. Experiments are either in operation (Cuoricino, Nemo-3) or expected to start by 2010-2011. SuperNemo will start in 2014.

A large infrastructure for proton decay searches is advised. For charged cosmic rays, depending on which part of the spectrum one looks at, there are different kinds of physics contributing and worth exploring.

The case for Auger-North is strong: high-statistics astronomy with reasonably fast data collection is needed.

For high-energy gamma rays, the situation has seen enormous progress over the last 15 years, mostly thanks to imaging atmospheric Cherenkov telescopes (IACTs): Whipple, Hegra, CAT, Cangaroo, Hess, Magic, Veritas; there are also wide-angle devices. Among existing air Cherenkov telescopes, Hess and Magic are running, and very soon Magic will upgrade to Magic-II. Whipple runs a monitoring telescope.

There are new plans for MACE in India, something between Magic and Hess. CTA and AGIS are in their design phase.

Aspera’s recommendations: the priority of VHE gamma astrophysics is CTA. They recommend design and prototyping of CTA, selection of sites, and proceeding decidedly towards the start of deployment in 2012.

For point neutrino sources, there has been tremendous progress in sensitivity over the last decade: a factor of 1000 in flux sensitivity within 15 years. IceCube will deliver what it has promised by 2012.

For gravitational waves, there are LISA and VIRGO. The frequency band probed by LISA is around 10^-2 Hz; VIRGO will cover 100-10000 Hz. The reach is of several to several hundred sources per year. The Einstein Telescope, an underground gravitational-wave detector, could access thousands of sources per year; its construction would start in 2017. The conclusions: Einstein is the long-term future project of ground-based gravitational-wave astronomy in Europe. A decision on funding will come after the first detections with enhanced LIGO and VIRGO, most likely after collecting about a year of data.

In summary, the budget will increase by a factor of more than two in the next decade. Km3net, the megaton detector, CTA, and ET will be the experiments taking the largest share. We are moving into regions with a high discovery potential, with an accelerated increase of sensitivity in nearly all fields.

K.Hoffmann, Results from IceCube and Amanda, and prospects for the future ( slides here)

IceCube will become the first instrumented cubic-kilometer neutrino telescope. Amanda-II consists of 677 optical modules embedded in the ice at depths of 1500-2000 m; it has been a testbed for IceCube and for deploying optical modules. IceCube has been under construction for the last several years: strings of photomultiplier tubes have been deployed in the ice, and 59 of them are operating.

The rates: IC40 collects 110 neutrino events per day. They are getting close to 100% live time: 94% in January. IceCube has the largest effective area for muons, thanks to the long track lengths. The energy sensitivity is in the TeV-PeV range.

The ice properties are important to understand. A dust logger measures the dust concentration, which is connected to the attenuation length of light in ice. There is a thick layer of dust sitting at a depth of 2000 m, clear ice above it, and very clear ice below. They have to understand the light yield and propagation well.

Of course one of the most important parameters is the angular resolution. As the detector got larger, it improved: one of the more exciting things this year was to see the point spread function peak at less than one degree for muons with long track lengths.

Seeing the Moon is always reassuring for a telescope. They did it: the shadow of the Moon in cosmic rays is a >4 sigma effect.

With the waveforms they record in IceCube, energy reconstruction exploits the fact that at these energies muons are no longer minimum-ionizing: they reconstruct the energy from the number of photons emitted along the track. Some energy resolution can be achieved, and there is progress in understanding how to reconstruct energy.

First results from point-source searches: the 40-string configuration data will be analyzed soon. Point sources are sought with an unbinned likelihood search, which takes an energy variable into account, since point sources are expected to have a harder energy spectrum than atmospheric neutrinos. From 5114 neutrino candidates in 276 days, they found one hot spot in the sky, with a significance of about 2.2 sigma after accounting for the trial factor. Next year there will be variables less sensitive to the dust model, so they might be able to say more about that spot soon.
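To see how such a trial factor deflates a significance, here is a toy calculation; the pre-trial p-value and the effective number of independent sky bins below are made-up numbers, chosen only to illustrate the mechanics of the correction, not IceCube's actual values:

```python
# Toy trial-factor correction: a hot spot that looks very significant
# pre-trial is much less so once the number of searched sky positions
# is accounted for. All numbers here are illustrative assumptions.
from scipy.stats import norm

p_pretrial = 1e-5  # hypothetical pre-trial p-value of the hottest spot
n_trials = 1500    # hypothetical effective number of independent sky bins

p_posttrial = 1 - (1 - p_pretrial) ** n_trials
sigma = norm.isf(p_posttrial)  # one-sided significance in Gaussian sigmas

print(f"post-trial p = {p_posttrial:.3f}, i.e. {sigma:.1f} sigma")
```

With these inputs the post-trial significance comes out around 2.2 sigma: a reminder of how quickly a "discovery" melts when one has looked in many places.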

For seven years of Amanda data, with 3.8 years of livetime, the hottest spot has a significance of 3.3 sigma. With one year of data, IceCube-22 will already be more sensitive than Amanda. IceCube and Antares are complementary, since IceCube looks at northern declinations and Antares at southern declinations. The point-source flux sensitivity is down to 10^-11 GeV cm^-2 s^-1.

For GRBs, one can use a triggered search, which is an advantage; the latest results give a limit from 400 bursts. From IceCube-22, an unbinned search similar to the point-source one gives an expected exclusion power of 10^-1 GeV cm^-2 (in E^2 dN/dE units) over most of the energy range.

During the naked-eye GRB of March 19, 2008, the detector was in test mode, with only 9 of 22 strings taking data. Bahcall's prediction has the flux peaking at 10^6 GeV at the 10^-1 level, but the limit found is 20 times higher.

Finally, they are looking for WIMPs. A search by the 22-string IceCube, with 104 days of livetime, was recently sent for publication. They can reach well down in cross section.

Atmospheric neutrinos are also a probe for violations of Lorentz invariance, possibly arising from quantum gravity effects. They look for a change in the expected flavor oscillation pattern: depending on where they are produced, atmospheric neutrinos traverse more or less of the core of the Earth, so one gets a neutrino beam with different baselines, and a Lorentz-violating term would make the oscillation probability deviate from the standard expectation, with an energy-dependent oscillation parameter. The survival probability depends on energy; assuming maximal mixing, their sensitivity reaches down to one part in 10^28.
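For the curious, here is a minimal sketch (mine, not from the talk) of why atmospheric neutrinos are such a good probe: the standard oscillation phase shrinks as L/E, while a Lorentz-violating term grows as L*E, so the two separate cleanly at high energy. The parameter value is purely illustrative:

```python
# Two-flavor nu_mu survival with an added Lorentz-violating (LV) phase.
# Simplified sketch: the two phases are just summed, maximal mixing is
# assumed, and DELTA_LV is an illustrative dimensionless splitting.
import numpy as np

DM2 = 2.5e-3      # atmospheric mass splitting, eV^2
DELTA_LV = 1e-27  # hypothetical LV velocity-splitting parameter

def p_survival(E_gev, L_km, delta_lv=0.0):
    """nu_mu survival probability, two flavors, sin^2(2theta) = 1."""
    phase_mass = 1.27 * DM2 * L_km / E_gev       # standard term ~ L/E
    phase_lv = 2.53e18 * delta_lv * L_km * E_gev  # LV term ~ L*E, in rad
    return 1.0 - np.sin(phase_mass + phase_lv) ** 2

# Earth-diameter baseline: the LV phase only matters at high energy.
for E in (10.0, 1e3, 1e5):  # GeV
    print(f"E = {E:.0e} GeV: P = {p_survival(E, 12700.0, DELTA_LV):.4f}")
```

With an Earth-crossing baseline and ~100 TeV neutrinos, a splitting of order 10^-27 to 10^-28 gives an O(1) phase, which is where the quoted sensitivity comes from.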

In the future they would like to see a high-energy extension. Ice is the only medium where one can see a coherent radio signal, an optical one, and an acoustic one too. The past season was very successful, with the addition of 19 new strings. Many analyses of the 22-string configuration are complete. Analysis techniques are being refined to exploit the detector size, energy threshold, and technology used. Work is underway to develop the technology to build a GZK-scale neutrino detector after IceCube is complete.

Vincenzo Flaminio, Results from Antares ( slides here)

Potential sources of galactic neutrinos are supernova remnants, pulsars, and microquasars; extragalactic ones are gamma-ray bursts and active galactic nuclei. A by-product of Antares is an indirect search for dark matter; results are not ready yet.

Neutrinos from supernova remnants: these act as particle accelerators, and can give hadrons, and gammas from neutral pion decays. Possible sources are those found by Auger, or for example the TeV photons coming from molecular clouds.

Antares is an array of photomultiplier tubes that look at the Cherenkov light produced by muons crossing the detector. The site is in the Mediterranean, off the southern coast of France; the galactic center is visible 75% of the time. The collaboration comprises 200 physicists from many European countries. The control room in Toulon is more comfortable than the Amanda site (and this wins the understatement prize of the conference).

The depth in water is 2500 m. All strings are connected via cables on the seabed, and a 40 km long electro-optical cable connects to the shore. The time resolution is monitored by a LED beacon in each detector storey. A sketch of the detector is shown below.

Deployment started in 2005, the first line was installed in 2006, and the detector was finished one year ago. In addition there is an acoustic storey and several monitoring instruments. Biologists and oceanographers are interested in what is done, not just neutrino physicists.

The detector positioning is an important point, because the lines move with the sea currents. A large number of transmitters are installed along the lines, and their information is used to reconstruct the precise position of the lines minute by minute.

They collect triggers at a 10 Hz rate with 12 lines. They detected 19 million muons with the first 5 lines, and 60 million with the full detector.

The first physics analyses are ongoing. They select up-going neutrinos, thereby avoiding the poor signal-to-noise ratio against down-going atmospheric muons. The rate is of the order of two per day in the multi-line configuration.

Conclusions: Antares has successfully reached the end of construction phase. Data taking is ongoing, analyses in progress on atmospheric muons and neutrinos, cosmic neutrino sources, dark matter, neutrino oscillations, magnetic monopoles, etcetera.

David Saltzberg, Overview of the Anita experiment ( slides here)

Anita flies at 120,000 ft above the ice. It is the eyepiece of the telescope; the objective is the huge amount of ice of Antarctica. The detection principle, the Askaryan effect, was tested at SLAC with 8 metric tons of ice: one observes radio pulses from the ice, a coherent wake-field radio signal that rises and falls in less than a nanosecond, due to its Cherenkov nature. The measured field does track the number of particles in the shower, so one can count them. The signal is 100% linearly polarized. The wavelength is bigger than the size of the shower, so the emission is coherent: at a PeV, more radio quanta are emitted than optical ones.

They will use this at very high energy, looking for GZK-induced neutrinos: the GZK process converts protons into neutrinos within about 50 Mpc of the sources.

The energy is at the level of 10^18 eV or higher, and the proper time of the journey is 50 milliseconds: the longest-baseline neutrino experiment possible.

Anita has GPS antennas for position and for orientation, which needs a fraction-of-a-degree resolution. It is solar powered, and the antennas are pointed down by 10 degrees.

This 50-page document describes the instrument.

Lucky coincidences: 70% of the world’s fresh water is in Antarctica, and it is the quietest radio place. The place selects itself, so to speak.

They made a flight with a live time of 17.3 days, but this one never flew above the thickest ice, which is where most of the signal should come from.

The Askaryan signal gets distorted by the antenna response, the electronics, and thermal noise. The triggering works like any multi-level trigger: L1 requires sufficient energy in one antenna, and the same for its neighbors; L2 forms coincidences between adjacent L1 triggers; and L3 brings the rate down to 5 Hz from a starting point of 150 kHz.
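Here is a toy version of such a trigger cascade (my reconstruction of the general idea, not ANITA code; thresholds and numbers are arbitrary):

```python
# Toy multi-level trigger: L1 fires on sufficient power in a single
# antenna, L2 requires a coincidence of adjacent L1 antennas. The real
# cascade cuts the rate from ~150 kHz down to a few Hz.
def l1_triggers(antenna_powers, threshold):
    """Indices of antennas whose received power exceeds threshold."""
    return [i for i, p in enumerate(antenna_powers) if p > threshold]

def l2_trigger(l1_hits):
    """True if any two adjacent antennas both fired at L1."""
    hits = set(l1_hits)
    return any(i + 1 in hits for i in hits)

# Example: a coherent pulse lights up antennas 4 and 5 above the noise.
powers = [0.8, 1.1, 0.9, 1.0, 3.2, 2.9, 1.0, 0.7]  # arbitrary units
print(l2_trigger(l1_triggers(powers, threshold=2.0)))  # True
```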

They put a transmitter underground to produce pulses to be detected. Cross-correlation between antennas provides interferometry, and gives the position of the source. The resolution obtained on elevation is an amazing 0.3 degrees, and in azimuth it is 0.7 degrees. The ground pulsers make even very small effects stand out: even a 0.2 degree tilt of the detector can be spotted by looking at the errors in elevation as a function of azimuth.
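The cross-correlation pointing idea fits in a few lines; this is a sketch with assumed geometry and sampling rate, not the actual ANITA reconstruction:

```python
# Plane-wave pointing from two antennas: the arrival-time difference,
# found from the peak of the cross-correlation, fixes the arrival
# angle via dt = (d/c) * sin(angle). All numbers are assumptions.
import numpy as np

C = 0.2998  # m/ns, speed of light
D = 1.0     # m, assumed antenna separation
FS = 2.6    # samples/ns, assumed digitizer rate

rng = np.random.default_rng(0)
t = np.arange(256)
pulse = np.exp(-0.5 * ((t - 100) / 2.0) ** 2)  # ns-scale impulse
true_delay = 8                                 # samples
wave_a = pulse + 0.05 * rng.standard_normal(t.size)
wave_b = np.roll(pulse, true_delay) + 0.05 * rng.standard_normal(t.size)

lags = np.arange(-t.size + 1, t.size)
xcorr = np.correlate(wave_b, wave_a, mode="full")
dt_ns = lags[np.argmax(xcorr)] / FS  # measured delay in ns

angle = np.degrees(np.arcsin(np.clip(dt_ns * C / D, -1, 1)))
print(f"delay {dt_ns:.2f} ns -> arrival angle {angle:.1f} deg")
```

With many antenna pairs, the over-constrained fit is what pushes the pointing resolution down to fractions of a degree.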

First pass of the data analysis: 8.2M hardware triggers, 20,000 of which point well to the ice. After requiring up-coming plane waves, isolated from camps and other events, a few events remain. These could be some residual man-made noise. Background estimates: thermal noise, which is well simulated and gives less than one event after all cuts, and anthropogenic impulsive noise, like Iridium phones, spark plugs, and discharges from structures.

Results: having seen zero vertically-polarized events surviving the cuts, they set constraints on GZK production models: the best result to date in the energy range from 10^10 to 10^13 GeV.

Anita 2 collected 27 million better-quality triggers, over deeper ice, in 30 days afloat; the data are still to be analyzed. Anita 1 is doing a second-pass deep analysis of its data. Anita 2 has better data, and a factor of 5-10 more GZK sensitivity is expected from it.

Sanshiro Enomoto, Using neutrinos to study the Earth: Geoneutrinos. ( slides here)

Geoneutrinos are generated in the beta decay chains of natural isotopes (U, Th, K), which all yield antineutrinos. With an organic scintillator, they are detected through the inverse beta decay reaction, which yields a neutron and a positron. The threshold is at 1.8 MeV: uranium and thorium contribute in this energy range, while the potassium yield lies below it. Only U-238 can be seen.
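The 1.8 MeV figure follows from kinematics alone; a two-line check with standard particle masses (my arithmetic, not from the talk):

```python
# Inverse beta decay (anti-nu_e + p -> n + e+) threshold from the
# nucleon and positron masses alone.
M_P, M_N, M_E = 938.272, 939.565, 0.511  # MeV

e_threshold = ((M_N + M_E) ** 2 - M_P ** 2) / (2 * M_P)
print(f"IBD threshold: {e_threshold:.3f} MeV")  # ~1.806 MeV
```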

Radiogenic heat dominates the Earth’s energetics. The measured terrestrial heat flow is 44 TW. The only direct geochemical probes are limited: the deepest borehole reaches only 12 km, and rock samples go down to 200 km underground. Heat release from the surface peaks off the American Pacific coast and in the south Indian Ocean. The estimate of the radioactive heat is 20 TW: 8 from U, 8 from Th, 3 from K. Core heat flow from solidification and other processes is estimated at 5-15 TW, and secular cooling at 18±10 TW.

KamLAND has seen 25 events above backgrounds, consistent with expectations.

I did not take further notes on this talk, but I was impressed by some awesome plots of Earth planispheres with all the sources of neutrino backgrounds, meant to figure out the best place for a detector studying geo-neutrinos. Check the slides for them…

Michele Maltoni, synergies between future atmospheric and long-baseline neutrino experiments ( slides here)

A global six-parameter fit of neutrino parameters was shown, including solar, atmospheric, reactor, and accelerator neutrinos, but not SNO-III yet. There is a small preference for non-zero theta_13, coming entirely from the solar sector; as pointed out by G. Fogli, we do not find a non-zero theta_13 angle from atmospheric data. All we can do is point out that there might be something interesting, and suggest that experiments do their own analyses fast.

The question is: in this picture, where many experiments contribute, is there space left for atmospheric neutrinos ? What is the role of atmospheric neutrino measurements ? Do we need them at all ?

At first sight, there is not much left for atmospheric neutrinos: the mass determination is dominated by MINOS, and theta_13 is dominated by CHOOZ. Atmospheric data dominate the determination of the mixing angle and have the highest statistics, but with the coming of the next generation of experiments this is going to change. There is a symmetry in the sensitivity shape of the other experiments to some of the parameters. On the other hand, when you include atmospheric data, the symmetry in theta_13 is broken, which distinguishes between normal and inverted hierarchy.

Atmospheric data also help the determination of the octant of \sin^2 \theta_{23} and of \Delta m^2_{31}. Also, the introduction of atmospheric data introduces a modulation in the \delta_{CP} - \sin \theta_{13} plane. Will this usefulness continue in the future ?

Sensitivity to theta_13: apart from the hints mentioned so far, atmospheric neutrinos can observe theta_13 through matter (MSW) effects. In practice, the sensitivity is limited by statistics: at E = 6 GeV the atmospheric flux is already suppressed, and a background of \nu_e \to \nu_e events strongly dilutes the \nu_\mu \to \nu_e signal. Also, the resonance occurs either for neutrinos or for antineutrinos, but not for both.

As far as resolution goes, megaton detectors are still far in the future, but long-baseline experiments are starting now.

One concludes that the sensitivity to theta_13 is not competitive with dedicated LBL and reactor experiments.

Completely different is the situation for other properties, since the resonance can be exploited once theta_13 is measured: there is a resonant enhancement of neutrino (antineutrino) oscillations for a normal (inverted) hierarchy, mainly visible at high energy, above 6 GeV. The effect can be observed if the detector can discriminate charge or, if no charge discrimination is possible, if the number of neutrinos and antineutrinos is different.
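To see where that ">6 GeV" comes from, here is a rough estimate of the MSW resonance energy for neutrinos crossing the Earth (standard formulas, my own numbers):

```python
# MSW resonance: E_res = dm^2 * cos(2*theta_13) / (2 * sqrt(2) G_F N_e).
# V_PER_RHO is the matter potential sqrt(2) G_F N_e per unit Y_e*rho.
DM2 = 2.5e-3        # eV^2, atmospheric splitting
COS2TH13 = 1.0      # cos(2*theta_13) ~ 1 for small theta_13
V_PER_RHO = 7.6e-14  # eV per (g/cm^3), with Y_e the electrons per nucleon

def e_resonance(rho_gcc, y_e=0.5):
    """Resonance energy in GeV for matter of density rho (g/cm^3)."""
    v = V_PER_RHO * y_e * rho_gcc          # matter potential in eV
    return DM2 * COS2TH13 / (2 * v) / 1e9  # eV -> GeV

print(f"mantle (~4.5 g/cm^3): {e_resonance(4.5):.1f} GeV")   # ~7 GeV
print(f"core   (~11 g/cm^3):  {e_resonance(11.0):.1f} GeV")  # ~3 GeV
```

For mantle densities the resonance indeed sits around 7 GeV, dropping to a few GeV for core-crossing trajectories.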

The sensitivity to the hierarchy depends on charge discrimination for muon neutrinos. Sensitivity to the octant: in the low-energy region (E<1 GeV), for theta_13 = 0, there is an excess or deficit of \nu_e flux depending on the side on which theta_23 lies. Otherwise, there are lots of oscillations, but the effect persists on average, and it is present for both neutrinos and antineutrinos. At high energy, E>3 GeV, for non-zero theta_13 the MSW resonance produces an excess of electron-neutrino events; the resonance occurs only for one kind (neutrinos or antineutrinos).

So in summary, one can detect many features with atmospheric neutrinos, but only with some particular characteristics of the detector (charge discrimination, energy resolution…).

Without atmospheric data, only K2K can say something on the neutrino hierarchy for low theta_13.

LBL experiments have poor sensitivity due to parameter degeneracies. Atmospheric neutrinos contribute in this case. The sensitivity to the octant is almost completely dominated by atmospheric data, with only minor contributions by LBL measurements.

One final comment: there might be hints of the neutrino hierarchy in high-energy data. If theta_13 is really large, there can be some sensitivity to the neutrino mass hierarchy. So the idea is to equip a part of the detector with increased photo-coverage, and use the rest of the mass as a veto: the goal is to lower the energy threshold as much as possible, to gain sensitivity to neutrino parameters with large statistics.

Atmospheric data come for free in any long-baseline neutrino detector: ATM and LBL provide complementary information on neutrino parameters, in particular on the hierarchy and the octant degeneracy.

Stavros Katsanevas, Toward a European megaton neutrino observatory ( slides here)

Underground science: interdisciplinary potential at all scales. Galactic supernova neutrinos, galactic neutrinos, SN relics, solar neutrinos, geo-neutrinos, dark matter, cosmology -dark energy and dark matter.

Laguna is aimed at defining and realizing this research programme in Europe. It includes a majority of the European physicists interested in the construction of very massive detectors realized in one of three liquid technologies: water, liquid argon, and liquid scintillator.

Memphys, Lena, Glacier. Where could we put them ? The muon flux goes down with the overburden, so one has to examine the sites by their depth. In Frejus there is a possibility to put a detector between the road and the railway tunnels. The Frejus rock is neither hard nor soft; hard rock can become explosive because of stresses, and is not good. Another site is Pyhasalmi in Finland, but there the rock is hard.

Frejus is probably the only place where one can put water Cherenkov detectors. For liquid argon, we have ICARUS (hopefully starting data taking in May) and others (LANNDD, GLACIER, etc.). Glacier is a 70 m tank with several novel concepts: a safe LNG tank, of the kind developed for many years by the petrochemical industry. R&D includes readout systems and electronics, safety, HV systems, and LAr purification. One must also think about getting an intermediate-scale detector.

The physics scope is a complementary program: Memphys has a lot of reach in searches for the positron-pizero decay of the proton, while liquid argon is better for the kaon modes. Proton lifetime expectations are at the level of 10^36 years.

By 2013-2014 we will know whether \sin^2 \theta_{13} is larger than zero.

The European megaton detector community (three liquids), in collaboration with its industrial partners, is currently addressing common issues (sites, safety, infrastructures, non-accelerator physics potential) in the context of LAGUNA (EU FP7). Cost estimates will be ready by July 2010.

David Cowan, The physics potential of Icecube’s deep core sub-array ( slides here)

A new sub-array in IceCube, called deep core (ICDC), was originally conceived as a way to improve the sensitivity to WIMPs. Denser sub-arrays lower the energy threshold, giving one order of magnitude decrease in the low-energy reach. There are six special strings plus seven nearby IceCube strings. The vertical spacing is 7 meters, with 72-meter horizontal inter-string spacing: ten times the density of IceCube.

The effective scattering length in deep ice, which is very clear, is longer than 40 meters. This gives a better possibility to do a calorimetric measurement.

The deep core is at the bottom center of IceCube. The top modules in each string are used as an active veto against backgrounds from down-going muon events; on the sides, three layers of IceCube strings also provide a veto. These beat down the cosmic-ray background a lot.

The ICDC veto algorithms: one runs online, and finds the event light intensity, its weighted center of gravity, and its time. They do a number of things and come up with a 1:1 S/N ratio. So ICDC improves the sensitivity to WIMPs, to neutrino sources in the southern sky, and to oscillations. For WIMPs, annihilation can occur in the center of the Earth or of the Sun: annihilations to b-bbar or tau-tau pairs give soft neutrinos, while those into W boson pairs yield hard ones. This way, they extend the reach to masses below 100 GeV, at cross sections of 10^-40 cm^2.
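A toy version of a containment veto in this spirit (my own simplification, with made-up coordinates and cuts, not the ICDC algorithm):

```python
# Toy containment veto: an event whose earliest hit lies in the top or
# side veto region is tagged as down-going muon background. Numbers
# (fiducial radius, depth cut) are assumptions, not detector values.
def is_contained(hits, z_top=-1800.0, r_fiducial=125.0):
    """hits: list of (time, x, y, z) in ns and meters, z negative
    downward from the surface. True if the earliest hit starts inside
    the assumed fiducial deep-core volume."""
    t0, x0, y0, z0 = min(hits)  # earliest hit (tuples sort by time)
    inside_radius = (x0 ** 2 + y0 ** 2) ** 0.5 < r_fiducial
    below_dust = z0 < z_top     # deep, clear ice only
    return inside_radius and below_dust

# A muon entering from above leaves its first hit high up: vetoed.
print(is_contained([(0.0, 10.0, 5.0, -1200.0), (50.0, 12.0, 6.0, -1900.0)]))
```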

In conclusion, ICDC can analyze data at lower neutrino energy than previously thought possible. It improves overlap with other experiments. It provides for a powerful background rejection, and it has sufficient energy resolution to do a lot of neutrino oscillation studies.

Kenneth Lande, Projects in the US: a megaton detector at Homestake ( slides here)

DUSEL is at Homestake, in South Dakota. There are four water Cherenkov tanks in the design. Nearby is the old site of the chlorine experiment. The shafts are a km apart.

DUSEL will be an array of 100-150 kT fiducial mass Cherenkov detectors, at 1300 km distance from FNAL. The beam power goes from 0.7 MW to 2.0 MW as the project proceeds. Eventually 100 kT of argon will be added. The picture below shows a cutaway view of the facility.

The accelerator-based goals are theta_13, the neutrino mass hierarchy, and CP violation through delta_CP. The non-accelerator program includes studies of proton decay, relic SN neutrinos, prompt SN neutrinos, atmospheric neutrinos, and solar neutrinos. They could build tanks up to 70 m wide, but settled on 50-60 m. The plan is to build three modules.

Physics-wise, the FNAL beam has oscillated and disappeared at an energy around 4 GeV. The rate is of 200,000 CC events per year assuming 2 MW power (raw events, no oscillation). The appearance of electron neutrinos and antineutrinos as a function of energy gives the oscillation parameters and the mass hierarchy.

The reach in theta_13 is below 10^-2. For nucleon decay, they are looking in the range of 10^34 years: 300 kT over 10 years means 10^35 proton-years. They are also sensitive to the K-nu decay mode, at the level of 8×10^33.

DUSEL can choose the overburden. A deep option can go deeper than Sudbury.

US power reactors are far from Homestake: the typical distance is 500 km, and the neutrino flux from reactors is 1/30 of that at SK.

For a SN in our galaxy they expect about 100,000 events in 10 seconds. For a SN in M31 they expect about 10-15 events in a few seconds.

Detector construction: excavation, installation of a water-tight liner… The financial timetable is uncertain. At the moment water is being pumped out of the mine. Rock studies can start in September.

And that would be all for today… I heard many other talks, but cannot bring myself to comment on those. Please check the conference site at http://neutrino.pd.infn.it/NEUTEL09/ for the slides of the other talks!

CMS and extensive air showers: ideas for an experiment February 6, 2009

Posted by dorigo in astronomy, cosmology, physics, science.

The paper by Thomas Gehrmann and collaborators I cited a few days ago has inspired me to have a closer look at the problem of understanding the features of extensive air showers - the localized stream of secondary particles originated when a very energetic proton or light nucleus hits the upper atmosphere.

Layman facts about cosmic rays

While the topic of cosmic rays, their sources, and their study is largely terra incognita to me -I only know the very basic facts, having learned them like most of you from popularization magazines-, I do know that a few of their features are not too well understood as of yet. Let me mention only a few issues below, with no fear of showing how ignorant I am on the topic:

  • The highest-energy cosmic rays have no clear explanation in terms of their origin. A few events with energy exceeding 10^20 eV have been recorded by at least a couple of experiments, and they are the subject of an extensive investigation by the Pierre Auger observatory.
  • There are a number of anomalies in their composition, their energy spectrum, and the makeup of the showers they develop. The data from PAMELA and ATIC are just two recent examples of things we do not understand well, and which might have an exotic explanation.
  • While models of their formation assume that only light nuclei -iron at most- compose the flux of primary hadrons, some data (for instance this study by the Delphi collaboration) seem to imply otherwise.

The paper by Gehrmann addresses in particular the latter point. There appears to be a failure in our ability to describe the development of air showers producing very large numbers of muons, and this failure might be due to modeling uncertainties, heavy nuclei as primaries, or the creation of exotic particles with muonic decay, in decreasing order of likelihood. For sure, if an exotic particle like the 300 GeV one hypothesized in the interpretation paper produced by the authors of the CDF study of multi-muon events (see the tag cloud on the right column for an extensive review of that result) existed, the Tevatron would not be the only place to find it: high-energy cosmic rays would produce it in sizable amounts, and the observed multi-muon signature from its decay in the atmosphere might end up showing in those air showers as well!

Mind you, large numbers of muons are by no means a surprising phenomenon in high-energy cosmic-ray showers. What happens is that a hadronic collision between the primary hadron and a nucleus of nitrogen or oxygen in the upper atmosphere creates dozens of secondary light hadrons. These in turn hit other nuclei, and the hadronic shower develops until the hadrons fall below the energy required to create more secondaries. The created hadrons then decay, and in particular K^+ \to \mu^+ \nu_{\mu} and \pi^+ \to \mu^+ \nu_{\mu} decays create a lot of muons.

Muons have a lifetime of 2.2 microseconds, and if they are energetic enough, they can travel many kilometers, reaching the ground and whatever detector we set there. In addition, muons are very penetrating: a muon needs just 52 GeV of energy to make it 100 meters underground, through the rock lying on top of the CERN detectors. Of course, air showers include not just muons, but electrons, neutrinos, and photons, plus protons and other hadronic particles. But none of these particles, except neutrinos, can make it deep underground. And neutrinos pass through unseen…
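Both numbers are easy to check with standard values (my arithmetic, not a detector simulation):

```python
# A minimum-ionizing muon loses ~2 MeV per g/cm^2, and standard rock
# has a density of ~2.65 g/cm^3; time dilation lets the 2.2 us
# lifetime stretch over hundreds of kilometers at these energies.
DEDX = 2.0       # MeV cm^2/g, minimum-ionizing energy loss
RHO_ROCK = 2.65  # g/cm^3, standard rock
M_MU = 0.10566   # GeV, muon mass
CTAU = 0.6586    # km, c times the 2.197 us muon lifetime

loss_per_meter = DEDX * RHO_ROCK * 100 / 1000  # GeV per meter of rock
print(f"energy to cross 100 m of rock: ~{loss_per_meter * 100:.0f} GeV")

gamma = 52.0 / M_MU                           # Lorentz factor at 52 GeV
print(f"decay length: {gamma * CTAU:.0f} km")  # hundreds of km
```

The energy-loss estimate lands at ~53 GeV, right on top of the 52 GeV quoted above, and the dilated decay length (~300 km) shows why decay in flight is no obstacle.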

Now, if one reads the Delphi publication, as well as information from other experiments which have studied high-multiplicity cosmic-ray showers, one learns a few interesting facts. Delphi found a large number of events with so many muon tracks that they could not even count them! In a few cases, they could just quote a lower limit on the number of muons crossing the detector volume. One such event is shown on the picture on the right: they infer that an air shower passed through the detector by observing voids in the distribution of hits!

The number of muons seen underground is an excellent estimator of the energy of the primary cosmic ray, as the Kascade collaboration result shown on the left demonstrates (the abscissa is the logarithm of the energy of the primary cosmic ray, the ordinate the number of muons per square meter measured by the detector). But to compute the energy and composition of cosmic rays from the characteristics we observe on the ground, we need detailed simulations of the mechanisms creating the shower - and these simulations require an understanding of the physical processes underlying the production of secondaries, which are known only to a certain degree. I will get back to this point; here I just mean to point out that a detector measuring the number of muons gets an estimate of the energy of the primary nucleus. The energy, but not the species!

As I was mentioning, the Delphi data (and those of other experiments, too) showed that there are too many high-muon-multiplicity showers. The graph on the right shows the observed excess at very high muon multiplicities (the points on the far right of the graph). This is a 3-sigma effect, and it might be caused by modeling uncertainties; but it might also mean that we do not understand the composition of the primary cosmic rays: if a heavier nucleus has a given energy, it usually produces more muons than a lighter one.

The modeling uncertainties are due to the fact that the very forward production of hadrons in a nucleus-nucleus collision is governed by QCD at very small energy scales, where we cannot compute the theory to a good approximation. So we cannot really calculate with the precision we would like how likely it is that a 1,000,000-TeV proton, say, produces a forward-going 1-TeV proton in the collision with a nucleus of the atmosphere. That is, the energy distribution of the secondaries produced in the forward direction is not well known, and this reflects into the uncertainty on the shower composition.

Enter CMS

Now, what does CMS have to do with all the above ? Well. For one thing, last summer the detector was turned on in the underground cavern at Point 5 of LHC, and it collected 300 million cosmic-ray events. This is a huge data sample, made possible by the large extension of the detector and by the beautiful working of its muon chambers (which, by the way, were designed by physicists of Padova University!). Such a large dataset already includes very high-multiplicity muon showers, and some of my collaborators are busy analyzing that gold mine. Measurements of the cosmic-ray properties are ongoing.

One might hope that the collection of cosmic rays will continue even after the LHC is turned on. I believe it will, but only during the short periods when there is no beam circulating in the machine. The cosmic-ray data thus collected are typically used to keep the system “warm” while waiting for more proton-proton collisions, but they will not amount to an orders-of-magnitude increase in statistics with respect to what was already collected last summer.

The CMS cosmic-ray data can indeed provide an estimate of several characteristics of the air showers, but it will not be capable of providing results qualitatively different from the findings of Delphi - although, of course, it might provide a confirmation of simulations, disproving the excess observed by that experiment. One problem is that very energetic events are rare, so one must actively pursue them, rather than turning on the cosmic-ray data collection only when not in collider mode. But there is a further, more important point: since only muons are detected, one cannot really tell whether the simulation is tuned correctly, because a critical piece of additional information is missing: the amount of energy that the shower produced in the form of electrons and photons.

The electron and photon component of the air shower is a good discriminant of the nucleus which produced the primary interaction, as the plot on the right shows. It is in fact crucial information to rule out the presence of nuclei heavier than iron, or to pin down the composition of primaries in terms of light nuclei. Since the number of muons in high-multiplicity showers is connected to the nuclear species as well, by determining both quantities one would really be able to understand what is going on. [In the plot, the quantity Y is plotted as a function of the primary cosmic-ray energy. Y is the ratio between the logarithms of the numbers of detected muons and electrons. You can observe that Y is higher for iron-induced showers (the full black squares).]

Idea for a new experiment

The idea is thus already there, if you can add one plus one. CMS is underground. We need a detector at ground level to be sensitive to the “soft” component of the air shower - the one due to electrons and photons, which cannot punch through more than a meter of rock. So we may take a certain number of scintillation counters, layered alternately with lead sheets, all sitting on top of a thicker set of lead bricks, underneath which we may set some drift tubes or, even better, resistive plate chambers.

We can build a 20- to 50-square-meter detector this way with a relatively small amount of money, since the technology is really simple and we can even scavenge material here and there (for instance, we can use spare chambers of the CMS experiment!). Then we just build a simple coincidence logic between the resistive plate chambers, imposing that several parts of our array fire together at the passage of many muons, and send the trigger signal 100 meters down, where CMS may receive an “auto-accept” to read out the event regardless of the presence of a collision in the detector. A sketch of what I mean by coincidence logic is below.
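Something like this toy majority logic would do; the hardware, segment count, and numbers are entirely hypothetical:

```python
# Toy surface-array coincidence: require that at least k of the n
# chamber segments fire within a short time window before raising
# the auto-accept signal to send underground. Assumed parameters.
def majority_coincidence(hit_times_ns, k=4, window_ns=100.0):
    """hit_times_ns: one firing time per segment (None if quiet).
    True if at least k segments fire within the same time window."""
    times = sorted(t for t in hit_times_ns if t is not None)
    for i in range(len(times) - k + 1):
        if times[i + k - 1] - times[i] <= window_ns:
            return True
    return False

# Six segments fire almost simultaneously as a muon bundle passes.
segments = [12.0, 15.5, None, 13.1, 14.9, 16.2, None, 12.8]
print(majority_coincidence(segments))  # True -> raise auto-accept
```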

The latter is the most complicated part of the whole idea: modifying existing things is always harder than creating new ones. But it should not be too hard to read out CMS parasitically, and collect at very low frequency those high-multiplicity showers. Then the readout of the ground-based electromagnetic calorimeter would provide us with an estimate of the (local) electron-to-muon ratio, which is what we need to determine the mass of the primary nucleus.

If the above sounds confusing, it is entirely my fault: I have dumped here some loose ideas, with the aim of coming back to them when I need them. After all, this is a log. A web log, but still a log of my ideas… But I do wish to investigate the feasibility of this project further. Indeed, CMS will for sure pursue cosmic-ray measurements with the 300M events it has already collected. And CMS does have spare muon chambers. And CMS does have plans of storing them at Point 5… Why not just power them up and build a poor man’s trigger ? A calorimeter might come later…

Radiation over Atlantic (reprise) October 19, 2008

Posted by dorigo in personal, physics, science, travel.

Earlier this month I pasted here a graph showing the level of radiation over the Atlantic ocean, recorded by a digital dosimeter I carried with me. I like that device: it is fun to see it come alive and count real cosmic radiation as the plane climbs up through the atmosphere (at sea level, the thorium contamination inside buildings is the largest source of counts).

On that occasion, I found a strange effect toward the end of my trip over the Atlantic ocean, flying westward: it looked as if there were two dips in the intensity, which I was unable to explain (a list of possible effects was given in the other post). I did the same experiment on my flight back yesterday, with a halved integration time (half-hour intervals instead of one-hour intervals), as suggested by a reader. The results are shown in the graph below, collected flying eastward this time:

As you can see, there is a clear dip in the integrated dose (the blue curve) two and a half hours into the flight. Other features of the graph are the remarkable smoothness of both the integrated dose and the maximum recorded flux (the purple line). I am even a bit surprised by the smallness of the fluctuations of the latter: the maximum flux is basically a record of the highest deposited energy, the tail of the distribution of recorded rates. It all goes in the direction of confirming that the measurements are trustworthy.

The location of the dip approximately coincides with the region where the former graph showed one of its two dips, so the new data somehow support the hypothesis that those dips are real variations in the cosmic-ray flux, maybe due to non-uniformities in the magnetic field and/or solar wind effects. In other words, while the new data cannot really tell what the source of the dips is, I think they strengthen the hypothesis that the dips are not instrumental artifacts. The instrument in question is, I believe, one of the best devices available commercially to record personal radiation exposure, and I would indeed be surprised if it failed so blatantly.

Radiation over Atlantic October 8, 2008

Posted by dorigo in personal, physics, science, travel.
Tags: , ,
comments closed

Swamped by last-minute obligations before leaving for Fermilab, where I will serve an owl shift as Scientific Coordinator in the CDF control room, I was prevented from contributing to the recent discussions on the Nobel prize in Physics, other than providing the original post below. After an uneventful trip, I find myself jet-lagged this afternoon in my good old office in the CDF portakamps. Everything looks and feels as always: home.

Anyway, I want to report on a small scientific experiment here. I brought with me on my trip a digital dosimeter, which records exposure to ionizing radiation as a function of time. The device (which I described here) is a nifty little thing I bought some time ago, and still carry around when I work near particle physics experiments. It measures radiation in milliRem, and is actually very sensitive: it can signal doses in increments as small as 100 nanoRem, which I figured out correspond to about a dozen minimum-ionizing particle hits.

So, about the experiment: I set an integration time of 3600 seconds, and turned the device on before leaving Munich on LH434, a flight departing for Chicago at 9 AM this morning. The plane actually left with a half-hour delay, and finally arrived at O’Hare at about noon local time, ten full hours later. Below are the radiation doses recorded by the instrument during the flight.

As you immediately notice, the purple points describe a quickly rising function, which levels off and finally goes back down. The maximum instantaneous levels of radiation recorded by the instrument appear in line with what one would expect: as the plane takes off and gains altitude, the screening effect of our atmosphere is reduced, and the radiation increases. Local effects may have an impact on the distribution, and they thus depend on time, as the plane traveled above Europe, Greenland, Canada, and the north-western US; but they are not observable given the uncertainty on the points - 0.1 mRem is the smallest digit provided when rates are recorded in the logs.

The blue line instead puzzles me. It is the integrated dose per hour, and it should be a much more accurate description of the radiation field. But it bounces back and forth, after leveling off at about 0.25 mRem/h. What are the causes of this funny behavior ? Here is what I can think of:

  • a real fluctuation in the flux of cosmic rays, due to magnetic field effects
  • an erroneous recording of data by the instrument, specifically at 14h and 16h (Munich time)

I instead discard the option that the fluctuations are statistical in nature: 0.01 mRem corresponds, as I noted above, to roughly a thousand hits, and the quick estimate below shows how small the resulting fluctuations should be.
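A minimal sketch of that estimate, using only the rough calibration I quoted above:

```python
# Poisson check: if 0.01 mRem ~ 1000 particle hits, how large is the
# expected statistical fluctuation on an hourly dose of ~0.25 mRem/h?
hits_per_001_mrem = 1000
dose_per_hour = 0.25                               # mRem/h
hits = dose_per_hour / 0.01 * hits_per_001_mrem    # ~25,000 hits/hour

rel_sigma = hits ** -0.5                           # Poisson: sqrt(N)/N
print(f"expected fluctuation: {100 * rel_sigma:.1f}% of the hourly dose")
```

That is below one percent of the hourly dose - far too small to explain the bounces in the blue curve.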

Any other idea ?

PS (mainly for the record): another simple experiment I performed with the dosimeter is discussed here.