
Variation found in a dimensionless constant! April 1, 2009

Posted by dorigo in cosmology, mathematics, news, physics, science.
comments closed

I urge you to read this preprint by R. Scherrer (from Vanderbilt University), which appeared yesterday on the arXiv. It is titled “Time variation of a fundamental dimensionless constant”, and I believe it might have profound implications for our understanding of cosmology, as well as for theoretical physics. I quote the incipit of the paper below:

“Physicists have long speculated that the fundamental constants might not, in fact, be constant, but instead might vary with time. Dirac was the first to suggest this possibility [1], and time variation of the fundamental constants has been investigated numerous times since then. Among the various possibilities, the fine structure constant and the gravitational constant have received the greatest attention, but work has also been done, for example, on constants related to the weak and strong interactions, the electron-proton mass ratio, and several others.”

Many thanks to Massimo Pietroni for pointing out the paper to me this morning. I am now collecting information about the study, and will update this post shortly.

Neutrino Telescopes day 2 notes March 12, 2009

Posted by dorigo in astronomy, cosmology, news, physics, science.
comments closed

The second day of the “Neutrino Telescopes XIII” conference in Venice was dedicated to, well, neutrino telescopes. I have written down in stenographic fashion some of the things I heard, and I offer them to those of you who are really interested in the topic, without much editing. Besides, making sense of my notes takes quite some time, more than I have tonight.

So, I apologize for spelling mistakes (the ones I myself recognize post-mortem), in addition to the more serious conceptual ones coming from missed sentences or errors caused by my poor understanding of English, of the matter, or of both. Also, I apologize to those of you who would have preferred a more succinct, readable account: as Blaise Pascal once put it, “I have made this letter longer than usual, because I lack the time to make it short.”

NOTE: the links to slides are not working yet – I expect that the conference organizers will fix the problem tomorrow morning.

Christian Spiering: Astroparticle Physics, the European strategy (slides here)

Spiering gave some information about two new European bodies, ApPEC and ASPERA. ApPEC has two committees that offer advice to national funding agencies and improve links and communication between the astroparticle physics community and the scientific programmes of organizations like CERN, ESA, etc. ASPERA was launched in 2006 to produce a roadmap for astroparticle physics in Europe, in close coordination with ASTRONET and with links to CERN strategy bodies.

The roadmapping has two parts: first, the science case, an overview of the status, and some recommendations for convergence; second, a critical assessment of the plans and a calendar of milestones, coordinated with ASTRONET.

For dark matter and dark energy searches, Christian displayed a graph showing the WIMP cross section as a function of time, with the reach of present-day experiments. In 2015 we should reach cross sections of about 10^-10 picobarns; our sensitivity is now at some 10^-8. The reach depends on background, funding, and infrastructure. The idea is to move toward 2-ton-scale, zero-background detectors. Projects: Zeplin, Xenon, others.

In an ideal scenario, LHC observations of new particles at the weak scale would place these observations in a well-confined particle-physics context, and direct detection would be supported by indirect signatures. In case of a discovery, smoking-gun signatures of direct detection such as directionality and annual variations would be measured in detail.

Properties of neutrinos: the direct mass measurement efforts are KATRIN and Troitsk. Double beta decay experiments are Cuoricino, Nemo-3, Gerda, Cuore, et al. The KKGH group claimed a signal corresponding to masses of a few tenths of an eV, but a normal hierarchy implies a mass of order 10^-3 eV for the lightest neutrino. Experiments are expected to be in operation (Cuoricino, Nemo-3) or to start by 2010-2011. SuperNemo will start in 2014.

A large infrastructure for proton decay is advised. For charged cosmic rays, depending on which part of the spectrum one looks at, different kinds of physics contribute and can be explored.

The case for Auger-North is strong: high-statistics astronomy with reasonably fast data collection is needed.

For high-energy gamma rays, there has been enormous progress over the last 15 years, mostly from imaging atmospheric Cherenkov telescopes (IACTs): Whipple, Hegra, CAT, Cangaroo, Hess, Magic, Veritas; there are also wide-angle devices. Among existing air Cherenkov telescopes, Hess and Magic are running, and very soon Magic will become Magic-II. Whipple runs as a monitoring telescope.

There are new plans for MACE in India, something between Magic and Hess. CTA and AGIS are in their design phase.

Aspera’s recommendations: the priority of VHE gamma astrophysics is CTA. They recommend design and prototyping of CTA and selection of sites, and proceeding decidedly towards start of deployment in 2012.

For point neutrino sources, there has been tremendous progress in sensitivity over the last decade: a factor of 1000 in flux sensitivity within 15 years. IceCube will deliver what it has promised by 2012.

For gravitational waves, there are LISA and VIRGO. LISA probes frequencies around 10^-2 Hz; VIRGO will go to 100-10000 Hz. The reach is of several to several hundred sources per year. The Einstein Telescope, an underground gravitational-wave detector, could access thousands of sources per year; its construction would start in 2017. The conclusions: Einstein is the long-term future project of ground-based gravitational-wave astronomy in Europe. A decision on funding will come after first detections with enhanced LIGO and VIRGO, most likely after about a year of data has been collected.

In summary, the budget will increase by a factor of more than two in the next decade. Km3net, the megaton detector, CTA, and ET will be the experiments taking the largest share. We are moving into regions with a high discovery potential, with an accelerated increase of sensitivity in nearly all fields.

K. Hoffman: Results from IceCube and Amanda, and prospects for the future (slides here)

IceCube will become the first instrumented cubic-kilometer neutrino telescope. Amanda-II consists of 677 optical modules embedded in the ice at depths of 1500-2000 m; it has been a testbed for IceCube and for deploying optical modules. IceCube has been under construction for the last several years: strings of photomultipliers have been deployed in the ice, and 59 of them are operating.

The rates: IC40 sees 110 neutrino events per day. They are getting close to 100% livetime: 94% in January. IceCube has the largest effective area for muons, thanks to the long track length. The energy sensitivity is in the TeV-PeV range.

Ice properties are important to understand. A dust logger measures dust concentration, which is connected to the attenuation length of light in ice. There is a thick layer of dust sitting at a depth of 2000m, clear ice above, and very clear ice below. They have to understand the light yield and propagation well.

Of course one of the most important parameters is the angular resolution, which improved as the detector got larger. One of the more exciting things this year was to see the point spread function peak at less than one degree for muons with long track lengths.

Seeing the Moon is always reassuring for a telescope. They did it: a >4 sigma effect from cosmic rays blocked by the Moon.

With the waveforms available in IceCube, energy reconstruction exploits the fact that energetic muons are not minimum-ionizing: the energy is reconstructed from the number of photons emitted along the track. They can achieve some energy resolution, and there is progress in understanding how to reconstruct energy.

First results from point-source searches: the 40-string configuration data will be analyzed soon. The point sources are sought with an unbinned likelihood search, which takes an energy variable into account, since point sources are expected to have a harder energy spectrum than atmospheric neutrinos. From 5114 neutrino candidates in 276 days, they found one hot spot in the sky, with a significance, after accounting for the trials factor, of about 2.2 sigma. Next year there will be variables less sensitive to the dust model, so they might be able to say more about that spot soon.

For seven years of data, with a 3.8-year livetime, the hottest spot has a significance of 3.3 sigma. With one year of data, IceCube-22 will already be more sensitive than Amanda. IceCube and Antares are complementary, since IceCube looks at northern declinations and Antares at southern ones. The point-source flux sensitivity is down to 10^-11 GeV cm^-2 s^-1.

For GRBs one can use a triggered search, which is an advantage; the latest results give a limit from 400 bursts. From IceCube-22, an unbinned search similar to the point-source one gives an expected exclusion power down to 10^-1 GeV cm^-2 (in E^2 dN/dE units) over most of the energy range.

During the naked-eye GRB of March 19, 2008, the detector was in test mode, with only 9 of 22 strings taking data. The predicted (Bahcall) flux peaks at 10^6 GeV at a level of 10^-1, but the limit found is 20 times higher.

Finally, they are looking for WIMPs. A search with the 22-string IceCube and 104 days of livetime was recently sent for publication. They can reach well down in cross section.

Atmospheric neutrinos are also a probe for violations of Lorentz invariance, possibly from quantum-gravity effects. The survival probability depends on energy; assuming maximal mixing, their sensitivity goes down to one part in 10^28. They look for a change in the expected flavor oscillation: depending on where atmospheric neutrinos are produced, they traverse more or less of the Earth's core, so one gets a neutrino beam with different baselines, and a violation would show up as a difference in the oscillation probability, with the oscillation parameters becoming energy dependent.
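
The geometry behind those different baselines can be sketched numerically: for a detector at the surface and neutrinos produced at height h in the atmosphere, the path length as a function of zenith angle spans almost three orders of magnitude. This is a minimal sketch; the 15 km production height is an assumed typical value, not a number from the talk.

```python
import numpy as np

R_EARTH = 6371.0  # km, mean Earth radius
H_PROD = 15.0     # km, assumed typical neutrino production height in the atmosphere

def baseline_km(cos_zenith):
    """Distance from the production point to a surface detector,
    for a given cosine of the zenith angle (1 = down-going, -1 = up-going)."""
    c = cos_zenith
    return np.sqrt((R_EARTH * c) ** 2 + 2 * R_EARTH * H_PROD + H_PROD**2) - R_EARTH * c

print(baseline_km(1.0))   # straight down: 15 km
print(baseline_km(-1.0))  # straight up through the Earth: 12757 km
```

Since the oscillation phase goes as L/E, scanning zenith angle and energy samples many different baselines with a single atmospheric "beam".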

In the future they would like to see a high-energy extension. Ice is the only medium where one can see a coherent radio signal, an optical one, and an acoustic one too. The past season was very successful, with the addition of 19 new strings. Many analyses of the 22-string configuration are complete. Analysis techniques are being refined to exploit the size, energy threshold, and technology used. Work is underway to develop the technology to build a GZK-scale neutrino detector after IceCube is complete.

Vincenzo Flaminio: Results from Antares (slides here)

Potential sources of galactic neutrinos can be SN remnants, pulsars, microquasars, and extragalactic ones are gamma-ray bursts and active galactic nuclei. A by-product of Antares is an indirect search for dark matter, results are not ready yet.

Neutrinos from supernovae: these act as particle accelerators, which can yield hadrons and gammas from neutral pion decays. Possible sources are those found by Auger, or for example the TeV photons which come from molecular clouds.

Antares is an array of photomultiplier tubes that look at the Cherenkov light produced by muons crossing the detector. The site is in the south of France, where the galactic center is visible 75% of the time. The collaboration comprises 200 physicists from many European countries. The control room in Toulon is more comfortable than the Amanda site (and this wins the understatement prize of the conference).

The depth in water is 2500m. All strings are connected via cables on the seabed. 40km long electro-optical cable connects ashore. Time resolution monitored by LED beacon in each detector storey. A sketch of the detector is shown below.

Deployment started in 2005, in 2006 first line installed. Finished one year ago. In addition there is an acoustic storey and several monitoring instruments. Biologists and oceanographers are interested in what is done, not just neutrino physicists.

The detector positioning is an important point, because the lines move with the sea currents. A large number of transmitters is installed along the lines, and their information is used to reconstruct the precise position of the lines minute by minute.

They collect triggers at a 10 Hz rate with 12 lines. They detected 19 million muons with the first 5 lines, 60 million with the full detector.

The first physics analyses are ongoing. They select up-going neutrinos, thus avoiding the low S/N ratio with respect to atmospheric muons. The rate is of the order of two per day in the multi-line configuration.

Conclusions: Antares has successfully reached the end of construction phase. Data taking is ongoing, analyses in progress on atmospheric muons and neutrinos, cosmic neutrino sources, dark matter, neutrino oscillations, magnetic monopoles, etcetera.

David Saltzberg: Overview of the Anita experiment (slides here)

Anita flies at 120,000 ft above the ice. It is the eyepiece of the telescope; the objective is the large amount of ice of Antarctica. The principle was tested at SLAC with 8 metric tons of ice: one observes radio pulses from the ice. A wake-field radio signal is detected, which goes up and down in less than a nanosecond due to its Cherenkov nature; this is called the Askaryan effect. One can observe the number of particles in the shower, since the measured field tracks it. The signal is 100% linearly polarized. The wavelength is bigger than the size of the shower, so the emission is coherent: at a PeV there are more radio quanta emitted than optical ones.
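
The coherence argument can be illustrated with a toy phasor sum (an illustrative sketch, not ANITA's actual simulation): when the wavelength exceeds the shower size, the fields of the N particles add in phase and the radiated power scales as N squared, whereas with random phases it scales only as N.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of shower particles (illustrative, made-up number)

# Wavelength >> shower size: all emitters in phase, power scales as N^2
coherent_power = abs(np.sum(np.exp(1j * np.zeros(N)))) ** 2

# Wavelength << shower size: random phases, power scales only as N
incoherent_power = abs(np.sum(np.exp(1j * rng.uniform(0, 2 * np.pi, N)))) ** 2

print(coherent_power / incoherent_power)  # roughly N, up to statistical fluctuations
```

This N-fold boost is why the radio channel wins over the optical one at high shower multiplicities.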

They will use this at very high energy, looking for GZK-induced neutrinos. The GZK process converts protons into neutrinos within about 50 Mpc of the sources.

The energy is at the level of 10^18 eV or higher; the proper time is 50 milliseconds: the longest-baseline neutrino experiment possible.

Anita has a GPS antenna for position and orientation, which requires a fraction-of-a-degree resolution. It is solar powered. The antennas point down by 10 degrees.

This 50-page document describes the instrument.

Lucky coincidences: 70% of the world's fresh water is in Antarctica, and it is the most radio-quiet place. The place selects itself, so to speak.

They made a flight with a live time of 17.3 days, but this one never flew above the thickest ice, which is where most of the signal should be coming from.

The Askaryan signal gets distorted by the antenna response, the electronics, and thermal noise. The triggering works like any multi-level trigger: L1 requires sufficient energy in one antenna, L2 requires a coincidence between adjacent L1 triggers, and L3 brings the rate down to 5 Hz from a starting 150 kHz.

They put a transmitter underground to produce pulses to be detected. Cross-correlation between antennas provides interferometry and gives the position of the source. The resolution obtained is an amazing 0.3 degrees in elevation and 0.7 degrees in azimuth. The ground pulsers make even very small effects stand out: even a 0.2-degree tilt of the detector can be spotted by looking at the errors in elevation as a function of azimuth.
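
The pointing technique can be sketched with a toy cross-correlation between two antenna waveforms. Everything here is made up for illustration: a Gaussian-modulated pulse, an assumed 2.6 GS/s sampling rate, and an artificial 13-sample delay between the two channels.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 2.6e9  # sample rate in Hz (assumed, for illustration)
n = 256
t = np.arange(n) / fs

# A Gaussian-modulated radio pulse centered at 20 ns
pulse = np.exp(-(((t - 20e-9) / 2e-9) ** 2)) * np.sin(2 * np.pi * 400e6 * t)

delay_samples = 13  # artificial geometric delay between the two antennas
w1 = pulse + 0.05 * rng.standard_normal(n)
w2 = np.roll(pulse, delay_samples) + 0.05 * rng.standard_normal(n)

# The peak of the cross-correlation recovers the relative delay,
# which maps to the arrival direction of the plane wave
xcorr = np.correlate(w2, w1, mode="full")
lag = int(np.argmax(xcorr)) - (n - 1)
print(lag)  # 13
```

With several antenna pairs, the set of recovered delays over-constrains the source direction, which is how sub-degree pointing becomes possible.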

First pass of the data analysis: 8.2M hardware triggers, of which 20,000 point well to the ice. After requiring upcoming plane waves, isolated from camps and other events, a few events remain; these could be residual man-made noise. Background estimates: thermal noise, which is well simulated and gives less than one event after all cuts, and anthropogenic impulsive noise, like iridium phones, spark plugs, and discharges from structures.

Results: having seen zero vertically polarized events surviving the cuts, they set constraints on GZK production models: the best result to date in the energy range from 10^10 to 10^13 GeV.

Anita-2 collected 27 million triggers of better quality, over deeper ice, during 30 days afloat; these data are still to be analyzed. Anita-1 is undergoing a second-pass deep analysis. Anita-2 has better data: expect a factor of 5-10 more GZK sensitivity from it.

Sanshiro Enomoto: Using neutrinos to study the Earth: Geoneutrinos (slides here)

Geoneutrinos are generated in the beta decay chains of natural isotopes (U, Th, K), which all yield antineutrinos. With an organic scintillator they are detected via the inverse beta decay reaction, which yields a neutron and a positron; the threshold is at 1.8 MeV. Uranium and thorium contribute in this energy range, while the potassium yield is below it. Only U-238 can be seen.
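
The 1.8 MeV figure follows from the kinematics of inverse beta decay on a proton at rest; a quick check with standard particle masses:

```python
# Particle masses in MeV (standard PDG values)
m_p = 938.272  # proton
m_n = 939.565  # neutron
m_e = 0.511    # electron/positron

# Threshold antineutrino energy for  anti-nu_e + p -> n + e+  (proton at rest):
# E_thr = ((m_n + m_e)^2 - m_p^2) / (2 m_p)
E_thr = ((m_n + m_e) ** 2 - m_p**2) / (2 * m_p)
print(round(E_thr, 3))  # 1.806 MeV
```

This is why the potassium antineutrinos, whose spectrum ends below this energy, are invisible to the inverse-beta-decay channel.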

Radiogenic heat dominates the Earth's energetics. The measured terrestrial heat flow is 44 TW, and the radiogenic heat is 3 TW. The only direct geochemical probes: the deepest borehole reaches only 12 km, and rock samples go down to 200 km underground. Heat release from the surface peaks off the Americas in the Pacific and in the south Indian Ocean. The estimate is 20 TW from radioactive heat: 8 from U, 8 from Th, 3 from K. Core heat flow from solidification etc. is estimated at 5-15 TW, secular cooling at 18±10 TW.

Kamland has seen 25 events above backgrounds, consistent with expectations.

I did not take further notes on this talk, but I was impressed by some awesome plots of Earth planispheres with all the sources of neutrino backgrounds, meant to figure out the best place for a detector studying geoneutrinos. Check the slides for them…

Michele Maltoni: Synergies between future atmospheric and long-baseline neutrino experiments (slides here)

A global six-parameter fit of neutrino parameters was shown, including solar, atmospheric, reactor, and accelerator neutrinos, but not SNO-III yet. There is a small preference for non-zero theta_13, coming entirely from the solar sector; as pointed out by G. Fogli, we do not find a non-zero theta_13 angle from atmospheric data. All we can do is point out that there might be something interesting, and suggest that experiments do their own analyses fast.

The question is: in this picture, where many experiments contribute, is there space left for atmospheric neutrinos? What is the role of atmospheric neutrino measurements? Do we need them at all?

At first sight, there is not much left for atmospheric neutrinos. The mass determination is dominated by MINOS and theta_13 by CHOOZ; atmospheric data dominate the determination of the mixing angle, and atmospheric neutrino measurements have the highest statistics, but with the coming of the next generation this is going to change. There is a symmetry in the sensitivity shape of the other experiments to some of the parameters; when atmospheric data are included, the symmetry in theta_13 is broken, which distinguishes between the normal and inverted hierarchies.

Atmospheric data also help determine the octant of \sin^2 \theta_{23} and \Delta m^2_{31}, and they introduce a modulation in the \delta_{CP} - \sin \theta_{13} plot. Will this usefulness continue in the future?

Sensitivity to theta_13: apart from the hints mentioned so far, atmospheric neutrinos can observe theta_13 through matter (MSW) effects. In practice, the sensitivity is limited by statistics: at E=6 GeV the atmospheric flux is already suppressed, and background comes from \nu_e \to \nu_e events, which strongly dilute the \nu_\mu \to \nu_e events. Also, the resonance occurs either for neutrinos or for antineutrinos, but not both.

As far as resolution goes, megaton detectors are still far in the future, but long-baseline experiments are starting now.

One concludes that the sensitivity to theta_13 is not competitive with dedicated LBL and reactor experiments.

Completely different is the situation for other properties, since the resonance can be exploited once theta_13 is measured: there is a resonant enhancement of neutrino (antineutrino) oscillations for a normal (inverted) hierarchy, mainly visible at high energy, >6 GeV. The effect can be observed if the detector can discriminate charge or, if no charge discrimination is possible, if the numbers of neutrinos and antineutrinos differ.

Sensitivity to the hierarchy depends on charge discrimination for muon neutrinos. Sensitivity to the octant: in the low-energy region (E<1 GeV), for theta_13=0 there is an excess of \nu_e flux for theta_23 in one octant or the other. Otherwise there are lots of oscillations, but the effect persists on average, and it is present for both neutrinos and antineutrinos. At high energy, E>3 GeV, for non-zero theta_13 the MSW resonance produces an excess of electron-neutrino events; the resonance occurs only for one kind (neutrinos or antineutrinos).

So in summary one can detect many features with atmospheric neutrinos, but only with some particular characteristics of the detector (charge discrimination, energy resolution…).

Without atmospheric data, only K2K can say something on the neutrino hierarchy for low theta_13.

LBL experiments have poor sensitivity due to parameter degeneracies. Atmospheric neutrinos contribute in this case. The sensitivity to the octant is almost completely dominated by atmospheric data, with only minor contributions by LBL measurements.

One final comment: there might be hints of neutrino hierarchy in high-energy data. If theta_13 is really large, there can be some sensitivity to neutrino mass hierarchy. So the idea is to have a part of the detectors with increased photo-coverage, and use the rest of the mass as a veto: the goal is to lower the energy threshold as much as possible, to gain sensitivity to neutrino parameters with large statistics.

Atmospheric data are always present in any long-baseline neutrino detector: ATM and LBL provide complementary information on neutrino parameters, information in particular on hierarchy and octant degeneracy.

Stavros Katsanevas: Toward a European megaton neutrino observatory (slides here)

Underground science: interdisciplinary potential at all scales. Galactic supernova neutrinos, galactic neutrinos, SN relics, solar neutrinos, geo-neutrinos, dark matter, cosmology -dark energy and dark matter.

Laguna is aimed at defining and realizing this research programme in Europe. It includes a majority of the European physicists interested in the construction of very massive detectors based on one of three liquid technologies: water, liquid argon, and liquid scintillator.

Memphys, Lena, Glacier: where could we put them? The muon flux goes down with the overburden, so the sites are compared by depth. In Frejus there is the possibility to put a detector between the road and train tunnels. The Frejus rock is neither hard nor soft; hard rock can become explosive because of stresses, which is not good. Another site is Pyhasalmi in Finland, but there the rock is hard.

Frejus is probably the only place where one can put water Cherenkov detectors. For liquid argon, there is ICARUS (hopefully starting data taking in May) and others (LANNDD, GLACIER, etc.). Glacier is a 70 m tank with several novel concepts: a safe LNG tank, of the kind developed over many years by the petrochemical industry. R&D includes readout systems and electronics, safety, HV systems, and LAr purification. One must think about building an intermediate-scale detector.

The physics scope is a complementary program: Memphys has a lot of reach in searches for the positron-pizero decay of the proton, while liquid argon is better for the kzero mode. Proton lifetime expectations reach 10^36 years.

By 2013-2014 we will know whether \sin^2 \theta_{13} is larger than zero.

The European megaton detector community (three liquids), in collaboration with its industrial partners, is currently addressing common issues (sites, safety, infrastructures, non-accelerator physics potential) in the context of LAGUNA (EU FP7). Cost estimates will be ready by July 2010.

D.F. Cowen: The physics potential of IceCube's Deep Core sub-array (slides here)

A new sub-array in IceCube is called Deep Core (ICDC). Originally conceived as a way to improve the sensitivity to WIMPs, its denser array lowers the energy threshold, giving an order of magnitude decrease in the low-energy reach. There are six special strings plus seven nearby IceCube strings. The vertical spacing is 7 meters, with 72-meter horizontal interstring spacing: a x10 density with respect to IceCube.

The effective scattering length in deep ice, which is very clear, is longer than 40 meters. This gives a better possibility to do a calorimetric measurement.

The Deep Core is at the bottom center. The top modules of each string act as an active veto against backgrounds from down-going muon events; on the sides, three layers of IceCube strings also provide a veto. These beat down the cosmic-ray background a lot.

The ICDC veto algorithms: one runs online and finds the event light intensity, the weighted center of gravity, and the time. They do a number of things and come up with a 1:1 S/N ratio. So ICDC improves the sensitivity to WIMPs, to neutrino sources in the southern sky, and to oscillations. For WIMPs, annihilation can occur in the center of the Earth or of the Sun: annihilations into b-bbar or tau pairs give soft neutrinos, while those into W boson pairs yield hard ones. This way they extend the reach to masses below 100 GeV, at cross sections of 10^-40 cm^2.

In conclusion, ICDC can analyze data at lower neutrino energy than previously thought possible. It improves overlap with other experiments. It provides for a powerful background rejection, and it has sufficient energy resolution to do a lot of neutrino oscillation studies.

Kenneth Lande: Projects in the US: a megaton detector at Homestake (slides here)

DUSEL is at Homestake, in South Dakota. There are four water Cherenkov tanks in the design. Nearby is the old site of the chlorine experiment. The shafts are a km apart.

DUSEL will be an array of 100-150 kT fiducial-mass Cherenkov detectors, at 1300 km distance from FNAL. The beam goes from 0.7 MW to 2.0 MW as the project proceeds. Eventually 100 kT of argon will be added. A picture below shows a cutaway view of the facility.

The goals are an accelerator-based theta_13 measurement, the neutrino mass hierarchy, and CP violation through delta_CP. On the non-accelerator side, the program includes studies of proton decay, relic SN neutrinos, prompt SN neutrinos, atmospheric neutrinos, and solar neutrinos. They can build tanks up to 70 m wide, but settled on 50-60 m. The plan is to build three modules.

Physics-wise, the FNAL beam has oscillated and disappeared at energies around 4 GeV. The rate is 200,000 CC events per year assuming 2 MW power (raw events, no oscillation). Electron-neutrino appearance for nu and antinu as a function of energy gives the oscillation parameters and the mass hierarchy.

The reach in theta_13 is below 10^-2. For nucleon decay, they are looking in the range of 10^34 years: 300 kT per 10 y means 10^35 proton-years. They are also sensitive to the K-nu decay mode, at the level of 8×10^33.
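
As a back-of-the-envelope check of the exposure arithmetic (my sketch, not the talk's accounting, which may use a different fiducial mass or count only a subset of the protons), one can count the protons in 300 kT of water:

```python
N_A = 6.022e23             # Avogadro's number, per mol
mass_g = 300e3 * 1e6       # 300 kT of water, in grams
molar_mass_h2o = 18.015    # g/mol
protons_per_molecule = 10  # 2 from hydrogen + 8 from oxygen

n_protons = mass_g / molar_mass_h2o * N_A * protons_per_molecule
print(f"{n_protons:.1e}")  # 1.0e+35 protons
```

By this count, 300 kT of water holds about 10^35 target protons, so the quoted exposure figure depends on exactly which protons and which running period are counted.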

DUSEL can choose the overburden. A deep option can go deeper than Sudbury.

US power reactors are far from Homestake. Typical distance is 500 km. The neutrino flux from reactors is 1/30 of that of SK.

For a SN in our galaxy they expect about 100,000 events in 10 seconds. For a SN in M31 they expect about 10-15 events in a few seconds.

Detector construction: excavation, installation of a water-tight liner… The financial timetable is uncertain. At the moment the water level is being pumped down. Rock studies can start in September.

And that would be all for today… I heard many other talks, but cannot bring myself to comment on those. Please check the conference site (http://neutrino.pd.infn.it/NEUTEL09/) for the slides of the other talks!

Neutrino Telescopes XIII March 8, 2009

Posted by dorigo in astronomy, cosmology, news, personal, physics, science, travel.
comments closed

The conference “Neutrino Telescopes” has arrived at its XIII edition. It is a very nicely organized workshop, held in Venice every year towards the end of winter or the start of spring. For me it is especially pleasing to attend, since the venue, Palazzo Franchetti (see picture below), is located a ten-minute walk from my home: a nice change from my usual hour-long commute to Padova by train.

This year the conference will start on Tuesday, March 10th, and will last until Friday. I will be blogging from there, hopefully describing some new results heard in the several interesting talks that have been scheduled. Let me mention only a few of the talks, pasted from the program:

  • D. Meloni (University of Roma Tre)
    CP Violation in Neutrino Physics and New Physics
  • K. Hoffman (University of Maryland)
    AMANDA and IceCube Results
  • S. Enomoto (Tohoku University)
    Using Neutrinos to study the Earth
  • D.F. Cowen (Penn State University)
    The Physics Potential of IceCube’s Deep Core Sub-Detector
  • S. Katsanevas (Université de Paris 7)
    Toward a European Megaton Neutrino Observatory
  • E. Lisi (INFN, Bari)
    Core-Collapse Supernovae: When Neutrinos get to know Each Other
  • G. Altarelli (University of Roma Tre & CERN)
    Recent Developments of Models of Neutrino Mixing
  • M. Mezzetto (INFN, Padova)
    Next Challenge in Neutrino Physics: the θ13 Angle
  • M. Cirelli (IPhT-CEA, Saclay)
    PAMELA, ATIC and Dark Matter

The conference will close with a round table: here are the participants:

Chair: N. Cabibbo (University of Roma “La Sapienza”)
B. Barish (CALTECH)
L. Maiani (CNR)
V.A. Matveev (INR of RAS, Moscow)
H. Minakata (Tokyo Metropolitan University)
P.J. Oddone (FNAL)
R. Petronzio (INFN, Roma)
C. Rubbia (CERN)
M. Spiro (CEA, Saclay)
A. Suzuki (KEK)

Needless to say, I look forward to a very interesting week!

Vernon Barger: perspectives on neutrino physics May 22, 2008

Posted by dorigo in cosmology, news, physics, science.
comments closed

Yesterday morning Barger gave the first talk at PPC 2008, discussing the status and the perspectives of research in physics and cosmology with neutrinos. I offer my notes of his talk below.

Neutrinos mix among each other and have mass. There is a matrix connecting the flavor and mass eigenstates, parametrized by three angles and a phase. To these one can add two Majorana phases, relevant for neutrinoless double beta decay: \phi_2 and \phi_3. Practically speaking, these are unmeasurable.

What do we know about these parameters? We are at the 10th anniversary of the groundbreaking discovery by SuperKamiokande, which was then confirmed by other neutrino experiments: MACRO, K2K, MINOS. The new MINOS result has a 6.5 sigma significance in the low-energy region. This allows a precise measurement of the mass difference: it is found that (\Delta m^2)_{32} = 2.4 \times 10^{-3} eV squared at \sin^2 \theta_{23}=1.00. The mixing angle is maximal, but we do not really know, because there is a substantial error on it.
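
The disappearance measurement can be illustrated with the standard two-flavor vacuum survival probability, P = 1 - sin^2(2\theta) sin^2(1.267 \Delta m^2 L/E), evaluated here with the quoted best-fit values and the 735 km Fermilab-Soudan baseline of MINOS (a sketch, not the experiment's full fit):

```python
import numpy as np

def p_survival(L_km, E_GeV, sin2_2theta=1.0, dm2_eV2=2.4e-3):
    """Two-flavor nu_mu survival probability in vacuum.
    The 1.267 factor converts eV^2 * km / GeV into the oscillation phase."""
    return 1.0 - sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# MINOS baseline: 735 km; maximal disappearance falls near E ~ 1.4 GeV
for E in (1.4, 3.0, 10.0):
    print(E, p_survival(735.0, E))
```

The energy dependence of the dip is what lets a disappearance experiment pin down \Delta m^2, while the depth of the dip measures \sin^2 2\theta.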

Solar neutrino oscillations were a mystery for years. The flux of solar neutrinos was calculated by Bahcall, and a deficit was observed. The deficit has an energy structure, as measured by the gallium, chlorine, SuperK, and SNO experiments, which look at neutrinos coming from different reactions (pp interactions, beryllium-7, and boron-8 neutrinos) because of the detectors' different energy thresholds.
The interpretation of the data, which evolved over time, is now that the solar mixing angle is quite large: the high-energy neutrinos sent here are created in the center of the Sun, but they make an adiabatic transition in matter to a state \nu_2, which travels to the Earth. This happens to the matter-dominated, higher-energy neutrinos; the vacuum-dominated ones at lower energy have a different phenomenology.

There is a new result from Borexino: they measured neutrinos from the beryllium line, and they reported a result consistent with the others. Borexino is going to measure the deficit with 2% accuracy, and if KamLAND achieves enough purity they can also get down to about 5% accuracy.

KamLAND data provide a beautiful confirmation of the solution of the solar neutrino problem: the solar parameters are measured precisely in the (\Delta m^2)_{21} vs \tan^2 \theta_{12} plane. The survival probability as a function of distance over neutrino energy shows a beautiful oscillation. The result only assumes CPT invariance. The angle \theta_{12} is 34.4° with 1° accuracy, and (\Delta m^2)_{21} = 7.6 \times 10^{-5} eV squared.

There is one remaining angle to determine, \theta_{13}. From reactor experiments one expects the probability of electron-neutrino disappearance to be zero in short-baseline experiments. Chooz set a limit at \theta_{13} < 11.5 degrees. There are experiments with sensitivities on \sin^2 \theta_{13} of 0.02 (Double Chooz, Daya Bay, RENO). The angle is crucial because it is a gateway to CP violation: if the angle is zero, CP violation is not accessible in this sector.

What do we expect theoretically for \theta_{13}? There are a number of models to interpret the data, and predictions cluster around 0.01. Half of the models will be tested with the predicted accuracies of the planned experiments.

There is a model called tri-bimaximal mixing: a composition analogous to the quark-model basis of neutral pions and the eta and eta' mesons. It is an intriguing matrix: it could be a new symmetry of nature, possibly softly broken with a slightly non-zero value of the angle \theta_x, or it could well be an accident.
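The tri-bimaximal pattern referred to here is conventionally written as follows (the standard textbook form, up to sign conventions, not taken from the talk's slides):

```latex
U_{\rm TBM} =
\begin{pmatrix}
\sqrt{\tfrac{2}{3}} & \tfrac{1}{\sqrt{3}} & 0 \\[2pt]
-\tfrac{1}{\sqrt{6}} & \tfrac{1}{\sqrt{3}} & -\tfrac{1}{\sqrt{2}} \\[2pt]
-\tfrac{1}{\sqrt{6}} & \tfrac{1}{\sqrt{3}} & \tfrac{1}{\sqrt{2}}
\end{pmatrix}
```

It implies \sin^2\theta_{12}=1/3 and \sin^2\theta_{23}=1/2, both consistent with the measurements quoted above, and \theta_{13}=0, so any nonzero measurement of \theta_{13} would break the pattern.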

So, we need to find what \theta_{13} is: it is zero in tri-bimaximal mixing. We then need to measure the mass hierarchy: is it normal (the third state much heavier than the other two) or inverted (the lightest state much lighter than the others)? Also, is CP violated?

Neutrinoless double-beta decay can tell us if the neutrinos are Majorana particles. In the inverted hierarchy, you measure the average mass versus the sum. There is a lot of experimental activity going on here.

The cosmic microwave background has put a limit on the sum of the masses at 0.6 eV. By doing 21-cm tomography one could measure neutrino masses with a precision of 0.007 eV. If this is realizable, it could individually measure the masses of these fascinating particles.

Barger then mentioned the idea of mapping the universe with neutrinos: the idea is that active galactic nuclei (AGN) produce hadronic interactions with pions decaying to neutrinos, and there is a whole range of experiments looking at this. You could study the neutrinos coming from AGN and their flavor composition.
Another late-breaking development is that Auger has shown a correlation of ultra-high-energy cosmic rays with AGNs in the sky: the cosmic rays seem to arrive directly from the nuclei of active galaxies. Auger found a higher correlation with sources within 100 Mpc, falling off at larger distances. Cosmic rays are already probing the AGN, and this is very good news for neutrino experiments.

Then he discussed neutrinos from the sun: annihilation of weakly interacting massive particles (WIMPS), dark matter candidates, can give neutrinos from WW, ZZ, ttbar production. The idea is that the sun captured these particles gravitationally during its history, and the particles annihilate in the center of the sun, letting neutrinos escape with high energy. The ICECUBE discovery potential is high if the spin-dependent cross section for WIMP interaction in the sun is sufficiently high.

In conclusion, we have a rich panorama of experiments that all make use of neutrinos as probes of exotic phenomena as well as processes which we have to measure better to gain understanding of fundamental physics as well as gather information about the universe.

Short summary from an intense day at PPC08 – part 1 May 21, 2008

Posted by dorigo in cosmology, news, physics, science.
Tags: , , , ,
comments closed

Besides my talk, which opened the morning session, there were a number of interesting talks today at PPC 2008, the conference on the interconnection between particle physics and cosmology which is being held at the University of New Mexico in Albuquerque. I will give some quick highlights below from the morning session, reserving the right to provide more information on some of them later on, as I find time to reorganize my notes and go back to the talk slides for a second look; a summary of selected afternoon talks will have to wait until tomorrow. In the meantime, you can find the slides of the talks at this link.

I. Sarcevic talked about “Ultra-high energy cosmic neutrinos“. Neutrinos are stable neutral particles, so cosmic neutrinos point back to their astrophysical point source and bring to us information from processes that are otherwise obscured by material in the line of sight. Extragalactic neutrinos have large energies, and they can probe particle physics and astrophysics, since they can escape from extreme environments and they point back to the sources.

Among the sources, active galactic nuclei are the most powerful. The AGN flux is the largest below 10^{10} GeV. There is an important correlation to discover between photons and neutrinos: if photons come from the hadronic interaction p \gamma \to p \pi^0 \to p \gamma \gamma, they can be observed together with the neutrinos yielded by p \gamma \to n \pi^+, when the charged pions decay to neutrinos of the electron and muon kinds. A fraction of these may then also give tau neutrinos from oscillations. Pion-decay sources have a flavor ratio of 1:2:0 between electron, muon, and tau neutrinos, but neutrino oscillations change this ratio.
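Over astrophysical baselines the oscillations average out, and the flavor composition at Earth depends only on the moduli squared of the mixing-matrix elements. A minimal sketch of how the 1:2:0 source ratio is transformed, assuming tri-bimaximal values for |U_{\alpha i}|^2 (an assumption on my part, adequate for this averaged limit):

```python
# |U_ai|^2 in the tri-bimaximal pattern, rows = (nu_e, nu_mu, nu_tau),
# columns = mass eigenstates (1, 2, 3)
U2 = [
    [2/3, 1/3, 0.0],   # nu_e
    [1/6, 1/3, 1/2],   # nu_mu
    [1/6, 1/3, 1/2],   # nu_tau
]

def averaged_flux(initial):
    """Distance-averaged oscillated flux:
    F_b = sum_i ( sum_a F_a |U_ai|^2 ) |U_bi|^2."""
    mass_state = [sum(initial[a] * U2[a][i] for a in range(3)) for i in range(3)]
    return [sum(mass_state[i] * U2[b][i] for i in range(3)) for b in range(3)]

# A pion-decay source ratio of 1:2:0 averages to roughly 1:1:1 at Earth
print(averaged_flux([1.0, 2.0, 0.0]))
```

This is why a measured deviation from a 1:1:1 composition at a neutrino telescope would itself be interesting physics.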

Experiments are looking for muon tracks (ICECUBE, RICE), electromagnetic and hadronic showers (ICECUBE and others). To determine the energy flux that reaches the detector we need to consider propagation through earth and ice. Tau neutrinos will give different contributions from muon ones because of the short tau lifetime and the regeneration effect.

Nicolao Fornengo spoke on “Dark Matter direct and indirect detection”. We know that we have non-baryonic cold dark matter (DM), which points to extensions of the standard model (SM): new elementary particles. The evidence is manifold: the dynamics of galaxy clusters, rotation curves, weak lensing, structure formation, and the energy budget from cosmological determinations.

We are concerned with the dark matter inside galaxies. It can be made of a new class of particles, for instance WIMPs (weakly interacting massive particles). We need to know how these new particles are distributed. The galactic halo carries uncertainties: roughly, a thermal component with some round, spherical distribution. The velocity distribution can be thermal or non-thermal (for instance, related to the merging history of the galaxy).

We need to exploit specific signals to disentangle dark matter signals from backgrounds, and it is important to quantify the uncertainties on the signal if we want to do that. What we do is a multi-channel search for DM, since we have different possibilities. The first is the direct detection of DM, related to the fact that we sit inside the halo, so the particles can interact with the nuclei of a detector; the signal in this case is the nuclear recoil due to scattering. There are specific signatures: annual modulation, or the directionality of the recoil, correlated with the direction of the detector's motion through the galaxy.
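The annual modulation arises because the Earth's orbital velocity alternately adds to and subtracts from the Sun's motion through the halo. A rough sketch with illustrative numbers of my own (the inclination of the ecliptic relative to the solar motion, which reduces the modulation amplitude, is deliberately ignored):

```python
import math

V_SUN = 232.0   # km/s, solar motion through the halo (approximate)
V_ORB = 30.0    # km/s, Earth's orbital speed (approximate)
T0 = 152.5      # day of year of maximum speed (~June 2), approximate

def earth_speed(day):
    """Toy model of the Earth's speed through the DM halo, in km/s.
    The event rate in a detector tracks this speed, peaking in June
    and dipping in December."""
    return V_SUN + V_ORB * math.cos(2.0 * math.pi * (day - T0) / 365.25)
```

The resulting rate modulation is at the few-percent level, which is why the signature requires a large exposure and stable detector conditions.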

The other class of searches is typically called indirect: it relies on the self-annihilation of these particles, which produces anything allowed by the process: neutrinos, photons, or antimatter.

In annihilation to neutrinos, you can have a signal which you can correlate with the density of the galaxy, with spectral features to disentangle signal from backgrounds. For photons you can produce a line, since the annihilating particles are nearly at rest; the line is very much suppressed (for the neutralino it occurs at one-loop level), but it would be a very good signature.

You can also annihilate into matter and antimatter and produce cosmic rays. Also, you can have neutrino fluxes from the center of the Sun or the Earth. Spectral features can also be used to discriminate these signals from atmospheric neutrino backgrounds.

Let us start with direct detection. Upper limits are compared to some theoretical models. The latest CDMS upper bound is compared to results from an isothermal sphere of DM. The exclusion depends on the uncertainty on the shape of the local density and on the velocity distribution function: a spherical halo with a non-isotropic velocity distribution can give a different limit.

For neutralino masses above 50 GeV you have a realization of supersymmetry (SUSY); on the lower-mass side the model is a minimal SUSY model with an extra parameter implemented to loosen the LEP bounds on the neutralino mass. If instead we compare with the DAMA result, there are models for the neutralino and for the sneutrino which have allowed solutions in the region where there is a signal.

The question which arises is the following: is there a candidate that agrees with both DAMA and CDMS? You can have a DM candidate, the lightest sneutrino, that interacts with a nucleus producing a heavier state: the \chi_1 couples to the Z only off-diagonally. You have a kinematical bound, related to the threshold of the experiment, such that scattering is possible only if the difference in mass is smaller than a function of the masses and the velocity of the particle. For sneutrinos, the suppression factor for a germanium nucleus divided by the suppression factor for an iodine nucleus can be very different from one (germanium is the target in CDMS, iodine in DAMA), so the same point could satisfy the DAMA region and the CDMS bound.
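The kinematical bound mentioned here is the standard inelastic-scattering condition: for a given recoil energy, scattering is only possible above a minimum DM velocity that grows with the mass splitting. A sketch using the textbook kinematic formula (my own formulation, not from the talk; the example masses and splitting below are illustrative):

```python
import math

def v_min_km_s(E_R_keV, m_N_GeV, m_chi_GeV, delta_keV):
    """Minimum DM velocity (km/s) needed to produce a nuclear recoil
    of energy E_R when scattering promotes the DM particle to a state
    heavier by delta:  v_min = (m_N E_R / mu + delta) / sqrt(2 m_N E_R),
    in natural units, with mu the DM-nucleus reduced mass."""
    c = 3.0e5                       # speed of light, km/s
    m_N = m_N_GeV * 1e6             # nucleus mass in keV
    m_chi = m_chi_GeV * 1e6         # DM mass in keV
    mu = m_N * m_chi / (m_N + m_chi)
    return c * (m_N * E_R_keV / mu + delta_keV) / math.sqrt(2.0 * m_N * E_R_keV)

# Illustrative: a 100 GeV DM particle with a 100 keV splitting,
# scattering on iodine (~118 GeV) versus germanium (~68 GeV)
v_iodine = v_min_km_s(10.0, 118.0, 100.0, 100.0)
v_germanium = v_min_km_s(10.0, 68.0, 100.0, 100.0)
```

The heavier iodine target reaches a lower v_min than germanium for the same splitting, so the halo velocity distribution can feed events to DAMA while CDMS sees nothing, which is the essence of the reconciliation discussed above.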

Neutrinos from the Earth and the Sun: the idea is that particles accumulate by gravitational capture and annihilate, and the emitted neutrinos are found in neutrino telescopes. Typically these calculations were done with fluxes of \nu_\mu, but the other flavors can be obtained by oscillation as well. The correction due to oscillation for annihilations in the Earth or in the Sun is large; for the Sun, at very high energy there is absorption.

How much is the signal of upgoing muons affected, depending on the mass? If you produce Z or W bosons in the Earth you are not much affected by propagation, while in the Sun, for large masses, the flux is reduced by a large amount.

One can try to exploit the energy spectrum of atmospheric neutrinos versus that of DM annihilation.
Antimatter signals are due to the fact that particles annihilating in the halo produce antiprotons.

Backgrounds are in the disk. One can model diffusion and propagation in the galaxy and solve the diffusion equation, with a bunch of parameters to fix using cosmic-ray data. Then you can make predictions for signal and background. The predicted energy spectra show a difference, although there are large uncertainties on the signal fluxes. Better data on cosmic rays will fix the parameters.

For antiprotons, there is not much room for an excess over the background in the lowest energy bins, and not a big handle on the energy spectrum to distinguish signal from backgrounds. In this case one can only set constraints. The theoretical uncertainty on the SUSY cross sections is large, so firm exclusion of points is not possible unless you make strict assumptions on the propagation parameters.

One interesting signal is antideuterons: in the annihilation process you can produce antineutrons along with antiprotons, which in turn produce antideuterons. It is nice because you have a good handle to detect it with respect to backgrounds. The uncertainty on the background (anti-D production from standard processes) lies in the nuclear processes; for the signal the situation is the other way around: propagation gives a much wider uncertainty. Nevertheless, for antideuterons the signal and backgrounds have very different spectra: at low energy there is good discrimination (the background is suppressed in that kinematical region). There has been no detection of antideuterons in space so far, but there are proposals. An experiment called GAPS could work on a balloon flight, and its expected sensitivity could access the signal. Taking a gaugino non-universal MSSM model, the coverage in parameter space at one-event sensitivity cuts into the space of models: models not excluded by antiproton searches can predict up to 100 events for a long balloon flight.

For cosmic-ray positrons, you have production through annihilation, plus backgrounds. There are uncertainties at low energy, below 10 GeV, because of propagation. How much can you boost your signal because of clumps of DM in the halo? For positrons you do not expect a very big enhancement.

In summary, as far as direct detection goes, DAMA observe an annual modulation in their rate, which can indeed be due to a DM particle. If interpreted that way, is this compatible with a SUSY candidate? Yes: it is compatible with the neutralino (harder for minimal SUGRA), and also with the sneutrino, in models where the neutrino gets its mass through induced Majorana-like mass terms. On the other hand you have CDMS, XENON, etc., which try to reduce the background and set upper bounds. Comparing with the models, they exclude some configurations; still, in one specific example the two sets of results could work at the same time.

For indirect detection we have many possibilities. Antideuterons would be the best chance: when GAPS flies, it could exploit a strong feature at low energy, with a good chance of finally detecting a signal. Antiprotons at the moment do not show a very good spectral feature, but they can provide potentially strong bounds. Positrons may possess spectral features, but typically they require some overdensity to match the data. Gamma rays may have good spectral features, like lines; GLAST will be able to test this. Neutrinos from the Earth and the Sun could be found through their energy and angular features; one possibility could be to find the tau-neutrino component, a virtually background-free signal.

Maurizio Giannotti talked on “New physics and stellar evolution“. Stellar cooling can provide bounds that are much better than those achievable by experiments in the near future.

The golden age for astrophysics and stellar evolution was the late 1950s, when the role of weak interactions was recognized. The reference paper for neutrino physics is the 1957 one by Feynman and Gell-Mann on the V-A theory of weak interactions. Indeed, stars can provide a test of the theory of weak interactions: already in 1963, Bernstein and colleagues showed that the stellar bound on the neutrino magnetic moment was better than the experimental one at that time.

Can we use stars to test physics above the EW scale? Yes. If there is new physics (NP) at the electroweak (EW) scale, it will appear in stars. Electroweak physics enters at the fourth power of G_F, while NP will bring about a different scaling law. So we have to use other energy scales, like the temperatures of the stars or the masses involved; all of these are much smaller than the EW scale.

Orthopositronium experiments: orthopositronium is an electron-positron bound state with spin 1. It thus cannot decay to two photons; the main decay is to three photons. Its lifetime is 150 ns, longer than that of the spin-0 state. It is a clean bound state of pure leptons: no strong interaction, only electromagnetic (in fact there is a little bit of weak interaction, but its contribution is small). So one can hypothesize that one photon makes itself invisible by disappearing into extra dimensions. The goal is to measure the invisible width of orthopositronium to one part in 10^8. The partial width of orthopositronium to two neutrinos is less than 10^{-17} of the three-photon mode.

For stars, the energy loss through photon disappearance into the extra dimensions would delay the ignition of helium in the core of a red giant; the decay provides extra cooling in stars. The new energy-loss rate must not exceed the standard loss through plasmon decay by more than a factor of 2-3.
The delay of the helium flash tells us that for one extra dimension the scale of the extra dimension is k > 10^{21} TeV; for n=3 extra dimensions the bound is softer, k > 10^2 TeV.

To keep the scales in the model close to the EW scale one either needs a large number of extra dimensions, or very high scales.

A terrestrial experiment sensitive to invisible decay modes should have the following sensitivity to provide bounds on k analogous to those from red-giant stars: B < 2 \times 10^{-24+1.75n}.
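Taking the quoted scaling at face value, the required branching-ratio sensitivity can be evaluated for a few values of n (a trivial sketch of the formula above, nothing more):

```python
def required_branching_limit(n):
    """Invisible-branching-ratio sensitivity an orthopositronium
    experiment would need to match the red-giant bound, using the
    scaling quoted in the talk: B < 2 x 10^(-24 + 1.75 n)."""
    return 2.0 * 10.0 ** (-24.0 + 1.75 * n)

for n in (1, 2, 3, 4):
    print(n, required_branching_limit(n))
```

Even for n=4 the required sensitivity is around 10^{-17}, far below the 10^{-8} goal quoted above, which is the point made in the conclusions.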

In conclusion, stars offer a variety of interesting environments to test physics beyond the SM. Bounds from astrophysics can be much better than the experimental ones. The models which try to confine the photon to the brane through gravity only are severely constrained by stellar-evolution considerations. In this case, the sensitivity required of the orthopositronium experiment to provide the same bound is beyond any possibility in the near future. The result is that the number of extra compact dimensions must be 4 or greater, in order to keep the scale of the extra dimensions close to the electroweak scale.

Another blog on PPC08 talks May 20, 2008

Posted by dorigo in astronomy, Blogroll, cosmology, news, physics, science.
Tags: , ,
comments closed

Just a note while I prepare my own summary of today's talks: Mandeep Gill, an astrophysicist from Ohio State University, is also blogging about the talks we are listening to at PPC08 this week. Please check his notes here…

Highlights from the morning talks at PPC08 May 19, 2008

Posted by dorigo in astronomy, cosmology, news, physics, science.
Tags: , , , , , , ,
comments closed

The conference on the Interconnections between particle physics and cosmology, PPC2008, started this morning in the campus of the University of New Mexico. The conference features a rather relaxed, informal setting where speakers get a democratic 30′ each (plus 5′ for questions), and they do not frown at the repeated interruptions to their talks by questions from a self-forgiving audience.

This morning I listened to six talks, and I managed not to fall asleep during any of them. Quite a result, if you take into account the rather long trip I had yesterday, and the barely four hours of sleep I managed last night. This is a sign that the talks were interesting to me. Or at least that I need to justify to myself having traveled 22 hours to spend a week in a remote, desert place (sorry Carl).

Here is a list of the talks, with very brief notes (which, my non-expert readers will excuse me, I cannot translate to less cryptic lingo due to lack of time):

  • The first talk was by Eiichiro Komatsu, from Austin, who discussed the “WMAP 5-year results and their implications for inflation“. Eiichiro reviewed the mass of information that can be extracted from WMAP data, and the results one can obtain on standard cosmology from the combination of WMAP constraints with other inputs: baryon acoustic oscillations (derived from the distribution of galaxies in the universe), supernovae, HST data, and the like. He discussed the flatness of the universe (it is very flat, although not perfectly so), the level of non-gaussianity in the distribution of primordial fluctuations (things are about as gaussian as they can be), the adiabaticity relation between radiation and matter (which can be tested by cross-correlations in the power spectrum), and scale invariance (n_s is found to be smaller than one at the 2-sigma level; combined with additional input on omega_baryons, the deviation below 1 can reach 3.4 sigma).
  • Riccardo Cerulli then talked about the “Latest results from the DAMA-LIBRA collaboration“. I discussed these results in a post about a month ago, and Riccardo did not show anything I had not already seen, although his presentation was much, much better than the one I had listened to in Venice. In short, DAMA members believe their signal, which now stands out at 8.2 standard deviations, and they stand by it. Riccardo insisted on the model-independence of the result, while confronted with several questions from an audience that would not be convinced about the solidity of the analysis, and less so about the interpretation in terms of a dark matter candidate. DAMA has collected a statistics of 0.53 ton-years so far, and is still taking data. I wonder if they are after a day-night variation or what, since it does not make much sense to keep increasing a signal whose nature is (this is sure by now) of systematic origin.
  • Rupak Mahapatra talked just after Riccardo about the “First 5-tower results from CDMS”, another direct search for dark matter candidates. I also discussed the results of their work in a recent post (I am surprised to be able to say that, and rather proud of it), so I will not indulge in the details here either. Basically, they can detect both the phonons from the nuclear recoil of a WIMP in their germanium detector and the charge signal. Their detectors are disks of germanium operated at 40 millikelvin. On the phonon side there are four quadrants of athermal phonon sensors, where a small energy release from the phonon disrupts Cooper pairs and the change in resistivity is easily detected. On the charge side, two concentric electrodes give an energy measurement and veto capability. The full shebang is well shielded, with exotic materials such as old lead from 100-year-old ships fished out of the ocean (old lead is not radioactive anymore). The experiment tunes the cuts defining the signal region to accept about half an event from backgrounds. They observed zero events, and set stringent limits in the mass versus cross-section plane of a WIMP candidate. They plan to upgrade their device to a 1000 kg detector, which will make many things easier on the construction side, but which will run into non-rejectable neutron backgrounds at some point.
  • Alexei Safonov talked about the “Early physics with CMS“. Alexei discussed the plans of LHC for the years 2008 and 2009, and the results in terms of collected luminosity that we can expect for CMS and ATLAS, plus the expectations for analyses of SUSY and other searches. He was quite down-to-earth on the predictions, saying that the experiments are unlikely to produce very interesting results before the end of 2009. In 2008 we expect to collect 40 inverse picobarns of 10 TeV collisions, while in 2009 from 7 months of running starting in June the expectation is of about 2.4 inverse femtobarns. It goes without saying that it is quite likely that data collected until the end of 2009 might be insufficient even for a standard model Higgs boson discovery.
  • Teruki Kamon talked about “Measuring DM relic density at the LHC and perspectives on inflation“. He pointed to a recent paper at the beginning of his talk: hep-ph/0802.2968. Teruki took into consideration the coannihilation region of SUSY, where the small mass difference \Delta M between neutralino and stau makes the two likely to interact, creating a particular phenomenology. This region of the parameter space at high tan(beta) can be accessed by searches for tau pairs, which arise at the end of the gluino-squark decay chain. Through a measurement of tau-pair masses and endpoints, the masses of SUSY particles can be determined with 20 GeV accuracy. In particular, the ratio of gluino to neutralino masses can be measured rather well. With just 10 inverse femtobarns of data, Teruki claims one can get a very small error on the two parameters M_0 and M_{1/2}. A final plot showing the resulting constraints on \Omega_\chi versus \Delta M raised some eyebrows, because it showed an ellipse, while the model dependence on \Delta M is exponential (the suppression of the coannihilation goes as e^{-\Delta M/20}) and one would thus expect a fancier contour of constraints. In any case, if nature has chosen this bizarre manifestation, the LHC experiments are likely to measure things accurately with a relatively small bounty of data.
  • U. Oberlack was the last to talk, discussing “Dark matter searches with the XENON experiment“, another setup for direct dark matter detection. Xenon as a detector medium is interesting because it has ten isotopes, which allow sensitivity to both spin-independent and spin-dependent interactions of WIMPs with nuclei. In principle, if one detected a signal, changing the isotope mixture would make the measurement sensitive to the details of the interaction. Liquid xenon has a high atomic number, so it is self-shielding against backgrounds. The experiment is located in the Gran Sasso laboratories in Italy, and it has taken data with a small “proof of principle” setup which nevertheless allowed meaningful limits in the mass versus cross-section plane. They plan to build a much larger detector, with a ton of xenon: since they can detect the position of their signals, and have a fiducial region which is basically free of backgrounds, scaling up the detector size is an obvious improvement, since the fiducial region grows quickly. He showed a nice plot of the cross-section sensitivity of different experiments versus time, where one sees three main trends in the past, depending on the technology on which the experiments were based. Xenon as a medium appears to be producing a much better trend of sensitivity versus time, and one expects it will dominate the direct searches in the near future.
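The exponential suppression of the coannihilation contribution quoted in Teruki Kamon's talk, e^{-\Delta M/20} with \Delta M in GeV, is simple to evaluate, and shows why the eyebrow-raising ellipse looked suspicious (a one-line sketch of the quoted scaling, nothing more):

```python
import math

def coannihilation_suppression(delta_m_gev):
    """Relative suppression of the stau coannihilation contribution,
    using the exponential scaling quoted in the talk: exp(-dM / 20 GeV)."""
    return math.exp(-delta_m_gev / 20.0)
```

A 20 GeV shift in \Delta M changes the suppression by a factor e, so one would naively expect the relic-density constraint to be strongly asymmetric in \Delta M rather than elliptical.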

I will complement the above quick writeup with links to the talk slides as they become available…