Milind Diwan: recent MINOS results April 8, 2009
Posted by dorigo in news, physics, science.Tags: minos, neutrino, neutrino experiments, neutrino oscillations, Sterile neutrinos
comments closed
I offer below another piece of the notes I took at the NEUTEL09 conference in Venice last month. For the slides of the talk reported here, see the conference site.
Milind's presentation concentrated on results on muon-neutrino to electron-neutrino conversions. MINOS stands for "Main Injector Neutrino Oscillation Search". It is a long-baseline experiment: the beam from the Main Injector, Fermilab's high-intensity proton source (which also feeds the Tevatron accelerator), is sent from Batavia (IL) to the Soudan mine in Minnesota, 735 km away. There are actually two detectors, a near and a far detector: this is the unique feature of MINOS. The spectra collected at the two sites are compared to measure muon-neutrino disappearance and electron-neutrino appearance. The near detector is 1 km away from the target.
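For orientation, the relevant two-flavor survival probability, in the usual units, is

$P(\nu_\mu \to \nu_\mu) = 1 - \sin^2(2\theta_{23}) \, \sin^2\!\left( \frac{1.27\, \Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]} \right).$

Plugging in the atmospheric splitting $\Delta m^2 \approx 2.4 \times 10^{-3}$ eV$^2$ (my number, not from the talk), $L = 735$ km and $E = 3$ GeV gives an oscillation phase of about 0.75 radians, i.e. a sizable suppression of the muon-neutrino flux; the first oscillation maximum for this baseline lies near 1.4 GeV.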
The beam is a horn-focused muon-neutrino beam. Horns are parabolic-shaped magnets: 120 GeV protons hitting the target produce pions, which are focused by these structures (negative ones are defocused), so the beam contains predominantly muon neutrinos from the decay of these pions. The accelerator provides 10-microsecond pulses every 2.2 seconds. 95% of the resulting neutrino flux is $\nu_\mu$, 4% is $\bar{\nu}_\mu$.
Besides the presence of two detectors in line, another unique feature of the Fermilab beam is the possibility of moving the target in and out, shifting the spectrum of the neutrinos that come out, because the focal point of the horns changes. Two positions of the target are used, corresponding to two beam configurations: in the high-energy configuration the beam is centered at an energy of 8 GeV or so, while the low-energy configuration is centered at 3 GeV. Most of the time MINOS runs with the 3 GeV beam.
The detectors are made of steel and scintillator planes: about one kiloton's worth in the near detector, and 5.4 kT in the far detector. Scintillator strips are 1 cm thick and 4.1 cm wide, and the Molière radius is 3.7 cm. A 1 GeV muon crosses 27 planes. The iron in the detectors is magnetized with a 1 Tesla field.
MINOS event topologies include CC-like and NC-like events. A charged-current (CC) muon-neutrino event gives a muon plus hadrons: a long charged track from the muon, which is easy to find. A neutral-current (NC) event makes a diffuse splash, since all one sees is the signal from the break-up of the target nucleus; an electron CC event leaves a dense, short shower, with a typical electromagnetic shower profile. The three processes giving rise to the observed signatures are described by the Feynman diagrams below.
The analysis challenge is to put together a selection algorithm capable of rejecting backgrounds and selecting CC events. Fluxes are measured in the near detector, and they allow one to predict what should be found in the far detector. This minimizes the dependence on Monte Carlo simulation: there are too many factors that may cause fluctuations in the real data, and the simulation cannot possibly handle them all. They carry out a blind analysis, and check background estimates with independent samples, to avoid biasing themselves with what they expect to observe. They also generate many simulated samples containing no oscillation signal, to check all the analysis procedures.
Basic cuts are applied to the data sample to ensure data quality, and fiducial-volume cuts reject cosmic-ray backgrounds. These simple cuts lead to a S/N ratio of 1:12, where by "signal" one means the appearance of electron neutrinos. $\nu_e$ candidate events are then selected with artificial neural networks, which use the properties of the shower, its lateral spread, etcetera, to discriminate NC interactions from electron-neutrino-induced CC interactions. After the application of the algorithm, the S/N ratio is 1/4. At this stage, one important remaining source of background is due to muon-neutrino CC events, which can be mistaken for electron-neutrino ones when the muon is not seen in the detector.
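To fix ideas, here is a minimal sketch of this kind of neural-network selection, trained on two toy shower-shape variables; the features, the numbers and the network configuration are my own invention for illustration, not MINOS code.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy shower-shape variables (hypothetical stand-ins for the real ones):
# shower length and lateral spread, in arbitrary units.
n = 2000
sig = np.column_stack([rng.normal(8, 2, n), rng.normal(2, 0.5, n)])    # nu_e CC: short, dense showers
bkg = np.column_stack([rng.normal(12, 4, n), rng.normal(4, 1.5, n)])   # NC: longer, more diffuse
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

# A small feed-forward network, in the spirit of the ANN selection described.
ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
ann.fit(X, y)

# Keep events above a cut on the network output.
score = ann.predict_proba(X)[:, 1]
selected = score > 0.7
print(f"signal efficiency:     {selected[:n].mean():.2f}")
print(f"background acceptance: {selected[n:].mean():.2f}")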
They can also select events with a "library event matching" (LEM) technique: the data event is matched against a shower library, and one computes the fraction of the best 50 matches which are electron-neutrino events. This is more or less an evolved "nearest-neighbor" algorithm, and it yields a better separation. However, according to the speaker this method is not ready yet, since they still need to understand its details better.
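Purely as an illustration, the matching logic might be sketched as below, with a plain Euclidean distance between feature vectors standing in for the real event-by-event hit-pattern comparison; the library contents are made up.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical shower library: one feature vector per library event, plus a
# flag recording whether it is an electron-neutrino CC interaction.
library = rng.normal(size=(10000, 2))
is_nue = rng.random(10000) < 0.3

def lem_score(event, library, is_nue, n_best=50):
    """Fraction of the n_best closest library matches that are nu_e CC events."""
    distances = np.linalg.norm(library - event, axis=1)
    best = np.argsort(distances)[:n_best]
    return is_nue[best].mean()

print(lem_score(np.array([0.1, -0.2]), library, is_nue))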
[As I was taking these notes, I observed that data and Monte Carlo simulation do not match well in the low-ANN-output region. The speaker claims that the fraction of events in the tail of the Monte Carlo distribution can be modeled only with some approximation, but that they do not need to model that region too well for their result. However, it looks as if the discrepancy between data and MC is not well understood. Please refer to the graph below, which shows the NN output in data and simulation at preselection level.]
Back to the presentation. To obtain a result, the calculation they perform is simple: how many events are expected in the far detector? The ratio of far to near flux is known, $1.3 \times 10^{-6}$, and includes all geometrical factors. For this analysis they have $3 \times 10^{20}$ protons on target. They expect 27 events with the ANN selection, and 22 with the LEM one.
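Schematically, the far-detector prediction is just a rescaling of the near-detector measurement. In the toy arithmetic below only the flux ratio comes from the talk; the near-detector count is a placeholder I chose to reproduce the quoted expectation, and the real analysis extrapolates bin-by-bin in energy, treating NC and CC components separately.

far_over_near = 1.3e-6   # far/near flux ratio quoted in the talk

# Hypothetical number of selected background-like events in the near
# detector for this exposure (placeholder, not a MINOS number).
n_near = 2.1e7

n_far_expected = n_near * far_over_near
print(f"expected far-detector events: {n_far_expected:.0f}")   # about 27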
They need to separate the backgrounds into NC and CC components, so they use a trick: they take data in the two different beam configurations and compare the spectra in the near detector, where muon-type events are expected to be rejected much more easily in one configuration, because the muons are more deflected. From this they can separate the two contributions.
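This decomposition can be pictured as solving a 2x2 linear system: each beam configuration measures a different known mixture of the NC and mis-identified CC components. All numbers below are invented for illustration.

import numpy as np

# Rows: the two beam configurations. Columns: relative acceptance for the
# NC-like and mis-identified nu_mu CC backgrounds (hypothetical coefficients).
A = np.array([[1.0, 1.0],
              [0.9, 0.3]])   # CC events suppressed more in the second configuration

# Background-like counts measured in the near detector (made-up numbers).
counts = np.array([1000.0, 690.0])

nc, cc = np.linalg.solve(A, counts)
print(f"NC component: {nc:.0f}, CC component: {cc:.0f}")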
Their final result for the predicted number of electron-induced CC events is 27 ± 5 (stat) ± 2 (syst).
A second check of the background calculation consists in removing the muon from tagged CC events, and using the remaining hadronic showers for two different studies. One is an independent background calculation; in the other, they add a simulated electron to the muon-removed raw data, to check whether the signal is modeled correctly. From these studies they conclude that it is.
The results show that there is indeed a small signal: they observe 35 events where 27 are expected, in the high-NN-output region, as shown in the figure above. The results of the other method, LEM, are consistent. The energy spectrum of the selected events is shown in the graph below.
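As a back-of-envelope check of the size of the excess (my own quick estimate, not the collaboration's statistical treatment), one can fold the Poisson fluctuation of the expectation with the quoted uncertainties on the prediction:

import math

n_obs = 35
n_exp = 27.0
stat, syst = 5.0, 2.0   # uncertainties quoted on the background prediction

# Poisson fluctuation of the expectation plus prediction errors, in quadrature.
sigma = math.sqrt(n_exp + stat**2 + syst**2)
print(f"naive excess significance: {(n_obs - n_exp) / sigma:.1f} sigma")   # about 1.1

So the excess is at the one-sigma level, consistent with calling it small and compatible with predictions.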
With the observation of this small excess (which is compatible with predictions), a 90% confidence-level contour is set in the plane of $\sin^2(2\theta_{13})$ versus the CP-violating phase $\delta$. The limit goes up to 0.35, with a small oscillation dependent on the value of $\delta$. You can see it in the figure on the right below.
The speaker claims that if the excess they are observing disappears with further accumulated data, they will be able to reach below the existing bound.
The other result of MINOS comes from muon-neutrino disappearance studies, where the signal amounts to a deficit of several hundred events. They can put a limit on an empirical parameter which determines what fraction of the initial flux has gone into sterile neutrinos. With the $6.6 \times 10^{20}$ protons on target taken so far, the fraction of sterile neutrinos is found to be less than 0.68 at 90% CL.
Alexander Kusenko on sterile neutrinos, and the other afternoon talks May 20, 2008
Posted by dorigo in astronomy, cosmology, news, physics, science.Tags: cepheids, dark energy, pulsars, Sterile neutrinos, supernovae
comments closed
Still many interesting talks in the afternoon session at PPC08 today, and I continued the unprecedented record of NOT dozing during the talks. Maybe I am growing old.
Alexander Kusenko was the first to speak, and he discussed "The dark side of light fermions: sterile neutrinos as dark matter". I would love to be able to report in detail the contents of his talk, which was enlightening, but I would do a very poor job, because I tried to understand it by following it with undivided attention rather than by taking notes (core duo upgrade to grey matter, anyone?). I failed miserably; what I did get, and took notes about, is the fact that sterile neutrinos can explain the velocity distribution of pulsars.
Before I get to that, however, let me tell you how I found myself grinning during Kusenko's talk. He discussed at some point the fact that by making the Majorana mass large one can obtain small neutrino masses with Yukawa couplings of order unity. He pointed out that all quarks have y<<1 except the top quark: if these couplings come from string theory, they have a reason to be all of order unity, while in extra-dimension kinds of models they would typically be exponentially small (as a function of the scale of the extra dimension). So, concluded Alexander, "both options are equally likely". I found that sentence a daring stretch (and attached a "UN" before "likely" in my notes)! He appeared to be inferring some freedom in the phenomenology of neutrinos from the fact that string theory and large-extra-dimension models make conflicting predictions… Tsk tsk.
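For context, the statement refers to the type-I seesaw relation, by which a light neutrino mass arises from a Dirac mass $m_D = y v$ suppressed by the heavy Majorana scale $M$:

$m_\nu \approx \frac{(y v)^2}{M},$

with $v \approx 174$ GeV. For $y \sim 1$ and $m_\nu \sim 0.05$ eV this puts $M$ near $10^{15}$ GeV, while exponentially small Yukawas allow correspondingly lighter right-handed states.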
Anyway, about pulsars. Pulsars are rapidly rotating, magnetized neutron stars produced in supernova explosions. They have been known to have a velocity spectrum which far exceeds that of other stars, by about an order of magnitude. Proposed explanations of this phenomenon do not work: for example, the explosion itself does not seem to have enough asymmetry. There is a lot of energy in the explosion, but 99% of it escapes with neutrinos: if these escaped anisotropically, they could propel the pulsar. Through reactions with polarized electrons one can easily get 10% asymmetries in the neutrino production in the core of an imploding supernova, while one needs just 1% to explain the velocity distribution; the asymmetry, however, is washed out as the neutrinos rescatter inside the star. Sterile neutrinos, instead, interact much more weakly than ordinary ones and escape without rescattering: if 10% of the neutrinos went into sterile states, carrying the 10% production asymmetry with them, the resulting net 1% anisotropy would explain the pulsar velocities.
Alexander pointed out that such sterile neutrinos must have lifetimes larger than the age of the universe, but they can decay radiatively, producing photons of energy m/2. So concentrations of sterile-neutrino dark matter emit X-rays.
In summary, we have introduced additional degrees of freedom in our particle model when we discovered neutrino masses: this implies the need for right-handed neutrinos. Usually one sets a large Majorana mass and hides these right-handed states at a very high mass scale, but if one neutrino remains at low mass, it can provide a good dark matter candidate. It could play a role in the baryon asymmetry of the universe, and explain pulsar velocities. X-ray telescopes could discover it and, if it is discovered, the line from relic sterile-neutrino decays can be used to map out the redshift distribution of dark matter. This could potentially be used, in an optimistic future, to study the structure and the expansion history of the Universe.
After Kusenko's talk, A. Riess discussed the "implications for dark energy of supernovae with redshift larger than one". With the Hubble Space Telescope they made a systematic search for supernovae in a small region of sky, and found 50 in three years, 25 of them with z>1. These allowed an improved constraint on the dark energy equation of state, w = -1.06 ± 0.10. A paper is in preparation. I am quite sorry to have no chance of reporting the very interesting discussion Riess made of distance indicators: cepheids, the Keplerian motion of masers around the central black hole in NGC4258 (which anchors the distance of this galaxy and allows relative distances to become absolute measurements), and parallax measurements made on cepheids. His talk was very instructive, but my brain does not work well tonight…
Dragan Huterer discussed "A decision tree for dark energy". In two words, he proposes to start from the simple $\Lambda$CDM model and try more complex constructions incrementally, starting with the extensions that have more predictive power. He also discussed a generic figure of merit for measurements, basically defined as the inverse of the area of the 95% CL region in the plane of the two dark-energy equation-of-state parameters. Apologies here too for having to cut this summary short…
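If I understood correctly, this is the figure of merit in the style of the Dark Energy Task Force: with the equation of state parametrized as $w(a) = w_0 + w_a (1 - a)$, one defines

$\mathrm{FoM} = \left[\, \text{area of the 95% CL region in the } (w_0, w_a) \text{ plane} \,\right]^{-1},$

so that tighter constraints translate into a larger figure of merit.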
Mark Trodden concluded the afternoon talks by discussing "Cosmic Acceleration and Modified Gravity". Mark is a blogger, and so I feel no shame in saying he should report about his talk rather than having me do it. However, I found very interesting a note he made on extrapolations of Newtonian mechanics working or not working. When observed perturbations of the orbit of Uranus pointed to the existence of an outer planet, Neptune, the prediction was on solid ground: as an effective theory, you expect Newtonian mechanics to work well at larger radii. On the other hand, when Mercury's precession was hypothesized to come from the perturbing effects of an inner planet, the answer turned out to be wrong: there, Newtonian mechanics did break down, and general relativity showed its effects. The lesson for modified gravity models is clear.
The session was concluded by a panel led by Kusenko, where some of the speakers were asked a few questions. The first was: "If you had a billion dollars, how would you spend it to measure cosmological parameters?"
A. Riess said he would do the easiest things first: before launching into space, one has to look at the things that are easy and less expensive, and he sees a lot of room there. Space has advantages (stability of conditions) but it is hard to get there. We want measurements at the few-percent level of a few cosmological parameters: maybe we can do that now from the ground, or with instruments that are already in space. He stated the necessity of being more creative in using the resources we already have: all of the small but significant improvements that are possible matter.
I gave my own answer by grabbing the microphone after a few of the speakers had had their say. I said we do not need billions of dollars: there is, in fact, a lot we can do on the ground with much smaller amounts of money, without spending huge sums on space experiments. CP-violation experiments can be improved with limited budgets, and low-energy hadronic and nuclear cross sections, which are quite important for the understanding of the early universe, can also be measured cheaply. The LHC did cost a few billion dollars, and it will maybe give us an answer about dark matter; but in general particle physics, even direct dark-matter searches, requires smaller budgets, and the payoff may be large.