
Milind Diwan: recent MINOS results April 8, 2009

Posted by dorigo in news, physics, science.

I offer below another piece of the notes I took at the NEUTEL09 conference in Venice last month. For the slides of the talk reported here, see the conference site.

Milind’s presentation concentrated on results on muon-neutrino to electron-neutrino conversions. MINOS is the “Main Injector Neutrino Oscillation Search”. It is a long-baseline experiment: the beam from the Main Injector, Fermilab’s high-intensity source of protons feeding the Tevatron accelerator, is sent from Batavia (IL) to the Soudan mine in Minnesota, 735 km away. There are actually two detectors, a near and a far detector: this is the unique feature of MINOS. The spectra collected at the two sites are compared to measure muon-neutrino disappearance and electron-neutrino appearance. The near detector is 1 km away from the target.

The beam is a horn-focused muon-neutrino beam. Horns are parabolic-shaped focusing magnets: 120 GeV protons produce pions, which are focused by these structures, while negative pions are defocused, so the beam contains predominantly muon neutrinos from the decay of the positive pions. The accelerator provides 10-microsecond pulses every 2.2 seconds, with 3.3 \times 10^{13} protons per pulse. 95% of the resulting neutrino flux is \nu_\mu, and 4% is \bar \nu_\mu.
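
As a quick back-of-the-envelope check (mine, not an official MINOS number), the quoted pulse intensity and repetition period translate into a time-averaged proton rate and beam power roughly as follows:

protons_per_pulse = 3.3e13      # protons per 10-microsecond pulse
pulse_period_s    = 2.2         # one pulse every 2.2 seconds
proton_energy_GeV = 120.0       # Main Injector proton energy
GeV_to_joule      = 1.602e-10   # 1 GeV in joules

avg_proton_rate  = protons_per_pulse / pulse_period_s               # protons per second
avg_beam_power_W = avg_proton_rate * proton_energy_GeV * GeV_to_joule

print(f"average proton rate: {avg_proton_rate:.1e} protons/s")   # ~1.5e13 per second
print(f"average beam power : {avg_beam_power_W / 1e3:.0f} kW")   # ~290 kW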

Besides the presence of two detectors in line, another unique feature of the Fermilab beam is the possibility of moving the target in and out, which shifts the spectrum of the neutrinos that come out, because the focal point of the horns changes. Two positions of the target are used, corresponding to two beam configurations: the high-energy configuration gives a beam centered at an energy of about 8 GeV, while the low-energy configuration is centered at 3 GeV. Most of the time MINOS runs with the 3 GeV beam.

The detectors are made of steel and scintillator planes: about one kiloton in the near detector and 5.4 kt in the far detector. Scintillator strips are 1 cm thick and 4.1 cm wide, and the Moliere radius is 3.7 cm. A 1-GeV muon crosses 27 planes. The iron in the detectors is magnetized with a 1 Tesla field.

MINOS event topologies include CC-like and NC-like events. A charged-current (CC) muon-neutrino event gives a muon plus hadrons: a long charged track from the muon, which is easy to find. A neutral-current (NC) event makes a diffuse splash, since all one sees is the signal from the break-up of the target nucleus. An electron-neutrino CC event leaves a dense, short shower, with a typical electromagnetic shower profile. The three processes giving rise to the observed signatures are described by the Feynman diagrams below.

The analysis challenge is to put together a selection algorithm capable of rejecting backgrounds and selecting CC \nu_e events. Fluxes are measured in the near detector, and they allow one to predict what should be found in the far detector. This minimizes the dependence on the Monte Carlo simulation, because there are too many factors that may make the real data deviate from it, and the simulation cannot possibly handle them all. They carry out a blind analysis, and check the background estimates with independent samples: this serves the purpose of avoiding a bias toward what one expects to observe. They also generate many simulated samples containing no oscillation signal, to check all analysis procedures.
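
A minimal sketch of that last point, with purely illustrative numbers: generate background-only pseudo-experiments by Poisson-fluctuating a no-signal far-detector expectation, and use them to exercise the statistical part of the analysis chain (the real procedure runs the full selection on fully simulated samples).

import numpy as np

rng = np.random.default_rng(42)

expected_background = 25.0   # illustrative no-signal expectation in the far detector
n_pseudo = 100_000           # number of background-only pseudo-experiments

# Each toy is a Poisson-fluctuated event count with no oscillation signal injected.
toys = rng.poisson(expected_background, size=n_pseudo)

# Example use: how often would background alone fluctuate up to 33 or more events?
print(f"P(N >= 33 | background only) = {np.mean(toys >= 33):.3f}")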

Basic cuts are applied to the data sample to ensure data quality, and fiducial-volume cuts reject cosmic-ray backgrounds. These simple cuts lead to a S/N ratio of 1:12, where by “signal” one means the appearance of electron neutrinos. \nu_e events are then selected with artificial Neural Networks, which use the properties of the shower, the lateral shower spread, etcetera; these can discriminate NC interactions from electron-neutrino-induced CC interactions. After the application of the algorithm, the S/N ratio is 1:4. At this stage, one important remaining source of background is due to muon-neutrino CC events, which can be mistaken for electron-neutrino ones when the muon is not seen in the detector.
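
For concreteness, here is a hedged sketch of this kind of neural-network selection, written with scikit-learn on invented shower-shape variables; the features, the network architecture, and the cut value are illustrative stand-ins, not the actual MINOS ANN.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy "shower property" features: shower length, lateral spread, core energy fraction.
n = 20_000
is_signal = rng.integers(0, 2, n)                    # 1 = nu_e CC, 0 = NC background
length = rng.normal(10 - 3 * is_signal, 2.0, n)      # nu_e showers are shorter
spread = rng.normal(5 - 2 * is_signal, 1.5, n)       # and laterally tighter
core   = rng.normal(0.5 + 0.3 * is_signal, 0.1, n)   # with a denser core
X = np.column_stack([length, spread, core])

X_tr, X_te, y_tr, y_te = train_test_split(X, is_signal, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)

# Cut on the network output to enhance the nu_e CC fraction of the sample.
scores = clf.predict_proba(X_te)[:, 1]
selected = scores > 0.7
print(f"selected {selected.sum()} of {len(X_te)} test events")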

Alternatively, they can select \nu_e events with “library event matching” (LEM). This matches the data event to a library of simulated showers, and computes the fraction of the best 50 matches which are electron-neutrino events. It is more or less an evolved “nearest-neighbor” algorithm, and it yields a better separation. However, according to the speaker this method is not ready yet, since they still need to understand its details better.
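
A minimal sketch of the LEM idea, under the simplifying assumption that events can be compared through a few summary features; the real algorithm matches hit-level patterns against a large library of simulated showers, so everything below is illustrative.

import numpy as np

def lem_discriminant(event, library_features, library_is_nue, n_best=50):
    """Fraction of nu_e events among the n_best closest library showers."""
    distances = np.linalg.norm(library_features - event, axis=1)
    best = np.argsort(distances)[:n_best]
    return library_is_nue[best].mean()

# Illustrative library: 100k simulated showers described by 3 summary features each.
rng = np.random.default_rng(1)
library_features = rng.normal(size=(100_000, 3))
library_is_nue   = rng.integers(0, 2, 100_000).astype(float)   # 1 if the library shower is a nu_e

candidate = rng.normal(size=3)   # a toy data event
print(f"LEM-style discriminant: {lem_discriminant(candidate, library_features, library_is_nue):.2f}")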

[As I was taking these notes, I observed that data and Monte Carlo simulation do not match well in the low-ANN-output region. The speaker claims that the fraction of events in the tail of the Monte Carlo distribution can be modeled only approximately, but that they do not need to model that region very well for their result. However, it looks as if the discrepancy between data and MC is not well understood. Please refer to the graph shown below, which shows the NN output in data and simulation at preselection level.]

Back to the presentation. To obtain a result, the calculation they perform is simple: how many events are expected in the far detector? The ratio of the far to the near flux is known, 1.3 \times 10^{-6}, and it includes all geometrical factors. For this analysis they have 3 \times 10^{20} protons on target. They expect 27 events for the ANN selection, and 22 for the LEM analysis.
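
The arithmetic of the prediction can be sketched as below; note that the near-detector rate is a placeholder I chose so that the example lands near the quoted 27 events, not a MINOS number.

far_over_near_flux = 1.3e-6     # quoted far/near flux ratio, geometrical factors included
protons_on_target  = 3e20       # exposure used for this analysis

# Hypothetical near-detector background rate per proton on target (placeholder value):
near_events_per_pot = 6.9e-14

predicted_far_events = near_events_per_pot * protons_on_target * far_over_near_flux
print(f"predicted far-detector background: {predicted_far_events:.0f} events")   # ~27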

They need to separate the backgrounds into NC and CC components, so they use a trick: they take data in two different beam configurations, then look at the spectrum in the near detector, where they expect muon-type events to be rejected much more easily because they are more deflected. From this they can separate the two contributions.
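
A hedged sketch of the trick: if the simulation provides how each background component scales between the two beam configurations, the two measured totals in the near detector give a two-by-two linear system for the NC and \nu_\mu-CC components. All numbers below are invented for illustration.

import numpy as np

# Rows: beam configurations (low-energy, high-energy).
# Columns: relative scaling of each component with respect to the low-energy beam
# (taken from simulation in the real analysis; invented here).
scaling = np.array([
    [1.0, 1.0],   # low-energy beam: NC, nu_mu-CC weights
    [1.8, 0.6],   # high-energy beam: NC grows, selected nu_mu-CC shrinks
])

measured_totals = np.array([1000.0, 1560.0])   # selected events in each beam configuration

nc_events, cc_events = np.linalg.solve(scaling, measured_totals)
print(f"NC component: {nc_events:.0f} events, nu_mu-CC component: {cc_events:.0f} events")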

Their final result for the predicted number of \nu_e CC candidate events is 27 \pm 5 (stat) \pm 2 (syst).

A second check on the background calculation consists in removing the muon in tagged \nu_\mu CC events and using these events for two different calculations. One is an independent background calculation; the other adds a simulated electron to the raw data of the muon-removed event, which checks whether the signal is modeled correctly. From these studies they conclude that the signal is modeled well.

The results show that there is indeed a small signal: they observe 35 events, when they expect 27, in the high-NN output region, as shown in the figure above. For the other method, LEM, results are consistent. The energy spectrum of the selected events is shown in the graph below.
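
As a rough illustration of why this is called a small excess, here is a crude Gaussian estimate (mine, not the collaboration's statistical treatment) that combines the quoted uncertainties on the prediction with the Poisson fluctuation of the observed count:

import math

observed = 35
expected = 27.0
stat_unc = 5.0    # quoted statistical uncertainty on the prediction
syst_unc = 2.0    # quoted systematic uncertainty on the prediction

# Add the Poisson variance of the observation to the prediction uncertainties in quadrature.
total_unc = math.sqrt(expected + stat_unc**2 + syst_unc**2)
z = (observed - expected) / total_unc
print(f"excess of {observed - expected:.0f} events, roughly {z:.1f} sigma in this naive estimate")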

With the observation of this small excess (which is compatible with the prediction), a 90% confidence-level limit is set in the plane of the two parameters \sin^2 2 \theta_{13} versus \delta_{CP}. The limit reaches up to about 0.35 in \sin^2 2 \theta_{13}, with a small oscillation depending on the value of \delta. You can see it in the figure on the right below.

The speaker claims that if the excess they are observing disappears with further accumulated data, they will be able to reach below the existing bound.

The other result from MINOS is a \Delta m^2 measurement from disappearance studies. The signal amounts to a deficit of several hundred events. They can also put a limit on an empirical parameter which determines what fraction of the initial flux has gone into sterile neutrinos: with 6.6 \times 10^{20} protons on target now collected, the fraction of sterile neutrinos is less than 0.68 at 90% CL.


Comments

1. Daniel de França MTd2 - April 8, 2009

So, do these results impose boundaries on what theories?

3. Michael Schmitt - April 9, 2009

Good call on the data / MC comparison, Tommaso. I agree that it is odd to place significance on the upper end of the distribution when the lower end does not look so good. This is a difficult analysis – there is not much to establish a result – so they might choose to be a bit more ‘perfectionistic.’

4. dorigo - April 10, 2009

Hi Daniel,

No, these results do not constrain the parameter space enough… But they are a step in the right direction.

Hi Michael,

yes, it looks odd that they do not show they understand the very distribution from which they extract their signal. I guess there is more in the analysis justifying their procedures, which the speaker could not convey during the talk.

Cheers,
T.

