Latest global fits to SM observables: the situation in March 2009 (March 25, 2009)

Posted by dorigo in news, physics, science.

A recent discussion in this blog between well-known theorists and phenomenologists, centered on the real meaning of the experimental measurements of top quark and W boson masses, Higgs boson cross-section limits, and other SM observables, convinces me that some clarification is needed.

The work has been done for us: there are groups that do exactly that, i.e. they regularly update their global fits to express the internal consistency of all those measurements, and the implications for the search for the Higgs boson. So let me go through the most important graphs below, after mentioning that most of the material comes from the LEP electroweak working group web site.

First of all, what goes in the soup? Many things, but most notably, the LEP I/SLD measurements at the Z pole, the top quark mass measurements by CDF and DZERO, and the W mass measurements by CDF, DZERO, and LEP II. Let us have a look at the mass measurements, which have recently been updated.

For the top mass, the situation is the one pictured in the graph shown below. As you can clearly see, the CDF and DZERO measurements have reached a combined precision of 0.75% on this quantity.

The world average is now at $M_t = 173.1 \pm 1.3$ GeV. I am amazed to see that the first estimate of the top mass, made with a handful of events published by CDF in 1994 (a set which did not even provide a conclusive “observation-level” significance at the time), was so dead-on: the measurement back then was $M_t = 174 \pm 15$ GeV! (For comparison, the DZERO measurement of 1995, in their “observation” paper, was $M_t = 199 \pm 30$ GeV.)
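For the curious, the arithmetic at the core of such combinations is simple: each measurement is weighted by the inverse of its squared uncertainty. Below is a minimal sketch in Python, with invented stand-in numbers; the real Tevatron combination of course also treats correlated systematic uncertainties, which this toy ignores.

```python
# Minimal sketch of an inverse-variance weighted average, the core
# ingredient of a combination like the Tevatron top-mass average.
# The input numbers are illustrative stand-ins, not the real inputs.

measurements = [
    (172.6, 1.6),  # hypothetical CDF-like mass and total error, in GeV
    (174.0, 2.1),  # hypothetical DZERO-like mass and total error, in GeV
]

weights = [1.0 / err**2 for _, err in measurements]
average = sum(w * m for (m, _), w in zip(measurements, weights)) / sum(weights)
error = (1.0 / sum(weights)) ** 0.5

print(f"weighted average: {average:.1f} +- {error:.1f} GeV")
```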

As far as global fits are concerned, there is one additional point to make for the top quark: knowing the top mass any better than this has by now become useless. You can see it by comparing the constraints on $M_t$ coming from the indirect measurements and W mass measurements (shown by the blue bars at the bottom of the graph above) with the direct measurements at the Tevatron (shown with the green band). The green band is already much narrower than the blue error bars: the SM fit no longer cares much where exactly the top mass is.

Then, let us look at the W mass determinations. Note: the graph below shows the situation BEFORE the latest DZERO result, which was obtained with 1/fb of data and finds $M_W = 80401 \pm 44$ MeV; its inclusion would not change much of the discussion below, but it is important to stress it.

Here the situation is different: a better measurement would still increase the precision of our comparisons with indirect information from electroweak measurements at the Z. This is apparent from the fact that the blue bars are still narrower than the world average of direct measurements (again in green). Narrow the green band, and you can still collect interesting information on its consistency with the blue points.

Finally, let us look at the global fit, which the LEP electroweak working group displays in the by-now famous “blue band plot”, shown below in its version for the March 2009 conferences. It shows the constraints on the Higgs boson mass coming from all experimental inputs combined, assuming that the Standard Model holds.

I will not discuss this graph in detail, since I have done so repeatedly in the past. I will just mention that the yellow regions have been excluded by direct searches for the Higgs boson at LEP II (the wide yellow area on the left) and at the Tevatron (the narrow strip on the right). From the plot you should just gather that a light Higgs mass is preferred (the central value being 90 GeV, with +36 and -27 GeV one-sigma error bars). Also, a 95% confidence-level exclusion of masses above 163 GeV is implied by the variation of the global fit $\chi^2$ with Higgs mass.

I have started to be a bit bored by this plot, because it does not do the best job for me. For one thing, the LEP II limit and the Tevatron limit on the Higgs mass are treated as if they were equivalent in their strength, something which could not possibly be farther from the truth. The truth is, the LEP II limit is a very strong one -the probability that the Higgs has a mass below 112 GeV, say, is one in a billion or so- while the limit obtained recently by the Tevatron is just an “indication”, because the excluded region (160 to 170 GeV) is not excluded strongly: there is still a one-in-twenty chance or so that the real Higgs boson mass indeed lies there.

Another thing I do not particularly like in the graph is that it attempts to pack in too much information: variations of $\alpha$, inclusion of low-$Q^2$ data, etcetera. A much better graph to look at is the one produced by the GFitter group instead. It is shown below.

In this plot, the direct search results are introduced with their actual measured probability of exclusion as a function of Higgs mass, and not just in a binary yes/no manner, as the yellow regions of the blue band plot are. And in fact, you can see that the LEP II limit is a brick wall, while the Tevatron exclusion acts like a smooth increase in the global $\chi^2$ of the fit.

From the black curve in the graph you can get a lot of information. For instance, the most likely values -those contained in the 1-sigma interval of the fit- are masses in the range 114-132 GeV. At two sigma, the Higgs mass is instead within the interval 114-152 GeV, and at three sigma it extends a little into the Tevatron-excluded band, 114-163 GeV, with a second region allowed between 181 and 224 GeV.
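If you wonder where those intervals come from: for a fit with one free parameter, the 1-, 2-, and 3-sigma regions are read off the curve where $\Delta \chi^2$ crosses 1, 4, and 9, and a one-sided 95% CL exclusion corresponds to $\Delta \chi^2 \simeq 2.7$. The little Python sketch below makes the correspondence explicit on a toy parabolic $\chi^2$ curve -the real GFitter curve is of course asymmetric and includes the direct-search information.

```python
# Reading confidence intervals off a Delta-chi^2 curve (one parameter).
# The toy parabola below is a stand-in for the real, asymmetric curve.
from scipy.stats import chi2, norm
from scipy.optimize import brentq

for n_sigma in (1, 2, 3):
    cl = 2 * norm.cdf(n_sigma) - 1  # two-sided coverage of an n-sigma interval
    print(f"{n_sigma} sigma: CL = {cl:.4f}, Delta-chi2 = {chi2.ppf(cl, df=1):.2f}")

# toy chi^2 with a minimum at 120 GeV and a 15 GeV "1-sigma" half-width
toy_chi2 = lambda m: ((m - 120.0) / 15.0) ** 2
for n_sigma in (1, 2, 3):
    upper = brentq(lambda m: toy_chi2(m) - n_sigma**2, 120.0, 300.0)
    print(f"{n_sigma}-sigma upper edge of the toy curve: {upper:.0f} GeV")
```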

In conclusion, I would like you to take away the following few points:

• Future indirect constraints on the Higgs boson mass will only come from increased precision measurements of the W boson mass, while the top quark has exhausted its discrimination power;
• Global SM fits show an overall very good consistency: there does not seem to be much tension between fits and experimental constraints;
• The Higgs boson is most likely in the 114-132 GeV range (1-sigma bounds from global fits).

Zooming in on the Higgs (March 24, 2009)

Posted by dorigo in news, physics, science.

Yesterday Sven Heinemeyer kindly provided me with an updated version of a plot which best describes the experimental constraints on the Higgs boson mass, coming from electroweak observables measured at LEP and SLD, and from the most recent measurements of W boson and top quark masses. It is shown on the right (click to get the full-sized version).

The graph is quite a busy one, but below I will try to explain everything one bit at a time, hopefully keeping things simple enough that a non-physicist can understand it.

The axes show suitable ranges of values of the top quark mass (varying on the horizontal axis) and of the W boson mass (on the vertical axis). The values of these quantities are functionally dependent (because of quantum effects connected to the propagation of the particles and their interaction with the Higgs field) on the Higgs boson mass.

The dependence, however, is really “soft”: if you were to double the Higgs mass from its true value, the effect on the top and W masses would only be of the order of 1% or less (the dependence on the Higgs mass is in fact only logarithmic). Because of that, only recently have the determinations of top quark and W boson masses started to provide meaningful inputs for a guess of the mass of the Higgs.

Top mass and W mass measurements are plotted in the graph in the form of ellipses encompassing the most likely values: their size is such that the true masses should lie within their boundaries 68% of the time. The red ellipse shows the CDF results, and the blue one shows the DZERO results.

There is a third measurement of the W mass shown in the plot: it is displayed as a horizontal band limited by two black lines, and it comes from the LEP II measurements. The band also encompasses the 68% most likely W masses, as the ellipses do.

In addition to W and top masses, other experimental results constrain the masses of the top, the W, and the Higgs boson. The most stringent of these results are those coming from the LEP experiments at CERN, from detailed analysis of electroweak interactions studied in the production of Z bosons. A wide band crossing the graph from left to right, with a small tilt, encompasses the most likely region for top and W masses.

So far we have described measurements. Then, there are two different physical models one should consider in order to link those measurements to the Higgs mass. The first one is the Standard Model: it dictates precisely the inter-dependence of all the parameters mentioned above. Because of the precise SM predictions, for any choice of the Higgs boson mass one can draw a curve in the top mass versus W mass plane. However, in the graph a full band is hatched instead. This corresponds to allowing the Higgs boson mass to vary from a minimum of 114 GeV to 400 GeV: 114 GeV is the lower limit on the Higgs boson mass found by the LEP II experiments in their direct searches, using electron-positron collisions, while 400 GeV is just a reference value.

The boundaries of the red region show the functional dependence of the Higgs mass on the top and W masses: an increase of the top mass, for fixed W mass, results in an increase of the Higgs mass, as is clear by starting from the 114 GeV upper boundary of the red region, since one would then move into the region, toward higher Higgs masses. On the contrary, for a fixed top mass, an increase in W boson mass results in a decrease of the Higgs mass predicted by the Standard Model. Also note that the red region includes a narrow band which has been left white: it is the region corresponding to Higgs masses varying between 160 and 170 GeV, the masses that direct searches at the Tevatron have excluded at 95% confidence level.

The second area, hatched in green, does not show a single model's predictions, but rather a range of values allowed by arbitrarily varying many of the parameters describing the supersymmetric extension of the SM called “MSSM”, its “minimal” version. Even in the minimal extension there are about a hundred additional parameters introduced in the theory, and the values of a few of those modify the interconnection between top mass and W mass in a way that makes direct functional dependencies impossible to draw in the graph. Still, the hatched green region shows a “possible range of values” of the top quark and W boson masses. The arrow pointing down simply describes what is expected for the W and top masses if the mass of the supersymmetric particles is increased from values barely above present exclusion limits to very high values.

So, to summarize, what should one get from the plot? I think the graph describes many things in one single package, and it is not easy to get the right message from it alone. Here is a short commentary, in bits.

• All experimental results are consistent with each other (but here, I should add, a result from NuTeV which finds the W mass indirectly from the measured ratio of neutral-current and charged-current neutrino interactions is not shown);
• Results point to a small patch of the plane, consistent with a light Higgs boson if the Standard Model holds;
• The lower part of the MSSM allowed region is favored, pointing to heavy supersymmetric particles if that theory holds;
• Among experimental determinations, the most constraining are those of the top mass; but once the top mass is known to within a few GeV, it is the W mass that tells us more about the unknown mass of the Higgs boson;
• One point to note when comparing measurements from LEP II and the Tevatron experiments: when one draws a 2-D ellipse with a 68% contour, this compares unfavourably to a band, which encompasses the same probability in a 1-D distribution. This is clear if one compares the actual measurements: CDF $80413 \pm 48$ MeV (with 200/pb of data), DZERO $80401 \pm 44$ MeV (with five times more statistics), LEP II $80376 \pm 33$ MeV (average of four experiments). The ellipses look like they are half as precise as the black band, while they are actually only 30-40% worse. If the above is obscure to you, a simple graphical explanation is provided here, and a small numerical sketch follows this list.
• When averaged, CDF and DZERO will actually beat the LEP II precision measurement -and they are sitting on 25 times more data (CDF) or 5 times more (DZERO).
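Here is the numerical sketch promised above. The point is that covering 68.27% of a two-dimensional Gaussian requires going out to $\Delta \chi^2 = 2.30$, so the ellipse semi-axes measure about 1.52 standard deviations, while a one-dimensional 68% band is exactly one standard deviation wide. The snippet uses the W mass errors quoted above; everything else is standard statistics.

```python
# Why a 68% 2-D ellipse looks wider than a 68% 1-D band with the same sigma.
from math import sqrt
from scipy.stats import chi2

k = sqrt(chi2.ppf(0.6827, df=2))  # ~1.515: semi-axis of a 68% 2-D contour, in sigmas
print(f"2-D 68% scale factor: {k:.3f} sigma")

for name, sigma, scale in [("CDF", 48.0, k), ("DZERO", 44.0, k), ("LEP II band", 33.0, 1.0)]:
    print(f"{name:12s} sigma = {sigma:4.0f} MeV, drawn half-width ~ {sigma * scale:5.1f} MeV")
```

So the CDF and DZERO ellipses are drawn with half-widths near 70 MeV against the 33 MeV of the LEP II band, although the underlying uncertainties are only a few tens of percent larger.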

A seminar against the Tevatron! (March 20, 2009)

Posted by dorigo in news, physics, science.

I spent this week at CERN to attend the meetings of the CMS week – an event which takes place four times a year, when collaborators of the CMS experiment, coming from all parts of the world, get together at CERN to discuss detector commissioning, analysis plans, and recent results. It was a very busy and eventful week, and only now, sitting on a train that brings me back from Geneva to Venice, can I find the time to report with due dedication on some things you might be interested to know about.

One thing to report on is certainly the seminar I eagerly attended on Thursday morning, by Michael Dittmar (ETH-Zurich). Dittmar is a CMS collaborator, and he talked at the CERN theory division on a provocative subject: “Why I never believed in the Tevatron Higgs sensitivity claims for Run 2ab”. The title did promise a controversial discussion, but I was really startled by its level, as much as by the defamation of which I personally felt a target. I will explain this below.

I should also mention that by Thursday I had already attended a reduced version of his talk, since he had given it on the previous day in another venue. Both John Conway and I had corrected him on a few plainly wrong statements back then, but I was puzzled to see him reiterate those false statements in the longer seminar! More on that below.

Dittmar’s obnoxious seminar

Dittmar started by saying he was infuriated by the recent BBC article where “a statement from the director of a famous laboratory” claimed that the Tevatron had 50% odds of finding a Higgs boson, in a certain mass range. This prompted him to prepare a seminar to express his scepticism. However, it turned out that his scepticism was not directed solely at the optimistic statement he had read, but at every single result on Higgs searches that CDF and DZERO had produced since Run I.

In order to discuss sensitivity and significances, the speaker made an unilluminating digression on how counting experiments can or cannot obtain observation-level significances with their data, depending on the level of background of their searches and the associated systematic uncertainties. His statements were very basic and totally uncontroversial on this issue, but he failed to acknowledge that nowadays nobody does counting experiments any more when searching for evidence of a specific model: our confidence in advanced analysis methods involving neural networks, shape analysis, and likelihood discriminants; the tuning of Monte Carlo simulations; and the accurate analytical calculations of higher-order diagrams for Standard Model processes have all grown tremendously with years of practice and studies, and these methods and tools overcome the problems of searches for small signals immersed in large backgrounds. One can be sceptical, but one cannot ignore the facts, as the speaker seemed inclined to do.

Then Dittmar said that in order to judge the value of sensitivity claims for the future, one may turn to past studies and verify their agreement with the actual results. So he turned to the Tevatron Higgs sensitivity studies of 2000 and 2003, two endeavours in which I had participated with enthusiasm.

He produced a plot showing the small signal of $ZH \to l^+ l^- b \bar b$ decays that the Tevatron 2000 study believed the two experiments could achieve with 10 inverse femtobarns of data, expressing his doubts that the “tiny excess” could amount to evidence for Higgs production. On the side of that graph, he had placed for comparison a CDF result on real Run I data, where a signal of WH or ZH decays to four jets had been sought in the dijet invariant mass distribution of the two b-jets.

He commented on that figure by saying, half-mockingly, that the data could have been used to exclude the standard model process of associated $Z+jets$ production, since the contribution from Z decays to b-quark pairs was sitting at a mass where one bin had fluctuated down by two standard deviations with respect to the sum of background processes. This ridiculous claim was utterly unsupported by the plot -which showed an overall very good agreement between data and MC sources- and contradicted by the fact that the bins adjacent to the downward-fluctuating one were higher than the prediction. I found this claim really disturbing, because it tried to denigrate my experiment with a futile and incorrect argument. But I was about to get more upset at his next statement.

In fact, he went on to discuss the global expectation of the Tevatron on Higgs searches, a graph (see below) produced in 2000 after a big effort from several tens of people in CDF and DZERO.

He started by saying that the graph was confusing, and that it was not clear in the documentation how it had been produced, nor that it was the combination of CDF and DZERO sensitivities. This was very amusing, since, sitting at the far back, John Conway, a CDF colleague, shouted: “It says it in print on top of it: combined thresholds!”, then adding, in a calm voice, “…In case you’re wondering, I made that plot.” John had in fact been the leader of the Tevatron Higgs sensitivity study, not to mention the author of many of the most interesting searches for the Higgs boson in CDF since then.

Dittmar continued his surreal talk by raising the stakes, claiming that the plot had been produced “by assuming a 30% improvement in the mass resolution of pairs of b-jets, when nobody had even the least idea of how such an improvement could be achieved”.

I could not have put together a more personal, direct attack on years of my own work myself! It is no mystery that I have worked on dijet resonances since 1992, but of course I am a rather unknown soldier in this big game; still, I felt the need to interrupt the speaker at this point -exactly as I had done at the shorter talk the day before.

I remarked that in 1998, one year before the Tevatron sensitivity study, I had produced a PhD thesis and public documents showing the observation of a signal of $Z \to b \bar b$ decays in CDF Run I data, and had demonstrated on that very signal how the use of ingenious algorithms could improve the dijet mass resolution by at least 30%, making the signal more prominent. The relevant plots are below, directly from my PhD thesis: judge for yourself.

In the plots, you can see how the excess over background predictions moves to the right as more and more refined jet energy corrections are applied, starting from the result of generic jet energy corrections (top) to optimized corrections (bottom) until the signal becomes narrower and centered at the true value. The plots on the left show the data and the background prediction, those on the right show the difference, which is due to Z decays to b-quark jet pairs. Needless to say, the optimization is done on Monte Carlo Z events, and only then checked on the data.

So I said that Dittmar’s statement was utterly false: we had an idea of how to do it, we had proven we could do it, and besides, the plots showing what we had done had indeed been included in the Tevatron 2000 report. Had he overlooked them?

Escalation!

Dittmar seemed unbothered by my remark, and he responded that that small signal had not been confirmed in Run II data. His statement constituted an even more direct attack on four more years of my research time, spent on that very topic. I kept my cool, because when your opponent offers you on a silver platter the chance to verbally sodomize him, you cannot be too angry with him.

I remarked that a signal had indeed been found in Run II, amounting to about 6000 events after all selection cuts; it confirmed the past results. Dittmar then said that “to the best of his knowledge” this had not been published, so it did not really count. I then explained that it was a 2008 NIM publication, and asked whether he would please document himself before making such unsubstantiated allegations. He shrugged his shoulders, said he would look more carefully for the paper, and went back to his talk.

His points about the Tevatron sensitivity studies were laid down: for a low-mass Higgs boson, the signal is just too small and the backgrounds are too large, and the sensitivity of real searches is below expectations by a large factor. To stress this point, he produced a slide containing a plot he had taken from this blog! The plot (see on the left), which is my own concoction and not Tevatron-approved material, shows the ratio between the observed limit on Higgs production and the expectations of the 2000 study. He pointed at the two points for 100-140 GeV Higgs boson masses, trying to prove his claim: the Tevatron is now doing three times worse than expected. He even uttered, “It is time to confess: the sensitivity study was wrong by a large factor!”.

I could not help interrupting again: I had to stress that the plot was not approved material and was just a private interpretation of Tevatron results, but I did not deny its contents. The plot was indeed showing that low-mass searches were below par, but it was also showing that high-mass ones were amazingly in agreement with expectations worked out 10 years before. Then John Conway explained the low-mass discrepancy for the benefit of the audience, as he had done one day before for no apparent benefit of the speaker.

Conway explained that the study had been done under the hypothesis that an upgrade of our silicon detector would be financed by the DoE: it was in fact meant to prove the usefulness of funding an upgrade. A larger acceptance of the inner silicon tracking boosts the sensitivity to b-quark jets from Higgs decays by a large factor, because any acceptance increase gets squared when computing the efficiency to tag both b-jets. So Dittmar could not really blame the Tevatron experiments for predicting something that would not materialize in a corresponding result, given that the DoE had denied the funding to build the upgraded detector!
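To see what the squaring does, take a toy example with made-up numbers: if the upgrade had raised the single b-jet tagging efficiency from, say, $\epsilon_b = 0.5$ to $\epsilon_b = 0.65$, the efficiency to tag both b-jets of a $H \to b \bar b$ decay would have grown from $\epsilon_b^2 = 0.25$ to $0.42$ -a relative gain of nearly 70% in double-tagged signal yield from a 30% gain per jet.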

I then felt compelled to add that by using my plot Dittmar was proving the opposite of the thesis he wanted to demonstrate: low-mass Tevatron searches were shown to underperform because of funding issues, rather than because of a wrong estimate of sensitivity; and high-mass searches, almost unhindered by the lack of an upgraded silicon, were in excellent agreement with expectations!

The speaker said that no, the high-mass searches were not in agreement, because their results could not be believed, and he moved on to discuss those results using real-data plots from the Tevatron.

He said that $H \to WW$ is a great channel at the LHC.

“Possible at the Tevatron ? I believe that the WW continuum background is much larger at a ppbar collider than at a pp collider, so my personal conclusion is that if the Tevatron people want to waste their time on it, good luck to them.”

Now, come on. I cannot imagine how a respectable particle physicist could drive himself into making such statements in front of a distinguished audience (which, have I mentioned it, included several theorists of the highest caliber, none less than Edward Witten among them). Waste their time? I felt I was wasting my time listening to him, but my determination to report his talk here kept me anchored to my chair, taking notes.

So this second part of the talk was not less unpleasant than the first part: Dittmar criticized the Tevatron high-mass Higgs results in the most incorrect, and scientifically dishonest, way that I could think of. Here is just a summary:

• He picked a distribution of one particular sub-channel from one experiment, noting that its most signal-rich region seemed to show a deficit of events. He then showed the global CDF+DZERO limit, which did not show a departure between expected and observed limits on the Higgs cross section, and concluded that there was something fishy in the way the limit had been evaluated. But the limit is extracted from literally several dozens of those distributions -something he failed to mention despite having been warned of that very issue in advance.
• He picked two neural-network output distributions for a search of the Higgs at 160 and 165 GeV, and declared they could not be correct since they were very different in shape! John, from the back, replied: “You have never worked with neural networks, have you?” No, he had not. Had he, he would probably have understood that different mass points, optimized differently, can provide very different NN outputs.
• He showed another neural network output based on 3/fb of data, which had a pair of data points lying one standard deviation above the background predictions, and the corresponding plot for a search performed with improved statistics, which instead had a downward fluctuation. He said he was puzzled by the effect. Again, some intervention from the audience was necessary, explaining that the methods are constantly reoptimized, and it is no wonder that adding more data can result in a different outcome. This produced a discussion when somebody from the audience tried to speculate that searches were maybe performed by looking at the data before choosing which method to use for a limit extraction! On the contrary, of course, all Tevatron searches for the Higgs are blind analyses, where the optimization is performed on expected limits, using control samples and Monte Carlo, and the data is only looked at afterwards.
• He showed that the Tevatron 2000 report had estimated a maximum signal/noise ratio of 0.34 for the H→WW search, and he picked one random plot from the many searches of that channel by CDF and DZERO, showing that the signal-to-noise there was never larger than 0.15 or so. Explaining to him that the S/N of searches based on neural networks and combined discriminants is not a fixed value, and that many improvements have occurred in data analysis techniques in 10 years, was useless.

Dittmar concluded his talk by saying that:

“Optimistic expectations might help to get funding! This is true, but it is also true that this approach eventually destroys some remaining confidence in science of the public.”.

His last slide even contained the sentence he had previously brought himself to utter:

“It is the time to confess and admit that the sensitivity predictions were wrong”.

Finally, he encouraged the LHC experiments to look for the Higgs where the Tevatron had excluded it -between 160 and 170 GeV- because Tevatron results cannot be believed. I was disgusted: he most definitely stakes a strong claim to the prize for most obnoxious talk of the year. Unfortunately for all, it was just as much an incorrect, scientifically dishonest, and dilettantesque lamentation, plus a defamation of a community of 1300 respected physicists.

In the end, I am left wondering what really moved Dittmar to such a disastrous performance. I think I know the answer, at least in part: he has been an advocate of the $H \to WW$ signature since 1998, and he must now feel bad about that beautiful process being proven hard to see by his “enemies”. Add to that the frustration of seeing the Tevatron producing brilliant results and excellent performances while CMS and Atlas are sitting idly in their caverns, and you might figure out there is some human factor to take into account. But nothing, in my opinion, can justify the mix he put together: false allegations, disregard of published material, manipulation of plots, public defamation of respected colleagues. I am sorry to say it, but even though I have nothing personal against Michael Dittmar -I do not know him, and in private he might even be a pleasant person- it will be very difficult for me to collaborate with him for the benefit of the CMS experiment in the future.

Streaming video for Y(4140) discovery (March 17, 2009)

Posted by dorigo in news, physics, science.

The CDF collaboration will present at a public venue (Fermilab’s Wilson Hall) its discovery of the new Y(4140) hadron, a mysterious particle created in B meson decays, and observed to decay strongly into a $J/\psi \phi$ state, a pair of vector mesons. I have described that exciting discovery in a recent post.

From this site you can connect to the streaming video (starting at 4.00PM CDT, or 9.00PM GMT – it should last about an hour and a half).

DZERO refutes CDF’s multimuon signal… Or does it? (March 17, 2009)

Posted by dorigo in news, physics, science.

Hot off the press: Mark Williams, a DZERO member speaking at Moriond QCD 2009 -a yearly international conference in particle physics, where HEP experimentalists regularly present their hottest results- has shown today the preliminary results of their analysis of dimuon events, based on 900 inverse picobarns of proton-antiproton collision data. And the conclusion is…

DZERO searched for an excess of muons with large impact parameter by applying a data selection very similar, and where possible totally equivalent, to the one used by CDF in its recent study. Of course, the two detectors have entirely different hardware, software algorithms, and triggers, so there are limits to how closely one analysis can be replicated by the other experiment. However, the main machinery is quite similar: they count how many events have both muons produced within the first layer of the silicon detector (thus leaving a hit there), and extrapolate to determine how many events they expect to see where a muon fails to yield a hit in that first layer, comparing to the actual number. They find no excess of large-impact-parameter muons.
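A cartoon of that counting logic, with made-up numbers, may help. If each genuine prompt muon has a probability $\epsilon$ of leaving a hit in the innermost layer, the number of events where both muons have a hit predicts how many events should show one or two muons without such a hit; an excess of observed “loose” events over that prediction is what CDF reported, and what DZERO does not see. A minimal Python sketch, assuming for simplicity a single, uncorrelated per-muon efficiency:

```python
# Cartoon of the tight/loose extrapolation, with invented numbers.
eps = 0.9               # hypothetical per-muon first-layer hit efficiency
n_tight_tight = 100000  # hypothetical events with both muons leaving a hit

n_total = n_tight_tight / eps**2                  # inferred total dimuon events
pred_one_missing = n_total * 2 * eps * (1 - eps)  # exactly one muon misses the layer
pred_both_missing = n_total * (1 - eps) ** 2      # both muons miss the layer

print(f"predicted events with one missing hit:   {pred_one_missing:.0f}")
print(f"predicted events with both hits missing: {pred_both_missing:.0f}")
# Comparing these predictions to the observed counts is the core of the test.
```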

Impact parameter, for those of you who have not followed this closely in the last few months, is the smallest distance between a track and the proton-antiproton collision vertex, in the plane transverse to the beam direction. A large impact parameter indicates that a particle has been produced in the decay of a parent body which was able to travel away from the interaction point before disintegrating. More information about the whole issue can be found in this series of posts, or by just clicking the “anomalous muons” tab in the column on the right of this text.

There are many things to say, but I will not say them all here now, because I am still digesting the presentation, the accompanying document produced by DZERO (not ready for public consumption yet), and the implications and subtleties involved. However, let me flash a few of the questions I am going to try to answer with my reading:

• The paper does not address the most important question: what is DZERO’s track reconstruction efficiency as a function of track impact parameter? They do discuss in some detail the complicated mixture of their data, which results from triggers which enforce that tracks have very small impact parameter -effectively cutting away all tracks with an impact parameter larger than 0.5 cm- and a dedicated trigger which does not enforce an IP requirement; they also discuss their offline track reconstruction algorithms. But at first sight it did not seem clear to me that they can actually reconstruct tracks effectively with impact parameters up to 2.5 cm, as they claim. Had I authored the document, I would have inserted a graph of the track reconstruction efficiency as a function of impact parameter.
• The paper shows a distribution of the decay radius of neutral K mesons, reconstructed from their decay into pairs of charged pions. From the plot, the efficiency of reconstructing those pions appears excessively small -some three times smaller than it is in CMS, for instance. I need to read another paper by DZERO to figure out what drives their K-zero reconstruction efficiency to be so small, and whether this is in fact due to a decrease of effectiveness with track displacement.
• What really puzzles me, however, is the fact that they do not see *any* excess, while we know there must be in any case a significant one: decays in flight of charged kaons and pions. Why is it that CDF is riddled with those, while DZERO appears free of them? To explain this point: charged kaons and pions yield muons, which get reconstructed as real muons with large impact parameter. If the decay occurs within the tracking volume, the track is partly reconstructed with the muon hits and partly with the kaon or pion hits. Now, while pions have a mass similar to that of muons, so that the muon practically follows the pion trajectory faithfully, for kaons there must be a significant kink in the track trajectory (a short kinematic sketch right below quantifies the difference). One expects that the track reconstruction algorithm will fail to associate inner hits to a good fraction of those tracks, and the resulting muons will belong to the “loose” category, without a correspondence in the “tight” muon category, which contains muons with a hit in the innermost layer of the silicon detector. This creates an excess of muons with large impact parameter. CDF does estimate that contribution, and it is quite large, of the order of tens of thousands of events in 743 inverse picobarns of data! Now where are those events in the DZERO dataset, then?
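To put numbers on the kink argument of the last bullet: in a two-body decay $h \to \mu \nu$, the muon momentum in the hadron rest frame is $p^* = (M_h^2 - m_\mu^2)/(2 M_h)$ for a massless neutrino, and this $p^*$ sets the scale of the angular deflection once boosted to the lab. A quick check with PDG masses:

```python
# Two-body decay kinematics behind the kink argument.
M_K, M_PI, M_MU = 493.677, 139.570, 105.658  # masses in MeV (PDG values)

def p_star(M, m_mu=M_MU):
    """Muon momentum in the rest frame of a hadron decaying to mu + nu."""
    return (M**2 - m_mu**2) / (2.0 * M)

print(f"K  -> mu nu: p* = {p_star(M_K):6.1f} MeV")   # ~236 MeV: a sizable kink
print(f"pi -> mu nu: p* = {p_star(M_PI):6.1f} MeV")  # ~30 MeV: the muon nearly
                                                     # follows the pion direction
```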

Of course, you should not expect that my limited intellectual capabilities and my slow reading of a paper I have had in my hands for no longer than two hours can produce foolproof arguments. So the above is just a first pass, sort of a quick-and-dirty evaluation. I imagine I will be able to answer those puzzles myself, at least in part, with a deeper look at the documentation. But, for the time being, this is what I have to say about the DZERO analysis.

Or rather, I should add something. By reading the above, you might get the impression that I am only criticizing DZERO out of bitterness for the failed discovery of the century by CDF… No, it is not the case: I have always thought, and I continue to think, that the multi-muon signal by CDF is some unaccounted-for background. And I do salute with relief and interest the new effort by DZERO on this issue. I actually thank them for providing their input on this mystery. However, I still retain some scepticism with respect to the findings of their study. I hope that scepticism can be wiped off by some input – maybe some reader belonging to DZERO wants to shed some light on the issues I mentioned above? You are most welcome to do so!

UPDATE: Lubos pitches in, and guess what, he blames CDF… But Lubos the experimentalist is no better than Lubos the diplomat, if you know what I mean…

Other reactions will be collected below – if you have any to point to, please do so.

Tevatron excludes chunk of Higgs masses! (March 13, 2009)

Posted by dorigo in news, physics, science.

This just in – the Fermilab site has the news on the new exclusion in a range of Higgs masses. At 95% C.L., the Higgs boson cannot have a mass in the 160-170 GeV range, as shown in the graph below. The new limit is shown by the orange band.

This is the first real exclusion range on the Higgs boson mass from CDF and DZERO. I will have more to say about this great new result during the weekend.

UPDATE: maybe the most interesting thing is not the limit shown above, but the information contained in the graph shown below. It shows how the combination of CDF and DZERO searches for the Higgs boson ends up agreeing with the background-only hypothesis (black hatched curve) or the background-plus-signal hypothesis (red curve), as a function of the unknown value of the Higgs boson mass. The full black line seems to favor the signal-plus-background hypothesis, although only marginally and at just the 1-sigma level, at around 130 GeV of mass:

However, they say that those who like sausages or respect the law should never watch either being made. The same goes for global limits, to some extent. In this case it is not a criticism of the limit per se, but rather of the interpretation one might be led to give it. In fact, the width of the green band should put you en garde against wild speculations: it would be extremely suspicious if the black line did not venture outside of the green band somewhere, even in case the Higgs boson does not exist!

That is because the band shows the expected range of 1-sigma fluctuations -due to statistical effects, not to the real presence of a signal!- and since the black curve is extracted from the data by combining many datasets, and each individual point of the line (in, say, 5-GeV intervals) has little correlation with the others, it is entirely appropriate for the curve not to be fully contained in the green area. So, the fact that the black curve overlaps with the signal-plus-background hypothesis at 130 GeV really -really!- means very, very little.
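A toy computation shows how natural such excursions are: if the limit curve is sampled at N essentially uncorrelated mass points, the chance that all of them stay within the one-sigma band is $0.68^N$, which dies off quickly. A three-line Python check, with an invented number of points:

```python
# Chance that N independent Gaussian-fluctuating points all stay within 1 sigma.
p_inside = 0.6827  # 1-sigma coverage for a single point
for n_points in (5, 10, 20):  # invented choices for the number of mass points
    print(f"{n_points:2d} points all inside the band: {100 * p_inside**n_points:5.2f}%")
```

With 20 quasi-independent points, the probability of never leaving the band is well below a tenth of a percent.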

What does mean something is that the hatched black and red curves appear separated by about one sigma (the width of the green band surrounding the background-only black hatched curve) over a wide range of Higgs masses. This says that the two Tevatron experiments have by now reached a sensitivity of about one sigma to the signal with the data they have analyzed so far. Beware: they are already sitting on about twice as much data (most analyses rely on about 2.5/fb of collisions, but the Tevatron has already delivered over 5/fb to the experiments). So they expect new, significantly improved results by this summer.

It does seem that at last, the game of Higgs hunting is starting to get exciting again, after a hiatus of about 7 years following the tentative signal seen by the LEP II experiments!

CDF discovers a new hadron! (March 13, 2009)

Posted by dorigo in news, physics, science.

This morning CDF released the results of a search for narrow resonances produced in B meson decays, and in turn decaying into a pair of vector mesons: namely, $Y \to J/\psi \phi$. This Y state is a new particle whose exact composition is as yet unknown, except that CDF has measured its mass (4144 MeV) and established that its decay appears to be mediated by strong interactions, given that the natural width of the state is in the range of a few MeV. I describe the analysis succinctly below, but first let me make a few points on the relevance of this area of investigation.

Heavy meson spectroscopy appears to be a really entertaining research field these days. While all eyes are on the searches for the Higgs boson and supersymmetric particles, if not on even more exotic high-mass objects, and while careers are made and unmade on those uneventful searches, it is elsewhere that the action develops. Just think about it: the $\Xi_b$ baryon, the $\Omega_b$, those mysterious X and Y states whose quark composition is still unknown. Such discoveries tell the tale of a very prolific research field: one where there is really a lot to understand.

Low-energy QCD is still poorly known and not easily calculable. In frontier high-energy physics we bypassed the problem, for the sake of studying high-energy phenomena, by tuning our simulations such that their output resembles well the results of low-energy QCD processes in all cases where we need them -such as the details of parton fragmentation, or jet production, or transverse momentum effects in the production of massive bodies. However, we have not learnt much with our parametrizations: those describe well what we already know, but they do not even come close to guessing whatever we do not know. Our understanding of low-energy QCD is starting to be a limiting factor in cosmological studies, such as in baryogenesis predictions. So by all means, let us pursue low-energy QCD in all the dirty corners of our datasets at hadron colliders!

CDF is actively pursuing this task. The outstanding spectroscopic capabilities of the detector, combined with the huge size of the dataset collected since 2002, allow searches for decays with branching ratios in the one-in-a-million range. The new discovery I am discussing today has indeed been made possible by pushing our search range to the limit.

The full decay chain which has been observed is the following: $B^+ \to Y K^+ \to J/\psi \phi K^+ \to \mu^+ \mu^- K^+ K^- K^+$. That $J/\psi$ mesons decay to muon pairs is not a surprise, and neither is the decay of the $\phi$ vector meson to two charged kaons. Also the original decay of the B hadron into the $J/\psi \phi K$ final state is not new: it had in fact been observed previously. What had not been realized yet, because of insufficient statistics and mass resolution, is that the $J/\psi$ and $\phi$ mesons produced in that reaction often “resonate” at a very definite mass value, indicating that in those instances the $B \to J/\psi \phi K$ decay actually takes place in two steps, as the chain of two two-body decays: $B \to Y K$ and $Y \to J/\psi \phi$.

The new analysis by CDF is a pleasure to examine, because the already excellent momentum resolution of the charged particle tracking system gets boosted when constraints are placed on the combined mass of multi-body systems. Take the B meson, reconstructed from two muons and three charged tracks, each of the latter assumed to be a kaon: if you did not know that the muon pair comes from a $J/\psi$, nor that two of the kaons come from a $\phi$, the mass resolution of the system would be in the few tens of MeV range. Instead, by forcing the momenta of the two muons to be consistent with the world-average mass of the $J/\psi$, $M_{J/\psi}=3096.916 \pm 0.011$ MeV, and by imposing that the two kaons make exactly the extremely well-known $\phi$ mass ($M_\phi=1019.455 \pm 0.020$ MeV), much of the uncertainty on the daughter particle momenta disappears, and the B meson becomes an extremely narrow signal: its mass resolution is just 5.9 MeV, a per-mille measurement event by event!
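A toy error-propagation exercise shows the mechanism. Suppose -with completely invented numbers- that the dimuon system, the kaon pair, and the bachelor kaon contributed 15, 4, and 5 MeV respectively to the B mass uncertainty, in quadrature; constraining the dimuon mass to the $J/\psi$ and the kaon pair to the $\phi$ removes the first two contributions:

```python
# Invented-numbers sketch of why mass constraints sharpen the B peak.
from math import sqrt

sigma_mumu, sigma_kk, sigma_k = 15.0, 4.0, 5.0  # hypothetical contributions, MeV

unconstrained = sqrt(sigma_mumu**2 + sigma_kk**2 + sigma_k**2)
constrained = sigma_k  # only the bachelor kaon contribution survives

print(f"without constraints: sigma(M_B) ~ {unconstrained:.1f} MeV")
print(f"with J/psi and phi constraints: sigma(M_B) ~ {constrained:.1f} MeV")
```

The real kinematic fit is of course more subtle -it adjusts the four-momenta within their errors rather than just dropping terms- but the quadrature picture captures why the gain is so large.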

The selection of signal events requires several cleanup cuts, including mass window cuts around the $J/\psi$ and $\phi$ masses, a decay length of the reconstructed $B^+$ meson longer than 500 microns, and a cut on a log-likelihood ratio fed with dE/dx and time-of-flight information, capable of discriminating kaon tracks from other hadrons. After those cuts, the $B^+$ signal really stands out above the flat background. There is a total of $78 \pm 10$ events in the sample after these cuts, the largest sample of such decays ever isolated. It is shown above (left), together with the corresponding distribution in the $\phi \to KK$ candidate mass (right).

A Dalitz plot of the reconstructed decay candidates is shown in the figure on the right. A Dalitz plot is a scatterplot of the squared invariant mass of one subset of the particles emitted in the decay versus the squared invariant mass of another subset. If the decay proceeds via the creation of an intermediate state, one may observe a horizontal or vertical cluster of events. Judge for yourself: do the points appear to spread evenly in the allowed phase space of the $B^+$ decays?
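For readers who want to build intuition for how a Dalitz plot gets populated, here is a toy Monte Carlo: pure phase space corresponds to a uniform density of points within the kinematic boundary, while an intermediate state such as the Y(4140) would pile events up in a band at the corresponding squared mass. The boundary formula is the standard one from the PDG kinematics review; this sketch is purely illustrative and contains nothing of the actual CDF analysis.

```python
# Toy Dalitz plot for B+ -> J/psi phi K+: uniform points inside the
# kinematic boundary mimic pure phase space; a resonance would instead
# cluster in a band at fixed m^2(J/psi phi).
import random
from math import sqrt

M, m1, m2, m3 = 5.279, 3.097, 1.019, 0.494  # B+, J/psi, phi, K+ masses (GeV)

def m23sq_range(m12sq):
    """Allowed m^2(phi K) range at fixed m^2(J/psi phi) (PDG Dalitz boundary)."""
    m12 = sqrt(m12sq)
    e2 = (m12sq - m1**2 + m2**2) / (2 * m12)  # phi energy in the (J/psi phi) frame
    e3 = (M**2 - m12sq - m3**2) / (2 * m12)   # K energy in the same frame
    p2 = sqrt(max(e2**2 - m2**2, 0.0))
    p3 = sqrt(max(e3**2 - m3**2, 0.0))
    return (e2 + e3)**2 - (p2 + p3)**2, (e2 + e3)**2 - (p2 - p3)**2

points = []
while len(points) < 2000:
    m12sq = random.uniform((m1 + m2)**2, (M - m3)**2)  # m^2(J/psi phi)
    m23sq = random.uniform((m2 + m3)**2, (M - m1)**2)  # m^2(phi K)
    lo, hi = m23sq_range(m12sq)
    if lo <= m23sq <= hi:  # keep only points inside the Dalitz boundary
        points.append((m12sq, m23sq))

print(f"kept {len(points)} phase-space points; a Y(4140) would cluster near "
      f"m^2(J/psi phi) = {4.143**2:.1f} GeV^2")
```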

The answer is no: a significant structure is seen corresponding to a definite mass of the $J/\psi \phi$ system. A histogram of the difference between the reconstructed mass of the $J/\psi \phi$ system and the $J/\psi$ mass is shown in the plot below: a near-threshold structure appears just above 1 GeV. An unbinned fit to a relativistic Breit-Wigner signal shape on top of the expected background shape shows a signal at a mass difference of $\Delta M=1046.3 \pm 2.9$ MeV, with a width of $11.7 \pm 5.7$ MeV.

The significance of the signal is, after taking account of trial factors, equal to 3.8 standard deviations. For the non-zero width hypothesis, the significance is of 3.4 standard deviations, implying that the newfound structure decays strongly. The mass of the new state is thus $4143 \pm 2.9$ MeV.

The new state is above the threshold for decay to a pair of charmed hadrons. The decay of the state appears to occur to a pair of vector mesons, $J/\psi \phi$, in close similarity to a previous state found at 3930 MeV, the Y(3930), which also decays to two vector mesons, in $Y \to J/\psi \omega$. Therefore, the new state can also be called a Y(4140).

Although the significance of this new signal has not reached the coveted threshold of 5 standard deviations, there are few doubts about its nature. Being a die-hard sceptic, I did doubt the reality of the signal shown above for a while when I first saw it, but I must admit that the analysis was really done with a lot of care. Besides, CDF now has tens of thousands of fully reconstructed B meson decays available, with which it is possible to study and understand even the most insignificant nuances of every effect, including reconstruction problems, fit method, track characteristics, kinematical biases, you name it. So I am bound to congratulate the authors of this nice new analysis, which shows once more how the CDF experiment is producing stellar new results not just at the high-energy frontier, but in low-energy spectroscopy as well. Well done, CDF!

Neutrino Telescopes day 2 notes (March 12, 2009)

Posted by dorigo in astronomy, cosmology, news, physics, science.

The second day of the “Neutrino Telescopes XIII” conference in Venice was dedicated to, well, neutrino telescopes. I have written down in stenographic fashion some of the things I heard, and I offer them to those of you who are really interested in the topic, without much editing. Besides, making sense of my notes takes quite some time, more than I have of it tonight.

So, I apologize for spelling mistakes (the ones I myself recognize post-mortem), in addition to the more serious conceptual ones coming from missed sentences or errors caused by my poor understanding of English, of the matter, or of both. Also, I apologize to those of you who would have preferred a more succinct, readable account: as Blaise Pascal once put it, “I have made this letter longer than usual, because I lack the time to make it short”.

NOTE: the links to slides are not working yet – I expect that the conference organizers will fix the problem tomorrow morning.

Christian Spiering: Astroparticle Physics, the European strategy (slides here)

Spiering gave some information about two new bodies, European organizations: ApPEC and ASPERA. ApPEC has two committees, which offer advice to national funding agencies and improve links and communication between the astroparticle physics community and the scientific programmes of organizations like CERN, ESA, etc. ASPERA was launched in 2006 to give a roadmap for astroparticle physics in Europe, in close coordination with ASTRONET, and with links to CERN strategy bodies.

The roadmapping covers the science case, an overview of the status, and some recommendations for convergence; second, a critical assessment of the plans and a calendar of milestones, coordinated with ASTRONET.

For dark matter and dark energy searches, Christian displayed a graph showing the WIMP cross-section reach of present-day experiments as a function of time. In 2015 we should reach cross sections of about $10^{-10}$ picobarns; we are now at some $10^{-8}$ with our sensitivity. The reach depends on background, funding, and infrastructure. The idea is to go toward 2-ton-scale, zero-background detectors. Projects: Zeplin, Xenon, others.

In an ideal scenario, LHC observations of new particles at the weak scale could place these observations in a well-confined particle physics context, and direct detection would be supported by indirect signatures. In case of a discovery, smoking-gun signatures of direct detection such as directionality and annual variations would be measured in detail.

Properties of neutrinos: direct mass measurement efforts are KATRIN and Troitsk. Double beta decay experiments are Cuoricino, NEMO-3, Gerda, Cuore, et al. The KKGH group claimed a signal corresponding to masses of a few tenths of an eV, but in a normal hierarchy the lightest neutrino mass could be of order $10^{-3}$ eV. Experiments are expected to be in operation (Cuoricino, NEMO-3) or to start by 2010-2011. SuperNEMO will start in 2014.

A large infrastructure for proton decay is advised. For charged cosmic rays, depending on which part of the spectrum one looks at, there are different kinds of physics contributing and explorable.

The case for Auger-North is strong: high-statistics astronomy with reasonably fast data collection is needed.

For high-energy gamma rays, the situation has seen enormous progress over the last 15 years, mostly through imaging atmospheric Cherenkov telescopes (IACTs): Whipple, Hegra, CAT, Cangaroo, Hess, Magic, Veritas. Also, wide-angle devices. Among existing air Cherenkov telescopes, Hess and Magic are running, and very soon Magic will grow into Magic-II. Whipple runs a monitoring telescope.

There are new plans for MACE in India, something between Magic and Hess. CTA and AGIS are in their design phase.

ASPERA’s recommendations: the priority of VHE gamma astrophysics is CTA. They recommend the design and prototyping of CTA, the selection of sites, and proceeding decidedly towards a start of deployment in 2012.

For point neutrino sources, there has been tremendous progress in sensitivity over the last decade: a factor of 1000 in flux sensitivity within 15 years. IceCube will deliver what it has promised, within 2012.

For gravitational waves, there are LISA and VIRGO. The frequency probed by LISA is in the $10^{-2}$ Hz range; VIRGO will go to 100-10000 Hz. The reach is of several to several hundred sources per year. The Einstein Telescope, an underground gravitational-wave detector, could access thousands of sources per year; its construction would start in 2017. The conclusions: Einstein is the long-term future project of ground-based gravitational-wave astronomy in Europe. A decision on funding will come after first detections with enhanced LIGO and VIRGO, most likely after collecting about a year of data.

In summary, the budget will increase by a factor of more than two in the next decade. Km3net, megaton detectors, CTA, and ET will be the experiments taking the largest share. We are moving into regions with a high discovery potential, with an accelerated increase of sensitivity in nearly all fields.

K. Hoffmann, Results from IceCube and Amanda, and prospects for the future (slides here)

IceCube will become the first instrumented cubic-kilometer neutrino telescope. Amanda-II consists of 677 optical modules embedded in the ice at depths of 1500-2000 m; it has been a testbed for IceCube and for deploying optical modules. IceCube has been under construction for the last several years: strings of phototubes have been deployed in the ice during the last few years, and 59 of them are now operating.

The rates: IC40 sees 110 neutrino events per day. They are getting close to 100% live time: 94% in January. IceCube has the largest effective area for muons, thanks to the long track length. The energy range of sensitivity is the TeV-PeV range.

Ice properties are important to understand. A dust logger measures dust concentration, which is connected to the attenuation length of light in ice. There is a thick layer of dust sitting at a depth of 2000m, clear ice above, and very clear ice below. They have to understand the light yield and propagation well.

Of course one of the most important parameters is the angular resolution. As the detector got larger, it improved. One of the more exciting things this year was to see the point-spread function peak at less than one degree for muons with long track lengths.

Seeing the Moon is always reassuring for a telescope. They did it: a >4 sigma effect from the shadow of the Moon in cosmic rays.

With the waveforms they have in IceCube, energy reconstruction exploits the fact that high-energy muons are not minimum-ionizing: they reconstruct the energy from the number of photons emitted along the track. Some energy resolution can be achieved, and there is progress in understanding how to reconstruct energy.

First results from point-source searches: the 40-string configuration data will be analyzed soon. Point sources are sought with an unbinned likelihood search, which takes into account an energy variable, since point sources are expected to have a harder energy spectrum than atmospheric neutrinos. From 5114 neutrino candidates in 276 days, they found one hot spot in the sky, with a significance, after accounting for trial factors, of about 2.2 sigma. Next year there will be variables less sensitive to the dust model, so they might be able to say more about that one soon.

For seven years of data, with 3.8 years of livetime, the hottest spot has a significance of 3.3 sigma. With one year of data, the 22-string IceCube will already be more sensitive than Amanda. IceCube and Antares are complementary, since IceCube looks at northern declinations and Antares looks at southern declinations. The point-source flux sensitivity is down to $10^{-11}$ GeV cm$^{-2}$ s$^{-1}$.

For GRBs, one can use a triggered search, which is an advantage; the latest results give a limit from 400 bursts. From the 22-string IceCube, an unbinned search similar to the point-source one gives an expected exclusion power of $10^{-1}$ GeV cm$^{-2}$ (in $E^2 dN/dE$ units) over most of the energy range.

For the naked-eye GRB of March 19, 2008, the detector was in test mode, with only 9 of 22 strings taking data. The Bahcall prediction has the flux peaking at $10^6$ GeV at the $10^{-1}$ level, but the limit found is 20 times higher.

Finally, they are looking for WIMPs. A search by the 22-string IceCube, with 104 days of livetime, was recently sent for publication. It can reach down well.

Atmospheric neutrinos are also a probe for violations of Lorentz invariance -possibly from quantum gravity effects. The survival probability depends on energy; assuming maximal mixing, their sensitivity is down to one part in $10^{28}$. They are looking for a change in what one would expect for flavor oscillations: depending on where atmospheric neutrinos are produced, they traverse more or less of the Earth's core. So one gets a neutrino beam with different baselines, based on energy, and one would see a difference in the neutrino oscillation probability: the oscillation parameter would become energy dependent.

In the future they would like to see a high-energy extension. Ice is the only medium where one can see a coherent radio signal together with an optical one, and an acoustic one too. The past season was very successful, with the addition of 19 new strings. Many analyses of the 22-string configuration are complete. Analysis techniques are being refined to exploit the size, energy threshold, and technology used. Work is underway to develop the technology to build a GZK-scale neutrino detector after IceCube is complete.

Vincenzo Flaminio, Results from Antares (slides here)

Potential sources of galactic neutrinos are SN remnants, pulsars, and microquasars; extragalactic ones are gamma-ray bursts and active galactic nuclei. A by-product of Antares is an indirect search for dark matter; results are not ready yet.

Neutrinos from supernovae: supernova remnants act as particle accelerators, and can give hadrons, and gammas from neutral pion decays. Possible sources are those found by Auger or, for example, the molecular clouds from which TeV photons come.

Antares is an array of photomultiplier tubes that look at the Cherenkov light produced by muons crossing the detector. The site is in the south of France, where the galactic center is visible 75% of the time. The collaboration comprises 200 physicists from many European countries. The control room in Toulon is more comfortable than the Amanda site (and this wins the understatement prize of the conference).

The depth in water is 2500 m. All strings are connected via cables on the seabed, and a 40-km-long electro-optical cable connects to shore. The time resolution is monitored by an LED beacon in each detector storey. A sketch of the detector is shown below.

Deployment started in 2005, and the first line was installed in 2006; construction finished one year ago. In addition there is an acoustic storey and several monitoring instruments. Biologists and oceanographers are interested in what is done, not just neutrino physicists.

The detector positioning is an important point, because the lines move with the sea currents. A large number of transmitters are installed along the lines, and their information is used to reconstruct the precise position of the lines minute by minute.

They collect triggers at a 10 Hz rate with 12 lines. They detected 19 million muons with the first 5 lines, 60 million with the full detector.

The first physics analyses are going on. Selecting up-going neutrinos avoids the otherwise overwhelming background of atmospheric muons. The rate is of the order of two per day in the multi-line configuration.

Conclusions: Antares has successfully reached the end of its construction phase. Data taking is ongoing, with analyses in progress on atmospheric muons and neutrinos, cosmic neutrino sources, dark matter, neutrino oscillations, magnetic monopoles, etcetera.

David Saltzberg, Overview of the Anita experiment ( slides here)

Anita flies at 120,000 feet above the ice. It is the eyepiece of the telescope; the objective is the huge amount of ice of Antarctica. The detection principle was tested at SLAC with 8 metric tons of ice: one observes radio pulses from the ice, a wake-field radio signal. The pulse goes up and down in less than a nanosecond, owing to its Cherenkov nature; this is the Askaryan effect. The measured field tracks the number of particles in the shower, and the signal is 100% linearly polarized. The wavelength is bigger than the size of the shower, so the emission is coherent: at a PeV there are more radio quanta emitted than optical ones.
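
The coherence argument is easy to see numerically: when the wavelength is long compared to the shower, the fields of the N charge-excess particles add in phase, so the emitted power grows like N^2 instead of N. A small Python illustration (all numbers invented for the purpose):

import numpy as np

# Sum the complex amplitudes of N radiators spread over ~10 cm (a stand-in
# for the shower's charge excess) at two wavelengths. For wavelength >>
# spread the amplitudes add coherently, |E| ~ N; for wavelength << spread
# the phases are effectively random and |E| ~ sqrt(N).
rng = np.random.default_rng(0)
N = 10000
positions = rng.normal(0.0, 0.1, N)        # meters, assumed spread

for wavelength in (10.0, 0.01):            # radio-like vs optical-like
    phases = 2 * np.pi * positions / wavelength
    amp = abs(np.exp(1j * phases).sum())
    print(wavelength, amp, "vs N =", N, "and sqrt(N) =", N ** 0.5)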

They will use this at very high energy, looking for GZK-induced neutrinos: the GZK process converts protons into neutrinos within about 50 Mpc of the sources.

The energies are at the level of 10^18 eV or higher; the proper time is 50 milliseconds, making this the longest-baseline neutrino experiment possible.

Anita has GPS antennas for positioning and for orientation, the latter needing a fraction-of-a-degree resolution. The payload is solar powered, and the antennas are pointed down by 10 degrees.

This 50-page document describes the instrument.

Lucky coincidences: 70% of the world's fresh water is in Antarctica, and it is the most radio-quiet place. The place selects itself, so to speak.

They made a flight with a livetime of 17.3 days, but that one never flew above the thickest ice, which is where most of the signal should come from.

The Askaryan signal gets distorted by the antenna response, the electronics, and thermal noise. The triggering works like any multi-level trigger: L1 requires sufficient energy in one antenna and the same in its neighbors, L2 makes coincidences between adjacent L1 triggers, and L3 brings the rate down to 5 Hz from a starting 150 kHz.
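
In schematic Python, the cascade reads as below; the event fields are invented for illustration and do not reflect ANITA's actual data structures, only the logic of a rate-reducing multi-level trigger:

# L1: enough power in a single antenna; L2: coincidence among adjacent
# antennas with L1 triggers; L3: a global condition that brings the rate
# from ~150 kHz down to ~5 Hz.
def l1(event):
    return event["peak_power"] > event["threshold"]

def l2(event):
    return event["n_adjacent_l1"] >= 2

def l3(event):
    return event["consistent_with_plane_wave"]

def keep(event):
    return l1(event) and l2(event) and l3(event)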

They put a transmitter underground to generate pulses to be detected. Cross-correlating the antennas provides interferometry, which gives the position of the source. The resolution obtained in elevation is an amazing 0.3 degrees, and in azimuth 0.7 degrees. The ground pulsers make even very small effects stand out: a mere 0.2-degree tilt of the detector can be spotted by looking at the errors in elevation as a function of azimuth.
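
The cross-correlation trick is worth a sketch: the lag that maximizes the correlation between two antennas' waveforms gives the difference in arrival times, and delays from many antenna pairs triangulate the source. A minimal Python example with a synthetic nanosecond pulse (the sampling rate and noise level are assumptions of mine):

import numpy as np

rng = np.random.default_rng(1)
fs = 2.6e9                                  # samples/s, assumed
t = np.arange(512) / fs
pulse = np.exp(-((t - 50e-9) / 1e-9) ** 2)  # ~1 ns wide impulse

true_delay = 12                             # samples, antenna 2 vs 1
wf1 = pulse + 0.1 * rng.normal(size=t.size)
wf2 = np.roll(pulse, true_delay) + 0.1 * rng.normal(size=t.size)

# The peak of the full cross-correlation sits at the relative delay.
lag = np.argmax(np.correlate(wf2, wf1, mode="full")) - (t.size - 1)
print(lag, "samples =", 1e9 * lag / fs, "ns")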

First pass of the data analysis: there are 8.2 million hardware triggers, of which 20,000 point well to the ice. After requiring up-going plane waves, isolated from camps and other events, a few events remain; these could be residual man-made noise. The background estimate includes thermal noise, which is well simulated and gives less than one event after all cuts, and anthropogenic impulsive noise, such as Iridium phones, spark plugs, and discharges from structures.

Results: having seen zero vertically-polarized events surviving the cuts, they set constraints on GZK production models, the best to date in the energy range from 10^10 to 10^13 GeV.

Anita 2 has 27 million triggers of better quality, taken over deeper ice during 30 days afloat; they are still to be analyzed. Anita 1 is undergoing a second-pass, deeper analysis of its data. Given the better data, a factor of 5-10 more GZK sensitivity is expected from Anita 2.

Sanshiro Enomoto, Using neutrinos to study the Earth: Geoneutrinos. ( slides here)

Geoneutrinos are generated by the beta-decay chains of natural isotopes (U, Th, K), which all yield antineutrinos. With an organic scintillator, these are detected through the inverse beta decay reaction, which yields a neutron and a positron; the threshold is at 1.8 MeV. Uranium and thorium contribute in this energy range, while the potassium yield lies below threshold; above the thorium endpoint, only U-238 can be seen.
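
The 1.8 MeV figure is just the kinematic threshold of inverse beta decay, anti-nu_e + p -> n + e+, which one can check in two lines with the standard particle masses:

# threshold on the neutrino energy for anti-nu_e + p -> n + e+
m_p, m_n, m_e = 938.272, 939.565, 0.511            # MeV
print(((m_n + m_e) ** 2 - m_p ** 2) / (2 * m_p))   # ~1.806 MeV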

Radiogenic heat dominates the Earth's energetics. The measured terrestrial heat flow is 44 TW, of which an estimated 20 TW is radiogenic: 8 from U, 8 from Th, 3 from K. Direct geochemical probes are scarce: the deepest borehole reaches only 12 km, and rock samples probe down to only 200 km underground. The heat release from the surface peaks off the Americas in the Pacific and in the southern Indian Ocean. The core heat flow from solidification and related processes is estimated at 5-15 TW, and secular cooling at $18 \pm 10$ TW.

KamLAND has seen 25 events above backgrounds, consistent with expectations.

I did not take further notes of this talk, but I was impressed by some awesome plots of Earth planispheres with all the sources of neutrino backgrounds, meant to figure out the best place for a detector studying geo-neutrinos. Check the slides for them…

Michele Maltoni, synergies between future atmospheric and long-baseline neutrino experiments ( slides here)

A global six-parameter fit of neutrino oscillation parameters was shown, including solar, atmospheric, reactor, and accelerator neutrinos, but not SNO-III yet. There is a small preference for a non-zero theta_13, coming entirely from the solar sector; as pointed out by G. Fogli, one does not find a non-zero theta_13 angle from atmospheric data. All one can do is point out that there might be something interesting, and suggest that the experiments do their own analyses quickly.

The question is: in this picture, where many experiments contribute, is there space left for atmospheric neutrinos? What is the role of atmospheric neutrino measurements? Do we need them at all?

At first sight, there is not much left for atmospheric neutrinos. The mass determination is dominated by MINOS, and theta_13 by CHOOZ; atmospheric data dominate the determination of the mixing angle and have the highest statistics, but with the coming of the next generation this is going to change. The sensitivity of the other experiments is symmetric in some of the parameters; when atmospheric data are included, this symmetry is broken in theta_13, which allows one to distinguish between the normal and inverted hierarchies.

Atmospheric data also help in the determination of the octant of $\sin^2 \theta_{23}$ and of $\Delta m^2_{31}$; moreover, their introduction produces a modulation in the $\delta_{CP} - \sin \theta_{13}$ plane. Will this usefulness continue in the future?

Sensitivity to theta_13: apart from the hints mentioned so far, atmospheric neutrinos can observe theta_13 through matter (MSW) effects. In practice, the sensitivity is limited by statistics: at E = 6 GeV the atmospheric flux is already suppressed, and the background from $\nu_e \to \nu_e$ events strongly dilutes the $\nu_\mu \to \nu_e$ signal. Also, the resonance occurs either for neutrinos or for antineutrinos, but not for both.
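
The one-sided resonance is visible in the standard constant-density two-flavor formula, where the matter term flips sign between neutrinos and antineutrinos. A rough Python sketch, with illustrative parameters and a mantle-like density (nothing here pretends to be a full three-flavor Earth calculation):

import numpy as np

dm2 = 2.4e-3                 # eV^2, atmospheric splitting
s2 = 0.1                     # assumed sin^2(2 theta_13)
c = np.sqrt(1 - s2)          # cos(2 theta_13)

def sin2_2theta_matter(E_gev, rho_ye=2.25, antinu=False):
    # A = 2 sqrt(2) G_F N_e E, in the usual numerical form:
    A = 1.52e-4 * rho_ye * E_gev            # eV^2
    if antinu:
        A = -A
    return s2 / (s2 + (c - A / dm2) ** 2)

for E in (2.0, 7.0, 20.0):
    print(E, "nu:", sin2_2theta_matter(E),
          "nubar:", sin2_2theta_matter(E, antinu=True))
# Around E ~ 7 GeV the neutrino mixing is resonantly enhanced while the
# antineutrino one is suppressed; an inverted hierarchy swaps the roles.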

As far as timescales go, megaton detectors are still far in the future, while long-baseline experiments are starting now.

One concludes that the sensitivity to theta_13 is not competitive with dedicated LBL and reactor experiments.

The situation is completely different for other properties, since the resonance can be exploited once theta_13 has been measured: there is a resonant enhancement of neutrino (antineutrino) oscillations for a normal (inverted) hierarchy, mainly visible at high energy, above 6 GeV. The effect can be observed if the detector can discriminate charge or, when no charge discrimination is possible, if the numbers of neutrinos and antineutrinos differ.

The sensitivity to the hierarchy thus depends on charge discrimination for muon neutrinos. As for the octant: in the low-energy region (E < 1 GeV), and for theta_13 = 0, there is an excess of $\nu_e$ flux for theta_23 on one or the other side of maximal mixing. Elsewhere there are lots of oscillations, but the effect persists on average, and it is present for both neutrinos and antineutrinos. At high energy (E > 3 GeV), for non-zero theta_13, the MSW resonance produces an excess of electron-neutrino events; the resonance occurs for only one kind (neutrino vs antineutrino).

So, in summary, one can detect many features with atmospheric neutrinos, but only with some particular detector characteristics (charge discrimination, energy resolution…).

Without atmospheric data, only K2K can say something on the neutrino hierarchy at low theta_13.

LBL experiments have poor sensitivity due to parameter degeneracies, and atmospheric neutrinos contribute in this case. The sensitivity to the octant is almost completely dominated by atmospheric data, with only minor contributions from LBL measurements.

One final comment: there might be hints of the neutrino hierarchy in high-energy data. If theta_13 is really large, there can be some sensitivity to the neutrino mass hierarchy. The idea is thus to equip a part of the detector with increased photo-coverage and use the rest of the mass as a veto: the goal is to lower the energy threshold as much as possible, to gain sensitivity to the neutrino parameters with large statistics.

Atmospheric data are always present in any long-baseline neutrino detector: ATM and LBL provide complementary information on the neutrino parameters, in particular on the hierarchy and the octant degeneracy.

Stavros Katsanevas, Toward an European megaton neutrino observatory ( slides here)

Underground science has interdisciplinary potential at all scales: galactic supernova neutrinos, galactic neutrinos, SN relics, solar neutrinos, geo-neutrinos, dark matter, and cosmology (dark energy and dark matter).

LAGUNA is aimed at defining and realizing this research programme in Europe. It includes a majority of the European physicists interested in the construction of very massive detectors based on one of three liquid technologies: water, liquid argon, and liquid scintillator.

The candidates are Memphys, Lena, and Glacier. Where could we put them? The muon flux goes down with the overburden, so one has to examine the sites by their depth. In Frejus there is the possibility to put a detector between the road and the train tracks. The Frejus rock is neither hard nor soft; hard rock can become explosive because of stresses, and is not good. Another site is Pyhasalmi in Finland, but there the rock is hard.

Frejus is probably the only place where one can put water Cherenkov detectors. For liquid argon, we have ICARUS (hopefully starting data taking in May) and others (LANNDD, GLACIER, etc.). Glacier is a 70 m tank with several novel concepts: a safe LNG tank of the kind developed over many years by the petrochemical industry. R&D includes readout systems and electronics, safety, HV systems, and LAr purification. One must also think about building an intermediate-scale detector first.

The physics scope is a complementary program: Memphys has a lot of reach in searches for positron-pizero decays of protons, while liquid argon is better for K modes. Proton lifetime expectations are at the level of 10^36 years.

By 2013-2014 we will know whether $\sin^2 \theta_{13}$ is larger than zero.

The European megaton detector community (three liquids), in collaboration with its industrial partners, is currently addressing common issues (sites, safety, infrastructures, non-accelerator physics potential) in the context of LAGUNA (EU FP7). Cost estimates will be ready by July 2010.

David Cowan, The physics potential of Icecube’s deep core sub-array ( slides here)

A new sub-array in IceCube, called DeepCore (ICDC), was originally conceived as a way to improve the sensitivity to WIMPs. Denser sub-arrays lower the energy threshold, giving an order-of-magnitude decrease in the low-energy reach. There are six special strings plus seven nearby IceCube strings. The vertical spacing is 7 meters, with a 72-meter horizontal inter-string spacing: ten times the density of IceCube.

The effective scattering length in the deep ice, which is very clear, is longer than 40 meters. This improves the possibility of doing a calorimetric measurement.

The deep core is at the bottom center. The top modules of each string are used as an active veto against the background of down-going muon events; on the sides, three layers of IceCube strings provide a further veto. These beat down the cosmic-ray background by a large factor.

The ICDC veto algorithms: one runs online, computing the event light intensity, the weighted center of gravity, and the time. After a number of further steps they reach a 1:1 signal-to-noise ratio. ICDC thus improves the sensitivity to WIMPs, to neutrino sources in the southern sky, and to oscillations. For WIMPs, annihilation can occur at the center of the Earth or of the Sun: annihilations to b-bbar or tau-tau pairs give soft neutrinos, while those into W boson pairs yield hard ones. This way the reach is extended to masses below 100 GeV, at cross sections of 10^-40 cm^2.

In conclusion, ICDC can analyze data at lower neutrino energies than previously thought possible. It improves the overlap with other experiments, provides powerful background rejection, and has sufficient energy resolution to do a lot of neutrino oscillation studies.

Kenneth Lande, Projects in the US: a megaton detector at Homestake ( slides here)

DUSEL is at Homestake, in South Dakota. There are four water Cherenkov tanks in the design. Nearby is the old site of the chlorine experiment, with shafts a kilometer apart.

DUSEL will host an array of 100-150 kT fiducial-mass Cherenkov detectors, at a 1300 km distance from FNAL. The beam power goes from 0.7 MW to 2.0 MW as the project proceeds. Eventually, 100 kT of argon will be added. The picture below shows a cutaway view of the facility.

The accelerator-based goals are theta_13, the neutrino mass hierarchy, and CP violation through delta_CP. The non-accelerator program includes studies of proton decay, relic SN neutrinos, prompt SN neutrinos, atmospheric neutrinos, and solar neutrinos. They can build tanks up to 70 m wide, but settled on 50-60 m. The plan is to build three modules.

Physics-wise, the FNAL beam has oscillated and disappeared at energies around 4 GeV. The rate is 200,000 raw CC events per year assuming 2 MW power (without oscillation). Electron-neutrino appearance for neutrinos and antineutrinos as a function of energy gives the oscillation and the mass hierarchy.

The reach in theta_13 is below 10^-2. For nucleon decay they are looking in the range of 10^34 years: 300 kT over 10 years means 10^35 proton-years. The detector is also sensitive to the K-nu decay mode, at the level of 8×10^33.
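
The proton count behind these exposure figures is easy to verify (my own arithmetic, not the speaker's): 300 kT of water contains about 10^35 protons, so a decade of running probes lifetimes in the 10^35-10^36 year ballpark.

# protons (2 hydrogen + 8 in oxygen per H2O molecule) in 300 kT of water
N_A = 6.022e23
grams = 300e3 * 1e6              # 300 kT in grams
print(grams / 18.0 * N_A * 10)   # ~1e35 protons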

DUSEL can choose the overburden. A deep option can go deeper than Sudbury.

US power reactors are far from Homestake, at a typical distance of 500 km. The neutrino flux from reactors is 1/30 of that at SK.

For a SN in our galaxy they expect about 100,000 events in 10 seconds. For a SN in M31 they expect about 10-15 events in a few seconds.

Detector construction: excavation, then installation of a water-tight liner… The financial timetable is uncertain. At the moment the water is being pumped down; rock studies can start in September.

And that would be all for today… I heard many other talks, but cannot bring myself to comment on those. Please check the conference site, http://neutrino.pd.infn.it/NEUTEL09/, for the slides of the other talks!

Neutrino Telescopes Day 1 note (March 11, 2009)

Posted by dorigo in cosmology, news, physics, science.

Below are some notes I collected today during the first day of the “Neutrino Telescopes” conference in Venice. I have to warn you, dear readers, that my superficial knowledge of most of the topics discussed today makes it very likely, if not certain, that I have inserted some inaccuracies, or even blatant mistakes, in this summary. I am totally responsible for the mistakes, and I apologize in advance for whatever I have failed to report correctly. Also, please note that because of the technical nature of this conference, and the specialized nature of the talks, I have decided not to even try to simplify the material: it is thus only useful for experts.

In general, the conference is always a pleasure to attend. The venue, Palazzo Franchetti, is located on the Canal Grande in Venice. To top that, today was a very nice and sunny day. I skipped the first few “commemorative” talks, and lazily walked to the conference hall in time for the coffee break. The notes I took refer only to some of the talks, those which I managed to follow closely.

This was a discussion of the SNO experiment and a description of the new experiments that will start to operate in the expansion of the SNO laboratory. SNO is an acrylic vessel, 12 m in diameter, containing 1000 tonnes of heavy water ($D_2 O$), with some additional 1700 tonnes of water for inner shielding, and 5300 tonnes for outer shielding. 9500 photomultiplier tubes watch it, quick to record the faint neutrino signals.

The detector is located deep underground, in the Creighton mine near Sudbury, Ontario, Canada. The depth makes for smaller cosmic-ray backgrounds than at other neutrino detectors: it is a depth at which muons from neutrino interactions start to compete with primary ones.

SNO was designed to observe neutrinos in three different reactions:

1. In the charged-current weak interaction of a neutrino with a deuterium nucleus, the neutrino becomes an electron, emitting a W boson which turns the nucleus into a pair of protons. This reaction has an energy threshold of 1.4 MeV, and the electron can be measured through the Cherenkov light it yields in the liquid.
2. Neutral-current interactions, where the neutrino interacts with matter by exchanging a virtual Z boson, are possible for all kinds of neutrinos; they provide the signature of a neutron and a proton freed from the nucleus, if the incoming neutrino has an energy above 2.2 MeV.
3. Finally, elastic scattering can occur between neutrinos and the electrons of the medium, both in water and in deuterium.

SNO employs three neutron detection methods, which are “systematically different”: they rely on different physical processes and thus have different measurement systematics. First of all, in pure heavy water one can detect neutrons by their capture on deuterium, with the emission of a 6.25 MeV photon.

Adding salt to the detector yields more gamma rays from neutron capture, because the sodium chloride allows neutron capture on 35Cl, and neutral-current events can then be separated from charged-current events using event isotropy.

In phase III they inserted an array of long tubes of ultrapure Helium-3, observing neutron capture and measuring the neutral-current rates with an entirely different detection system.

The measurements showed that the CC and NC fluxes were not the same: they were in a ratio $R(CC/NC) = 0.34 \pm 0.023 \, ^{+0.029}_{-0.031}$.
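
Since the CC reaction tags electron neutrinos only, while the NC reaction counts all active flavors, this ratio is essentially a direct estimate of the electron-neutrino survival fraction. A quick check, crudely symmetrizing and combining the quoted errors in quadrature (my own back-of-envelope, not SNO's error treatment):

R = 0.34
stat, syst = 0.023, 0.030              # +0.029/-0.031 symmetrized
err = (stat ** 2 + syst ** 2) ** 0.5
print(R, "+-", round(err, 3))          # ~0.34 +- 0.038
# i.e. about two thirds of the solar nu_e arrive as nu_mu or nu_tau:
# direct evidence for flavor conversion.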

Phase III consisted in inserting 40 strings on a 1-meter-spaced grid in the vessel, for a total of 440 meters of proportional counters filled with 3He. The signal collected in phase III amounts to $983 \pm 77$ events.

Combined with the results of the KamLAND and Borexino experiments, the fit to SNO data constrains the angle $\theta_{12}$ to $34.4 \pm 1.2$ degrees, with $\delta m^2 = (7.59^{+0.19}_{-0.21})\times 10^{-5} \, eV^2$.

The future for SNO is to be filled with liquid scintillator doped with neodymium, for double beta decay studies. 150-Nd is one of the most favourable candidates for double beta decay, with a large phase space due to its high endpoint energy (3.37 MeV). The scintillator provides a long attenuation length, and it is stable for more than two years. For double beta decays they expect to reach a 0.1 eV sensitivity with a 1000-ton detector mass.

Atsuto Suzuki: KamLAND

Atsuto discussed the history and the results of the KamLAND experiment. There was a first proposal of the detector in 1994, and a full budget approval by Japan in 1997. In April 1998 the construction started, and in 1999 US-KamLAND was approved by the DOE. Data taking began in 2002. In August 2009 there will be a new budget proposal, for double beta decay studies.

KamLAND consists of a large balloon filled with liquid scintillator, watched by photomultipliers through a buffer volume; a Xenon-loaded inner vessel is foreseen for the future double beta decay phase. KamLAND detects neutrino oscillations over a >100 km baseline, exploiting the many nuclear reactors in Japan. The second goal is the search for geo-neutrinos: antineutrinos coming from radioactive decays in the Earth, plus possibly those from fission processes which could hypothetically take place at the center of the Earth.

Many reactors provide the source of neutrinos, a total of 70 GW (12% of global nuclear power) at an average distance of $175 \pm 35$ km from KamLAND. The largest systematics for reactor neutrino detection come from the knowledge of the fiducial volume (4.7%), the energy threshold (2.3%), and the antineutrino spectrum (2.5%), for a total of 6.5%.

The experiment observed neutrino disappearance, measured the parameters of neutrino oscillations, and also put an upper limit of 6.4 TW on the power of a hypothetical georeactor; theoretical models, which put that power at 3 TW, have not been excluded yet.

Gianluigi Fogli:  SNO, KamLAND and neutrino oscillations: theta_13.

Gianluigi started his talk with a flashback: four slides which were shown at NO-VE 2008, the previous edition of this conference. That talk came after the KamLAND 2008 release, but before the SNO 2008 release of results.

What one would like to know is the hierarchy (normal or inverted), the CP asymmetry in the neutrino sector, and the $\theta_{13}$ mixing. Some aspects of this picture are currently hidden below the 1-sigma level. A recent example is the slight preference for $\sin^2 \theta_{13} = 0.01$ from the combination of solar and reactor 2008 data: the individual datasets are consistent with zero, but their combination prefers a value one sigma away from it.

In the second slide from 2008, the reason was discussed. The disagreement comes from the difference between the solar data, SNO-dominated, and the KamLAND data at $\theta_{13}=0$; it is reduced for $\theta_{13} > 0$. A choice of $\sin^2 \theta_{13}=0.03$ (instead of zero) gives a better fit to the two sets of data. It is a tiny effect, but with some potential for improvement once the final SNO data and further KamLAND data become available.

The content of Fogli’s talk was organized as a time-table of eight events, in two acts.

First: in May 2008 the effect was discussed independently by Balantekin and Yilmaz. Then, also in May, the SNO-III data were released. In June, our analysis giving $\sin^2 \theta_{13} = 0.021 \pm 0.017$ went to PRL, and an independent analysis of S+K was given in August.

Concerning atmospheric and long-baseline neutrinos, there were results yielding $0.016 \pm 0.010$ from all data in our analysis, then comments on the atmospheric hint by Maltoni and Schwetz, then a new three-flavor atmospheric analysis from SK. Finally, just a month ago, we saw the first MINOS results on electron neutrino appearance.

Act one: the solar and KamLAND hint for $\theta_{13}>0$, discussed by Balantekin and Yilmaz. The release of the SNO-III data brought a strong improvement: the result is a slightly lower CC/NC ratio, so a slightly lower value of $\sin^2 \theta_{13}$ is preferred. Fogli noted that the new data are fine from a model-independent viewpoint, that is, there is internal consistency between SNO and SK, and also consistency between the neutral-current measurements and the 2005 standard solar model. On the other hand, the KamLAND data have their own internal consistency: they reconstruct the oscillation pattern through one full period. The fact that the solar and KamLAND datasets are each fine, but disagree on theta_12 unless theta_13 > 0, is thus intriguing.

Event 3: the hints of theta_13 > 0 from the global analysis. Plotting the solar and KamLAND allowed regions in the $\sin^2 \theta_{13}$ vs $\sin^2 \theta_{12}$ plane, one sees that agreement is reached only if $\sin^2 \theta_{13}$ is larger than zero. When they are combined, the best fit is more than one sigma away from zero, at $0.021 \pm 0.017$. The different correlation of the two mixing angles comes from the relative sign of the mixings in the expressions for the neutrino survival probability in SNO and KamLAND: at low energy, in vacuum, the survival probability carries an anticorrelation of $\sin^2 \theta_{12}$ and $\sin^2 \theta_{13}$; at high energy, in the adiabatic MSW regime relevant for SNO, the sign is opposite.
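
For reference, the limiting forms behind this statement, as I recall them from the standard literature (not from the slides), are $P_{ee} \approx \cos^4 \theta_{13} (1 - \frac{1}{2} \sin^2 2\theta_{12})$ for low-energy, vacuum-averaged oscillations, and $P_{ee} \approx \cos^4 \theta_{13} \sin^2 \theta_{12}$ in the high-energy adiabatic MSW regime. Increasing $\theta_{13}$ lowers both probabilities, but fitting a fixed measured value then drags $\theta_{12}$ in opposite directions in the two regimes, which is precisely the opposite correlation described above.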

Complementarity: the solar and KamLAND data taken separately prefer theta_13 = 0; combined, they are 1.2 sigma away from zero.

Event 4 in the list given above was the analysis by Schwetz, Tortola, and Valle: they also found a preference for $\theta_{13}>0$, at a slightly higher confidence level.

In conclusion, a weak preference for $\theta_{13}>0$ is established at the 1.2-1.5 sigma level. Is this preference also supported by atmospheric and accelerator data? In Fogli's paper (0806.2649) they used, as independent support for a nonzero value of the angle, an older hint coming from their analysis of atmospheric data with CHOOZ and long-baseline results.

The complication comes in Act 2: event 5 is the older but persisting hint for $\theta_{13}>0$, coming from the three-neutrino analysis of atmospheric, LBL, and CHOOZ data. Here one has to look in detail at what one means by an excess of electron events induced by three-neutrino sub-leading effects. The calculations are based on a numerical evolution of the Hamiltonian along the neutrino path in the atmosphere and through the known Earth layers; semianalytical approximations can nevertheless be useful. An important observable is the excess of expected electron events compared to the no-oscillation case.

The excess is given by the formula $N_e/N_0 - 1 = (P_{ee} - 1) + r P_{e\mu}$, where $P_{ee}$ and $P_{e\mu}$ are the oscillation probabilities and $r$ is the ratio of the muon to electron neutrino fluxes. The excess is zero when both $\theta_{13}$ and the solar $\delta m^2$ are zero, but can receive contributions otherwise.
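
The formula is simple enough to play with numerically; here is a trivial evaluation with toy probabilities (the value r ~ 2 is the standard sub-GeV flux ratio, everything else is invented for illustration):

# N_e/N_0 - 1 = (P_ee - 1) + r * P_emu, evaluated for toy inputs
def electron_excess(P_ee, P_emu, r=2.0):
    return (P_ee - 1.0) + r * P_emu

print(electron_excess(1.0, 0.0))       # no nu_e appearance: zero excess
print(electron_excess(0.97, 0.02))     # toy sub-leading effect: +1% excess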

Two kinds of matter effects take place in the propagation. Assuming a constant-density approximation and a normal hierarchy, three terms can be written down, distinguishing the theta_13 term, the solar delta-m term, and their interference. Each of the three effects can dominate in turn, and the different terms help fit the small electron excess in the sub-GeV and multi-GeV data.

The atmospheric three-neutrino analyses by the SK collaboration (hep-ex/0604011) and by Schwetz, Tortola, and Valle (0808.2016) cannot be directly compared with Fogli's, because they do not include the two sub-leading solar terms, making instead the assumption of one-mass-scale dominance.

Sticking to his own analysis, Fogli continued by taking the two hints, from the solar+KamLAND results on one side and from atmospheric+CHOOZ+LBL data on the other: together they indicate a 1.6-sigma discrepancy of theta_13 from zero. Combining all data, $\sin^2 \theta_{13} = 0.016 \pm 0.010$: this is the result of 0806.2649. Below are the results for the two angles together, showing their anticorrelation in the simultaneous determination.

Event 6 is rather recent: in December of last year, Maltoni and Schwetz published 0812.3161, which includes a discussion of the preliminary SuperKamiokande-II data. Using SK-I data they find at most a 0.5-sigma effect from atmospheric neutrinos plus CHOOZ data. This is weaker than Fogli's 0.9 sigma, but shows similar qualitative features.

Event 7: a discussion of the data of SK-I, SK-II, and maybe SK-III, even though none of this is published yet. There exist ongoing three-flavor analyses, reported in recent PhD theses using SK I+II data (Wendell, Takenaga). Unfortunately, none of the above analyses allows both $\theta_{13}$ and the solar $\delta m^2$ to be simultaneously non-zero, and thus they do not include the interference effects linear in theta_13, which may play some non-trivial role.

Concerning the sub-GeV electron excess, the effect persists in phases I and II, but the slight excess of up-going multi-GeV events is present in SK-I and not in SK-II. This downward fluctuation may disfavor a non-zero value of theta_13, as noted by Maltoni and Schwetz.

Two SK-III distributions presented at Neutrino 2008 by J. Raaf show that the slight excess of up-going multi-GeV events seems to be back, together with the persisting excess in the sub-GeV data.

So the question is: SK-III shows both effects; can this be interpreted as something beyond statistical fluctuations? Answering requires a refined statistical analysis with the complete set of SuperKamiokande data.

Currently there is an impressive number of bins in energy and angle, and 66 sources of systematics, all of which need to be handled carefully. Such a level of refinement is difficult to reproduce outside the collaboration; in other words, independent analyses of atmospheric data searching for small effects at the 1-sigma level are now harder to perform. So it will be important to see the next official SK data release, and especially the SK oscillation analysis, hopefully including a complete treatment of three-flavor oscillations with both parameters allowed to be larger than zero.

In the meantime, Fogli noted that he does not have compelling reasons to revise his 0.9-sigma hint of theta_13 coming from published SK-I data.

Finally, Event 8: this last one is very recent, and concerns the first MINOS results on electron-neutrino appearance. These preliminary results have been released very recently, and it would be unfair to anticipate results and slides that will be shown later in this workshop; still, Fogli could not help noticing that the MINOS best fit for theta_13 sits around the CHOOZ limit, and is away from zero at 90% C.L.

If we see the glass as half-full, then we might have two independent 90% C.L. hints in favor of theta_13 > 0: one coming from Fogli's global analysis of 2008, and one coming from MINOS, which can be roughly symmetrized and approximated in the form $\sin^2 \theta_{13} = 0.05 \pm 0.03$. A combination at face value gives $0.02 \pm 0.01$, a 2-sigma indication of a non-zero value of this important angle. In other words, the odds against a null theta_13 are now 20 to 1.
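
The face-value combination is a simple inverse-variance weighted average, which indeed lands on the quoted number (Gaussian approximation, my own check):

hints = [(0.016, 0.010),   # global analysis, 0806.2649
         (0.05, 0.03)]     # symmetrized MINOS-based estimate
w = [1.0 / s ** 2 for _, s in hints]
mean = sum(wi * x for (x, _), wi in zip(hints, w)) / sum(w)
sigma = (1.0 / sum(w)) ** 0.5
print(round(mean, 3), "+-", round(sigma, 3))   # ~0.019 +- 0.009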

G.Testera:  Borexino

Borexino is a liquid scintillator detector. The active volume is filled by 270 tonnes of liquid scintillator contained in a thin nylon vessel, and the emitted light is seen by photomultiplier tubes. The outer volume is filled by the same organic material, but with a quencher added in the buffer region. Water is used as a shield; the tubes watching it look for Cherenkov light and are used as an active muon veto. Borexino is a simple detector in principle, but in practice the radiopurity requirements are tough to comply with.

The physics goals are a real-time measurement of the flux and spectrum of solar neutrinos in the MeV and sub-MeV range. Why measure solar neutrinos at low energy? The LMA-MSW model predicts a specific behavior of the survival probability for the various types of neutrinos emitted by the Sun; the predicted shape as a function of energy shows a larger survival probability at lower energy.

All data before Borexino were taken at higher energies, so Borexino wants to measure the shape of the survival probability as a function of energy by going lower. The measurement can constrain additional oscillation models. If we assume that neutrinos oscillate and take the survival probability data, we get the absolute neutrino flux, and we might be able to measure the CNO component of the neutrino flux; this can help constrain the solar models.

Borexino can also see antineutrinos (geo-neutrinos); at Gran Sasso this is relatively easy, because the background from reactor antineutrinos is small. Statistics are needed, though: several years of data taking for a significant sample. The signal-to-noise ratio provided by the apparatus is 1.2. The detector also has sensitivity to supernova neutrinos, and Borexino is thus entering the SNEWS community.

The results of Borexino are complementary to others. They have been taking data since mid-March 2007, with about 450 days of livetime so far. The detection process is neutrino elastic scattering on electrons. The scintillator yield is high, 500 photoelectrons per MeV, which gives a high energy resolution and a low threshold; there is, however, no information on the direction of the neutrinos. The scintillator is fast, so positions can be reconstructed from time measurements. The different response to alpha and beta particles allows one to distinguish the two. The shape of the energy spectrum is the only signature they can recognize.

The story of the cleanliness of Borexino encompasses 15 years of work: careful selection of construction materials, special procedures for fluid procurement, and scintillator and buffer purification during filling. The background from U and Th is very small, smaller than the initial goals: the purity of the liquid scintillator is very high.

If there were only a neutrino signal, the simulation shows that the beryllium-7 neutrino signal would be very well distinguishable: it shows a flat spectrum with an upper edge at about 350 photoelectrons (roughly 0.7 MeV, given the light yield quoted above). 14C lies at smaller energies. 11C, at high energy, cannot be eliminated; it can be tagged to some extent, but not completely removed. At still higher energies there is the signal from carbon-10.

In 192 days of livetime there is a big polonium peak and the edge of the beryllium region, together with a contribution from krypton; the data also indicate the presence of bismuth-210. The rate of neutrinos from 7Be is 49 counts per day per 100 tons: they do see the effect of oscillations, because otherwise they would see $75 \pm 4$ counts. The no-oscillation hypothesis is rejected at the 4-sigma level. This is the first real-time measurement of the oscillation of 7Be neutrinos.

The largest errors come from the fiducial mass ratio and from the detector response function; these amount to 6% each.
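
A back-of-envelope check that these numbers hang together with the 4-sigma claim above (my own crude estimate: quoted systematics only, added in quadrature, statistical errors ignored):

observed, expected, err_expected = 49.0, 75.0, 4.0
err_observed = observed * (0.06 ** 2 + 0.06 ** 2) ** 0.5   # two 6% errors
sig = (expected - observed) / (err_expected ** 2 + err_observed ** 2) ** 0.5
print(round(sig, 1), "sigma")    # ~4.5, in the ballpark of the quoted 4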

Neutrino interactions in the Earth could lead to a regeneration of electron neutrinos: the solar neutrino flux would then be higher at night than during the day, due to the geometry. In the energy region of 7Be neutrinos they expect a very small effect; a larger effect would be expected in the LOW solution, which is now excluded.

A new preliminary result is the day-night asymmetry for 7Be solar neutrinos, based on 422 days of livetime. In the region where neutrinos contribute, no asymmetry is seen.

Flux of boron-8 neutrinos with a low threshold: Borexino can go lower in energy threshold than other experiments, down to 2-3 MeV. After subtracting the muon contribution they see the oscillation of 8B neutrinos. Putting these together with the 7Be result, more points can be added to the survival probability plot, and they describe well the predicted curve as a function of energy.

In conclusion, Borexino claims the first real-time detection of the 7Be flux.

M.Nakahata: Superkamiokande results in neutrino astrophysics.

Kamiokande, which ran from 1983 to 1996, was a 16 m high, 15.6 m diameter tank watched by more than a thousand large photomultiplier tubes. SK started in 1996: a 50,000-ton water tank, with a 32,000-ton photosensitive volume.

After the accident they took data as SK-II; then 2006 brought SK-III and new electronics, and since September 2008 it is SK-IV. The original purpose of Kamiokande was the search for proton decay. Protons could be thought to decay to a positron plus a neutral pion, but they wanted to measure different branching ratios too, so they made a detector with large photocathode coverage.

As a neutrino telescope, its first advantage was directionality, provided by the imaging Cherenkov technique; the large photocollection efficiency is also useful for detecting low-energy neutrinos. A second item is energy information: the number of Cherenkov photons is proportional to the energy of the particle. Another advantage is particle identification: from the diffuseness of the ring pattern they can distinguish electron from muon events, with a misidentification probability below 1%, which is very important when discussing atmospheric neutrinos.

The first solar neutrino plot at Kamiokande came from 450 days of exposure, with an E > 9.3 MeV threshold. They saw an excess of neutrinos coming from the Sun, but could not say much about its size. In SuperK they had much larger numbers: 22,400 solar neutrino events, 14.5 per day, giving a very precise flux, with a statistical accuracy of 1% and systematics of about 4%. The SK information yielded the 8B flux and the $\nu_\mu$ and $\nu_\tau$ fluxes.

SK will measure the survival probability of solar neutrinos as a function of energy, going down to 4 MeV, and measure the spectrum distortion.

From the supernova SN1987A, Kamiokande observed 11 events in 13 seconds; another 11 events were seen by Baksan and IMB-3. Assuming a new supernova exploded at 10 kpc, SK could measure energy information directly from the reaction, and the event rate would discriminate among models.

Adding gadolinium to the water can reduce backgrounds, because neutron capture then yields a gamma-ray signal of 8 MeV, correlated in time (30 msec delay). If the invisible-muon background can be reduced by a factor of five using this neutron tagging, then with 10 years of SK data the signal would amount to 33 events over 27 background events in the 10-30 MeV energy range: they could thus see SN relic neutrinos. But first they must study the water transparency, corrosion in the tank, etcetera, resulting from the addition of gadolinium.

The atmospheric neutrino anomaly in Kamiokande: the mu-e decay ratio was the first evidence. Data from 1983 to 1985 allowed a measurement of the ratio, giving 60% of the expectation for mu/e; a paper was published in 1988. In 1994 they obtained a zenith-angle distribution for multi-GeV events. In SuperK they got much better results, with sub-GeV electron-like and muon-like samples.

Oscillations agree very well with the observed data. The latest two-flavor oscillation analysis gives $\delta m^2 = 0.0021 \, eV^2$, with a mixing angle consistent with maximal mixing ($\sin^2 2\theta = 1.0$).

And that is all for today!

Live video streaming of single top observation NOW (March 10, 2009)

Posted by dorigo in news, physics, science.