
D0 bags evidence for semileptonic dibosons October 24, 2008

Posted by dorigo in news, physics, science.

A week ago I discussed here the recently approved analysis by which CDF shows a small hint of WW/WZ signal in their Run II data, with one W boson decaying to a lepton-neutrino pair, and the other boson (either a W or a Z) producing a pair of hadronic jets.

Such a process is very hard to isolate in hadronic collisions, owing to the large irreducible background from events with one leptonic W decay accompanied by QCD radiation off the initial state of the collision. In fact, despite being sought by many in Run I, no appreciable signal had surfaced in Tevatron data from either CDF or D0.

Now, D0 has really bagged it. They used a more powerful selection method than the one used by CDF, and were a bit bolder in their use of Monte Carlo simulations. The result is that they find a very significant excess, amounting to roughly 960 events out of a total of nearly 27,000.

I encourage those readers who are unfamiliar with the basics of vector boson production at hadron colliders to read the introductory part of my earlier post on this topic, linked above. Here I will avoid repeating that introduction and concentrate instead on the analysis details.

D0 uses a total of 1.1 inverse femtobarns of 1.96 TeV proton-antiproton collisions, acquired during Run II of the Tevatron. The data samples are collected by triggers selecting a high-energy electron or muon; a further requirement of a transverse energy imbalance of 20 GeV or more then characterizes the leptonic decay W \to l \nu_l of one vector boson. Finally, the transverse mass of the lepton plus missing transverse energy system has to be larger than 35 GeV, reducing backgrounds from non-W events.

[The transverse mass is computed by neglecting the z-components of the particle momenta: if both particles are emitted perfectly transverse to the beam direction, transverse and invariant mass coincide. This is forced by the absence of a z-measurement of the neutrino momentum, since the energy imbalance the neutrino creates by escaping the detector cannot be measured along the proton-antiproton axis.]
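For reference, in terms of the lepton transverse momentum, the missing transverse energy, and the azimuthal angle \Delta\phi between them, the standard definition is

M_T = \sqrt{2 p_T^l E_T^{miss} (1 - \cos\Delta\phi)},

which indeed reduces to the invariant mass when both particles carry no longitudinal momentum.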

Besides characterizing the leptonic W decay, two jets with transverse energy above 20 GeV are required. After this selection, the data contain a non-negligible amount of non-W backgrounds, consisting of QCD multijet events where the leptonic W is a fake; but the bulk is due to W+jets production, where the jets arise from QCD radiation off the initial partons participating in the hard interaction. Several Monte Carlo samples are used to model the latter background process, while the former is handled by loosening the lepton identification criteria in the data: the looser the lepton requirement, the larger this contamination, so that for very loose electron and muon candidates the samples are almost purely QCD multijet events.
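Just to make the selection flow concrete, here is a minimal sketch in Python of the cuts described above, applied to a toy event record; the event structure, the field names, and the lepton threshold are hypothetical illustrations, not D0 code:

```python
import math

def transverse_mass(lep_pt, met, dphi):
    """M_T = sqrt(2 pT MET (1 - cos dphi)), neglecting z-components."""
    return math.sqrt(2.0 * lep_pt * met * (1.0 - math.cos(dphi)))

def passes_selection(event):
    """W(-> l nu) + two-jet selection with the thresholds quoted in the text."""
    if event["lepton_pt"] < 20.0:                       # illustrative lepton cut, GeV
        return False
    if event["met"] < 20.0:                             # missing E_T of 20 GeV or more
        return False
    if transverse_mass(event["lepton_pt"], event["met"],
                       event["dphi_lep_met"]) < 35.0:   # M_T above 35 GeV
        return False
    n_jets = sum(1 for et in event["jet_et"] if et > 20.0)  # jets with E_T > 20 GeV
    return n_jets >= 2
```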

Signal and backgrounds are separated using a multivariate classifier which combines the information of several kinematic variables. This is the Random Forest algorithm, which I have had occasion to discuss in the past (two years ago a student of mine used it to discriminate hadronic top events in a similar dataset in CDF). The Random Forest output is highest (close to one) for signal events, while backgrounds are assigned values closer to zero. The result of the classification is shown below: the excess at high values of the RF output is due to the diboson signal (in red, the signal content estimated by the fit).
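For readers curious about the technique itself, here is a minimal, self-contained sketch of a Random Forest discriminant in Python with scikit-learn; the kinematic variables are random placeholders, and nothing here reproduces the actual D0 training:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for the classification: train a Random Forest on a few
# kinematic variables (random placeholders standing in for, e.g., dijet
# mass, jet pT, lepton pT) to separate "signal" from "background" events.
n = 5000
signal = rng.normal(loc=[80.0, 60.0, 40.0], scale=[15.0, 20.0, 15.0], size=(n, 3))
background = rng.exponential(scale=[60.0, 40.0, 30.0], size=(n, 3))

X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = signal, 0 = background

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# The classifier output plays the role of the "RF output" in the plot:
# close to 1 for signal-like events, close to 0 for background-like ones.
rf_output = clf.predict_proba(X)[:, 1]
print("mean RF output, signal:     %.2f" % rf_output[:n].mean())
print("mean RF output, background: %.2f" % rf_output[n:].mean())
```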

A fit to the RF output provides the normalization of the signal and background components, as shown above. Notice the blue “envelope” in part (b) of the plot: it is the systematic uncertainty due to the background RF templates. Of course, the size of the blue band is deceiving, since the shape uncertainties are fully correlated from bin to bin; but the signal does stand out on top of it.
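And here is a toy version of such a template fit: with fixed-shape signal and background templates for the classifier output, the two normalizations are obtained by minimizing a binned Poisson negative log-likelihood. The template shapes and the minimizer choice are illustrative assumptions, not the D0 procedure:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Fixed-shape "RF output" templates: signal peaks near 1, background near 0.
bins = 20
x = np.linspace(0.0, 1.0, bins)
sig_template = x**3;        sig_template /= sig_template.sum()
bkg_template = (1 - x)**2;  bkg_template /= bkg_template.sum()

# Toy pseudo-data with yields of the same order as quoted in the text.
true_sig, true_bkg = 960.0, 26000.0
data = rng.poisson(true_sig * sig_template + true_bkg * bkg_template)

def nll(pars):
    """Binned Poisson negative log-likelihood, up to a constant."""
    n_sig, n_bkg = pars
    mu = n_sig * sig_template + n_bkg * bkg_template
    mu = np.clip(mu, 1e-9, None)           # guard against log(0)
    return np.sum(mu - data * np.log(mu))

fit = minimize(nll, x0=[500.0, 25000.0], method="Nelder-Mead")
print("fitted signal: %.0f   fitted background: %.0f" % tuple(fit.x))
```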

A plot of the dijet mass distribution confirms the interpretation, as shown below. The bottom panel shows the background-subtracted data (points with error bars), which compare well with the shape of the expected diboson contribution. D0 finds a combined WV (WW+WZ) cross section of 20.2 \pm 1.4 \pm 3.6 \pm 1.2 pb, where the first uncertainty is statistical, the second systematic, and the third reflects the uncertainty on the integrated luminosity of the dataset used in the search. This compares well with the theoretical prediction of \sigma(WV)=16.1 \pm 0.9 pb.
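To put the agreement in numbers: adding the three experimental uncertainties in quadrature gives \sqrt{1.4^2 + 3.6^2 + 1.2^2} \approx 4.0 pb, so the 4.1 pb difference between measurement and prediction corresponds to about one standard deviation once the 0.9 pb theoretical uncertainty is also folded in.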

In the plot above, the combined W/Z signal peaks at about 80 GeV, with a resolution of roughly 15 GeV; the background template uncertainty is again shown in blue, once more underlining the difficulty of this measurement, which finds a signal excess exactly where the backgrounds peak.

One question I often hear asked about plots such as the one above is: “why do the W and Z bosons peak at the same mass value? They have a 10.7 GeV mass difference, after all”. True, but the dijet mass resolution of the D0 detector is insufficient to tell the two signals apart, and what one observes is the combined shape. To be more precise, one should also add that the Z contribution in the plot is much smaller than the W one (about one third of it). Further, one should point out that the heavy flavors produced by the Z boson yield an underestimated Z mass reconstruction, due to the neutrinos often produced in the semileptonic decays of b- and c-quark jets. It is a fact that the Z \to b \bar b decay peaks at about 83 GeV after calibration of the generic jet response, due to that effect alone… A toy demonstration of the merging of the two peaks is shown below.
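As a quick illustration of the resolution argument (a numerical toy assuming Gaussian smearing and PDG-like masses, not a simulation of the D0 detector):

```python
import numpy as np

rng = np.random.default_rng(2)

# Smear the two dijet mass peaks with a 15 GeV resolution and combine them
# with the roughly 3:1 W-to-Z weight quoted in the text: the two components
# merge into a single broad bump, which is what the data show.
m_w, m_z, sigma = 80.4, 91.2, 15.0
n_w, n_z = 30000, 10000            # Z contribution about one third of the W one

dijet_mass = np.concatenate([
    rng.normal(m_w, sigma, n_w),
    rng.normal(m_z, sigma, n_z),
])

counts, edges = np.histogram(dijet_mass, bins=60, range=(20.0, 160.0))
peak = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
print("combined peak near %.0f GeV; the two components are not resolved" % peak)
```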

I will let the authors point out that “This work further provides a validation of the analytical methods used in searches for Higgs bosons at the Tevatron”, as they do in the conclusions of their paper. Indeed, the advanced methodologies by which the Tevatron experiments are setting more and more stringent limits on Higgs boson production are perceived by some as a bit incautious. It instead transpires that things are well under control, once one demonstrates that a signal known to be there can indeed be extracted from samples with a very small signal-to-background ratio, as is the case in all Higgs searches.

Comments

1. Andrea Giammanco - October 25, 2008

Hi Tommaso,

> and were a bit more bold in their use of Monte Carlo simulations.

Do you mean that their analysis is more model-dependent?

> the heavy flavors produced by the Z boson will produce a underestimated Z mass reconstruction, due to the neutrinos often produced in the semileptonic decay of b- and c-quark jets.

I vaguely remember some proposal to have a different set of calibrations for jets with an identified lepton (muon in particular, since identifying electrons in jets is much less precise). This would take into account this specific problem.
Why aren’t they used? Maybe the statistics are not enough? (I mean statistics in the calibration samples, not statistics for this specific study.)
Anyway, I think the neutrinos are not the only reason for a miscalibration of b-jets… Even in hadronically-decaying b’s, the number of tracks and their share of the energy are very peculiar with respect to normal jets (fewer tracks, with a few tracks carrying most of the energy, so if the calorimeter is not perfectly linear the response is different). Or am I wrong?

2. dorigo - October 26, 2008

Hi Andrea,

Hmm, well. In a sense yes, they relied on Monte Carlo simulations more than CDF did in their analysis. However, I am not saying their analysis is worse; quite the contrary. I support D0’s way of performing this analysis, which searches for a process that must exist and that nobody doubts is there. In fact, I do note at the end that it is a good check of the methods now en vogue for the Higgs boson search.

As for the b-jet energy scale, and why b-jets are peculiar: of course you are right. I have studied that very issue in some detail over the last ten years. In fact, exactly 10 years ago I used soft-lepton-based corrections to the b-jet energy, showing that the handful of Z->bb events in CDF Run I data benefited from the procedure, improving both scale and resolution. See the plots on pages 88-89 of my PhD thesis.

And yes, you are right, even hadronic b-jets are not ordinary ones. Different hadronization, a different fragmentation function, the heavy quark mass: these all have an impact on the calorimeter response, as you say…

Cheers,
T.

3. Andrea Giammanco - October 26, 2008

Thanks a lot for the answer.
By the way, I didn’t intend to diminish the method in any way! I was asking just to be sure that I had understood what you meant, i.e. that the MC is used boldly in the sense that more information on the kinematics is squeezed out of the models, as in the Matrix Element method for the top mass. Is that correct?

4. Luboš Motl - October 26, 2008

Thanks for linking to the paper that has, sorry to say, told me more information per minute than your sophisticated text.

By the way, do you still like to link to xxx.lanl.gov? Cute. Normal people have been calling the main server arxiv.org for about a decade.😉

5. dorigo - October 26, 2008

Understood, Andrea. Yes, that is what I meant… Before the top discovery, when I started my education in hadron collider physics, people at the Tevatron did not trust Monte Carlo simulations in the least (that was after the UA1 debacle…). A lot of water has flowed under the bridge since then.

Hi Lubos,

of course, for people with your level of expertise my posts may well sound uninformative. I still welcome you when you visit, especially since I note that you have considerably increased the calmness of your comments.

Cheers,
T.


