
New SM Higgs limits from D0

March 16, 2007

Posted by dorigo in news, personal, physics, science.

I just read the conference paper of the 1/fb-based search for associated WH production performed by D0 for winter 2007, and I learned a few things.

The D0 search is a tidy analysis. They have put together a very efficient neural-network B-tagging algorithm, which is the heart and bones of their counting experiment. The algorithm decides whether a jet is likely to have originated from b-quark fragmentation by using seven different observable quantities, related to the long lifetime of B-hadrons and their decay characteristics.

Correct and efficient identification of b-quark jets is crucial for low-mass Higgs boson searches because a light Higgs mostly decays to a pair of those things, and the relative rarity of b-quark jets in competing processes allows one to reduce the backgrounds in the final data set.

[A short explanation of what I mention above: when b-quarks are emitted in Higgs decay or other high-energy processes, they create a stream of hadrons. The one carrying the largest fraction of the b-quark momentum is usually the one containing the original quark, and it is generically called a “B-hadron”. This particle lives a very long life by subatomic standards: of the order of a trillionth of a second. This allows it to travel several millimeters before disintegrating into lighter bodies, and the precise backward tracing of the latter allows the B-tagging algorithm to figure out whether it was indeed a long-lived b-quark that created the jet.]
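To see why millimeters is the right scale, here is a back-of-the-envelope sketch in Python; the lifetime and the B-hadron energy below are typical illustrative values of my choosing, not numbers from the D0 analysis.

```python
# Back-of-the-envelope estimate of how far a B-hadron travels before decaying.
# The numbers are illustrative: tau ~ 1.5 ps is a typical B-hadron lifetime,
# and the energy assumes a b-jet of the kind produced at the Tevatron.

C = 3.0e8          # speed of light, m/s
TAU = 1.5e-12      # typical B-hadron proper lifetime, s (~1.5 picoseconds)
M_B = 5.3          # B-hadron mass, GeV
E_B = 40.0         # assumed B-hadron energy in a typical b-jet, GeV

gamma = E_B / M_B                    # relativistic time-dilation factor
beta = (1.0 - 1.0 / gamma**2)**0.5   # velocity as a fraction of c
decay_length = beta * gamma * C * TAU

print(f"mean decay length: {decay_length * 1000:.1f} mm")
# -> about 3.4 mm, exactly the scale a silicon vertex detector can resolve
```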

The D0 B-tagging algorithm works in two different modes, loose and tight. A loose B-tag is 70% efficient on well-behaved jets, but accepts 4.5% of false positives (light-quark or gluon jets incorrectly tagged); the tight B-tag has a lower 48% efficiency, but sports a tenfold increase in background rejection. These working points have been optimized for the Higgs search: the loose tagger is used to select events with two B-tags, and the tight tagger is applied to the remaining events. This combination is predicted to yield the best results.
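Treating the two jets as independent, a few lines of Python show why this combination works. The loose-tag numbers are the ones quoted above; the tight-tag mistag rate is my assumption, read off the quoted tenfold rejection improvement.

```python
# Rough per-event numbers for the two tagging categories, treating the two
# jets as independent. Loose figures come from the text; the tight mistag
# rate is an assumption (ten times fewer fakes than the loose tagger).

EFF_LOOSE, FAKE_LOOSE = 0.70, 0.045   # per-jet: real b-jets / light jets
EFF_TIGHT, FAKE_TIGHT = 0.48, 0.0045  # assumed: 10x fewer fakes than loose

# Double loose tag: both jets must pass the loose tagger.
sig_double = EFF_LOOSE**2             # ~49% of H->bb signal events
bkg_double = FAKE_LOOSE**2            # ~0.2% of light-jet background

# Single tight tag: at least one of the two jets passes the tight tagger.
sig_single = 1 - (1 - EFF_TIGHT)**2   # ~73%
bkg_single = 1 - (1 - FAKE_TIGHT)**2  # ~0.9%

print(f"double loose: S eff {sig_double:.2f}, B eff {bkg_double:.4f}")
print(f"single tight: S eff {sig_single:.2f}, B eff {bkg_single:.4f}")
# In the real analysis the two categories are made exclusive; this sketch
# only shows why a double loose tag crushes light-jet backgrounds so hard.
```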

To select WH candidates, D0 collects events containing a W signal (a well-identified charged lepton of high energy and a large amount of missing transverse momentum from the escaping neutrino) plus two or three jets. Before applying the B-tagging algorithm, the data are studied and understood in terms of their main components with the help of Monte Carlo simulations of the contributing processes (W+jets production, top pair production, non-W QCD events, and the like). Then, a double loose B-tag or a single tight B-tag is required, and a comparison of expected and observed events is performed.
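In code, the selection reads roughly as follows. This is only a schematic: the thresholds are placeholders of a plausible order of magnitude rather than D0’s actual cuts, and the `event` object with its attributes is hypothetical.

```python
# Schematic of the WH candidate selection described above. Thresholds and
# the `event` interface are placeholders, not D0's actual cuts or code.

def select_wh_candidate(event):
    """Return the tag category of an event, or None if it fails selection."""
    # W signal: one well-identified high-energy charged lepton plus
    # large missing transverse momentum from the escaping neutrino.
    if not (event.has_isolated_lepton(pt_min=20.0) and event.met > 25.0):
        return None

    # Two or three reconstructed jets above threshold.
    jets = [j for j in event.jets if j.pt > 20.0]
    if len(jets) not in (2, 3):
        return None

    # Tagging categories, applied in order: double loose tag first,
    # then a single tight tag on whatever events remain.
    if sum(1 for j in jets if j.btag_loose) >= 2:
        return "double_loose"
    if any(j.btag_tight for j in jets):
        return "single_tight"
    return None
```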

Below you can see the dijet invariant mass distribution of the double B-tagged candidates. The expected Standard Model Higgs signal, overlaid on the sum of the contributing backgrounds in light brown, is multiplied by a factor of ten!

And here are the numbers of candidates for the more sensitive samples, the double B-tagged ones: D0 observes 222 and 151 events with two and three jets, respectively, when 220±31 and 166±28 are expected. No excess is seen here, nor in the single B-tagged sample, and the expected Higgs contribution to these data is indeed too small to show up. So they proceed to set 95% confidence level limits on the WH cross section as a function of the Higgs boson mass (a parameter on which the cross section depends).
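For a flavor of how such a limit arises, here is a minimal counting-experiment sketch. It ignores the systematic uncertainties and the dijet-mass shape information the real analysis exploits, so take it as an illustration of the logic only.

```python
# Minimal 95% CL upper limit for a counting experiment: find the smallest
# signal s such that observing <= n_obs events would happen less than 5%
# of the time if the true mean were s + background. No systematics here.

import math

def poisson_cdf(n, mu):
    """P(k <= n) for a Poisson distribution with mean mu."""
    term, total = math.exp(-mu), 0.0
    for k in range(n + 1):
        total += term
        term *= mu / (k + 1)
    return total

def upper_limit_95(n_obs, bkg):
    s = 0.0
    while poisson_cdf(n_obs, s + bkg) > 0.05:
        s += 0.01
    return s

# Numbers from the 2-jet double-tagged sample quoted above:
print(upper_limit_95(222, 220.0))  # roughly 30 signal events excluded
# Dividing the excluded yield by efficiency x luminosity would turn
# this event count into a cross-section limit.
```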


You can see the observed limit as a solid red line in the graph above, compared to the expected limit (dashed red line) and to previous results in the same final state by CDF and D0. I have to say the labeling of these lines is really confusing – I hope D0 will come up with an improved version of this plot soon. In any case, the important point is that the limit is still far above the predicted cross section (black dashed line on the bottom), meaning that the data are still insufficient to provide any information on the existence, or the absence, of the Standard Model Higgs.

And in fact, seeing more and more plots like the one above is starting to wear out my enthusiasm. I have always said that the Tevatron has a chance at the Higgs discovery, and I still believe it, but getting any of those limit lines down to the predicted cross section is going to be impossible even with a six-fold increase in luminosity – which is what we hope to collect by the end of 2009. Of course, the only way for the Tevatron experiments to be in a position to say something meaningful about the Higgs is to join forces and channels: the combination of many different results does improve matters dramatically (indeed, the Higgs boson can be sought in several different final states, each providing independent sensitivity). And yet…
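The arithmetic behind my pessimism is simple: in a background-dominated search the expected limit improves only like one over the square root of the luminosity, so six times the data buys a factor of about 2.4. The factor of ten below is illustrative, not read off the D0 plot.

```python
# Rough scaling of a background-dominated limit with luminosity: the
# sensitivity grows like sqrt(L), so the limit shrinks like 1/sqrt(L).

current_ratio = 10.0   # illustrative: limit sits ~10x above the prediction
lumi_gain = 6.0        # hoped-for increase in data by the end of 2009
improved_ratio = current_ratio / lumi_gain**0.5

print(f"after x{lumi_gain:.0f} luminosity: {improved_ratio:.1f}x above prediction")
# -> about 4.1x: better, but still well short of the predicted cross section
```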

I imagine a situation in which ten different analyses each see some small deviation – really small, at the level of less than one sigma each; or, more plausibly, let’s imagine some searches see a 2-sigma excess and a few others a 1-sigma deficit. Would I believe we had found the Higgs if their statistical combination yielded a 3-sigma, or even a 4-sigma, discrepancy with respect to the predicted backgrounds (really, 4-sigma is already more than we can hope for)?
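To put numbers on that thought experiment: a simple way to combine independent significances is Stouffer’s method, where the combined z-score is the sum of the individual ones divided by the square root of their number. This is only a toy; the real Tevatron combination weights channels by sensitivity and treats correlated systematics properly.

```python
# Stouffer's method for combining independent, equally-weighted z-scores.
# A toy illustration of the scenario above, not the Tevatron procedure.

import math

def stouffer(z_scores):
    return sum(z_scores) / math.sqrt(len(z_scores))

# Ten analyses, each with a sub-1-sigma excess:
print(f"{stouffer([0.9] * 10):.1f} sigma")  # ~2.8 sigma combined

# The messier case: some 2-sigma excesses, a couple of 1-sigma deficits:
print(f"{stouffer([2.0, 2.0, 2.0, -1.0, -1.0, 0.5, 0.5]):.1f} sigma")  # ~1.9
```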

I think that, by scrutinizing each result with care, I could convince myself that the analyses had been done perfectly and that the statistical combination was sound. Nevertheless, I doubt I would then take the numerical combination and say “this is it”. I fear I would need something much, much more definite…

Sometimes I think I am growing old!
