Dzero’s new limits on SM Higgs cross section
April 23, 2007. Posted by dorigo in news, physics, science.
The D0 collaboration has recently released its latest results on Standard Model Higgs boson searches. Ground is being broken, although not fast enough to satisfy those of us anxiously waiting for the cross section limits to start excluding mass values still allowed by LEP II: for now, the limits set on the Higgs production rate are still above the rate at which that particle is expected to show up.
The problem is that the Higgs boson is a rare particle. Proton-antiproton collisions do not normally produce it – that is, they do, but only once in a trillion times or so at the energy provided by the Tevatron accelerator. The signal of Higgs boson production and decay is often quite striking and well distinguishable, but it is so rare that amassing as large a dataset as possible is crucial for a positive identification.
D-zero has now analyzed an integrated luminosity of about one inverse femtobarn of data: that means, on average, one event expected from any process with a cross section Sigma of one femtobarn, according to the formula
N = Sigma x Luminosity,
where N is the number of events expected for a process of cross section Sigma [in units of cm^2], and Luminosity is the collected dataset size [in units of cm^-2]. When I say “one femtobarn”, I am making it easier for physicists and harder for everybody else: a femtobarn is 10^-39 cm^2, a mindbogglingly small area. We are accustomed to working with those units, you are not… Sorry, buddy.
Anyway, if Higgs production has a cross section of 100 femtobarns (say), then in a dataset of one inverse femtobarn there should be 100 Higgs events, right? Right… Save that we have said how many Higgses were produced, not how many were fully reconstructed and collected by the data acquisition system. The difference may well be a factor of two or three in some particular cases.
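To make the bookkeeping concrete, here is the arithmetic in a few lines of Python; the cross section and the efficiency below are made-up numbers for illustration, not D-zero's actual values:

```python
# Toy illustration of N = Sigma x Luminosity, with a reconstruction
# efficiency thrown in. All numbers are assumptions, not D0 values.
sigma_fb = 100.0       # assumed Higgs cross section, in femtobarns
luminosity_ifb = 1.0   # integrated luminosity, in inverse femtobarns
efficiency = 0.4       # assumed trigger + reconstruction efficiency
                       # (the factor-of-two-or-three loss mentioned above)

n_produced = sigma_fb * luminosity_ifb   # events produced
n_collected = n_produced * efficiency    # events actually usable
print(n_produced, n_collected)
```

With a 60% loss, the nominal 100 produced events shrink to 40 usable ones – which is why efficiency matters as much as luminosity.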
D-zero analyzed 11 different final states of Higgs production. Three of those correspond to the so-called “diboson” signature arising from the H->WW decay. In that case, you search for two charged leptons produced by the subsequent (actually, instantaneous) W decays, and try to discriminate the signal from “ordinary” pairs of W bosons produced by electroweak boson pair production processes, which have a cross section a hundred times higher and pollute your search sample. Why did I say three final states? Because from two W bosons each decaying to either electron-neutrino or muon-neutrino, you can have two electrons, an electron and a muon, or two muons. Tau leptons or jets – which are also possible W decay products – provide a less crystal-clear signature, and are not used for a low-noise selection of W decays.
Ok, and how do they tell apart pp->H->WW decays from non-resonant pp->WW production, then? They can use the fact that the H is a scalar particle: it has no spin. Having no spin, it tends to produce the pair of spin-1 W bosons with oppositely-aligned spin vectors, so as to abide by angular momentum conservation, which is a well-respected rule in subatomic interactions. And the W bosons of opposite spin and opposite electric charge, traveling out of the Higgs creation point in opposite directions, will then produce two leptons traveling preferentially in the same direction, while non-resonant WW pairs have whatever spin they like and will produce many more lepton pairs flying out at large angles from each other.
Bingo! So, we count how many WW candidate events have the two leptons traveling close by, and we are done, right? Hmm, not exactly… Things are never black or white in particle physics. The nice picture of perfect discrimination of H->WW from non-resonant WW production is only an approximation, and real life is harder.
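The counting idea can be sketched with a toy Monte Carlo; the angular distributions below are my own crude assumptions (a narrow Gaussian in azimuthal angle for the signal, a flat distribution for the background), not D-zero's simulation:

```python
import math, random

random.seed(1)

def delta_phi(phi1, phi2):
    # azimuthal angle difference, folded into [0, pi]
    d = abs(phi1 - phi2) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

# Toy model (my assumption): signal dileptons are close in phi because of
# the spin correlation; background dileptons come out at any angle.
signal = [delta_phi(0.0, random.gauss(0.0, 0.7)) for _ in range(10000)]
background = [delta_phi(0.0, random.uniform(0.0, 2 * math.pi)) for _ in range(10000)]

cut = 1.0  # radians; an arbitrary choice for illustration
sig_eff = sum(d < cut for d in signal) / len(signal)
bkg_eff = sum(d < cut for d in background) / len(background)
print(f"cut at {cut} rad keeps {sig_eff:.0%} of signal, {bkg_eff:.0%} of background")
```

Even in this idealized toy the cut keeps a sizable fraction of the background, which is why a simple count is not enough and the full angular shape gets used instead.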
Above you see the azimuthal angle between the two leptons for backgrounds (in red) and Higgs signal (in blue, multiplied fivefold to make it visible). Black points with error bars are the D-zero data. One cannot speak of great agreement between the data and backgrounds alone here, but no excess is evident either.
In the end, D0 uses the full information from the azimuthal angle difference of the two leptons in a global likelihood. Besides the dilepton angle for the three H->WW final states, the likelihood includes the invariant mass of the two b-jets resulting from the decay H->bb, obtained from the eight other decay signatures of WH or ZH production. When the Higgs decays to two b-jets, the reconstructed dijet mass clusters around the true Higgs mass, while random pairs of jets produced together with a W or Z boson exhibit a broad mass spectrum.
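For reference, the dijet invariant mass is computed from the two jet four-momenta with the usual relativistic formula; a minimal sketch, with made-up jet energies:

```python
import math

def invariant_mass(j1, j2):
    # m^2 = E^2 - |p|^2 of the summed four-vectors (E, px, py, pz), in GeV
    E = j1[0] + j2[0]
    px, py, pz = (j1[i] + j2[i] for i in range(1, 4))
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# two invented jets, just to exercise the formula (not real D0 data)
jet1 = (80.0, 60.0, 30.0, 40.0)
jet2 = (70.0, -50.0, 20.0, 30.0)
print(round(invariant_mass(jet1, jet2), 1))  # mass in GeV
```

In a real analysis the jet energies carry sizable measurement uncertainties, which is what smears the reconstructed Higgs peak into a bump rather than a spike.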
Associated WH or ZH production is in fact another important means of producing Standard Model Higgs bosons. You search for a signal of the leptonic decay of the W or the Z, while this time you try to detect the decay of the Higgs to jet pairs. Of course, the Higgs is not forced to choose to decay to b-quark pairs if produced with a W or a Z, and to decay instead to two W bosons in case it is produced alone! Rather, D-zero searches for the two-b-jets signature in WH and ZH production, and not in H production alone, because identifying a pp->H->bb signal by itself would be almost hopeless due to the huge competing backgrounds from pp->gluon->bb (but see a discussion of that issue here).
Anyway, back to the standard searches. The plot above shows the dijet mass for backgrounds (in red) and D-zero data (black points with error bars) for the WH search with double b-tagging. The blue histogram shows where and how a Higgs signal would appear if M(H)=115 GeV, but the normalization has been inflated twentyfold to make it visible.
It remains to tell you why pp->WH and pp->ZH production followed by H->bb provides eight distinct final states. First of all, the two jets originated by b-quark hadronization can both be b-tagged (we say a jet is b-tagged when a b-quark signal is detected inside it), but D-zero also allows one of the jets to fail b-tagging in its WH searches. So we have two distinct cases, DT (double tags) and ST (single tags), for each leptonic decay of the W: to an electron-neutrino or a muon-neutrino pair. Instead, double b-tagging of the jets is always enforced when the accompanying boson decays as Z->dielectron, Z->dimuon, or Z->neutrino pair. Yes, Z decays to neutrino pairs are also allowed. While neutrinos do not produce a directly detectable signal, the imbalance of the energy released by the collision in the transverse plane is a tell-tale sign that one or two energetic neutrinos have left the detector unnoticed.
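That transverse imbalance – usually called missing transverse energy – is just the negative vector sum of the visible transverse momenta. A minimal sketch, with made-up momenta:

```python
import math

# Missing transverse energy: negative vector sum of all visible (px, py),
# in GeV. The three "visible objects" below are invented for illustration.
visible = [(40.0, 10.0), (-15.0, 25.0), (-5.0, -60.0)]
mex = -sum(px for px, _ in visible)
mey = -sum(py for _, py in visible)
met = math.hypot(mex, mey)  # magnitude of the missing transverse momentum
print(round(met, 1))
```

A large value of this quantity is the signature D-zero uses to tag the invisible Z->neutrino pair decays.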
To summarize, D-zero searches for eleven distinct signatures of Higgs boson production, computes for each signature a tell-tale variable capable of discriminating signal from noise, inserts a model of those variables for the expected backgrounds, and turns the crank. The crank is a method called CLs: basically, CLs is a statistical estimator built from the ratio of the likelihood that what is observed is signal plus background to the likelihood that it is only background. Oh, and systematic uncertainties are properly dealt with, of course. Once everything is put in correctly, the result is a 95% confidence-level upper limit on the Higgs boson cross section as a function of the Higgs boson mass.
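As a toy illustration of the CLs machinery, here is a single-counting-channel version with pseudo-experiments. Real analyses combine many channels, use shape information, and fold in systematics, all of which I skip; every number below is made up:

```python
import math, random

random.seed(42)

def poisson_logpmf(n, mu):
    # log of the Poisson probability of observing n with mean mu
    return n * math.log(mu) - mu - math.lgamma(n + 1)

def q(n, s, b):
    # log-likelihood ratio: signal+background hypothesis vs background-only
    return poisson_logpmf(n, s + b) - poisson_logpmf(n, b)

def sample_poisson(mu):
    # simple Poisson generator (Knuth's method), fine for small mu
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def cls(n_obs, s, b, ntoys=20000):
    # CLs = p_{s+b} / (1 - p_b), both p-values estimated with toy
    # pseudo-experiments thrown under each hypothesis
    q_obs = q(n_obs, s, b)
    p_sb = sum(q(sample_poisson(s + b), s, b) <= q_obs
               for _ in range(ntoys)) / ntoys
    one_minus_pb = sum(q(sample_poisson(b), s, b) <= q_obs
                       for _ in range(ntoys)) / ntoys
    return p_sb / one_minus_pb

# invented counts: 10 observed, 10 expected background, testing 5 signal events
print(cls(n_obs=10, s=5.0, b=10.0))
```

A signal hypothesis is excluded at 95% confidence level when its CLs drops below 0.05; scanning the signal rate until that happens is what yields the cross section limit.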
Above, the limit (in units of the expected Standard Model rate, meaning that a limit at “5” is an upper limit at 5x the expected SM Higgs cross section) is shown as a function of Higgs mass in black. The dashed red line shows instead what limit could have been set on average. The comparison shows that D-zero was lucky at high mass (they saw fewer events than the predicted backgrounds there), and unlucky at low mass. Or, shameless speculative minds (NO, that does not mean ME) could argue that the limit at low mass is higher than expected because D-zero saw an upward fluctuation of the data in that region!
In conclusion, D-zero finds that the Higgs production rate cannot (at 95% confidence level) be higher than 8.4 times the expected SM rate if the Higgs mass is 115 GeV, while if it is 160 GeV the limit is more stringent: 3.7 times the expected SM rate. Not very exciting, right? True, but please take into account the following:
- The Tevatron is expected to collect a total of about six times the luminosity used for these limits;
- CDF has a similar sensitivity, and its results are periodically (usually when a boss wants to make a trip to a nice conference) merged with those just discussed;
- Analyses, pretty much like good wines, get better as time goes by. I would not be surprised if the bang for the buck – sorry, for inverse femtobarn – increased by 20-30% in the next few years.
Put everything together – 6 times 2 times 1.3, i.e. roughly x16 in effective data with respect to what is reported above – and you are bound to agree with me that the Tevatron still has a very good chance of excluding a meaningful region of Standard Model Higgs masses. As to whether they can observe a signal if there is one, well… That is harder, but still possible!
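If one naively assumes that limits improve like the square root of the effective dataset size (my own back-of-envelope assumption, not a D-zero projection), the arithmetic goes like this:

```python
import math

# Combination factors quoted above: x6 luminosity, x2 from adding CDF,
# x1.3 from analysis improvements. The sqrt scaling is my assumption.
effective_data = 6 * 2 * 1.3             # ~15.6, the "x16" above
improvement = math.sqrt(effective_data)  # naive limit improvement, ~3.9x

# today's limits, in units of the expected SM rate, from the text above
for mass_gev, limit in [(115, 8.4), (160, 3.7)]:
    print(mass_gev, round(limit / improvement, 2))
```

On this naive scaling the projected limit at 160 GeV dips below 1x the SM rate – exactly the kind of meaningful exclusion the paragraph above is betting on.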
– – –
Oh, and – a disclaimer to end this. What you just read above (my compliments if you got this far with both eyes open) is just my opinion. It probably does not match that of the experiment, nor that of my institution, nor even my own opinions tomorrow. This disclaimer is brought to you by the big mushrooms of my experiment – working together to make physics harder to understand.
And… when I write pp-> something, please understand, I never learned how to make bars above letters (bars label antiparticles). Of course the Tevatron collides protons with antiprotons, unlike the soon-to-be Large Hadron Collider at CERN. So “pp” is to be read “proton-antiproton”.