
Z to nothing, H to bees September 24, 2007

Posted by dorigo in news, physics.

One of the “golden” search channels for standard model Higgs bosons at the Tevatron is the production of a ZH pair, followed by the decay of the Higgs to a pair of b-quark jets while the Z boson decays to leptons. This associated production process is rare – even rarer than the production of a WH pair – but the signature may be very distinctive. It is no surprise, therefore, that CDF and D0 are putting a lot of effort into this search.

Before I discuss it, though, let me digress for a minute, since I want to set the stage for the description of the search by mentioning what determines the rate of production of these bodies in a particle collision.

SOME PRELIMINARIES: VECTOR BOSON PRODUCTION

Even without a H attached, Z boson production is less frequent than W boson production in proton-antiproton collisions. There are two separate factors favoring the latter: it is instructive to consider them for a moment.

The first factor is the higher mass of the Z (M_Z = 91.188 GeV) compared to the W (M_W = 80.398 GeV). More mass means that more energy is required of the colliding quarks within the proton and antiproton. The average fraction of the beam energy required of each of the quarks taking part in the boson production is at least 0.5 M / 980 GeV = 4.1% for a W and 4.65% for a Z. Now, since it is increasingly unlikely to find quarks in the proton as their energy fraction increases, one expects that W's will be favored, albeit only slightly.
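As a quick back-of-the-envelope check of that arithmetic, here is a minimal Python sketch (the beam energy and boson masses are those quoted above):

```python
# Minimum average momentum fraction each colliding quark must carry
# to produce a boson of mass M at rest: x_min = (M / 2) / E_beam.

E_BEAM = 980.0  # GeV per beam: half of the 1.96 TeV collision energy

for name, mass in [("W", 80.398), ("Z", 91.188)]:
    x_min = 0.5 * mass / E_BEAM
    print(f"{name}: x_min = {100 * x_min:.2f}%")

# Prints 4.10% for the W and 4.65% for the Z: the Z needs slightly
# more energetic quarks, which are rarer inside the (anti)proton.
```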

It is the second factor that has the largest impact on the production rate of W and Z bosons: the couplings of these particles to quarks are different. A coupling is a number which specifies the probability of a vertex in a subatomic process. Vertices are the points where particles interact: we may picture an electron emitting a photon with a line (describing the propagation of an electron in space-time) making a kink (the vertex), with a wiggly line (the photon) coming out. Or we may picture a Z boson decaying to a muon-antimuon pair as a line ending at a point (the vertex), where two other lines start. By the same token, we may picture a quark-antiquark pair joining at a vertex, where a W or a Z boson line is emitted.

If one looks up in the standard model the Wq \bar q' vertex coupling and the Z q \bar q vertex coupling, one discovers that they are expressed by the following tough-looking terms:

  • Wq \bar q': -i \frac{g}{\sqrt{2}} \gamma^\mu \frac{1}{2} (1 - \gamma^5)
  • Zq \bar q: -i \frac{g}{\cos \theta_W} \gamma^\mu \frac{1}{2} (c_V^q - c_A^q \gamma^5)

You are right: those expressions are tough. They not only involve 4×4 matrices (the \gamma symbols); they are also complex (the gamma matrices are, and i itself is the imaginary unit); and they include several mysterious variables. However, they can be computed once one knows that \theta_W is the famous “Weinberg angle”, which can be determined by experiment, and that the c_V, c_A factors are just simple numbers that depend on the structure of the symmetry group on which the theory is based, SU(2).

Maybe I should just come to the conclusions: W production is three times more “frequent” at the Tevatron than Z production. Quotes are necessary: a W (Z) boson is produced once per three (nine) million 1.96-TeV proton-antiproton collisions, which means about 6 (2) times a second at the full power of the Tevatron beams.
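To see where those numbers per second come from, here is a rough sketch; the inelastic cross section and the instantaneous luminosity are my own assumed Run II ballpark figures, not numbers from the post:

```python
# Rate = (collisions per second) x (bosons per collision).
# Assumed inputs (not from the post): an inelastic ppbar cross section
# of ~60 mb and an instantaneous luminosity of ~3e32 cm^-2 s^-1.

LUMINOSITY = 3.0e32    # cm^-2 s^-1, typical Run II value (assumed)
SIGMA_INEL = 60.0e-27  # 60 mb expressed in cm^2 (assumed)

collision_rate = LUMINOSITY * SIGMA_INEL  # about 1.8e7 collisions/s
print(f"collision rate: {collision_rate:.1e} Hz")

for name, one_in in [("W", 3.0e6), ("Z", 9.0e6)]:
    print(f"{name} bosons per second: {collision_rate / one_in:.1f}")
# Prints about 6 W and 2 Z per second, matching the rates in the text.
```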

Now, a question. The diagrams responsible for vector boson production are those shown on the left, where you should interpret the black lines on the left as two incoming quarks colliding to make a boson (W, Z or W*, Z*), and the lines exiting on the right as the decay products; those for associated production with a H boson are shown on the right. If the ratio of W to Z production is a factor of three, what will be the relative frequency of WH to ZH production?

If you answered “three” you guessed about right: the fact that we need far more energy to produce a pair of bosons than to produce one alone damps both processes in more or less the same way (i.e., a lot!). If you answered “it depends on the Higgs coupling to W and Z bosons” instead, you know too much. As a matter of fact, the production factor is the most relevant one, and the rest is fundamentally equal: off-shell Z production gets a slight increase from its larger width, and a further small difference is made by the Higgs-vector boson couplings. In the end we get the graph on the left, which shows the cross section for associated production of WH (in blue) and ZH pairs (in green) as a function of the unknown Higgs mass. You can thus see that indeed the two curves stay neatly one above the other, keeping a roughly constant factor of two between them. Also, note how the cross sections decrease by an order of magnitude as the Higgs mass doubles: you basically go from requiring a total energy above 200 GeV on the left to above 300 GeV on the right. That increase of a factor of 1.5 corresponds to a tenfold decrease in the rate, due to the steeply falling probability of finding very energetic quarks in the colliding bodies.

VECTOR BOSON DECAY AND THE FAVORED NEUTRINO COUPLING

So, ZH production is three times less frequent than WH production chez Tevatron. However, there is another factor to count: the Z prefers decaying to neutrinos over charged leptons! The same vertex couplings discussed above are responsible, in fact:

  • \Gamma (Z \rightarrow \mu \mu) \propto ((c_V^\mu)^2 + (c_A^\mu)^2)
  • \Gamma (Z \rightarrow \nu \nu) \propto ((c_V^\nu)^2 + (c_A^\nu)^2)

Now, since c_V^f = T_3^f - 2 Q_f \sin^2 \theta_W (where T_3^f is the third component of the weak isospin of the fermion f, and Q_f its electric charge), neutrinos have a vector coupling equal to 1/2 (T_3^\nu = 1/2 and Q_\nu = 0!), while electrons, muons and taus have an almost vanishing value of c_V – because \sin^2 \theta_W is actually close to 1/4. The decay to neutrinos is thus twice as large!

If one puts things together, the standard model predicts that one in every five Z bosons decays to neutrino pairs – thus “vanishing” for all practical purposes – while only one in thirty decays to the striking signature of an electron-positron pair, and the same for muon pairs and tau pairs. The remaining 70% goes to quarks, making jet pairs which are really tough to distinguish from backgrounds.
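Those fractions can be checked directly from the couplings quoted above. Below is a minimal sketch, assuming c_V^f = T_3^f - 2 Q_f \sin^2 \theta_W and c_A^f = T_3^f at tree level, a color factor of 3 for quarks, and neglecting fermion masses and radiative corrections:

```python
# Relative Z partial widths: Gamma(Z -> f fbar) ~ N_c * (c_V^2 + c_A^2),
# with c_V = T3 - 2 * Q * sin^2(theta_W) and c_A = T3. A tree-level
# simplification for illustration, not a precision calculation.

SIN2_THETA_W = 0.2315  # assumed effective value, close to 1/4

# (name, T3, charge Q, color factor, number of accessible species)
species = [
    ("neutrinos",        +0.5,  0.0,  1, 3),  # nu_e, nu_mu, nu_tau
    ("charged leptons",  -0.5, -1.0,  1, 3),  # e, mu, tau
    ("up-type quarks",   +0.5, +2/3,  3, 2),  # u, c (top is too heavy)
    ("down-type quarks", -0.5, -1/3,  3, 3),  # d, s, b
]

def rel_width(t3, q, n_c, count):
    c_v = t3 - 2.0 * q * SIN2_THETA_W
    c_a = t3
    return count * n_c * (c_v**2 + c_a**2)

widths = {name: rel_width(t3, q, n_c, n) for name, t3, q, n_c, n in species}
total = sum(widths.values())
for name, w in widths.items():
    print(f"{name:17s}: {100 * w / total:.1f}%")
# Prints roughly 20% to neutrinos (invisible), 10% to charged leptons
# (about 3.4% per flavor, i.e. one in thirty), and 70% to quarks.
```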

The bottom line is: Z bosons are easy to see when they decay to electron or muon pairs, but that comes at a price – a smaller rate. If one could “see” the invisible Z \to \nu \nu decay instead, one would get a much larger rate!

SEARCHING FOR ZH PRODUCTION

How to see Z decays to neutrinos? They escape our detector unseen. Still, they collectively carry away the momentum originally imparted to the produced Z boson. Is that a sufficient signature of the escaping ghost?

The answer is – as with most quantum processes – yes and no. If ZH production always happened “at threshold”, with both bosons created and decaying at rest in the laboratory, we would be done for: zero missing momentum to measure. But that is not the case: despite the fact that producing two massive bodies already requires a lot of center-of-mass energy (which is ultimately paid by the colliding quarks, as discussed above), you can expect a large fraction of the produced Z bosons to carry away a sizable amount of transverse momentum.

The momentum transverse to the beam direction of a “vanishing” Z can be detected by adding up vectorially the transverse momenta of all the other particles detected in the collision. One thus obtains what is called “missing Et”. A large value of missing Et means that something has escaped the detector unseen: a Z \to \nu \nu decay is as good a hypothesis as any – if we forget the rarity of producing Z bosons.
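In code, this reconstruction is nothing more than a two-dimensional vector sum. A toy sketch (the event content and its field names are invented for illustration):

```python
import math

def missing_et(particles):
    """Missing Et: minus the vector sum of all visible transverse
    momenta, in the plane perpendicular to the beam direction."""
    px = sum(p["pt"] * math.cos(p["phi"]) for p in particles)
    py = sum(p["pt"] * math.sin(p["phi"]) for p in particles)
    return math.hypot(px, py), math.atan2(-py, -px)

# Toy event: two jets nearly back-to-back, the second badly undermeasured.
event = [
    {"pt": 80.0, "phi": 0.0},  # leading jet
    {"pt": 30.0, "phi": 3.1},  # second jet, measured too low
]
met, met_phi = missing_et(event)
print(f"missing Et = {met:.1f} GeV at phi = {met_phi:.2f}")
# The fake missing Et points along the undermeasured jet, which is why
# the analysis below inspects the angle between missing Et and the jets.
```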

The analysis performed by CDF uses 1.7/fb of collisions – 100 trillion of them. A trigger selects online events with a roughly measured missing Et above 25 GeV and two jets. The missing Et measurement is then refined offline with careful corrections, and the cut is tightened to 50 GeV to reduce backgrounds coming from mismeasured hadronic jets in QCD multijet events.

A peculiarity of this analysis is that events with identified electrons or muons are explicitly rejected. This is quite an uncommon procedure in hadron collisions, where electron and muon signatures are worth their weight in gold. In this case, though, the presence of missing Et – possibly the product of a single escaping neutrino rather than two neutrinos from a Z – enriches the sample with leptonic W decays, which have to be rejected (they belong to the WH search and are considered in a separate analysis anyway).

Finally, the Higgs decay is characterized by requiring that both jets contain a loose b-quark tag, or that one of them contains a tight b-tag. I have explained elsewhere what b-tagging is, but in short: if you find evidence that a particle within the jet decayed after traveling a few millimeters from the interaction point, you are likely looking at a jet originating from a b-quark.

The b-tagging requirement rejects most backgrounds, but a check that the event rates are understood as the sum of all contributing processes is still needed. It is provided by event counts in two distinct “control regions”, which are known to be depleted of any signal contamination, and are mostly populated by QCD backgrounds (the first) and electroweak production (the second).

In the first control region things are well understood, both in rate and in the shape of several kinematic variables. For instance, the plot below shows the distribution of the azimuthal angle between the missing Et vector and the direction of the leading jet, for experimental data (black points) and the sum of contributing processes (shown are events with one tight b-tagged jet). QCD (in green) dominates the rate: here the missing Et is due to a jet which was badly undermeasured, making the leading jet and the missing energy almost back-to-back, as if the missing Et were signalling the direction of the second jet.

The second control region collects the events discarded from the main selection because they contained a well-identified electron or muon. Here the backgrounds are a much better balanced mixture of different processes, as is evident in the plot below, showing the distribution of the invariant mass of the two leading jets (shown are events with one tight b-tag):

The largest background is top pair production (in blue), which produces leptons, missing energy, and jets with b-tags. However, a sizable contribution is also due to W boson production (yielding both a lepton and missing Et) in association with b- or c-quarks (in purple). The experimental data are shown as black points with error bars.

Once this check reinforces the belief that the data are well understood, in rate and content, as the sum of several processes, the signal selection is tightened to make the signal more prominent, if one exists. The missing Et cut is raised to 70 GeV, killing most of the remaining QCD background, and the leading jet is required to have a transverse energy exceeding 60 GeV. Also, the missing Et vector is required to point away from the jets, and its magnitude must exceed 45% of the total transverse energy of all measured bodies participating in the event.
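Written as an event filter, the tightened selection might look like the sketch below. The event structure and field names are hypothetical, and the minimum angle between the missing Et and the jets is a placeholder of mine, since the post only says the missing Et must point away from the jets:

```python
import math

def delta_phi(a, b):
    """Azimuthal separation between two directions, folded into [0, pi]."""
    d = abs(a - b) % (2.0 * math.pi)
    return min(d, 2.0 * math.pi - d)

def passes_signal_selection(event, min_dphi=0.4):
    """Tightened selection as described in the text; min_dphi is an
    illustrative placeholder, not the actual CDF cut value."""
    if event["missing_et"] <= 70.0:          # raised missing Et cut
        return False
    if event["jets"][0]["et"] <= 60.0:       # leading jet transverse energy
        return False
    if any(delta_phi(event["met_phi"], jet["phi"]) < min_dphi
           for jet in event["jets"]):        # missing Et away from all jets
        return False
    # Total visible transverse energy, here approximated by the jets alone.
    total_et = sum(jet["et"] for jet in event["jets"])
    return event["missing_et"] > 0.45 * total_et   # missing Et significance
```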

The event yield is then compared to expectations. In the single-tag sample 443 events are observed, where the sum of backgrounds predicts 403.5 ± 60.1; in the double-tag sample 51 events are found, whereas backgrounds account for 39.9 ± 6.1. In both instances more events are observed than predicted, but the most likely explanation is a fluctuation (less than a 1-sigma difference in single tags, less than 2-sigma in double tags): statistically, such an occurrence happens about three times in a hundred.
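The quoted agreement can be verified with a naive Gaussian pull, treating the quoted background uncertainty as the whole error; this is a sketch of the arithmetic only, not the collaboration's statistical procedure:

```python
# Naive check: pull = (observed - expected) / uncertainty.
# Counts and errors are those quoted in the text above.

for label, observed, expected, error in [
    ("single tag", 443, 403.5, 60.1),
    ("double tag",  51,  39.9,  6.1),
]:
    pull = (observed - expected) / error
    print(f"{label}: excess {observed - expected:+.1f} events, {pull:.1f} sigma")
# Prints about +39.5 events (0.7 sigma) and +11.1 events (1.8 sigma):
# both compatible with an upward fluctuation of the backgrounds.
```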

The dijet mass distribution in the two samples is shown in the plots below, where as usual the experimental data are represented by black points with error bars, and the backgrounds follow the color code explained in the legend. A signal of ZH production (with M_H = 115 GeV) ten times larger than predicted by the standard model would contribute to the plots with the normalization and shape shown by the black histogram.

Above, the single b-tagged sample. The largest contribution here is due to QCD events with heavy flavor production.

Above, the double b-tagged sample. Here backgrounds are smaller, and funnily enough, a spike (a totally insignificant one at that, but still a spike) at about 120 GeV is visible in the data.

Finally, of course, the limit. As has become standard, the computed limit on the rate of the searched process is converted into a “times SM rate”, which states that “CDF excludes a Higgs boson produced with an anomalous rate exceeding N times the standard model prediction, at 95% confidence level”.

The limit is shown above, as a function of the Higgs mass. The black line is the observed limit at 95% CL, while the dotted line is what CDF expected to achieve with this analysis and the available data: the result is 2-sigma away from expectations because the observed rate is higher than the background prediction… In any case, the exclusion does not place direct bounds on the mass range where a Higgs boson may exist according to CDF data; but this result, combined with all the others from different channels and techniques performed by CDF and D0, may eventually start to do just that.

Comments

1. Thomas D - September 24, 2007

Careful – you should never say ‘Weinberg angle’ if Sheldon Glashow is in the same room, or conference, or university. I have experienced this at first hand. Weak mixing angle is the preferred nomenclature… or just ‘theta-w’.

2. dorigo - September 24, 2007

Yes, I know that very well Thomas, but thanks for mentioning it. I was in fact about to write a sentence such as “but it should rather be called the Glashow-Weinberg angle”. However, I thought the sentence was becoming too long, and it distracted from an already unclear focus. I do tend to introduce too many dependent clauses in my sentences, and the result does at times look awful.

Cheers,
T.

3. Thomas Larsson - September 25, 2007

It seems to me that this result is most important for low Higgs mass, where the limits from other channels are not so stringent?

4. dorigo - September 25, 2007

Hi Thomas L.,

indeed, the sensitivity provided by the H \to b \bar b decay dies out with the branching ratio to b's for M_H > 130 GeV.
Above that value, direct production of Higgs bosons followed by the decay H \to WW becomes a much better tool.

In the end, though, interesting information on the existence and mass of the Higgs boson will come only by combining all results together – none of them taken singly comes close to providing independent constraints, with maybe the exception of the WW channel (and that requires larger statistics, which CDF and D0 will only get by the end of Run II).

Cheers,
T.

5. anomalous cowherd - September 26, 2007

Hi Prof. Dorigo

I really like this post, and the one of September 17, on Higgs searches. You have gone to a *lot* of trouble to provide sufficient background in the post that a broad audience can understand the results and their significance. And the results are well explained and presented. You have done an excellent job of presenting science in a way that is accessible to the public! And I enjoy being able to catch up on the present state of the CDF analyses.

I know that a post like this must take a lot of work to prepare. I’m sure that I speak for many readers when I say thanks for all your effort!

6. dorigo - September 26, 2007

Dear Anomalous,

it did take some time to write these posts, but it was a pleasure doing it… I love to write, and when I write about physics I feel it as a moral obligation to make things as easy to understand as possible – sometimes I manage, sometimes less so. In any case, I certainly appreciate your feedback!

Cheers,
T.

7. Ziplock - September 26, 2007

I’m sure that I speak for many readers when I say thanks for all your effort!

Yes, count me in. Very nice post.

And here a very crude question: how long could it take for CDF/D0 to either find or exclude a Higgs boson with mass in the 110–120 GeV range?

Cheers!

8. dorigo - September 26, 2007

Hi Ziplock, thank you, I appreciate your feedback.

As for your question, hmmm… Forever? No, seriously: I believe excluding the Higgs up to 120 GeV is going to take about 5/fb per experiment, the combination of all data, and some additional refinement of the analysis techniques. The data should be in by the end of 2008, the analysis and refining by fall 2009 or winter 2010, IMO. By then, the Tevatron experiments will probably also have excluded a certain region around 160 GeV.

To find it is much more problematic. I would say that a 3-sigma evidence at 115 GeV will require all the Run II data (end of 2009 = 6.5/fb per experiment) and a very careful analysis. Not before the start of 2011, for sure.

These estimates picture a worse situation than what the HSWG predicted in 2003 (I will link some results here if I get a chance; otherwise search for HSWG site:dorigo.wordpress.com). The reason is that these analyses are extremely complicated, and refining them to their optimum takes more than a man-century!

Cheers,
T.

9. Ziplock - September 26, 2007

Like I said, a very crude question that requires some dose of futurology to answer. So, thanks for answering it! Cheers.

10. dorigo - September 26, 2007

Yes. Futurology and speculation. The variability of possible outcomes is huge: we might get lucky and see a 3-sigma evidence in two or three years, or be unlucky and not see it even with three times more data (which we won't collect anyway). That is because the signal is small, and even small fluctuations may make it observable or unobservable.

Cheers,
T.

11. Euclidistheway - September 28, 2007

Excuse my ignorance, but when the LHC starts up, will the background “cloud” obscure this (in)significant region even more? It looks like computer power has dual importance to the collider.

12. dorigo - September 28, 2007

Hi Euclid,

the LHC will indeed have more trouble spotting a light Higgs, at 115 or 120 GeV, than at higher masses. That is due to the large background from QCD processes. In any case, two years of running at low (10^33) luminosity should be enough to find the particle wherever it is. Computing power is no longer an issue now that a large grid of distributed computing has been constructed – but of course we would not be able to run CMS and ATLAS with the computing power of a cellphone (which was the case in particle physics experiments 25 years ago)…

Cheers,
T.


