
Notes on the new Higgs boson search by DZERO March 2, 2009

Posted by dorigo in news, physics, science.

Three weeks ago the DZERO collaboration published new results of their low-mass Higgs boson search. This concerns the production of Higgs bosons in association with a W boson, with the subsequent decay of the Higgs particle to a pair of b-quark jets, and the decay of the W to an electron plus electron neutrino, or a muon plus muon neutrino: in symbols, what I mean is p \bar p \to WH \to e \nu_e b \bar b, or p \bar p \to WH \to \mu \nu_{\mu} b \bar b. I wish to describe this important new analysis today, but first let me make a point about the reaction above.

In order to make this blog more accessible than it would otherwise be, I frequently write things inaccurately: precision is usually pedantic and distracting. But here I beg you to note a detail I will not gloss over for once: to be accurate, one should write p \bar p \to WH + X…, because what we care about is inclusive production of the boson pair. If we omit the X, strictly speaking we imply that the two protons annihilated into the two bosons, with exactly nothing else coming out of the collision. While such a reaction is conceivable, it is ridiculously rare (in fact, the annihilation into ZH is possible, while the one into WH does not conserve electric charge and is strictly forbidden). Anyway, bringing along a symbol to remind ourselves that our projectiles are like garbage bags, which fill our detectors with debris when we throw them at one another, is cumbersome and annoying, however accurate. I hope you realize, though, that this is an important detail: Higgs bosons at a hadron collider are always accompanied by debris from the dissociating projectiles.

Two words on associated WH production and its merits

The associated production of the Higgs together with a W boson is the “golden” signature for low-mass Higgs hunters at the Tevatron collider. While producing the Higgs together with another heavy object is not effortless (it requires more energetic quarks in the two colliding protons, which makes the production less frequent), the W boson pays back with extra dividends by producing a very clean signature in its leptonic decay, and by allowing the event to be spotted easily by the online triggering system and collected with high efficiency by the data acquisition.

If you compare the collection of WH events to the collection of directly produced Higgs bosons (p \bar p \to H + X, where again I prefer accuracy by specifying the X), you immediately see the advantage of the former: while the WH production rate is four times smaller and the leptonic W decay only occurs 20% of the time, this 0.25 × 0.2 = 0.05 = 1/20 reduction factor is a small price to pay, given the trouble one would have triggering on direct H \to b \bar b events: the decay to a pair of b quarks is the dominant one for low Higgs boson masses, but b-jets are so commonplace in QCD processes that the signal is unobservable.
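As a quick sanity check of the arithmetic above, the combined penalty of the smaller WH rate and the leptonic branching fraction can be worked out in two lines (a back-of-the-envelope sketch using the rounded numbers quoted in the text, not official values):

```python
# Rough penalty for collecting WH events instead of directly produced Higgs bosons,
# using the approximate figures quoted above (not official numbers).
sigma_ratio = 1 / 4   # WH production rate is about four times smaller than direct H
br_w_lep = 0.20       # W -> e nu or mu nu, roughly 20% of W decays
reduction = sigma_ratio * br_w_lep
print(reduction)      # 0.05, i.e. a factor of 1/20
```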

Higgs decays to b-quark pairs produced alone simply cannot be triggered in hadronic collisions, because they are immersed in a background which is six orders of magnitude higher in rate, namely the production p \bar p \to g \to b \bar b of bottom-antibottom quark pairs by strong interactions. Even assuming that the online triggering system of DZERO were capable of spotting b-quark jet pairs with 100% purity (already a very generous assumption), the trigger would have to accept a million background events in order to collect just one fine signal event!

Yes, life is tough for hadronic signatures at a hadron collider. Even finding the Z \to b \bar b signal, which is a thousand times more frequent, is a tough business: it took CDF years to find a reasonable sample of those decays, while DZERO has not yet published anything on the matter. But the Tevatron experiments cannot ignore the fact that, if a low Higgs mass is hypothesized, the H \to b \bar b decay is the most frequent one: the Higgs boson likes to decay into the heaviest pair of particles it can produce, and if it is too light to yield a pair of W or Z bosons, the next-heaviest pair of decay products is b-quarks. This dictates the need to search for H \to b \bar b, and the trouble of triggering on such a process in turn makes the associated WH (or ZH) production the most viable signal.

The DZERO analysis

The new analysis by DZERO studies a total integrated luminosity of 2.7 inverse femtobarns. This corresponds to about 150 trillion proton-antiproton collisions. DZERO has already netted almost twice as much data by now, and it is only a matter of time before those too get included in this search: so one has to bear in mind that the statistical precision of this search is soon going to improve by about 40%, since doubling the data increases precision by the square root of two, a factor of 1.41.
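The square-root scaling of statistical precision with data size is easy to verify; a minimal sketch, with the rounded luminosity figures from the text:

```python
import math

lumi_now = 2.7    # fb^-1 analyzed in this search
lumi_soon = 5.4   # roughly twice as much already collected
precision_gain = math.sqrt(lumi_soon / lumi_now)
print(round(precision_gain, 2))  # 1.41
```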

DZERO selects events which have a high-energy electron or muon (the tag of a leptonic decay of the W boson), missing transverse energy, and two or three hadronic jets. The presence of an energy imbalance in the plane transverse to the beam direction is a comparatively clean signature of the escape of the energetic neutrino produced together with the charged lepton in the W decay, and two jets are expected from the decay of the Higgs boson to a pair of b quarks. However, you might well ask, quid opus fuit tertium?

No, I bet you would not ask it that way: for some reason, a reminiscence of Latin sprang up in my mind. Quid opus fuit tertium: what was the need for a third one? The third jet is not specifically a signature of any one of the decay products of the WH pair we are after. However, if you remember what I mentioned above, we are searching for inclusive production of a WH pair: that means we accept the fact that the two projectiles also produced an additional energetic stream of hadrons in the final state. That possibility is by no means rare: in fact, it amounts to about 20% of the Higgs production events. By selecting events with two or three jets, DZERO increases its acceptance of signal events sizably.
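To see why adding the three-jet class is worth the trouble, a one-line estimate helps (a hedged sketch: the 20% figure is just the rough number quoted above):

```python
frac_three_jet = 0.20                  # fraction of WH events with an extra jet (rough)
acc_two_jet_only = 1.0 - frac_three_jet
gain = 1.0 / acc_two_jet_only          # relative gain from also accepting 3-jet events
print(f"{gain:.2f}x")                  # 1.25x, a sizable acceptance increase
```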

A technique which has become commonplace in the hunt for elusive subnuclear particles is to slice and dice the data: categorizing events in disjoint classes is a powerful analysis strategy. By taking two-jet events on one side, and three-jet events on the other, DZERO can study them separately, and appreciate the different nuisances of each class. In fact, they further divide the data into subsets where either one or two jets were tagged as originating from b-quarks.

And they also keep the electron+jets and muon+jets events separate: this too makes sense, since the experimental signatures of electrons and muons are slightly different, as are the resulting energy resolutions. In total, one has eight disjoint classes, depending on the number of jets, the number of b-tags, and the lepton species.
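The eight classes can be enumerated mechanically, as the product of the three binary choices just described:

```python
from itertools import product

jet_bins = ("2 jets", "3 jets")
btag_bins = ("1 b-tag", "2 b-tags")
lepton_bins = ("electron", "muon")

# every combination of lepton species, jet multiplicity, and b-tag count
classes = list(product(lepton_bins, jet_bins, btag_bins))
for c in classes:
    print(" + ".join(c))
print(len(classes))  # 8
```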

In order to decide whether there is a hint of Higgs bosons in any of the classes, backgrounds are studied using Monte Carlo simulations of all the Standard Model processes which could contribute to the eight selected signatures. These include the production of a W boson plus hadronic jets (“W+jets“) as well as the production of top quark pairs: both these processes produce energetic leptons in the final state; but another background is due to events which do not actually contain a lepton, and where a hadronic jet was mistaken for one. The latter is called “QCD background”, highlighting its origin in strong-interaction processes yielding just hadronic jets: despite the rarity of a jet faking an energetic lepton, the huge rate of QCD events makes this background sizable.

Among the characteristics that can separate the WH signal from the above backgrounds, the identity of the parton originating the hadronic jets is a powerful one: b-jets are rarer than light-quark ones, but there must be two of them in a H \to b \bar b decay. DZERO uses a neural network which employs seven discriminating variables to select jets with a likely b-quark content.

The good thing about a neural-network b-tagger is that the output of the network can be dialed to choose its purity. And in fact, DZERO does exactly that. They start with a loose selection which has a rate of “false positives” of 1.5% (light-quark jets that get b-tagged). If two jets pass such a loose b-tag, the event is classified as a “double b-tag”; otherwise, the NN output requirement is tightened, and “single-b-tag” events are collected by requiring a b-tag of better purity, with a “false positive” rate of 0.5%. These cuts have been optimized for their combined sensitivity to the Higgs signal.
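The two-tier classification logic can be sketched as follows. This is a hypothetical illustration of the scheme described above: the threshold values LOOSE_CUT and TIGHT_CUT are invented placeholders, not DZERO's actual operating points (those are defined by the quoted fake rates, 1.5% and 0.5%).

```python
# Hypothetical sketch of the two-tier b-tag classification described in the text.
# LOOSE_CUT / TIGHT_CUT are illustrative placeholders, not DZERO's values.
LOOSE_CUT = 0.2   # working point with ~1.5% light-jet fake rate (assumed NN cut)
TIGHT_CUT = 0.6   # working point with ~0.5% light-jet fake rate (assumed NN cut)

def classify_event(nn_outputs):
    """Classify an event from the per-jet b-tag neural network outputs."""
    loose = [o for o in nn_outputs if o > LOOSE_CUT]
    if len(loose) >= 2:
        return "double b-tag"           # two jets pass the loose selection
    tight = [o for o in nn_outputs if o > TIGHT_CUT]
    if len(tight) >= 1:
        return "single b-tag"           # one jet passes the tighter selection
    return "untagged"

print(classify_event([0.3, 0.5]))  # double b-tag
print(classify_event([0.7, 0.1]))  # single b-tag
```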

Apart from b-tags, the signal displays different kinematics from all backgrounds. Again, seven variables are used, which now describe the event kinematics: the transverse energy of the second-leading jet, the angle between jets, the dijet invariant mass, and a matrix-element discriminant, computed by comparing the probability density of the four-momenta of the objects produced in the decay of a WH event to that of backgrounds. In the figure above, the matrix-element discriminant is shown for all the processes contributing to the class of W+2-jet events with two b-tags. The output of the neural network shows that Higgs events fall on the right side of the distribution, while backgrounds pile up mostly on the left, as can be seen in the figure below.

Results of the search

Since no signal is observed in the NN output distribution seen in the data, DZERO proceeds to set upper limits on the signal cross section. For the 2-jet classes the NN output is used, while the dijet mass distribution is used for the 3-jet classes. No justification is provided in their paper for this choice, which looks slightly odd to me, but I imagine they did some optimization studies before taking this decision. However, I would imagine that the NN output is in principle always more discriminant than just one of the variables on which the network is constructed… Maybe somebody from DZERO could clarify this point in the comments thread, to the benefit of the other readers?

At the end of the day, DZERO obtains limits on the cross section of the sought signal, which are still above the Standard Model predictions whatever the Higgs mass: therefore, they do not yet exclude any mass values. These results, however, once combined with other results from CDF and DZERO, will one day allow the exclusion of a SM Higgs boson in a specified mass range. In the graph below you can see the limit set by this analysis on the WH production cross section as a function of the Higgs mass.

The black curve shows the 95% CL exclusion, while the hatched red curve shows the result that DZERO expected to find, based on pseudoexperiments. The comparison of the two curves is not terribly informative, but it does show that there were no surprises in the data.

The result can also be shown in the standard “LLR plot” above, which shows, again as a function of the Higgs boson mass, the log-likelihood ratio of two hypotheses: the “background only” and the “signal plus background” one. Let me explain what that is. For each mass value on the x-axis, imagine the Higgs is there. Then, with large statistics, the data would show a preference for the “signal plus background” hypothesis, and the LLR would be large and negative. If, instead, the Higgs did not exist at any mass value, the LLR would be large and positive. The two hypotheses can be run on pseudo-data of the same statistical power as the data really collected, thus producing the red and black hatched lines in the plot below. The two curves are different, but the red one does not manage to depart from the green band constructed around the black hatched one: that means that the data size and the algorithms used in the analysis do not have enough power to discriminate the two hypotheses, not even at the 1-sigma level (which is the meaning of the width of the green band, while the yellow one shows the 2-sigma contours). The full black line shows the behavior of the real data: it has a propensity to confirm the background-only hypothesis at low mass, and a slight penchant for the signal-plus-background one at about 130 GeV. But this is a really, really small fluctuation, well within the one-sigma band!
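For the curious, here is a toy version of the LLR in a single counting channel, just to show the sign conventions described above. The event counts are invented for illustration; the real DZERO computation combines many channels and nuisance parameters:

```python
# Toy log-likelihood ratio LLR = -2 ln [ L(s+b) / L(b) ] for one Poisson counting
# channel. Numbers are invented; background-like data give LLR > 0, signal-like < 0.
import math

def poisson_logl(n, mu):
    # log of the Poisson likelihood, dropping the ln(n!) term (it cancels in the ratio)
    return n * math.log(mu) - mu

def llr(n_obs, b, s):
    return -2 * (poisson_logl(n_obs, s + b) - poisson_logl(n_obs, b))

b, s = 100.0, 10.0
print(llr(100, b, s))  # data at the background expectation: positive LLR
print(llr(110, b, s))  # data at the signal+background expectation: negative LLR
```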

I think the LLR plot is a great way to describe the results of the search visually. It at once tells you the power of the analysis and the available data, and the outcome on the real events collected. Sure, it takes twenty thick lines of text to explain it, but once you’ve grasped its meaning…

The 1999/2003 Higgs predictions compared with CDF 2009 results February 13, 2009

Posted by dorigo in news, personal, physics, science.

Two years ago I used the combined Higgs search limits produced by the D0 experiment to evaluate how well the Tevatron was doing compared with the predictions that had been put together by the 1999 SUSY-HIGGS working group, and later by the 2003 Higgs Sensitivity Working Group (HSWG), two endeavours in which I had participated with enthusiasm. The picture that emerged was that, although results were falling short of fully justifying the early predictions, there was still hope that those would one day be vindicated.

Indeed, I remember that when the HSWG produced its report in 2003, we felt our results were greeted with a dose of scepticism. And we ourselves were a bit embarrassed, because we knew we had been a bit optimistic in our predictions: however, that was the name of the game: looking at things on their bright side, for the sake of convincing funding agencies that the Tevatron had a reason to run for a long time. I felt a strong justification for being optimistic in the incredible results on the top quark mass that the Tevatron had already started achieving: early prospects of measuring the top mass to a 1% uncertainty have in fact been surpassed by the combination of the dedication of the scientists doing the analyses, and their imagination in inventing new precise methods.

We now have a chance to look back at the 1999/2003 predictions for the Higgs reach of the Tevatron with a rather solid set of hard data: the CDF combination, which I briefly discussed two days ago, is based on analyzed sets of data ranging from 2 to 3 inverse femtobarns, and the comparisons do not require a lot of extrapolations to be carried out.

If we look at the 1999/2003 predictions shown above (two basically coincident results, if one considers that the 2003 results did not account for systematic effects, which would worsen the sensitivity curves a bit and bring them to match the older ones), we can read off the integrated luminosity that the Tevatron experiments needed to analyze in order to exclude, by combining their results, SM Higgs production at 95% confidence level. These numbers are as follows: for a Higgs mass of 100 GeV, 1/fb was considered sufficient; for a Higgs mass of 120 GeV, 2/fb were needed; 10/fb at 140 GeV; 4.5/fb at 160 GeV; 8/fb at 180 GeV; and 80/fb at 200 GeV. You can check them on the purple band in the graph above.

Now, let us take the actual expected limits by CDF with the analyses and the data they have based their new result upon (using expected limits rather than observed ones is correct, since the former are unaffected by statistical fluctuations). At 100 GeV, CDF has a reach in the 95%CL limit at 2.63xSM; at 120 GeV, the reach is 3.72xSM; at 140 GeV, 3.61xSM; at 160 GeV it is 1.75xSM; at 180 GeV 3.02xSM; and at 200 GeV, the reach is at 6.33xSM.

(Below, the 2009 combined CDF limits are shown by the thick red curve; the data I list above is based on the hatched curve instead, which shows the expected limit.)

How do we now compare these sets of numbers?

Easy. As easy as 1, 2, 3, 4 (well, not too easy, but that’s how it goes).

  1. We first scale up by a factor of two the 1999/2003 luminosity numbers needed for a 95% CL exclusion, which we listed above (the predictions were for the two experiments combined, so a single experiment needs twice the data). We thus get, for Higgs masses ranging from 100 to 200 GeV in 20-GeV steps, needed integrated luminosities of 2, 4, 20, 9, 16, and 160/fb.
  2. Then, we take the actual luminosity used by CDF for the analyses that have been combined to yield the expected limits listed above. This is slightly tricky, since the combination includes analyses which have used 2.0/fb of data (the H \to \tau \tau search), 2.1/fb (the VH \to ME_T b \bar b search), 2.7/fb (the WH \to l \nu b \bar b, the ZH \to ll b \bar b, and the WH \to WWW searches), and 3.0/fb (the H \to WW search). In principle, we should weight those numbers with the relative sensitivity of the various analyses, but we can approximate it by taking an “average effective luminosity” of 2.4/fb for the 100 GeV Higgs search, 2.7/fb for the 120 and 140 GeV points, and 3.0/fb for the high-mass searches. This is appropriate, since the H \to WW search starts kicking in above 140 GeV.
  3. We now have all the numbers we need: we divide the luminosity needed by one experiment according to the 1999/2003 study, found at point 1 above, by the effective luminosities found at point 2, and take the square root of that number: this amounts to finding the “reduction factor” in sensitivity that the actual CDF data suffers with respect to the data needed to exclude the Higgs boson. We find reduction factors of 0.91, 1.22, 2.72, 1.73, 2.31, and 7.30 for Higgs masses of 100, 120, 140, 160, 180, and 200 GeV respectively.
  4. Now we are done. We can compare the “times the SM” limits of CDF with the numbers found at point 3 above. The ratio of the two says how much worse CDF is doing with respect to predictions, for each mass point. We find that CDF is doing 2.88 times worse than predictions at 100 GeV; 3.06 times worse at 120 GeV; 1.33 times worse at 140 GeV; 1.01 times worse at 160 GeV; 1.31 times worse at 180 GeV; and 0.87 times worse (i.e., 1.15 times better!) at 200 GeV.
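The four steps above can be redone numerically. A sketch in Python, where all inputs are the rounded figures quoted in the text (read off the prediction graph, so take the last digits with a grain of salt):

```python
import math

# Inputs as quoted in the text (approximate, read off the 1999/2003 graph).
masses    = [100, 120, 140, 160, 180, 200]        # GeV
lumi_pred = [1, 2, 10, 4.5, 8, 80]                # fb^-1 needed (both experiments combined)
lumi_used = [2.4, 2.7, 2.7, 3.0, 3.0, 3.0]        # effective fb^-1 analyzed by CDF
reach     = [2.63, 3.72, 3.61, 1.75, 3.02, 6.33]  # expected CDF limit, in units of the SM

for m, lp, lu, r in zip(masses, lumi_pred, lumi_used, reach):
    reduction = math.sqrt(2 * lp / lu)   # step 3: sensitivity reduction factor
    shame = r / reduction                # step 4: how much worse than predicted
    print(f"{m} GeV: reduction {reduction:.2f}, ratio to predictions {shame:.2f}")
```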

The results of point 4 are plotted on the graph shown above, where the x-axis shows the Higgs mass, and the y-axis this “shame factor”. I have given a 20% uncertainty to the figures I computed, because of the rather rough way I extracted the numbers from the 1999/2003 prediction graph. If you look at the graph, you notice that the CDF experiment has kept its (our!) promise (points bouncing around a ratio of 1.0) with its high-mass searches, while low-mass searches are still a bit below expectations in terms of reach (a 3x worse reach than expected). It is not a surprise: at low Higgs mass, the searches have to rely on the H \to b \bar b final state, which is very difficult to optimize (vertex b-tagging, dijet mass resolution, and lepton acceptance are the three things on which CDF has been spending hundreds of man-years in the last decade). Give CDF (and DZERO) enough time, and those points will get down to 1.0 too!

Multi-muon news January 26, 2009

Posted by dorigo in news, personal, physics, science.

This post is not it, but no, I have not given up on my promise to complete my series on the anomalous multi-muon signal found by CDF in its Run II data. In fact, I expect to be able to post once more on the topic this week; there, I hope to discuss the kinematic characteristics of multi-lepton jets. [I am lazy today, so I will refrain from adding links to past discussions of the topic here: if you need references, just click on the tag cloud in the right column, where it says "anomalous muons"!]

In the meantime, I am happy to report that I have just started working on the same analysis for the CMS experiment! In Padova we have recently put together a group of six (one professor, three researchers, a PhD student, and an undergrad) and we will pursue the investigation of the same signature seen by CDF. And today, together with Luca, our new brilliant PhD student, I started looking at the reconstruction of neutral kaon decays K^0 \to \pi^+ \pi^-, a clean source of well-identified pion tracks with which we hope to be able to study muon mis-identification in CMS.

Meanwhile, the six-strong group in Padova is already expanding. Last Wednesday professor Fotios Ptochos, a longtime colleague in CDF, a good friend, and crucially one of the authors of the multi-muon analysis, came to Padova and presented a two-hour-long seminar on the CDF signal in front of a very interested group of forty physicists spanning four generations, from Milla Baldo Ceolin to our youngest undergraduates. The seminar was enlightening, and I was very happy with the result of a week spent organizing the whole thing! (I will have to ask Fotios if I can make the slides of his talk available here…)

Fotios, a professor at the University of Cyprus, is a member of CMS, and a true expert of measurements in the B-physics sector at hadron machines. We plan to work together to repeat the controversial CDF analysis with the first data that CMS will collect -hopefully later this year.

The idea of repeating the CDF analysis in CMS is obvious. Both CDF and D0 can say something on the signal in a reasonable time scale, but whatever the outcome, the matter will only be settled by the LHC experiments. Imagine, for instance, that in a few months D0 publishes an analysis which disproves the CDF signal. Will we then conclude that CDF has completely screwed up its measurement? We will probably have quite a clue in that case, but we will need to remain open to all possibilities until at least a third, possibly more precise, measurement is performed by an independent experiment. That measurement is surely going to be worth a useful publication.

And now imagine, on the contrary, that the CDF signal is real…

Some posts you might have missed in 2008 – part II January 6, 2009

Posted by dorigo in physics, science.

Here is the second part of the list of useful physics posts I published on this site in 2008. As noted yesterday when I published the list for the first six months of 2008, this list does not include guest posts nor conference reports, which may be valuable but belong to a different place (and are linked from permanent pages above). In reverse chronological order:

December 29: a report on the first measurement of exclusive production of charmonium states in hadron-hadron collisions, by CDF.

December 19: a detailed description of the effects of parton distribution functions on the production of Z bosons at the LHC, and how these effects determine the observed mass of the produced Z bosons. On the same topic, there is a maybe simpler post from November 25th.

December 8: description of a new technique to measure the top quark mass in dileptonic decays by CDF.

November 28: a report on the measurement of extremely rare decays of B hadrons, and their implications.

November 19, November 20, November 20 again, November 21, and November 21 again: a five-post saga on the disagreement between Lubos Motl and yours truly on a detail of the multi-muon analysis by CDF, which becomes an endless diatribe since Lubos won’t listen to my attempts at making his brain work, and insists on his mistake. This leads to a back-and-forth between our blogs and a surprising happy ending when Motl finally apologizes for his mistake. Stuff for expert lubologists, but I could not help adding the above links to this summary. Beware, most of the fun is in the comments threads!

November 8, November 8 again, and November 12: a three-part discussion of the details in the surprising new measurement of anomalous multi-muon production published by CDF (whose summary is here). Warning: I intend to continue this series as I find the time, to complete the detailed description of this potentially groundbreaking study.

October 24: the analysis by which D0 extracts evidence for diboson production using the dilepton plus dijet final state, a difficult, background-ridden signature. The same search, performed by CDF, is reported in detail in a post published on October 13.

September 23: a description of an automated global search for new physics in CDF data, and its intriguing results.

September 19: the discovery of the \Omega_b baryon, an important find by the D0 experiment.

August 27: a report on the D0 measurement of the polarization of Upsilon mesons -states made up by a b \bar b pair- and its relevance for our understanding of QCD.

August 21: a detailed discussion of the ingredients necessary to measure with the utmost precision the mass of the W boson at the Tevatron.

August 8: the new CDF measurement of the lifetime of the \Lambda_b baryon, which had previously been in disagreement with theory.

August 7: a discussion of the new cross-section limits on Higgs boson production, and the first exclusion of the 170 GeV mass, by the two Tevatron experiments.

July 18: a search for narrow resonances decaying to muon pairs in CDF data excludes the tentative signal seen by CDF in Run I.

July 10: An important measurement by CDF on the correlated production of pairs of b-quark jets. This measurement is a cornerstone of the observation of anomalous multi-muon events that CDF published at the end of October 2008 (see above).

July 8: a report of a new technique to measure the top quark mass which is very important for the LHC, and the results obtained on CDF data. For a similar technique of relevance to LHC, also check this other CDF measurement.

D0 bags evidence for semileptonic dibosons October 24, 2008

Posted by dorigo in news, physics, science.

A week ago I discussed here the recently approved analysis by which CDF shows a small hint of WW/WZ signal in their Run II data, with one W boson decaying to a lepton-neutrino pair, and the other boson (either a W or a Z) producing a pair of hadronic jets.

Such a process is very hard to establish in hadronic collisions, due to the large irreducible background from events with one leptonic W decay accompanied by QCD radiation off the initial state of the collision. In fact, despite having been sought by many in Run I, no appreciable signal had surfaced in Tevatron data, either in CDF or in D0.

Now, D0 has really bagged it. They used a better-performing selection method than the one used by CDF, and were a bit bolder in their use of Monte Carlo simulations. The result is that they find a very significant excess, amounting to roughly 960 events in a total of nearly 27,000.

I encourage those readers who are unfamiliar with the basics of vector boson production at hadron colliders to read the introductory part of the former post on this topic, which I linked above. Here I will avoid repeating that introduction, and concentrate instead on the analysis details.

D0 uses a total of 1.1 inverse femtobarns of 1.96 TeV proton-antiproton collisions, acquired during Run II of the Tevatron. The samples of data are collected by triggers selecting a high-energy electron or muon, with a further requirement of a transverse energy imbalance of 20 GeV or more, thus characterizing the leptonic decay W \to l \nu_l of one vector boson. Finally, the transverse mass of the lepton plus missing-transverse-energy system has to be larger than 35 GeV, reducing backgrounds from non-W events.

[The transverse mass is computed by neglecting the z-components of the particles’ momenta: if both particles are emitted perfectly transverse to the beam direction, transverse and total mass coincide. This choice is forced by the absence of a z-measurement of the neutrino momentum, since the energy imbalance it creates by escaping the detector cannot be measured along the proton-antiproton axis.]
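For concreteness, the transverse mass described in the bracketed note is the usual formula built from transverse momenta and azimuthal angles only; a minimal sketch (my own illustration, not D0 code), assuming massless lepton and neutrino:

```python
import math

def transverse_mass(pt_lep, phi_lep, met, phi_met):
    """Transverse mass of the lepton + missing-ET system (massless approximation)."""
    dphi = phi_lep - phi_met
    return math.sqrt(2 * pt_lep * met * (1 - math.cos(dphi)))

# A back-to-back 40 GeV lepton and 40 GeV missing ET reconstruct the full W mass scale:
print(transverse_mass(40.0, 0.0, 40.0, math.pi))  # 80 GeV
```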

Besides characterizing the leptonic W decay, two jets with transverse energy above 20 GeV are required. After this selection, the data contain a non-negligible amount of non-W backgrounds, consisting of QCD multijet events where the leptonic W is a fake; but the bulk is due to W+jets production, where the jets arise from QCD radiation off the initial partons participating in the hard interaction. Several Monte Carlo samples are used to model the latter background process, while the former is handled by loosening the lepton identification criteria in the data: the looser the lepton requirement, the larger this contamination, such that for really loose electron and muon candidates the samples are almost purely due to QCD multijet events.

Signal and backgrounds are separated using a multivariate classifier to combine information from several kinematic variables. This is the Random Forest algorithm, which I have had occasion to discuss in the past (two years ago a student of mine used it to discriminate hadronic top events in a similar dataset in CDF). The Random Forest output is highest (close to one) for signal events, while backgrounds are given a value closer to zero. The result of the classification is shown below: the excess at high values of the RF output is due to the diboson signal (in red, the signal content estimated by the fit).
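As a toy illustration of this classification step, here is a Random Forest trained on invented two-variable events with scikit-learn; D0's actual input variables, tuning, and implementation are of course different:

```python
# Toy Random-Forest classification on invented data (not D0's analysis).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Invent "background" and "signal" populations separated in two kinematic variables.
bkg = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(1000, 2))
sig = rng.normal(loc=[1.5, 1.5], scale=1.0, size=(1000, 2))
X = np.vstack([bkg, sig])
y = np.array([0] * 1000 + [1] * 1000)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# The RF output is the estimated signal probability: close to 1 for signal-like events.
print(rf.predict_proba([[1.5, 1.5]])[0, 1])  # high (signal-like point)
print(rf.predict_proba([[0.0, 0.0]])[0, 1])  # low (background-like point)
```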

A fit to the RF output provides the normalization of the signal and background components, as shown above. Notice the blue “envelope” in part (b) of the plot: it is the systematic uncertainty due to the background RF templates. Of course, the level of the blue curve is deceptive, since shape uncertainties are totally correlated among themselves; but the signal does stand out on top of it.

A plot of the dijet mass distribution confirms the interpretation, as shown below. The bottom part shows the data after subtraction of the background contributions (points with error bars), which compare well with the shape of the expected diboson contribution. D0 finds a combined WV (WW+WZ) cross section of 20.2 \pm 1.4 \pm 3.6 \pm 1.2 pb, where the first uncertainty is statistical, the second is systematic, and the third relates to the uncertainty on the integrated luminosity of the data used in the search. This compares well with the theoretical prediction of \sigma(WV)=16.1 \pm 0.9 pb.
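Summing the three quoted uncertainties in quadrature gives the total error on the D0 measurement (a quick check, assuming the usual quadrature combination of independent uncertainties):

```python
import math

# The three uncertainties on the D0 WV cross section, as quoted above (in pb).
stat, syst, lumi = 1.4, 3.6, 1.2
total = math.sqrt(stat**2 + syst**2 + lumi**2)
print(f"sigma(WV) = 20.2 +- {total:.1f} pb (total uncertainty)")
```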

In the plot above, the combined W/Z signal peaks at about 80 GeV, with a resolution of roughly 15 GeV; the background template uncertainty is again shown in blue, underlining once more the difficulty of this measurement, which finds a signal excess exactly where the backgrounds peak.

One question I often hear asked about plots such as the one above is: “why do the W and Z bosons peak at the same mass value? They have a 10.7 GeV mass difference, after all”. True, but the dijet mass resolution of the D0 detector is insufficient to tell the two signals apart, and what one observes is the combined shape. To be more precise, one should also add that the Z contribution in the plot is much smaller than the W one (about one third). Further, one should point out that the heavy flavors produced by the Z boson yield an underestimated Z mass reconstruction, due to the neutrinos often produced in the semileptonic decay of b- and c-quark jets. It is a fact that the Z \to b \bar b decay will peak at about 83 GeV after calibration of the generic jet response, due to that effect alone…

I will let the authors point out that “This work further provides a validation of the analytical methods used in searches for Higgs bosons at the Tevatron”, as they do in the conclusions of their paper. Indeed, the advanced methodologies by which the Tevatron experiments are setting more and more stringent limits on Higgs boson production are perceived by some as a bit incautious. It transpires instead that things are well under control, once one can demonstrate that a signal known to be there can indeed be extracted from samples which have a very small signal-to-background ratio, as is the case of all Higgs searches.

Omega b: the new baryon nailed by D0 September 19, 2008

Posted by dorigo in news, physics, science.
Tags: , ,
comments closed

Three weeks ago my attention was focused on the LHC start-up and on other less exciting things, and I overlooked an important new find by the D0 Collaboration: the discovery of the Omega_b baryon. Let me do justice to this new scientific result, although belatedly, in this post. I will first give a short introduction for non-experts, and then discuss the details of the analysis in brief.

The Omega_b baryon, \Omega_b, is a funny particle. It is made up of one b-quark and two s-quarks. Since the b-quark has electric charge -1/3 like the s-quark, the Omega_b carries one unit of negative electric charge. It belongs to a baryon octet: eight particles of similar characteristics, which you may obtain from one another by successive exchanges of their constituent quarks. To explain what a baryon multiplet is, let me neglect the b-quark for a second and rather discuss a simpler scheme with just the three lightest quarks u, d, and s, the building blocks of the symmetrical states first compiled in the sixties, when particle theorists were just starting to fiddle with group representations to try and categorize the newly observed particle states.

The simplest baryon decuplet is shown in the scheme on the left. As you might notice, there are three different axes along which one can classify the ten baryons belonging to it. The first, labeled by the letter "Q", describes the electric charge of the states, which goes from -1 to +2, increasing toward the top right corner. The second, labeled by the letter "S", describes the "strangeness" of the states: S is the number of strange quarks the baryons contain, and it increases as one moves down. Forget the third axis, it is of no use to you here.

The one above is an example of the many possible representations of the symmetry group SU(3), in this case applied to describe the symmetries of quark flavors. You exchange a quark flavor with another by jumping along one of the three directions along the sides of the triangle, and you obtain a new baryon, whose properties are somehow connected to those of the former one.

In 1964, exactly the scheme above was drawn to predict that a new state, the \Omega^-, should exist at the bottom of the triangle. All nine other baryons had already been observed, and their organization in a decuplet was highlighted by the similarity of the masses of baryons belonging to each row: the upper four states are called Deltas: \Delta^-, \Delta^0, \Delta^+, \Delta^{++}, and all have masses of about 1.232 GeV; the second row contains three states called Sigmas: \Sigma^-, \Sigma^0, \Sigma^+, all with a mass of about 1.384 GeV; the third row has the two states called Xi, \Xi^-, \Xi^0, with a mass of about 1.533 GeV. It does not take a very smart crackpot to guess that a single state should exist to occupy the lower vertex of the triangle, and its mass could be well inferred from the linear progression above: each step down increases the mass by 140-150 MeV, the contribution due to the substitution of a "heavy" strange quark for one of the lighter d or u quarks. So the new state had to have a mass of about 1.533+0.147=1.680 GeV.
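The extrapolation is simple enough to redo in a few lines. The masses below are the decuplet row masses quoted above, and the "equal spacing" assumption is the same one used in 1964:

```python
# The 1964-style mass extrapolation: each added strange quark raises
# the decuplet mass by a roughly constant step (the equal-spacing rule).
masses = {0: 1.232, 1: 1.384, 2: 1.533}  # GeV, keyed by number of s quarks

# Average mass increase per strange quark, from the two observed steps.
steps = [masses[1] - masses[0], masses[2] - masses[1]]
avg_step = sum(steps) / len(steps)

# Predicted mass of the sss state at the bottom of the triangle.
m_omega_pred = masses[2] + avg_step
print(f"predicted Omega- mass: {m_omega_pred:.3f} GeV")  # measured: 1.672 GeV
```

The linear rule lands within about 10 MeV of the measured \Omega^- mass: not bad for a prediction drawn on the back of a triangle.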

The discovery of the \Omega^- in 1964, from the single, gold-plated event shown above (left, the bubble chamber image; right, its decoding into particle tracks; the particle is produced by a beam entering from below and hitting a target in the chamber), was a true success of Gell-Mann's and Zweig's threefold way, the classification scheme of hadrons based on the SU(3) symmetry, which implied the existence of quarks, if only as mathematical descriptive tools. The discovery also made it clear that a new quantum number was needed to describe these objects: if the \Delta^-, the \Delta^{++}, and the \Omega^- were each composed of three quarks of the same type, lying in the same quantum state (with their half-integer spins completely aligned, to give those baryons a total spin of 3/2), there was an absolute need for an additional quantum number for quarks, to make each member of the trio different from the others, or the Pauli exclusion principle would have to be abandoned. This new characteristic was soon identified with colour, the "charge" of strong interactions, which binds quarks together inside hadrons.

Now, let us fast-forward to 2008. We know baryons are quark triplets, we know we can organize them in multiplets of well-defined symmetry properties, and we have found most of them. The \Xi_b states have recently been seen by both CDF and D0. So in principle, having observed the cousins of the Omega_b, nobody can really pretend to be surprised by the new discovery: it is simply required by the scheme. Nevertheless, finding the \Omega_b -measuring its properties, its mass, and the rate of its production in hadron collisions- is important. Actually, for theorists the most important thing by far is the production rate: understanding the production mechanisms is tough.

Our current understanding of the mechanisms whereby an energetic collision creates states like the \Omega_b is still rather sketchy. Quantum chromodynamics, the theory of strong interactions that binds colored quarks in colorless hadrons, can be used to calculate precisely the production rate of b and s quarks only in special cases; for others, some parametrizations are needed. The possibility that s-quarks come directly from inside the projectiles is also parametrized by "parton distribution functions", which are measured experimentally; as for b-quarks, they are hardly contained in the proton or antiproton at all. All in all, it is possible to predict, with some degree of uncertainty, how frequently we may obtain those three quarks in the final state; but predicting the probability of their binding into a (bss) triplet requires understanding the action of lower-energy phenomena, and it currently still requires a good deal of black magic. Because of these difficulties, the number of \Omega_b events produced for a given amount of Tevatron proton-antiproton collisions is an intrinsically interesting quantity.

The analysis by D0 searches for a very well-defined final state of the \Omega_b decay, one which does not include any neutral particles. It is shown in the graph on the right, where only full lines represent particles which are detected and measured in the detector. The presence of only charged particles in the final state allows the measurement of all the relevant particle momenta, and the reconstruction of the mass of the Omega_b candidate. The decay chain is spectacular, since it involves first the decay \Omega_b \to J/ \Psi \Omega, and then the cascade \Omega \to \Lambda K^- \to p \pi^- K^-, with three charged tracks in the final state which form a backward-reconstructed path, similar (although less striking) to that of the first Omega- event observed in 1964. As for the J/ \Psi, it is easily reconstructed from the two muons it decays into.
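For readers curious about what "reconstructing the mass" means in practice, here is the generic recipe: sum the four-momenta of the decay products and take the invariant mass of the sum. This is a sketch of the kinematics, not D0's code, and the muon momenta below are made up, chosen so the pair reconstructs near the J/psi mass:

```python
import math

def invariant_mass(particles):
    """Invariant mass of a set of particles given as (px, py, pz, m)
    tuples in GeV: M^2 = (sum E)^2 - |sum p|^2."""
    E = sum(math.sqrt(px**2 + py**2 + pz**2 + m**2)
            for px, py, pz, m in particles)
    px = sum(p[0] for p in particles)
    py = sum(p[1] for p in particles)
    pz = sum(p[2] for p in particles)
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Hypothetical muon tracks (momenta in GeV, muon mass 0.1057 GeV),
# chosen to reconstruct close to the 3.097 GeV J/psi mass.
muons = [(1.5, 0.0, 2.0, 0.1057), (-1.5, 0.0, 2.0, 0.1057)]
m_dimuon = invariant_mass(muons)
print(f"dimuon mass: {m_dimuon:.2f} GeV")
```

The same function, fed the proton, pion, and kaon tracks of the cascade, gives the \Omega^- candidate mass; combining everything gives the \Omega_b candidate.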

D0 uses a method called "Boosted Decision Trees", BDT for insiders, to increase the signal-to-noise ratio of their \Omega^- candidates, before combining their signal with that of the J/Psi decays. Several kinematic variables are used to discriminate real \Omega^- decays from random track combinations. The method does a good job, as you may judge for yourself by comparing the invariant mass spectrum of \Lambda K combinations before (left) and after (right) the BDT selection in the graphs. Notice that the red histogram comes from combining three tracks which have the wrong sign combination: a \Lambda signal with a positive kaon, which cannot possibly come from a \Omega^- decay. The combination carries exactly the same biases as the right-sign combination, and in fact it reproduces well the shape and normalization of the background in the right-sign sample, both before and after the selection.
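The wrong-sign trick is easy to demonstrate with a toy example. This is my own sketch with invented yields and resolutions, not the D0 data: the right-sign sample contains a narrow \Omega^- peak on a flat combinatorial background, while the wrong-sign sample contains only the combinatorics, with the same shape and normalization:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy samples (masses in GeV): right-sign Lambda K- combinations hold a
# Gaussian Omega- signal at 1.672 GeV plus flat combinatorics; the
# wrong-sign Lambda K+ combinations hold only the combinatorics.
n_bkg = 20000
right_sign = np.concatenate([
    rng.normal(1.672, 0.007, 1500),          # Omega- signal
    rng.uniform(1.60, 1.80, n_bkg),          # combinatorial background
])
wrong_sign = rng.uniform(1.60, 1.80, n_bkg)  # background only

# In a sideband away from the peak, the two samples should agree,
# validating the wrong-sign spectrum as a background template.
lo, hi = 1.72, 1.80
rs = ((right_sign > lo) & (right_sign < hi)).sum()
ws = ((wrong_sign > lo) & (wrong_sign < hi)).sum()
ratio = rs / ws
print(f"right-sign/wrong-sign sideband ratio: {ratio:.2f}")
```

A ratio compatible with one in the sideband is what licenses using the wrong-sign histogram to model the background under the signal peak.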

In the end, D0 reconstructs the mass of the \Omega_b baryon (see below) with a rather simple-minded approach. This is the only part of the analysis which made me frown. Why did they not perform a full-fledged kinematic fit to extract the candidate mass with the best possible accuracy? They in fact apply some hoonga-doonga correction to the reconstructed mass, forgetting for a moment that they have the moral obligation to use the full information provided by their precious detector. Here is what they do: they first compute the J/Psi mass from the two muon four-momenta; then they compute the \Omega^- mass from the lambda-kaon combinations; and then they go hoonga-doonga:

M_{\Omega_b} = M^{rec}_{\Omega_b} + (3.097 - M^{rec}_{J/\Psi}) + (1.6724 - M^{rec}_{\Omega^-}).

That is, they just add to the measured \Omega_b mass the residual differences between the true and reconstructed J/Psi and Omega masses. This is like putting a pair of flints as a cigarette lighter in a Ferrari Enzo. Rather hard to digest for me, but this is a first observation paper, so I will keep my criticism restrained. So, my congratulations to D0 for pulling this new result off! You can read the details of the analysis in this paper.

UPDATE: I made a typo in the post above (at least one, that is). The one I am referring to is important, however. It is in the part where I discuss hadron multiplets. Can you spot it ?

Upsilon polarization: a surprise from D0 August 27, 2008

Posted by dorigo in news, physics, science.
Tags: , , , , , ,
comments closed

I was surprised by the recent measurement of the Upsilon polarization by the D0 collaboration, which finds a sizable effect that seriously disagrees with the CDF result, obtained six years ago from a 12 times smaller Run I dataset.

D0 has a large acceptance to muons, and so can detect with good efficiency the \Upsilon(1S) \to \mu^+ \mu^- decays. CDF has a slightly worse acceptance, but its momentum resolution is of a totally different class. Compare the mass distribution of Upsilon mesons published by D0 in their polarization paper, shown below (they refer to different bins of transverse momentum, left to right, and to different fit parametrizations, top to bottom; black points are the data, the red curve is the fit, and the black gaussians are the three Upsilon signals returned by the fit),

…with the Run I distribution by CDF on which their old result is based, in the plot below (where the data is the black histogram, and the curve is the fit):

Any questions ? Of course, the three Upsilon(1S), (2S), (3S) states merge together in a broad bump in the D0 signal plot, while they stand each on their own in the CDF plot. That’s resolution, baby. Muon momentum resolution is a thing on which Nobel prizes are done and undone.

Despite the lower resolution, D0 can statistically separate the three populations, and measure the Upsilon (1S) and (2S) polarization as a function of the meson transverse momentum. The polarization is defined by the number alpha:

\large \alpha = \frac {\sigma_T-2 \sigma_L} {\sigma_T+2 \sigma_L},

where \sigma_T and \sigma_L are the cross-sections for producing a transversely and longitudinally-polarized meson, respectively. The polarization can be measured from the decay angle of the positively-charged muon in the Upsilon rest frame.
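One can sketch how \alpha is extracted from that decay angle with a toy experiment. This is my own illustration, not D0's method: the true \alpha is invented, events are generated according to the angular distribution dN/d\cos \theta^* \propto 1 + \alpha \cos^2 \theta^*, and \alpha is recovered by inverting the second moment of \cos \theta^*:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented "true" polarization for the toy.
alpha_true = -0.3

# Sample cos(theta*) from 1 + alpha*cos^2(theta*) by accept-reject,
# using a flat envelope of height 1 + |alpha|.
c = rng.uniform(-1, 1, 200000)
keep = rng.uniform(0, 1 + abs(alpha_true), c.size) < 1 + alpha_true * c**2
cosths = c[keep]

# For this distribution E[cos^2] = (5 + 3*alpha)/(15 + 5*alpha),
# so the moment can be inverted to estimate alpha.
r = np.mean(cosths**2)
alpha_est = (15 * r - 5) / (3 - 5 * r)
print(f"estimated alpha: {alpha_est:.2f}")
```

In the real measurement one fits the full angular distribution, unfolding acceptance and backgrounds; the moment inversion above just shows why the decay angle carries the polarization information.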

D0 had a total of 260,000 Upsilon decays to play with, and they produced a very detailed measurement of the behavior of \Upsilon(1S) and \Upsilon(2S) polarization as a function of the meson P_T. This allows a comparison with NRQCD, a factorization approach to the calculation of quarkonium production processes which enshrines in universal non-perturbative color-octet matrix elements the non-computable part, and uses experimental data to fix them.

Confused? Don’t be. Let’s just say that NRQCD is a successful approach at determining several characteristics of charmonium production, and a test of its prediction of the dominance of \sigma_T at high transverse momentum -where gluon fragmentation is the main process for the production of quarkonium in the model- is quite useful.

Another thing to note is that understanding Upsilon production -particularly in the forward region- may be very important for the determination of parton-distribution functions of the proton at very small values of Bjorken x -the fraction of proton momentum carried by a parton. These measurements are very important for the LHC, where interesting physics phenomena will be dominated by very low x collisions.

So let me just jump to the results of the D0 analysis. The polarization plot for the 1S state is shown below. Black points are the D0 measurement, while the green ones show the old CDF result (by the way, it is a shame that CDF does not have a Run II measurement of the Upsilon polarization yet, and you can well say it is partly my fault, since a few years ago I wanted to do this measurement and then had to abandon it…).

As you can see, the D0 result shows a Pt dependence of the polarization which is not well matched by the NRQCD predictions (the yellow band, only available above 8 GeV of Pt), nor by the two limiting cases of a kt-factorization model. What is worse, however, is that the result is seriously at odds with the old CDF data points. Who is right and who is wrong? Or are the two sets of points compatible?

Well, this is one of those instances when counting standard deviations does not work. The two results have sizable correlated systematic uncertainties among the data points, so moving eight data points up by one sigma collectively may cost much less than \sqrt{8} standard deviations: in the limit where the systematics dominate and are 100% correlated, moving all points up by 1 \sigma costs just one standard deviation… In the case of the D0 points, however, the paper does not provide this information in detail. One learns that the signal modeling systematics amount to anywhere between 1 and 15%, with the maximum uncertainty in the second bin from the left; and that the background modeling systematics range from 4 to 21%, with the first bin being the worst one. As for the old CDF result, I could not find a detailed description of the systematics either, but in that case the precision of the measurement is driven by statistics.
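The point about correlations is easy to verify numerically. This is a toy example with eight unit-variance points, not the actual D0 or CDF covariances: the chi-squared cost of a coherent one-sigma shift of all points is computed for uncorrelated and for fully correlated uncertainties:

```python
import numpy as np

n = 8
shift = np.ones(n)  # a coherent 1-sigma shift of every data point

# Uncorrelated case: covariance is the identity, and the shift costs
# chi^2 = 8, i.e. sqrt(8) ~ 2.8 standard deviations.
cov_uncorr = np.eye(n)
chi2_uncorr = shift @ np.linalg.inv(cov_uncorr) @ shift

# Fully correlated systematics (a tiny diagonal term keeps the matrix
# invertible): the very same shift now costs only ~1 unit of chi^2.
eps = 1e-3
cov_corr = np.ones((n, n)) + eps * np.eye(n)
chi2_corr = shift @ np.linalg.inv(cov_corr) @ shift

print(f"uncorrelated: chi2 = {chi2_uncorr:.1f}")
print(f"fully correlated: chi2 = {chi2_corr:.2f}")
```

This is why eyeballing point-by-point disagreement between the D0 and CDF polarization measurements can badly overstate their incompatibility.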

In any case, congratulations to D0 for producing this important new measurement. And I now hope CDF will follow suit with their large dataset of Upsilons too!
