
Scalar quarks and the B+ cross section

May 19, 2007

Posted by dorigo in news, physics, science.

In the previous post I left the discussion of the disagreement of B-hadron cross sections with theory hanging, in order to explain in rough terms what the problems are with computing theoretical estimates. These have been computed at “next-to-leading order” (NLO) in QCD, a level of accuracy which is usually sufficient to provide good agreement with measured quantities for most processes.

NLO QCD might fail with B-hadrons because of subtleties such as threshold effects, large contributions from small-x phenomena, and large logarithms at high transverse momentum. I will leave these effects unexplained, noting only that a next-to-next-to-leading order (NNLO) computation might include some of their contributions and provide better agreement with experiment – but NNLO calculations are much harder to produce, and they still do not exist for B-hadron production in proton-antiproton collisions. Instead, there are other, smarter ways to deal with the apparent inconsistency between theory and experimental data.

Indeed, in 2002 Matteo Cacciari and Paolo Nason produced a ground-breaking paper which re-examined the question. They demonstrated that most of the disagreement between data and theory could be accounted for by means of a more accurate matching of the perturbative QCD calculations with the modeling of the non-perturbative part of the process whereby the produced b-quark dresses up as a B-hadron: the so-called fragmentation.

In fact, while QCD calculations allow one to predict the rate of b-quark production as a function of the energy with which the quark is emitted, one missing piece has to be supplemented by a model (to be precise, a fragmentation function, D(z)) which describes how much of the quark energy will be retained by the B-hadron that the b-quark immediately forms after being created. Cacciari and Nason showed that a better agreement with the data was possible by merging the NLO calculation with a “next-to-leading logarithm” summation of large transverse momentum effects.
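To give a feeling for what D(z) looks like, here is a minimal sketch using the Peterson et al. parameterization – an illustration only, not the form Cacciari and Nason actually employ (they work with moments of the fragmentation function), and the value of the epsilon parameter is just a typical fit result I am assuming:

```python
import numpy as np

# A minimal sketch of a non-perturbative fragmentation function D(z), where
# z is the fraction of the b-quark energy retained by the B hadron, using
# the Peterson et al. parameterization. This is an illustration, NOT the
# form used by Cacciari and Nason; eps = 0.006 is an assumed typical value.
def peterson(z, eps=0.006):
    """Unnormalized Peterson fragmentation function."""
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

z = np.linspace(1e-3, 1.0 - 1e-3, 100_000)
dz = z[1] - z[0]
d = peterson(z)
norm = (d * dz).sum()                 # normalize so D(z) integrates to 1
mean_z = (z * d * dz).sum() / norm    # average energy fraction retained
print(f"<z> ~ {mean_z:.2f}")          # roughly 0.8: the B hadron keeps
                                      # most of the b-quark energy
```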

Explaining the details of their findings, and discussing the contributions of small moments of the fragmentation function to their calculation, is way beyond my limited powers – as Einstein once said, “you have not understood something until you can explain it to your grandmother” – and I think ultimately I have not understood the calculation well enough to report on it clearly and concisely here. So I will abstain from going deeper, and just quote the bottom line from their paper:

“…we find that an appropriate treatment of the fragmentation properties of the b quark considerably reduces the discrepancy of the CDF transverse momentum spectrum for the B mesons and the corresponding QCD calculation. […] Including experimental and theoretical uncertainties, the updated Data/Theory ratio can be written as 1.7 ± 0.5 (exp.) ± 0.5 (theory).”
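Taking that number at face value and adding the two uncertainties in quadrature – assuming they are independent, which is a simplification – the residual excess is only about one standard deviation from unity:

```python
from math import hypot

# How far is the updated Data/Theory ratio from 1? Combine the experimental
# and theoretical uncertainties in quadrature (assuming independence).
ratio, err_exp, err_th = 1.7, 0.5, 0.5
err_tot = hypot(err_exp, err_th)    # ~0.71
print(f"({ratio} - 1) / {err_tot:.2f} = {(ratio - 1) / err_tot:.1f} sigma")
```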

That was the status in 2002, which compared the latest theory developments to old Run I results for B-hadron cross sections: from a 3x discrepancy to 1.7x, with large errors. Now, after a long and boring introduction, I feel I can finally discuss the most recent CDF result on that quantity!

In fact, several CDF measurements of the B production cross section have in the past been not only inconsistent with theory, but also, to some level, with each other. A paper published last year by a few members of CDF, led by Paolo Giromini, discussed the possibility that the observed discrepancies could be due to pair production of scalar bottom quarks, which would contribute to some datasets more than to others due to their alleged 100% decay to leptons.

The same group of physicists has now obtained a much more precise measurement of the production of B+ mesons, using 740 inverse picobarns of data collected in Run II to reconstruct the exclusive decay to J/psi and K+ mesons. Note that, this being a fully reconstructed, exclusive decay, no scalar quark contribution could ever be present here. These decays are selected from a large dataset of events containing a signal of J/psi decays to muon pairs, by picking K+ candidate tracks and fitting them together with the two muons to a common vertex.

Let me see if I can add some more detail to the ugly line above. They first find J/psi candidates by constraining the muon tracks to originate from a common point in three-dimensional space; if the mass of the system is between 3.05 and 3.15 GeV the event is retained (the J/psi has a mass of 3.097 GeV, but resolution effects cause the signal to appear as a Gaussian with a width of about 20 MeV). Then every other charged track in the event with transverse momentum larger than 1.25 GeV is considered a K+ candidate, and a fit to a common vertex is attempted. If the J/psi + track system has a combined transverse momentum larger than 6 GeV, the event makes it to the final selection. The mass of the system does show the characteristic peak of B+ decays, as shown below.
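Before looking at the plot: for the programmatically minded, the selection logic can be sketched roughly as follows. Everything here is hypothetical – the event containers and the vertex-fitting helper are invented stand-ins, not the actual CDF reconstruction software:

```python
# A schematic rendering of the B+ -> J/psi K+ selection described above.
JPSI_WINDOW = (3.05, 3.15)   # GeV, around m(J/psi) = 3.097 GeV
MIN_KAON_PT = 1.25           # GeV
MIN_B_PT    = 6.0            # GeV

def select_b_candidates(event, vertex_fit):
    """Return B+ -> J/psi K+ candidates in one event.

    `vertex_fit(tracks)` is assumed to fit the tracks to a common 3D
    vertex and return an object exposing .mass and .pt (hypothetical API).
    """
    candidates = []
    for mu_plus, mu_minus in event.muon_pairs:
        jpsi = vertex_fit([mu_plus, mu_minus])
        if not (JPSI_WINDOW[0] < jpsi.mass < JPSI_WINDOW[1]):
            continue
        for track in event.tracks:            # every other charged track
            if track in (mu_plus, mu_minus) or track.pt < MIN_KAON_PT:
                continue
            b = vertex_fit([mu_plus, mu_minus, track])
            if b.pt > MIN_B_PT:
                candidates.append(b)
    return candidates
```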

In this plot you can see the data plotted as a function of the invariant mass of the three tracks fit to a common vertex. On top of a falling distribution, populated by random associations of J/psi mesons with additional charged tracks and by other heavy flavor decays, the B+ signal is clear and dominant. The red line is a fit of the signal region to a linear function plus a Gaussian.
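A minimal sketch of such a fit, run on a toy spectrum rather than the real data, might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

# Gaussian signal on a linear background, as for the red line in the plot.
# The histogram below is a toy stand-in for the real J/psi K+ mass
# spectrum (the B+ mass is about 5.279 GeV).
def model(m, amp, mean, sigma, a, b):
    return amp * np.exp(-0.5 * ((m - mean) / sigma) ** 2) + a + b * m

rng = np.random.default_rng(0)
toy = np.concatenate([rng.normal(5.279, 0.015, 2000),   # toy signal
                      rng.uniform(5.0, 5.6, 5000)])     # toy background
counts, edges = np.histogram(toy, bins=60, range=(5.0, 5.6))
centers = 0.5 * (edges[:-1] + edges[1:])

p0 = [500.0, 5.28, 0.02, 80.0, 0.0]   # initial guesses
popt, pcov = curve_fit(model, centers, counts.astype(float), p0=p0)
amp, mean, sigma = popt[:3]
# Gaussian area divided by the bin width gives the signal event count:
n_sig = amp * sigma * np.sqrt(2 * np.pi) / (edges[1] - edges[0])
print(f"fitted mass {mean:.3f} GeV, width {1000*sigma:.0f} MeV, "
      f"~{n_sig:.0f} signal events")
```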

The data can be divided into subsets according to the value of the transverse momentum of the B+ candidates: by doing so, one can measure the production rate as a function of that variable – something we call a “differential Pt distribution” – which provides a more stringent test of theory than a single cross section number. The result can be seen in the plot below.

In the figure you see the new measurement (blue points) compared with other determinations of the same quantity (empty circles for Run I, obtained with the same exclusive decay channel, and triangles for Run II, obtained with an inclusive measurement of J/psi mesons). The new result is significantly more precise, and it is more in line with the other Run II result. Both are well described by FONLL, the theoretical prediction of Cacciari and Nason, shown with a continuous red line. NLO QCD alone, instead, does a very poor job, underestimating the data by a factor of 2.67 ± 0.23 over the whole spectrum.
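Schematically, converting the signal yield in each pT bin into a differential cross section amounts to dividing by luminosity, efficiency, branching fractions, and bin width. The sketch below uses placeholder numbers, not the values of the actual analysis:

```python
# How per-bin signal yields become a differential cross section,
# schematically. All numbers here are placeholders, not CDF's.
LUMINOSITY = 740.0   # pb^-1, the Run II sample quoted above

def dsigma_dpt(n_signal, efficiency, branching, pt_low, pt_high):
    """dsigma/dpT in pb/GeV for one transverse momentum bin."""
    return n_signal / (LUMINOSITY * efficiency * branching
                       * (pt_high - pt_low))

# Hypothetical bin: 1200 signal events between 6 and 8 GeV, 2% total
# efficiency, with B(B+ -> J/psi K+) x B(J/psi -> mu mu) ~ 6e-5 folded in:
print(f"{dsigma_dpt(1200, 0.02, 6e-5, 6.0, 8.0):.2e} pb/GeV")
```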

The measurement is thus another victory for the calculation method proposed by Cacciari and Nason. But there is more to it… In fact, I suspect that the authors of this nice new analysis will now move on to measuring the same B-hadron cross section using semileptonic decays: now that they have shown that an exclusive decay is perfectly in line with FONLL calculations, any discrepancy in the same quantity determined with other datasets might lend support to the idea that a scalar quark decaying semileptonically (and thus contributing to one data sample and not to the other) is present in CDF data, as they speculated in the paper mentioned above.

This post would not be complete if I left without answering the question posed in the first part: what was it, then, that made b-production different from c-quark production or top production? Yes, the fragmentation mechanism. It turns out that the c-quark, being three times lighter than the b-quark, is less affected by the “large transverse momentum effects” in its fragmentation function. As for the top quark, it is so heavy that it decays before having time to create hadrons, and thus it does not need a detailed model of fragmentation: perturbative QCD works wonders at the high energy necessary to produce top quark pairs, and nothing else is needed. Nowadays, the experimental value of the top quark cross section is in perfect agreement with NLO theory, and both carry only a 10-12% uncertainty: for a top mass of 175 GeV, CDF finds

sigma(tt) = 7.32 ± 0.85 pb

to be compared with the NLO QCD result of Kidonakis and Vogt [PRD68 (2003), p. 114014]:

sigma(tt) = 6.8 ± 0.6 pb.
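How compatible are the two numbers? A quick check, adding the uncertainties in quadrature (and assuming them independent and Gaussian, a simplification):

```python
from math import hypot

# Pull between the CDF measurement and the NLO prediction.
meas, err_meas = 7.32, 0.85   # pb, CDF
pred, err_pred = 6.8, 0.6     # pb, Kidonakis & Vogt NLO
print(f"pull = {(meas - pred) / hypot(err_meas, err_pred):.1f} sigma")
# ~0.5 sigma: experiment and theory agree perfectly well
```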

Do you remember what I said in the last post about having too long a penis? Well, since the experimental uncertainty will never go below 5-6% – the amount of systematic error due to the uncertainty on the integrated luminosity on which the measurement is based – the NLO result for top pair production is good enough: theoreticians can happily use their time to improve the precision of other estimates.

Comments

1. Doug - May 19, 2007

Could this Nature paper 26 April 2007; 446 (7139): 949 – 1116 relate to this scale of cross section plus other dimensions?

Robert D. MacPherson & David J. Srolovitz
‘The von Neumann relation generalized to coarsening of three-dimensional microstructures’ [p1053]
doi:10.1038/nature05745

Editor’s Summary
http://www.nature.com/nature/journal/v446/n7139/edsumm/e070426-09.html

2. dorigo - May 20, 2007

Hi Doug,

I cannot open your link to the Nature article. Is it mis-spelled, or has it been removed?

T.

3. Doug - May 20, 2007

Hi Tommaso,

I am at a loss to explain why the link in my post above did not work for you.
Perhaps the server was down for maintenance or some other reason at the time you attempted access?

While writing this post I was able to access, through both left and right click, the Editor’s Summary URL as written in the above post.

On my home computer I cannot proceed to the article, either in PDF or full text, or to the supplement, without a subscription.
I need to use a subscription on a university computer.

There is a brief synopsis of the paper with one figure [1] on a Princeton IAS site ‘News Briefs’: ‘Materials Science Problem Solved with Geometry’
http://www.ias.edu/newsroom/news-briefs/

The figure [1] resembles a polygonal equivalent of cylindrical tensors.


