
Understanding b-quark cross sections May 18, 2007

Posted by dorigo in physics, science.

The production of b-quarks in proton-antiproton collisions was observed to be in mysterious disagreement with theoretical predictions during Run I at the Tevatron: CDF and D0 measured a rate of B-hadrons (the particles created when the b-quarks “dressed up”) higher than calculations performed with QCD (quantum chromodynamics, the theory of strong interactions) by a factor of two or more. In an era when perturbative QCD was showing impressive agreement with a wealth of data collected by experiments throughout the world, the production rate of b-quarks was bewildering – and to some, suggestive of supersymmetric particles contributing to the counts.

And the Tevatron was not alone: experiments at HERA (in electron-proton collisions) and at LEP (in photon-photon collisions) were also measuring rates of b-quark production too high by a factor of three or more. To deepen the puzzle, HERA could see no disagreement in the production of hadrons containing c-quarks, and the Tevatron was finding a fine match in the rate of top quark production. What was so special about the b-quark, then?

Strong interactions are “flavor blind”: they rule all kinds of quarks democratically. The amount of force holding quarks together is the same regardless of whether they are heavy or light, and whether they are charmed, beautiful, or strange; the strong force similarly ignores the sign and amount of electric charge they possess. In fact, gluons – the carriers of the force – only feel the color charge of quarks. Now, since the rate of occurrence of processes creating quarks grows with the strength of the QCD interaction (the stronger the force, the higher the chance of a given process occurring), one would expect that either all production rates of quarks match the respective theoretical predictions, or they all differ. That is why the question posed above was so intriguing.

In order to understand more of the issue, discuss its solution, and better appreciate the latest results on the b-quark production cross section, we need to delve a little into the world of calculating quantum phenomena.

Theorists cannot compute production rates of quantum phenomena with arbitrary accuracy. Despite a very simple and unambiguous definition of what one wants to calculate – say, “take particles A and B, with momenta Pa and Pb, and determine how likely it is that they annihilate, producing particles C and D with momenta Pc and Pd” – a perfect calculation involves evaluating an infinite number of sub-processes, each giving its own contribution to the phenomenon defined above: something which is impossible to do.

Luckily, experimentalists love to approximate. We live off hand-waving arguments, back-of-the-envelope calculations, ball-park estimates, “cow-more, cow-less” guesses. And besides, we usually get pissed off by theoretical predictions carrying a precision two orders of magnitude higher than our measuring instruments can ever achieve: we need it about as much as a man needs a two-foot-long penis – great for bragging, but ineffective and redundant.

So theorists are kind enough to invent approximation methods. In simple terms, they seek a classification scheme to group together all the largest additive contributions to the rate of the process under study, such that all that is left is a small correction. They can thus compute the sum of the elements they have singled out, and obtain a “leading order” estimate. Then they move to the remaining effects, again seeking a division according to the size of each contribution, if possible using the same partitioning scheme, and they obtain a “next-to-leading order” contribution. Each successive step is usually harder, because more terms have to be considered in the calculation: most of the time, calculations are performed at “next-to-leading” order, and are stopped there for lack of paper, shortage of coffee, or fear of irritating experimentalists. It is only for a few computations in quantum electrodynamics (QED), like the anomalous magnetic moment of the electron and muon, that numbers correct to the tenth decimal place or more have been obtained. In those cases, in fact, experiments were able to measure the corresponding quantities with similar accuracy.
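In symbols, this grouping amounts to an expansion in powers of the coupling constant. A purely schematic sketch (not any specific published formula: here σ stands for a generic production rate, α for the coupling, and n for the lowest power of the coupling at which the process can occur) might read:

```latex
\sigma \;=\; \underbrace{c_{\mathrm{LO}}\,\alpha^{\,n}}_{\text{leading order}}
\;+\; \underbrace{c_{\mathrm{NLO}}\,\alpha^{\,n+1}}_{\text{next-to-leading}}
\;+\; \underbrace{c_{\mathrm{NNLO}}\,\alpha^{\,n+2}}_{\text{next-to-next-to-leading}}
\;+\; \dots
```

Each coefficient c collects all the sub-processes of a given order; when α is small, as in QED, truncating the series after a term or two already gives an excellent approximation.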

Within the realm of QCD things are tougher than in QED, because in addition to the diverging number of sub-processes to add together, there is the added complication that one cannot always expect the result of the simplest grouping procedure to be sufficiently precise: while in QED the simplest sub-processes are the most important, in QCD this is no longer necessarily true. That happens because of the large value of the QCD coupling constant: the interaction is strong, and so complicated diagrams with many interactions may provide large contributions to the calculation. The above is especially true for the very low-energy phenomena involved in the process called fragmentation – the mechanism whereby a bare quark dresses up to form a hadron: low energy, in QCD, means a higher coupling constant. We will come back to the issue of fragmentation later.
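The statement that low energy means a stronger coupling can be made quantitative. At lowest order, the QCD coupling “runs” with the energy scale Q of the process according to the standard one-loop formula:

```latex
\alpha_s(Q^2) \;=\; \frac{12\pi}{\left(33 - 2\,n_f\right)\,\ln\!\left(Q^2/\Lambda^2\right)}
```

where n_f is the number of quark flavors light enough to participate, and Λ (a few hundred MeV) is the scale at which QCD becomes genuinely strong. At the high energies of top quark production α_s is small and the perturbative series behaves well; at the soft scales relevant to fragmentation, Q approaches Λ, the logarithm shrinks, the coupling grows large, and the neat ordering of contributions by powers of α_s starts to fail.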

[To be continued…]
