
Massimo Passera: the muon anomaly and the Higgs mass – part II June 11, 2008

Posted by dorigo in physics, science.

Massimo Passera is an esteemed colleague in Padova. Our Physics department is not very big, but if one is immersed in one's own work, the activities going on around are easy to overlook. In fact, I only became aware of Massimo's recent study by checking the arXiv for recent phenomenological papers - funny, if you consider that our offices are only a thirty-second walk from one another.

I thus asked him to give a short review of the results of his work at the CMS-Padova analysis meeting which I chair monthly with my colleague Ezio Torassa. The analysis he performed has important implications for Higgs searches in CMS, and more generally it sheds light on the details of the electroweak fits that check the internal consistency of the Standard Model. Massimo kindly agreed, and yesterday he presented his review.

In the first part of this post I gave some background information on the physics of the muon anomaly, at a level which I hoped would keep non-physicist readers awake through to the end. That first part now helps me to write a summary of Massimo's talk in a way which is hopefully both understandable and lightweight. Let us see what I can manage below, where I make use of some concepts I explained yesterday.

Massimo started by giving some background information on the measurements of the anomalous magnetic moments of leptons. For electrons, the anomaly is measured with exquisite precision, and it provides us with a precise estimate of the fine structure constant \alpha. The tau anomaly, on the other hand, is very hard to measure, and so comparisons with theory - which is far ahead of experiment in this case - are not meaningful.

For the muon, the experimental uncertainty has been pushed down to \pm 63 \times 10^{-11}: here the ball is in the theorists' court, and they have to lower the uncertainty on their prediction if they want to challenge the experimental accuracy. Here are the numbers, for reference: experiment E821 finds a_\mu = (116592080 \pm 63) \times 10^{-11} (PRD73 (2006) 072003), while theory predicts different things depending on the method, as we will discuss below:

The first numbers in the table above are those to which most credit is given nowadays, so one can really talk about a three-and-something standard deviation discrepancy between theory and experiment.

It is to be noted that the muon anomaly, measured so far with the best precision at Brookhaven in a dedicated experiment, could see its experimental uncertainty decrease to \pm 25 (units of 10^{-11} will be assumed throughout the rest of this post) if a proposed experiment, E969, were to see the light. The jury is still out on whether to fund that experiment, or even a more challenging one called “Legacy”, which would bring the error down further.

Theorists are not ready to compare their result with a muon anomaly experimentally determined with the accuracy promised by E969 or Legacy: the reason is that their current estimates have errors in the range of 60 or 70 parts in 10^{-11}, as shown above (numbers in parentheses are the uncertainties). And when you compare two numbers, their difference - which ultimately tells us whether they agree or whether it is necessary to hypothesize unforeseen effects that make them different - is plagued by its own uncertainty: if \Delta = a_{exp}-a_{th} is the difference between the experimental and theoretical determinations of the anomaly, the error on \Delta is simply \sigma_\Delta = (\sigma_{exp}^2 + \sigma_{th}^2)^{0.5}, the quadratic sum of the two uncertainties. Now, it is easy to see that if one of the two contributions is much larger than the other, there is little gain in decreasing the smaller one.

A numerical example will clarify this point: if A_1=100 \pm 10 and A_2 = 140 \pm 17.5, their difference is \Delta = A_2-A_1 = 40 \pm (10^2+17.5^2)^{0.5} = 40 \pm 20: something we address colloquially as “a two-sigma effect”, meaning that the number is different from zero at the level of two standard deviations (40/20=2). Now imagine you halve the error on A_1: your effort will not repay you with much more insight into whether the two numbers differ, because the uncertainty on the difference will only go from \pm 20 to \pm (5^2+17.5^2)^{0.5}, or \pm 18: your knowledge of the value of \Delta has improved by just 10%! Now go back to your funding agency and explain that!
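For the numerically inclined, here is a minimal Python sketch of the quadrature rule - it simply re-runs the example above, nothing more:

from math import hypot

def sigma_delta(sigma_1, sigma_2):
    # Uncertainty on the difference of two independent measurements:
    # the individual errors add in quadrature.
    return hypot(sigma_1, sigma_2)

# The example from the text: A1 = 100 +- 10, A2 = 140 +- 17.5
print(sigma_delta(10.0, 17.5))  # ~20.2: a "two-sigma" effect (40/20 = 2)

# Halving the smaller error barely helps the comparison:
print(sigma_delta(5.0, 17.5))   # ~18.2: only a ~10% improvement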

After this introduction on the numbers we have to play with if we are to analyze the status of g-2 in the Standard Model, Massimo gave a short didactical review of the physics of the anomalous magnetic moment. I discussed the matter in the former post, so I will just note here that the issue of g being different from 2 is a problem which is exactly sixty years old: it was Schwinger, in 1947 (though he published in 1948), who first realized that quantum corrections affect its value. Theorists have kept increasing the precision of their calculation of the anomaly due to quantum electrodynamical (QED) interactions during all this time, and their advancement is spectacular.
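For reference - this is textbook QED rather than anything specific to Massimo's talk - Schwinger's celebrated one-loop result is a^{QED, 1-loop} = \alpha / 2\pi \simeq 0.00116141, which by itself already accounts for the bulk of the measured a_\mu = 0.00116592080(63): all the heroic multi-loop work discussed below goes into nailing down the remaining decimal places.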

The QED contribution to the anomaly is in truth by far the largest, although, as I described yesterday, not the only one. It has been computed to incredible accuracy, up to what is called “four loop” level (diagrams with up to four loops of virtual particles): this is a result that took twenty years to achieve! Imagine sitting at a desk for twenty years, drawing thousands of complicated diagrams and computing them using fixed rules. An unforgiving, maddening task; but not a useless one: the four-loop QED contribution to the muon anomaly is six times larger than the current theoretical uncertainty! Without a complete four-loop computation, the muon anomaly would be utterly useless for our understanding of the Standard Model.

And the four-loop calculation is not the only hard piece as far as QED corrections go: once you fully compute diagrams to a given order, you have to estimate the contribution of the following order. The “five-loop” correction - a slight additive modification to the anomaly which attempts to account for the enormous number of five-loop diagrams - has by now been estimated with what appears to be adequate precision. So, the QED part of the theoretical estimate of the muon anomaly is not what worries theorists the most these days: the total QED uncertainty amounts to one part (in 10^{-11}, remember?), or one hundredth of a billionth.

The contribution from exchanges involving electroweak (EW) bosons is small: 195 parts at the one-loop level. It has been computed up to two loops - and this is actually the first time that two-loop electroweak corrections have been determined for any subatomic process. With the two-loop piece included, the total contribution to the muon anomaly from EW exchanges decreases to 154, with an error of only two parts.

Massimo pointed out here that those diagrams are indeed sensitive to the Higgs boson mass - we in fact know that the Higgs couples strongly to W and Z bosons. However, a variation of the Higgs mass from 114 GeV (the lowest value it may have, otherwise LEP would have discovered it already) to 300 GeV changes the electroweak correction to a_\mu almost imperceptibly - and in fact, these variations are included as a systematic in the two-parts-in-10^{-11} error.

Now, of course, as I was mentioning yesterday, the most problematic contribution is the hadronic one, the one due to QCD diagrams. Even the simplest sub-diagram, where a photon fluctuates into hadrons and then turns back into a photon, is not calculable with perturbative QCD! That is because the dominant contributions come from the lowest energies circulating in the virtual loop of hadrons: and the lower the energy, the harder things get with QCD, since \alpha_s becomes unmanageably large and perturbation series are no longer meaningful.

So what one does is to resort to formulas from the ’60s, based on the so-called optical theorem. This is not something I want to get into here; suffice it to say that one uses the measured cross section of e^+ e^- \to hadrons at very low energy. Those measurements are hard to use. Unfortunately, their contribution to the muon anomaly is large: 7000 parts! So, to match the experimental precision on a_\mu, the hadronic cross section has to be known with 1% precision or better, to keep its contribution to the overall error at 70 parts in 10^{-11}.
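For completeness - Massimo did not spell this out, so take this as my own transcription of the standard result - the leading-order hadronic contribution is computed from the dispersion integral a_\mu^{HLO} = (\alpha^2 / 3\pi^2) \int_{thr}^{\infty} (ds/s) K(s) R(s), where R(s) is the ratio of the e^+e^- \to hadrons and e^+e^- \to \mu^+\mu^- cross sections, and K(s) is a known QED kernel which falls off roughly like m_\mu^2/(3s). The 1/s weight times the falling kernel is precisely why the lowest energies dominate the integral.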

To determine the electron-positron cross section into hadrons, energy scans (beam collisions where the beam energy is varied in small increments, to study the dependence of physical processes on the center-of-mass energy) have been performed at Novosibirsk. They provide the best input for the hadronic contribution to the muon anomalous magnetic moment. There are other methods, though. In Frascati, the KLOE experiment, which collides electrons and positrons at the energy of creation of the \phi(1020), a 1-GeV resonance, uses events with a “radiative return” to obtain a determination of the hadronic cross section at lower energies. Radiative return means that the electron or positron emits a real photon before colliding, which carries away a sizable fraction of the center-of-mass energy. The collision thus happens at a lower energy, and one may compute the cross section for that value if - as KLOE does - one can measure the outgoing photon energy.
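A quick kinematic aside (my own back-of-the-envelope, assuming a single hard photon): if the photon carries energy E_\gamma in the center-of-mass frame, the squared invariant mass left to the hadronic system is s' = s - 2 \sqrt{s} E_\gamma, so measuring E_\gamma amounts to measuring the effective collision energy \sqrt{s'}.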

A third method involves studying the hadronic decays of tau leptons, \tau \to \nu_\tau \pi \pi. With some black magic called “isospin rotation” one can transform the decay parameters into a spectral function which… ok, I will leave the details for another time: the bottom line is that this black magic works, but there are subtleties which cast some doubt on the method. In particular, “isospin violations” - i.e., the imperfect conservation of the symmetry called isospin - may affect the result at the 1% level: just the precision we wanted to achieve.

It is unfortunate that the general consensus appears to be to discard the tau decay estimate of the hadronic contribution: that is because, if one were to use that method to estimate QCD contributions to the muon anomaly, the discrepancy we observe between experiment and theory -a 3.4 standard deviations difference- would almost totally go away! This can be seen by checking the last line in the table I pasted at the beginning of this mile-long post (it is line number 5 in the table).

A final contribution, which cannot be estimated from data because it has never been measured, is the so-called “light by light” scattering, shown in the graph on the right: three photons emitted by the muon line connect to a quark loop, the quarks do what strongly interacting particles do (exchange gluons through QCD interactions), and at the end they annihilate into a single photon, which carries the electromagnetic interaction of the muon. This appears at higher order in the QCD part of the calculation: no perturbative calculation is possible - the energy is too low - and this time not even measurements come to the rescue. So theoretical models are used, and they have a large uncertainty. Suffice it to say that the contribution of the light-by-light scattering diagram has changed sign four times in the last twenty years! Nowadays physicists agree on the sign of the contribution, and its size is of the order of 100 parts, so even a large relative uncertainty does not totally spoil the overall calculation of g-2. Still, this is ultimately what limits our present ability to predict the muon anomaly.

Once all is said and done, we have a discrepancy of 300 units between theory and experiment. The light-by-light scattering diagram is too small to be the single source of the disagreement. Other unforeseen QED or EW contributions are unthinkable. So, if we take the experimental measurement at face value, we have only two possibilities: either the difference is due to some new physics - new particles circulating in virtual loops, affecting the muon anomaly - or the hadronic calculation at leading order is wrong.
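Incidentally, the quadrature rule shown earlier is all one needs to reproduce the quoted significance; here is a toy Python check, where the theory error is taken in the 60-to-70-part range mentioned above (the exact figure depends on which hadronic input one uses):

from math import hypot

delta = 300.0      # theory-experiment discrepancy, in units of 10^-11
sigma_exp = 63.0   # the E821 experimental error
for sigma_th in (60.0, 70.0):  # range of theoretical errors quoted above
    significance = delta / hypot(sigma_exp, sigma_th)
    print(f"sigma_th = {sigma_th:.0f}: {significance:.1f} sigma")
# prints roughly 3.4 and 3.2 sigma, bracketing the number quoted in the text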

Massimo explained that this was the starting point of their analysis: what happens in the second case? Does this error have other consequences that may be visible elsewhere? We can change g-2 by changing the hadronic contribution, but this has consequences for other observable quantities.

The hadronic contribution we have been discussing in connection with a_\mu of course also enters the calculation of the fine structure constant, albeit in a less critical way. The value of \alpha(M_Z) - the value of the constant computed for processes where the relevant energy scale is the large mass of the Z boson - is one crucial input of the global electroweak fits to the Standard Model, which as we know have these days the ultimate goal of telling us not only that the Standard Model is alive and well - from the experimental standpoint, of course - but also what the heck the value of the Higgs boson mass really is.
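The connection, in a nutshell (standard running-coupling lore, in my own shorthand): the fine structure constant at the Z mass is \alpha(M_Z) = \alpha / (1 - \Delta\alpha), where \Delta\alpha contains a leptonic piece, computable to high precision, and a hadronic piece \Delta\alpha_{had}(M_Z) which is evaluated from the very same e^+e^- \to hadrons cross section data entering the g-2 calculation. Shift that cross section, and you shift \alpha(M_Z) too.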

Massimo showed in detail how a hypothetical modification of the hadronic cross section in different ranges of center-of-mass energy affects the fits. If we impose that the change in cross section “fixes” the discrepant value of the muon g-2, bringing it into agreement with the experimental determination, we modify \alpha(M_Z) as shown in the figure below.

In the graph you see that the higher the energy at which you modify the hadronic cross section enough to bring the muon anomaly into agreement, the larger the modification you have to make (because, for g-2, lower energies weigh more). This, in turn, produces a larger shift in \alpha(M_Z), the quantity shown on the y axis.

Now, using the modified values of the fine structure constant, Massimo can perform different global fits to the electroweak observables, and obtain in each case a different upper bound (at 95% confidence level) on the Higgs boson mass. This is shown in the figure below: on the y axis now there is the allowed range of Higgs masses. The lower bound is experimental and cannot be touched: 114.4 GeV. The upper bound in the unmodified case is 150 GeV, according to Massimo's fits. It gets smaller, however, as we consider modifications of the hadronic cross section in higher ranges of center-of-mass energy. Recall that the larger the energy, the more we have to change the cross section to bring the muon g-2 into agreement with experiment: this is why we get a larger and larger effect on the Higgs mass upper bound.

The plot also shows the upper bound one would get if one assumed as starting value of the hadronic contribution to g-2 the one obtained from tau hadronic decays: in that case the discrepancy in g-2 is smaller, so the correction to the estimated hadronic cross section can be smaller, and the effect is milder: the upper limit then moves around following the hatched red line. Note, though, that even in the unmodified case the upper bound on the Higgs mass is only 138 GeV, if we accept the tau hadronic “isospin rotation” black magic.

Now, all this is good and fancy, but we have to ask ourselves the question: is it realistic to move the hadronic cross section around in a given region of energy by several percent, to restore agreement with g-2 and thus obtain different results for the Higgs? In other words, do the low-energy electron-positron collider data allow these variations, or are they ruled out by the Novosibirsk determinations?

This is answered by yet another plot, shown below. Here, on the y axis you see the variation you are required to make in the hadronic cross section to fix a_\mu, while on the x axis you still have the energy bin where you apply it. On each bar, representing a possible modification of y% of the cross section, you can read off the resulting upper bound on the Higgs mass. Massimo warns that the smallest modifications in cross section that are effective are of the order of a few percent, and this is already stretching the experimental results of low-energy electron-positron colliders. However, the study shows that under those circumstances, even forgetting about the tension in the low-energy data, one would trade the agreement in g-2 for a restricted range of Higgs boson mass values, down to levels where the Higgs may live only in a very small region of parameter space.

In conclusion, the enlightening seminar by Massimo Passera taught me quite a few things. The 3.4 standard deviation discrepancy between theory and experiment in the value of a_\mu may be a signal of new physics, or a problem in the hadronic cross sections. Which is more likely? If you have read this blog long enough, you know my very own, personal answer.

Comments

1. Kea - June 12, 2008

Which is more likely ? If you’ve read this blog long enough, you know
my very own, personal answer.

In other words, you now suspect that the fairy particle will be found under 130GeV at the LHC! Hah, hah!

2. dorigo - June 12, 2008

Hmmm Kea, I was only putting the two alternatives on a scale - new physics modifying the g-2 by just a teeny-tiny amount that makes-us-wonder-what-is-going-on-but-not-completely-sure, and some unaccounted systematics in low-energy QCD measurements-which-by-the-way-do-not-agree-with-alternative-methods (tau decays).

Whether there is or not a Higgs, I think is another matter. But I do believe we will find one… It might not be what we thought it is, though.

Cheers,
T.

3. DB - June 12, 2008

That’s terrific. I’ve been following the muon anomaly for some years now and have long held a bias in favour of the hadronic uncertainties based on the tau data, but that’s the first time I’ve seen the connection to Higgs mass determinations. And I like the fact that the dotted red line (based on the tau data) remains completely within the current experimental and theoretical limits. So am I missing something if I conclude that the tau-picture doesn’t create “tension with low-energy data”?

I understand that it isn’t the intention of this blog to get into deep discussions over the nitty-gritty, but when you say that the general consensus is to discard the tau data, this is not reflected in the current (July 2007) Particle Data Group assessment:
“The discrepancy between the e+e− and τ-based determinations of a_\mu^{Had}[LO] is currently unexplained. It may be indicative of problems with one or both data sets. It may also suggest the need for additional isospin-violating corrections to the τ data. Forthcoming low-energy e+e− and τ data may help to resolve this discrepancy and should reduce the hadronic uncertainty.” See page 4 of

g-2_s004219.pdf

In Part 1 you said about the SM “This isn’t string theory”, but the ironic thing (and I’m no promoter of string theory) is that probably the one area where string theory seems to show some practical promise is precisely in getting a better handle on low-energy QCD behaviour. If this is really true, it would be an interesting demonstration if it could be successfully applied to this sort of problem. Perhaps a competent string theorist could comment on this.

Nitty gritty over! I think this article could be your best yet! You are doing a tremendous job!

4. The muon anomaly and the Higgs mass - part I « A Quantum Diaries Survivor - June 12, 2008

[…] This link will bring you to the second part of this post. […]

5. dorigo - June 12, 2008

Hi DB,

well, thank you very much for the encouragement. I thought the matter could be of interest to a wider audience than other typical posts, so I made a little extra effort at making this both readable and informative…

About the nitty gritty: when I said the consensus is nowadays to prefer direct cross section measurements to tau decay information, I am reporting Massimo’s statement. He does know the details of the theoretical debate on this issue of course, but he might have been putting some personal flavor in his talk…

As for string theory coming to the rescue in low-energy QCD, I am all in favor of it if it happens: that is really a “hic sunt leones” area for HEP nowadays. At a panel discussion at PPC2008 the question was asked of how to best invest one billion dollars to further our understanding of HEP and cosmology. After hearing about giant space telescopes, cosmic ray experiments and TeV-energy leptonic colliders, I was the only one who mentioned saving a fifth of the sum to build new experiments to understand low-energy hadronic interactions better!

Cheers,
T.

6. Predictions for SUSY particle masses! « A Quantum Diaries Survivor - September 2, 2008

[…] A more detailed discussion can be found in a report of a seminar by Massimo Passera on the topic, here and here. […]

