## Latest global fits to SM observables: the situation in March 2009

March 25, 2009

Posted by dorigo in news, physics, science.

A recent discussion in this blog between well-known theorists and phenomenologists, centered on the real meaning of the experimental measurements of top quark and W boson masses, Higgs boson cross-section limits, and other SM observables, convinces me that some clarification is needed.

The work has been done for us: there are groups that do exactly that, i.e. updating their global fits to express the internal consistency of all those measurements, and the implications for the search for the Higgs boson. So let me go through the most important graphs below, after mentioning that most of the material comes from the LEP electroweak working group web site.

First of all, what goes into the soup? Many things, but most notably the LEP I/SLD measurements at the Z pole, the top quark mass measurements by CDF and DZERO, and the W mass measurements by CDF, DZERO, and LEP II. Let us have a look at the mass measurements, which have recently been updated.

For the top mass, the situation is the one pictured in the graph shown below. As you can clearly see, the CDF and DZERO measurements have reached a combined precision of 0.75% on this quantity.

The world average is now at $M_t = 173.1 \pm 1.3$ GeV. I am amazed to see that the first estimate of the top mass, based on a handful of events published by CDF in 1994 (a set which did not even provide a conclusive “observation-level” significance at the time) was so dead-on: the measurement back then was $M_t = 174 \pm 15$ GeV! (For comparison, the DZERO measurement of 1995, in their “observation” paper, was $M_t = 199 \pm 30$ GeV.)
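Incidentally, the way such measurements are combined -ignoring correlations, which the real world averages do account for- is just an inverse-variance weighted average. A minimal Python sketch, using the two historical 1994-1995 determinations quoted above:

```python
import math

def weighted_average(measurements):
    """Inverse-variance weighted average of (value, sigma) pairs,
    assuming uncorrelated Gaussian uncertainties."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    mean = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
    sigma = 1.0 / math.sqrt(sum(weights))
    return mean, sigma

# CDF 1994 and DZERO 1995 top-mass estimates (GeV)
m, s = weighted_average([(174.0, 15.0), (199.0, 30.0)])
print(f"M_t = {m:.1f} +- {s:.1f} GeV")  # -> M_t = 179.0 +- 13.4 GeV
```

The more precise measurement dominates: CDF's 15 GeV error gets four times the weight of DZERO's 30 GeV one.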

As far as global fits are concerned, there is one additional point to make for the top quark: knowing the top mass any better than this has become, by now, useless. You can see it by comparing the constraints on $M_t$ coming from the indirect measurements and W mass measurements (shown by the blue bars at the bottom of the graph above) with the direct measurements at the Tevatron (shown with the green band). The green band is already much narrower than the blue error bars: by now, the SM fit does not care much where exactly the top mass lies within it.

Then, let us look at the W mass determinations. Note, the graph below shows the situation BEFORE the latest DZERO result, obtained with 1/fb of data, which finds $M_W = 80401 \pm 44$ MeV; its inclusion would not change much of the discussion below, but it is important to stress it.

Here the situation is different: a better measurement would still increase the precision of our comparisons with indirect information from electroweak measurements at the Z. This is apparent from the fact that the blue bars are still narrower than the green band of the world average of direct measurements. Narrow the green band, and you can still collect interesting information on its consistency with the blue points.

Finally, let us look at the global fit, which the LEP electroweak working group displays in the by now famous “blue band plot”, shown below in its version for the March 2009 conferences. It shows the constraints on the Higgs boson mass coming from all experimental inputs combined, assuming that the Standard Model holds.

I will not discuss this graph in detail, since I have done so repeatedly in the past. I will just mention that the yellow regions have been excluded by direct searches for the Higgs boson at LEP II (on the left, the wide yellow area) and the Tevatron (the narrow strip on the right). From the plot you should just gather that a light Higgs mass is preferred (the central value being 90 GeV, with +36 and -27 GeV one-sigma error bars). Also, a 95% confidence-level exclusion of masses above 163 GeV is implied by the variation of the global fit $\chi^2$ with Higgs mass.

I have started to be a bit bored by this plot, because it does not do the best job for me. For one thing, the LEP II limit and the Tevatron limit on the Higgs mass are treated as if they were equivalent in their strength, something which could not possibly be farther from the truth. The truth is, the LEP II limit is a very strong one -the probability that the Higgs has a mass below 112 GeV, say, is one in a billion or so- while the limit obtained recently by the Tevatron is just an “indication”, because the excluded region (160 to 170 GeV) is not excluded strongly: there is still a one-in-twenty chance or so that the real Higgs boson mass indeed lies there.

Another thing I do not particularly like in the graph is that it attempts to pack in too much information: variations of $\alpha$, inclusion of low-$Q^2$ data, etcetera. A much better graph to look at is the one produced by the Gfitter group instead. It is shown below.

In this plot, the direct search results are introduced with their actual measured probability of exclusion as a function of Higgs mass, and not just in a digital manner, yes/no, as the yellow regions in the blue band plot. And in fact, you can see that the LEP II limit is a brick wall, while the Tevatron exclusion acts like a smooth increase in the global $\chi^2$ of the fit.

From the black curve in the graph you can get a lot of information. For instance, the most likely values, those that globally have a 1-sigma probability of being one day proven correct, are masses contained in the interval 114-132 GeV. At two-sigma, the Higgs mass is instead within the interval 114-152 GeV, and at three sigma, it extends into the Tevatron-excluded band a little, 114-163 GeV, with a second region allowed between 181 and 224 GeV.

In conclusion, I would like you to take away the following few points:

• Future indirect constraints on the Higgs boson mass will only come from increased precision measurements of the W boson mass, while the top quark has exhausted its discrimination power;
• Global SM fits show an overall very good consistency: there does not seem to be much tension between fits and experimental constraints;
• The Higgs boson is most likely in the 114-132 GeV range (1-sigma bounds from global fits).

## New bounds for the Higgs: 115-135 GeV!

August 1, 2008

Posted by dorigo in news, physics, science.

From yesterday’s talk by Johannes Haller at ICHEP 2008, I post here today two plots showing the latest results of global fits to standard model observables, highlighting the Higgs mass constraints. The first only includes indirect information, the second also includes direct search results.

The above plot is tidy, yet the amount of information that the Gfitter group digested to produce it is gigantic: decades of studies at electron-positron colliders, precision electroweak measurements, W and top mass determinations. Probably of the order of fifty thousand man-years of work, distilled and summarized in a single, useless graph.

Jokes aside, the plot does tell us a lot. Let me try to discuss it. The graph shows the variation from its minimum value of the fit $\chi^2$ -the standard quantity describing how well the data agree with the model- as a function of the Higgs boson mass, treated as a free parameter. The fit prefers an 80 GeV mass for the Higgs boson, but the range of allowed values is still broad: at 1-sigma, the preferred range is 57-110 GeV. At 2-sigma, the range is of course even wider, from 39 to 156 GeV. If we keep the two-sigma variation as a reference, we note that the $H \to WW$ decay is not likely to be the way by which the Higgs will be discovered.

Also note that the LEP II limits have not been inserted in the fit: in fact, the 114 GeV lower limit is hatched but has no impact on the curve, which is smooth because it is unaffected by direct Higgs searches.

Take a look instead at the plot below, which attempts to summarize the whole picture by including in the fit the direct search results at LEP II and at the Tevatron (without the latest results, however).

This is striking new information! I will only comment on the yellow band, which -like the one in the former plot- describes the deviation of the log-likelihood ratio between the data and the signal-plus-background hypothesis. If you do not know what that means, fear not. Let’s disregard how the band was obtained and concentrate instead on what it means. It is just a measure of how likely it is that the Higgs mass sits at a particular value, given all the information from electroweak fits AND the direct search results, which have in various degrees “excluded” (at 95% confidence level) or made less probable (at 80%, 90% CL or below) specific Higgs mass values.

In the plot you can read off that the preferred range of Higgs masses is now quite narrow: $M_H = 120^{+15}_{-5}$ GeV. That is correct: the lower 1-sigma error is very small because the LEP II limit is very strong, while the upper side is much less constrained. Still, the above 1-sigma band is very bad news for the LHC: it implies that the Higgs is very unlikely to be discovered soon. That is because at low invariant mass ATLAS and CMS need to rely on very tough discovery channels: the very rare $H \to \gamma \gamma$ decay (one in a thousand Higgses decays that way) or the even more problematic $H \to \tau \tau$ decay. Not to mention the $H \to b \bar b$ final state, which can be extracted only when the Higgs is produced in association with other bodies, and still with huge difficulties, given the prohibitive amount of backgrounds from QCD processes mimicking the same signature.

The 2-sigma range is wider but not much prettier: 114.4-144 GeV. I think this really starts to be a strong indication after all: the Higgs boson, if it exists, is light! And probably close to the reach of LEP II. Too bad for the LEP II experiments -which I dubbed “a fiasco” in a former post to inflame my readers for once. In truth, LEP II appears likely to one day turn out to have been very, very unlucky!

## Massimo Passera: the muon anomaly and the Higgs mass – part II

June 11, 2008

Posted by dorigo in physics, science.

Massimo Passera is an esteemed colleague in Padova. Our Physics department is not very big, but if one is immersed in one’s own work, the activities going on around are easy to overlook. In fact, I only became aware of Massimo’s recent study by checking the arXiv for recent phenomenological papers -funny, if you think our offices are only a 30-second walk from one another.

I thus asked him to give a short review of the results of his work at the CMS-Padova analysis meeting which I chair monthly with my colleague Ezio Torassa. The analysis he performed has important implications for Higgs searches in CMS, and more generally it sheds light on the details of the electroweak fits that check the internal consistency of the standard model. Massimo kindly agreed, and yesterday he presented his review.

In the first part of this post I gave some sort of background information on the physics of the muon anomaly, at a level which I hoped would keep non-physicist readers alive through the end. That first part now helps me to write a summary of Massimo’s talk in a way which is hopefully both understandable and lightweight. Let us see what I manage below, where I make use of some concepts I explained yesterday.

Massimo started by giving some background information on the measurements of anomalous magnetic moments for leptons. For electrons, the anomaly is measured with exquisite precision, and it provides us with a precise estimate of the fine structure constant $\alpha$. The tau anomaly, on the other hand, is very hard to measure, and so comparisons with theory -which is much more advanced in this case- are not meaningful.

For the muon, the experimental uncertainty has reached down to $\pm 63 \times 10^{-11}$: here the ball is in the court of theorists, who have to lower the uncertainty on their prediction if they want to challenge the experimental accuracy. Here are the numbers, for reference: experiment E821 finds $a_\mu = (116592080 \pm 63) \times 10^{-11}$ (PRD73 (2006) 072003); while theory predicts different things depending on the method, as we will discuss below:

The first numbers in the table above are those to which most credit is given nowadays, so one can really talk about a 3.something-standard-deviation discrepancy between theory and experiment.

It is to be noted that the muon anomaly, measured so far with the best precision at Brookhaven in a dedicated experiment, could see the experimental uncertainty decrease to $\pm 25$ (units of $10^{-11}$ will be assumed throughout this post in the following) if a proposed experiment, E969, could see the light. The jury is still out on whether to fund that experiment, or even a more challenging one called “Legacy”, which would bring the error down further.

Theorists are not ready to compare their result with a muon anomaly experimentally determined with the accuracy promised by E969 or Legacy: the reason is that their current estimates have errors in the range of 60 or 70 units of $10^{-11}$, as shown above (numbers in parentheses are the uncertainties). And when you compare two numbers, their difference -which ultimately tells us whether they agree or whether it is necessary to hypothesize unforeseen effects that make them different- is plagued with its own uncertainty: if $\Delta = a_{exp}-a_{th}$ is the difference between the experimental and theoretical determinations of the anomaly, the error on $\Delta$ is simply $\sigma_\Delta = (\sigma_{exp}^2 + \sigma_{th}^2)^{0.5}$, the sum in quadrature of the two uncertainties. Now, it is easy to see that if one of the two contributions is much larger than the other, there is little gain in decreasing the smaller one.

A numerical example will clarify this point: if $A_1=100 \pm 10$ and $A_2 = 140 \pm 17.5$, their difference is $\Delta = A_2-A_1 = 40 \pm (10^2+17.5^2)^{0.5} = 40 \pm 20$: something we address colloquially as “a two-sigma effect”, meaning that the number is different from zero at the level of two standard deviations (40/20=2). Now imagine you halve the error on $A_1$: your effort will not repay you with much more insight into whether the two numbers differ, because the uncertainty on the difference will go from $\pm 20$ to $\pm (5^2+17.5^2)^{0.5}$, or $\pm 18$: your understanding of the value of $\Delta$ has progressed by just 10%! Now go back to your funding agency and explain that!
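The same arithmetic, spelled out in a few lines of Python (just reproducing the numbers of the example above):

```python
import math

def diff_with_error(a1, s1, a2, s2):
    """Difference of two independent measurements and its
    uncertainty, obtained by adding the errors in quadrature."""
    return a2 - a1, math.hypot(s1, s2)

d, sd = diff_with_error(100.0, 10.0, 140.0, 17.5)
d2, sd2 = diff_with_error(100.0, 5.0, 140.0, 17.5)   # error on A_1 halved
print(d, sd)    # ~ 40 +- 20.2
print(d2, sd2)  # ~ 40 +- 18.2: only a ~10% gain on the error
```

Halving the smaller error barely moves the quadrature sum, which is the whole point of the funding-agency joke.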

After this introduction on the numbers we have to play with if we are to analyze the status of g-2 in the Standard Model, Massimo gave a short didactical review of the physics of the anomalous magnetic moment. I discussed the matter in the former post, so I will just note here that the issue of g being different from 2 is a problem which is exactly 60 years old: it was Schwinger, in 1947 (though he published in 1948), who first realized that quantum corrections affected its value. Theorists have continued increasing the precision of their calculation of the anomaly due to quantum electrodynamical (QED) interactions during all this time, and their advancement is spectacular.
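Schwinger's original one-loop correction is the famous $a = \alpha/2\pi$, and a quick numerical check shows how dominant that single term is:

```python
import math

alpha = 1.0 / 137.035999   # fine structure constant (approximate value)
a_schwinger = alpha / (2.0 * math.pi)   # Schwinger's one-loop term
print(f"a = alpha/(2*pi) = {a_schwinger:.9f}")
# This single term is already about 99.6% of the measured
# a_mu = 116592080 x 10^-11 quoted above; all the higher-order
# corrections fight over the last few permille.
```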

The QED contribution to the anomaly is in truth by far the largest, although, as I described yesterday, not the only one. It has been computed to incredible accuracy, up to what is called “four-loop” level (diagrams with up to four loops of virtual particles): this is a result that took twenty years to achieve! Imagine sitting at a desk for 20 years, drawing thousands of complicated diagrams and computing them using fixed rules. An unforgiving, maddening task; but not a useless one: the four-loop QED contribution to the muon anomaly is six times larger than the current theoretical uncertainty! Without a complete four-loop computation, the muon anomaly would be utterly useless for our understanding of the Standard Model.

And the four-loop calculation is not the only hard piece as far as QED corrections go: once you fully compute diagrams to a given order, you have to estimate the contribution of the following order. The “five-loop” correction -a small additive modification to the anomaly which attempts to account for the enormous number of diagrams including five fermion loops- has by now been estimated with what appears to be great precision. So, the QED part of the theoretical estimate of the muon anomaly is not what worries theorists the most these days: the total QED uncertainty amounts to one unit (in $10^{-11}$, remember?), or one hundredth of a billionth.

The contribution from exchanges involving electroweak bosons (EW) is small: 195 units at the one-loop level. But it has been computed up to two loops -the first time that two-loop electroweak corrections have been determined for any subatomic process. With that correction, the total contribution to the muon anomaly from EW exchanges decreases to 154, with an error of only two units.

Massimo pointed out here that those diagrams are indeed sensitive to the Higgs boson mass -we in fact know that the Higgs couples strongly to W and Z bosons. But a variation of the Higgs mass from 114 GeV (the lowest value it may have, otherwise LEP would have discovered it already) to 300 GeV changes the electroweak correction to $a_\mu$ almost imperceptibly; these variations are in fact included as systematics in the two-unit error.

Now, of course, as I was mentioning yesterday, the most problematic contribution is the hadronic one, due to QCD diagrams. Even the simplest sub-diagram, where a photon fluctuates into hadrons and then returns to being a photon, is not calculable with perturbative QCD! That is because the dominant contributions come from the lowest energies circulating in the virtual loop of hadrons: and the lower the energy, the harder it gets for QCD, since $\alpha_s$ becomes unmanageably large and perturbation series are not meaningful.

So what one does is to take formulas from the ’60s -the so-called optical theorem. This is not something I want to get into, but suffice it to say that one uses the measured cross section of $e^+ e^- \to$ hadrons at very low energy. Those measurements are hard to use, and unfortunately their contribution to the muon anomaly is large: 7000 units! So to match the experimental precision on $a_\mu$, it has to be known with 1% precision or better, to keep the overall error at 70 units.

To determine the electron-positron cross section into hadrons, energy scans (beam collisions where the beam energy is varied in small increments to study the dependence of physical processes on the center-of-mass energy) have been made at Novosibirsk. They provide the best input for the hadronic contribution to the muon anomalous magnetic moment. There are other methods: in Frascati the KLOE experiment, which collides electrons and positrons at the energy of creation of the $\phi(1020)$, a 1-GeV resonance, uses events with a “radiative return” to obtain a determination of the hadronic cross section at lower energies. Radiative return means that the electron or positron emits a real photon before colliding, which carries away a sizable fraction of the center-of-mass energy. The collision thus happens at a lower energy, and one may compute the cross section for that value, if -as KLOE does- one can measure the outgoing photon energy.

A third method involves studying the hadronic decays of tau leptons, $\tau \to \nu_\tau \pi \pi$. With some black magic called “isospin rotation” one can transform the decay parameters into a spectral function which… Ok, I leave the details to another time: the bottom line is that this black magic works, but there are subtleties which cast some doubts on this method. In particular, “isospin violations” -i.e. non-perfect conservation of the symmetry called “isospin”- may affect the result at the 1% level: just the precision we wanted to achieve.

It is unfortunate that the general consensus appears to be to discard the tau decay estimate of the hadronic contribution: that is because, if one were to use that method to estimate QCD contributions to the muon anomaly, the discrepancy we observe between experiment and theory -a 3.4 standard deviations difference- would almost totally go away! This can be seen by checking the last line in the table I pasted at the beginning of this mile-long post (it is line number 5 in the table).

A final contribution, which cannot be constrained by data because it has never been measured, is the so-called “light by light” scattering, shown in the graph on the right: three photons emitted by the muon line connect to a quark loop, the quarks do what strongly interacting particles do (exchange gluons by QCD interactions), and at the end they annihilate, vanishing into a single photon which carries the electromagnetic interaction of the muon. This appears at higher order in the QCD part of the calculation: no perturbative calculations are possible -the energy is too low- and this time not even any measurements come to the rescue. So theoretical models are used, and they have a large uncertainty. Suffice it to say that the contribution of the light-by-light scattering diagram has changed sign four times in the last twenty years! Nowadays, the sign of the contribution finds physicists in agreement, but its size is of the order of 100 units: even a large relative uncertainty does not totally spoil the overall calculation of g-2. However, this is ultimately the limitation on our present capability of predicting the muon anomaly.

Once all is said and done, we have a discrepancy of 300 units between theory and experiment. The light-by-light scattering diagram is too small to be the single source of the disagreement. Other unforeseen QED or EW contributions are unthinkable. So, if we take the experimental measurement at face value, we have only two possibilities: either the difference is due to some new physics -new particles circulating in virtual loops, affecting the muon anomaly- or the hadronic calculation at leading order is wrong.
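Just to see where the 3.4-standard-deviation figure quoted earlier comes from, one can redo the division explicitly. The theory error below (61 units) is an illustrative value picked from the 60-70 range mentioned above; the exact number depends on which hadronic estimate one adopts:

```python
import math

delta = 300.0      # a_exp - a_th, in units of 10^-11
sigma_exp = 63.0   # E821 experimental error
sigma_th = 61.0    # assumed theory error (illustrative, see text)

sigma_delta = math.hypot(sigma_exp, sigma_th)   # errors in quadrature
print(f"significance = {delta / sigma_delta:.1f} sigma")  # -> ~3.4 sigma
```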

Massimo explained that this was the starting point of their analysis: what happens in the second case? Does this error have other consequences that may be visible elsewhere? We can change g-2 by changing the hadronic contribution, but this has consequences for other observable quantities.

The hadronic contribution we have been discussing with regard to $a_\mu$ of course also impacts the calculation of the fine structure constant, albeit in a less critical way. The value of $\alpha(M_Z)$ -the value of the constant calculated for processes where the relevant energy scale is the large mass of the Z boson- is one crucial input in electroweak global fits to the Standard Model, which as we know have these days the ultimate goal of telling us not only that the Standard Model is alive and well -from the experimental standpoint, of course- but also to suggest what the heck the value of the Higgs boson mass really is.

Massimo showed in detail how a hypothetical modification of the hadronic cross section in different ranges of center-of-mass energy affects the fits. If we impose that the change in cross section “fixes” the discrepant value of the muon g-2, bringing it into agreement with the experimental determination, we modify $\alpha(M_Z)$ as shown in the figure below.

In the graph you see that the higher the energy where you modify the hadronic cross section enough to bring agreement in the muon anomaly, the larger the modification you have to make (because, for g-2, lower energies weigh more). This in turn modifies more the value of $\alpha$, the value shown on the y axis.

Now, using the modified values of the fine structure constant, Massimo can perform different global fits to electroweak observables, and obtain in every case a different upper bound (at 95% of confidence level) on the Higgs boson mass. This is shown in the figure below: on the y axis now there’s the allowed range of Higgs masses. The lower bound is experimental and cannot be touched: 114.4 GeV. The upper bound in the unmodified case is 150 GeV according to Massimo’s fits. This, however, gets smaller as we consider modifications to the hadronic cross section in different ranges of center-of-mass energy. Recall that the larger the energy, the more we have to change the cross section to bring g-2 of the muon in agreement with experiment: this is why we get a larger and larger effect on the Higgs mass upper bound.

The plot also shows the upper bound one would get if one assumed the starting value of the hadronic contribution to g-2 as the one obtained from tau hadronic decays: in that case, the discrepancy in g-2 is smaller, so the correction in the estimated hadronic cross section can be smaller, and the effect is milder: the upper limit moves around following the hatched red line in that case. Note, though, that even in the unmodified case the upper bound on the Higgs mass is only 138 GeV, if we accept the tau hadronic “isospin rotation” black magic.

Now, all this is good and fancy, but we have to ask ourselves the question: is it realistic to move the hadronic cross section in a given region of energy around by several percent, to bring agreement with g-2 and thus obtain different results for the Higgs? In other words, do the low-energy electron-positron collider data allow these variations, or are they ruled out by the Novosibirsk determinations?

This is answered by just another plot, shown below. Here, on the y axis you see the variation you are required to make in the hadronic cross section to fix $a_\mu$, while on the x axis you still have the energy bin where you apply it. On each bar, representing a possible modification of the cross section by y%, you can read off the resulting upper bound on the Higgs mass. Massimo warns that the smallest modifications in cross section that are effective are of the order of a few percent, and this already stretches the experimental results of low-energy electron-positron colliders. However, the study shows that under those circumstances, even forgetting about the tension in low-energy data, one would trade the agreement in g-2 for a restricted range of mass values of the Higgs boson, down to levels where the Higgs may live only in a very small region of parameter space.

In conclusion, the enlightening seminar by Massimo Passera taught me quite a few things. The 3.4 standard deviation between theory and experiment in the value of $a_\mu$ may be a signal of new physics or a problem in hadronic cross sections. Which is more likely ? If you’ve read this blog long enough, you know