## The 1999/2003 Higgs predictions compared with CDF 2009 results

February 13, 2009

Posted by dorigo in news, personal, physics, science.

Two years ago I used the combined Higgs search limits produced by the D0 experiment to evaluate how well the Tevatron was doing compared with the predictions that had been put together by the 1999 SUSY-HIGGS working group, and later by the 2003 Higgs Sensitivity Working Group (HSWG), two endeavours in which I had participated with enthusiasm. The picture that emerged was that, although results were falling short of fully justifying the early predictions, there was still hope that those would one day be vindicated.

Indeed, I remember that when the HSWG produced its report in 2003, we felt our results were greeted with a dose of scepticism. And we ourselves were a bit embarrassed, because we knew we had been a bit optimistic in our predictions: however, that was the name of the game – looking at things on their bright side, for the sake of convincing funding agencies that the Tevatron had a reason to run for a long time. I felt a strong justification for being optimistic in the incredible results on the top quark mass that the Tevatron had already started achieving: early prospects of measuring the top mass to a 1% uncertainty have in fact been surpassed by the combination of the dedication of the scientists doing the analyses, and their imagination in inventing new precise methods.

We now have a chance to look back at the 1999/2003 predictions for the Higgs reach of the Tevatron with a rather solid set of hard data: the CDF combination, which I briefly discussed two days ago, is based on analyzed sets of data ranging from 2 to 3 inverse femtobarns, and the comparisons do not require a lot of extrapolations to be carried out.

If we look at the 1999/2003 predictions shown above (two basically coincident results, once one considers that the 2003 results did not account for systematic effects, which would worsen the sensitivity curves a bit and bring them to match the older ones), we can read off the integrated luminosity that the Tevatron experiments needed to analyze in order to exclude, by combining their results, SM Higgs production at 95% confidence level. These numbers are as follows: for a Higgs mass of 100 GeV, 1/fb was considered sufficient; for a Higgs mass of 120 GeV, 2/fb were needed; 10/fb at 140 GeV; 4.5/fb at 160 GeV; 8/fb at 180 GeV; and 80/fb at 200 GeV. You can check them on the purple band in the graph above.

Now, let us take the actual expected limits by CDF with the analyses and the data they have based their new result upon (using expected limits rather than observed ones is correct, since the former are unaffected by statistical fluctuations). At 100 GeV, CDF has a reach in the 95%CL limit at 2.63xSM; at 120 GeV, the reach is 3.72xSM; at 140 GeV, 3.61xSM; at 160 GeV it is 1.75xSM; at 180 GeV 3.02xSM; and at 200 GeV, the reach is at 6.33xSM.

(Below, the 2009 combined CDF limits are shown by the thick red curve; the data I list above is based on the hatched curve instead, which shows the expected limit.)

How do we now compare these sets of numbers?

Easy. As easy as 1-2-3-4 (well, not too easy, but that’s how it goes).

1. We first scale up by a factor of two the 1999/2003 luminosity numbers needed for a 95% CL exclusion, which we listed above. We thus get, for Higgs masses ranging from 100 to 200 GeV in 20-GeV steps, needed integrated luminosities of 2, 4, 20, 9, 16, and 160/fb.
2. Then, we take the actual luminosity used by CDF for the analyses that have been combined to yield the expected limits listed above. This is slightly tricky, since the combination includes analyses which have used 2.0/fb of data (the $H \to \tau \tau$ search), 2.1/fb (the $VH \to ME_T b \bar b$ search), 2.7/fb (the $WH \to l \nu b \bar b$, the $ZH \to ll b \bar b$, and the $WH \to WWW$ searches), and 3.0/fb (the $H \to WW$ search). In principle, we should weight those numbers with the relative sensitivity of the various analyses, but we can approximate it by taking an “average effective luminosity” of 2.4/fb for the 100 GeV Higgs search, 2.7/fb for the 120 and 140 GeV points, and 3.0/fb for the high-mass searches. This is appropriate, since the $H \to WW$ search starts kicking in above 140 GeV.
3. We now have all the numbers we need: we divide the luminosity needed by one experiment according to the 1999/2003 study, found at point 1 above, by the effective luminosities found at point 2, and take the square root of that ratio: this yields the “reduction factor” in sensitivity that the actual CDF data suffers with respect to the data needed to exclude the Higgs boson. We find reduction factors of 0.91, 1.22, 2.72, 1.73, 2.31, and 7.30 for Higgs masses of 100, 120, 140, 160, 180, and 200 GeV respectively.
4. Now we are done. We can compare the “times the SM” limits of CDF with the numbers found at point 3 above. The ratio of the two says how much worse CDF is doing with respect to predictions, for each mass point. We find that CDF is doing 2.88 times worse than predictions at 100 GeV; 3.06 times worse than predictions at 120 GeV; 1.33 times worse at 140 GeV; 1.01 times worse at 160 GeV; 1.31 times worse at 180 GeV; and 0.87 times worse (i.e., 1.15 times better!) at 200 GeV.
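The four steps above can be condensed into a short calculation. Here is a quick Python sketch – not part of the original analysis, just a back-of-the-envelope reproduction using the numbers quoted in the post:

```python
import math

# All numbers below are copied from the post.
masses = [100, 120, 140, 160, 180, 200]            # Higgs mass, GeV
lumi_needed = [1.0, 2.0, 10.0, 4.5, 8.0, 80.0]     # 1999/2003 combined-exclusion luminosity, /fb
cdf_limit = [2.63, 3.72, 3.61, 1.75, 3.02, 6.33]   # CDF expected 95% CL limit, x SM
lumi_used = [2.4, 2.7, 2.7, 3.0, 3.0, 3.0]         # effective analyzed luminosity, /fb

def shame_factor(l99, limit, lused):
    """Steps 1-4: scale the needed luminosity to one experiment (x2),
    compute the sensitivity reduction factor sqrt(L_needed / L_used),
    and return the ratio of the CDF limit to that factor."""
    reduction = math.sqrt(2 * l99 / lused)   # steps 1 and 3
    return limit / reduction                 # step 4

for m, l99, lim, lused in zip(masses, lumi_needed, cdf_limit, lumi_used):
    print(f"{m} GeV: {shame_factor(l99, lim, lused):.2f}x worse than predicted")
```

Running it reproduces the six ratios listed at point 4, from 2.88 at 100 GeV down to 0.87 at 200 GeV.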

The results of point 4 are plotted on the graph shown above, where the x-axis shows the Higgs mass, and the y-axis this “shame factor”. I have given a 20% uncertainty to the figures I computed, because of the rather rough way I extracted the numbers from the 1999/2003 prediction graph. If you look at the graph, you notice that the CDF experiment has kept its (our!) promise (points bouncing around a ratio of 1.0) with its high-mass searches, while the low-mass searches are still a bit below expectations in terms of reach (3x worse than expected). It is not a surprise: at low Higgs mass, the searches have to rely on the $H \to b \bar b$ final state, which is very difficult to optimize (vertex b-tagging, dijet mass resolution, and lepton acceptance are the three things on which CDF has spent hundreds of man-years in the last decade). Give CDF (and DZERO) enough time, and those points will get down to 1.0 too!

1. Kea - February 14, 2009

Sheesh. How on earth do you find so much enthusiasm for studying fairies?

2. C - February 14, 2009

Dear T.

Can you explain more about the LEP exclusion limits? Do they refer to any type of Higgs or only to those with SM like couplings. Please expand on that.
Thanks.
C.

3. dorigo - February 14, 2009

Well Kea, in truth what we are studying with the CDF data is WW, WZ, Drell-Yan, QCD, W and Z production… All these things are ingredients of the exclusion of the Higgs. So it is not just that…

Hi C,

yes, this is only valid for a SM Higgs. The MSSM Higgs has a production rate which depends on the point of the model you consider, with a much higher dimensionality than in the SM case. For this reason, when one shows exclusion limits for the MSSM, one excludes points in a plane (specifying the values of a few other parameters to which the graph applies).

As for LEP: they studied electron-positron collisions at up to 208 GeV of c.m. energy, and could not see a solid signal. Now, the Higgs in ee collisions can be produced in three ways, but the dominant one is associated production with a Z boson. The Z weighs 91 GeV, and has a sizable width. To produce a Higgs boson together with it, the Higgs cannot be heavier than the collision energy minus the Z mass, roughly. They ended up excluding that the Higgs was lighter than 114.4 GeV at 95%CL: they were limited by the energy of the collisions more than by the integrated data size. With infinite statistics LEP could have gone a bit further, but the reach was dominated by the beam energy.
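In numbers, the kinematic limit works out to roughly $m_H \lesssim \sqrt{s} - m_Z \simeq 208 - 91 = 117$ GeV, just above the 114.4 GeV exclusion: the Z width and the falling cross section near threshold eat up the last few GeV.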

Cheers,
T.

4. Too Many Topics « Not Even Wrong - February 15, 2009

[…] will manage to see the Higgs or rule it out, see excellent postings by Tommaso Dorigo here and here. The bottom line is that by the time the LHC has enough data to start saying something about the […]

5. Hatim - February 16, 2009

Tommaso, as usual, I always learn from what you say, please accept my sincere thanks for the effort you put here. Yes, I agree it all depends on our efforts to improve the techniques we use in our hunt for the Higgs boson but I also believe we need some luck!

6. Daniel de França MTd2 - February 16, 2009

Hi Tommaso,

When will the analysis of the 10pb combined results be published?

7. Daniel de França MTd2 - February 16, 2009

What is the forecasted luminosity for Atlas and CMS in the initial run, this year, for the 100-200 GeV interval?

8. dorigo - February 16, 2009

Daniel, 10/fb, not 10/pb… It depends on when that much data will be collected! It is foreseen that the Tevatron will deliver a total of 10/fb by the end of 2010.

Atlas and CMS will start this fall, and will run for one year straight, probably at 8 or 10 TeV. They are expected to collect of the order of 100/pb each.

Cheers,
T.

9. Daniel de França MTd2 - February 16, 2009

100pb?? You mean, 10 000 more than Tevatron?

10. dorigo - February 16, 2009

no, I mean 100 times less.
T.

11. dorigo - February 16, 2009

… Maybe if you are confused you should read this:
https://dorigo.wordpress.com/2006/08/28/luminosity-and-cross-section/

In summary: 100/pb = 0.1 x 1/fb = 0.01 x 10/fb.
The LHC will start slow… The higher reach comes from the fact that the energy will be four- or five-fold higher, which means a large increase in the Higgs boson cross-section. That means more bang for the buck – more Higgs for a given integrated luminosity.

Cheers,
T.

12. Daniel de França MTd2 - February 16, 2009

How much is the increase?

13. dorigo - February 16, 2009

Daniel, increase of what? If you are not precise in your questions, you will never get precise answers.

Cheers,
T.

14. Daniel de França MTd2 - February 17, 2009

I’m sorry, the increase of the cross section of the Higgs.

15. 后来…We will win later… » Blog Archive » About Science-Nature - Race for ‘God particle’ heats up - February 17, 2009

[…] if compared with the predictions that had been put together by the 1999 SUSY-HIGGS working group, Read More|||During the last year, the Tevatron particle accelerator at Fermi National Accelerator Laboratory […]

19. Tony Smith - February 17, 2009

A 17 February 2009 BBC article by James Morgan said:
“… Fermilab … Dr Dmitri Denisov … said … “We now have
a very, very good chance that we will see hints of the Higgs
before the LHC will … I think we have the next two years to find it,
based on the start date Lyn Evans has told us.
And by that time we expect to say something very strong.
The probability of our discovering the Higgs is very good
– 90% if it is in the high mass range.
And the chances are even higher – 96% – if its mass is around 170GeV (giga-electron volts).
In that case we would be talking about seeing hints of the Higgs by this summer.”

The smaller the mass of the particle, the more difficult and time-consuming it will be for Fermilab to detect. …
But even at the lowest end of the range, the chances are “50% or above”, according to … Fermilab … Director Pier Oddone.

Professor Lyn Evans, LHC project leader …[said]… “The race is on … The Tevatron is working better than I ever imagined it could. They are accumulating data like mad. The setback with the LHC has given them an extra time window. And they certainly will make the most of it. … I think it’s unlikely they will find it before the LHC comes online. They may well be in a position to get a hint of the Higgs but I don’t think they’ll be in a position to discover it.
And of course, if it’s not in the mass range they think it is, they have no chance of discovering it at all. Pier Oddone put the odds at 50-50 but I think it’s less than that. In one year, we will be competitive. After that, we will swamp them.”…”.

Tony Smith

20. carlbrannen - February 25, 2009

Tommaso, an interesting article on the problem of replicating results published in peer reviewed articles. While they are mostly discussing subjects in other fields (economics), it does seem appropriate in terms of particle physics.

21. dorigo - February 25, 2009

interesting article indeed, Carl. Thanks for pointing it out. I think that in particle physics we are disconnected enough from public interest that the secrecy of data and analysis methods is inconsequential. However, I do advocate the publication of the data together with the results.

Cheers,
T.
