
The 2003 Higgs discovery predictions tested with hard data May 3, 2007

Posted by dorigo in news, personal, physics, politics, science.

These days I am spending my time writing a talk on the status of CDF for a meeting of the INFN on May 15 in Rome, where funding for the Italian particle physics experiments will be discussed. The task of giving a coherent view of the many ongoing efforts, the recent achievements, and the chances for future discoveries is complex and stimulating.

Of course I plan to put great emphasis on the potential for Higgs discovery. We are at a turning point at the Tevatron, with both CDF and Dzero sitting on about 2 inverse femtobarns of data – a fourth of what they hope to collect by the end of 2009. Both experiments have shown preliminary results of Standard Model Higgs searches based on one-inverse-femtobarn datasets, so now is a good moment to evaluate how well they are doing compared to what they said they would do.

In 2003, the Higgs Sensitivity Working Group worked for six months to produce a re-evaluation of the discovery potential of the Tevatron for a SM Higgs, which had been assessed in 1999. I was part of the team both in 1999 and 2003, and am happy I could contribute to the estimates that were produced. You can see a summary in the plot below.

[Plot: the integrated luminosity needed, according to the 1999 and 2003 studies, for a 95% CL exclusion, a 3-sigma evidence, and a 5-sigma discovery of a SM Higgs boson, as a function of the Higgs mass.]

Basically you are looking at three different curves whose height (beware, the y axis in the plot is logarithmic) is proportional to the amount of data the two experiments need to collect in order to say something about the Higgs, as a function of the unknown mass of that particle. The lowest curve, in purple, shows the luminosity needed to exclude the existence of the Higgs at 95% confidence level; the middle curve, in green, shows the luminosity needed to find evidence of the Higgs “at 3-sigma” (a strong signal, but no discovery yet); and the top curve, in blue, shows the luminosity needed to obtain conclusive, “five-sigma” proof of the existence of the elusive boson.

The curves have a roller-coaster appearance because the searches are favored at low mass by the higher production rate of Higgs bosons, which there decay mostly to pairs of b-quark jets. The discovery reach gets worse as the Higgs mass grows and the rate drops, but there is a competing effect at about 160 GeV due to the onset of the very clear signature of the H->WW decay: that channel “opens up” when the Higgs weighs about twice the W. So the curves rise up to about 140 GeV – where the Higgs decay to b-quarks is infrequent and the WW(*) final state not yet dominant – reach a local minimum where the production of real WW pairs eases observation, and then rise definitively for still higher masses.

[The (*) symbol means that one of the two W bosons can be produced off-mass-shell, and thus cost less than 80 GeV of energy to be created. This allows the Higgs to decay to WW pairs even if it has less than 160 GeV of mass, but the process is quickly suppressed as its mass falls below 140 GeV or so.]

Also, note that the results of the 2003 study only refer to the low-mass range, and they sit slightly below those of the previous, 1999 study (which had no real data to confirm many of the assumptions). The 2003 study was made in a rush, and we could not include systematic effects. The worsening that would likely result from including systematic errors is roughly equivalent to bringing the 2003 segments up to match those of the earlier study.

Anyway, the way to read the plot is basically to decide on an amount of data one believes the Tevatron experiments will collect, and draw a horizontal line. The parts of the three curves which lie below the chosen line then tell one in what mass region the Tevatron will exclude at 95% CL, find evidence of, or discover the Higgs boson.
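If you prefer code to rulers, here is a minimal sketch of that horizontal-line reading in Python. The curve points are the four 95% CL purple-curve values I quote further below; I make no attempt to interpolate between them, so this illustrates the procedure rather than digitizing the plot.

```python
# Horizontal-line reading of the sensitivity plot: the mass points whose
# required luminosity lies below the line are within reach.
# Curve points: (Higgs mass in GeV, luminosity needed in fb^-1), taken
# from the 2003 purple (95% CL exclusion) curve as quoted in the text.
exclusion_curve = [(100, 1.0), (140, 10.0), (160, 4.5), (200, 80.0)]

def masses_within_reach(curve, collected_lumi):
    """Return the mass points whose required luminosity is at or below
    the horizontal line drawn at the collected integrated luminosity."""
    return [mass for mass, needed in curve if needed <= collected_lumi]

print(masses_within_reach(exclusion_curve, 5.0))  # -> [100, 160]
```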

The interesting part comes now. CDF is still in the process of combining their results on the various searches based on 1/fb datasets, but in Dzero they have already done their homework. The plot below – which I already discussed in some detail a couple of weeks ago – shows what they find.

[Plot: the Dzero combined 95% CL upper limit on the Higgs production cross section, in units of the SM prediction, as a function of the Higgs mass: observed limit (black curve) and expected limit (red dotted curve).]

The black curve shows the obtained 95% C.L. limit as a function of Higgs mass, plotted as a “times SM” factor: basically a pure number factoring out the known Standard Model Higgs cross section. As long as the black curve stays above unity there is no exclusion of the particle, but only an upper limit on an anomalous cross section. The red dotted curve shows what limit Dzero expected to be able to set, given their analyses and datasets. It is computed with a technique called “pseudo-experiments”, which basically simulates the act of extracting a limit from pseudo-datasets constructed by mixing Monte Carlo events according to the expected collection rate and characteristics.
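To give a feeling for how an expected limit arises from pseudo-experiments, here is a toy sketch in Python. It is a single-bin counting caricature with made-up event yields and a crude flat-prior limit extraction – not Dzero's actual machinery, which combines many channels with far more sophisticated statistical methods.

```python
import numpy as np

rng = np.random.default_rng(42)

b = 100.0    # expected background events (hypothetical yield)
s_sm = 5.0   # expected SM Higgs signal events (hypothetical yield)

def upper_limit_times_sm(n_obs, b, s_sm, cl=0.95):
    """Crude flat-prior upper limit on the signal strength, in units
    of the SM cross section, given an observed event count."""
    mu = np.linspace(0.0, 20.0, 2001)  # signal-strength grid
    # Poisson log-likelihood of n_obs given b + mu*s_sm, up to a constant
    loglike = n_obs * np.log(b + mu * s_sm) - (b + mu * s_sm)
    post = np.exp(loglike - loglike.max())
    cdf = np.cumsum(post) / post.sum()
    return mu[np.searchsorted(cdf, cl)]

# The expected limit is the median limit over many background-only
# pseudo-datasets, i.e. over Poisson fluctuations of the background.
limits = [upper_limit_times_sm(rng.poisson(b), b, s_sm) for _ in range(2000)]
print(f"median expected limit: {np.median(limits):.1f} x SM")
```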

Using the Dzero results above, and assuming CDF will do just as well (no reason to believe they won’t: many individual analyses have shown that the two experiments indeed have similar sensitivities), we can make some quick and dirty calculations, and compare actual limits to 2003 expectations, to see whether the latter are confirmed.

Beware, some caution is mandatory. First of all, many of the analyses Dzero combined together have not been fully optimized yet. These analyses are really complex, and they have to bring together and play in concert all the most refined analysis tools – which in turn need data to be tuned and time to be understood. In particular, for low-mass searches both an efficient, low-noise b-quark tagging and the highest possible resolution on jet energy are critical points.

Another point to make is that some of the analyses have not been included in the combined limit because they are worthless until more statistics are accumulated. That is the case for the associated production of top pairs and a Higgs, or for vector-boson fusion processes. They are expected to contribute only slightly, but everything counts in these kinds of search soups.

After the above caveats have been mentioned, let us work with the numbers. From the 2003 predictions we get that for a 100 GeV Higgs an integrated luminosity per experiment of 1/fb would be enough for a 95% CL limit (crossing point of x=100 with the purple curve). At 140 GeV, 10/fb are needed. At 160 GeV 4.5/fb are enough, and at 200 GeV 80/fb are required. Now, take those numbers and double them: you will then get the luminosity that a single experiment needs to achieve the same result (here we use the assumption that Dzero and CDF are about equal in their sensitivity).

Further assume that the discovery reach increases with the square root of the increase in luminosity: this is true as long as searches are statistically limited, something which is roughly true for all of the Higgs searches. Finally, also assume (for once, a conservative bit) that no further improvements will be achieved by the experiments over what Dzero is displaying right now.

If the above assumptions are correct, we can turn the tables around and compute the 95% C.L. limit that one single experiment should be getting with 1/fb at various masses: the limit comes out naturally as a “times SM rate” number, meaning that it tells us how much larger than the predicted Standard Model rate the excluded cross section is expected to be.

We thus get that for a Higgs mass of 100 GeV, one Tevatron experiment should be setting a limit at sqrt(2)=1.4 times the SM cross section; at 140 GeV, the limit should be at 4.5xSM; at 160 GeV, 3xSM; and at 200 GeV, 13xSM. Now let’s compare with what Dzero is doing. Better still, let us compare with what Dzero expected to be doing given their analysis methodology and data: the red curve. This way we factor out the “chance factor” and get a more meaningful comparison with the 2003 predictions – which indeed were meant to say “50% of the time, the experiments will exclude this or find that”.
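Those numbers are easy to reproduce. Here is the back-of-envelope arithmetic as a small Python sketch, using the 2003 luminosities quoted above and the square-root-of-luminosity scaling assumed earlier:

```python
import math

# 2003 luminosities (fb^-1 per experiment, two experiments combined)
# needed for a 95% CL exclusion, as quoted above.
lumi_needed_2003 = {100: 1.0, 140: 10.0, 160: 4.5, 200: 80.0}
lumi_analyzed = 1.0  # fb^-1 actually analyzed by one experiment

for mass, lumi in lumi_needed_2003.items():
    lumi_single = 2.0 * lumi  # doubled: one experiment alone, no combination
    # limits scale as the square root of the luminosity ratio
    limit = math.sqrt(lumi_single / lumi_analyzed)
    print(f"m_H = {mass} GeV: expected limit ~ {limit:.1f} x SM")
# -> 1.4, 4.5, 3.0, 12.6 (~13) times the SM cross section
```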

The comparison is quite interesting indeed. We in fact find that at 100 GeV Dzero is doing 2.8 times worse than expected in 2003. At 140 GeV, it is doing 2.1 times worse. At 160 GeV, it is doing 1.4 times worse. And at 200 GeV, it is doing just 1.1 times worse than what one would extrapolate from the 1999/2003 study.

Now remember what we discussed above: analyses are tough, especially at low mass. We are, indeed, quite deep into Run II as far as data taking is concerned, but on the other hand the analyses are only just starting to produce results close to the possible optimum. Recall that the best results on the top quark mass in Run I were produced by CDF and Dzero six years after the end of data taking, and the error shrank by a factor of two from the earlier results based on the same dataset.

What that means is that the Tevatron experiments are like a good wine: they get better with time. So, is a factor of 1.5 – 2.0 worse than expectations worrisome? Does it mean that the 2003 predictions were far off the mark?

My personal answer is a resounding NO. I am confident that CDF and Dzero will deliver results in line with expectations. And since I believe that the integrated luminosity delivered by the end of 2009 will most likely be around 7 inverse femtobarns per experiment (or so I extrapolate – I will discuss that in another post soon), I think I can make some predictions here: a 115 GeV Higgs will produce 2.5 to 3.0 sigma evidence in Tevatron data if it is there. If not, an exclusion will be drawn up to 130 GeV. The region 155-170 GeV will also be excluded if the Higgs is not there; otherwise a 160 GeV particle will play the dirty trick of preventing CDF and Dzero from setting a limit there, because in that case they will see a small excess, too insignificant to allow any claim.
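As a sanity check on the 155-170 GeV claim, the same square-root scaling can be pushed to the end of Run II. Take the 2003-predicted single-experiment 1/fb limit of 3xSM at 160 GeV (i.e. assume, optimistically but in line with my argument above, that the analyses improve to match the 2003 sensitivity), combine two experiments with 7/fb each, and see where the expected limit lands:

```python
import math

limit_1fb_single = 3.0  # expected 160 GeV limit (x SM), one experiment, 1/fb
lumi_per_exp = 7.0      # assumed fb^-1 per experiment by the end of 2009
n_experiments = 2

# combining experiments and adding luminosity both scale as sqrt(L_total)
projected = limit_1fb_single / math.sqrt(n_experiments * lumi_per_exp)
print(f"projected combined limit at 160 GeV: {projected:.2f} x SM")
# -> ~0.80 x SM: below unity, i.e. an exclusion, if the Higgs is not there
```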

These are my two pence. Or, as Groucho Marx would say, “Those are my beliefs. And, if you don’t like them… Well, I have others”.

Comments

1. island - May 3, 2007

If it can be 160 GeV, then you know it will….
-Murphy’s Law…😉

2. Kea - May 4, 2007

Thanks once again, Tommaso, for an informative post. If it can, island?

3. Tripitaka - May 4, 2007

Great post and I love the Groucho Marx quote

4. island - May 4, 2007

Hi Kea, it’s not the best example, but the idea was that an excess at 160 GeV is the only place where CDF and Dzero can’t set a limit, so Murphy’s law makes things as frustrating as possible, since you can bank-on everything happening that can to make life *most* difficult.

What’s the strong anthropic prediction?… HAHA!

5. dorigo - May 4, 2007

Island, yes…Murphy rulez. But in this case there are so many things that can go wrong, that Nature (the bitch, not the magazine) will have a hard time picking a couple.

Kea, you are most welcome as always.

Hi Tripitaka, thank you. Just a note: the GM quote goes “Those are my principles…”, not “my beliefs”. I modified his sentence to fit the bill, but it works equally well.

Island, the strong anthropic principle predicts that humanity is bound to fail in the search for the Higgs, because this Universe was made exactly for that purpose.

Cheers to all,
T.

6. island - May 4, 2007

More like… Worlds are made via this process, and we can’t fail, because this is *our* thermodynamic function, so be a good slave bitch and keep looking…😉

7. carlbrannen - May 5, 2007

Along the line of Higgs and mass stuff, I’ve just finished a beautiful calculation, an exact relativistic correction to Newton’s gravitational equations of motion. The calculation uses Painleve coordinates which are of interest to QFT because they are a single chart that covers the entire black hole. These coordinates have hope of treating gravity as a force on a flat Minkowski space, compatible with the rest of the standard model.

Also, I will eventually read Foucault’s Pendulum and The Name of the Rose. I started with The Island of the Day Before because the local used book store had a new copy of it for $2.

8. dorigo - May 5, 2007

Hey Carl, I am impressed! Your java applet is really cool!

http://www.gaugegravity.com/testapplet/SweetGravity.html

Cheers,
T.

9. carlbrannen - May 5, 2007

I’m glad you enjoy the Java applet (which is initialized with “knife edge” black hole grazing orbits), but it is nothing compared to what it will be when I am done. I will add a bunch of pull down menus to select initial conditions that demonstrate various facts about gravity, Newtonian and relativistic.

For example, I will have a bunch of points separated by only a small amount. When they orbit close to a black hole they will string out, thereby demonstrating tidal forces (sort of). Later I will update it to rotating black holes, make it 3 dimensional, and will even be able to do Lense-Thirring demonstrations.

I found the relativistic equations by writing proper time as an integral of the line element ds^2 over t. Then the calculus of variations gives three differential equations in x, y, and z with respect to t. That is, a = F/m like Newton. This is new work and an improvement over current methods of “post Newtonian” celestial mechanics. It is very simple, but I suppose I will write up a paper and submit it to arXiv.
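In equation form, the variational setup just described is (a sketch, using metric signature (-,+,+,+), units with c=1, and the trajectory parametrized by coordinate time t):

$$\tau = \int d\tau = \int \sqrt{-\,g_{\mu\nu}\,\frac{dx^\mu}{dt}\,\frac{dx^\nu}{dt}}\;dt \equiv \int L\,dt,$$

and extremizing over the spatial path gives three Euler-Lagrange equations,

$$\frac{d}{dt}\,\frac{\partial L}{\partial \dot{x}^{i}} = \frac{\partial L}{\partial x^{i}}, \qquad i=1,2,3,$$

second order in x, y, z with respect to t, i.e. directly of the Newtonian form a = F/m.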

The usual textbook GR method is to use affine parameters. This ends up with 4 differential equations, one of which, the one for d^2t/ds^2, is redundant.

My calculation pissed off the local GR experts, who “corrected” my work by showing how the usual 4 DEs can be obtained from Christoffel symbols. Now they’ve arranged to have the thread locked, the punishment usually given to threads that discuss theories that blatantly violate relativity. I’m already laughing.

10. Not Even Wrong » Blog Archive » All Sorts of Stuff - May 5, 2007

[…] are changing the way the media works), your best bet is Tommaso Dorigo’s blog. His latest posting explains well what the current state is, and predicts that, with the data expected from the […]

11. island - May 5, 2007

But Carl, why didn’t you work with Chris Hillman rather than ignore her input? Yes, she is a real pain in the butt, but that’s because she is a self-taught expert, with no background in physics, and she really knows her stuff.

12. Kea - May 6, 2007

…with no background in physics, and she really knows her stuff.

Sorry, island, but I find this statement a little confusing…

13. island - May 6, 2007

Chris taught herself relativity, but she already had the math background as a professor of mathematics at one of the universities in the state of Washington, as I recall. As far as I know, Chris had no formal education in theoretical physics, but she does know relativity as well as any degreed physicist.

Anyway, to anyone that cares, Carl sent me an email explaining his problem with Chris.

14. carlbrannen - May 6, 2007

Island, I contributed to the miscommunication by repeatedly stating that I constantly make mistakes and appreciate corrections. In fact I’ve always tended to drop minus signs, forget to differentiate terms, lose factors of 2pi, etc., and I do appreciate people pointing these things out.

Now that I’ve been 29 for twenty years, my error rate has increased. Maybe that’s why full professors keep graduate students around. Anyway, my admitting these limitations got interpreted as me stating that I didn’t understand the fundamentals of general relativity, and when I said one thing, I really meant some other, wrong, thing.

15. SM Higgs limits: how well is CDF doing ? « A Quantum Diaries Survivor - August 17, 2007

[…] well is CDF doing ? August 17, 2007 Posted by dorigo in physics, science, news. trackback In a post about Higgs limit predictions last May I discussed how, with some assumptions and approximations, the combined limit then […]

16. The 1999/2003 Higgs predictions compared with CDF 2009 results « A Quantum Diaries Survivor - February 13, 2009

[…] by dorigo in news, personal, physics, science. Tags: CDF, D0, Higgs boson, Tevatron trackback Two years ago I used the combined Higgs search limits produced by the D0 experiment to evaluate how well the […]

