
A seminar against the Tevatron!

March 20, 2009

Posted by dorigo in news, physics, science.

I spent this week at CERN to attend the meetings of the CMS week – an event which takes place four times a year, when collaborators of the CMS experiment, coming from all parts of the world, get together at CERN to discuss detector commissioning, analysis plans, and recent results. It was a very busy and eventful week, and only now, sitting on a train that brings me back from Geneva to Venice, can I find the time to report with due dedication on some things you might be interested to know about.

One thing to report on is certainly the seminar I eagerly attended on Thursday morning, given by Michael Dittmar (ETH-Zurich). Dittmar is a CMS collaborator, and he talked at the CERN theory division on a ticklish subject: “Why I never believed in the Tevatron Higgs sensitivity claims for Run 2ab”. The title did promise a controversial discussion, but I was really startled by its low level, as much as by the defamation of which I personally felt a target. I will explain this below.

I should also mention that by Thursday I had already attended a reduced version of his talk, since he had given it the previous day in another venue. Both John Conway and I had corrected him on a few plainly wrong statements back then, but I was puzzled to see him reiterate those false statements in the longer seminar! More on that below.

Dittmar’s obnoxious seminar

Dittmar started by saying he was infuriated by the recent BBC article where “a statement from the director of a famous laboratory” claimed that the Tevatron had 50% odds of finding a Higgs boson in a certain mass range. This prompted him to prepare a seminar to express his scepticism. However, it turned out that his scepticism was not directed solely at the optimistic statement he had read, but at every single result on Higgs searches that CDF and DZERO had produced since Run I.

In order to discuss sensitivity and significances, the speaker made an unilluminating digression on how counting experiments can or cannot obtain observation-level significances with their data, depending on the level of background in their searches and the associated systematic uncertainties. His statements on this issue were very basic and totally uncontroversial, but he failed to acknowledge that nowadays nobody does counting experiments any more when searching for evidence of a specific model: our confidence in advanced analysis methods involving neural networks, shape analyses, and likelihood discriminants; in the tuning of Monte Carlo simulations; and in the accurate analytical calculations of higher-order diagrams for Standard Model processes has grown tremendously with years of practice and study, and these methods and tools overcome the problems of searching for small signals immersed in large backgrounds. One can be sceptical, but one cannot ignore the facts, as the speaker seemed inclined to do.
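
His basic point is in fact easy to reproduce on the back of an envelope: with a small signal sitting on a large background, even a modest background systematic kills the significance of a plain counting experiment. A minimal sketch, with invented numbers and a deliberately naive significance formula:

```python
import math

def counting_significance(s, b, rel_syst=0.0):
    """Crude significance of a counting experiment: s signal events on top
    of b background events, with a relative background systematic rel_syst.
    Illustrative only -- real searches use likelihood-based methods."""
    sigma_b = rel_syst * b
    return s / math.sqrt(b + sigma_b ** 2)

s, b = 50.0, 2500.0
print(counting_significance(s, b))        # ~1.0 sigma from statistics alone
print(counting_significance(s, b, 0.05))  # ~0.4 sigma once a 5% background
                                          # systematic is folded in
```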

Then Dittmar said that in order to judge the value of sensitivity claims for the future, one may turn to past studies and verify their agreement with the actual results. So he turned to the Tevatron Higgs sensitivity studies of 2000 and 2003, two endeavours in which I had participated with enthusiasm.

He produced a plot showing the small signal of ZH \to l^+ l^- b \bar b decays that the Tevatron 2000 study believed the two experiments could achieve with 10 inverse femtobarns of data, expressing his doubts that the “tiny excess” could constitute evidence for Higgs production. On the side of that graph he had placed, for comparison, a CDF result on real Run I data, where a signal of WH or ZH decays to four jets had been searched for in the dijet invariant mass distribution of the two b-jets.

He commented on that figure by saying, half-mockingly, that the data could have been used to exclude the standard model process of associated Z+jets production, since the contribution from Z decays to b-quark pairs was sitting at a mass where one bin had fluctuated down by two standard deviations with respect to the sum of background processes. This ridiculous claim was utterly unsupported by the plot -which showed an overall very good agreement between data and MC sources- and contradicted by the fact that the bins adjacent to the downward-fluctuating one were higher than the prediction. I found this claim really disturbing, because it tried to denigrate my experiment with a futile and incorrect argument. But I was about to get more upset at his next statement.

In fact, he went on to discuss the global expectation of the Tevatron for Higgs searches, a graph (see below) produced in 2000 after a big effort by several tens of people in CDF and DZERO.

He started by saying that the graph was confusing, that it was not clear from the documentation how it had been produced, nor that it was the combination of CDF and DZERO sensitivities. This was very amusing, since, sitting at the far back, John Conway, a CDF colleague, shouted: “It says it in print on top of it: combined thresholds!”, then adding, in a calm voice, “…In case you’re wondering, I made that plot.” John had in fact been the leader of the Tevatron Higgs sensitivity study, not to mention the author of many of the most interesting searches for the Higgs boson in CDF since then.

Dittmar continued his surreal talk with an overbid, claiming that the plot had been produced “by assuming a 30% improvement in the mass resolution of pairs of b-jets, when nobody had even the least idea of how such an improvement could be achieved”.

I could not have put together a more personal, direct attack on years of my own work myself! It is no mystery that I have worked on dijet resonances since 1992, but of course I am a rather unknown soldier in this big game; still, I felt the need to interrupt the speaker at this point -exactly as I had done at the shorter talk the day before.

I remarked that in 1998, one year before the Tevatron sensitivity study, I had produced a PhD thesis and public documents showing the observation of a signal of Z \to b \bar b decays in CDF Run I data, and had demonstrated on that very signal how the use of ingenious algorithms could improve the dijet mass resolution by at least 30%, making the signal more prominent. The relevant plots are below, taken directly from my PhD thesis: judge for yourself.

In the plots you can see how the excess over background predictions moves to the right as more and more refined jet energy corrections are applied, starting from generic jet energy corrections (top) and ending with optimized corrections (bottom), until the signal becomes narrower and centered at the true value. The plots on the left show the data and the background prediction; those on the right show the difference, which is due to Z decays to b-quark jet pairs. Needless to say, the optimization is done on Monte Carlo Z events, and only then checked on the data.
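
For readers who prefer numbers to plots, here is a toy sketch (all response values invented) of the mechanism at work: as the jet energy response is corrected closer to unity, the reconstructed dijet mass peak moves to the true value and narrows.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented jet-response parameters, mimicking generic vs. optimized corrections.
m_true = 91.2                                      # Z mass, GeV
responses = {"generic": (0.85, 0.14), "optimized": (1.00, 0.10)}

for label, (mean, width) in responses.items():
    m = m_true * rng.normal(mean, width, 100_000)  # toy dijet masses
    print(f"{label:9s}: peak {m.mean():5.1f} GeV, width {m.std():5.1f} GeV")
# The peak centers on the true mass and the width shrinks by roughly 30%,
# the size of the improvement under discussion.
```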

So I said that Dittmar’s statement was utterly false: we had an idea of how to do it, we had proven we could do it, and besides, the plots showing what we had done had indeed been included in the Tevatron 2000 report. Had he overlooked them?

Escalation!

Dittmar seemed unbothered by my remark, and responded that that small signal had not been confirmed in Run II data. His statement constituted an even more direct attack on four more years of my research time, spent on that very topic. I kept my cool, because when your opponent offers you on a silver platter the chance to verbally sodomize him, you cannot be too angry with him.

I remarked that a signal had indeed been found in Run II, amounting to about 6000 events after all selection cuts; it confirmed the past results. Dittmar then said that “to the best of his knowledge” this had not been published, so it did not really count. I then explained that it was a 2008 NIM publication, and asked whether he would please inform himself before making such unsubstantiated allegations. He shrugged his shoulders, said he would look more carefully for the paper, and went back to his talk.

His points about the Tevatron sensitivity studies were laid down: for a low-mass Higgs boson, the signal is just too small and the backgrounds are too large, and the sensitivity of real searches is below expectations by a large factor. To stress this point, he produced a slide containing a plot he had taken from this blog! The plot (see on the left), which is my own concoction and not Tevatron-approved material, shows the ratio between the observed limit on Higgs production and the expectations of the 2000 study. He pointed at the two points for 100-140 GeV Higgs boson masses, trying to prove his claim: the Tevatron is now doing three times worse than expected. He even uttered: “It is time to confess: the sensitivity study was wrong by a large factor!”.

I could not help interrupting again: I had to stress that the plot was not approved material and was just a private interpretation of Tevatron results, but I did not deny its contents. The plot was indeed showing that low-mass searches were below par, but it was also showing that high-mass ones were amazingly in agreement with expectations worked out ten years before. Then John Conway explained the low-mass discrepancy for the benefit of the audience, as he had done the day before for no apparent benefit of the speaker.

Conway explained that the study had been done under the hypothesis that an upgrade of our silicon detector would be financed by the DoE: it was in fact meant to prove the usefulness of funding the upgrade. A larger acceptance of the inner silicon tracking boosts the sensitivity to b-quark jets from Higgs decays by a large factor, because any acceptance increase gets squared when computing the overall efficiency of a double-tagged search. So Dittmar could not really blame the Tevatron experiments for predicting something that did not materialize in a corresponding result, given that the DoE had denied the funding to build the upgraded detector!
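
To put a (made-up) number on that squaring effect:

```python
# Hypothetical per-jet b-tag acceptance x efficiency, before and after the
# proposed silicon upgrade. In a double-tagged search both b-jets must be
# tagged, so the per-jet figure enters squared.
eff_old = 0.45  # assumed, old detector
eff_new = 0.60  # assumed, upgraded detector

gain = (eff_new / eff_old) ** 2
print(f"per-jet gain {eff_new / eff_old:.2f}x -> signal yield gain {gain:.2f}x")
# a ~1.33x per-jet improvement becomes a ~1.78x gain in double-tagged yield
```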

I then felt compelled to add that by using my plot Dittmar was proving the opposite of the thesis he wanted to demonstrate: low-mass Tevatron searches were shown to underperform because of funding issues, rather than because of a wrong estimate of sensitivity; and high-mass searches, almost unhindered by the lack of an upgraded silicon detector, were in excellent agreement with expectations!

The speaker said that no, the high-mass searches were not in agreement, because their results could not be believed, and he moved on to discuss those by picking apart real-data results from the Tevatron.

He said that the H \to WW channel is a great one at the LHC.

“Possible at the Tevatron? I believe that the WW continuum background is much larger at a ppbar collider than at a pp collider, so my personal conclusion is that if the Tevatron people want to waste their time on it, good luck to them.”

Now, come on. I cannot imagine how a respectable particle physicist could drive himself into making such statements in front of a distinguished audience (which, have I mentioned it, included several theorists of the highest caliber, no less than Edward Witten among them). Waste their time? I felt I was wasting my time listening to him, but my determination to report his talk here kept me anchored to my chair, taking notes.

So this second part of the talk was no less unpleasant than the first: Dittmar criticized the Tevatron high-mass Higgs results in the most incorrect, and scientifically dishonest, way that I could think of. Here is just a summary:

  • He picked a distribution of one particular sub-channel from one experiment, noting that its most signal-rich region seemed to show a deficit of events. He then showed the global CDF+DZERO limit, which did not show a departure between expected and observed limits on the Higgs cross section, and concluded that there was something fishy in the way the limit had been evaluated. But the limit is extracted from literally several dozen such distributions -something he failed to mention despite having been warned of that very issue in advance.
  • He picked two neural-network output distributions for Higgs searches at 160 and 165 GeV, and declared they could not be correct since they were very different in shape! John, from the back, replied: “You have never worked with neural networks, have you?” No, he had not. Had he, he would probably have understood that different mass points, optimized differently, can produce very different NN outputs.
  • He showed another neural-network output based on 3/fb of data, which had a pair of data points lying one standard deviation above the background predictions, and the corresponding plot for a search performed with improved statistics, which instead had a downward fluctuation. He said he was puzzled by the effect. Again, some intervention from the audience was necessary, explaining that the methods are constantly reoptimized, and it is no wonder that adding more data can result in a different outcome. This produced a discussion when somebody from the audience speculated that searches were maybe performed by looking at the data before choosing which method to use for the limit extraction! On the contrary, of course: all Tevatron searches for the Higgs are blind analyses, where the optimization is performed on expected limits, using control samples and Monte Carlo, and the data is only looked at afterwards (a toy sketch of such an expected-limit optimization follows this list).
  • He noted that the Tevatron 2000 report had estimated a maximum signal/noise ratio for the H \to WW search of 0.34, and he picked one random plot from the many searches of that channel by CDF and DZERO, showing that the signal-to-noise there was never larger than 0.15 or so. Explaining to him that the S/N of searches based on neural networks and combined discriminants is not a fixed value, and that many improvements in data analysis techniques have occurred in ten years, was useless.
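
Since the blindness issue came up, here is a toy version of what optimizing on the expected limit means (pure counting with invented numbers; real Tevatron searches combine dozens of binned discriminant shapes, but the logic is the same):

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(42)

def upper_limit(n_obs, b, cl=0.95):
    """Crude counting-only limit: smallest signal s such that
    P(n <= n_obs | s + b) drops below 1 - cl."""
    s = 0.0
    while poisson.cdf(n_obs, s + b) > 1.0 - cl:
        s += 0.05
    return s

def expected_limit(b, n_pseudo=2000):
    """Median limit over background-only pseudo-experiments. Optimizing
    selections on this quantity -- never on the observed data -- is what
    keeps an analysis blind."""
    return np.median([upper_limit(rng.poisson(b), b) for _ in range(n_pseudo)])

# A tighter hypothetical selection halving the background improves the
# expected limit, and the comparison never touches the real data.
print(expected_limit(b=50.0), expected_limit(b=25.0))
```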

Dittmar concluded his talk by saying that:

“Optimistic expectations might help to get funding! This is true, but it is also true that this approach eventually destroys some remaining confidence in science of the public.”

His last slide even contained the sentence he had previously brought himself to utter:

“It is the time to confess and admit that the sensitivity predictions were wrong”.

Finally, he encouraged the LHC experiments to look for the Higgs where the Tevatron had excluded it -between 160 and 170 GeV- because Tevatron results cannot be believed. I was disgusted: he most definitely stakes a strong claim to the prize for the most obnoxious talk of the year. Unfortunately for all, it was just as much an incorrect, scientifically dishonest, and dilettantesque lamentation, plus a defamation of a community of 1300 respected physicists.

In the end, I am left wondering what really moved Dittmar to such a disastrous performance. I think I know the answer, at least in part: he has been an advocate of the H \to WW signature since 1998, and he must now feel bad at seeing that beautiful process proven hard to see by his “enemies”. Add to that the frustration of seeing the Tevatron producing brilliant results and excellent performances while CMS and ATLAS sit idly in their caverns, and you might figure out there is some human factor to take into account. But nothing, in my opinion, can justify the mix he put together: false allegations, disregard of published material, manipulation of plots, public defamation of respected colleagues. I am sorry to say it, but even though I have nothing personal against Michael Dittmar -I do not know him, and in private he might even be a pleasant person-, it will be very difficult for me to collaborate with him for the benefit of the CMS experiment in the future.


Comments

1. Tony Smith - March 20, 2009

Tommaso, I am having trouble seeing or downloading the first image.

Tony Smith

2. Anonymous - March 20, 2009

It is indeed unfortunate that he would behave in such an unproductive way. It would be good to see his slides so that a point-by-point written rebuttal could be made; hopefully that could turn this into a more productive discussion, if anything productive could come of it.

3. Anonymous - March 20, 2009

Then again, his seminar sounds like it was not even worth rebutting.

4. T - March 20, 2009

Hi Tommaso,

Outside of Tevatron folks, what was the general reception of the audience? Was he preaching to an all-too-willing choir here?

I worry about the growing rift between the LHC and the Tevatron. Many of us work on both. It can’t be healthy. This kind of talk is an attention-gathering device, with little to distinguish it from modern political talk shows (a la Bill O’Reilly, etc.). It has no place in our halls.

5. Tony Smith - March 20, 2009

Since Dittmar has never worked with neural networks,
and
since he referred to very-much-simpler counting experiments such as those used in the early 1990s with low numbers of events
maybe he just does not understand the sausage factory
(your term, Tommaso, used for the LLR Obs Higgs Tevatron Run II black line)
that is now used with high numbers of events.

Further, Dittmar seems to enjoy attacking large physics projects (other than CERN). In the book “The Final Energy Crisis” (Pluto Press 2008), he wrote a chapter called “Fusion Illusions” in which he attacks ITER.
Whether or not ITER will show a way to useful fusion energy,
it is a 10 billion Euro ($15 billion) project that could extend our knowledge of physics related to plasma fusion, and that amount of money is very small change compared to the many trillions of Euros and USA Dollars that are now being spent on trying to fix the New York-London financial system,
so I think that the attack by Dittmar on ITER is excessive and unwise, and an indicator that he seems to have some sort of personal hatred for large physics projects not his own.

Tony Smith

PS – Even though I have not seen the details of the sausage factory, I have enough faith in it to (as I said in comments in another thread here) offer to amend my Strega bet from 5/fb to 10/fb.
Is that OK with you Tommaso?

6. Bjoern Penning - March 20, 2009

Great. Thank you for this article! As one of the main analyzers of H->WW on the D0 side, I read the slides two days ago. The obvious ignorance (e.g. the NN statements etc.) paired with the arrogance and choice of wording irritated me quite a bit. But I have to say I feel sort of sad for someone who uses these allegations as a last resort.
Sensitivity or not… he would have had ten years’ time to join CDF or D0 and learn about high-mass Higgs searches at hadron colliders. Reaching sensitivity or not, it wouldn’t have been a waste of time.

Bjoern Penning

7. a - March 20, 2009

In particle physics (unlike in cosmology and astrophysics) data are kept secret among the collaborations running the detectors. The competition for discovering the Higgs is strong.

In these conditions it is difficult to blindly trust complex global fits…

8. jeffwyss - March 20, 2009

Tommaso
Why do you end by writing:
“I am sorry to say it, but even though I have nothing personal against Michael Dittmar -I do not know him, and in private he might even be a pleasant person-, it will be very difficult for me to collaborate with him for the benefit of the CMS experiment in the future.”

You do say you don’t have anything personal against him but then say it will be very difficult to collaborate with him! It is obvious that you have a personal problem with Dittmar! There is nothing wrong with having personal conflicts. There is no such thing as a purely scientific argument. Indeed I think it is un-scientific to think they exist and hypocritical to spread this false myth. Scientists are people, and people come in all sizes, shapes and personality types. I do not understand why the phrase “…though I have nothing personal against [such and such a person]…” is so abused. I’ve heard it so often that it really rubs me the wrong way. Give the guy the finger and go on with your work. No one makes anybody work with people they don’t/can’t/won’t get along with. At least not in science. In this we are very privileged.

9. dorigo - March 20, 2009

Well, well, you guys are a busy bunch. Let me try answering briefly, with this sucking bandwidth I get through Vodafone wireless.

Tony: thanks, fixed, I think?

Anon, sure, maybe not worth rebutting. But it would be more or less like telling The National Enquirer to ignore Paris Hilton because she’s wasted ;-)

T, I agree. But I think that such episodes happen only because people have no data to work with in CMS and ATLAS (well, some of us have done a lot of work with cosmic rays, but it ain’t the same thing). In any case, M.D. sought attention with his talk rather than to make a point about Tevatron results. That is pretty evident to me.

Tony, yes, I do think Dittmar is old enough to be unwilling to learn the new tricks. But the truth is confirmed by what you say next: he likes to be controversial, and we all know it sells books.

Bjoern, thank you for your comment. As you might know, I myself sometimes let go with sceptical comments about this or that scientific result, and -in fairness- I more often do it with experiments I am not part of… But there are boundaries, and to me those lie way before defamation. MD crossed all of them.

A, it is true that there will be a lot of competition for Higgs searches in the forthcoming years. But today, I do not see it. There are two experiments working hard hand in hand, and two others watching from a distance…

Jeff, I understand what you say, but what I wrote just meant that although Michael Dittmar pissed me off with his unscientific approach, I have never had a chance to discuss chess, or politics, or anything else with him in the past. There are people whom I cannot stand in their daily job but with whom I am more than willing to hang around if the chance arises.

Cheers,
T.

10. dorigo - March 20, 2009

And Tony, I thought we had already agreed on the 10/fb, is that right?
Cheers,
T.

11. cormac - March 20, 2009

Thank you for sharing this T. – such is the usefulness of blogs. I too wonder about the reaction of the audience. Was there a general feeling that the seminar was misleading?
I hope so – it is not that uncommon for individual scientists to be unable to accept evidence (think of Hoyle), but it would be ridiculous if a rift developed between CERN teams and Tevatron teams along such lines…

12. Gordon Watts - March 20, 2009

Nice. Two quick points:
- I don’t understand why he spent so much time on predictions – does that mean he thinks our current low-mass results are correct? If so, great! Why would he think our predictions being wrong means our actual results are wrong? As both you and John pointed out, there are lots of reasons why predictions don’t match data, having nothing to do with a mistake (though those sometimes do creep in).

- He doesn’t like BDTs or NNs? It took the Tevatron experiments almost a decade to convince themselves that these techniques work. They aren’t simple. And if you don’t spend some time in understanding how they work then you for sure wouldn’t understand them or trust them. But once you do invest the time they do work, and they almost always beat the old method. :-)

Finally, his point about cross-checking the Tevatron results – go for it, I say. I’m on ATLAS and I fully expect ATLAS to double-check the results (I’d be disappointed if they didn’t). This is science. It should be reproducible!! Now, as someone up and coming and/or looking to make a name for myself, that isn’t the mass region I’d probably spend most of my time working on!

13. Tony Smith - March 20, 2009

Tommaso,
thanks very much for taking the time and trouble to show in detail where Dittmar was being inaccurate (at best) or dishonest (at worst) in attacking Fermilab’s work.
Now the media, New Scientist, popular blogs, etc., whose journalistic research has not advanced beyond searching Google, will see the truth, and maybe the public will be at least a little bit better informed (or less misinformed).

Unfortunately, Dittmar’s criticism of ITER has been picked up by some relatively popular blogs (such as Association for the Study of Peak Oil and Gas Ireland and crisisenergetica.org and the German wikipedia entry on Kernfusionreaktor in wissen.spiegel.de ),
and
I have not seen any detailed rebuttal by anyone from ITER.
If you know anyone working on ITER, you might suggest that they do for ITER what you did for the Tevatron.

Also, thanks for fixing the image of the first graph – I can see it now,
and
yes, 10/fb it is for the Strega bet, which means that (as long as the sausage recipe and data selection remain substantially the same)

you can win if at least one of the valleys moves or goes away even at 5/fb (maybe this summer)
but
I will have to wait until 10/fb (maybe late 2010) to claim a win.

I hope the 10/fb bet is a bit more meaningful than the one at 5/fb,
which as you pointed out was really about a 50-50 even bet.

It is sort of ironic that I am betting on your sausage factory being better at seeing physics than you (who helped build it) think it is.

Tony Smith

14. anonymous - March 21, 2009

While I agree that the speaker’s comments are offensive and not well thought through, I still want to address two points.
Looking at the sensitivity projections, it appears that the Tevatron should have had the sensitivity to exclude the SM Higgs in the region below 135 GeV by now. Let’s take MH=115 GeV: the curve claims that 2/fb per experiment would have been enough to exclude it, while the current Tevatron result at 115 GeV is 2.4 times the SM cross-section. How is that not an overly optimistic prediction? With the current projections, the Tevatron would need ~7/fb to exclude a 115 GeV Higgs, and that is in an optimistic scenario. I agree that the predictions in the high-mass region do match reality, though…
Also, many improvements that were suggested in the Tevatron sensitivity study were overly optimistic and in general were never achieved in practice, such as the impact of the improvement in mass resolution.
By the way, I am a member of CDF and am working on Higgs searches, but I think it is important to be self-critical. Many important points were addressed by this study, and a roadmap was set for the Higgs program. It is not a mortal sin that some of the predictions were off. And if people criticize that, well…

15. Luboš Motl - March 21, 2009

Come on, it’s just one irrelevant talk. You can’t act like a chicken little. I haven’t heard of that name – or have I forgotten – and I think that I never will.

Imagine that instead of your not too important manipulations with a few graphs, you’re doing serious physics – like string theory – and the critics like the fucked-up Columbia University scumbag use similar cheap pseudoarguments but are also supported by stupid enthusiastic journalists (and many people openly visit his terrorist website).

That could be a reason to be concerned but one irrelevant Dittmar is not.

Note that the basic strategy of criticism is indeed very similar to all other unqualified critics of science. It starts with the narrow-mindedness of the critic who is only able to imagine some very naive methods to find the correct answers, with the denial that any other method than a method that can be squeezed into his limited brain can possibly exist, and continues with the criticism of those who don’t fit into his limited mental box.

16. zupan - March 21, 2009
17. a - March 21, 2009

Tommaso, the competition is between the Tevatron and the LHC, and getting results is politically crucial for both. Someday somebody will claim a 3 or 5 sigma evidence for the Higgs coming from a complicated analysis of data that are available only inside the collaboration that gains from the discovery. Everybody knows that gaining 1 or 2 sigma by adjusting the arbitrary details of an analysis is very easy. Access to the data will be denied to theorists, to the people who built the accelerator, to the people who paid for it with taxes. This is not a healthy situation.

18. dorigo - March 21, 2009

#14, there is one clear reason why predictions do not match the data for low-mass Higgs searches, and it is clearly written in the post: with our sensitivity study we were showing what we would obtain with a significant upgrade of the silicon detector, a 50M$ upgrade that failed to get funded. The acceptance for low-mass Higgs bosons goes with the square of the b-tag efficiency if you do double-tagging (the most sensitive single channel), and a fortified silicon detector would have recovered at least a factor of 2 in sensitivity IMHO.
Further -I made this point elsewhere already- mass resolution improvements have not been perfected yet. Those are the most complicated things to achieve. The sensitivity has improved with the amount of data collected, but more than that with the time passed since 2002, since the analyses have constantly been improved. One of those improvements is indeed getting the dijet mass resolution as small as possible, and on this particular issue the Tevatron experiments are still underperforming. That does not mean that the predictions were wrong! The predictions never said when those results could be achieved, i.e. how much time it takes to reach that sensitivity! They say those are achievable figures given some luminosity. Remember that the Tevatron experiments produced their best Run I top mass determination six years after the end of data taking, and everything falls back into perspective.

Best,
T.

19. Occasionally anonymous - March 21, 2009

> I do not know him, and in private he might even be a pleasant person

I hope so, but judging from the “ConCERNed for Mankind” mailing list, one can say that there are at least two areas where he is not.
But I desperately want to believe that nobody can be an aggressive and arrogant crackpot 100% of the time, so he is probably very pleasant and reasonable when he is not concentrating on expressing random skepticism about Higgs limits or about airplanes hitting big towers in New York.

20. mandeep gill - March 21, 2009

T- despite the comments above saying ignore him, i think you’re totally in the right to rebut his every point, and esp to lay bare his ridiculousness in your blog, recorded in public. The criticism of ITER particularly bugs me — it’s that lame internecine fighting by those who think the pie is limited and want *their* share and to cover their turf. It’s about showing how each science project stands on its own, and deserves, or not, to be funded. Anyway, he seems pretty silly from all you say, and way too invested in the personal part of science. Which we can never get fully away from, as people say above (and that’s mostly a good thing, when the positive side of our passion for it shines through, but sometimes not so wonderful). And some of us dwell on the negative aspects of it more than others, alas.

21. Alberto Ruiz - March 21, 2009

Hi, Tommaso

I have just looked at Michael’s talk and must say that I agree absolutely with your opinions. It is really astonishing to see such a low-level talk at CERN; I hope in the end it is irrelevant.

For those who are working on both sides, Tevatron and LHC, as you and I do, the best thing is to compete scientifically. I am very happy with the progress the Tevatron has made in the last years, mainly due to the hard work of many people, some of them PhD students, who deserve respect and admiration.

22. Collider Smackdowns « Not Even Wrong - March 21, 2009

[...] in particle physics and not regularly reading Tommaso Dorigo’s blog, you should be. His latest posting reports on incendiary claims by Michael Dittmar of the CMS collaboration that recent Tevatron Higgs [...]

23. anonymous#14 - March 21, 2009

quote: “Tevatron experiments are still underperforming. That does not mean that the predictions were wrong! The predictions never said when those results could be achieved, i.e. how much time it takes to reach that sensitivity”

I think this is pretty silly. The goal of the study was to realistically assess possible improvements. If you say to the panel that with 2/fb you will exclude the Higgs, they expect that you actually will exclude it with 2/fb. Anyone can make fantastic assumptions which, if achieved, provide a great outcome. That was not the goal of the study.
Anyway, don’t get me wrong, but I think the criticism of this study is sometimes hard to disagree with. That is not to say that I agree with the Dittmar guy, who has probably never actually done anything himself, and whose aim is only to criticize others.
On the other hand, as was said above, the Tevatron experiments are working extremely hard, and all the responses I have personally heard recently, both from theorists and experimentalists, are very positive.

24. Charles Tye - March 21, 2009

Hi Tommaso,

Maybe I’m a bit slow, but after reading your blog for about 2 years, I finally noticed that the starry image at the top of the blog contains the outline of a certain famous part of the Carina nebula.

If Dittmar is reading your blog, you could shorten your rebuttal and summarise your position by simply drawing his attention to this image ;-)

25. Blake - March 22, 2009

If the average # of !’s per slide in your talk is >=3, you are 75% likely to be a kook.

26. anonymous - March 22, 2009

Let’s be real here.
Dittmar is not some random guy criticizing the result just for fun. He understood, together with Dreiner, how to perform a search in the H->WW->llnunu channel before anybody else.
And what is this magic about the neural networks? Is there anybody at the Tevatron who knows the answer to the simplest question of how these ANN distributions change with higher-order QCD corrections?

27. dorigo - March 22, 2009

Hi Mandeep,

I agree, I have the right to rebut his talk, but I do have the feeling that I am doing him a favor. There’s more than a couple of science reporters who regularly read this blog, and through it Dittmar can easily get more advertisement than he deserves. It might even be a calculated shot, inserting a plot from my blog in his talk, etc… Note that some anonymous commenter left a comment in another thread a week ago, advertising the talk by Dittmar. It might well be himself.

Thank you for dropping by and the support, Alberto!

Anon #14, the silly one here is you: you did not understand my argument, apparently. I am saying that these analyses are difficult, and it might take some time before the full potential of those 2/fb is extracted. There are plots in this blog from past posts about the Higgs searches at the Tevatron that demonstrate this point dramatically: in the course of four years, the sensitivity of low-mass Higgs searches has increased at three or four times the rate expected from the luminosity increase alone.

In other words, it is still too early to speculate about whether the sensitivity predictions were on par: you must allow for some years of digestion of the data, like it or not.

Charles, yes, I know what you are referring to, but I prefer words in print to gestures.

Cheers,
T.

28. dorigo - March 22, 2009

Unreal Anonymous #26,

the fact that he “understood before anybody else” that H->WW signatures could be searched for is a ridiculous statement, but of course in this column, which discusses how he managed to put together dozens of far more ridiculous, false statements, it is indeed excusable.

Perhaps you do not understand the point here: the fact that he is not a random guy, and the fact that he was not talking to a random audience but to professionals of the field, make his talk all the more outrageous. I would be ashamed to walk the corridors at CERN after giving such a horrible performance.

About your last question, I do not argue with people who do not leave their name. I can explain things to them, but you seem to know it all and only want to be provocative, so keep your ideas for your unnamed identity.

Cheers,
T.

29. anonymous - March 22, 2009

You are unnecessarily touchy… What I am saying here is that many people, including myself, have huge respect for the following work, with more than 100 citations:

M. Dittmar (Zurich, ETH) and Herbert K. Dreiner (Rutherford), “How to find a Higgs boson with a mass between 155 GeV and 180 GeV at the LHC”, ETHZ-IPP-PR-96-02, RAL-TR-96-049, Aug 1996, 12pp. Published in Phys. Rev. D 55:167-172, 1997. e-Print: hep-ph/9608317.

So, like it or not, Dittmar understands this physics pretty well.
Why do you believe that my statement above was wrong, given that this is the first paper which demonstrated how to find the WW final state by exploiting the fact that the two leptons like to fly together, and the absence of high-pt jet activity in the signal?
Is there any earlier publication than this one?

As for my question, you know that I know that you know that there is no such study. Again, like it or not, the Higgs search that the Tevatron is performing is very difficult, and the way it is performed is not completely understood theoretically. I am not trying to be provocative; I just do not understand why you are trying to push down our throats the idea that we should not think about and, why not, criticize the result.

As for my identity, I have posted this and the previous message using my full institute e-mail address, so you must know it.
From your answer, however, I am now very worried about the openness of fellow scientists to criticism and questions.

30. dorigo - March 22, 2009

Ok, sorry, I had not checked that you had indeed left an email address. I will try not to use that information in my answer to you ;-)

So, the point boils down to this: shapes of Neural Network outputs for the many contributing backgrounds are still not known to the Nth order, so we should not trust the results of those algorithms.

I think it is a fair point which, unfortunately, Dittmar never made in his talk :)

In any case, I think systematic uncertainties in the background shapes do cover the reduced knowledge of the effects you mention. Those shapes are studied in many different kinematic regimes, aka control samples. They are known to reproduce those data sets well. There are systematic uncertainties in any physical measurement, and the fact that we do not know everything yet does not mean we cannot make measurements…

Anyway: I am not pushing anything down anybody’s throat. I have defended the reputation of the Tevatron searches and results at the seminar, and I reported about that here. I never implied you should believe everything without criticism. I only implied that Dittmar made a fool of himself because his arguments were ridiculous, and got ridiculed.

Cheers,
T.

31. Paolo - March 23, 2009

Hi. I have a very basic curiosity: given that, as far as I know, it’s pretty easy in general to find people (i.e., old-school statisticians, for short) who do not much like NNs, for many reasons, are there studies proving that in this specific application area NNs (vs other non-linear but more theoretically tractable statistical models) are by far the most powerful methods? For sure there are; I would just like to know some basic references… Thanks!

32. dorigo - March 23, 2009

Paolo, I am not aware of such literature. This is pretty much the state of the art in HEP, and the appropriate place to search for a good reference would be Physics Reports or some similar review journal.

Best,
T.

33. Ben Kilminster - March 23, 2009

Hi,

I am co-convener of the Higgs group at CDF.
There has been some mention of our sensitivity at low mass (mh < 135 GeV) not being up to projections. I would like to point out that our sensitivity to a low-mass Higgs has been consistently improving. This is more meaningful than projections: this is achieved progress.
This is why the search is worth continuing. With each analysis update, we make improvements to analysis technique, such as increasing lepton ID efficiency, improving the identification of b-quark jets using complementary algorithms, using alternate sub-detectors to verify measurements in primary detectors, and seeking better discriminants to distinguish signal from background. We are also adding new Higgs search channels. In many ways we are doing better than the 2000 Higgs report suggested we could do. We have found new techniques that were not envisioned in the 2000 Higgs report.

Here is a plot which demonstrates this improvement for a Higgs boson mass of 115 GeV/c^2 :
http://www-cdf.fnal.gov/physics/new/hdg/results/combcdf_090116/figures/twotimescdf115jan09log.gif

The x-axis is our integrated luminosity, how much data we analyze.
The y-axis is the multiple of the Higgs cross-section we are sensitive to at 95% Confidence Level. This plot is “2*CDF”: these are extrapolations from real CDF analyses, in which we double the dataset to approximate combining our results with the DZero experiment. In 2005, we were at the green dot. If we had merely added new data to the analyses, our sensitivity would have followed the green curve. Instead, we concentrated on improving analysis technique. These improvements have made us perform better than extrapolations of just adding new data. As we’ve jumped from green to red to blue to purple to gray, we are improving sensitivity, and the goal of being sensitive to the actual Higgs boson cross-section is coming more and more within reach.

I make no claim of how well we can do by the end of the Tevatron run. You can look at the plot and connect the dots into the future as you see fit. What must be clear from our past performance though is that we have not run out of ideas on how to make our low mass Higgs sensitivity even better. Based on our continuing analysis improvements, and the high rate of Tevatron data we are being delivered, there is every reason to be optimistic.
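
As a rough illustration of why technique matters so much: with a frozen analysis, the expected limit only improves like one over the square root of the luminosity. A toy sketch, with an invented starting point:

```python
import math

L0, limit0 = 1.0, 8.0  # assumed: sensitivity of 8 x SM at 1/fb

def data_only_limit(lumi):
    """Expected 95% CL limit (in multiples of the SM cross section) from
    adding luminosity alone, with no analysis-technique improvements."""
    return limit0 * math.sqrt(L0 / lumi)

for lumi in (1, 2, 4, 8, 16):
    print(f"{lumi:2d}/fb -> {data_only_limit(lumi):.1f} x SM")
# Reaching 1 x SM this way alone would take ~64/fb; the improvements in
# analysis technique are what bend the curve down faster than sqrt(L).
```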

34. mathematician - March 23, 2009

It’s pretty surprising, if true, that neural net methods are considered reliable enough for particle physics applications. Is there any published survey of which NN methods are being used for what type of experiments?

35. Tony Smith - March 23, 2009

mathematician said “… It’s pretty surprising, if true, that neural net methods are considered reliable enough for particle physics applications. …”.

As Gordon Watts said in comment 12 above:
“… It took the Tevatron experiments almost a decade to convince themselves that these techniques … NNs … work.
They aren’t simple.
And if you don’t spend some time in understanding how they work then you for sure wouldn’t understand them or trust them.
But once you do invest the time they do work,
and they almost always beat the old method. …”.

As to what sort of survey to read about such NNs, Tommaso said
“… the appropriate place to search for a good reference would be Physics Reports or some similar review journal …”.

As to my personal view of such NNs,
the NNs allow you to analyze huge amounts of data that would be very hard (maybe impossible) to analyze the old-fashioned way,
and
if you are careful (and as far as I have seen CDF and D0 have been very careful) the analysis you produce with NNs is very useful and interesting.
One thing I do miss with them is the ability to look at kinematics of individual events (like in the old days of the 1990s when Erich Varnes could write a PhD thesis showing detailed kinematics of a few events),
but that is what happens when the NN sausage machine combines huge numbers of individual events:
you cannot look at a slice of the sausage and see the details of individual events,
any more than the financial derivative hedge-fund people can look at an individual Credit Default Swap and see the details of individual mortgages involved.
I will say that I am very impressed by the morality of the CDF and D0 people as compared to the immorality of the financial derivative people:
The CDF and D0 people have NEVER abused their control of the details of the NN programs to “rig the game” and get self-serving results,
while
the financial derivative people have abused their control of their derivative programs to “rig their game” to create a vast pyramid that totals $500 trillion in nominal value of listed derivatives. When you add in the $600 trillion or so of unlisted over-the-counter derivatives, you see that the mess they created is on the order of
a QUADRILLION USA dollars.
If the USA prints up a Quadrillion USA dollars to cover the mess,
then would you be happy holding such a diluted currency?
If the USA does not print up a Quadrillion USA dollars to cover the mess, then many very influential financial/political people will go broke. It will be interesting to see what happens.

Tony Smith

36. mathematician - March 24, 2009

re: #35,

1. “Physics Reports or some similar review journal” is not a reference.

2. Financiers can and do look at the individual loans inside a CDS “sausage”. Are you saying that this is a greater level of transparency than available to particle physicists using Neural Nets to aggregate experimental data? That would be pretty shocking. I would like to see some more concrete description of what NN methods are being used for what purpose.

37. dorigo - March 24, 2009

Dear Ben,
thanks a lot for your clarification. Indeed, I have tried to explain that sensitivity predictions are asymptotic in time. I suspect that by now those who argue about this issue do so for reasons other than finding the truth of the matter.

Mathematician, as surprising as it looks, yes, we have worked really hard on them, and they have become a well-understood tool. But not everybody can be expected to understand the level of scrutiny that CDF and DZERO have put into these tools to convince themselves these are reliable.

Cheers,
T.

38. mathematician - March 24, 2009

If NNs have become a well-understood tool, tested for over a decade (for instance, understood to the point that reliable significance estimates can be extracted), there must be some published documentation of this understanding. Where can somebody who is not a member of CDF or DZERO read about which applications of NNs are considered well understood and reliable in the particle physics community, what sort of testing was performed, etc.?

39. Tony Smith - March 24, 2009

mathematician (whoever she or he is – it is annoying to reply to someone who will not put their name on their assertions) said:
“… Financiers can and do look at the individual loans inside a CDS “sausage”. …”.

My view of that statement by mathematician:
as Pauli often said: “Das ist Ganz Falsch”.

The Chicago Fed Letter article “The role of securitization in mortgage lending” in November 2007 by Richard J. Rosen said:
“… mortgage-backed securities (MBSs) … assume that an issuer has collected 1,000 mortgages each worth $100,000 … This $100 million pool of mortgages can be used to back 10,000 bonds, each worth $10,000 … Each bond … has a similar claim on all payments … The illustrative example … has a much simpler structure than the typical securitization issued by a private sector firm …Pools of MBSs are sometimes collected and securitized … The first default losses are allocated to the most junior class of bond … Early prepayments are allocated to the A class …”.

Clearly, it is not known exactly which mortgages will default or be prepaid, so it is not known into which MBS any given mortgage will fall,
and
equally clearly, it is not possible to identify any particular mortgage (like events in NN physics) with any particular MBS (like a slice of the NN sausage), and the valuation of MBSs (like the physics interpretation of NNs) is done by complicated math/statistics stuff.

In the case of NN physics, it seems to me that CDF and D0 do an honest and effective job and therefore their NN results are useful and good,
while
the financial community has used the complexity of their stuff to hide a huge pyramid scheme that is so dysfunctional and bad that, according to a 23 March 2009 Financial Times article by Jamil Anderlini:
“… China’s central bank on Monday proposed replacing the US dollar as the international reserve currency with a new global system controlled by the International Monetary Fund. …”.

Tony Smith

40. Anonymous - March 24, 2009

mathematician:

Here is a (long, and incomplete) list of relevant journal references:
http://neuralnets.web.cern.ch/NeuralNets/nnwInHepRefRev.html

41. Anonymous - March 24, 2009

And note those are just the ones from 10-20 years ago. There are probably 3-4 times that number from the last 10 years; a more thorough search would turn up a decent list of them.

42. a - March 24, 2009

Tommaso, why should we believe the result of a complicated neural-network search for the Higgs, given that the actual data are not available outside the experimental collaborations?

Notice that I am not questioning the Tevatron. I am questioning the standard way of managing collider data. It is obsolete and dangerous, as we no longer have many colliders, run by different people, that can cross-check the results. And mistakes still happen; just look at your previous post “D0 refutes CDF’s multimuon signal”. And that is a simpler analysis.

Today we need all LHC and Tevatron data to be made available to everybody.

43. dorigo - March 24, 2009

Dear a,

some scepticism is very healthy in our job. However, these large collaborations have such refined internal refereeing methods, and are so full of people who love their scientific reputation more than they love their families, that it is really preposterous to insist that the results are “wrong”, or just not believable, in the case of the Higgs search, which is very carefully crafted.

Please also note that while I myself let go with some scepticism about analyses I do not know in detail, I always do so in a pointed way, by inquiring about specific points. Usually, I get direct answers, which satisfy my curiosity and satiate my scepticism. Instead, saying “oh, it’s wrong, because NNs cannot be trusted” is a rather myopic way of referring to these searches.

I do agree, however, that data should be public. But not in a widespread way, because data can be manipulated to one’s desires. The best warranty that only correct results are squeezed out of Tevatron data, I believe, is to let only CDF and DZERO draw conclusions from them, and maybe to allow paper referees to access them in the review process.

Cheers,
T.

44. Andrea Giammanco - March 24, 2009

Dear Mathematician and dear all,
there is nothing conceptually different, neither in the positive nor in the negative aspects, between basing an analysis on a single discriminating variable and basing it on a very complex multivariate discriminant, for example one coming from a NN.
The only reason why NNs are looked at with suspicion is that they are powerful discriminants. Great power brings great responsibilities ;)
I mean that if you just apply a sharp cut on some simple variable and then count how many events pass, and by subtracting the background and dividing by efficiency*luminosity you call it a cross section, you are not so sensitive to the fine details of the distribution of that variable. But if, instead, you use a powerful statistical method which makes complete use of the entire distribution, then you must work much harder to convince the HEP community that what you did gives the correct answer. If you are using not one variable but many, for example by combining them in a multivariate analysis (say, feeding them to a NN), then the amount of work becomes enormous. And the task is realistically achievable only once the detector and the modelling of “trivial” physics effects have been validated for a proper time.
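
To make the difference concrete, here is a toy comparison (invented shapes; the per-bin quadrature sum is only a crude stand-in for a real binned likelihood):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discriminant: background flat in [0,1], signal peaked towards 1.
bkg = rng.uniform(0.0, 1.0, 100_000)
sig = rng.beta(8, 2, 2_000)

# (a) Cut-and-count: keep x > 0.8 and quote s/sqrt(b).
s = float((sig > 0.8).sum())
b = float((bkg > 0.8).sum())
print("cut-and-count:", s / np.sqrt(b))

# (b) Shape analysis: add per-bin s_i/sqrt(b_i) in quadrature, exploiting
# the full distribution instead of a single threshold.
hs, _ = np.histogram(sig, bins=20, range=(0.0, 1.0))
hb, _ = np.histogram(bkg, bins=20, range=(0.0, 1.0))
mask = hb > 0
print("shape:", float(np.sqrt((hs[mask] ** 2 / hb[mask]).sum())))
```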

By the way:
There is no answer to Mathematician’s question because each time a NN is used (or whatever other method, for that matter), the paper where it is used has to demonstrate that it makes sense for that particular case. We are not interested in the details of the implementation; it can even be a pure black box, but at the end of the day we cannot publish if we don’t demonstrate that the black box gives the correct answer when applied to control samples. A lot of ingenuity is needed to figure out which are the wisest and most complete control samples, and a lot of paranoia is developed in the process.
But this is true also when you just use the simplest method that can come to your mind. The only difference, again, is that if the statistical method is simple then it’s (usually) easier to figure out a proper control sample.

An important collateral point: the implementation of your NN can even be *wrong*, from the point of view of a computer scientist, and the result of the analysis can still be correct. This only means that your particular NN could not achieve its full potential, by mistake; but for the potential it did achieve, its result can be cross-checked and validated using Nature itself.
I know more than one example of analyses where the NNs were used in a “wrong” way (e.g., in many cases the authors were not aware that there is a long specialized literature which explains that exactly for their case it’s much more convenient to use 2 hidden layers instead of 1, or to use a NN with N outputs instead of training N independent nets, etc.), and still I had no objections against the reliability of the physics result.

45. Andrea Giammanco - March 24, 2009

Anyway, it’s a good objection to say (as was pointed out by many in this thread) that when a complex analysis requires a complex validation, a very short paper with just a summary of the main plots is not enough to form a complete judgement.
But here we are discussing the very first announcement of the combined results; usually more detailed companion papers are already available, each focused on a different channel or a different aspect of the analysis, or are promised to be published soon.

46. Andy - March 24, 2009

Back to anonymous’s point about NLO effects on the NN output shapes for WW… both CDF and D0 have studied this, by reweighting the input variable shapes to those with larger and smaller WW system pT. CDF used a modified Pythia with more jet radiation, and D0 compared Sherpa vs. Pythia. The differences in the final NN output shapes were then taken as a systematic uncertainty. We feel we have generously covered the possible effects from higher-order QCD corrections.
We (D0) also included systematics for other higher-order effects, such as the inclusion of the gg->WW process.
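
A toy sketch of that reweighting mechanics (every number and functional form below is invented purely for illustration, not taken from the actual CDF or D0 procedures):

```python
import numpy as np

rng = np.random.default_rng(1)

# Nominal WW Monte Carlo: invented pT spectrum and a stand-in "NN output"
# that is correlated with the WW system pT.
pt = rng.exponential(30.0, 50_000)
nn = 1.0 / (1.0 + np.exp(-(pt - 30.0) / 15.0))

# Reweight events vs. pT to mimic an alternative generator with a harder
# spectrum (assumed form), then histogram the NN output both ways.
w = np.clip(1.0 + 0.004 * (pt - 30.0), 0.1, None)
nominal, _ = np.histogram(nn, bins=10, range=(0.0, 1.0), density=True)
shifted, _ = np.histogram(nn, bins=10, range=(0.0, 1.0), weights=w, density=True)

# Per-bin fractional shift: the shape systematic assigned to the template.
print(np.round((shifted - nominal) / np.maximum(nominal, 1e-9), 3))
```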

47. Higgsterics - March 24, 2009

In answer to “some anonymous commenter left a comment in another thread a week ago, advertising the talk by Dittmar. It might well be himself.”: No, he wasn’t. I confess I did it, and I’m now fully repentant. The talk was greatly embarrassing (for the speaker).

48. Tom Rizzo - March 24, 2009

It’s clear from the beginning that this guy has an agenda… I’m actually surprised that he was allowed to speak at all, given that. I am more interested in the reaction of the general audience at CERN who are NOT Tevatron experimenters like yourself and John C. What were their reactions to this ‘in the hallway’ later??

49. claire - March 24, 2009

Hi,
does anybody know what the input variables of the NNs used for the new exclusion limits are?

50. Higgsterics - March 24, 2009

I’m not a Tevatron guy, so I guess I fall into the category of the general audience at that talk. It was pretty clear that the speaker was immune to any criticism or argument, a clear sign of a crackpot. He is (was?) not a crackpot, so there must be a reason for his behaviour, but I believe it was obvious to anybody attending that talk that it was simply a waste of time. It was often ridiculous and embarrassing.

51. a - March 24, 2009

hi,

thank you for the answer… but I do not fully agree.

We will have to deal with doubtful 5 sigma anomalies. If the collaboration allows anybody to check the data, and external people can say “they did dirty tricks, but it is reasonable”, then the case will be strong.

If instead the collaboration just says “our internal referees agree that we did a perfect job”, then it will be harder to trust the results. For sure, most experimentalists honestly do the work they love. But a big collaboration needs to reach an agreement, and here politics can contaminate science. We know that collaborations claim 3 sigma anomalies when they need more funds.

So I insist that it is important that collider data be made public, and that your idea of “maybe allowing paper referees to access data” is not enough. In other fields (like cosmology and astrophysics) data are routinely made public to everybody, opening the door to some guys who just like to criticize, but also to guys who notice new features in the data (like the WMAP haze, etc.), and this has been more effective than referees.

By the way, a curiosity: 13 years ago the LSND collaboration claimed an anomaly, and a student wrote a separate paper disagreeing with his own collaboration. After many years the LSND anomaly has not been seen by any other experiment. Does anybody know what happened to that student? Is he still alive?

52. LA CAZA DEL HIGGS/ GUERRA LHC-TEVATRON « GRAZNIDOS Weblog - March 29, 2009

[...] strong language not normally associated with particle physicists – follow the link to read Dorigo’s opinions on the presentation in his [...]

53. Mistery solved - April 1, 2009

The only rational explanation for the surprising behavior of Dittmar is finally clear to me: in a Machiavellian move, he must have been hired by the Tevatron people to boost their reputation around CERN. At least that’s the effect it had on me.

54. Dorigo contra Dittmar, un combate de boxeo dialéctico contra el Tevatrón en el CERN « Francis (th)E mule Science’s News - April 5, 2009

[...] survivor.” It recounts the verbal boxing match of Dorigo versus Dittmar in “A seminar against the Tevatron!”. After 35 years we relive the Foreman versus Ali fight, in which the latter [...]

55. Just a link « A Quantum Diaries Survivor - April 5, 2009

[...] with amusement (and some effort) a Spanish account by Francis (th)E mule of Michael Dittmar’s controversial seminar of last March 19th. I paste the link here for several reasons: since I believe it might be of [...]

56. Arvid - September 11, 2009

Dittmar has been doing some obnoxious hackery on uranium resource availability at The Oil Drum (http://www.theoildrum.com/tag/michael_dittmar). He seemed something of a crank, but given that I didn’t know very much about it, I discounted it. Reading your blog post above, it seems that is just his modus operandi. We have a word for people like that here in Sweden; we call them rättshaverist.

