The typical tediousness vs interest chart

June 6, 2007. Posted by dorigo in humor, physics, politics, science.
Large collaborations in high-energy physics are sometimes slow in releasing their results for public consumption. Reputations are at stake, and physicists like to check and recheck their results before signing off on them. I have discussed in detail elsewhere on this site the review process of the CDF experiment, the only one I know from direct experience. There I criticized it, because I think it is a bit too baroque. However, CDF has a great reputation for publishing only excellent and correct results, so I should keep quiet and live off the benefits the whole system has provided me: more than 200 publications with my name on them, of which I am really, really proud.
Anyway, coming to the topic of this post: in a comment on the previous post, Tony Smith asked how long it will take D-zero to publish their searches for MSSM Higgs bosons. Of course I have no clue. However, my 15 years of experience in CDF allow me to discuss the general trend of the tediousness of the review process, studied as a function of the importance of the result being published. My discussion should apply to just about any large particle physics experiment.
There are, in my model, four different regimes.
1) On the low-interest side, as the relevance of the result approaches zero, there is an asymptote, which I call “Asymptotic irrelevance”. A truly superfluous result will encounter such opposition that it never makes it to the public, which is to say, the review process becomes infinitely long.
2) There is then a minimum for analyses performed with a method already used in the past, which simply add a bit more statistics and slightly shrink the error bars of previous determinations of the measured physical quantity. In CDF, examples of such analyses are measurements of the cross section of a well-known process. Nobody wastes their time triple-checking things that were already checked when the analysis was new, so these results become public quite quickly. I dubbed this region the “Routine minimum”.
3) As the interest of the result grows, so does the length of the review process. People want to be sure things are done the right way and techniques are optimized, and in general there is a lot of interest in the collaboration, which drives plenty of feedback that the authors need to address properly. In rare instances there may also be some envy of the authors on the part of competing groups, which makes things slightly harder for them. Tediousness reaches a maximum for the “flagship analyses”, those for which the experiment was really built: in CDF, the top quark mass is a case in point. For that reason, the peak of tediousness is called “Flagship pickiness”.
4) For exciting results which exceed the interest of the flagship analyses, the trend reverses abruptly. Here time is of the essence: a brand new result on a quantity never measured before, or the observation of a new particle. In these cases the collaboration works as one to get the result out the door as quickly as possible. That was the case of the recent observation of Bs meson oscillations in CDF: despite the incredible complexity of that analysis, the review process was fast, and not light, but extremely effective. The regime we approach here is that of “Nobel urgency”, when it becomes more important to get there first than to wear the right tie.
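Purely for fun, the qualitative shape described above can be sketched as a toy function. The functional forms and all the numbers below are entirely made up; only the shape matters: a divergence at zero interest, a dip at the routine minimum, a peak at flagship pickiness, and a sharp drop into Nobel urgency.

```python
import math

# Made-up position of the "Flagship pickiness" peak on the interest axis.
FLAGSHIP_INTEREST = 5.0

def review_tediousness(interest):
    """Toy model of review length vs result interest (arbitrary units).

    - diverges as interest -> 0  ("Asymptotic irrelevance")
    - dips to a minimum for routine measurements  ("Routine minimum")
    - rises to a peak for flagship analyses  ("Flagship pickiness")
    - drops sharply for discovery-level results  ("Nobel urgency")
    """
    if interest <= 0:
        raise ValueError("interest must be positive")
    if interest < FLAGSHIP_INTEREST:
        # 1/x divergence near zero, plus a slow rise toward the peak
        return 1.0 / interest + 0.4 * interest
    # abrupt reversal past flagship-level interest: exponential drop
    # (2.0 and 0.2 chosen so the curve is continuous at the peak)
    return 2.0 * math.exp(-(interest - FLAGSHIP_INTEREST)) + 0.2
```

The routine minimum of this toy curve sits at interest = sqrt(1/0.4), about 1.58, well below the peak at 5.0, after which the tediousness collapses toward a small constant.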
So here is the big picture: