A first for me – just rejected a paper
June 30, 2008. Posted by dorigo in personal, physics, science.
Tags: godparenting, refereeing
I am not new to the job of refereeing scientific papers, but most of my experience comes from internal reviews. That mandate is called godparenting in CDF; it consists of carefully checking an analysis even before a paper draft exists, and then babysitting the process that ends with the paper being submitted to the publisher. It is a long journey at times, and on more than one occasion it required me to do real work to double-check some results. I have been a godparent of a total of eight papers, and I was always on the pushing side (as per the true mandate of CDF, which expects godparents to help speed up the process once they are convinced of the soundness of the results): I never tried to hinder publication of the work of my peers. And I claim that the fact that godparents are not anonymous did not have any weight in my decisions…
Refereeing papers written by other collaborations, by contrast, is something I have started doing only recently. Here one works anonymously, and in principle there is room for showing one’s true pickiness or malice. I am not malicious about other people’s efforts, but boy, I sure can be picky. And this time I was – but I think I made the right decision.
I just sent back to the publisher my comments on a paper recently submitted for publication by a large collaboration, one that has published more than a hundred papers before this one. I cannot disclose the name of the experiment or the topic of the paper, unfortunately. I can still say what was wrong with it, which in the end forced me to recommend against publication unless further work is done by the authors.
I found the paper rather sloppy in several respects: lack of precision, vagueness in some statements, numbers lacking uncertainty estimates, carelessness in describing experimental techniques. But these are minor sins. What really upset me was to see, in one of the main figures accompanying the paper, a systematic disagreement between data and Monte Carlo which was neither commented on in the text nor quantified in any way. Not a small effect: something big, which certainly propagates to the main results of the work. It is as if the authors were candidly showing they did not understand the composition of their data, yet still wanted to extract results from it.
I can well imagine the reason for the inaccuracy. These days many people involved in experiments that have stopped running are ramping up their involvement in the LHC experiments, so they cannot spend their time keeping the publications of their former experiment at the level they would like. They still want to “get the paper out”: unfinished business is as bad in high-energy physics as it is in any other human activity, from landing a plane to transplanting a liver. However, nothing excuses a lack of accuracy in a scientific paper, in my opinion. If something falls below that threshold, why publish it?
Sure, the main author could be waiting for a Ph.D. and need the publication as we need the air we breathe – but that does not move me: as scientists we have to produce analyses of the highest quality we can, and demanding a high standard is the only way we can oppose a trivialization of our discipline. Besides, I think it will take the authors of the paper I reviewed just a couple more weeks to figure out the source of the systematic effect I pointed out and produce a much improved draft.
Still, the question nags me. Wasn’t I the one in favor of the most open diffusion of scientific information? Open diffusion certainly means less control over accuracy. So why did I reject the paper?
I believe I did so because I know that the reputation of the collaboration which signed it is high, and I have thus just enforced some “quality control” on its output, having been given a chance to do so. I have no doubt the paper will come out, hopefully improved, in the near future.