Signing papers
March 16, 2008 · Posted by dorigo in physics, politics, science.
Anybody who wishes to make a career as a scientist has to reckon with the annoying fact that the single most important building block in the whole process is the publication, preferably in a refereed journal. Regardless of how brilliant and knowledgeable you are, you cannot expect to be hired only because of your looks or your speech. This is even more true in systems which do not use the “reference letter” system, such as Italy, where candidates for a position are not allowed to have illustrious personalities of the field speak on their behalf, and where any application for a research position has to be complemented with a large envelope containing a copy of all one’s publications.
In some cases, publications are weighed by their “impact factor”, which depends on the number of citations the paper generated. Other measures include the so-called H-index, which summarizes in a single number the scientific production of a candidate: it is the largest number H such that the author has signed at least H papers which got at least H citations each.
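As a minimal sketch of the definition above (the function name and citation counts are invented for illustration, not taken from any real bibliometric database), the H-index can be computed from a list of per-paper citation counts:

```python
def h_index(citations):
    """Return the largest H such that at least H papers
    have at least H citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still clears the bar
        else:
            break
    return h

# A researcher whose papers were cited 10, 8, 5, 4, and 3 times
# has H = 4: four papers with at least 4 citations each,
# but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Note that the index rewards a sustained body of cited work: a single blockbuster paper with a thousand citations still yields H = 1.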
Publishing something worth citing is tough. And producing research worth writing down is only part of the job: the non-trivial rest is getting it through the refereeing process. However, large collaborations such as particle physics experiments make it much easier for individuals to obtain a thick list of articles with their name on them: agreements vary, but in most cases anything that is published has to carry the names of all members. That is very convenient: by belonging to a collaboration, one feels relieved of the need for self-promotion.
Of course, a publication you directly contribute to (by being one of the main authors of the underlying study, or by having developed a software or hardware tool critical to the success of the investigation) is more important for your curriculum, and you will be well advised to highlight it in the list you attach to your resume. But even in the absence of anything you directly contributed to, you will not arrive empty-handed before the next job search committee.
The mechanism outlined above makes any search committee’s work harder. If they want to do their job properly, search committees need to assess the weight of a candidate’s contribution to each paper presented for consideration, and that is quite hard for publications with 700 names on them, all the more so if there are 100 of them, often irrelevant ones, rather than one or two important ones. For that reason, papers with few authors are very valuable: they stand out, and their relevance is easier to recognize. They might run the risk of getting fewer citations, and thus be less valuable, but your contribution to them can no longer be questioned. But how does one manage to publish a paper with few authors, if one is a member of a large collaboration where the policy is to have all names on every article?
Well, there are ways to do it. I can describe what happens in the CDF experiment at the Tevatron, a case on which I am informed. In CDF, a paper describing analysis results usually gets submitted to either Physical Review Letters (PRL) or Physical Review D (PRD). Papers discussing more technical issues typically get submitted to Nuclear Instruments and Methods (NIM), and in the latter case one can propose a short list of names as authors who “specifically contributed” to the work presented. The process of putting names on the article becomes incremental, the default author list is circumvented, and short lists result: most collaborators will avoid begging for their name to be inserted in a paper they did not even know had been written.
Despite the fact that NIM is a less “prestigious” journal than PRL or PRD, the game is worth playing. But what defines what is technical and what is not? A technical publication must not contain real physics results, for which a very well-defined approval process is enforced. Hardware descriptions, analysis methods, sub-detector performance studies: these are things that easily pass as “short author list” papers in CDF. However, the system can be gamed to some extent. A recent case in CDF was an analysis which did not look at real data, and only used detector simulations to assess the discovery reach of the experiment for some very exotic new physics process. The work had been produced by a few colleagues collaborating with some theorists who had an idea and wanted to test it in more detail than a simple “idealized detector” model allows. They requested a green light from the experiment, but several colleagues of mine objected, and I did too.
The matter is in fact quite debatable. CDF considers its own property not just the data but also, correctly in my opinion, its very accurate detector simulation, which required years of unrewarding work to tune and perfect. If CDF allowed its members to individually contact a theorist with a good idea and do sensitivity studies on this or that new process, we would be doing a poor service to Science: we would end up with lots of unexploited good ideas. A few individuals would get nice publications in their resumes, at the expense of those who did not take part in the process directly. It is much better, in my opinion, if the collaboration as a whole considers a search, produces an analysis of the data, and publishes the model and the results together. The theorist, in the latter case, can be referenced in the paper, or even figure as a visiting scientist and be included in the long author list.
Eventually, though, experiments like CDF, with strict publication policies, have to reckon with the fact that the data they produce are not private property, but a world heritage. On the sad day when the Tevatron shuts down for good, it will be utterly nonsensical to keep the data private. That will be the time when the Tony Smiths out there finally have a way to prove their point!