
Scientific Bang for the Buck January 5, 2008

Posted by dorigo in computers, mathematics, news, physics, politics, science.

A concept worth a preprint, specifically Bruce Knuteson's "A Quantitative Measure of Experimental Scientific Merit", arXiv:0712.3572v1 [physics.data-an]. And certainly a preprint worth a look, if only for making up one's mind on the scientific merit of working at MIT. It came out on the arXiv on Christmas day.

Jokes aside, I found the paper quite entertaining, and at times indeed surprising. While I find Bruce's approach to the problem of assessing the scientific merit of a proposed experiment or analysis rather dangerous, and the explicit formulation of priors for the probability of discovering new physics in this or that experiment vaguely reactionary, I admit the paper brings home a point, which is however its premise rather than its thesis: review committees, as well as search committees, move in the dark. I am still in doubt whether the exercise of endlessly debating over priors is a valid substitute for good old preconceptions and biases.

Bruce is quite up-front from the very beginning in stating the main purpose of his study:

“In the context of determining which research program to pursue, review committees often must decide the relative scientific merits of proposed experiments. Within large experiments, deciding which analyses to emphasize requires similar decisions”.

Which gets me to raise the first objection – or rather a comment: it is remarkably radical to talk about "which analyses to emphasize". The concept, in fact, strikes me as a bit too business-like a way of doing physics in a large experiment. At the Tevatron we certainly need to emphasize the top mass measurement, B mixing, and the Higgs searches these days, but we do not need a computation of entropy decrease to know it; emphasizing some analyses (which means, please note, de-emphasizing others) because of some pre-arranged prior (the estimated probability that a gluino is there, for instance) smells of a covert way of depriving the scientists working in a collaboration of their wonderful inventiveness, of their freedom to be guided by their nose, by their intuition.

It is no coincidence, it seems, that Knuteson is one of the authors of a complex piece of automated machinery for new physics searches, a device producing hundreds of histograms of kinematical variables describing any combination of physics objects (high-Pt electrons and muons, jets, missing Et, photons, etcetera) in search of discrepancies with the standard model: is number-crunching winning its battle with scientific minds as much as it has won the chess challenge against our best grandmasters?

The paper starts with a definition of the surprise content of the result of an experiment. It does so by using information theory, arriving at the desired measure of the merit of an experimental result as the entropy decrease in the state of knowledge relative to the particular physics question investigated. Here is the synopsis of the discussion up to Section II, in Knuteson's words:

“The essential thesis of this article is summarized in two sentences.

  • The appropriate quantification of scientific merit of a proposed experiment or analysis (before it is performed and its outcome is known) is the reduction in information entropy the experiment or analysis is expected to provide [...].
  • The appropriate quantification of scientific merit of an experiment or analysis after the result is known is the information gained from the result [...].”

Fair enough: if one knew the chance of the Tevatron discovering new physics in Run II, or of the LHC finding something beyond the Higgs, one could certainly tell how well the money was spent in building those experiments. Using the reduction in information entropy is a principled way to quantify the appropriateness of the investments.
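To make the idea concrete, here is a minimal sketch of what an entropy-based figure of merit looks like in the simplest possible case, a yes/no discovery question: an experiment that settles the question is expected to deliver, in bits, the binary entropy of the prior probability, and dividing by a cost gives a crude bang for the buck. This is my own toy illustration – the function names, the cost figures, and the assumption of a perfectly decisive experiment are placeholders, not Knuteson's actual definitions, which are considerably more elaborate.

import math

def binary_entropy_bits(p):
    # Shannon entropy, in bits, of a yes/no question with prior probability p
    if p <= 0.0 or p >= 1.0:
        return 0.0  # a foregone conclusion holds no surprise
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def crude_bang_for_the_buck(p_discovery, cost):
    # expected entropy reduction of a decisive experiment, per unit of cost
    # (cost units and values here are arbitrary placeholders, for illustration only)
    return binary_entropy_bits(p_discovery) / cost

print(binary_entropy_bits(0.5))                 # 1.0 bit: maximal prior ignorance
print(crude_bang_for_the_buck(0.2, cost=2.0))   # ~0.36 bits per unit cost
print(crude_bang_for_the_buck(0.9, cost=10.0))  # ~0.047 bits per unit cost

Note how a 90% prior already carries less than half a bit of entropy: a near-certain outcome is worth little in this scheme even before cost enters, which is presumably part of why some of the "sure thing" searches fare so poorly in the tables discussed below.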

But here, in fact, comes the nice part: the paper goes on to delve into the question by specifically working out priors. In Section III, Knuteson uses priors derived in the Appendix to estimate the "scientific bang for the buck" (SBFB) of existing experiments, and even that of past experiments discovering the Psi, the W and Z bosons, and so on. One learns that the probability of the Tevatron Run II finding new physics is 20%, and that the probability that the LHC will see something new is 90%.

Using those numbers and the cost of the experiments, the SBFB of the LHC is computed at a mere 0.001, while the Tevatron stands a giant at 5.0! Also worth noting is the specific search for single top production at the Tevatron, which – due to the low surprise factor – has an SBFB of 0.00001. Ironically, in the same table Knuteson includes the SBFB of flipping a coin: it is zero, not that different from the global search for new physics at the LHC – although, to be fair, zero and 0.001 are indeed quite different when you take the logarithm.

As far as completed experiments go, one learns instead that the tau discovery stands at an SBFB of 5.0, soundly beating the runner-up J/psi discovery at 0.2, with the top quark discovery at an amateurish 0.0004. The table is long, and you can search for your favorite HEP result and judge for yourself whether the Nobel Prize to Rubbia wasn't indeed a bit hasty.

In all earnestness, the summary of Bruce's paper is very direct in clarifying the rather limited scope of the proposed quantification method:

“Use of information content or information gain to evaluate the scientific merit of experiments requires the estimation of the probabilities of qualitatively different outcomes, and the reader may object that the problem of quantifying an experiment’s scientific merit has simply been reformulated in terms of the estimation of the probabilities of possible experimental outcomes. At worst, this reformulation significantly changes and focuses the discussion. The fact that there is not a well-developed literature to point to for the justification of these a priori probabilities emphasizes the fact that until now the importance of these probabilities has not been properly recognized [...]”

However, he argues that

“The reader may object to the very idea of constructing an explicit figure of merit [...] Such a reader misses the point that this is done (implicitly, if not explicitly) every time a decision of resource allocation is made. It is surely in the field’s best interest for such evaluations to be made in the sharpest, most open, most quantifiable, and scientifically best motivated framework possible”.

Which, to my biased ears, sounds like, “come on, we all know that the allocation of funding to science is made by fools, so let’s give ‘em some only partially random numbers to base their decisions upon and we will contain the damage”.

I do not mean to criticize the paper too much. It is quite a principled and tidy study of the problem, and I think one cannot do much better than Knuteson did in terms of finding a suitable figure of merit. I disagree with the very concept, though. But maybe I am too old-fashioned and I miss the point: scientific funds are not allocated wisely. On that, I think, we all agree.

Update: being away on vacation obviously does not help one stay in touch with what happens elsewhere on the web. I only now became aware of two other posts on this same topic: one at Superweak and one at Collider Blog. Backreaction also discusses it briefly.

Update 2: a detailed discussion of the statistical aspects of Knuteson’s paper is also available at Deep Thoughts and Silly Things.


Comments

1. superweak - January 5, 2008

I suspect your biased ears have misunderstood the intent of the last paragraph you quote. Economics of course rears its head everywhere – even if we had infinite money, we do not have infinite time, luminosity, or manpower, and these resources need to be allocated in some way. In the case of my experiment, we have to decide how to allocate luminosity to different energy running points, and a 90 MeV difference in the center of mass energy causes impassioned arguments. An analogous technical issue at a hadron machine would be trigger bandwidth: I hope you agree that it would be foolish to choose triggers based purely on some random person’s intuition.

The point of the paper is that we already prioritize what physics we do based on prejudices and payoff per resource expended. Why so many SUSY studies, and not walking technicolor instead? Because our priors give weight to SUSY discovery. Why Z’ searches? Because the discovery payoff is so large compared to the effort required. If one has a mind of a certain bent, an obvious thing to do is to try to quantify these tradeoffs and prioritizations that everyone does. It’s less romantic, but conceivably more efficient.

At any rate I’m much more concerned with the ‘garbage in, garbage out’ feature of the priors involved than I am with the overall concept…

2. dorigo - January 5, 2008

Hi Superweak,

I guess I have been a bit too harsh in reviewing the paper – maybe my respect for Bruce, who is a colleague and a very skilled physicist, has not propagated to the text. I did appreciate the point, however; I just did not agree with it.

I was not suggesting, of course, that we should give no scientific input to the funding committees – just the opposite. Indeed, garbage in, garbage out. It is a tricky business and we should be very wary of what we feed into the system. Imagine you presented those figures of merit to a giant funding agency before the Tevatron Run II and the LHC were finalized. Factors of 5 versus 0.001? How could a meaningful discussion arise with such pre-determined biases toward running the Tevatron now and spending the LHC money for the next twenty years on free beer and chips at the users center?

And you mention trigger bandwidth. It makes me smile, because I have been around hadron colliders long enough to see so many battles at trigger meetings – indeed, when triggers are discussed, the best and the worst of a physicist emerge. But it is the very soul of a collaboration when people argue passionately about doing this or that kind of physics! I do not want this to be replaced with a naked list of numbers. Worse still, I shiver at the thought that people may discuss priors endlessly, without having a clue (unless they cheat) about the final outcome in terms of the SBFB and the critical ensuing decisions in running the experiment.

One example? The Tevatron's single top discovery is assigned, in Bruce Knuteson's paper, a ridiculously low value – only inches away from zero, because everybody believes that single tops are indeed produced by the electroweak interaction. In quite the same fashion, the W and Z discoveries had a ridiculously low scientific value a priori if we stand by those tables. By Jove, we NEEDED to find them even if we were CERTAIN of their existence. How could we progress in our understanding of electroweak physics if we neglected that perhaps redundant but unavoidable step?

The SBFB gives numbers, and that is what I fear, because numbers are hard to argue with once priors have been agreed upon. 5.0 versus 0.001? Forget the LHC!

Cheers,
T.

3. Andrea Giammanco - January 5, 2008

The fact that the tau discovery is rated much more highly than the J/psi discovery is the clearest proof that his methodology is flawed.
The tau lepton was "more unexpected" than the J/psi, but its relevance for the understanding of nature was (although not small) much, much smaller than that of the J/psi.
The J/psi provided evidence for the charm quark, and the existence of the charm quark solved the main trouble with the Standard Model (the flavour-changing neutral currents problem), which came out much reinforced. The charmonium spectroscopy provided confirmation of, and further insight into, QCD, too.

I guess it would be very easy to fix these inconsistencies between the results of this method and one's own preferred order of importance, just by playing with the formulas a bit. Which makes the whole idea behind this paper a huge exercise in mental masturbation: instead of saying "I think that the LHC is more important than the Tevatron" (which is what I think, indeed), I only have to spend some time tweaking the formulas.

4. dorigo - January 5, 2008

Hi Andrea, I agree. The problem is that the value of a search, or a find, depends on much more than the surprise factor. Take the LHC: the value of a discovery of new physics at the LHC far exceeds its scientific content by itself, because on it depends much of the mid-term future of the whole of high-energy particle physics. One could even push the reasoning further, ad absurdum, and claim that if there were one single piece of new physics accessible at either the Tevatron or the LHC, it had better show up in the latter, because of the backfiring potential of having to declare the LHC a useless project.

Cheers,
T.

5. Judging experiments by a priori theoretical expectations « Collider Blog - January 5, 2008

[...] (5-Jan-2007)  There are nice discussions of this issue at Quantum Diaries Survivor and Deep Thoughts and [...]

6. More commentary on judging experiments by their surprise discovery level « Collider Blog - January 5, 2008

[...] not from me – it is quite good! If you find this topic interesting, then you should read the post Scientific Bang for the Buck, by Tommaso Dorigo, and comments to that post. Also, there is an intriguing discussion on the Deep [...]

7. goffredo - January 5, 2008

Could the paper be a prank? The title of the paper does make me laugh.

Jeff

p.s. I contributed to CDF in the 90s and felt at the time that a Nobel for the nice top discovery would have been an excessive prize for a fully mature discipline that had dominated the scene, the funding, and the public interest for some time. By contrast, the W and Z Nobel a decade earlier made more sense, because it really was an impressive establishment of a NEW way of doing particle physics: studying very high-pt events at colliders using general-purpose detectors. This approach became mainstream in the following years (LEP, SLD, the planned detectors at the SSC, CDF, D0, and now CMS, ATLAS, etc.). The top discovery was mainstream (in the good sense of the word). UA1 and UA2 were in many ways revolutionary and marked a break with the previous way of doing high energy particle physics. I have more of a gut admiration for what CERN did than for what was done well at Fermilab.

8. dorigo - January 5, 2008

Hi Jeff,

no, I believe Knuteson's paper is the result of some serious thought on the matter. Unless his sense of humor is much more evolved than mine. I think the math is sound, and I believe he has a few good points to make.

My criticism is of the very concept of quantifying scientific value – something as ethereal as sanctity. In fact, by the same token we could quantify sanctity – take the piousness factor of the candidate as the quantity of self-punishment multiplied by the square of the time spent praying, and work out the Sanctity Bang per Year by dividing the result by the length of the candidate's life… This, I believe, could be a good candidate for a prank paper, not Knuteson's.

Cheers,
T.

9. goffredo - January 5, 2008

"Not everything that counts can be counted, and not everything that can be counted counts."

In all frankness, what makes me chuckle (I am in a good mood this evening!) is precisely the insistence on math! To me it looks like a Rain Man's way of trying to encapsulate people without even bothering to look them in the eyes, an easy way of trying to say something profound without doing the real homework, which is NOT math but the study of history and sociology. I suspect that any insightful statements could have been made just as well by simple and honest handwaving (without the math).

10. D - January 5, 2008

My subjective probability that Bruce Knuteson will turn out to be right about this stuff, Quaero and all the rest, win a Nobel and make all the rest of us feel like idiots is 0.005.

Of course, it would’ve been 0.02 3-4 years ago.


