
Some posts you might have missed in 2008 – part II January 6, 2009

Posted by dorigo in physics, science.

Here is the second part of the list of useful physics posts I published on this site in 2008. As noted yesterday when I published the list for the first six months of 2008, this list includes neither guest posts nor conference reports, which may be valuable but belong to a different place (and are linked from the permanent pages above). In reverse chronological order:

December 29: a report on the first measurement of exclusive production of charmonium states in hadron-hadron collisions, by CDF.

December 19: a detailed description of the effects of parton distribution functions on the production of Z bosons at the LHC, and how these effects determine the observed mass of the produced Z bosons. On the same topic, there is a perhaps simpler post from November 25th.

December 8: description of a new technique to measure the top quark mass in dileptonic decays by CDF.

November 28: a report on the measurement of extremely rare decays of B hadrons, and their implications.

November 19, November 20, November 20 again, November 21, and November 21 again: a five-post saga on the disagreement between Lubos Motl and yours truly over a detail of the multi-muon analysis by CDF, which becomes an endless diatribe since Lubos won’t listen to my attempts at making his brain work, and insists on his mistake. This leads to a back-and-forth between our blogs and a surprising happy ending when Motl finally apologizes for his mistake. Stuff for expert lubologists, but I could not help adding the above links to this summary. Beware, most of the fun is in the comments threads!

November 8, November 8 again, and November 12: a three-part discussion of the details in the surprising new measurement of anomalous multi-muon production published by CDF (whose summary is here). Warning: I intend to continue this series as I find the time, to complete the detailed description of this potentially groundbreaking study.

October 24: the analysis by which D0 extracts evidence for diboson production using the dilepton plus dijet final state, a difficult, background-ridden signature. The same search, performed by CDF, is reported in detail in a post published on October 13.

September 23: a description of an automated global search for new physics in CDF data, and its intriguing results.

September 19: the discovery of the \Omega_b baryon, an important find by the D0 experiment.

August 27: a report on the D0 measurement of the polarization of Upsilon mesons -states made up of a b \bar b pair- and its relevance for our understanding of QCD.

August 21: a detailed discussion of the ingredients necessary to measure with the utmost precision the mass of the W boson at the Tevatron.

August 8: the new CDF measurement of the lifetime of the \Lambda_b baryon, which had previously been in disagreement with theory.

August 7: a discussion of the new cross-section limits on Higgs boson production, and the first exclusion of the 170 GeV mass, by the two Tevatron experiments.

July 18: a search for narrow resonances decaying to muon pairs in CDF data excludes the tentative signal seen by CDF in Run I.

July 10: An important measurement by CDF on the correlated production of pairs of b-quark jets. This measurement is a cornerstone of the observation of anomalous multi-muon events that CDF published at the end of October 2008 (see above).

July 8: a report on a new technique to measure the top quark mass which is very important for the LHC, and the results obtained on CDF data. For a similar technique of relevance to the LHC, also check this other CDF measurement.

Hats off to Lubos Motl. November 21, 2008

Posted by dorigo in internet, personal, physics, science.

No kidding, this post is here to finally thank Lubos, who today shows he is well above average on a very important quality: the ability to acknowledge a mistake. Not just that: he is also surprisingly capable of putting aside the bad feelings, anger, and enmity he sometimes displays. The comment he left this morning in my thread (which comes from his private email address and has the right IP) shows just that, plus more. We must tip our hat to him today.

This is a lesson that truth is worth something to him, too. I had lost hope of that on this particular issue. For this I apologize, and I also apologize to Lubos for the sarcasm I directed at him in my former posts. It was perhaps understandable given what he had written about me on his blog, but still excessive.

UPDATE: To clarify (see a comment by Tripitaka below), in the second paragraph above I meant to say that I had incorrectly interpreted Lubos’ reactions to my explanations as an unwillingness to admit his mistake. On this I was wrong, and I apologize for my suspicions.

Saving a good text from a few mistakes November 21, 2008

Posted by dorigo in internet, personal, physics, science.

After spending some time with my family this evening, I found the courage to delve into Lubos Motl’s latest post about the “ghost sample” cross-section issue. I must admit he wrote an overall good post -at least the first part is good- which however contains quite a few inaccuracies, besides, of course, insisting on his original mistake. I think it is a good idea to spend a few lines pointing out the mistakes in that text, which can then be read with profit.

Fine: so, 742/pb or 2100/pb?

For me, the interesting part of his post starts with the subtitle “Fine: so, 742/pb or 2100/pb?”, which comes after a few lines of text he could have avoided. The first mistake, unfortunately, comes right then at the first paragraph:

Of course, the total integrated luminosity, 2,100/pb (two thousand and one hundred inverse picobarns) must be used as the denominator, as Matt Strassler explains in detail  […]

However, there follows a rather clear account of what luminosity is and other interesting information. At the end of the section, however, he falters again:

Now, there’s no doubt that the total integrated luminosity (of proton-antiproton beams) used to suggest the “lepton jets” in the recent CDF paper is 2,100/pb: see e.g. the second sentence of the abstract. If you want to keep things simple, the right denominator has always been 2,100/pb and there is nothing to talk about. But still, you may ask: why the hell Tommaso Dorigo is talking about 742/pb? Isn’t he supposed to know at least some basic things here?

The problem is that the CDF paper is not very clear. Lubos is totally correct on one point: the abstract does quote 2100/pb. This in fact alleviates his guilt a bit, because it deceived him.

The study first uses 742/pb; only after page 28 does an analysis of the larger, 2100/pb dataset (which includes the initial 742/pb) begin. The reason is that the smaller dataset, collected until 2005, was not subjected to a complicated online selection called a “prescale”, which is basically enabled whenever the rate of proton-antiproton collisions is too high for the data acquisition (which can save to disk no more than about 100 events per second).

Whenever the detector gets flooded with too-high rates, prescaling factors are applied to specific triggers, such as the dimuon trigger which collected the 1400/pb used only in the second part of the study. The dimuon trigger had no prescale until 2005, so it is much easier to use that dataset for cross sections and rates.

This is why the CDF paper uses 742 inverse picobarns of data until page 28, where kinematics is studied with more data (at that point absolute rates no longer matter, so CDF merges all the data into one single set).
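The prescale logic described above can be sketched in a few lines of code. This is a toy illustration of the concept only, not CDF’s actual trigger logic; the function name and the prescale value are invented for the example:

```python
def prescaled_trigger(triggered_events, prescale):
    """Toy prescale: record only every `prescale`-th event that fires
    the trigger, so the data-acquisition rate stays manageable.

    A prescale of 1 records everything; a prescale of N records 1/N of
    the triggered events, and any rate measured on the recorded data
    must then be multiplied back by N.
    """
    return [ev for i, ev in enumerate(triggered_events) if i % prescale == 0]

# 1000 triggered events with a prescale of 4 -> 250 events written to disk
recorded = prescaled_trigger(list(range(1000)), prescale=4)
rate_correction = 4  # multiply any rate measured on `recorded` by this
```

This also shows why the unprescaled pre-2005 data are more convenient for rate measurements: with no prescale there is no extra correction factor to keep track of.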

Silicon vertex tracking

Then a second subsection, titled “Silicon vertex tracking”, starts. Here Lubos falters again. He discusses the SVT trigger, which is used to collect “ghost events” neither in the CDF study nor in the former study of the correlated b \bar b cross section. It is only used for some control samples, but he ignores this fact. It would have been better had he avoided discussing the SVT altogether, because it creates the conditions for another blunder:

“Now, only a subset of the events are picked by the strict SVT criteria: the jets in these events are said to be “b-tagged”. The precise percentage depends on how strict criteria the SVT adopts: it is partly a matter of conventions. In reality, about 24.4% of the events that excite the dimuon triggers also pass the strict SVT filter: this percentage is referred to as the “efficiency” of the (heavy flavor) QCD events. The silicon vertex tracker may also choose the events “loosely”; in that case, the efficiency jumps to 88% or so. However, if you assume that there is no new physics, pretty much all events in which the dimuon trigger “clicks” should be caused by heavy flavors – essentially by the bottom-antibottom initial states.”

Not even wrong! Lubos is confused. He confuses the SVT, which is an online trigger (not used by this analysis), with the offline SVX requirements applied to the muon tracks used to select a sample whose composition is studied in detail. This is a minor mistake, although it shows just how much one can confuse matters by being careless.

Also wrong is the claim that the SVT may select events loosely: again, it is the offline selections that can do that. The SVT has fixed thresholds, being an online algorithm implemented on hardware boards. But let’s not blame Lubos for not knowing the CDF detector.

More nagging is his other mistake above, also highlighted in red: by no means does the dimuon trigger selection alone pick out only bottom-antibottom events! Indeed, those account for only about 30% of the data. But there is an even more nagging mistake in the paragraph: he calls bottom-antibottom “initial states“, while those are FINAL states of the hard process. You have a negligible chance of finding (anti)bottom quarks in the (anti)proton, so you only get them as the final product of the collision! Lubos, please use correct terminology if you want a chance to be taken seriously!

Unfortunately, inaccuracies pile up. Here is the very next paragraph:

“In these most special 24.4% events, bottom-antibottom pairs “almost certainly” appear at the very beginning. So at the very beginning, it looks like you just collided bottom-antibottom pairs instead of proton-antiproton pairs. If you now interpret the Tevatron as a machine where you effectively collide bottom-antibottom pairs, it has a smaller luminosity because only a small portion of the proton-antiproton collisions included protons and antiprotons that were “ready to make heavy flavor collisions”. Even though the remaining 75.6% dimuon events probably also contained bottom quarks, you discard the collisions as inconclusive.”

Amazingly, Lubos really means it: he thinks bottom-antibottom quark collisions happen at the Tevatron in numbers. Yes, he means it: “looks like you just collided bottom-antibottom pairs”. This is slightly embarrassing. However, I must give Lubos a few points here for making a serious attempt at explaining things at a layman’s level. Let’s move on.

“You may define the corresponding fraction of all the events and normalize it in the same way as you would do with bottom-antibottom collisions. Assuming that the bottom quarks are there whenever the SVT says “Yes”, the integrated luminosity of this subset is just 742/pb, not 2,100/pb. The collisions up to this day that have passed the intermediate, loose SVX filter, give you the integrated luminosity of 1,426/pb or so.”

Again: not SVT triggering, but offline SVX cuts. Anyway: alas, Lubos, it really is that difficult, isn’t it? This is very, very wrong, as a reader, Dan Riley, explains well in a thread here. HEP experimentalists do not do that: they do not assign integrated luminosity to subsets.

Integrated luminosity is a number which applies to a sample of data, and whatever cuts or further selections you make afterwards, that number remains. To give an example: you have 1000/pb of integrated luminosity, corresponding to 10,000 events of some rare kind. The cross section of those events is of course 10,000/1000 = 10 pb. Now imagine you select 5% of the data by requesting the presence of a high-Et jet. This sample has 500 events (5% of 10,000), but its integrated luminosity is still 1000/pb. Only, when you compute the cross section, you do not just take \sigma = N/L, but rather \sigma = N/(\epsilon L), where \epsilon stands for the efficiency of the cut. One may say it is a convention (since \epsilon L still has units of integrated luminosity), but it in fact avoids the mistake Lubos falls into.
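The worked example above can be written out explicitly; the numbers are the ones in the text, and the variable names are mine:

```python
# Numbers from the example in the text: 1000/pb of data containing
# 10,000 events of some rare process.
N_total = 10_000        # events in the full sample
L = 1000.0              # integrated luminosity in /pb: a property of the
                        # DATA, unchanged by any later selection

sigma_full = N_total / L            # = 10 pb

# Select 5% of the events with a high-Et jet cut.
epsilon = 0.05                      # efficiency of the cut
N_selected = int(N_total * epsilon) # 500 events

# The luminosity stays 1000/pb; the efficiency enters separately:
sigma = N_selected / (epsilon * L)  # = 10 pb again, as it must
```

The key point the code makes: L never changes when you cut; only \epsilon does.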

The data used for the studies mentioned in the paper correspond to 742/pb. All of the data: the subset selected with tight SVX cuts (143k events), the subset making up the ghost sample (153k events), and the total sample (743k events) which includes both.

As I already mentioned, the CDF publication is not clear about this, since in the introduction it mentions the larger integrated luminosity used for later checks of the kinematics, from page 28 on. Here Lubos is utterly confused: he splits the integrated luminosity into different subsets, deceived by the fact that there is a rough proportion between the two dataset sizes and the two chunks of integrated luminosity collected without prescale until 2005 and with prescale afterwards.

Then, another bad paragraph, unfortunately:

“So is it OK for someone to write 742/pb in the denominator when he calculates the cross section of the “lepton jets” ghost events? The answer is, of course, No. It’s because these “new” events are actually argued not to include bottom quarks as the initial states. For example, Giromino et al. claim that the Higgs is produced and subsequently decays to various h1, h2, and/or h3 pairs (and 16 tau’s at the very end). Nima and Neal use various supersymmetric particles instead. So you can’t normalize the initial states with the assumption that the bottom quarks are there in the initial states because they are not there.”

Again confused. True, the “new” events do not include bottom quarks. But NOT as initial states, for god’s sake!!! Anyway, it is “Giromini”, Paolo Giromini. And of course, the integrated luminosity is the same for all samples considered thus far in the paper, and indeed it is always in the denominator. Always 742/pb, never an ounce more. Sorry, Lubos. Not your lucky day.


The third subsection is called “Tables”. It is here that we get a glimpse of the faulty reasoning which got Lubos stuck on accusing me of a mistake:

“Open the CDF paper on page 16. The set of all dimuon events – 743,006 – is divided to the 589,111 QCD events and our 153,895 ghost events. In the second column of this Table II, you see that only 143,743 events passed the tight SVX filter, neither of which was a ghost event.

Now, if you switch to page 12 and look at Table I, you may add the entries to get 143,000+ and to see that exactly these tight SVX-positive events correspond to the (smaller) integrated luminosity of 742/pb, as the caption of Table I says. For another “written proof” that the 742/pb luminosity corresponds to tightly SVX-filtered collisions, and not all (unfiltered) collisions as Tommaso seems to think, see page 11/52 of Giromini’s talk.”

What I highlighted in blue this time is the source of Lubos’ confusion: indeed, the 143k events used in the past CDF analysis (the measurement of the correlated b \bar b cross section) belong to a dataset comprising 742/pb. But the rest of the data belongs to it too!

Lubos’ mistake is that he does not reason like an experimentalist: he believes the integrated luminosity follows subsets and divides accordingly, while it is a constant. The data (before any selection) amount to 742/pb. Then the tight SVX cuts select 143k events, or the loose cuts select more, but all samples derived from the original one have the same denominator: 742/pb. Only, they get different efficiency factors in the denominator (the \epsilon symbol used above).

Ok, I made this post longer than it needed to be. Sorry to have bored many of you, but I felt there were still quite a few readers around who had no clue yet whom they should believe.

A note to those of you who are still undecided: I built the CMX chambers installed in CDF, with which the data we have been discussing were collected, with my very own hands, between 1999 and 2000. I have worked for CDF since 1992. I signed the paper on anomalous muons, and I followed a six-month-long review process before the publication. I am a friend of the main author, Paolo Giromini, and I have discussed Strassler’s paper with him at length over the phone. Do you not think it is a bit arrogant for a retired theorist to believe he can win an argument on such an exquisitely experimental matter with me? I am not boasting: I am just stating a fact. Lubos is arrogant. This time, he got a lesson. Lubos, I still like you, but please, don’t mess with me on these matters.

Lubos the experimentalist November 20, 2008

Posted by dorigo in internet, personal, physics, science.

Let us get one thing straight at the beginning of this post, which centers on Lubos Motl’s arrogant pretense of understanding the details of experimental collider physics better than the experimentalists who did the measurement.

The heart of the matter is the effective cross section of the “ghost events” collected in the recent CDF study. The paper says in the abstract that the integrated luminosity used in the study is 2100 inverse picobarns: a number which basically tells you how many proton-antiproton collisions were used to collect the data in which the “ghost events” are found.

When one then sizes up the “ghost sample” at 153,000 events, the effective cross section of these events is easy to compute: it is just \sigma = N/L, the number of events divided by the integrated luminosity. This is what Matthew Strassler did, finding \sigma=75 pb.

This unfortunately is wrong, because the integrated luminosity of the data CDF used to extract the 153,000 events is not 2100 inverse picobarns, but that of a smaller sample, amounting to 742 inverse picobarns.

The above fact, however, is slightly concealed in the CDF publication, and you only realize it if you read on page 28 (after all discussions of event counts have finished; the mention of the 153,000-event sample size is 20 pages earlier): “For this study, we use a larger statistics data sample[5]“, where footnote [5] explains why the full dataset of 2100/pb is now used, rather than the smaller 742/pb used thus far: the data taken after the first 742/pb were collected are prescaled, which makes it less straightforward to extract cross-section measurements from them.

[Motl was deceived because he dismissed too quickly the possibility that the data amounted to 742/pb, reasoning that 153,000/742 is about 200 picobarns, a number much higher than that given by the authors themselves elsewhere. He made another mistake here: he did not understand that the authors do not care to estimate the cross section of ghost events, which are 50% polluted by backgrounds, but concentrate on the half which is indeed of unknown origin: so they get something like 100 picobarns. On this he may be excused: indeed, experimentalists are much more used to talking of “cross sections” for mixed samples, while it would be cleaner to reserve that name for physical processes, and to speak only of rates when several processes mix together. This time, though, the roles got inverted…]
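The arithmetic behind the two estimates is easy to check; the numbers below are the ones quoted in these posts:

```python
N_ghost = 153_000   # size of the ghost sample quoted in the post
L_wrong = 2100.0    # /pb: full dataset quoted in the paper's abstract
L_right = 742.0     # /pb: unprescaled dimuon data actually used for rates

sigma_strassler = N_ghost / L_wrong   # ~73 pb: the /2100 estimate
sigma_correct   = N_ghost / L_right   # ~206 pb: with the right denominator

# Only about half the ghost sample is of unknown origin, hence ~100 pb:
sigma_unknown = 0.5 * sigma_correct
```

So the ~75 pb figure comes from dividing by the wrong luminosity, and the ~100 pb figure the authors quote comes from the unknown half of the sample over 742/pb.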

So, in a nutshell: Strassler, and then Motl, stepped on a point where the CDF publication is not too clear, and erred. Not a tragedy, of course. I used a bit of irony when I pointed it out while commenting on Strassler’s paper the other day. My point is that theorists should be very careful when interpreting experimental papers. Such numerical details should be checked with the experimentalists who did the measurement.

Now, Motl is not known for his ability to admit his own mistakes. So what happened yesterday was that after I pointed out he was wrong, and tried in good faith to explain what he had failed to realize, he went postal. His strategy, when cornered, is to counterattack, muddying the waters. But in so doing he digs his own grave, for now not only does his original mistake get amplified, but he ends up adding a lot of detail about his ignorance of basic facts of collider physics.

The whole thread is available here, but let me pick out a few pearls for the lazier among you.

“The 742/pb figure you refer to is just a way to measure a particular smaller subset of the (total) 2100/pb collisions – namely those collisions which are relevant for the correlated b-bar cross section measurements. These special 742/pb collisions are those that are acquired by the dimuon trigger.”

Here Lubos shows his confusion. ALL of the 2100/pb of data the CDF paper is based on were collected by the dimuon trigger. He regards 742/pb as a figure only relevant to an earlier measurement, which is used to compute the sample composition of the data in the “ghost” analysis as well.

“The calculation you suggest would lead to three times higher cross section than the real one – something like 200 pb? You’re just completely deluded […]”

He insists: 200 pb is too high (why, one could ask: we are talking of a mixed sample, and in any case there is no reason why 200 pb is outrageous while 75 pb is fine, for an unknown new-physics signal. Just as an example, the cross section of W events at CDF is 20,000 pb, that of Z events 6,000 pb. But of course the ghost sample is very large and unaccounted for: it does look very weird that a signal in the 100 pb range has escaped detection in the past).

At this point, my Christian soul forces me to warn Lubos, and I say:

“If I tell you the luminosity with which those 153000 events have been collected is 742/pb, you should pause before insisting in your mirror climbing, lest you end up injuring yourself badly. It is NOT 2100/pb, ok ?”

Now, anybody would pause for a moment when confronted with an author of a paper stating facts very clearly. Does this raise a doubt in Lubos’ beautiful mind? No. He insists, showing he is basically ignorant of collider physics:

“The number 742/pb only tells us that in about 1/3 of the proton-antiproton collisions, a bottom-antibottom pair was believed to be created at the very beginning, as evidenced by the dimuon trigger. One could use this number, 742/pb, if it were true that all these events recorded with the dimuon trigger had the bottom-antibottom pair at the beginning of the event, and if all the cross sections were somewhat re-expressed as cross sections for bottom-antibottom collisions. Well, except that the bottom-antibottom are not really colliding here. They are just produced in 1/3 of the collisions.”

Here he really does it. He does not appear to know the first thing about a proton-antiproton collision: bottom-antibottom pairs are produced in about one collision in a thousand at the Tevatron. Lubos must be unaware that the total proton-antiproton cross section is about 80 millibarns, while the b \bar b cross section is more than three orders of magnitude smaller. Not “a third”, Lubos. Please stop it. Stop it… Noooo, he continues:
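A back-of-the-envelope check of the “one in a thousand” claim. The 80 millibarn total cross section is the figure quoted above; the ~50 microbarn b \bar b value is an assumed ballpark of my own for illustration (the exact number depends on the kinematic acceptance):

```python
# Everything converted to picobarns.
MILLIBARN = 1e9   # pb per mb
MICROBARN = 1e6   # pb per microbarn

sigma_total = 80 * MILLIBARN   # total proton-antiproton cross section, ~80 mb
sigma_bb    = 50 * MICROBARN   # assumed b-bbar production cross section

fraction = sigma_bb / sigma_total   # ~6e-4: roughly one collision in a
                                    # thousand, nowhere near Lubos' 1/3
```

Whatever reasonable value one plugs in for the b \bar b cross section, the fraction stays far below one percent.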

“But with some handwaving, one could perhaps re-express the cross sections as cross sections based on fluxes of bottom quarks and antiquarks, and the corresponding integrated luminosity would only be 742/pb because the fluxes would refer to fluxes of bottom quarks and antiquarks.”

Some handwaving? This is not handwaving, this is crazy! Fluxes of bottom quarks! Anyway, the thread ends there. Instead of continuing in my blog, where he cannot control matters well, he takes it to his own, where he has an illuminating post out today in which he pretends to explain where I was wrong. Poor Lubos. I sincerely feel sorry for his blunder, and I did try to stop him, even with a private, conciliatory message. To no avail.

Now, my only concern at this point is that he is spreading false information about the CDF analysis. I will have somebody in CDF contact him to stop this.

An appetizer for the impatient lubologist November 19, 2008

Posted by dorigo in internet, personal, physics, science.

I cannot resist directing the Lubophiles and the Lubophobes among you to the thread of the former post, where, out of the blue, Lubos starts an attempt at explaining why Strassler’s estimate of the cross section of “ghost events” in the recent CDF publication is right, and I am wrong.

Of course, I am right and Strassler is wrong. I know it because I authored the paper, but even if I had been so careless as not to know what I signed, I did discuss that very number -the one Strassler got wrong- with the main author of the CDF study. In any case, Lubos in the thread shows just how arrogant he is and -to his credit- how much he believes in himself, launching himself with a smile into the tiger’s den. Indeed, I explain to him what his error is (he can be excused for it: the CDF publication is not too clear about the fact that most of the study is performed with 742/pb, and only a part with 2100/pb), but he ignores my warnings, and ends up in a really sorry situation: having to choose between two evils. Defend his mistake ad infinitum, showing the world he is childish beyond repair, or retreat in good order?

Of course, if you know Lubos, you know what he will do: and in fact he counterattacks, ending up even deeper in trouble. Some of the sentences of his last comment show just how deep his ignorance is.

An appetizer of better things to come -I will have a detailed post out tomorrow- is below, pasted from his last comment (oh, as of now… he might please us with others):

The number 742/pb only tells us that in about 1/3 of the proton-antiproton collisions, a bottom-antibottom pair was believed to be created at the very beginning, as evidenced by the dimuon trigger. One could use this number, 742/pb, if it were true that all these events recorded with the dimuon trigger had the bottom-antibottom pair at the beginning of the event, and if all the cross sections were somewhat re-expressed as cross sections for bottom-antibottom collisions. Well, except that the bottom-antibottom are not really colliding here. They are just produced in 1/3 of the collisions.

1/3 of the collisions? OMG LUBOOOOOOS! :))))

Now, this post will appear as “revenge” for Lubos Motl’s last post, where he uses a couple of pages to bitterly criticize me for my post on Strassler. I deny that: I had in fact posted the following comment on his blog (I paste it here because he might remove it, if I know him…):

Nice article. Very wrong, but quite readable.

The authors of the CDF paper on multi-muons are not an unknown subset, since they all signed, and their names are on the front page. What is unknown is the fraction that did not sign, because authorship in CDF varies from paper to paper, due to people leaving CDF, new members, visitors who sign only one paper, etc.

About the rest… [Invalid characters removed.]


As you see, nothing aggressive on my part, and the exchange happened before the thread in my blog turned bad. But of course, I was not fast enough: after finally understanding his blunder, he removed the part where I explained his mistake, lest his readers understand he is wrong, and added a copy of his “explanations”. Oh well. Everybody is entitled to their opinion, even about Lubos Motl.