
Smooth data taking October 14, 2008

Posted by dorigo in news, personal, physics, science.

I am serving as a Scientific Coordinator in the CDF control room this week (other pics and info here and here and here), during the night shift (midnight to 8 AM). It is impressive how well-oiled the data-taking machine has become with time: the store began yesterday at 16.45, and since then we have not had a single problem forcing us to interrupt data taking. So far we have collected a billion Level 1 accepts, and 4.2 million events have been written to tape. This corresponds to an integrated luminosity of 3.33 inverse picobarns in 13 hours, and counting. In half a day, we have collected a little less than the full bounty of data CDF acquired during its first campaign of data taking, in 1988-89, data which was used for dozens of breakthroughs in high-energy physics.

If the above paragraph contains information which you vaguely think would be nice to understand, but it does not make much sense to you, please read the following one, which tries to explain what I am talking about.

CDF is a multi-purpose detector for high-energy particle collisions. It detects the products of the interaction between protons and antiprotons, which are launched against each other in large numbers by the Tevatron accelerator after having been accelerated to nearly the speed of light, reaching an energy of 1 TeV each, the equivalent of more than a thousand times the proton mass.

A store begins with the injection into the Tevatron (a 6.3 km long ring, located 30 miles west of Chicago) of typically ten thousand billion protons, and a few hundred billion antiprotons, circulating in opposite directions. They intersect in the core of our detector, where they produce particle collisions at a rate of about 2.5 million times a second. Since colliding protons are removed from the beam, the beam density decreases with time during a store, so that after 10-15 hours the store is ended and a new injection begins.
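The falling beam intensity means the collision rate also falls during a store. As a rough illustration only, here is a minimal sketch of exponential luminosity decay; the initial luminosity and the beam lifetime `tau_hours` are illustrative assumptions of mine, not official Tevatron figures:

```python
import math

def luminosity(t_hours, L0=3.0e32, tau_hours=7.0):
    """Instantaneous luminosity (cm^-2 s^-1), assuming a simple
    exponential decay with an assumed beam lifetime of a few hours."""
    return L0 * math.exp(-t_hours / tau_hours)

# Print the decaying luminosity over a typical 10-15 hour store.
for t in (0, 5, 10, 15):
    print(f"t = {t:2d} h  L = {luminosity(t):.2e} cm^-2 s^-1")
```

In reality the decay is not a single exponential (beam-gas scattering, collisions themselves, and emittance growth all contribute), but the sketch shows why a store is dumped after 10-15 hours once a fresh stash of antiprotons is ready.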

During the store, CDF collects data by selecting the most interesting of the 2.5 million collisions occurring every second. This task is delegated to a three-level trigger system: Level 1 does very little very quickly, identifying the most energetic collisions and those containing electrons and muons, particles which are rare and interesting in the proton-antiproton collisions we produce. Typically Level 1 filters about ten thousand events a second, which are passed to Level 2. Since the input rate is now much lower, Level 2 has several tens of microseconds to reconstruct the particles produced in the event and decide whether the physics is worth saving or not. The few hundred best events are then passed to Level 3, which does a much more detailed reconstruction and selects about 100 events a second for offline analysis.
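The rate reduction through the cascade can be summarized numerically. This toy sketch just uses the approximate rates quoted above (the Level 2 output of 300 events/s is my reading of "a few hundred"); it contains no real trigger logic:

```python
# Approximate rates of the three-level trigger cascade, from the text.
collision_rate = 2.5e6   # collisions per second at the interaction point
l1_output = 1.0e4        # ~ten thousand events/s pass Level 1
l2_output = 3.0e2        # "a few hundred" events/s pass Level 2 (assumed 300)
l3_output = 1.0e2        # ~100 events/s written to tape for offline analysis

# Show the rejection factor achieved at each trigger level.
for name, rate_in, rate_out in [("Level 1", collision_rate, l1_output),
                                ("Level 2", l1_output, l2_output),
                                ("Level 3", l2_output, l3_output)]:
    print(f"{name}: {rate_in:9.2e}/s -> {rate_out:9.2e}/s "
          f"(keeps about 1 in {rate_in / rate_out:.0f})")
```

The overall reduction, from 2.5 million collisions to 100 kept events per second, is a factor of 25,000.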

“Only 100 events a second? Why can’t you collect all of them?” you might ask. Well, it would be very demanding to build a system saving to disk a Terabyte a second (the amount of data produced before filtering). But it would also be silly, since most collisions produce physics we know inside out: low-energy interactions we have studied at lower-energy machines.

A run is not a store. A run coincides with the duration of the store only if nothing happens which forces us to stop data taking and fix something. Usually such interruptions happen every hour or so, but today we got lucky and running was very smooth for the entire duration of the store. So, in the end, we can claim a very high collection efficiency, and a single run with lots of data. This means that all detector components worked without a glitch, without the need for intervention on the part of my crew, and we are just happy.

So in the end we collected more than three inverse picobarns of data. Picobarns are a measure of cross section: a very small area. We say that proton-antiproton collisions at 2 TeV energy (the energy produced by the Tevatron beams when they collide) produce pairs of top quarks with a cross section of 6 picobarns, for instance. That means that if you collect an integrated luminosity of 3 inverse picobarns, you are likely to have captured 18 top quark events in your dataset. This would be true if your triggers were smart enough to save all of them. Triggers are never 100% efficient, but I am confident that at least 10-15 top-antitop pairs have been logged to disk during this store. Enough for a top quark discovery (if there had not been one 13 years ago)? Well, not really, because those 15 events are buried in a large background: as stated above, we wrote 4 million events to tape tonight. Once you select the most top-like events, you reduce your backgrounds, but the signal gets small too, and maybe two or three events would make it to a “golden dataset”. Still, this was definitely a good night of data taking for the CDF experiment!
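The back-of-the-envelope arithmetic in the paragraph above is just N = cross section × integrated luminosity. A minimal sketch, where the 6 pb cross section and 3 pb⁻¹ luminosity come from the text but the trigger-efficiency values are my own illustrative assumptions:

```python
# Expected number of events: N = sigma * integrated luminosity.
sigma_ttbar_pb = 6.0     # top-pair cross section at 2 TeV, picobarns (from text)
int_lumi_invpb = 3.0     # integrated luminosity, inverse picobarns (from text)

n_produced = sigma_ttbar_pb * int_lumi_invpb
print(f"top-antitop pairs produced: ~{n_produced:.0f}")   # ~18, as in the text

# Triggers are never 100% efficient; these efficiencies are assumed,
# chosen to bracket the "10-15 pairs logged" estimate in the text.
for eff in (0.6, 0.8):
    print(f"logged at {eff:.0%} trigger efficiency: ~{n_produced * eff:.0f}")
```

Since the pb⁻¹ and pb cancel, the units take care of themselves; the hard part, as the paragraph notes, is digging those ~15 events out of 4 million background events on tape.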



1. Andrea Giammanco - October 14, 2008

It would be interesting (not only for the laymen) to know what the most common reasons for run endings are in a well-oiled experiment.
I know some of them only for the not-oiled-at-all experiments 😉

2. dorigo - October 14, 2008

🙂 usually the reason in a proton-antiproton collider is that there’s a stash of antiprotons ready to be injected. Luminosity decreases with time due to beam-gas and other effects – I think the lifetime of the LHC beam is longer than the one in CDF, which is only a few hours.


