
The subtle art of triggering on proton collisions October 31, 2006

Posted by dorigo in physics, science.

In a comment to my post on antiprotons in the waste bin, Markk writes:

This is interesting – in a normal run can you use the decrease in intensity of interactions to test your triggers? As the beam intensities go down the probability of a rare event being in the population decreases, but your sample size as a percentage increases. Do you see any effect in the types of events collected, or isn’t it enough difference to matter?

I thought this was a good point: there are indeed some subtleties involved in collecting data in a high-rate particle collision experiment, and I think it is interesting to discuss them in a bit of detail here.

In CDF we collect data coming from proton-antiproton collisions, which happen every time a bunch of protons intersects a bunch of antiprotons orbiting the same ring in the opposite direction. The timing is such that one of the 36 bunches of protons – 0.3 m long – crosses one of the 36 bunches of antiprotons in the very center of the CDF detector every 396 ns, for a rate of 2.5 MHz.

The input rate to the data acquisition system of our experiment is thus always 2.5 MHz regardless of the luminosity (i.e., of how many protons and antiprotons there are in each bunch, and of how narrow the bunches are squeezed in the transverse direction, which increases the likelihood of interactions): 2.5 MHz is the bunch crossing rate, and there is almost always at least one interaction in each bunch crossing. Only when the luminosity L has dropped a great deal – it decreases continuously during a store, through the small but steady loss of particles from the beams due to interactions with residual gas in the vacuum beam pipe where they circle the Tevatron ring – do empty crossings become common.
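To put rough numbers on this, here is a little back-of-the-envelope sketch in Python. The luminosity of 10^32 cm^-2 s^-1 and the ~60 mb inelastic cross section are values I am assuming for the sake of illustration, not official figures:

    import math

    # Back-of-the-envelope estimate of interactions per bunch crossing.
    # ASSUMED inputs (illustrative, not official CDF/Tevatron numbers):
    LUMINOSITY = 1e32            # instantaneous luminosity, cm^-2 s^-1
    SIGMA_INELASTIC = 60e-27     # inelastic cross section, ~60 mb in cm^2

    BUNCH_SPACING = 396e-9               # seconds between crossings
    CROSSING_RATE = 1.0 / BUNCH_SPACING  # ~2.5 MHz, as quoted above

    # The number of interactions per crossing is Poisson-distributed,
    # with mean mu = L * sigma / crossing rate:
    mu = LUMINOSITY * SIGMA_INELASTIC / CROSSING_RATE
    p_at_least_one = 1.0 - math.exp(-mu)

    print(f"mean interactions per crossing: {mu:.2f}")           # ~2.4
    print(f"P(at least one interaction): {p_at_least_one:.0%}")  # ~91%

With those inputs nearly every crossing contains an interaction; only once L has dropped by an order of magnitude or so does the probability fall well below one.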

The output rate is instead always around 100 Hz, since we want to collect as much data as we can. We have a farm of commercial processors that reconstruct events and write them to tape once they are accepted by our trigger system – a complex collection of hardware modules that decides what to keep and what to discard. The speed at which we can write events to tape is 100 Hz, no more, and that ultimately limits our ability to collect data. But we always want to use our full data-writing bandwidth! So we keep the output rate at its maximum value.

What is the difference, then, between high and low luminosity? And what is the big deal about trying to reach ever higher luminosities, by collecting more and more antiprotons in our accumulation rings before injecting a store into the Tevatron?

Indeed, the luminosity makes a HUGE difference: it dictates the menu of events we collect.

To see why, I have to explain that events get collected based on the characteristics they display. What follows is a very simplified version of what happens in reality, to make the point clear.

A collision may yield a high-energy electron – the clean signature of W boson production, which in turn can signify that the event is a top quark production event or even a Higgs production event – but it will do so very rarely. Much more frequently, it will yield a couple of low-energy jets of hadrons; we still like to collect jet events, but we care much less about them, since they are not a discovery process.

So let’s say the first process has a rate of 1 Hz and the second a rate of 10000 Hz when the luminosity is high. What we do is take the full 1 Hz of electrons and write them to tape, while we take only 99 more Hz of jet production events, discarding the remaining 9901 Hz. We have, that is, prescaled the jet trigger by accepting only about one jet event in a hundred.
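In code, the bookkeeping might look something like this toy sketch – my own simplified illustration, not the actual CDF trigger logic; the names and the simple bandwidth-filling rule are invented for the example:

    TAPE_BANDWIDTH = 100.0  # maximum output rate to tape, in Hz

    def jet_prescale(electron_rate, jet_rate):
        """Prescale factor for the jet trigger: pass only one jet event
        in every `factor`, so electrons plus jets fill the bandwidth."""
        leftover = TAPE_BANDWIDTH - electron_rate  # bandwidth left for jets
        return jet_rate / leftover

    # High luminosity: 1 Hz of electrons, 10000 Hz of jets.
    print(jet_prescale(1.0, 10000.0))  # ~101, i.e. keep about 1 jet in 100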

Now, as the store continues, the populations of protons and antiprotons in the beams decrease. At a given point the luminosity will have halved, and so will the rates of the two processes: 0.5 Hz of electrons and 5000 Hz of jets. What we do then is collect the 0.5 Hz of electrons and 99.5 Hz of jets, this time prescaling the latter by a factor of about 50.
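The same toy rule from the sketch above reproduces this looser prescale:

    # Halved luminosity: 0.5 Hz of electrons, 5000 Hz of jets.
    print(jet_prescale(0.5, 5000.0))   # ~50, i.e. keep about 1 jet in 50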

The above simplified view shows that our menu of collected physics processes changes constantly during a run. There are tens of different prescales, active on some of the more than a hundred different triggers. Of course, the “discovery triggers” (the ones that collect potentially rare processes) never get prescaled. But they have a very low rate anyway…

Comments

1. gabriel - October 31, 2006

a little off topic, but I would like to ask a question. I want to understand what the difference is between a pp and an ee collision, in a specific sense. First let’s forget the synchrotron energy loss issue, and let’s assume that the CM energy of the ee is 3 times the pp (naive, to take into account the 3 quarks). In this case do I have the same “potential” of observing new physics? Put another way, how will the menu of produced particles change?

A little bit confusing to me, still on topic but a different thing, is why they had to use pp collisions to first observe the W.

Thanks,
Gabriel

2. dorigo - October 31, 2006

Hi gabriel,

another excellent invitation to write something meaningful…
I will answer in the following post in a second.

Cheers,
T.


