
Four trillion antiprotons in the waste bin October 30, 2006

Posted by dorigo in news, physics, science.

Yesterday the day shift in the CDF control room started out looking promising: a huge stack of antiprotons was waiting to be injected into the Tevatron ring. The more antiprotons we can get to collide, the more data we collect, and the happier everybody is…

Actually, things are not so straightforward: no matter how high the number of particles in the beam is, and the resulting collision rate, we always write about 100 Hz of events to tape. But the higher the rate, the more selective our trigger system can afford to be, and the more interesting the events we write become.

Still confused? Ok, imagine we have a trillion antiprotons and ten trillion protons colliding. That produces collisions in our detector at a rate of the order of 5 MHz. We select 100 Hz of those with a trigger system that sorts out the most promising events for data analysis. The particles in the beam go on colliding for hours, and their number decreases as they collide or interact with the residual gas in the vacuum beam pipe. As the number decreases, the collision rate goes down accordingly, say to 2.5 MHz at some point. But our data acquisition system automatically adjusts the trigger selection cuts to keep the 100 Hz output constant.
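For intuition, here is a minimal numerical sketch (my own illustration, not the actual CDF data acquisition code) of what "keeping 100 Hz out" means: as the collision rate decays during a store, the overall rejection factor the trigger must apply shrinks with it, which is why the selection can be progressively loosened.

```python
# Toy sketch: the overall rejection the trigger must provide to keep
# the output near 100 Hz, for a few collision rates during a store.
# Illustration only, not CDF software.
TARGET_OUTPUT_HZ = 100.0

def required_rejection(collision_rate_hz):
    """Factor by which events must be thinned out to hit the target rate."""
    return collision_rate_hz / TARGET_OUTPUT_HZ

for rate in (5.0e6, 4.0e6, 2.5e6, 1.0e6):  # Hz, decaying beam intensity
    print(f"{rate/1e6:.1f} MHz in -> keep ~1 event in {required_rejection(rate):,.0f} "
          f"to stay near {TARGET_OUTPUT_HZ:.0f} Hz out")
```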

Antiprotons do not come for free. They do not exist in nature, and have to be produced by colliding a beam of protons with a thin target. One such interaction every 50,000 will produce an antiproton, which is then collected in a dedicated facility, the Antiproton Accumulator ring.

It takes hours to build a stack of antiprotons large enough to make a meaningful beam for Tevatron collisions. Yesterday morning we had a pretty large stack, a total of 5 trillion antiprotons. But we wasted 4 trillion of them when, after they were injected into the Tevatron tunnel, one of the accelerator magnets had a quench.
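To put that in perspective, a quick back-of-the-envelope using only the numbers above (and ignoring collection losses, so this is a lower bound) shows how many protons have to hit the target to build a stack like yesterday's:

```python
# Rough estimate from the numbers in the post: 5 trillion antiprotons,
# one antiproton collected per ~50,000 protons on target.
# Collection losses are ignored, so the real number is even larger.
antiprotons_in_stack = 5e12
pbars_per_proton = 1.0 / 50_000

protons_on_target = antiprotons_in_stack / pbars_per_proton
print(f"protons needed on target: {protons_on_target:.1e}")  # ~2.5e+17
```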

Magnets are used to focus the beam and to bend the particles' trajectories around the circular tunnel. The Tevatron magnets are superconducting, so they produce intense fields in a compact size and with far less power expense. To keep them in the superconducting state, liquid helium is flowed through them. But sometimes the liquid helium does not have enough pressure to take the heat away from the magnet. The temperature rises, and the field decreases sharply. This is what is called a quench.

A quench can be destructive for the magnet, but most of the time things are not that bad. At the very least, however, a fast quench causes the beam circulating in the accelerator to be lost.

Too bad for the nice stack of antiprotons, whose destiny was to produce exotic new particles after a few hours spent orbiting and then smashing into protons. They did not live long enough to make a fancy Higgs boson… They ended their lives annihilating in a beam dump.

And today, we had another quench, again just before starting collisions… Too bad! These two stores might have allowed us to collect a couple of Higgs bosons and maybe 30 top quark events…
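For the curious, the kind of estimate behind that last sentence is just N = sigma × integrated luminosity. The cross sections and the per-store luminosity below are ballpark values I am assuming for illustration (they are not quoted in the post), and no trigger or selection efficiency is applied:

```python
# Order-of-magnitude yield estimate, N = sigma * integrated luminosity.
# The numbers below are my own rough assumptions for the Tevatron in 2006,
# not figures from the post; efficiencies are ignored.
sigma_ttbar_pb = 7.0        # ~7 pb for top pair production at 1.96 TeV
sigma_higgs_pb = 1.0        # order 1 pb for a light Standard Model Higgs
lumi_per_store_invpb = 2.0  # assume ~2 pb^-1 delivered in a good store
n_stores = 2

for name, sigma in [("top pair", sigma_ttbar_pb), ("Higgs", sigma_higgs_pb)]:
    n_events = sigma * lumi_per_store_invpb * n_stores
    print(f"{name}: roughly {n_events:.0f} events produced")
```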

Comments

1. Markk - October 31, 2006

This is interesting – in a normal run can you use the decrease in intensity of interactions to test your triggers? As the beam intensities go down the probability of a rare event being in the population decreases, but your sample size as a percentage increases. Do you see any effect in the types of events collected, or isn’t it enough difference to matter?

2. dorigo - October 31, 2006

Hi Markk,

Yours is a good point and I wish to elaborate; I will write a post on this later.

The quick answer, though, is that what we work with is event rates. The input rate is always 2.5 MHz regardless of luminosity, because 2.5 MHz is the bunch crossing rate – and there is almost always at least one interaction in each bunch crossing until L goes down a lot. The output rate is always 90-100 Hz, since we want to collect as much data as we can.
So what is the difference between high and low L? Indeed, it is the menu of events we collect. To give a constant output, the different triggers get changed as L changes. In particular, triggers that would saturate the 100 Hz bandwidth by themselves if kept at full acceptance are prescaled at high luminosity: only a fraction F = 1/N of the events satisfying a particular requirement gets written. As L decreases, the prescale N decreases too, until it reaches one. A mixture of triggers with different prescale factors takes care of giving a varying menu of accepted events as L decreases… (a toy sketch of this mechanism follows below the comment).

Cheers,
T.
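
To make the prescale mechanism in the comment above concrete, here is a toy sketch (with invented trigger names and rates, nothing like the real CDF trigger table): each trigger keeps one event in N, and N is lowered as the luminosity drops so that the total output stays near the ~100 Hz budget.

```python
# Toy prescale illustration: keep 1 event in N per trigger, with N chosen
# so the summed output stays near the bandwidth budget as luminosity falls.
# Trigger names and rates are invented for the example.
BUDGET_HZ = 100.0

# (name, raw accept rate in Hz at the reference luminosity L0)
TRIGGERS = [("high_pt_lepton", 30.0), ("dijet", 400.0), ("min_bias", 5000.0)]

def prescales(lumi_fraction):
    """Integer prescales N >= 1 for each trigger at L = lumi_fraction * L0."""
    share = BUDGET_HZ / len(TRIGGERS)      # naive equal split of the budget
    result = {}
    for name, rate_at_l0 in TRIGGERS:
        rate = rate_at_l0 * lumi_fraction  # raw rate scales with luminosity
        result[name] = max(1, round(rate / share))
    return result

for frac in (1.0, 0.5, 0.1):
    print(f"L/L0 = {frac}:", prescales(frac))
```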

