
Multiple interactions at LHC: an exercise in elementary statistics February 13, 2008

Posted by dorigo in mathematics, physics, science.

The LHC will start running towards the end of this year, at the design energy and with a bunch crossing time of 25 ns. That means 40 million intersections per second between two proton packets in the core of CMS and ATLAS [things are a bit more complicated -some bunches are empty- but this has no relevance for my point].

We expect the beams to contain relatively few protons in this initial phase: low luminosity, that is. That's because a high-energy beam requires a lot of tuning before it can accommodate a large number of particles. Charged particles, in fact, have the nasty tendency to repel each other, and squeezing them into a narrow corner of phase space -knee to knee, all together as a single man- is a tremendously hard task, requiring successive approximations. Moreover, as the beams travel through the LHC tunnel, each making over ten thousand turns a second, they induce strong currents in the machine's hardware. This electromagnetic interplay is impossible to compute beforehand, and a trial-and-error procedure by the machinists is unavoidable.

Luminosity is a function of the number of protons circulating in the two directions. Basically, one can compute it by taking the product N_1 N_2 of the number of particles in the two beams, multiplying by the revolution frequency, and dividing by the transverse section of the beams. One obtains a number whose units are inverse area (from the beam size) times inverse time in seconds (from the frequency). The LHC will start at L=10^{30} cm^{-2} s^{-1}, but we expect it to reach the design value of L=10^{34} cm^{-2} s^{-1} in a couple of years.
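If you want to see where the design number comes from, here is a minimal Python sketch of that formula. The machine parameters below (revolution frequency, bunch count, protons per bunch, beam size at the collision point) are round design-report values I am plugging in for illustration, not numbers taken from the discussion above:

```python
import math

# Round LHC design parameters (assumed values, for illustration only):
f_rev = 11245.0               # revolution frequency [Hz]
n_bunches = 2808              # number of colliding bunch pairs
N1 = N2 = 1.15e11             # protons per bunch in each beam
sigma_x = sigma_y = 16.7e-4   # transverse beam size at the collision point [cm]

# L = f * n_b * N1 * N2 / (4 pi sigma_x sigma_y), in cm^-2 s^-1
L = f_rev * n_bunches * N1 * N2 / (4 * math.pi * sigma_x * sigma_y)
print(f"L = {L:.1e} cm^-2 s^-1")   # close to the design value of 1e34
```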

Luminosity is not just a number with which machinists boast about their gadget. With it, you can compute the rate of production of any given process, if you know its cross section.

Cross section, a number labeled with the Greek letter \sigma and carrying units of area, basically tells you the effective area a proton must hit within another proton in order to give rise to a given reaction. The total cross section for proton-proton collisions at the LHC energy is \sigma_{tot} = 8 \times 10^{-26} cm^2: more or less the area of a circle with a radius of 1.6 millionths of a billionth of a meter – the “size” of a proton as seen by another colliding with it head-on. But the total pp cross section is huge! Compare it with the cross section for producing a top quark pair: \sigma_{t \bar t} = 8 \times 10^{-34} cm^2, a hundred million times smaller. It is as if the incoming proton had to hit the other one “just right there” to produce a top pair.
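As a quick sanity check of that “size”, here is the radius of a circle whose area equals \sigma_{tot}, in a couple of lines of Python:

```python
import math

sigma_tot = 8e-26                    # total pp cross section [cm^2]
r = math.sqrt(sigma_tot / math.pi)   # radius of a circle with that area [cm]
print(r * 1e-2, "m")                 # ~1.6e-15 m: 1.6 femtometers
```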

With a knowledge of what a cross section is, we can answer questions. What is the total rate of proton collisions at the LHC if the luminosity is L=10^{33} cm^{-2} s^{-1} – the one we will have in the “low luminosity” phase? Simply,

N = \sigma_{tot} L.

With \sigma_{tot} as quoted above, we get a rate N = 8 \times 10^7 s^{-1} of proton collisions: eighty million collisions per second! How many per bunch crossing? Well, if all proton bunches contain the same number of particles, we get on average two interactions per bunch crossing, since the crossing rate is 4 \times 10^7 s^{-1}. Easy, huh?
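For those who like to fiddle with the inputs, the whole chain fits in a few lines of Python:

```python
sigma_tot = 8e-26        # total pp cross section [cm^2], as quoted above
L = 1e33                 # "low luminosity" [cm^-2 s^-1]
crossing_rate = 4e7      # bunch crossings per second

rate = sigma_tot * L          # collisions per second
mu = rate / crossing_rate     # mean interactions per bunch crossing
print(rate, mu)               # 8e7 per second, mu = 2.0
```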

Well, things in reality are just a bit more complicated. The number of independent events occurring in a given bunch crossing follows the rules of Poisson statistics. Poisson statistics allows us to compute the probability that a bunch crossing will contain no collisions, or one, or two, or N, given the average \mu = 2 computed above. The formula looks awful, but it is quite benign:

P(N) = \frac{\mu^N e^{-\mu}}{N!} (the exclamation mark indicates taking the factorial of N).

We need a pocket calculator, but other than that there is nothing that should scare you out of this post. Keep reading if you want to use what you just learned to get some insight into the inner workings of the LHC experiments!
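Or, if your pocket calculator happens to speak Python, here is the formula as a function (a direct transcription of the expression above, nothing more):

```python
from math import exp, factorial

def poisson(n, mu):
    """Probability of exactly n events when the mean is mu."""
    return mu**n * exp(-mu) / factorial(n)

# With an average of two interactions per crossing, as computed above:
for n in range(5):
    print(n, round(poisson(n, 2.0), 4))   # 0.1353, 0.2707, 0.2707, 0.1804, 0.0902
```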

With the formula for the probability of N collisions, we have gained power – knowledge, they say, is just that. The power to make wonderful calculations. If I tell you that the cross section for producing an event with two energetic jets (say, with energy above 30 GeV each) is \sigma_2 = 2 \times 10^{-28} cm^2, or 200 \mu b (we prefer to use microbarns -labeled \mu b- for the area of 10^{-30} cm^2, a quite convenient unit), how many such events will be produced in a single bunch crossing at the full LHC luminosity of 10^{34} cm^{-2} s^{-1}, on average?

Easy. Use the formula N = \sigma L, and you get a rate N = 2 \times 10^6 s^{-1}, that is, 2 MHz. Then, by dividing by 40 million bunch crossings per second, you get the rate per bunch crossing: 0.05, or one in twenty. If instead we had taken the cross section for producing four energetic jets, \sigma_4 = 3 \mu b, we would have obtained a rate of 30 kHz, and a probability per bunch crossing of 7.5 in ten thousand. Mind you, the cross sections I quote are approximate – I estimated them with some back-of-the-envelope calculation. But let’s not be distracted by details, and let me get to the point.
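The same arithmetic in a few lines of Python, for both cross sections at once (remember, the cross-section values are my rough estimates):

```python
L_design = 1e34          # full LHC luminosity [cm^-2 s^-1]
crossings = 4e7          # bunch crossings per second

for name, sigma in [("2-jet", 2e-28), ("4-jet", 3e-30)]:   # cross sections [cm^2]
    rate = sigma * L_design             # events per second
    per_crossing = rate / crossings     # average events per bunch crossing
    print(name, rate, per_crossing)     # 2 MHz and 0.05; 30 kHz and 7.5e-4
```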

Those computed above are average rates. What happens if I ask you what the chance is that two or more separate proton collisions, each producing two jets like the ones above, occur in the same bunch crossing?

Now, that might sound like a weird question, devoid of any practical importance. Quite the contrary. Let me compute it for you before making my point. We use the Poisson probability formula, with \mu = 0.05 and N \geq 2. Instead of computing P(2), P(3), P(4)… and then adding them together, we use the fact that the sum of all P(N) is one: a nice property of probability, indeed! Here is the computation:

P(0) = e^{-\mu} = 0.95123,

P(1) = e^{-\mu} \mu^1 /1! = 0.04756, and so

P(\geq 2) = 1 - P(0) - P(1) = 1 - 0.95123 - 0.04756 = 0.00121.
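In Python, the same complement trick takes a few lines:

```python
from math import exp

mu = 0.05                              # mean number of 2-jet events per crossing
p_ge2 = 1 - exp(-mu) - mu * exp(-mu)   # P(N>=2) = 1 - P(0) - P(1)
print(p_ge2)                           # ~0.00121

# Compare with the genuine 4-jet probability per crossing found above:
print(p_ge2 / 7.5e-4)                  # ~1.6: the overlap background dominates!
```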

Interesting! The chance of two distinct dijet events in a single bunch crossing is not that negligible… If we cannot distinguish where the jets come from (i.e., if the two proton collisions happen too close to each other), we will interpret the event as one with four energetic jets!

Now compare this 0.00121 with the number computed above with \sigma_4, the rate of collisions producing four jets in the final state from a single proton-proton interaction: we discover that at the LHC, two separate collisions may conspire to mimic rarer processes! If I am looking for four-jet events, I will find 1.21 per thousand bunch crossings coming from two overlapping 2-jet “multiple interaction” events, while only an additional 0.75 per thousand will be genuine 4-jet events. I have a background to consider which lower-luminosity machines would never have to care about!

The exercise is over. It is not an academic one: in the study of the very rare production of top pairs with Higgs-strahlung, pp \to t \bar t H, one gets to consider exceedingly rare events with up to eight energetic jets in the final state. The background from multiple interactions conjuring up a multijet final state by adding different contributions has to be removed! We can do that by actually tracking the jets down to the space point where they originated: we only keep events where all eight jets come from the same spot, and we are ok. We can do it, since we have such a wonderful silicon tracker…

Comments

1. carlbrannen - February 14, 2008

Excellent! Could you add how large the interaction volume is? Its dimensions, longitudinal and transverse? And the distances the silicon tracker can resolve, longitudinal and transverse? I know the bunches are shaped like needles; wouldn’t that mean that the collision volume is needle-shaped, and that you can mostly distinguish collisions longitudinally?

2. dorigo - February 14, 2008

Hi Carl,

indeed, the bunches are thin, thin needles – a few tens of microns across, with a longitudinal length (along the beam direction, the z-axis) of 10-20 cm. I have the actual numbers somewhere, and will try to dig them out for you. In the meantime I can add that yes, we can only distinguish separate collisions along the z direction. The silicon tracker has a resolution comparable to the beam transverse size, a few tens of micrometers, so multiple interactions – which are spread out over several centimeters along z – can be separated quite easily.

Cheers,
T.

3. Andrea Giammanco - February 14, 2008

Two questions:
– Why are you saying that the LHC will start at the end of the year? Although there are rumors about that, all the official statements I have seen so far (including the slides from accelerator people at the Perugia workshop that you attended) still insist that the goal is to start in May and have the first “physics runs” in July.
– Why do you say that L=10^{34} cm^{-2} s^{-1} will be reached in two years? As far as I know, the plan was to run at 10^{33} until roughly 30 fb^{-1} are collected, which means roughly 3 years (not counting a first year at 10^{32} or less) before ramping up to 10^{34}. At least, this is what we assumed in the TDR… (The rationale should be that studying the SM background is easier at 10^{33}.)

4. dorigo - February 14, 2008

Ciao Andrea,

I think July is still optimistic… but it is mostly a gut feeling. In any case, July is indeed in the second half of 2008.
As for 10^{34}, that is a mistake on my part. Indeed, it will probably take three years to get there. However, I doubt we would keep running at 10^{33} for the first three years just to “study the SM backgrounds”, if we could safely go up in luminosity…

Cheers,
T.

5. nige cook - February 16, 2008

Thanks for this summary of basic calculations. Please could I ask why in this post you state cross sections in old CGS units of square centimetres, rather than in modern SI units of square metres, or in particle physics units of barns (1 barn = 10^{-24} square centimetres = 10^{-28} square metres)? Is there any real reason, or is this just an Italian rebellion against European regulation?

Is it just pride, that particle physicists at the forefront of everything use obsolete units to prove they are above the pettiness of regulations? (In school, when doing elementary physics twenty years ago in England, using CGS units was just as criminal as not rounding calculation results. Writing 10^2 cm in place of the correct result 1 m would earn no marks for the question. A bit difficult to master, since most of the old books I learned physics from were in CGS units.)

It used to make sense to use electron volts for the kinetic energy a particle gains in an accelerator before a collision: if an electron is accelerated through an electric potential of 1,000,000 volts, it acquires an energy of 1 MeV.

But that is not directly measuring the energy of a particle; it’s like describing how fast your car is going by stating the number of litres of petrol (gasoline) required to accelerate the car to a given speed.

E.g., if it takes 1 litre of fuel to get up to 100 miles per hour, we could refer to speed in units of litres of petrol, which is just as logical as referring to the kinetic energy of a particle in terms of the voltage of the field responsible for accelerating that particle up to a particular speed (and energy). Maybe instead of referring to 30 GeV jets, people should logically refer to 4.8 nJ jets?
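(The conversion itself is a one-liner; the only input is 1 eV = 1.602*10^{-19} J:)

```python
joules_per_eV = 1.602e-19     # energy gained by one electron charge per volt
print(30e9 * joules_per_eV)   # 30 GeV ~ 4.8e-9 J, i.e. 4.8 nJ
```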

6. dorigo - February 17, 2008

Hi Nigel,

the centimeter is, I am afraid, a very convenient unit for particle physicists. We measure everything in centimeters, from detector components to beam sizes. Who cares about SI? Luminosity is universally quoted in units of cm^{-2} s^{-1} in collider physics (I am not so sure about neutrino beams, though). GeV are also very convenient, because a proton’s mass is roughly one GeV, and because momenta of the order of a GeV are what you usually measure in the detectors.

I am of course not saying that international units are not useful. All I am saying is that it is much more important for a system to be used everywhere in a given branch of science than across disciplines in, say, 80% of the world. If astronomers prefer parsecs (another quite odd unit, you’ll concede) to SI units, they have their reasons. We have ours… I have never seen an example of a calculation requiring luminosity and distances on a parsec scale on the same sheet of paper, so we are ok.

Cheers,
T.

7. nige cook - February 17, 2008

Hi Tommaso,

Thanks for taking the trouble to reply, and thanks also for the analogy of parsecs. I did a cosmology module as well as quantum mechanics, and yes the parsec only really made sense in astronomy when absolute distance scales were uncertain.

At that time, all astronomers could do for reporting absolute measurements was to measure the angle of parallax. The parallax of a star is the difference in its apparent angle (relative to far more distant stars in the sky) at two times of the year 6 months apart (when the Earth is on opposite sides of the Sun), and it is measured in seconds of arc, where 1 second of arc is 1/3600 of a degree.

Hence 1 parsec is the distance of a star showing a parallax of 1 second of arc. Since it has been determined accurately that the radius of the Earth’s orbit is about 150 million km, it follows from the trigonometry of a right-angled triangle that a star with a parallax of 1 second of arc would be at a distance of (1.5*10^8)/sin(1/3600 degree) = 3.1*10^13 km.
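(That arithmetic is easy to verify, e.g. in Python, taking one arcsecond as 1/3600 of a degree:)

```python
import math

AU_km = 1.5e8                          # radius of the Earth's orbit [km]
parallax = math.radians(1.0 / 3600)    # one second of arc, in radians
print(AU_km / math.sin(parallax))      # ~3.1e13 km = 1 parsec
```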

What’s surprising here is that this kind of conventionalism was the cause of a major failure by Hubble. Instead of thinking deeply about his recession law v/R = H, he expressed H conventionally in units of km/s/Mparsec. If he had thought about it, spacetime implies that you can represent a distance as a time; had he written the Hubble law that way, he would have had v/t = Hc, which is interesting since it naturally has units of acceleration.

Even if you just take the regular mainstream Hubble law v = HR, you can see that it implies an acceleration: a = dv/dt = d(HR)/dt = (H*dR/dt) + (R*dH/dt) = Hv + 0 = H^2 R. So the Hubble law itself predicts that the universe is accelerating, at the small rate of about 6*10^{-10} m s^{-2}. This is such a tiny acceleration that it was first observed only in 1998, by Saul Perlmutter’s clever automatic supernova-signature-detecting software, which was run directly on live digital input from CCD telescopes.

Mainstream cosmology is completely half-baked because it doesn’t bother to analyse the few solid facts it has at its disposal. Every time I tried to point out that it is possible to prove the universe is accelerating (and I published it in 1996, years before the discovery), together with the allied facts that the outward acceleration implies an outward force which leads to quantitative predictions in quantum gravity, I was just censored out for dozens of reasons. People don’t listen because they either (1) assume that the mainstream orthodoxy is gospel truth, or (2) completely reject the big bang and the factual evidence for recession, and want to preach false “tired light” nonsense (against the facts) for pseudoscientific, metaphysical personal reasons. It’s very weird how orthodoxy is so helpful in experimental particle physics, but so unhelpful in other areas.


