## Calorimeters for High-Energy Physics – part 2 April 11, 2008

Posted by dorigo in physics, science.

In the first part of this long post I discussed some generalities of calorimeters, which are among the most important components of modern detectors for high-energy particle physics experiments. To analyze in some detail the way calorimeters work it is now useful to distinguish electromagnetic from hadronic ones. The former accurately measure the energy of photons and electrons, while the latter target particles interacting strongly with nuclear matter: mostly protons, neutrons, pions, and kaons.

Electromagnetic calorimeters

Electromagnetic calorimeters aim at measuring electrons and photons with energy above a few hundred MeV, where these particles lose energy mostly by bremsstrahlung and pair production, respectively.

Pair production is the process whereby an energetic photon “materializes” into a particle-antiparticle pair. It naturally occurs if the photon passes close to a nucleus (the heavier the better) which can “absorb” the recoil momentum that a $\gamma \to e^+ e^-$ conversion necessarily generates. Of course pair production can only happen if the photon energy exceeds the total rest energy of the produced pair; moreover, although the production of any energetically allowed fermion-antifermion pair has a non-zero chance to occur, the only experimentally important process is electron-positron production.

Bremsstrahlung (braking radiation) is instead the “inverse” process: an energetic electron emits a photon, also yielding some fraction of its momentum to a spectator nucleus. The effect is a “braking” of the electron, and the emission of an energetic photon.

The cross section of these processes – or, if you prefer, the probability that they happen – grows roughly with the square of the atomic number Z (the number of protons) of the traversed material. The quantity which characterizes these phenomena is called radiation length and is universally labeled $X_0$. The radiation length is the thickness of material (in centimeters, or more usefully in grams per square centimeter – you can switch from one unit to the other using the material density in grams per cubic centimeter) over which an electron loses, on average, a fraction $P=1-1/e$ (roughly 63%) of its energy by radiating photons. For photons, $\frac{9}{7} X_0$ can be considered an attenuation length, because they “disappear”: if $I_0$ and $I(x)$ are the initial intensity and the intensity beyond a thickness $x$, then $I(x)=I_0 \, e^{-7x/(9 X_0)}$, which means that a thickness $x=\frac{9}{7} X_0$ converts a fraction $1-1/e$ of the photons, leaving only $1/e \simeq 37\%$ in the initial beam.
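The attenuation formula above can be checked with a few lines of Python (a minimal sketch; the function name is made up for illustration):

```python
import math

def photon_survival(x, X0):
    """Fraction of photons still unconverted after a thickness x of
    material with radiation length X0 (same units as x), using the
    attenuation length (9/7)*X0 quoted above."""
    return math.exp(-7.0 * x / (9.0 * X0))

# After one attenuation length, x = (9/7) X0, about 1/e ~ 37% of the
# photons survive, i.e. ~63% have converted into e+e- pairs.
print(round(photon_survival(9.0 / 7.0, 1.0), 3))  # 0.368
```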

One radiation length corresponds to about 300 meters in air, 9 cm in aluminum, and only 5.6 mm in lead (a useful formula for quick estimates is $X_0 \simeq 180 A/Z^2$, again in grams per square centimeter; it is accurate to better than 20% for Z>13). Lead and other heavy materials allow the construction of compact calorimeters; for instance, a common design is that of thin lead sheets alternated with sensitive material like sheets of plastic scintillator. Another possibility is to use blocks of lead glass, where light is obtained by the Cherenkov effect.
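To see how good the $180 A/Z^2$ rule of thumb is, here is a quick check in Python against the numbers quoted above (the densities are the standard values for lead and aluminum, not taken from the post):

```python
def x0_approx(A, Z):
    """Rough radiation length in g/cm^2: X0 ~ 180*A/Z^2,
    accurate to better than ~20% for Z > 13."""
    return 180.0 * A / Z**2

# Divide by the density (g/cm^3) to go from g/cm^2 to cm.
x0_pb_cm = x0_approx(207, 82) / 11.35  # lead: ~0.49 cm, vs the quoted 5.6 mm
x0_al_cm = x0_approx(27, 13) / 2.70    # aluminum: ~10.7 cm, vs the quoted 9 cm
print(f"X0(Pb) ~ {x0_pb_cm * 10:.1f} mm, X0(Al) ~ {x0_al_cm:.1f} cm")
```

Both come out within the stated 20% of the tabulated values.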

Another small parenthesis: Cherenkov radiation, discovered in the 1930s, occurs when a charged particle travels in a medium faster than light does in that medium, creating a “shock wave” in the form of photons radiated at an angle that depends on the particle speed [were you aware that light travels at a speed slower than $c=3 \times 10^8 \, m/s$ in transparent media? Its speed is indeed $v=c/n$, where $n>1$ is the refraction index of the material. This phenomenon is at the basis of light refraction]. The radiation has a spectrum peaking in the near ultraviolet. In the picture on the right you see a particle path as a horizontal line; in the time $t$ it takes to travel along the segment of length $\beta ct$ (with $\beta=v/c$ the ratio between the particle speed and the speed of light in vacuum), light only travels a distance $(c/n)t$, creating a coherent emission front at an angle $\theta$ such that $\cos \theta = 1/(n \beta)$.
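The emission-angle formula translates directly into code; a minimal sketch (the refraction index $n=1.65$ is an assumed, typical value for lead glass):

```python
import math

def cherenkov_angle_deg(beta, n):
    """Cherenkov emission angle in degrees, from cos(theta) = 1/(n*beta).
    Radiation only occurs above threshold, i.e. for beta > 1/n."""
    if beta * n <= 1.0:
        return None  # below threshold: no Cherenkov light
    return math.degrees(math.acos(1.0 / (n * beta)))

# An ultrarelativistic particle (beta ~ 1) in lead glass (n ~ 1.65, assumed):
print(cherenkov_angle_deg(1.0, 1.65))   # ~52.7 degrees
# A slow particle is below threshold and emits nothing:
print(cherenkov_angle_deg(0.5, 1.65))   # None
```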

Because $X_0$ does not depend on the energy of the incoming photon or electron, it is possible to estimate as a function of that energy the total thickness of material needed to completely absorb an electromagnetic shower: the number of produced particles roughly doubles with each additional thickness $X_0$ of traversed material, until the energy of the individual particles falls below a critical value, under which the multiplication stops.

Below the critical energy, electrons lose energy predominantly in collisions with atomic electrons (ionization, with Møller scattering as its elastic counterpart), while for photons at about the same energy the Compton effect starts dominating – the process whereby a photon yields a fraction of its momentum to an atomic electron and becomes softer. With some simple math one then finds that, for an initial energy of about 100 GeV, 20 radiation lengths are sufficient to absorb about 98% of the energy: in practical terms that means no more than 40-50 cm of thickness for sampling devices such as the frequently used lead-scintillator sandwich.
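The doubling argument above can be made concrete with a toy model (a sketch; the 10 MeV critical energy is an assumed typical order of magnitude, not a value from the post):

```python
import math

def shower_max_depth(E_GeV, Ec_GeV=0.010):
    """Depth, in radiation lengths, at which multiplication stops in the
    naive model where the particle count doubles every X0: after t steps
    each particle carries E/2^t, and splitting ends when this falls below
    the critical energy Ec (here ~10 MeV, an assumed typical value)."""
    return math.log(E_GeV / Ec_GeV) / math.log(2.0)

# A 100 GeV shower multiplies for ~13 radiation lengths; containing ~98%
# of the energy needs several more X0 for the tail, hence the ~20 X0
# quoted above.
print(round(shower_max_depth(100.0), 1))  # 13.3
```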

Transverse containment can also be parametrized through the quantity $X_0$. In this case, the widening of showers (principally due to the emission of bremsstrahlung photons not collinear with the incoming electron – the angle being proportional to the fractional momentum loss – and to Coulomb deflections only in the later stages of showering) scales with a quantity called Molière radius, $\rho_M \simeq 7 A/Z$ (where A is the atomic weight and Z the atomic number of the material, and the resulting units are grams per square centimeter). The Molière radius characterizes the typical lateral deflection of electrons traversing one radiation length of material.

The energy resolution of electromagnetic calorimeters depends mostly on the stochastic nature of the processes of energy deposition: since on average the total number of particles produced in a shower grows linearly with the energy of the original body, Poisson statistics tells us that the relative energy resolution must improve as the inverse square root of the energy, $\sigma /E = k/ \sqrt {E}$, if one neglects systematic effects due to the loss of part of the energy (longitudinally or transversely), to non-linearities in the response of the active medium, and a multitude of other small nuisances.

Poisson statistics determines the distribution of counts for random processes which have integer outcomes, such as the number of tracks in a shower, as a function of the expected number, the average. The distribution has a width equal to the square root of the average, so that the typical error that can be assigned to a number of counts N is $\sqrt {N}$.
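The key point – that the relative Poisson fluctuation shrinks as the count grows – is a one-liner:

```python
import math

def relative_fluctuation(N):
    """For a Poisson-distributed count with mean N, the spread is sqrt(N),
    so the relative fluctuation is sqrt(N)/N = 1/sqrt(N)."""
    return math.sqrt(N) / N

# Quadrupling the number of shower particles halves the relative spread:
for N in (100, 400, 1600):
    print(N, relative_fluctuation(N))  # 0.1, 0.05, 0.025
```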

For the constant $k$, values around 10% to 20% are common in sampling calorimeters. In CDF the central electromagnetic calorimeter is the classic lead-scintillator sandwich, and the resolution is $\sigma_E = 0.135 \sqrt {E} \oplus 0.02E$ (with E in GeV, and $\oplus$ denoting addition in quadrature). For a $Z \to ee$ decay this results in a mass resolution of about $\sigma_M \simeq \sqrt {2} \sigma_E \simeq 2 \, GeV$.
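Plugging the ~45 GeV electrons of a $Z \to ee$ decay into this resolution function reproduces the quoted ~2 GeV mass resolution (a sketch, with $\oplus$ implemented as addition in quadrature):

```python
import math

def sigma_E_cdf(E_GeV):
    """CDF central EM calorimeter resolution (in GeV): the stochastic term
    0.135*sqrt(E) added in quadrature with the 2% constant term 0.02*E."""
    return math.hypot(0.135 * math.sqrt(E_GeV), 0.02 * E_GeV)

# Each electron of a Z -> ee decay carries about half the Z mass (91.2 GeV):
E = 91.2 / 2.0
sigma_e = sigma_E_cdf(E)            # ~1.3 GeV per electron
sigma_m = math.sqrt(2.0) * sigma_e  # ~1.8 GeV on the reconstructed mass
print(round(sigma_e, 2), round(sigma_m, 2))
```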

Hadronic calorimeters

In hadronic calorimeters what is measured is instead the energy of hadronic showers produced by nuclear interactions of mesons and baryons with the nuclei of the absorber. The processes causing energy loss are in this case much more complex and harder to measure accurately, for at least three reasons:

1. the presence of nuclear excitation phenomena, which reduce in a non-trivial way the fraction of measurable energy, because of the emission of fast neutrons and protons or other non-radiative processes;
2. the decay in flight of pions and kaons into muons and neutrinos, which escape the calorimeter releasing little or no energy;
3. and finally, a sizable component of the secondary hadrons consists of neutral pions (a third of all produced pions) and other particles which immediately decay to photons, with consequent losses of linearity in the energy response: photons give a larger response than charged pions (see below).

The resolution which can be obtained is much worse than that of electromagnetic shower detectors: the value of $k$ ranges from 50% to 150% and above, depending on the quality of the active material. Moreover, the quantity corresponding to the radiation length $X_0$ is, for hadronic showers, the interaction length $\lambda$, which is much longer, due to the smaller cross section of nuclear interactions. This forces much larger longitudinal dimensions in order to contain hadronic showers. In iron, a material widely used as absorber in high-energy physics experiments, $\lambda = 17 \, cm$; in uranium, $\lambda = 12 \, cm$.

(Above, iron wedges of the CMS forward calorimeter).

The difference in response to electrons (or photons) and pions in hadronic calorimeters typically amounts to 30-40% and is mostly due to nuclear excitation phenomena caused by pions. The response can be equalized in so-called compensating calorimeters, which usually employ U-238 as absorbing material: uranium yields back the lost energy “with interest”, in the form of nuclear fission. Detecting even a small part of the released energy makes it possible, through an accurate calibration and an optimized layering of uranium and scintillating material, to roughly halve the stochastic term, obtaining values of $k$ around 30-40%.

One of the unavoidable constraints of calorimeters is the need to fully contain the development of particle showers in their volume. A leakage of penetrating tracks out of the back of a calorimeter limits its resolution and worsens the measurement of the most energetic incoming particles. This suggests the use of very heavy materials, since, as shown above, $X_0$ and $\lambda$ shrink for dense, high-Z absorbers.

A parameter of fundamental importance in the design of calorimeters is transverse segmentation. A finer segmentation with “towers” pointing back to the interaction vertex allows one to obtain a precise map of the energy deposition as a function of the polar coordinates $\theta, \phi$ of particles generated in the interaction point, which proves very important for the identification of hadronic jets.

As a matter of fact, the segmentation is usually designed to be uniform in pseudo-rapidity, the quantity $\eta = - \ln \tan (\theta/2)$. Pseudorapidity is a monotonic function of the polar angle $\theta$ between the direction of a detector element and the beam, as seen from the interaction vertex. Under Lorentz boosts along the beam axis it simply shifts by a constant (for massless particles), so that differences in $\eta$ are invariant; this makes jets show up as circular energy deposits in the calorimeter when mapped in the variables $(\phi, \eta)$.
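A quick numerical illustration of the $\eta$-$\theta$ mapping (a minimal sketch in Python):

```python
import math

def pseudorapidity(theta):
    """eta = -ln(tan(theta/2)), with theta the polar angle from the beam."""
    return -math.log(math.tan(theta / 2.0))

# Perpendicular to the beam (theta = 90 degrees) gives eta = 0; directions
# closer to the beam pipe give larger |eta|:
print(round(pseudorapidity(math.pi / 2.0), 6))       # 0.0
print(round(pseudorapidity(math.radians(10.0)), 2))  # 2.44
```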

A Lorentz boost is a transformation of coordinates satisfying special relativity, and must be used to study interactions whose center-of-momentum frame is moving in the detector frame of reference. In proton-proton collisions such as those produced by the LHC, or proton-antiproton collisions at the Tevatron, this is exactly what happens, because the originators of the hard collision are partons within the projectiles, each of which carries an unknown fraction of the (anti-)proton energy.

The typical radius of hadronic jets is about 0.7 units in $\eta-\phi$ space, but jets have a transverse extension that becomes smaller as energy increases. This is because the momentum of hadrons originated from parton fragmentation is on average equal to 300 MeV in the direction transverse to the jet axis and only weakly dependent on the originating parton energy, while the longitudinal component scales linearly with the parton energy. Because of this fact, the reconstruction radius of clustering algorithms that recognize hadronic jets from the calorimeter deposits has slowly shrunk over the years, following the increase of the energy of typical jets which the experiments strive to measure with accuracy.
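Cone algorithms cluster calorimeter towers by their distance in the $(\eta, \phi)$ plane; a minimal sketch of that distance (the wrap-around of the azimuthal angle is the only subtlety):

```python
import math

def delta_R(eta1, phi1, eta2, phi2):
    """Distance in the (eta, phi) plane used by cone clustering algorithms."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi  # wrap to (-pi, pi]
    return math.hypot(eta1 - eta2, dphi)

# Two towers belong to the same R = 0.4 cone only if their distance is
# below the cone radius:
print(delta_R(0.1, 0.2, 0.3, 0.5) < 0.4)        # True (dR ~ 0.36)
# The wrap-around matters near phi = +/- pi:
print(round(delta_R(0.0, 3.1, 0.0, -3.1), 3))   # 0.083, not 6.2
```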

Values of R around 0.4 are now commonly used by the Tevatron experiments in the reconstruction of heavy particle decays to hadronic jets, such as those of top quark pairs. With a cone of $R=0.4$ some of the jet energy is lost “out of cone”, slightly degrading the energy resolution because an average correction then becomes necessary; this is acceptable, though, because detecting all the energy deposited by the jet is not as critical a factor as the correct identification of jets traveling close together, a common feature of the high jet-multiplicity final states produced by top-antitop decays. (In the event display on the left, energy depositions in the $\eta-\phi$ plane are represented as red and blue bars describing electromagnetic and hadronic energy measurements. This is a candidate top pair decay to a tau, an electron, plus hadronic jets. Despite the leptonic decay of both W bosons, the event is still best reconstructed using a small radius for jet clustering.)

The future

If new detectors are ever built to explore an even higher energy regime than the one about to be probed by the LHC, calorimeters will be as necessary as they are today. The following characteristics will be desirable in a new-generation design:

• self-triggering (the ability of independent portions of the system to identify and measure a signal, interpreting it and sending an accept signal to the data acquisition system)
• stand-alone tracking (the ability of the calorimeter system to independently determine the direction of crossing particles)
• an integrated time-of-flight measurement (the capability to separate different particle signals based on the delay between their arrival time and the interaction time)
• high resolution and granularity (attainable with silicon technology)

The need for these fancy features, however, rests on the specific hunt we will decide to embark on. Which, in turn, critically depends on the discoveries that the Large Hadron Collider will produce!

1. carlbrannen - April 12, 2008

The forward calorimeter photo is so awesome that it made me draw in my breath. It would be nice to have a human figure for scale

2. Myke - April 12, 2008

The photo of the calorimeter’s ‘termination benches’ makes one curious about the interface. Can you please describe the interface?

3. dorigo - April 14, 2008

Hi Carl, I will see if I find another pic for you of those devices.

Hi Myke, no, I am afraid I cannot. I do not know the system well enough.

Cheers,
T.

4. nc - April 14, 2008

I take it that ‘radiation length’ is Italian physicists’ jargon for what everybody else calls either the mean free path, or relaxation length, of the radiation!

5. nc - April 14, 2008

OK, I think I get it – the radiation length is the mean free path for the emission of secondary radiation due to inelastic scatter or pair production, while the attenuation length refers to the disappearance of the source radiation due to absorption.

6. Plato - April 15, 2008

Dorigo,

With your understanding of the calorimeters, how has this affected your views on the universe?:)

7. Dark Matter searches at colliders - part I « A Quantum Diaries Survivor - April 23, 2008

[…] are measured in the detector elements called calorimeters (see a description in two parts here and here) by destroying the particles they contain, both charged and neutral ones, in electromagnetic and […]
