
Some notes on the multi-muon analysis – part IV February 2, 2009

Posted by dorigo in news, physics, science.

In this post -the fourth of a series (previous parts: part I, part II, and part III)- I wish to discuss a couple of attributes possessed by the “ghost” events unearthed by the CDF multi-muon analysis. A few months have passed since the publication of the CDF preprint describing that result, so I think it is useful to give a short summary below, recalling in a nutshell what the signal we are discussing is and how it came about.

Let me first of all remind you that “ghost events” are an unknown background component of the sample of dimuon events collected by CDF. This background can be defined as an excess of events where one or both muons fail a standard selection criterion based on the pattern of hits left by the muons in the innermost layers of the silicon tracker, SVX. I feel I need to open a parenthesis here, to allow those of you who are unfamiliar with the detection of charged tracks to follow the discussion.

Two words on tracks and their quality

The silicon tracker of CDF, SVX, is made up of seven concentric cylinders of solid-state sensors (see figure on the right: SVX in Run II consists of the innermost L00 layer in red, plus four blue SVX II layers, plus two ISL layers; also shown are the two innermost Run I SVX’ layers, in hatched green), surrounding the beam line. When electrically charged particles created in a proton-antiproton collision travel out of the interaction region lying at the center, they cross those sensors in succession, leaving in each a localized ionization signal -a “hit”.

CDF does not strictly need silicon hits to track charged particles, since outside of the silicon detector lies a gas tracker called COT (for Central Outer Tracker), capable of acquiring up to 96 additional independent position measurements of the ionization trail; however, silicon hits are a hundred times more precise than COT ones, so that one can define two different categories of tracks: COT-only, and SVX tracks. Only the latter are used for lifetime measurements of long-lived particles such as B hadrons, since those particles travel at most a few millimeters away from the primary interaction point before disintegrating: their decay products, if tracked with the silicon, allow the decay point to be determined.

Typically, CDF loosely requires an SVX track to have three or more hits; however, a tighter selection can be made which requires four or more hits, additionally enforcing that two of those belong to the two innermost silicon layers. These tight SVX tracks have considerably better spatial resolution on the point of origin of the track, since the two innermost hits “zoom in” on it very effectively.
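To make the two selections concrete, here is a minimal sketch in Python; the hit-pattern representation (a set of layer indices, with 0 and 1 labeling the two innermost layers) is my own illustration, not CDF code.

```python
# Illustrative sketch of the loose vs. tight SVX track selections
# described above. Layer indices are hypothetical: 0 and 1 label the
# two innermost silicon layers, counting outward from the beam line.

def passes_loose_svx(hit_layers):
    """Loose selection: three or more silicon hits."""
    return len(hit_layers) >= 3

def passes_tight_svx(hit_layers, innermost=(0, 1)):
    """Tight selection: four or more hits, two of which must lie
    on the two innermost silicon layers."""
    return len(hit_layers) >= 4 and all(l in hit_layers for l in innermost)

# A track missing the innermost layers -- as a ghost-event muon would,
# if produced outside them -- passes the loose cut but fails the tight one.
print(passes_loose_svx({2, 3, 4, 5}))   # True
print(passes_tight_svx({2, 3, 4, 5}))   # False
print(passes_tight_svx({0, 1, 2, 3}))   # True
```

The second example is exactly the signature of a ghost muon: a perfectly good track that simply never touched the innermost layers.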

Back to ghosts: a reminder of their definition

Getting back to ghost events, the whole evidence of their presence is that one finds considerably more muon pairs failing the tight-SVX tracking selection than geometry and kinematics would normally imply in a homogeneous sample of data. Muons in ghost events systematically fail hitting the innermost silicon layers, just as if they were produced outside of it by the decay of a long-lived, neutral particle.

Because of its very nature -an excess of muon pairs failing the tight-SVX criteria- the “ghost sample” is obtained by a subtraction procedure: one takes the number T of events with a pair of tight-SVX muons, divides it by the geometrical and kinematical efficiency \epsilon with which muons from the various known sources pass the tight-SVX cuts, and obtains a number E, which, subtracted from the number O of observed dimuon pairs, allows one to spot the excess G, as follows: G = O-E = O-T/\epsilon.
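As a toy illustration of the subtraction, here is the formula in code. Only the 743,000-event total, the 24% tight-SVX efficiency quoted later in this post, and the roughly 72,000-event excess come from the text; the value of T below is back-computed for illustration and is not the actual CDF event count.

```python
# G = O - E = O - T/epsilon, as defined above. The value of T is
# illustrative, chosen to reproduce the numbers quoted in the text.

def ghost_excess(observed, tight, efficiency):
    """Excess G of observed dimuon events over the expectation E
    obtained by scaling up the tight-SVX count T by 1/efficiency."""
    expected = tight / efficiency
    return observed - expected

G = ghost_excess(observed=743_000, tight=161_000, efficiency=0.24)
print(round(G))   # about 72,000 ghost events
```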

Mind you, we are not talking of a small excess here: if you have been around this blog for long enough, you are probably accustomed to the frequent phenomenon of particle physicists getting hyped up for 10-event excesses. Not this time: the number of ghost muon events exceeds 70,000, and the nature of this contribution is clearly of systematic origin. It may be a background unaccounted for by the subtraction procedure, or a signal involving muons that are created outside of the innermost silicon layers.

In the former three installments of this multi-threaded post I have discussed in some detail the significant sources of reconstructed muons which may contribute to the ghost sample and be unaccounted for by the subtraction procedure: muons from decays in flight of kaons and pions, fake muon tracks due to hadrons punching through the calorimeter, and secondary nuclear interactions. Today I will rather assume that the excess of dimuon events constitutes a class of its own, different from those mundane sources, and proceed to discuss a couple of additional characteristics that make these events really peculiar.

The number of muons

In the first part of this series I discussed in detail how the excess of ghost events contains muons which have abnormally large impact parameters. The impact parameter -the distance of the track from the proton-antiproton collision point, as shown by the graph on the right- is a measure of the lifetime of the body which decays into the muons, and the observation of large impact parameters in ghost events is the real alarm bell, demanding that one really try to figure out what is going on in the data. However, once that anomaly is acknowledged, surprises are not over.

The second observation that makes one jump out of one’s chair occurs when one simply counts the number of additional muon candidates found accompanying the duo which triggered the event collection in the first place. In the sample of 743,000 events with no SVX hit requirements on the two triggering muons, 72,000 events are found to contain at least a third muon track. 10% is a large number! By comparison, only 0.9% of the well-identified \Upsilon(1S) \to \mu \mu decays contained in the sample are found to contain additional muons besides the decay pair. However, since the production of \Upsilon particles is a quite peculiar process, this observation need not worry us yet: those events are typically very clean, with the b\bar b meson accompanied by a relatively small energy release. In particle physics jargon, we say that \Upsilon mesons have a soft P_T spectrum: they are produced almost at rest in most cases. There are thus few particles recoiling against them -and so, few muons too.

Now, the 10% quoted above is not an accurate estimate of the fraction of ghost events containing additional muons, since it is extracted from the total sample -the 743,000 events. The subtraction procedure described above allows one to estimate the fraction in the ghost sample alone: this is actually larger, 15.8%, because all other sources contribute fewer multi-muon events: only 8.3%. These fractions of course include both real and fake muons: in the following I try to describe how one can better size up those contributions.

Fake muons

A detailed account of the number of additional muons in the data, and of the sources that may be originating them, can be attempted using a complete Monte Carlo simulation of all processes contributing to the sample, applying some corrections where needed. As a matter of fact, a detailed accounting of all the physical processes produced in proton-antiproton collisions is rather an overkill, because events with three or more muon candidates are a rare merchandise, and they can be produced by few processes: basically the only sizable contributions come from sequential heavy flavor decays and fake muon sources. Let us discuss these two possibilities in turn.

Real muon pairs of small invariant mass, recoiling against a third muon, are usually the result of sequential decays of B-hadrons, like in the process B \to \mu \nu D \to \mu \nu X (see picture on the left, where the line of the decaying quark is shown emitting sequentially two lepton pairs in the weak decays). The two muons from such a chain decay cannot have a combined mass larger than 5 GeV, which is (roughly speaking) the mass of the originating B hadron. In fact, by enforcing that very requirement (M_{\mu \mu} >5 GeV) on the two muons at trigger level, CDF enriches the collected dataset of events where two independent heavy-flavor hadrons (B or D mesons, for instance) are produced at a sizable angle from each other. A sample event picture is shown below in a transverse section of the CDF detector. Muon detection systems are shown in green, and in red are shown the track segments of two muons firing the high-mass dimuon trigger.

(You might well ask: why does CDF require a high mass for muon pairs? Because the measurements that can be extracted from such a “high-mass” sample are more interesting than those granted by events with two muons produced close in angle, events which are in any case likely to be collected into different datasets, such as the one triggered by a single muon with a larger transverse momentum threshold. But that is a detail, so let’s go back to ghost muons now.)

When there are three real muons, one thus most likely has a b \bar b pair, with one of the quarks producing a double semileptonic decay (two muons of small mass and angle), and the other producing a single semileptonic decay (with this third muon making a large mass with one of the other two): for instance, B \bar B \to (\mu^- \bar \nu X) (\mu^+ \nu D) \to (\mu^- \bar \nu X)(\mu^+ \nu \mu^- \bar \nu Y), in the case of two B mesons; in the decay chain above, X and Y denote a generic hadronic state, while D is a hadron containing an anti-charm quark. B hadron decays can produce three muons also when one of them decays to a J/\Psi meson, which in turn decays to a muon pair. Other heavy flavor decays, like those involving a c \bar c pair, can at most produce a pair of muons, and the third one must then be a fake one.

The HERWIG Monte Carlo program, which simulates all QCD processes, does make a good guess of the production cross sections of b-quark pairs and c-quark pairs produced in proton-antiproton collisions, in order to simulate all processes with equanimity; but those numbers are not accurate. One improves things by normalizing the simulated events containing those production processes to the b \bar b and c \bar c cross sections measured with the tight-SVX sample, the subset devoid of the ghost contribution.

The CDF analysis then proceeds by estimating the number of events where at least one muon track is in reality a hadron which punched through the detector. The simulation can be trusted to reproduce the number of hadrons and their momentum spectrum, but the phenomenon of punch-through is unknown to it! To include it, a parametrization of the punch-through probability is obtained from a large sample of D \to K \pi decays, collected by the Silicon Vertex Tracker, a wonderful device capable of triggering on the impact parameter of tracks. The D meson lives long enough that the kaon and pion tracks it produces have sizable impact parameter, and millions of such events have been collected by CDF in Run II.

The extraction of the probability is quite simple: take the kaon tracks from D decays, and find the fraction of these tracks that are considered muon candidates, thanks to muon chamber hits consistent with their trajectory. Then repeat the same with the pion tracks. The result is shown in the graphs below, separately for kaon and pion tracks; in them, the probability has been computed as a function of the track transverse momentum.
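The way such a measured probability gets applied to the simulation can be sketched as follows; the pT binning and the probability values are invented placeholders, not the actual CDF parametrization.

```python
# Hedged sketch: apply a per-track punch-through (fake muon) probability,
# parametrized in transverse momentum, to simulated hadron tracks.
# All numbers below are placeholders, not the measured CDF values.
import bisect

PT_BIN_EDGES   = [2.0, 4.0, 8.0, 15.0]          # bin upper edges, GeV
KAON_FAKE_PROB = [0.005, 0.010, 0.020, 0.030]   # invented probabilities

def fake_probability(pt):
    """Look up the fake-muon probability for a kaon track of given pT;
    tracks above the last edge are clamped into the last bin."""
    i = min(bisect.bisect_left(PT_BIN_EDGES, pt), len(KAON_FAKE_PROB) - 1)
    return KAON_FAKE_PROB[i]

def expected_fakes(track_pts):
    """Summing per-track probabilities gives the expected number of
    fake muons contributed by a set of simulated hadron tracks."""
    return sum(fake_probability(pt) for pt in track_pts)

print(round(expected_fakes([3.0, 5.0, 12.0]), 3))   # 0.06
```

Summed over millions of simulated hadron tracks, such per-track weights become the fake-muon prediction that enters the comparison with the data.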

Besides the above probabilities and the tuning of the b \bar b cross section, a number of other details are needed to produce a best-guess prediction of the number of multi-muon events with the HERWIG Monte Carlo simulation. However, once all is said and done, one can verify that there indeed is an excess in the data. This excess appears entirely in the ghost muon sample, while the tight-SVX sample is completely free of it. Its size is again very large, and its source is thus systematic -no fluctuation can be hypothesized to have originated it.

The mass of muon pairs in multi-muon events

To summarize, what happens with ghost events is that if one searches for additional muon tracks around each of the triggering muons, one finds them at a rate much higher than what one observes in the tight-SVX dimuon sample. It is as if a congregation of muons is occurring! The standard model cannot even come close to explaining how events with so many muons can be produced. The source of ghost events is thus really mysterious.

Now, if you give to a particle physicist the momenta and energies P_x, P_y, P_z, E of two particles produced together in a mysterious process, there is no question about what is going to happen: next thing you know, he will produce a number, m^2=(\Sigma E)^2-(\Sigma P_x)^2 -(\Sigma P_y)^2 - (\Sigma P_z)^2. m is the invariant mass of the two-particle system: if they are the sole products of a decay process, m is an unbiased measurement of the mass M_X of the parent body. If, instead, the two particles are only part of the final state, m will be smaller than M_X; still, a distribution of the quantity m for several decays will say a lot about the parent particle X.
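In code, the invariant mass is a one-liner. The toy four-vectors below describe a back-to-back muon pair at rest (muon mass rounded to 0.106 GeV), purely for illustration; they are not analysis data.

```python
import math

MUON_MASS = 0.106  # GeV, rounded

def invariant_mass(p1, p2):
    """p1, p2 are (E, px, py, pz) four-vectors in GeV.
    Returns m = sqrt((sum E)^2 - |sum p|^2) of the pair."""
    E  = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two muons, each with E = 1.55 GeV, emitted back to back along x:
# the momenta cancel, so the pair mass is just the summed energy,
# 3.1 GeV -- the mass of a J/psi decaying at rest.
p = math.sqrt(1.55**2 - MUON_MASS**2)
m = invariant_mass((1.55, p, 0.0, 0.0), (1.55, -p, 0.0, 0.0))
print(round(m, 3))   # 3.1
```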

Given the above, it is not a surprise that the next step in the analysis, once triggering muons in ghost events are found to be accompanied by additional muons at an abnormal rate, is to plot the invariant mass of those two-muon combinations.

There is, however, an even stronger motivation for doing that: an anomalous mass distribution of lepton pairs (electron-muon pairs in that case, not dimuons -I will come back to this detail later) had been observed by the same authors in Run I. That excess of dilepton pairs was smaller numerically -the dataset from which it had been extracted corresponded to an integrated luminosity 20 times smaller- but had been extracted with quite different means, from a different trigger, and with a considerably different detector (the tracking of CDF was entirely changed in Run II). The low-mass excess of dilepton pairs remained an unexplained feature, calling for more investigation which had to wait a few years to be performed. The mass distribution of electron-muon combinations found by CDF in Run I is shown in the graph on the right: the excess of data (the blue points) over known background sources (the yellow histogram) appears at very low mass.

In Run II, not only does CDF have 20 times more data (well, sixty times as much by now, but the dataset on which this analysis was performed was frozen one and a half years ago, thus missing the data collected and processed after that date): we also have more tools at our disposal. The mass distribution of muon pairs close in angle, belonging to ghost events with three or more muon candidates, can be compared with the tuned HERWIG simulation both for the ghost event sample and for the tight-SVX sample: this makes for a wonderful cross-check that the simulation can be trusted to produce a sound estimate of that distribution!

The invariant mass distribution of muon pairs close in angle in tight-SVX events with three or more muon tracks is shown on the left. The experimental data is shown with full black dots, while the Monte Carlo simulation prediction is shown with empty ones. The shape and size of the two distributions match well, implying that the Monte Carlo is properly normalized. Indeed, the tight-SVX sample is the one used for the measurements of the b \bar b and c \bar c cross sections: once the Monte Carlo is tuned to the values extracted from the data, its overall normalization could mismatch the data only if the fake-muon sources were grossly mistaken. That is not the case; further, one observes that the number of J/\Psi \to \mu \mu decays -which all end up in one bin of the histogram, at 3.1 GeV of mass- is perfectly well predicted by the simulation: again, not a surprise, since those mesons can make it to a three-muon dataset virtually only if they originate from B hadron decays. So, the check in tight-SVX events fortifies our trust in our tuned Monte Carlo tool.

Now, let us look at how things are going in the ghost muon sample (see graph on the right). Here, we observe more data at low invariant mass than what the Monte Carlo predicts: there is a clear excess for masses below 2.5 GeV. This excess has the same shape as the one observed in Run I in electron-muon combinations!

Please take a moment to record this: in CDF, some of the collaborators who objected to the publication of the multi-muon analysis did so because they insisted that more studies should be made to confirm or disprove the effect. One of the objections was that the electron-muon sample had not been studied yet. The rationale is that if the ghost events are due to a real physical process, then the same process should show up in electron-muon combinations; otherwise, one is hard-pressed to avoid having to put into question a thing called lepton universality, which -at least for Standard Model processes- is a really hard thing to do. However, the electron signature in CDF is very difficult to handle, particularly at low energy: backgrounds are much harder to pinpoint than for muons. Such a study is ongoing, but it might take a long time to complete. Run I, instead, is there for us: and there, the same excess was indeed present in electron events too!

Finally, there is one additional point to mention: a small, but crucial one. The J/\Psi signal is in perfect agreement with the simulation prediction! This observation confirms that the tuned cross section of b \bar b production is right dead-on. Whatever these ghost events are, they sure cannot be coming from B production. Also, note that the agreement of the J/\Psi signal with Monte Carlo expectations constitutes proof that the efficiency of the tight-SVX requirements -the 24% number used to extract the numerical excess of ghost events- is correct. Everything points to a mysterious contribution which is absent in the Monte Carlo.

The above observations conclude this part of the discussion. In the next installment, I will try to discuss the additional oddities of ghost events -in particular, the rate of muons exceeding the triggering pair is actually four times higher than in QCD events. I will then examine some tentative interpretations that have been put forth in the course of the three months that have passed since the publication.

Comments

1. carlbrannen - February 3, 2009

Getting back to the DELPHI observation of muon multiplicities of cosmic rays, I thought it would be useful to type up a quick description:

DELPHI is a lepton, photon, and hadron detector at CERN. They usually detect particles from the Large Electron Positron (LEP) collisions, but they decided to grab cosmic ray data with it. The result is Study of multi-muon bundles in cosmic ray showers detected with the DELPHI detector at LEP DELPHI collaboration.

They found: “The resulting muon multiplicity distribution is compared with the prediction of the Monte Carlo simulation based on CORSIKA/QGSJET01. The model fails to describe the abundance of high multiplicity events.”

They’re deep enough underground that they only see muons > 52 GeV/c. This means primary particles of around 10^{14} eV. And cosmic rays get more rare as energy increases. Looking up their total observation time, they should see stuff up to around 10^{18} eV. The point is that these are relatively small energies; the highest energy cosmic ray ever seen was around 3.2\times 10^{20} eV, at least according to Wikipedia.

The paper is about the fact that the multiplicities aren’t following models. Their models are failing at very large multiplicities. The largest multiplicity they saw was something around 175 muons.

It’s easy to imagine that their multiplicity errors really begin happening at 2 or 3 muons, but they can’t see ghosts because they don’t have a nice silicon tracker etc., helping them to sort by offsets or transverse momentum.

The reason I’m pointing this out is because it gets back to the Centauro and anti-Centauro events seen in the 1980s. These were seen in cosmic ray detectors roughly the size of DELPHI, that used plates of emulsion. One of the reasons their results were largely ignored was because the energies of the cosmic rays they were looking at were roughly on the order of the DELPHI limit (and also due to limited area and time), and with those relatively low energies, people expected that particle accelerators should be seeing something.

You can find these articles by googling Centauro on arXiv.org. One thing of interest here is the “deeply penetrating” cosmic rays. These are highly energetic particles that have less than the usual probability of interacting. But when they do, they make a huge shower. Maybe these could be the long lived unknown neutral.

There are a lot of crazy ideas out there to explain Centauros. One is that they were evident in pp but not p-pbar collisions. A favorite is “fluctuations in cross sections”. This idea says that the measured cross section of a hadron is an average. Sometimes it’s higher, sometimes lower. Then there are strangelets, vacuum condensation or quark gluon plasma, and mini black holes. My favorite article for describing the various things that have been seen is hep-ph/0111163.

2. dorigo - February 3, 2009

Thank you Carl, your comment is almost a post on cosmic ray anomalies! I wanted to blog about this myself, now you sort of stole the idea from me 😉

Cheers,
T.

3. Andrea Giammanco - February 3, 2009

Very good post, maybe the best of the series so far.
Just one small comment: in principle, a lack of confirmation in the electron channels would not kill the hypothesis of new physics (after all, why should we assume that an unknown particle has to decay with equal lepton ratios? The Higgs, just to name one, has unequal ratios in the SM), but just an interpretation by the same authors in which the next-to-last step of the decay chain is always or mostly composed of taus.
On the other hand it is true that finding the same excess in the electron channels makes things more thrilling…

4. dorigo - February 3, 2009

Yes Andrea, quite right. One might imagine a number of ways by means of which lepton universality in the decay of a new object is violated. Psychologically, though, finding zero excess of this kind in electron-muon combinations would be a serious blow to anybody who wants to believe in an exotic explanation.

Cheers,
T.

5. That crazy leptonic sector: multi-muon model-making « High Energy PhDs - February 16, 2009

[…] on the multi-muon anomaly is still Tommaso’s set of notes: part 0, part 1, part 2, part 3, part 4. An excellent theory-side discussion can be found at […]

6. carlbrannen - February 17, 2009

I’ve had the flu this past week, does “influenza” translate directly into Italian? I bet not.

Marni and I have been working on mixing angle matrices. We’ve done very nicely with the MNS matrix, but when I try to use the same methods on the CKM matrix I have not been able to get a unitary result. It’s like the “bt” entry should be larger in magnitude than the others. While ill, it occurred to me that CKM matrix entries would be rather difficult to measure absolutely; it would be a lot easier to get branching ratio information and specify CKM entries as multiples of other entries. So here’s a proposal for the explanation of the muon problem.

Suppose that the CKM matrix is a matrix of coupling constants rather than mixing angles. That is not such a huge change to the standard model. And that the bt entry exceeds unitarity, maybe by a factor of 2 or so. This will make it easier to get heavy quarks to weak decay and so make muons. So if you made a t and t-bar quark, these could decay to b and b-bar quarks, and these decay into u and u-bar quarks all weakly, thus making as many as 6 muons.

On another not necessarily unrelated tack, I wonder if the neutral particle that explains the offset muons could be tau tau-bar. This might be unlikely to produce unless you messed with the CKM matrix.

Anyway, the thoughts of a fevered mind. I’m still waiting to hear back from Phys Math Central.

7. dorigo - February 17, 2009

Hi Carl,

influenza is correct Italian.

As for the CKM matrix, your suggestion does not make much sense to me. The V_{ij} elements get multiplied by the weak coupling constant in every process; what you propose (scaling up one element to make it exceed one, for instance) is equivalent to increasing the common coupling and rescaling all matrix elements. But that is ruled out by measurements of the coupling.

In other words, forget the matrix for a second: then you only have an up-type and a down-type quark. They participate in charged weak processes without a CKM matrix element (i.e. it is 1.0), and all there is is a weak coupling in the vector current.

Also, note that top-antitop pairs already decay to bottom-antibottom pairs 999/1000 of the time. What is rare is the b->u transition, but that has very little to do with the branching fraction to muons. The muon branching fraction is governed by essentially two things: the availability of fermions to decay into (or, if you will, the universality of the weak interaction), and the phase space. It is roughly 10% because there are nine possible fermion pairs for a weak CC process to occur: the CKM element plays no role, because whatever quark the b decays into, it is the weak boson which will decide how to decay, i.e. whether to give a muon-neutrino pair or a hadron.

Cheers,
T.

8. carlbrannen - February 18, 2009

“is equivalent to increasing the common coupling and rescaling all matrix elements.”

This is not true. Given a set of nine arbitrary coupling constants, it is not generally possible to choose a number for a common coupling constant and define the nine coupling constants as mixing matrix elements.

For example, if you take a coupling constant and mixing matrix, and from this define nine “independent coupling constants”, one finds that the nine coupling constants are “magic”, that is, the sum over a row or column is the same (uh in squared magnitude). An arbitrary set of nine coupling constants doesn’t have this symmetry property. And the PDG is rather clear on how the CKM matrix elements are found experimentally, there really isn’t much information on the bt element.

Thanks for reminding me that the top quark automatically decays into bottom. When I was writing the above, I was somehow thinking that they would decay in some strong interaction, but clearly there isn’t anything available.

In other news, I’ve been enjoying “One Hundred Years of Solitude” by Gabriel García Márquez. This reminds me just a little of “The Island of the Day Before”.

9. dorigo - February 19, 2009

Hi Carl,

you were speaking of increasing one of the elements, not changing all of them…

About Vtb: experimentally it can only be measured in top decay; however, it can be constrained by the other measurements if you assume unitarity. I think the Tevatron has a measurement which is more like a C.L. lower limit. I can dig it out for you if you wish.

Cheers,
T.

10. DZERO refutes CDF’s multimuon signal… Or does it ? « A Quantum Diaries Survivor - March 17, 2009

[…] the interaction point before disintegrating. More information about the whole issue can be found in this series of posts, or by just clicking the “anomalous muons” tab in the column on the right of this […]

11. Kaki Makaki - May 18, 2009

The connection with the cosmic ray Centauro events is not necessary here.
It is better not to forget about the “final nails” in the experimental Centauro events’ “coffin”:
http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PRVDAQ000073000008082001000001&idtype=cvips&gifs=yes

hep-ph/0111163 and others did not recognize some crucial experimental details which were overlooked in the original cosmic ray Centauro events study.
Wrong description and poor detection accuracy of the cosmic ray Centauros are not necessary to stress the importance of multi muon CDF signal.

dorigo - May 18, 2009

Hi,
thank you for your comment. Please note that this blog has moved to http://www.scientificblogging.com .
Cheers,
T
