
Live video streaming of single top observation NOW March 10, 2009

Posted by dorigo in news, physics, science.
comments closed

You can follow it at this link.

Who discovered single top production ? March 5, 2009

Posted by dorigo in news, physics, science.
comments closed

Both CDF and DZERO have announced yesterday the first observation of electroweak production of single top quarks in proton-antiproton collisions. Both papers (this one from CDF, and this one from DZERO) claim theirs is the first observation of the long sought-after subatomic reaction. Who is right ? Who has more merit in this advancement in human knowledge of fundamental interactions ? Whose analysis is more credible ? Which of the two results has fewer blemishes ?

To me, it is always a matter of deciding which is the most relevant question. And to me, the most relevant question is, Who cares who did it ? ... with the easy-to-guess answer: not me. As I have had other occasions to say, I am for the advancement of Science, much less for the advancement of scientific careers, let alone the experiments to which those careers belong.

The top quark is interesting, but so far the Tevatron experiments had only studied it when produced in pairs with its antiparticle, through strong interactions. Electroweak production of the top quark is also possible in proton-antiproton collisions, at half the rate. It is one of those rare instances when the electroweak force competes with the strong one, and it is due to the large mass of the top quark: producing two is much more demanding than producing only one, due to the limited energy budget of the collisions. The reactions capable of producing a single top quark are described by the diagrams shown above. In a), a b-quark from one of the projectiles becomes a top by intervention of a weak vector boson; in b), a gluon “fuses” with a W boson and a top quark is created; in c), a W boson is produced off-mass-shell, and it possesses enough energy to decay into a top-bottom pair.

Since 1995, when CDF and DZERO published jointly the observation of the top quark, nobody has ever doubted that electroweak processes would produce single tops as well. Not even one article, to my knowledge, tried to speculate that the top might be so special as to have no weak couplings. The very few early attempts at casting doubt on the real nature of what the Tevatron experiments were producing died quickly as statistics improved and the characterization of the newfound quark was furthered. So what is the fuss about finding out that the reaction resulting from the Feynman diagrams shown above can indeed be directly observed ?

There are different facets to a thorough answer to the above question. First of all, competition between CDF and DZERO: each collaboration badly wanted to get there first, especially since this was correctly predicted from the outset to be a tough nut to crack. Second, seeing single top production implies having direct access to one element of the Cabibbo-Kobayashi-Maskawa mixing matrix, the element V_{tb}, which is after all a fundamental parameter of the standard model (well, to be precise it is a function of the fundamental CKM matrix parameters, but let’s not split hairs here). Third, you cannot really see a low-mass Higgs at the Tevatron if you have not measured single top production first, because single top is a background in Higgs boson searches, and one cannot really discover something by assuming something else is there, if one has not proven that beforehand.

So, single top observation is important after all. I am a member of the CDF collaboration, and I am really proud I belong to it, so my judgement on the whole issue might be biased. But if I have to answer the question that gave the title to this post, I will first give you a very short summary of  the results of the two analyses,  deferring to a better day a more detailed discussion. This will allow me to drive home a few points.

The two analyses: a face-to-face summary

  • Significance: both experiments claim that the signal they observe has a statistical significance of 5.0 standard deviations.
  1. CDF uses 3.2 inverse femtobarns, and finds a 5.0-sigma-significance signal of single top production. The sensitivity of the analysis is better measured by the expected significance, which is quoted at 5.9-sigma.
  2. DZERO uses 2.4 inverse femtobarns, and finds a 5.0-sigma-significance signal of single top production. The sensitivity of the DZERO analysis is quoted at 4.5-sigma.
  • Cross-section: both experiments measure a cross section in agreement with standard model expectations.
  1. CDF measures \sigma = 2.3^{+0.6}_{-0.5} pb, a relative uncertainty of about 24%.
  2. DZERO measures \sigma = 3.9 \pm 0.9 pb, a relative uncertainty of about 23%.
  • Measurements of the CKM matrix element: both experiments quote a direct determination of that quantity, which is very close to 1.0 in the SM, but cannot exceed unity.
  1. CDF finds |V_{tb}|=0.91 \pm 0.11, a 12% accuracy.
  2. DZERO finds |V_{tb}|=1.07 \pm 0.12, an 11% accuracy.
  • Data distributions: both experiments have a super-discriminant which combines the information from different searches. This is a graphical display of the power of the analysis, and should be examined with care.

1. CDF in its paper shows the distribution below, as well as the five inputs that were used to obtain it. The distribution shows the single-top contribution in red, stacked over the contributing backgrounds. At high values of the discriminant, the single top signal does stick out, and the black points -the data- follow the sum of all processes nicely.

2. DZERO in its paper has only the distribution shown below. I was underwhelmed when I saw it. Again, backgrounds are stacked one on top of the other, the top distribution is the one from single top (this time shown in blue), and the data is shown by black dots. It does not look like the data prefer the hypothesis of backgrounds+single top over the background-only one all that much!

Maybe I am too partisan to really make a credible point here, and since I did not follow in detail the development of these analyses -from their first publications as evidence for single top, to updates, until yesterday’s papers- I may very well be proven wrong; however, by looking at the two plots above, and by knowing that they both appear to provide a 5.0-sigma significance, I am drawn to the conclusion that DZERO believes their background shapes and normalization much better than CDF does!

Now, believing something is a good thing in almost all human activities except Science. And if two scientific collaborations have a very different way of looking at how well their backgrounds are modeled by Monte Carlo simulations (which, at least as far as the generation of subatomic processes is concerned, are -or can be- the same), which one deserves more praise: the one which trusts the simulations more in extracting its signal, or the one which relies less on them?

The above question is rhetorical, and you will probably already have agreed that a result which relies less on simulations is worth more. So let us look into this issue a bit further. CDF bases its result on a total sample of 4780 events, where the total uncertainty is estimated at ±533 events. DZERO bases its own on a sample of 4651 events, with a total uncertainty estimated at ±234 events! What drives such a large difference in the precision of these predictions ?

The culprit is one of the backgrounds, the production of W bosons in association with heavy flavor quarks – an annoying process, which enters every selection of top quarks and Higgs bosons at the Tevatron. CDF has it at 1855 events, with an uncertainty of 486 -or 26.2%; it is shown in green in the CDF plot above. DZERO has it at 2646 events, with an uncertainty of 173, or 6.5%; it is also shown in green in the DZERO plot. Do not be distracted by the different size of the W+heavy flavor contribution in the two datasets: different selection strategies make the numbers differ, and in any case it is the total number of events in the two analyses that is similar by pure chance. The point here is the uncertainty.
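For those who want to check the percentages, here is a minimal Python sketch reproducing the relative uncertainties from the event counts quoted above (nothing beyond the numbers in this post is assumed):

```python
# Event counts and uncertainties quoted in the text, recomputed as relative uncertainties.
samples = {
    "CDF total background":   (4780, 533),
    "DZERO total background": (4651, 234),
    "CDF W + heavy flavor":   (1855, 486),
    "DZERO W + heavy flavor": (2646, 173),
}

for name, (events, uncertainty) in samples.items():
    rel = 100.0 * uncertainty / events
    print(f"{name}: {events} +- {uncertainty} events -> {rel:.1f}% relative uncertainty")
# The W + heavy flavor lines come out at ~26.2% (CDF) and ~6.5% (DZERO), as quoted above.
```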

Luckily, the DZERO analysis does not appear to rely too much on the background normalization -this is not a simple counting experiment, where the better you know the size of expected backgrounds, the smaller your uncertainty on the signal; rather, the shapes of backgrounds are important, and the graphs above show that the data indeed appear well described by the discriminant shape. And of course, background shapes are checked in control samples, so both experiments have many tools to ensure that the different contributions are well understood. However, the issue remains: how much do the different estimates of the W plus heavy flavor uncertainty impact the significance of the measurements ? The DZERO paper mentions that one of their largest uncertainties arises from the modeling of the heavy flavor composition of W+jet events, but it does not provide further details.

I would be happy to receive an informed answer in the comments thread about the points I mention above…

First observation of single top production from CDF!!! March 5, 2009

Posted by dorigo in news, physics, science.
comments closed

The paper, submitted to PRL yesterday evening, is here.
I will discuss the details later today…

UPDATE: a reader points out that the above link was broken. Now fixed.

The 1999/2003 Higgs predictions compared with CDF 2009 results February 13, 2009

Posted by dorigo in news, personal, physics, science.
comments closed

Two years ago I used the combined Higgs search limits produced by the D0 experiment to evaluate how well the Tevatron was doing compared with the predictions that had been put together by the 1999 SUSY-HIGGS working group, and later by the 2003 Higgs Sensitivity Working Group (HSWG), two endeavours in which I had participated with enthusiasm. The picture that emerged was that, although results were falling short of fully justifying the early predictions, there was still hope that those would one day be vindicated.

Indeed, I remember that when in 2003 the HSWG produced its report, we felt our results were greeted with a dose of scepticism. And we ourselves were a bit embarrassed, because we knew we had been a bit optimistic in our predictions: however, that was the name of the game – looking at things on their bright side, for the sake of convincing funding agencies that the Tevatron had a reason to run for a long time. I felt a strong justification for optimism in the incredible results on the top quark mass that the Tevatron had already started achieving: early prospects of measuring the top mass to a 1% uncertainty have in fact been surpassed, thanks to the combination of the dedication of the scientists doing the analyses and their imagination in inventing new precise methods.

We now have a chance to look back at the 1999/2003 predictions for the Higgs reach of the Tevatron with a rather solid set of hard data: the CDF combination, which I briefly discussed two days ago, is based on analyzed sets of data ranging from 2 to 3 inverse femtobarns, and the comparisons do not require a lot of extrapolations to be carried out.

If we look at the 1999/2003 predictions shown above (two basically coincident results, if one considers that the 2003 results were not accounting for systematic effects, which would worsen a bit the curves of sensitivity and bring them to match the older ones), we can read off the integrated luminosity that the Tevatron experiments needed to analyze in order to exclude, by combining their results, SM Higgs production at 95% confidence level. These numbers are as follows: for a Higgs mass of 100 GeV, 1/fb was considered sufficient; for a Higgs mass of 120 GeV, 2/fb were needed; 10/fb at 140 GeV; 4.5/fb at 160 GeV; 8/fb at 180 GeV; and 80/fb at 200 GeV. You can check them on the purple band in the graph above.

Now, let us take the actual expected limits by CDF with the analyses and the data they have based their new result upon (using expected limits rather than observed ones is correct, since the former are unaffected by statistical fluctuations). At 100 GeV, CDF has a reach in the 95%CL limit at 2.63xSM; at 120 GeV, the reach is 3.72xSM; at 140 GeV, 3.61xSM; at 160 GeV it is 1.75xSM; at 180 GeV 3.02xSM; and at 200 GeV, the reach is at 6.33xSM.

(Below, the 2009 combined CDF limits are shown by the thick red curve; the data I list above is based on the hatched curve instead, which shows the expected limit.)

How do we now compare these sets of numbers ?

Easy. As easy as 1.2.3.4 (well, not too easy, but that’s how it goes).

  1. We first scale up by a factor of two the 1999/2003 luminosity numbers needed for a 95% CL exclusion, which we listed above. We thus get, for Higgs masses ranging from 100 to 200 GeV in 20-GeV steps, needed integrated luminosities of 2,4,20,9,16,160/fb.
  2. Then, we take the actual luminosity used by CDF for the analyses that have been combined to yield the expected limits listed above. This is slightly tricky, since the combination includes analyses which have used 2.0/fb of data (the H \to \tau \tau search), 2.1/fb (the VH \to ME_T b \bar b search), 2.7/fb (the WH \to l \nu b \bar b, the ZH \to ll b \bar b, and the WH \to WWW searches), and 3.0/fb (the H \to WW search). In principle, we should weight those numbers with the relative sensitivity of the various analyses, but we can approximate it by taking an “average effective luminosity” of 2.4/fb for the 100 GeV Higgs search, 2.7/fb for the 120 and 140 GeV points, and 3.0/fb for the high-mass searches. This is appropriate, since the H \to WW search starts kicking in above 140 GeV.
  3. We now have all the numbers we need: we divide the luminosity needed by one experiment according to the 1999/2003 study, found at point 1 above, by the effective luminosities found at point 2, and take the square root of the result: this gives the “reduction factor” in sensitivity that the actual CDF data suffers with respect to the data needed to exclude the Higgs boson. We find reduction factors of 0.91, 1.22, 2.72, 1.73, 2.31, and 7.30 for Higgs masses of 100, 120, 140, 160, 180, and 200 GeV respectively.
  4. Now we are done. We can compare the “times the SM” limits of CDF with the numbers found at point 3 above. The ratio of the two says how much worse CDF is doing with respect to predictions, for each mass point (the whole four-step arithmetic is transcribed in the short sketch below). We find that CDF is doing 2.88 times worse than predictions at 100 GeV; 3.06 times worse at 120 GeV; 1.33 times worse at 140 GeV; 1.01 times worse at 160 GeV; 1.31 times worse at 180 GeV; and 0.87 times worse (i.e., 1.15 times better!) at 200 GeV.
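If you want to redo the arithmetic of points 1-4 yourself, here is a minimal Python transcription; the inputs are the numbers I read off the 1999/2003 graph and the CDF expected limits quoted above, so they carry the same rough 20% grain of salt.

```python
from math import sqrt

masses        = [100, 120, 140, 160, 180, 200]       # Higgs mass, GeV
lumi_combined = [1.0, 2.0, 10.0, 4.5, 8.0, 80.0]     # /fb for a combined 95% CL exclusion (1999/2003 graph)
lumi_needed   = [2.0 * l for l in lumi_combined]     # step 1: scale up to a single experiment
lumi_used     = [2.4, 2.7, 2.7, 3.0, 3.0, 3.0]       # step 2: effective luminosity of the CDF analyses, /fb
cdf_expected  = [2.63, 3.72, 3.61, 1.75, 3.02, 6.33] # CDF expected limits, in units of the SM cross section

for m, need, used, limit in zip(masses, lumi_needed, lumi_used, cdf_expected):
    reduction = sqrt(need / used)   # step 3: sensitivity reduction factor of the smaller dataset
    ratio = limit / reduction       # step 4: how much worse CDF is doing than predicted
    print(f"M_H = {m} GeV: reduction factor {reduction:.2f}, CDF doing {ratio:.2f}x worse than predicted")
```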

The results of point 4 are plotted on the graph shown above, where the x-axis shows the Higgs mass, and the y axis this “shame factor”. I have given a 20% uncertainty to the figures I computed, because of the rather rough way I extracted the numbers from the 1999/2003 prediction graph. If you look at the graph, you notice that the CDF experiment has kept its (our!) promise (points bouncing around a ratio of 1.0) with its high-mass searches, while low-mass searches still are a bit below expectations in terms of reach (3x worse reach than expected). It is not a surprise: at low Higgs mass, the searches have to rely on the H \to b \bar b final state, which is very difficult to optimize (vertex b-tagging, dijet mass resolution, lepton acceptance are the three things on which CDF has been spending hundreds of man-years in the last decade). Give CDF (and DZERO) enough time, and those points will get down to 1.0 too!

New CDF Combination of Higgs limits! February 11, 2009

Posted by dorigo in news, physics, science.
comments closed

A brand-new combination of Higgs boson cross-section limits has been recently produced by the CDF experiment for the 2009 winter conferences. The results are almost one month old, but I decided to wait a bit before posting them here, in order to avoid stirring up bad feelings in a few of my CDF colleagues, those who believe I have no right to post published results here in too timely a fashion, because they feel those results should first be presented at conferences by the real authors of the analyses.

Now I think it is about time to have the most relevant plots here, since they are all available from a public web page of CDF anyway; so here we go, with the most updated information. Mind you, these are CDF-only results: a sizable improvement in the limits will come when they get combined with the findings of DZERO. I seem to understand that the Tevatron combination group folks are dragging their feet this year, so we might as well take the CDF results and comment on them alone.

The first graph is the most important one of all: it describes the combination of CDF results, in the usual form of a “95% CL limit in units of the SM cross section”. It is shown below.

On the x axis is the Higgs boson mass, and on the y axis the cross-section limit. Different colors of the curves refer to different analyses, which target the various decay channels of the sought particle; hatched lines show expected limits, and full ones show instead the limits actually obtained by the analysis.

As you can see by examining the thick red curve at the bottom, CDF by itself cannot rule out the 170-GeV point, which last summer was excluded by the CDF+D0 combination. However, a sizable improvement can be observed across the board in the results. The red curve, for one thing, is considerably flatter than it used to be, a sign that the low-mass searches have started to pitch in with momentum. Another thing to note is that these results correspond to 3.0/fb of analyzed luminosity or less (2.4/fb at low mass): there is already at least twice as much data waiting to be analyzed, and results are thus expected to sizably improve their sensitivity.

The above summary brings me to mention another important point. By looking at the graph, you might run the risk of failing to appreciate the enormous effort which CDF is putting into these searches. In truth, the name of the game is not “wait for more data and turn the crank”. Quite the opposite! The most important improvements in the discovery reach have been achieved in the course of the last five years by continuously improving the algorithms and the search methods, by refining tools, and by finding new avenues of investigation and new search channels neglected before. This is summarized masterfully in the two graphs shown below.

Above, you can see that for a Higgs mass of 115 GeV, the limit that CDF was able to set on its existence, in terms of cross section (well, “times the SM cross section” units to be precise: the ones shown on the y axis) has improved much more than one would have expected by scaling down the limit with a simple square-root law -the one that Poisson statistics would dictate for statistically-limited measurements. Quite the opposite: as time went by, the actual limits (colored points) have moved down almost vertically, a sign that the data has been used better and better! Above, if you took the extrapolation expected after the first limit was published (the one in green), you would expect the limit today, with 2.4/fb analyzed, to be at 7xSM, while it in fact is at 3xSM: this corresponds to a 2.3x improvement in the limit, which would otherwise have required a 5.2 times larger analyzed dataset!!
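The conversion between an improvement in the limit and an equivalent increase in luminosity is just the square-root law mentioned above; a two-line check with the rounded 7xSM and 3xSM figures (Python):

```python
# Square-root scaling: a limit improving by a factor f is equivalent to f^2 times more luminosity.
expected_limit, actual_limit = 7.0, 3.0       # in units of the SM cross section, rounded as in the text
improvement = expected_limit / actual_limit
print(f"improvement in the limit: {improvement:.1f}x")          # ~2.3x
print(f"equivalent luminosity factor: {improvement ** 2:.1f}x") # ~5.4x with rounded inputs (5.2x in the text)
```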

Above, the same information is shown for the M_H = 160 GeV value. In this case, CDF is now expected to be able to set an exclusion alone with 9/fb of data, but we still expect to see some improvements in the data analyses, which should move the points well into the brown band. In this case, 7/fb of data might be enough.

The last two plots I wish to discuss are shown below. BEWARE: This is information that LHC scientists would really, really not like to see – so, if your life depends on the success of ATLAS or CMS, please stop reading now, take my advice.

OK. The plot below shows the probability that the Tevatron experiments, by combining their datasets and results, may observe a 2-sigma evidence for SM Higgs production, with 5/fb and 10/fb of data collected by each. If the analyses do not perform better than they have so far, you get the full lines -red for 5/fb, blue for 10/fb. If they improve as much as it is reasonable to predict, you get the hatched lines.

What to gather from the plot ? Well: it seems that, regardless of the Higgs boson mass, the Tevatron has a sizable chance of being able to say something good by the time CDF and D0 have analyzed the datasets they already possess (which are in excess of 5/fb each: the delivered luminosity of the Tevatron is passing the 6/fb mark as we speak, and the typical live time of the experiments is above 80%).

Below, we see the chance of a 3-sigma evidence. Again, there is a sizable chance of that happening, although if no additional improvements occur in the analyses, it seems that the Tevatron will need to get lucky!

I remember that in 2005 I gave a talk in Corfù (Greece) where I ventured to speculate on the possible scenarios for Higgs searches at the Tevatron and the LHC. One of the scenarios saw the two machines competing to find the particle with roughly equal reach, and eventually producing a combined observation. That possibility does not seem too far-fetched any longer!

In the next few days I plan to discuss in some detail the most important analyses which contribute to the combination discussed above. Stay tuned…

CMS and extensive air showers: ideas for an experiment February 6, 2009

Posted by dorigo in astronomy, cosmology, physics, science.
comments closed

The paper by Thomas Gehrmann and collaborators I cited a few days ago has inspired me to have a closer look at the problem of understanding the features of extensive air showers – the phenomenon of a localized stream of high-energy cosmic rays originating from the impact of a very energetic proton or light nucleus on the upper atmosphere.

Layman facts about cosmic rays

While the topic of cosmic rays, their sources, and their study is largely terra incognita to me -I only know the very basic facts, having learned them like most of you from popularization magazines-, I do know that a few of their features are not too well understood as of yet. Let me mention only a few issues below, with no fear of being shown how ignorant on the topic I am:

  • The highest-energy cosmic rays have no clear explanation in terms of their origin. A few events with energy exceeding 10^{20} eV have been recorded by at least a couple of experiments, and they are the subject of an extensive investigation by the Pierre Auger observatory.
  • There are a number of anomalies in their composition, their energy spectrum, and the makeup of the showers they develop. The data from PAMELA and ATIC are just two recent examples of things we do not understand well, and which might have an exotic explanation.
  • While models of their formation suppose that only light nuclei -iron at most- make up the flux of primary hadrons, some data (for instance this study by the Delphi collaboration) seem to imply otherwise.

The paper by Gehrmann addresses in particular the latter point. There appears to be a failure in our ability to describe the development of air showers producing very large number of muons, and this failure might be due to modeling uncertainties, heavy nuclei as primaries, or the creation of exotic particles with muonic decay, in decreasing order of likelihood. For sure, if an exotic particle like the 300 GeV one hypothesized in the interpretation paper produced by the authors of the CDF study of multi-muon events (see the tag cloud on the right column for an extensive review of that result) existed, the Tevatron would not be the only place to find it: high-energy cosmic rays would produce it in sizable amounts, and the observed multi-muon signature from its decay in the atmosphere might end up showing in those air showers as well!

Mind you, large numbers of muons are by no means a surprising phenomenon in high-energy cosmic ray showers. What happens is that a hadronic collision between the primary hadron and a nucleus of nitrogen or oxygen in the upper atmosphere creates dozens of secondary light hadrons. These in turn hit other nuclei, and the developing hadronic shower progresses until the hadrons fall below the energy required to create more secondaries. The created hadrons then decay, and in particular K^+ \to \mu^+ \nu_{\mu}, \pi^+ \to \mu^+ \nu_{\mu} decays will create a lot of muons.

Muons have a lifetime of two microseconds, and if they are energetic enough, they can travel many kilometers, reaching the ground and whatever detector we set there. In addition, muons are very penetrating: a muon needs just 52 GeV of energy to make it 100 meters underground, through the rock lying on top of the CERN detectors. Of course, air  showers include not just muons, but electrons, neutrinos, and photons, plus protons and other hadronic particles. But none of these particles, except neutrinos, can make it deep underground. And neutrinos pass through unseen…
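The 52 GeV figure is easy to check on the back of an envelope. The sketch below uses a textbook minimum-ionizing loss of about 2 MeV per g/cm² and a standard-rock density of 2.65 g/cm³ -both are my own assumed inputs, not numbers from this post:

```python
# Back-of-the-envelope check of the 52 GeV figure (all inputs are assumed textbook values).
dedx_min     = 2.0e-3   # GeV lost per g/cm^2 by a minimum-ionizing muon (assumed)
rock_density = 2.65     # g/cm^3, "standard rock" (assumed)
depth_cm     = 100e2    # 100 meters of rock, in centimeters

energy_needed = dedx_min * rock_density * depth_cm
print(f"~{energy_needed:.0f} GeV needed to cross 100 m of rock")  # ~53 GeV, close to the quoted 52 GeV
```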

Now, if one reads the Delphi publication, as well as information from other experiments which have studied high-multiplicity cosmic-ray showers, one learns a few interesting facts. Delphi found a large number of events with so many muon tracks that they could not even count them! In a few cases, they could just quote a lower limit on the number of muons crossing the detector volume. One such event is shown on the picture on the right: they infer that an air shower passed through the detector by observing voids in the distribution of hits!

The number of muons seen underground is an excellent estimator of the energy of the primary cosmic ray, as the Kascade collaboration result on the left demonstrates (on the abscissa is the logarithm of the energy of the primary cosmic ray, and on the y axis the number of muons per square meter measured by the detector). But to compute the energy and composition of cosmic rays from the characteristics we observe on the ground, we need detailed simulations of the mechanisms creating the shower -and these simulations require an understanding of the physical processes underlying the production of secondaries, which are known only to a certain degree. I will get back to this point, but here I just mean to point out that a detector measuring the number of muons gets an estimate of the energy of the primary nucleus. The energy, but not the species!

As I was mentioning, the Delphi data (and that of other experiments, too) showed that there are too many high-muon-multiplicity showers. The graph on the right shows the observed excess at very high muon multiplicities (the points on the very right of the graph). This is a 3-sigma effect, and it might be caused by modeling uncertainties, but it might also mean that we do not understand the composition of the primary cosmic rays: yes, because if a heavier nucleus has a given energy, it usually produces more muons than a lighter one.

The modeling uncertainties are due to the fact that the very forward production of hadrons in a nucleus-nucleus collision is governed by QCD at very small energy scales, where we cannot calculate the theory to a good approximation. So, we cannot really compute with the precision we would like how likely it is that a 1,000,000-TeV proton, say, produces a forward-going 1-TeV proton in the collision with a nucleus of the atmosphere. The energy distribution of secondaries produced forwards is not so well-known, that is. And this reflects in the uncertainty on the shower composition.

Enter CMS

Now, what does CMS have to do with all the above ? Well. For one thing, last summer the detector was turned on in the underground cavern at Point 5 of LHC, and it collected 300 million cosmic-ray events. This is a huge data sample, made possible by the large size of the detector and the beautiful performance of its muon chambers (which, by the way, were designed by physicists of Padova University!). Such a large dataset already includes very high-multiplicity muon showers, and some of my collaborators are busy analyzing that gold mine. Measurements of the cosmic ray properties are ongoing.

One might hope that the collection of cosmic rays will continue even after the LHC is turned on. I believe it will, but only during the short periods when there is no beam circulating in the machine. The cosmic-ray data thus collected is typically used to keep the system “warm” while waiting for more proton-proton collisions, but it will not provide an orders-of-magnitude increase in statistics with respect to what was already collected last summer.

The CMS cosmic-ray data can indeed provide an estimate of several characteristics of the air showers, but it will not be capable of providing results qualitatively different from the findings of Delphi -although, of course, it might provide a confirmation of simulations, disproving the excess observed by that experiment. The problem is that very energetic events are rare -so one must actively pursue them, rather than turning on the cosmic-ray data collection only when not in collider mode. But there is one further important point: since only muons are detected, one cannot really tell whether the simulation is tuned correctly, and one misses a critical additional piece of information: the amount of energy that the shower produced in the form of electrons and photons.

The electron and photon component of the air shower is a good discriminant of the nucleus which produced the primary interaction, as the plot on the right shows. It is in fact crucial information for ruling out the presence of nuclei heavier than iron, or for determining the composition of primaries in terms of light nuclei. Since the number of muons in high-multiplicity showers is connected to the nuclear species as well, by determining both quantities one would really be able to understand what is going on. [In the plot, the quantity Y is plotted as a function of the primary cosmic ray energy. Y is the ratio between the logarithms of the numbers of detected muons and electrons. You can observe that Y is higher for iron-induced showers (the full black squares).]

Idea for a new experiment

The idea is thus already there, if you can put one and one together. CMS is underground. We need a detector at ground level sensitive to the “soft” component of the air shower -the one due to electrons and photons, which cannot punch through more than a meter of rock. So we may take a certain number of scintillation counters, layered alternately with lead sheets, all sitting on top of a thicker set of lead bricks, underneath which we may set some drift tubes or, even better, resistive plate chambers.

We can build a 20- to 50-square-meter detector this way with a relatively small amount of money, since the technology is really simple and we can even scavenge material here and there (for instance, we can use spare chambers of the CMS experiment!). Then, we just build a simple coincidence logic between the resistive plate chambers, requiring that several parts of our array fire together at the passage of many muons, and send the triggering signal 100 meters down, where CMS could receive an “auto-accept” to read out the event regardless of the presence of a collision in the detector.
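Just to fix ideas on what such a coincidence logic might look like, here is a toy sketch in Python; the panel count, the threshold, and the hit format are all made-up placeholders, not a design:

```python
def majority_trigger(fired_panels, threshold=6):
    """Toy coincidence logic: issue an 'auto-accept' when at least `threshold`
    distinct RPC panels fire in the same time window. The threshold and the
    panel numbering are made-up placeholders, not a real design."""
    return len(set(fired_panels)) >= threshold

# A high-multiplicity shower lighting up 9 panels of the array would trigger:
print(majority_trigger({0, 1, 2, 5, 7, 8, 11, 13, 15}))  # True
# A couple of isolated single muons would not:
print(majority_trigger({3, 12}))                          # False
```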

That last step -feeding an external trigger into CMS- is the most complicated part of the whole idea: modifying existing things is always harder than creating new ones. But it should not be too hard to read out CMS parasitically, and collect at very low frequency those high-multiplicity showers. Then, the readout of the ground-based electromagnetic calorimeter should provide us with an estimate of the (local) electron-to-muon ratio, which is what we need to determine the weight of the primary nucleus.

If the above sounds confusing, it is entirely my fault: I have dumped here some loose ideas, with the aim of coming back to them when I need them. After all, this is a log: a Web log, but still a log of my ideas… But I wish to investigate the feasibility of this project further. Indeed, CMS will for sure pursue cosmic-ray measurements with the 300M events it has already collected. And CMS does have spare muon chambers. And CMS does have plans of storing them at Point 5… Why not just power them up and build a poor man’s trigger ? A calorimeter might come later…

Some notes on the multi-muon analysis – part IV February 2, 2009

Posted by dorigo in news, physics, science.
comments closed

In this post -the fourth of a series (previous parts: part I, part II, and part III)- I wish to discuss a couple of attributes possessed by the “ghost” events unearthed by the CDF multi-muon analysis. A few months have passed since the publication of the CDF preprint describing that result, so I think it is useful to give a short summary below, repeating in a nutshell what the signal we are discussing is and how it came about.

Let me first of all remind you that “ghost events” are an unknown background component of the sample of dimuon events collected by CDF. This background can be defined as an excess of events where one or both muons fail a standard selection criterion based on the pattern of hits left by the muons in the innermost layers of the silicon tracker, SVX. I feel I need to open a parenthesis here, in order to allow those of you who are unfamiliar with the detection of charged tracks to follow the discussion.

Two words on tracks and their quality

The silicon tracker of CDF, SVX, is made up of seven concentric cylinders of solid-state sensors (see figure on the right: SVX in Run II is made of the innermost L00 layer in red, plus four blue SVX II layers, plus two ISL layers; also shown are the two innermost Run I SVX’ layers, in hatched green), surrounding the beam line. When electrically charged particles created in a proton-antiproton collision travel out of the interaction region lying at the center, they cross those sensors in succession, leaving in each a localized ionization signal -a “hit”.

CDF does not strictly need silicon hits to track charged particles, since outside of the silicon detector lies a gas tracker called COT (for Central Outer Tracker), capable of acquiring up to 96 additional independent position measurements of the ionization trail; however, silicon hits are a hundred times more precise than COT ones, so that one can define two different categories of tracks: COT-only, and SVX tracks. Only the latter are used for lifetime measurements of long-lived particles such as B hadrons, since those particles travel at most a few millimeters away from the primary interaction point before disintegrating: their decay products, if tracked with the silicon, allow the decay point to be determined.

Typically, CDF loosely requires an SVX track to have three or more hits; however, a tighter selection can be made which requires four or more hits, additionally enforcing that two of those belong to the two innermost silicon layers. These tight SVX tracks have considerably better spatial resolution on the point of origin of the track, since the two innermost hits “zoom in” on it very effectively.
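In code-like form, the two selections could be sketched as follows (a minimal illustration; the layer numbering and hit representation are mine, not the actual CDF reconstruction software):

```python
def is_loose_svx(hit_layers):
    """Loose selection: at least three silicon hits."""
    return len(set(hit_layers)) >= 3

def is_tight_svx(hit_layers):
    """Tight selection: at least four silicon hits, with hits in both of the
    two innermost layers (here labeled 0 and 1; the numbering is illustrative)."""
    layers = set(hit_layers)
    return len(layers) >= 4 and {0, 1} <= layers

print(is_tight_svx({0, 1, 3, 5}))   # True: four hits, both innermost layers present
print(is_tight_svx({1, 2, 3, 4}))   # False: no hit in the innermost layer
```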

Back to ghosts: a reminder of their definition

Getting back to ghost events, the whole evidence of their presence is that one finds considerably more muon pairs failing the tight-SVX tracking selection than geometry and kinematics would normally imply in a homogeneous sample of data. Muons in ghost events systematically fail to hit the innermost silicon layers, just as if they were produced outside of them by the decay of a long-lived, neutral particle.

Because of its very nature -an excess of muon pairs failing the tight-SVX criteria- the “ghost sample” is obtained by a subtraction procedure: one takes the number T of events with a pair of tight-SVX muons, divides it by the geometrical and kinematical efficiency \epsilon with which muons from the various known sources pass the tight-SVX cuts, and obtains a number E, which, subtracted from the number O of observed dimuon pairs, allows one to spot the excess G, as follows: G = O-E = O-T/\epsilon.
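The same subtraction, written as a minimal Python sketch (the numbers in the usage line are placeholders for illustration, not the actual CDF yields):

```python
def ghost_excess(observed, tight, efficiency):
    """G = O - E = O - T/epsilon: the observed dimuon count minus the number
    expected from known sources, where the latter is estimated from the
    tight-SVX count T and the efficiency epsilon with which known sources
    pass the tight-SVX cuts."""
    expected = tight / efficiency
    return observed - expected

# Placeholder numbers, for illustration only:
print(ghost_excess(observed=100_000, tight=22_000, efficiency=0.24))  # ~8333 excess events
```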

Mind you, we are not talking of a small excess here: if you have been around this blog for long enough, you are probably accustomed to the frequent phenomenon of particle physicists getting hyped up over 10-event excesses. Not this time: the number of ghost muon events exceeds 70,000, and this contribution is clearly of systematic origin. It may be a background unaccounted for by the subtraction procedure, or a signal involving muons that are created outside of the innermost silicon layers.

In the first three installments of this multi-part post I have discussed in some detail the significant sources of reconstructed muons which may contribute to the ghost sample and go unaccounted for by the subtraction procedure: muons from decays in flight of kaons and pions, fake muon tracks due to hadrons punching through the calorimeter, and secondary nuclear interactions. Today, I will instead assume that the excess of dimuon events constitutes a class of its own, different from those mundane sources, and proceed to discuss a couple of additional characteristics that make these events really peculiar.

The number of muons

In the first part of this series I have discussed in detail how the excess of ghost events contains muons which have abnormally large impact parameters. Impact parameter -the distance of the track from the proton-antiproton collision point, as shown by the graph on the right- is a measure of the lifetime of the body which decays into the muons, and the observation of large impact parameters in ghost events is the real alarm bell, demanding that one needs to really try and figure out what is going on in the data. However, once that anomaly is acknowledged, surprises are not over.

The second observation that makes one jump off the chair occurs when one simply counts the number of additional muon candidates found accompanying the duo which triggered the event collection in the first place. In the sample of 743,000 events with no SVX hit requirements on the two triggering muons, 72,000 events are found to contain at least a third muon track. 10% is a large number! By comparison, only 0.9% of the well-identified \Upsilon(1S) \to \mu \mu decays contained in the sample are found to contain additional muons besides the decay pair. However, since the production of \Upsilon particles is a quite peculiar process, this observation need not worry us yet: those events are typically very clean, with the b \bar b meson accompanied by a relatively small energy release. In particle physics jargon, we say that \Upsilon mesons have a soft P_T spectrum: they are produced almost at rest in most cases. There are thus few particles recoiling against them -and so, few muons too.

Now, the 10% number quoted above is not an accurate estimate of the fraction of ghost events containing additional muons, since it is extracted from the total sample -the 743,000 events. The subtraction procedure described above allows one to estimate the fraction in the ghost sample alone: this is actually larger, 15.8%, because all other sources contribute fewer multi-muon events, only 8.3%. These fractions of course include both real and fake muons: in the following I try to describe how one can better size up those contributions.

Fake muons

A detailed account of the number of additional muons in the data, and of the sources that may be originating them, can be attempted by using a complete Monte Carlo simulation of all processes contributing to the sample, applying some corrections where needed. As a matter of fact, a detailed accounting of all the physical processes produced in proton-antiproton collisions is rather overkill, because events with three or more muon candidates are rare merchandise, and they can be produced by only a few processes: basically the only sizable contributions come from sequential heavy flavor decays and fake muon sources. Let us discuss these two possibilities in turn.

Real muon pairs of small invariant mass, recoiling against a third muon, are usually the result of sequential decays of B-hadrons, like in the process B \to \mu \nu D \to \mu \nu X (see picture on the left, where the line of the decaying quark is shown emitting sequentially two lepton pairs in the weak decays). The two muons from such a chain decay cannot have a combined mass larger than 5 GeV, which is (roughly speaking) the mass of the originating B hadron. In fact, by enforcing that very requirement (M_{\mu \mu} >5 GeV) on the two muons at trigger level, CDF enriches the collected dataset of events where two independent heavy-flavor hadrons (B or D mesons, for instance) are produced at a sizable angle from each other. A sample event picture is shown below in a transverse section of the CDF detector. Muon detection systems are shown in green, and in red are shown the track segments of two muons firing the high-mass dimuon trigger.

(You might well ask: Why does CDF require a high mass for muon pairs ? Because the measurements that can be extracted from such a “high-mass” sample are more interesting than those granted by events with two muons produced close in angle, events which are in any case likely to be collected into different datasets, such as the one triggered by a single muon with a larger transverse momentum threshold. But that is a detail, so let’s go back to ghost muons now.)

When there are three real muons, one thus most likely has a b \bar b pair, with one of the quarks producing a double semileptonic decay (two muons of small mass and angle), and the other producing a single semileptonic decay (with this third muon making a large mass with one of the other two): for instance, B \bar B \to (\mu^- \bar \nu X) (\mu^+ \nu D) \to (\mu^- \bar \nu X)(\mu^+ \nu \mu^- \bar \nu Y), in the case of two B mesons; in the decay chain above, X and Y denote a generic hadronic state, while D is a hadron containing an anti-charm quark. B hadron decays can also produce three muons when one of the B hadrons decays to a J/\Psi meson, which in turn decays to a muon pair. Other heavy flavor decays, like those involving a c \bar c pair, can produce at most a pair of muons, and the third one must then be a fake.

The HERWIG Monte Carlo program, which simulates all QCD processes, makes a good guess of the production cross sections of b-quark pairs and c-quark pairs in proton-antiproton collisions, in order to simulate all processes with equanimity; but those numbers are not accurate. One improves things by scaling the simulated events containing those production processes so that they match the b \bar b and c \bar c cross sections measured in the tight-SVX sample, the subset devoid of the ghost contribution.

The CDF analysis then proceeds by estimating the number of events where at least one muon track is in reality a hadron which punched through the detector. The simulation can be trusted to reproduce the number of hadrons and their momentum spectrum, but the phenomenon of punch-through is unknown to it! To include it, a parametrization of the punch-through probability is obtained from a large sample of D \to K \pi decays, collected by the Silicon Vertex Tracker, a wonderful device capable of triggering on the impact parameter of tracks. The D meson lives long enough that the kaon and pion tracks it produces have sizable impact parameter, and millions of such events have been collected by CDF in Run II.

The extraction of the probability is quite simple: take the kaon tracks from D decays, and find the fraction of these tracks that are considered muon candidates, thanks to muon chamber hits consistent with their trajectory. Then, repeat the same with the pion candidates. The result is shown in the graphs below separately for kaon and pion tracks. In them, the probability has been computed as a function of the track transverse momentum.
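Schematically, the extraction looks like the sketch below (a minimal illustration with made-up tracks; the data format is mine, not the CDF code):

```python
def fake_rate_vs_pt(tracks, pt_bins):
    """tracks: list of (pt, is_muon_candidate) pairs for kaon (or pion) tracks
    from reconstructed D -> K pi decays. Returns, for each pt bin, the fraction
    of tracks flagged as muon candidates -- the punch-through probability."""
    rates = []
    for lo, hi in pt_bins:
        flags = [is_mu for pt, is_mu in tracks if lo <= pt < hi]
        rates.append(sum(flags) / len(flags) if flags else None)
    return rates

# Illustrative usage with a handful of made-up tracks:
tracks = [(3.2, False), (4.1, True), (5.5, False), (8.0, False), (9.3, True)]
print(fake_rate_vs_pt(tracks, pt_bins=[(2, 5), (5, 10)]))  # [0.5, 0.333...]
```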

Besides the above probabilities and the tuning of the b \bar b cross section, a number of other details are needed to produce a best-guess prediction of the number of multi-muon events with the HERWIG Monte Carlo simulation. However, once all is said and done, one can verify that there indeed is an excess in the data. This excess appears entirely in the ghost muon sample, while the tight-SVX sample is completely free from it. Its size is again very large, and its source is thus systematic -no fluctuation can be hypothesized to have originated it.

The mass of muon pairs in multi-muon events

To summarize, what happens with ghost events is that if one searches for additional muon tracks around each of the triggering muons, one finds them at a rate much higher than what one observes in the tight-SVX dimuon sample. It is as if a congregation of muons is occurring! The standard model does not even get close to explaining how events with so many muons can be produced. The source of ghost events is thus really mysterious.

Now, if you give a particle physicist the momenta and energies P_x, P_y, P_z, E of two particles produced together in a mysterious process, there is no question about what is going to happen: next thing you know, he will produce a number, m^2=(\Sigma E)^2-(\Sigma P_x)^2 -(\Sigma P_y)^2 - (\Sigma P_z)^2. m is the invariant mass of the two-particle system: if they are the sole products of a decay process, m is an unbiased measurement of the mass M_X of the parent body. If, instead, the two particles are only part of the final state, m will be smaller than M_X; still, a distribution of the quantity m for several decays will say a lot about the parent particle X.
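For definiteness, here is the formula just written, turned into a minimal Python function (the example four-momenta are invented, chosen to mimic a J/\Psi \to \mu \mu decay at rest):

```python
from math import sqrt

def invariant_mass(p1, p2):
    """p1, p2: four-momenta as (E, px, py, pz) tuples in consistent units (GeV).
    Returns m = sqrt((sum E)^2 - (sum px)^2 - (sum py)^2 - (sum pz)^2)."""
    E  = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return sqrt(E**2 - px**2 - py**2 - pz**2)

# Two back-to-back muons from a J/psi decaying at rest reconstruct to ~3.1 GeV:
print(invariant_mass((1.549, 1.545, 0.0, 0.0), (1.549, -1.545, 0.0, 0.0)))
```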

Given the above, it is not a surprise that the next step in the analysis, once triggering muons in ghost events are found to be accompanied by additional muons at an abnormal rate, is to plot the invariant mass of those two-muon combinations.

There is, however, an even stronger motivation for doing that: an anomalous mass distribution of lepton pairs (electron-muon pairs in that case, not dimuons -I will come back to this detail later) had been observed by the same authors in Run I. That excess of dilepton pairs was smaller numerically -the dataset from which it had been extracted corresponded to an integrated luminosity 20 times smaller- but had been extracted with quite different means, from a different trigger, and with a considerably different detector (the tracking of CDF was entirely changed for Run II). The low-mass excess of dilepton pairs remained an unexplained feature, calling for more investigation which had to wait a few years to be performed. The mass distribution of electron-muon combinations found by CDF in Run I is shown in the graph on the right: the excess of data (the blue points) over known background sources (the yellow histogram) appears at very low mass.

In Run II, not only does CDF have 20 times more data (well, sixty times so by now, but the dataset on which this analysis was performed was frozen one and a half years ago, thus missing the data collected and processed after that date): we also have more tools at our disposal. The mass distribution of muon pairs close in angle, belonging to ghost events with three or more muon candidates, can be compared with the tuned HERWIG simulation both for the ghost event sample and for the tight-SVX sample: this makes for a wonderful cross-check that the simulation can be trusted to produce a sound estimate of that distribution!

The invariant mass distribution of muon pairs close in angle in tight-SVX events with three or more muon tracks is shown on the left. The experimental data is shown with full black dots, while the Monte Carlo simulation prediction is shown with empty ones. The shape and size of the two distributions match well, implying that the Monte Carlo is properly normalized. Indeed, the tight-SVX sample is the one used for the measurements of b \bar b and c \bar c cross sections: once the Monte Carlo is tuned to the values extracted from the data, its overall normalization could mismatch the data only if fake-muon sources were grossly mistaken. That is not the case, and further, one observes that the number of J/\Psi \to \mu \mu decays -which all end up in one bin of the histogram, at 3.1 GeV of mass- is perfectly well predicted by the simulation: again, not a surprise, since those mesons can make it into a three-muon dataset virtually only if they originate from B hadron decays. So, the check in tight-SVX events fortifies our trust in our tuned Monte Carlo tool.

Now, let us look at how things are going in the ghost muon sample (see graph on the right). Here, we observe more data at low invariant mass than what the Monte Carlo predicts: there is a clear excess for masses below 2.5 GeV. This excess has the same shape as the one observed in Run I in electron-muon combinations!

Please take a moment to record this: in CDF, some of the collaborators who objected to the publication of the multi-muon analysis did so because they insisted that more studies should be made to confirm or disprove the effect. One of the objections was that the electron-muon sample had not been studied yet. The rationale is that if the ghost events are due to a real physical process, then the same process should show up in electron-muon combinations; otherwise, one is hard-pressed to avoid having to put into question a thing called lepton universality, which -at least for Standard Model processes- is a really hard thing to do. However, the electron signature in CDF is very difficult to handle, particularly at low energy: backgrounds are much harder to pinpoint than for muons. Such a study is ongoing, but it might take a long time to complete. Run I, instead, is there for us: and there, the same excess was indeed present in electron events too!

Finally, there is one additional point to mention: a small, but crucial one. The J/\Psi signal is in perfect match with the simulation prediction! This observation confirms that the tuned cross section of b \bar b production is right dead-on. Whatever these ghost events are, they sure cannot be coming from B production. Also, note that the agreement of the J/\Psi signal with Monte Carlo expectations constitutes proof that the efficiency of the tight-SVX requirements -the 24% number which is used to extract the numerical excess of ghost events- is correct. Everything points to a mysterious contribution which is absent in the Monte Carlo.

The above observations conclude this part of the discussion. In the next installment, I will try to discuss the additional oddities of ghost events -in particular, the rate of muons exceeding the triggering pair is actually four times higher than in QCD events. I will then examine some tentative interpretations that have been put forth in the course of the three months that have passed since the publication.

Babysitting this week February 1, 2009

Posted by dorigo in news, personal, physics.
comments closed

Blogging is one of the activities that will get slightly reduced this week, along with others that are not strictly necessary for my survival. Mariarosa has left for Athens this morning with three high-school classes of her school, Liceo Foscarini. They will visit Greece for a whole week, and be back to Venice on Saturday.

I am not scared by the obligation of having to care for my two kids, and I do like such challenges -I maintain that my wife should not complain too much when it is me who leaves for a week, much more frequently- but of course the management of our family life will take all of my spare time, plus some.

Blogging material, in the meantime, is piling up. There are beautiful results coming out of CDF these days (isn’t that becoming a rule?). Furthermore, recently the Tevatron has been running excellently, and the LHC seems in the middle of a crisis over whether to risk a second, colossal failure by pushing the energy up to 10 TeV to put the Tevatron off the table in the shortest time possible, or to play it safe and keep the collision energy at 6 TeV, accepting the risk of being scooped of the most juicy bits of physics left over to party with.

And multi-muons keep me busy these days. Besides the starting analysis in the CMS-Padova group, there are papers worth discussing in the arxiv. This one was published a few days ago, and we had in Padova last Thursday one of the authors, Thomas Gehrmann, discussing QCD calculations of event shape observables in a seminar -which of course allowed me to chat with him about his hunch on the hidden valley scenarios he discusses in his paper. More on these things next week, after I set my kids to sleep!

Multi-muon news January 26, 2009

Posted by dorigo in news, personal, physics, science.
comments closed

This post is not it, but no, I have not given up on my promise to complete my series on the anomalous multi-muon signal found by CDF in its Run II data. In fact, I expect to be able to post once more on the topic this week. There, I hope I will be able to discuss the kinematic characteristics of multi-lepton jets. [I am lazy today, so I will refrain from adding links to past discussions of the topic here: if you need references on the topic, just click on the tag cloud on the right column, where it says "anomalous muons"!]

In the meantime, I am happy to report that I have just started working on the same analysis for the CMS experiment! In Padova we have recently put together a group of six -one professor, three researchers, a PhD student, and an undergrad- and we will pursue the investigation of the same signature seen by CDF. And today, together with Luca, our brilliant new PhD student, I started looking at the reconstruction of neutral kaon decays K^0 \to \pi^+ \pi^-, a clean source of well-identified pion tracks with which we hope to be able to study muon mis-identification in CMS.

Meanwhile, the six-strong group in Padova is already expanding. Last Wednesday professor Fotios Ptochos, a longtime colleague in CDF, a good friend, and crucially one of the authors of the multi-muon analysis, came to Padova and presented a two-hour-long seminar on the CDF signal in front of a very interested group of forty physicists spanning four generations -from Milla Baldo Ceolin to our youngest undergraduates. The seminar was enlightening and I was very happy with the result of a week spent organizing the whole thing! (I will have to ask Fotios if I can make the slides of his talk available here….)

Fotios, a professor at the University of Cyprus, is a member of CMS, and a true expert of measurements in the B-physics sector at hadron machines. We plan to work together to repeat the controversial CDF analysis with the first data that CMS will collect -hopefully later this year.

The idea of repeating the CDF analysis in CMS is obvious. Both CDF and D0 can say something on the signal in a reasonable time scale, but whatever the outcome, the matter will only be settled by the LHC experiments. Imagine, for instance, that in a few months D0 publishes an analysis which disproves the CDF signal. Will we then conclude that CDF has completely screwed up its measurement ? We will probably have quite a clue in that case, but we will need to keep an open mind until at least a third, possibly more precise, measurement is performed by an independent experiment. That measurement is surely going to be worth a useful publication.

And now imagine, on the contrary, that the CDF signal is real…

Single top seen with no leptons! January 14, 2009

Posted by dorigo in news, personal, physics, science.
comments closed

This post has a rather long introduction which does not discuss single top production, but rather explains how the techniques for detecting top quark pairs at the Tevatron have evolved since the first searches. Informed readers who are interested mainly in the new CDF result for the single top cross section may skip the first two sections below…

Introduction: missing energy as the main tag of top quarks

In the years before the top quark discovery, and for a few years thereafter, top quark pairs produced by the Tevatron proton-antiproton collider were searched for by the CDF and D0 experiments with a quite clear, if a bit unimaginative, three-pronged strategy.

A top quark pair candidate event could be extracted from backgrounds if it contained two charged leptons -basically electrons or muons-, missing transverse energy, and two hadronic jets (the dilepton signature, pictured above); or if there were one charged lepton, missing transverse energy, and three or four hadronic jets (the single-lepton signature, shown on the right); or finally, if it just showed six hadronic jets (the all-hadronic signature).

(A note, so as not to lose from square one those of you who do not know what a jet or missing energy is: jets are the result of the materialization of high-energy quarks -kicked out of the colliding protons or created from the released energy- into streams of hadronic particles, and they appear in collider detectors as localized deposits of energy. Missing energy results from the escape of undetected particles, typically neutrinos. More on this below…)

The three final states mentioned above were the result of the different decay modes of the two W bosons always present in a top pair decay: if both W bosons decayed to lepton-neutrino pairs one would get a dilepton event; if one decayed to a lepton-neutrino and the other to a pair of hadronic jets the single-lepton final state would arise; and if both decayed to jets one would get the six-jet topology. Life in the top physics group was just that easy.

The dilepton final state is the cleanest of the three channels, and the all-hadronic final state the dirtiest: in proton-antiproton collisions a simple rule of thumb states that the more leptons you are after, the cleaner your signal is, and conversely the more jets you look for, the deeper you have to dig in the mud of strong interactions. That is because strong interactions (or QCD, for Quantum ChromoDynamics) produce lots and lots of jets, and very rarely do they yield leptons; and QCD is the name of the game in proton-antiproton collisions.

It took quite a while to realize that one could imagine other successful ways to extract top-quark pairs from Tevatron data. A sizable step forward on this issue was made by yours truly with the help of a graduate student, I am proud to note. Let me explain this in a few lines.

While the search for leptons (electrons, muons, tauons) is a way to clean the dataset from QCD backgrounds, the explicit identification of these particles necessarily results in a reduction of the available top signal. The CDF and D0 experiments are well-suited to detect electrons and muons, but only when these particles are produced at a large angle from the proton beam axis -i.e., "centrally"; moreover, the lepton identification efficiency is never 100% even in those cases. As for tauons, they are much harder to detect: the tauon is heavy enough that, despite being a lepton, it can decay into light hadrons, mimicking a hadronic jet.

All in all, if one considers the single-lepton final state of the process t \bar t \to W^+ b W^- \bar b \to l \nu b q \bar q' \bar b, which arises in a total of 44% of the cases, the typical fraction of top pair decays one may hope to collect in a clean dataset is not larger than 10%. The rest is lost when one explicitly requires a reconstructed, central, clean lepton signal.
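Just to make the 44% figure concrete, here is a back-of-the-envelope check -my own arithmetic, using approximate standard-model W branching ratios that are assumed here, not quoted in the text:

```python
# Rough check of the ~44% single-lepton fraction of top pair decays.
# The W branching ratios below are approximate, assumed values.
br_w_to_lv = 3 * 0.108          # W -> (e, mu, or tau) + neutrino
br_w_to_qq = 1.0 - br_w_to_lv   # W -> quark-antiquark pair

# A ttbar event contains two W bosons; the single-lepton topology requires exactly
# one of them to decay leptonically and the other hadronically.
single_lepton_fraction = 2 * br_w_to_lv * br_w_to_qq
print(f"single-lepton fraction ~ {single_lepton_fraction:.0%}")   # prints ~44%
```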

Put this way, it does beg the question. What are we going to do with the large fraction of lost top pair decays ? The answer, for eight years after the top discovery, was simple: nothing. There had been, in truth, attempts at loosening the identification requirements on leptons; but the fact that leptons are the means by which those events are collected -they are requested by the online triggering system- called for a more radical solution. So Giorgio Cortiana and I, while looking for a suitable thesis topic for him, decided to drop the lepton request altogether, and to simply look for top pairs in data just containing missing transverse energy and jets.

Missing transverse energy is a powerful signature at hadron colliders by itself, because it may signal the presence of an energetic neutrino escaping the detector. The signature arises from a simple calculation of the energy flowing out of the interaction point in the plane transverse to the beam direction: in that plane, momentum conservation implies that the vector sum of all particle momenta is zero, within measurement uncertainties. If that sum is very different from zero, either one or more particles have escaped unnoticed, or some of them have been measured imprecisely.
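For illustration, here is a minimal sketch -my own toy code, not anything from CDF- of how the missing transverse energy can be built from the measured particles:

```python
import math

def missing_transverse_energy(particles):
    """particles: list of (pt, phi) pairs for all measured particles/energy deposits.
    Returns the magnitude of the missing transverse energy vector."""
    # Sum the transverse momentum components of everything that was detected...
    sum_px = sum(pt * math.cos(phi) for pt, phi in particles)
    sum_py = sum(pt * math.sin(phi) for pt, phi in particles)
    # ...the missing ET is whatever is needed to balance the event in the transverse plane.
    return math.hypot(sum_px, sum_py)
```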

If a significant amount of missing transverse energy effectively tags an energetic neutrino, there is no need to search for an additional charged lepton to confirm that a leptonic W \to l \nu decay has taken place! Energetic neutrinos are due either to a leptonic W decay or to a Z boson decay to a pair of neutrinos, Z \to \nu \nu. Now, Z bosons are even rarer than W bosons, so they do not constitute too worrisome a background. By ignoring the charged lepton that might accompany the missing transverse energy, one gains access to all the bounty of single-lepton decays of top quark pairs which the tight search discards -because the charged lepton went unseen in a hole of the detector, or failed the identification criteria.

Giorgio and I published two papers using the missing-energy-plus-jets signature: a cross-section measurement for top-pair production (paper here) which, at the time of publication, was the third-best result on that quantity, and a top quark mass measurement (paper here) which, despite carrying a large uncertainty, showed that the sample could be a promising ground for top physics measurements despite the lack of kinematic closure (the fact that one lepton is present but is unidentified means that one cannot completely define the decay kinematics: one then speaks of unconstrained kinematics). Now, I am glad to see that the same signature we used for top quark pairs is being exploited in CDF for a single top quark search.

A few words on single top production

Single tops are produced in proton-antiproton collisions by weak interaction processes, but they are not much less frequent than strongly-produced top pairs, because a pair of top quarks weighs twice as much as a single top quark does, and this has a huge impact on the cross section. Usually, the single-top production signature amounts to the leptonic top decay products -a charged lepton, missing energy, and a jet- accompanied by another jet or two, produced by the quark(s) originally recoiling against the top quark. If one considers the simplest diagrams giving rise to a single top quark, there are two very different processes: s-channel W* decay and t-channel W-gluon fusion. Let me explain what these are.

A regular, “on-shell” W boson -one which has a mass very close to the peak of the W resonance, M_W=80.4 GeV- does not decay into a top and a bottom quark: that is because the W is lighter than the required final state particles! But a W boson produced “off-mass-shell”, i.e. with a mass much larger than its normal value, can indeed decay that way. One just has to remember that W bosons may have any mass from zero upwards, but the probability that the mass is far from M_W quickly becomes small, following a curve called a Breit-Wigner; one I have recently posted in a discussion about Z bosons, incidentally. You can check the shape there, bearing in mind that the peak for W bosons is 10 GeV lower, and the width about 20% smaller. Anyway, when an off-mass-shell W boson decays as W \to t \bar b, as shown in the diagram on the right in the figure below, the final state ends up containing two b-quarks, plus the decay products of the second W boson appearing in the process -the one emitted in the top decay. So one has a lepton, missing transverse energy, and two b-jets.
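To give a feeling for how strong the suppression is, here is a toy calculation using a relativistic Breit-Wigner shape; the W width used below is an approximate, assumed value, and the code is my own illustration rather than anything from the analysis:

```python
# Toy illustration of how improbable a far-off-mass-shell W is, using a
# relativistic Breit-Wigner shape. The width below is an assumed, approximate value.
M_W = 80.4      # GeV, W pole mass
GAMMA_W = 2.1   # GeV, approximate W width

def breit_wigner(m, M=M_W, gamma=GAMMA_W):
    """Unnormalized relativistic Breit-Wigner density at mass m (in GeV)."""
    return 1.0 / ((m**2 - M**2)**2 + M**2 * gamma**2)

# The density at the ~180 GeV needed for W -> t b is suppressed by several
# orders of magnitude with respect to the peak:
print(breit_wigner(180.0) / breit_wigner(M_W))
```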

The second way in which a single top quark may be produced in proton-antiproton collisions is shown on the left above, and it occurs via the splitting of a gluon from the proton into a bottom-antibottom quark pair: one of the two quarks does not concern us, while the other interacts with a W boson emitted from the other projectile, and a top quark is the result. One thus obtains the signature of three jets plus a lepton plus missing transverse energy, with two of the jets still containing b-quarks.

The new CDF result

Single top production has been sought at the Tevatron with enthusiasm in Run II, and CDF and D0 have already shown sizable signals of that process in datasets containing leptons, missing energy, and jets. But now a new analysis by the Purdue University group in CDF (Artur Apresyan, Fabrizio Margaroli, and Karolos Potamianos, led by Daniela Bortoletto) is finding a signal without the help of the charged lepton. I of course cannot but be happy about it, since it is just another demonstration of the potential of the "lepton-ignoring" technique!

The new analysis selects events with a significant amount of missing transverse energy (ME_T>50 GeV) accompanied by two or three hadronic jets. Events are rejected as signal candidates if the missing E_T points at a small azimuthal angle from a jet, because that is a hint that the former may be due to a fluctuation in the energy measurement of the latter. After that selection, a combination of b-quark tagging algorithms is used to select a sample enriched in events with two b-quark jets -the other important background-reducing characteristic of top decays.
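Here is a minimal sketch of the kind of azimuthal-angle cleanup cut described above; the threshold values are placeholders of mine, not the ones used in the CDF analysis:

```python
import math

def passes_met_cleanup(met, met_phi, jets, met_cut=50.0, dphi_cut=0.4):
    """met in GeV, met_phi in radians, jets given as a list of (pt, phi) pairs.
    dphi_cut is a placeholder threshold, not the analysis value."""
    if met < met_cut:
        return False
    for _, jet_phi in jets:
        # Fold the azimuthal difference into [0, pi].
        dphi = abs(met_phi - jet_phi) % (2 * math.pi)
        if dphi > math.pi:
            dphi = 2 * math.pi - dphi
        # A jet pointing along the missing ET suggests a mismeasured jet rather than a neutrino.
        if dphi < dphi_cut:
            return False
    return True
```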

Three different classes of b-enriched events are selected. Two classes depend solely on the presence in the jets of one or two “Secvtx” b-tags: these are explicitly reconstructed secondary vertices, signalling the decay in flight of a B-hadron. A third class collects events with one Secvtx b-tag plus a jet tagged by a different algorithm, “JetProb”. JetProb computes the probability that charged tracks contained in a jet originate from the primary vertex, and tags jets which are likely to contain a long-lived particle.

The three classes have different signal purities, and analyzing them separately allows one to extract more information from the data sample than a combined analysis of all b-tagged events would.

A neural-network classifier is used to discriminate single-top events from the surviving backgrounds, which are predominantly due to a combination of three processes: “W+jets” production, which arises when a W boson is created along with QCD radiation; top pair production, which does produce missing energy and b-tagged jets, but typically has a larger number of jets; and the more generic QCD-multijet background, which may contaminate the sample when missing transverse energy is faked by a fluctuation in the energy measurement of one of the hadronic jets not removed by the azimuthal angle cuts mentioned above. Since the latter is the largest offender, this neural network -through the choice of kinematical variables- is aimed in particular at beating down the QCD background.
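To show the idea rather than the actual CDF implementation, here is a hedged sketch of a kinematic neural-network discriminant built with scikit-learn on toy data; the input variables, network settings, and populations are illustrative choices of mine:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Toy "events": the three columns could stand for missing ET, leading-jet pt, dijet mass...
X_signal = rng.normal(loc=1.0, scale=1.0, size=(1000, 3))
X_background = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))
X = np.vstack([X_signal, X_background])
y = np.array([1] * 1000 + [0] * 1000)

# A small feed-forward network trained to separate the two populations.
nn = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
nn.fit(X, y)

# The classifier output plays the role of the NN score shown in the plots below.
scores = nn.predict_proba(X)[:, 1]
```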

Above you can see the NN output for the class of events containing two Secvtx b-tags; points with error bars are the data, and the expected backgrounds are shown by color-coded histograms. The QCD background (in green) populates the negative region, as expected.

After the selection of high-NN-output events (NN>-0.1), about a thousand events survive in the “one Secvtx” class, and about a hundred in the “two Secvtx” and “one Secvtx-one JetProb” classes. The first class is dominated by QCD backgrounds, while in the second and third top pairs are the main contribution. These backgrounds are estimated precisely using a tagging-matrix approach: a parametrization of the probability of finding b-tags in jets as a function of the jet characteristics. Control samples of data are used to verify that the background expectations are accurate.
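The tagging-matrix idea can be sketched in a few lines; the binning and the way it is applied below are made up for illustration, not the CDF implementation:

```python
# Schematic tag-rate-matrix background estimate: per-jet tag probabilities,
# parametrized in coarse (pt, |eta|) bins, are applied to an untagged ("pretag")
# sample to predict the number of tagged background events.
def tag_probability(jet_pt, jet_eta, matrix):
    """Look up the per-jet tag probability in coarse (pt, |eta|) bins."""
    pt_bin = min(int(jet_pt // 20), len(matrix) - 1)
    eta_bin = min(int(abs(jet_eta) // 0.5), len(matrix[0]) - 1)
    return matrix[pt_bin][eta_bin]

def predicted_tagged_events(pretag_events, matrix):
    """pretag_events: list of events, each given as a list of (pt, eta) jets."""
    expectation = 0.0
    for jets in pretag_events:
        # Probability that at least one jet in the event is tagged.
        p_none = 1.0
        for pt, eta in jets:
            p_none *= 1.0 - tag_probability(pt, eta, matrix)
        expectation += 1.0 - p_none
    return expectation
```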

The analysis does not end there, though: the single top signal is small, and the samples have to be purified further. The authors use another neural network, trained with variables sensitive to the signal kinematics, and extract the signal size from the NN output distributions in the three different classes.

Below you can see the second-NN output for the first class of events. As you can see, the s-channel and t-channel single-top production processes are small, but the fit prefers to include them in the mixture.

The graph below displays the results class by class, and the combination.

The analysis finds a very nice result: the single top cross section is measured at \sigma_t = 4.9^{+2.5}_{-2.2} pb, in good agreement with standard model expectations. The measured significance of the signal is quoted at 2.1-\sigma, while the expected sensitivity of the search is given by the paper at 1.4-\sigma: this is a very important number to quote, as it allows one to size up with precision the relative importance of this determination, decoupling it from statistical fluctuation effects that may influence the particular value found by the analysis.

Since it is based on a data sample orthogonal to the others, the new measurement described above will, once combined with the other determinations, give a sizable contribution to the significance of the CDF signals of single top production: the authors must be heartily congratulated for their result!

And the goodies are not over: the measurement of the cross section for single top production can be directly translated into a determination of the V_{tb} matrix element of the Cabibbo-Kobayashi-Maskawa matrix. The plot below shows the result obtained by this search. Of course we are still far from a meaningful determination, and this is also reflected in the unphysical value obtained, which is however in good agreement with the expectation, close to 1.0 in the standard model.
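For the curious, the extraction works roughly as follows -a sketch under the simple assumption that the cross section scales with V_{tb} squared; the standard-model cross section used here is a ballpark number of mine, not the one quoted in the CDF note:

```python
import math

sigma_measured = 4.9   # pb, central value of this measurement
sigma_sm = 2.9         # pb, assumed standard-model prediction for |V_tb| = 1

# If sigma is proportional to |V_tb|^2, the central value follows from the ratio:
v_tb = math.sqrt(sigma_measured / sigma_sm)
print(f"|V_tb| ~ {v_tb:.2f}")   # comes out above 1, i.e. unphysical, as noted in the text
```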

I have not seen it there yet, but a description of these results, with a link to the public note on the analysis, should soon appear on the CDF single top searches web page.

UPDATE: The public web page of the analysis is here, and a .pdf file with the public note describing the result is at this link. Enjoy!
