Single top seen with no leptons! January 14, 2009
Posted by dorigo in news, personal, physics, science. Tags: CDF, missing energy, Purdue, Tevatron, top mass, top quark
This post has a rather long introduction which does not discuss single top production, but rather explains how the techniques for detecting top quark pairs at the Tevatron have evolved since the first searches. Informed readers who are interested mainly in the new CDF result for the single top cross section may skip the first two sections below…
Introduction: missing energy as the main tag of top quarks
In the years before the top quark discovery, and for a few years thereafter, top quark pairs produced by the Tevatron proton-antiproton collider were searched for by the CDF and D0 experiments with a quite clear, if a bit unimaginative, three-pronged strategy.
A top quark pair candidate event could be extracted from backgrounds if it contained two charged leptons -basically electrons or muons-, missing transverse energy, and two hadronic jets (the dilepton signature, pictured above); or if there were one charged lepton, missing transverse energy, and three or four hadronic jets (the single-lepton signature, shown on the right); or finally, if it just showed six hadronic jets (the all-hadronic signature).
(A note, to avoid losing from square one those of you who feel inadequate for not knowing what a jet or missing energy is: Jets are the result of the materialization of high-energy quarks, which are kicked out of the colliding protons or materialized by the released energy, into streams of hadronic particles; they appear in collider detectors as localized deposits of energy. Missing energy results from the escape of undetected particles, typically neutrinos. More on this below…)
The three final states mentioned above were the result of the different decay modes of the two W bosons always present in a top pair decay: if both W bosons decayed to lepton-neutrino pairs one would get a dilepton event; if one decayed to a lepton-neutrino and the other to a pair of hadronic jets the single-lepton final state would arise; and if both decayed to jets one would get the six-jet topology. Life in the top physics group was just that easy.
The dilepton final state is the cleanest of the three channels, and the all-hadronic final state the dirtiest: in proton-antiproton collisions a simple rule of thumb states that the more leptons you are after, the cleaner your signal is, and conversely the more jets you look for, the deeper you have to dig in the mud of strong interactions. That is because strong interactions (or QCD, for Quantum ChromoDynamics) produce lots and lots of jets, and very rarely do they yield leptons; and QCD is the name of the game in proton-antiproton collisions.
It took quite a while to realize that one could imagine other successful ways to extract top-quark pairs from Tevatron data. A sizable step forward on this issue was made by yours truly with the help of a graduate student, I am proud to note. Let me explain this in a few lines.
While the search for leptons (electrons, muons, tauons) is a way to clean the dataset from QCD backgrounds, the explicit identification of these particles necessarily results in a reduction of the available top signal. The CDF and D0 experiments are well-suited to detect electrons and muons, but only when these particles are produced at a large angle from the proton beam axis -i.e., "centrally"; moreover, the lepton identification efficiency is never 100% even in those cases. As for tauons, they are much harder to detect: the tauon is a heavy particle, so despite being a lepton it can decay into light hadrons, mimicking a hadronic jet.
All in all, if one considers the single-lepton final state of the process $t \bar t \to W^+ b \, W^- \bar b \to \ell \nu_\ell b \, q \bar q' \bar b$, which arises in a total of 44% of the cases, the typical fraction of top pair decays one may hope to collect in a clean dataset is not larger than 10%. The rest is lost when one explicitly requires a reconstructed, central, clean lepton signal.
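The 44% figure follows directly from the W decay branching fractions, with $BR(W \to \ell \nu) \simeq 1/3$ (summed over $\ell = e, \mu, \tau$) and $BR(W \to q \bar q') \simeq 2/3$:

$$2 \times \frac{1}{3} \times \frac{2}{3} = \frac{4}{9} \simeq 44\%,$$

where the factor of two counts which of the two W bosons is the one decaying leptonically.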
Put this way, it raises an obvious question: what are we going to do with the large fraction of lost top pair decays? The answer, for eight years after the top discovery, was simple: nothing. There had been, in truth, attempts at loosening the identification requirements on leptons; but the fact that leptons are the means by which those events are collected -they are requested by the online triggering system- called for a more radical solution. So Giorgio Cortiana and I, while looking for a suitable thesis topic for him, decided to drop the lepton request altogether, and to simply look for top pairs in data containing just missing transverse energy and jets.
Missing transverse energy is a powerful signature at hadron colliders by itself, because it may signal the presence of an energetic neutrino escaping the detector. The signature arises from a simple calculation of the energy flowing out of the interaction point in the plane transverse to the beam direction: in that plane, momentum conservation implies that the vector sum of all particle momenta is zero, within measurement uncertainties. If it is very different from zero, either one or more particles have escaped unnoticed, or some of them have been measured imprecisely.
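In formulas, the raw missing transverse energy is built from the calorimeter deposits -this is the textbook definition, before the jet and muon corrections that experiments apply in practice:

$$\vec E_T^{\,miss} = -\sum_i E_T^i \, \hat n_i, \qquad \not\!\! E_T = |\vec E_T^{\,miss}|,$$

where the sum runs over calorimeter towers, $E_T^i$ is the transverse energy measured in tower $i$, and $\hat n_i$ is the unit vector pointing from the interaction point to the tower in the transverse plane.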
If a significant amount of missing transverse energy effectively tags an energetic neutrino, there is no need to search for an additional charged lepton to confirm that a leptonic decay has taken place! Energetic neutrinos are due either to a leptonic W decay, $W \to \ell \nu$, or to a Z boson decay to a pair of neutrinos, $Z \to \nu \bar \nu$. Now, Z bosons are even rarer than W bosons, so they do not constitute too worrisome a background. By ignoring the charged lepton that might accompany the missing transverse energy, one gains access to all the bounty of single-lepton decays of top quark pairs which the tight search discards -because the charged lepton went unseen in a hole of the detector, or failed the identification criteria.
Giorgio and I published two papers using the missing-energy-plus-jets signature: a cross-section measurement for top-pair production (paper here) which, at the time of publication, was the third-best result on that quantity, and a top quark mass measurement (paper here) which, despite carrying a large uncertainty, showed that the sample could be a promising ground for top physics measurements despite the lack of kinematic closure (the fact that one lepton is present but is unidentified means that one cannot completely define the decay kinematics: one then speaks of unconstrained kinematics). Now, I am glad to see that the same signature we used for top quark pairs is being exploited in CDF for a single top quark search.
A few words on single top production
Single tops are produced in proton-antiproton collisions by weak interaction processes, but they are not much less frequent than strongly-produced top pairs, because a pair of top quarks weighs twice as much as a single top quark does, and this has a huge impact on the cross section. Usually, the single-top production signature amounts to the leptonic top decay products -a charged lepton, missing energy, and a jet- accompanied by another jet or two, produced by the quark(s) originally recoiling against the top quark. If one considers the simplest diagrams giving rise to a single top quark, there are two very different processes: s-channel W* decay and t-channel W-gluon fusion. Let me explain what these are.
A regular, "on-shell" W boson -one which has a mass very close to the peak of the W resonance, $M_W \simeq 80.4$ GeV- does not decay into a top and a bottom quark: that is because the W is lighter than the required final state particles! But a W boson produced "off-mass-shell", i.e. with a mass much larger than its normal value, can indeed decay that way. One just has to remember that W bosons may have any mass from 0 to whatever value, but the probability that the mass is far from $M_W$ quickly becomes small, following a curve called a Breit-Wigner; one I have recently posted in a discussion about Z bosons, incidentally. You can check the shape there, bearing in mind that the peak for W bosons is 10 GeV smaller, and the width about 20% smaller. Anyway, when an off-mass-shell W boson decays as $W^* \to t \bar b$, as shown in the diagram on the right in the figure below, the final state ends up containing two b-quarks, plus the decay products of the second W boson appearing in the process -the one emitted by the top decay. So one has a lepton, missing transverse energy, and two b-jets.
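For reference, a resonance of nominal mass $M$ and width $\Gamma$ follows the relativistic Breit-Wigner shape (in one common convention):

$$P(m) \propto \frac{1}{(m^2 - M^2)^2 + M^2 \Gamma^2},$$

with $M_W \simeq 80.4$ GeV and $\Gamma_W \simeq 2.1$ GeV for the W, to be compared to $M_Z \simeq 91.2$ GeV and $\Gamma_Z \simeq 2.5$ GeV for the Z: hence the "10 GeV smaller, 20% narrower" comparison above.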
The second way by means of which a single top quark may be produced in proton-antiproton collisions is shown on the left above, and it occurs via the splitting of a gluon from the proton into a bottom-antibottom quark pair: while one of them does not concern us, the other interacts with a W boson emitted from the other projectile, and a top quark is the result. One thus obtains the signature of three jets plus lepton plus missing transverse energy, and two of the jets still have b-quarks in them.
The new CDF result
Single top production has been sought at the Tevatron with enthusiasm in Run II, and CDF and D0 have already shown sizable signals of that process in datasets containing leptons, missing energy, and jets. But finally, a new analysis by the Purdue University group in CDF (Artur Apresyan, Fabrizio Margaroli, and Karolos Potamianos, led by Daniela Bortoletto) is now finding a signal without the help of the charged lepton. I of course cannot but be happy about it, since it is just another demonstration of the potential of the "lepton-ignoring" technique!
The new analysis selects events with a significant amount of missing transverse energy accompanied by two or three hadronic jets. Events are not accepted as signal candidates if the missing Et makes a small azimuthal angle with a jet, because that is a hint that the former may be due to a fluctuation in the energy measurement of the latter. After that selection, a combination of b-quark tagging algorithms is used to select a sample rich in b-quark jets -the other important background-reducing characteristic of top decays.
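To make the azimuthal-angle veto concrete, here is a minimal sketch in Python; the function names and the 0.5-radian threshold are illustrative assumptions of mine, not the values used by the analysis:

```python
import math

def delta_phi(phi1, phi2):
    """Azimuthal separation, folded into [0, pi]."""
    d = abs(phi1 - phi2) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def passes_met_jet_separation(met_phi, jet_phis, threshold=0.5):
    """Reject the event when the MET vector points along any jet:
    aligned MET is more likely a mismeasured jet than a neutrino."""
    return all(delta_phi(met_phi, phi) > threshold for phi in jet_phis)

# Example: MET at phi=0.1 is aligned with the jet at phi=0.2 -> rejected
print(passes_met_jet_separation(0.1, [0.2, 2.8]))  # False
```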
Three different classes of b-enriched events are selected. Two classes depend solely on the presence in the jets of one or two "Secvtx" b-tags: these are explicitly reconstructed secondary vertices, signalling the decay in flight of a B-hadron. A third class collects events with one Secvtx b-tag plus a jet tagged by a different algorithm, "JetProb". JetProb computes the probability that the charged tracks contained in a jet originate from the primary vertex, and tags jets which are likely to contain a long-lived particle.
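For the curious, here is a sketch of the standard way per-track probabilities are combined into a jet probability in ALEPH-style taggers; I am assuming this form for illustration, and the details of CDF's implementation may differ:

```python
import math

def jet_probability(track_probs):
    """Combine per-track probabilities -each uniform in (0,1] for tracks
    truly coming from the primary vertex- into the probability that the
    whole jet is compatible with it. Small values flag jets likely to
    contain a long-lived particle such as a B hadron."""
    n = len(track_probs)
    pi = math.prod(track_probs)           # product of track probabilities
    log_pi = -math.log(pi)
    # P_jet = Pi * sum_{k=0}^{n-1} (-ln Pi)^k / k!  (product of n uniforms)
    return pi * sum(log_pi ** k / math.factorial(k) for k in range(n))

# Three tracks with large impact-parameter significance -> small P_jet
print(jet_probability([0.01, 0.02, 0.05]))  # a likely b-jet
```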
The three classes have different signal purities, and their separate analysis allows one to extract more information from the data sample than a combination of all b-tagged events.
A neural-network classifier is used to discriminate single-top events from the surviving backgrounds, which are predominantly due to a combination of three processes: "W+jets" production, which arises when a W boson is created along with QCD radiation; top pair production, which does produce missing energy and b-tagged jets, but typically has a larger number of jets; and the more generic QCD-multijet background, which may contaminate the sample when missing transverse energy is faked by a weird fluctuation in the energy measurement of one of the hadronic jets, not removed by the azimuthal angle cuts mentioned above. Since the latter is the largest offender, this neural network -through the choice of kinematical variables- is aimed in particular at downsizing QCD events.
Above you can see the NN output for the class of events containing two Secvtx b-tags; points with error bars are the data, and the expected backgrounds are shown by color-coded histograms. The QCD background (in green) populates the negative region, as expected.
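Just to illustrate the idea -the analysis of course uses its own network and input variables- here is a toy sketch with scikit-learn, in which the three variables and all the numbers are invented for the example:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_sample(n, met_mean, dphi_mean, label):
    """Toy events with three kinematic inputs: missing Et, the scalar
    sum of jet Et (HT), and the minimum azimuthal MET-jet separation."""
    X = np.column_stack([
        rng.normal(met_mean, 15, n),    # missing Et (GeV)
        rng.normal(150, 40, n),         # HT (GeV)
        rng.normal(dphi_mean, 0.4, n),  # min dphi(MET, jet)
    ])
    return X, np.full(n, label)

# QCD fakes: softer MET, aligned with a jet; signal: harder, well separated
X_qcd, y_qcd = make_sample(5000, 40.0, 0.4, 0)
X_sig, y_sig = make_sample(5000, 70.0, 1.5, 1)
X, y = np.vstack([X_qcd, X_sig]), np.concatenate([y_qcd, y_sig])

nn = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
nn.fit(X, y)
# Cutting on the network output suppresses the QCD-like events:
print(nn.predict_proba(X[:5])[:, 1])
```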
After the selection of high-NN-output events, about a thousand events survive in the "one Secvtx" class, and about a hundred each in the "two Secvtx" and "one Secvtx-one JetProb" classes. The first class is dominated by QCD backgrounds, while the second and the third have top pairs as the main contribution. These backgrounds are precisely estimated using a tagging-matrix approach: a parametrization of the probability of finding b-tags in jets as a function of the jet characteristics. Control samples of data are used to verify that the background expectations are accurate.
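A minimal sketch of the tagging-matrix idea, assuming a matrix binned in jet Et and pseudorapidity; the binning and the rates below are invented for illustration:

```python
import numpy as np

# Hypothetical tag-rate matrix: per-jet probability of being b-tagged,
# binned in jet Et (rows) and |eta| (columns), measured in control data.
et_edges  = np.array([0.0, 40.0, 80.0, 120.0, np.inf])
eta_edges = np.array([0.0, 0.8, 1.6, 2.4])
tag_rate  = np.array([   # illustrative numbers, not CDF's
    [0.010, 0.008, 0.005],
    [0.015, 0.012, 0.008],
    [0.020, 0.015, 0.010],
    [0.025, 0.018, 0.012],
])

def jet_tag_prob(et, abs_eta):
    """Look up the parametrized tag probability for one jet."""
    i = np.searchsorted(et_edges, et, side="right") - 1
    j = np.searchsorted(eta_edges, abs_eta, side="right") - 1
    return tag_rate[i, j]

def predicted_tagged_events(pretag_events):
    """Apply the matrix to an untagged (pre-tag) sample to predict how
    many background events would contain at least one b-tagged jet."""
    total = 0.0
    for jets in pretag_events:   # each event: a list of (Et, |eta|) jets
        p_none = np.prod([1.0 - jet_tag_prob(et, eta) for et, eta in jets])
        total += 1.0 - p_none
    return total

events = [[(55.0, 0.5), (35.0, 1.2)], [(95.0, 0.3), (45.0, 2.0)]]
print(predicted_tagged_events(events))
```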
The analysis does not end there, though: the single top signal is small, and the samples have to be purified further. The authors use another neural network, trained with variables sensitive to the signal kinematics, and extract the signal size from the NN output distributions in the three different classes.
Below you can see the second-NN output for the first class of events. As you can see, the s-channel and t-channel single-top production processes are small, but the fit prefers to include them in the mixture.
The graph below displays the results class by class, and the combination.
The analysis finds a very nice result: the single top cross section is measured in good agreement with standard model expectations. The measured significance of the signal is quoted at 2.2 standard deviations, and the paper also quotes the expected sensitivity of the search: this is a very important number to quote, as it allows one to size up with precision the relative importance of this determination, decoupling it from the statistical fluctuation effects that may influence the particular value found by the analysis.
Since it is based on a data sample orthogonal to the others, the new measurement described above, once combined with the other determinations, will give a sizable contribution to the significance of the CDF signals of single top production: the authors must be heartily congratulated for their result!
And the goodies are not over: the measurement of the cross section for single top production can be directly translated into a determination of the $V_{tb}$ matrix element of the Cabibbo-Kobayashi-Maskawa matrix. The plot below shows the result obtained by this search. Of course we are still far from a meaningful determination, and this is also reflected in the unphysical value obtained, which is however in good agreement with the expectation, close to 1.0 in the standard model.
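The translation is direct because every single-top production diagram contains a Wtb vertex, so the cross section scales with $|V_{tb}|^2$; schematically,

$$|V_{tb}|^2 \simeq \frac{\sigma_{meas}}{\sigma_{SM}},$$

which is why a cross section fluctuating above the standard model prediction returns a best-fit $|V_{tb}|$ larger than 1.0 -the "unphysical value" mentioned above.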
I have not seen it there yet, but a page describing these results and linking to a public note on the analysis will soon appear on the public web page of single top searches in CDF.
UPDATE: The public web page of the analysis is here, and a .pdf file with the public note describing the result is at this link. Enjoy!
Comments
Hi,
Kind of an off question, but tangentially related. What percent of beam energy could you account for if you went looking for it?
That is, say, you have beams with X protons (anti-protons) in them. You have some percentage of actual interactions vs beam luminosity so say it is Y. So given a specific machine and a specific timeframe (so you have a standard energy per particle) you have some function Energy_in_interactions of X and Y for a given accelerator. What percentage of that can we account for by (*) actually looking at it with instruments and saying ok, e_1 energy in elastic collisions we see with ?, e_2 energy in mesons from interaction y_2, which we see with instrument ?? and so on.
(*) not all at once, but adding up the results of different detectors over time – once looking at this, then that.
I am asking because I am trying to understand if you are picking out specific cases with most of the interactions non-understandable, or are you filling in the nooks of things and we could look at almost all the interactions, so we are down to the hard cases of really rare interactions.
Thinking about it, what I am asking is what percentage of total beam luminosity could you UNDERSTAND if you looked for it and how much could you not even if you could see charged particles coming out? Say they hit twice and messed things up so it just couldn’t be figured out.
How much has this been getting better? That is, how much more efficient are we than 5, 10, 15 years ago?
MRK
You miss another point in your question… The quoted ’14 TeV collision energy’ (7 TeV per beam) is the energy per individual proton. The proton is made of three (valence) quarks, and a sea of other stuff (gluons and quarks). The energy is shared out between all these partons, so you will never see a 14 TeV collision.
With regards to (what I understand as the root of) your question, it’s not that most interactions are non-understandable; most are boring. That’s why we have trigger systems to select interesting events and throw away the rest. At CMS, we aim to reduce the ~40M collisions per second (which can have up to ~15 proton-proton interactions each at high luminosity) to about 150 per second stored for analysis.
Summing up the energy in subdetectors over time is a useful tool, used for some calibration studies (as you know that energy deposition will have certain rotational symmetries), but it is the symmetry that is meaningful, not the absolute value.
It seems that “the pdf file with the public note describing the result” requires a username and password which I do not have,
but
I was able to go to the “public web page of the analysis” and make a pdf file. One question I have:
In the section “-Disciminant NN Input Variables”
(probably should be Discriminant NN Input Variables,
but I make so many typos I am in no position to be critical of that)
there are
3 rows of 3 figures each followed by a 4th (last) row of two figures.
The middle figure of the second row plots
Events/10.20 v. Inv. Mass of 3 Leading Jets
and seems to me to show two data peaks,
one at about 150 and another at about 180.
The first (left-hand) figure of the fourth (last) row plots
Events/10.17 v. Inv. Mass of MET, Jet 1 & Jet 2
and seems to me to show a two-peak structure,
one at about 160 and another at about 180.
The first (left-hand) figure of the third row plots
Events/10.40 v. Inv. Mass of MET and Jet 2
and seems to me to show a data excess around 135.
As you might expect, I would like to interpret the two-peak structures as two states of a multi-state T-quark,
and
the excess around 135 as related to a low-mass T-quark state,
but
I expect that you would see them as statistically insignificant.
The public web page says “… we use a likelihood profile of this discriminant to measure the production cross section of single top events … We consider the uncertainty associated with the variation of the top mass. We consider top masses of 170 & 180 GeV as 2 standard deviation variations. This uncertainty is applied to all top processes. …”.
How does this analysis compare with
the CDF Likelihood Function (LF) Method
and
the CDF 1-dim and 2-dim Neural Network searches
that in 2006 did not find single-T events ?
In particular,
did the new analysis use less rigid assumption of Mt than the negative-result analyses of 2006,
thus allowing contributions from low-mass states of the T-quark?
Tony Smith
Nice post, Tommaso. Your enthusiasm for non-lepton-based searches is nice. But you have to admit that this analysis, a real tour de force, would not suffice to establish single-top production. In my view, the paper is valuable as an extended exercise in artificial neural network techniques and shows the capabilities of the CDF detector well. But I don’t see the measurement itself as significant.
Dear MRK,
first of all, sorry for the belated answer -your comment went unnoticed along with the others in this thread, because I’ve been flooded with comments on the Gaza war these days.
The CDF detector lets particles with a large component of longitudinal momentum escape through holes next to the beam pipe. This is done on purpose for two reasons: one, because the physics connected with those particles is not terribly interesting (although diffractive-physics aficionados do object), and two, because these particles have high energy and their intensity is very high too, resulting in problems with radiation hardness. There is then a third problem, connected with the fact that high-energy particles hitting detectors located very close to the beam would create sprays of secondary hadrons, which would mess up the measurements done in the central region.
That said, your question can be put in perspective. We cannot account for more than a few percent of the total energy released in the collisions, because most of it escapes along the beampipes. Even restricting to inelastic collisions, that remains true. But if you were to ask "What fraction of the energy do you detect from particles emitted at an angle larger than 20 degrees from the beam?", the answer would be 95 to 99%, because there the detector is quite hermetic. In fact, once the traceable losses from punch-through are accounted for, what is left is a few muons and neutrinos which escape undetected, and very few "blue sky" events, which happen when the interaction point is displaced from the center of the detector by some amount (but I am not even sure this still holds in Run II -it was an issue in Run 1, I think).
Cheers,
T.
Hi Tony,
sorry for the late answer. This analysis considers a dataset totally orthogonal to the one of 2006, so the two observations are also orthogonal. As you may imagine, given that the data is different, and that the final state objects are also different (no leptons in the latter case), the mass reconstruction cannot be compared in the two cases. In particular, I am not sure I understand which plot you are referring to, since this new analysis does not fully reconstruct the top mass.
Cheers,
T.
Hello Michael,
the current searches for single top have not yet fully established a signal. In CDF, for instance, the global significance is so far just 3.7 standard deviations. These 3.7 sigmas are the result of a combination of several different analyses, and the effort of dozens of people, as you well know. Therefore, I think the 2.2 standard deviations that this single analysis adds to the lot are significant.
If you ask me whether this analysis by itself would suffice to establish single top production, of course I have to say it doesn’t, but with ten times more data it probably would.
I could marry your point of view and agree that hadronic searches are not really advancing our understanding at the forefront, but they do fill a lot of gaps in the back lines…
Cheers,
T.
Tommaso, you say that the 2009 NN analysis that sees Single-T
“… is considering a totally orthogonal dataset to the one of 2006 …”.
Since two NN analyses of the 2006 dataset did not see Single-T,
I would be interested to see application of the 2009 NN analysis to the 2006 dataset.
If the 2009 NN analysis were to see Single-T in the 2006 dataset,
then
I would be interested to see exactly the differences in the NN code between the two 2006 NN and the 2009 NN
that enabled the 2009 NN to see what the 2006 NN did not see in the same dataset.
Also, it might be interesting to apply the 2006 NN to the 2009 dataset.
Such comparisons might give more physical intuition into how NN code works.
Tony Smith