
Who discovered single top production ? March 5, 2009

Posted by dorigo in news, physics, science.

Both CDF and DZERO announced yesterday the first observation of electroweak production of single top quarks in proton-antiproton collisions. Both papers (this one from CDF, and this one from DZERO) claim theirs is the first observation of the long-sought-after subatomic reaction. Who is right ? Who has more merit in this advancement of human knowledge of fundamental interactions ? Whose analysis is more credible ? Which of the two results has fewer blemishes ?

To me, it is always a matter of picking the most relevant question. And to me, the most relevant question is: who cares who did it ? ... with the easy-to-guess answer: not me. As I have had other occasions to say, I am for the advancement of Science, much less for the advancement of scientific careers, let alone for the particular experiments those careers belong to.

The top quark is interesting, but so far the Tevatron experiments have only studied it when produced in pairs with its antiparticle, through strong interactions. Electroweak production of single top quarks is also possible in proton-antiproton collisions, at roughly half the rate. It is one of those rare instances in which the electroweak force competes with the strong one, and it happens because of the large mass of the top quark: producing two top quarks is much more demanding than producing only one, given the limited energy budget of the collisions. The reactions capable of producing a single top quark are described by the diagrams shown above. In a), a b-quark from one of the projectiles becomes a top quark through the exchange of a weak vector boson; in b), a gluon “fuses” with a W boson and a top quark is created; in c), a W boson is produced off-mass-shell, with enough energy to decay into a top-bottom pair.
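A compact restatement, using the standard names for these channels; this is my reading of the description above, and the partonic reactions are illustrative rather than taken from the papers:

```latex
\begin{align*}
\text{(a)}\quad & q\, b \to q'\, t                       && t\text{-channel }W\text{ exchange} \\
\text{(b)}\quad & q\, g \to q'\, t\, \bar{b}              && W\text{-gluon fusion} \\
\text{(c)}\quad & q\, \bar{q}' \to W^{*} \to t\, \bar{b}  && s\text{-channel, off-shell }W
\end{align*}
```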

Since 1995, when CDF and DZERO jointly published the observation of the top quark, nobody has ever doubted that electroweak processes would produce single top quarks as well. Not even one article, to my knowledge, tried to speculate that the top might be so special as to have no weak couplings. The very few early attempts at casting doubt on the real nature of what the Tevatron experiments were producing died quickly as statistics improved and the characterization of the newfound quark was furthered. So what is the fuss about finding out that the reaction resulting from the Feynman diagrams shown above can indeed be directly observed ?

There are different facets to a thorough answer to the above question. First of all, competition between CDF and DZERO: each collaboration badly wanted to get there first, especially since this was correctly predicted from the outset to be a tough nut to crack. Second, seeing single top production implies having direct access to one element of the Cabibbo-Kobayashi-Maskawa mixing matrix, the element V_{tb}, which is after all a fundamental parameter of the standard model (well, to be precise it is a function of some of the latter, namely the fundamental CKM matrix parameters, but let’s not split hairs here). Third, you cannot really see a low-mass Higgs at the Tevatron if you have not measured single top production first, because single top is a background in Higgs boson searches, and one cannot really discover something by assuming something else is there, if one has not proven that beforehand.

So, single top observation is important after all. I am a member of the CDF collaboration, and I am really proud I belong to it, so my judgement on the whole issue might be biased. But if I have to answer the question that gave this post its title, I will first give you a very short summary of the results of the two analyses, deferring a more detailed discussion to a better day. This will allow me to drive home a few points.

The two analyses: a face-to-face summary

  • Significance: both experiments claim that the signal they observe has a statistical significance of 5.0 standard deviations.
  1. CDF uses 3.2 inverse femtobarns, and finds a 5.0-sigma-significance signal of single top production. The sensitivity of the analysis is better measured by the expected significance, which is quoted at 5.9-sigma.
  2. DZERO uses 2.4 inverse femtobarns, and finds a 5.0-sigma-significance signal of single top production. The sensitivity of the DZERO analysis is quoted at 4.5-sigma.
  • Cross-section: both experiments measure a cross section in agreement with standard model expectations.
  1. CDF measures \sigma = 2.3^{+0.6}_{-0.5} pb, a relative uncertainty of about 24%.
  2. DZERO measures \sigma = 3.9 \pm 0.9 pb, a relative uncertainty of about 23%.
  • Measurement of the CKM matrix element: both experiments quote a direct determination of that quantity, which is very close to 1.0 in the SM, and cannot exceed unity (a naive cross-check of these numbers is sketched at the end of this summary).
  1. CDF finds |V_{tb}|=0.91 \pm 0.11, a 12% accuracy.
  2. DZERO finds |V_{tb}|=1.07 \pm 0.12, an 11% accuracy.
  • Data distributions: both experiments have a super-discriminant which combines the information from different searches. This is a graphical display of the power of the analysis, and should be examined with care.

1. CDF in its paper shows the distribution below, as well as the five inputs that were used to obtain it. The distribution shows the single-top contribution in red, stacked on top of the contributing backgrounds. At high values of the discriminant the single top signal does stick out, and the black points -the data- follow the sum of all processes nicely.

2. DZERO in its paper has only the distribution shown below. I was underwhelmed when I saw it. Again, backgrounds are stacked one on top of the other, the topmost histogram is the single-top contribution (this time shown in blue), and the data are shown by black dots. It does not look like the data prefer the hypothesis of backgrounds+single top over the background-only one all that much!
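As promised, here is the naive cross-check of the |V_{tb}| numbers. Since the single top cross section scales as |V_{tb}|^2, to first approximation |V_{tb}| is the square root of the ratio of measured to predicted cross section. The little Python sketch below does just that back-of-the-envelope exercise, using the SM predictions quoted later in the comments (about 2.9 pb for CDF with a 175 GeV top mass, 3.46 pb for DZERO with 170 GeV); it ignores the uncertainty on the predictions and the full systematic treatment of the real extractions.

```python
from math import sqrt

# Naive |V_tb| extraction: sigma(single top) scales as |V_tb|^2, so
# |V_tb| ~ sqrt(sigma_measured / sigma_SM).  Purely illustrative; the
# published values come from a much more careful treatment.
measurements = {
    # experiment: (measured xs [pb], xs uncertainty [pb], assumed SM xs [pb])
    "CDF":   (2.3, 0.55, 2.9),    # 0.55 pb = average of the +0.6/-0.5 errors
    "DZERO": (3.9, 0.9,  3.46),
}

for exp, (xs, dxs, xs_sm) in measurements.items():
    vtb  = sqrt(xs / xs_sm)
    dvtb = 0.5 * vtb * (dxs / xs)   # error propagation, neglecting the theory uncertainty
    print(f"{exp}: |V_tb| ~ {vtb:.2f} +- {dvtb:.2f}")
```

Reassuringly, the naive numbers come out at about 0.89 ± 0.11 and 1.06 ± 0.12, within a couple of hundredths of the published values.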

Maybe I am too partisan to really make a credible point here, and since I did not follow in detail the development of these analyses -from their first publications as evidence for single top, to updates, until yesterday’s papers- I may very well be proven wrong; however, by looking at the two plots above, and knowing that both are quoted as 5.0-sigma signals, I am drawn to the conclusion that DZERO trusts its background shapes and normalization much more than CDF does!

Now, believing something is a good thing in almost all human activities except Science. And if two scientific collaborations take very different views of how well their backgrounds are modeled by Monte Carlo simulations (which, at least as far as the generation of subatomic processes is concerned, are -or can be- the same), which one deserves more praise: the one that trusts the simulations more to extract its signal, or the one that relies on them less?

The above question is rhetorical, and you should have already agreed that you place more value on a result that relies less on simulations. So let us look into this issue a bit further. CDF bases its result on a total sample of 4780 events, where the total uncertainty is estimated at ±533 events. DZERO bases its own on a sample of 4651 events, with a total uncertainty estimated at ±234 events! What drives such a large difference in the precision of these predictions ?

The culprit is one of the backgrounds, the production of W bosons in association with heavy flavor quarks – an annoying process which enters all selections of top quarks and Higgs bosons at the Tevatron. CDF has it at 1855 events, with an uncertainty of 486, or 26.2%; it is shown in green in the CDF plot above. DZERO has it at 2646 events, with an uncertainty of 173, or 6.5%; it is also shown in green in the DZERO plot. Do not be distracted by the different size of the W plus heavy flavor contribution in the two datasets: different selection strategies make the numbers differ, and it is rather the similarity of the total event counts of the two analyses which is pure chance. The point here is the uncertainty.
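For the record, the percentages above are simple ratios; the snippet below reproduces them, and also computes the corresponding numbers for the two total predictions (about 11% for CDF and 5% for DZERO), which make the same point.

```python
# Relative uncertainties of the quoted background predictions (numbers from the text above).
samples = {
    "CDF total prediction":   (4780, 533),
    "DZERO total prediction": (4651, 234),
    "CDF W + heavy flavor":   (1855, 486),
    "DZERO W + heavy flavor": (2646, 173),
}
for name, (n, dn) in samples.items():
    print(f"{name:24s} {n:5d} +- {dn:3d} events  ({100.0 * dn / n:4.1f}%)")
```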

Luckily, the DZERO analysis does not appear to rely too much on the background normalization: this is not a simple counting experiment, where the better you know the size of the expected backgrounds, the smaller your uncertainty on the signal; rather, the shapes of the backgrounds are what matters, and the graphs above show that the data do appear well described by the discriminant shape. And of course, background shapes are checked in control samples, so both experiments have many tools to ensure that the different contributions are well understood. However, the issue remains: how much do the different estimates of the W plus heavy flavor uncertainty impact the significance of the measurements ? The DZERO paper mentions that one of their largest uncertainties arises from the modeling of the heavy flavor composition of W+jet events, but it does not provide further details.
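To see why I keep harping on that number, consider a toy counting experiment – which, to repeat, neither analysis is. A rough figure of merit in that simplified world is S / sqrt(B + dB^2), with S the signal yield, B the background yield and dB its normalization uncertainty. The sketch below uses a purely hypothetical signal of 250 events (a number made up for illustration) together with the W plus heavy flavor yields and uncertainties quoted above; it shows how dramatically the background normalization uncertainty would degrade a pure counting measurement in either case – which is why the shape information must do the heavy lifting – and how much worse things get at 26% than at 6.5%.

```python
from math import sqrt

# Toy counting-experiment figure of merit: S / sqrt(B + dB^2).
# S is an arbitrary, hypothetical signal yield chosen only for illustration;
# B and dB are the W+heavy-flavor yields and uncertainties quoted in the text.
S = 250.0
backgrounds = {
    "CDF-like  (486 events, 26.2%)": (1855.0, 486.0),
    "DZERO-like (173 events, 6.5%)": (2646.0, 173.0),
}
for label, (B, dB) in backgrounds.items():
    z_stat = S / sqrt(B)            # background statistical fluctuations only
    z_syst = S / sqrt(B + dB**2)    # including the normalization uncertainty
    print(f"{label}: {z_stat:.1f} sigma (stat only) -> {z_syst:.1f} sigma (with dB)")
```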

I would be happy to receive an informed answer in the comments thread about the points I mention above…


Comments

1. gordonwatts - March 5, 2009

Congrats to both collaborations for this! I didn’t work on the 5 sigma version, but I know how much work it is from the 3 sigma version of the analysis – and from the internal mailing lists it was clear this was quite a bit more work. Really fantastic!

2. dorigo - March 5, 2009

Hi Gordon,

yes, all are to be congratulated. It is a lot of work indeed. Maybe you can answer my question about the W+h.f. background ? I reckon that ALPGEN is used for W+jets, but how exactly is the bb contribution obtained ? Why such a small uncertainty ?

Cheers,
T.

3. gordonwatts - March 5, 2009

Yo Tomasso, I’ve made sure the people that really know are aware of the question. I don’t know if they will respond here or not; I hope here. I think I can guess, but I’m basing my guess on another analysis I’m involved in, not on this one.

So — just a quick question — where did you get the 2646 number from? Did you just add across the line in our paper (table I)? If so that is more than just W+HF.

BTW — note that you are comparing a log plot and a linear plot – if we were to make a log plot of the same plot it would look much more similar to yours – only our point would be high rather than low as yours is.

4. carlbrannen - March 5, 2009

Obviously this should be decided by the grad students of the respective collaborations. This is the kind of thing that paintball was invented for.

5. dorigo - March 5, 2009

Hi Gordon,

well, the line refers to W+jets, and the data has 1 or 2 b-tags… I know there’s some W+charm and mistags, but overall the bulk is heavy flavor, I think. In any case, the CDF analysis is similar…

Ok for the plot, although the insets are both linear, and there for some reason the CDF signal convinces me more. But then again, I might have a biased eye.

Cheers,

6. dorigo - March 5, 2009

Carl, LOL that’s a good suggestion!
T.

7. dorigo - March 5, 2009

PS what is paintball ?

8. gordonwatts - March 5, 2009

Ha! I love it — paintball!

paintball – you dress up and go into the woods and try to kill each other with guns. Only the guns are loaded with small pellets of paint. That way you can see that someone is dead because they are marked with paint.

Don’t tell me this is something that is only American. Guns!? Sheesh! ;-)

9. dorigo - March 5, 2009

You guys are sick! :)

10. gordonwatts - March 5, 2009

well, we are american… ;-) Say — who is your prez? ;-) See, we both can be messed up! I guess our country is bigger, so when we are messed up _everybody_ suffers.

11. dorigo - March 5, 2009

Ouch, that was quite a blow! Yes, italians just have to shut up these days.

12. gordonwatts - March 6, 2009

Sorry. I know how much you love the guy (having read your posts about him). But I sympathize. It is so nice not having to duck and hide my head when these sorts of conversations come up. At any rate — all of this will pass. The world has to become a better place, right? I’ll try to track down the physics answers for you.

13. carlbrannen - March 6, 2009

Paintball wasn’t my idea. It was the theme of the latest episode of a comedy show based on physics grad students. If you go to LuMo’s website and click around, you can watch it. You will find out what a paintball gun looks like, and what people look like when they’re shot.

It turns out that one thing that animals absolutely hate is having humans point things at them and then getting hit by something at a distance. Must be racial memory of being hunted. So an unadvertised alternative use of paintball guns is keeping the local animals in line without having to dispose of rotting bodies.

14. Namit - March 6, 2009

Dear Tomasso,
A quick question regarding the theoretical cross-section:
the D0 paper quotes the cross-section to be 3.46 ± 0.18 pb while the CDF paper quotes a value of ~2.9 pb for the same (and both papers cite “N. Kidonakis, Phys. Rev. D 74, 114012 (2006)” for the SM cross-section prediction).
The two central values quoted above seem quite different. Any clue about this? A slight difference in the theoretical predictions is expected, since it seems that the D0 paper quotes the value of the cross-section for mt = 170 GeV while CDF has mt = 175 GeV. But such a large difference seems puzzling, unless I have missed something.

Thanks and regards,
Namit

15. dorigo - March 6, 2009

Dear Namit,

this is an excellent question. It is true that both papers quote Kidonakis, although CDF also quotes other papers. I will ask this question to the CDF authors, although I believe that the difference is mostly due to the different assumed top mass (170 vs 175 GeV).

Cheers,
T.

16. Namit - March 6, 2009

Dear Tomasso,
Though in general possible, it would be a bit surprising if the difference is entirely due to the choice of top mass – this would mean that the NLO (and resummation etc.) corrections are highly sensitive to the top mass value. A quick glance at the paper(s) does not seem to indicate this, though I may have missed something.
I look forward to the resolution of this apparent puzzle.
Regards,
namit

17. dorigo - March 6, 2009

Hi,

ok, most of the difference is from the top mass, but a part is due to the fact that CDF uses the NLO prediction, while DZERO uses a computation which includes some (but not all) of the NNLO effects.

Because of the incompleteness of the NNLO calculation, CDF chose to stick with NLO. This makes the expected cross section smaller for CDF.

Please also note that a smaller expected cross section translates into a smaller expected significance for the signal: so DZERO, by using NNLO and a smaller top mass, obtains an expected sensitivity which is sizably larger than it would be if they used the cross section used by CDF. Or one might say the opposite: the CDF expected significance would be larger, had they used a 3.5 pb expected cross section.

Hope that helps,
T.

18. rescolo - March 6, 2009

Dear Tomasso,
I am wondering how the two measurements of V_tb combine to establish a 3-sigma allowed range. This would have some impact on determinations of this parameter without assuming unitarity.

19. Andrea Giammanco - March 6, 2009

Hi rescolo, have a look at the thread of comments of the previous post.

20. dorigo - March 6, 2009

Yes, I recommend Andrea’s thorough explanation of the limitations in the extraction of Vtb. See this thread.

Cheers,
T.

21. rescolo - March 6, 2009

Thank you Andrea and Dorigo for the interesting thread!

22. Paolo - March 7, 2009

Yes, thanks a lot. Tommaso, the way you turned a not-particularly-interesting question (IMHO) into a nice discussion is impressive!

23. Paolo - March 7, 2009

PS: to all posters: TOMMASO, NOT TOMASSO!

24. dorigo - March 7, 2009

Thanks Paolo, I have stopped complaining by now, it’s a lost cause with many if not most English natives.

Cheers,
T.

25. carlbrannen - March 8, 2009

For English speakers, remember “Tommaso” as a variation of “Tommy”, say, as Tommy-so with the y changed to a because, well, Italians are just different.

26. Gordon Watts - March 10, 2009

Hi again. I’m finally working back to this — great weekend with family. Carl – did you see BBT this evening — both Summer and Smoot in one episode? Must be sweeps month! ;-)

Perhaps I’m missing something here – but how does the expected cross section that was used by the experiments affect the significance of the result? The significance is calculated by trying to make the background model (i.e. _no_ single top) fluctuate up to mimic the data. This involves millions and millions of runs – it took us weeks to do this. But as far as I know the signal doesn’t enter into that. Check out the very carefully worded (:-)) paragraph that stretches from pages 5 to 6.

Next – I’ve just had a few minutes to start to chase down the plot. First thing to know is we don’t actually use that plot. Rather, we have 24 sub-analyses (let’s see if I can get this right: 2 reconstruction versions, muon/electron, 1/2 tags, 2/3/4 jets – yes, that is 24). Once we have the BDT, BNN, and ME outputs for a particular channel we then create a combination BNN. That output distribution (x24) is what is fed to the actual likelihood. So, while that is most powerful, you’d need a poster to display it – and you can’t create a nice neat plot that shows how it looks — other than Figure 3 of the paper. Of course, there is a lot of math between the 24 different analysis inputs and that output, so it is hard for us to look at it and “feel good”. :-)

The systematic errors will take me a bit longer. First, judging from the text, the numbers that are shown in the table can’t be all that is used, right? There are shape systematics involved – so that can’t possibly be the whole story. For one thing – we have to have the systematic errors broken up into those 24 analysis channels.

BTW – except for the combination (which is fantastic to see in this round) what you see here is based on the same analysis philosophy as the evidence paper we put out in 2007 (which was when I was very active in the analysis, unlike now).

27. Gordon Watts - March 10, 2009

BTW – I think some of this will be addressed in the DZERO talk tomorrow at Fermilab.

28. Namit - March 10, 2009

Dear Tommaso,
Thanks for the response and sincere apologies for mis-spelling your name.

A related question: why are the two collaborations using different values of m_t (while quoting the expected cross-section)?

Namit

29. dorigo - March 10, 2009

Ciao Gordon,

what we have been discussing is that when an experiment uses an inflated expected cross section to derive the a-priori sensitivity of some data and some analysis, that experiment is going to obtain an inflated sensitivity. Thus DZERO’s expected 4.5 sigma are probably about 4.0 or so. Of course you can retort that it is CDF that plays it too safe by not using the NNLO (which is however incomplete) and by using 175 GeV instead of 170 GeV for the top mass, and then I agree -partly.

About the shape systs: of course those are dominating the uncertainty in the signal. But I still have not heard the answer of why the W+jet xs is known to a very small 6% in the DZERO analyses.

Cheers,
T.

30. dorigo - March 10, 2009

Hi Namit,

it is a choice – 170 and 175 are both round numbers. 175 GeV has been the reference for all of CDF’s Monte Carlo samples since Run II, while D0 probably uses 170 for the same reason.

Of course, using a smaller mass means that one expects more single top, which in turn implies that the expected sensitivity of that experiment is going to be computed higher.

Cheers,
T.

31. Gordon Watts - March 11, 2009

Hi again! Ok, I’ve been able to verify that that plot is what I thought it was – it is all the channels added together. That plot is never actually used – rather it is the sum of 24 individual discriminant outputs. So that is why it looks underwhelming – both high-S:B and low-S:B channels are all combined, which washes out the high ones.

By inflated sensitivity you mean the expected significance, right? I don’t get your argument. First of all, the NLO and NNLO are rescalings of some MC that we both generate. We use different generators – but presumably the events and the “shapes” are the same – except for any effects due to the difference in top mass. We generate at 170 and you at 175, and we each optimize our analysis for the top mass we generate at. So the difference between our analyses should be minimal – because they are optimized for their top masses, and I have a lot of trouble believing that the sensitivity of the analysis would change by a large amount with a 5 GeV shift.

The only difference will be in the cross section. For us the SM cross section is 3.46 pb, and for you it is 2.9 pb (let me know if I misread your paper). Let’s say the CDF NLO number is correct. We really should re-evaluate our analysis using an expected cross section of 2.9 – less than what we did – so our analysis will look more sensitive, not less.

I guess there are two competing effects here (at least that I’m thinking of): the change in MTop and the change in the cross section. The change in cross section is easy – the lower the number the more sensitive the analysis. The MTop change is very hard to judge since we would both re-optimize our analysis for the proper top mass. However, given the systematic error checks we’ve done I think the effect will be minimal compared to the change in the cross section.

So I guess I would end up claiming that our analysis is more sensitive than it looks if you think MTop = 170 is the right number, not less sensitive.

32. Gordon Watts - March 11, 2009

Namit – it is almost exactly what T says – tradition. Changing is actually very expensive. You would never change in the middle of an analysis – it takes months to remake all of the Monte Carlo. Both experiments are on a crazy 6-month release schedule so they are loath to do something like that unless they really really think it will change the results. A 5 GeV shift when there are not other particles near that mass peak doesn’t really change things very much.

33. Gordon Watts - March 11, 2009

Whew! I guess that is what you get for posting between potty training sessions! My logic is ok above until the end.

We are measuring how often the SM background fluctuates up to the SM background + single top. So the higher it has to fluctuate, the harder it is – which will result in increased sensitivity.

Because D0’s ST x-section is higher we measure against fluctuating up to a higher number. If we were to have used CDF’s cross section then our significance would have been less than it is shown, as Tommaso correctly points out above (and sorry for misspelling your name earlier).

34. Gordon Watts - March 11, 2009

And — one other thing to consider is the difference in the luminosity – if you are trying to compare the sensitivity of the raw analysis I would expect ours to go up if we used your size of data. Of course, if you are just trying to compare what is in the papers then this isn’t a point.

35. dorigo - March 11, 2009

Thank you Gordon for your very objective considerations. In the post I have been more partisan than you show yourself to be here…

Cheers,
T.

36. Latest on the Higgs « Not Even Wrong - March 11, 2009

[...] the papers are here and here. For an expository account, you can’t possibly do better than this one from Tommaso [...]

37. Namit - March 12, 2009

Dear Tommaso and Gordon,
Thanks a lot for the response. I do understand that changing the top mass in the Monte Carlos would mean a lot of work/effort. However, it is perhaps important to have a common value in order to compare with the theoretical predictions. The difference between the two numbers quoted by D0 and CDF (and as Tommaso said earlier, the bulk of the difference is due to the mt values used) is almost (a bit less actually) 15% – this means that the result is quite sensitive to the choice of mt!! And therefore the extracted V_tb will also differ.

As more data are analysed, this issue may become more and more important – at least that’s what it seems like.

Another related question: since CDF has mt=175GeV as the reference value, does that mean that one should put this value as the mt(pole) for B-physics observables rather than mt(pole)=170GeV? This could mean significant differences at some places.

Regards,
Namit

38. Dag - March 12, 2009

Hey guys,

Namit: You said “….this means that the result is quite sensitive to choice of mt…”
The theoretical cross section is really quite sensitive to m_top, but not the measured cross section. Most of our sensitivity comes from events with 2 jets, 1 of them b-tagged. For these events, the background is mainly W+jets (~80% in the signal region). The excess in data on top of the background is hence not very sensitive to the top mass.

Regarding the sensitivity (expected significance) – a larger expected cross section means higher sensitivity. But harder kinematics (larger m_top) means better discrimination against W+jets – which again is the main problem. At D0, our most discriminating variables against W+jets are H_T and the reconstructed top mass – both of these become more powerful with a higher top mass. I don’t know how much better separation gets, but I’d guess between 5 and 10%.

I’ll try to comment on the W+jets uncertainty also… but I’m far from an expert here:
The D0 number is extracted from the cross section measurement (Bayesian calculations integrating over the 24 channels). The total uncertainty of each of the W+jets components that are fed into the calculations is larger than 20%, but after considering the correlations between all uncertainties, and constraints from the fit with data, and combining all 24 channels, the total W+jets normalization uncertainty becomes only 6.5%.
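A schematic way to picture that shrinkage (this is not the actual Bayesian calculation, and the in-situ constraint below is a made-up number chosen only for illustration): if the prior uncertainty and the effective constraint coming from the fit to data were both Gaussian, they would combine in inverse quadrature, so a >20% prior can easily end up near the quoted 6.5%.

```python
# Schematic Gaussian combination of a prior normalization uncertainty with an
# effective in-situ constraint from the fit to data.  NOT the real procedure
# (a Bayesian integration over 24 correlated channels); the 7% constraint is
# a hypothetical number chosen only to illustrate the size of the effect.
prior_unc   = 0.20   # >20% prior uncertainty on a W+jets component
in_situ_unc = 0.07   # hypothetical effective constraint from the data
combined = (prior_unc**-2 + in_situ_unc**-2) ** -0.5
print(f"combined relative uncertainty: {combined:.1%}")   # about 6.6%
```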

39. gordonwatts - March 12, 2009

Fantastic – Dag is one of the people who was actually doing this analysis. Dag, thanks for commenting here.

T – does CDF do a profile fit for the uncertainties? The 6.5% number here is the output from that method. If CDF does, do you know what the output uncertainty is for CDF?

