
And here are the slides
January 13, 2008

Posted by dorigo in news, personal, physics, science.

As I promised a couple of days ago, here are some of the slides of the paper seminar that Julien gave in CDF on Thursday, with some of my comments.

The analysis authors are the nine people listed above. I must say that this effort is the result of six years of studies, and that several other researchers contributed to it at some point in the past. However, many dropped out as the analysis proved harder and lengthier to carry out than originally expected. The nine names above are those who kept fighting till the bitter end… Among them, I have to mention the constant support of Melvyn Shochet (University of Chicago), who despite his many obligations was always interested in bringing this analysis to completion and provided constant help, review, and insight. And of course I need not spend more words on Julien, without whom the analysis would never have converged into a paper.

Above are listed some of the reasons why this analysis is intrinsically interesting and important to carry out. In particular, for Higgs boson searches it is very important to have a calibration sample with which to show that energy corrections do achieve a better resolution in the dijet invariant mass. I see few studies on this going on right now, but I believe that if the Tevatron experiments are one day to claim some excess in their dijet mass distribution, they will have to show how their mass reconstruction performs on the Z \to b \bar b sample we have shown how to extract.

The preliminary selection above (two jets with Et>22 GeV) is applied to events collected by an SVT trigger – one exploiting the online measurement of track impact parameters to enrich the data with b-quark decays. The distributions show the transverse energy of the two jets; the cut at 22 GeV gets rid of events affected by the turn-on of the trigger acceptance, which would create trouble in modeling the dijet mass distribution at low mass.
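
Just to make the preselection concrete, here is a minimal sketch in Python. The event structure is hypothetical (the real analysis of course runs on CDF data formats); the point is simply that both leading jets must sit on the plateau of the trigger acceptance.

ET_CUT = 22.0  # GeV, chosen above the SVT trigger turn-on region

def passes_preselection(jet_et):
    """jet_et: list of jet transverse energies in GeV, any order."""
    if len(jet_et) < 2:
        return False
    leading, subleading = sorted(jet_et, reverse=True)[:2]
    # Both leading jets must be on the efficiency plateau, where the
    # trigger acceptance is flat and the dijet mass shape is easy to model.
    return leading > ET_CUT and subleading > ET_CUT

print(passes_preselection([45.3, 27.1, 9.8]))  # True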

 

Above is shown the kinematic selection applied to the data. Two variables are used. The first is the azimuthal angle between the leading jets, which peaks more sharply at 3.14 (back-to-back jets) for the Z decay process than for jet pairs originating from strong interactions – the background. The second, shown in the second graph, is the transverse energy of the third jet: it is a discriminating variable because Z boson production is an electroweak process, so additional jets are radiated less often by the quarks which produce the Z than in the gluon-dominated processes producing background dijet events.
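
In code, the only mildly tricky part is folding the azimuthal separation into [0, pi]. Here is a minimal sketch; the thresholds are placeholders of my own, not the cut values actually used in the analysis.

import math

DPHI_MIN = 3.0  # rad, placeholder: require nearly back-to-back leading jets
ET3_MAX = 10.0  # GeV, placeholder: veto events with a hard third jet

def delta_phi(phi1, phi2):
    """Azimuthal separation folded into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    return 2.0 * math.pi - dphi if dphi > math.pi else dphi

def passes_kinematics(phi1, phi2, et_jet3=0.0):
    # Z -> b bbar jets are back-to-back more often than QCD dijets,
    # and extra hard jets are rarer in electroweak Z production.
    return delta_phi(phi1, phi2) > DPHI_MIN and et_jet3 < ET3_MAX

print(passes_kinematics(0.1, 3.2, et_jet3=5.0))  # True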

After the kinematic selection, both leading jets are required to contain a secondary vertex (b-tag). The vertex invariant mass is a powerful variable to assess the quark content of the tagged jets: it is shown in the two graphs for events with one (top) and both (bottom) jets b-tagged. It is clearly seen that double b-tagging enriches the sample in b \bar b production, to the level of 90%.
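
A purity like that is typically extracted by fitting the vertex-mass distribution to flavor templates. A toy sketch of the idea, with made-up templates for b jets and non-b jets (nothing to do with the actual CDF templates):

import numpy as np

# Toy vertex-mass templates (normalized), purely illustrative:
# b jets have a harder vertex-mass spectrum than charm/light jets.
tpl_b = np.array([0.05, 0.10, 0.20, 0.30, 0.25, 0.10])
tpl_q = np.array([0.35, 0.30, 0.20, 0.10, 0.04, 0.01])

observed = 9000 * tpl_b + 1000 * tpl_q  # pretend data: 90% b purity

# Least-squares estimate of the two template yields
A = np.stack([tpl_b, tpl_q], axis=1)
yields, *_ = np.linalg.lstsq(A, observed, rcond=None)
print(f"b fraction = {yields[0] / yields.sum():.2f}")  # ~0.90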

Above, the modeling of the Z production kinematics in simulated events is studied by comparing Z \to ee and Z \to \mu \mu decays in real data and in simulation. Leptonic Z decays have very small backgrounds, so one can really compare their kinematics to the simulation with high accuracy. In the graphs one sees that the kinematics of the Z is well modeled. The distributions allow one to estimate an uncertainty in the selection acceptance due to differences between data and simulation.
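
The bottom line of such a comparison is a data/simulation efficiency ratio whose deviation from unity feeds the acceptance systematic. A sketch of the arithmetic, with made-up counts and simple binomial errors assumed:

import math

def efficiency(n_pass, n_total):
    eff = n_pass / n_total
    err = math.sqrt(eff * (1.0 - eff) / n_total)  # binomial uncertainty
    return eff, err

# Made-up counts, purely illustrative
eff_data, err_data = efficiency(4150, 10000)  # leptonic Z, real data
eff_mc, err_mc = efficiency(4230, 10000)      # leptonic Z, simulation

sf = eff_data / eff_mc
sf_err = sf * math.sqrt((err_data / eff_data) ** 2 + (err_mc / eff_mc) ** 2)
print(f"data/MC scale factor = {sf:.3f} +- {sf_err:.3f}")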

Above are shown the parametrizations of the background shape (left) and signal shape (right). The latter depends on the b-JES, the parameter we wish to measure: the scale factor to be applied to the measured transverse energy of b-jets. The data is fit to a sum of the background and signal shapes, with the fractions and the b-JES left free.
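
Schematically, the b-JES moves the signal template along the mass axis while the fractions set the normalizations. A toy version of such a fit – Gaussian signal, exponential background, nothing like the actual parametrizations – could look like this:

import numpy as np
from scipy.optimize import curve_fit

def model(m, n_sig, n_bkg, jes):
    # Toy shapes: the signal peak position scales with the b-JES.
    sig = np.exp(-0.5 * ((m - 91.2 * jes) / 12.0) ** 2)
    bkg = np.exp(-m / 40.0)
    return n_sig * sig / sig.sum() + n_bkg * bkg / bkg.sum()

rng = np.random.default_rng(1)
m = np.linspace(40.0, 200.0, 80)          # dijet-mass bin centers (GeV)
truth = model(m, 5600.0, 250000.0, 0.97)  # toy expectation
data = rng.poisson(truth).astype(float)   # toy "observed" bin counts

popt, pcov = curve_fit(model, m, data, p0=[5000.0, 240000.0, 1.0],
                       sigma=np.sqrt(np.maximum(data, 1.0)))
print(f"fitted b-JES = {popt[2]:.3f} +- {np.sqrt(pcov[2, 2]):.3f}")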

Above are shown the results of the fitting procedure on 260,000 events selected with the kinematic cuts and double b-tagging. The green histogram is the background, the red one is the signal. Blue points show the data, and the upper right inset shows the excess of data above the measured background, with the estimated Z content overlaid.  The fit measures the b-JES with an uncertainty of 1.1%. The number of signal events in the data is 5621, with a 7% uncertainty.

The tables above describe the estimated systematic uncertainties on the measurement of the b-jet energy scale. Some systematic uncertainties do not apply to the b-JES because they depend on the specific choice of Monte Carlo simulation: if one uses Pythia, as we did for the signal simulation, these must not be considered. If one instead uses Herwig for one's b-jets, one must then include them in the estimate of the b-jet energy scale, which is a number describing the agreement of data and simulation, and is thus simulation-dependent…

Above you can see the measurement of the Z cross section from the b \bar b final state – something which is produced for the first time at the Tevatron. The value obtained by the analysis is larger than, but consistent with, the theory prediction.

Above, Julien’s conclusions. I have to say, I would have stated them more strongly. In fact, top quark mass measurements in CDF really should use the information we provide with the b-JES measurement. If they do not, it is because of a lack of manpower, but also because of too much streamlining in the production of new results. Top mass measurements in the dilepton final state would almost halve their total systematics by using our number – but they would have to redo their analysis using jets with a cone radius of R=0.7, or study in detail what changes between the two jet definitions.

That is all… I will link here the preprint once it becomes available.

Comments

1. Andrea Giammanco - January 14, 2008

Why didn’t you use R=0.4?
Is it really so hard to redo it with a different jet definition? If yes, why?

2. dorigo - January 14, 2008

Hi Andrea,

0.7 is more appropriate for a dijet resonance, because it minimizes the uncertainties due to out-of-cone energy losses, and the jet resolution is best.

Redoing it with another cone would be hard because one would have to change basically everything in the analysis, from data selection to cut optimization to sample composition to background shape modeling…

Of course one could do it in a quick and dirty way, by using the same selection and just changing the jet cone at the end – one would then only have to change the background modeling and a few other things – but one would learn little that way.

Cheers,
T.


