Physics Highlights – May 2009 June 2, 2009 Posted by dorigo in news, physics, science.
Tags: CDF, DZERO, Fermi, heavy quarks, Hess, QCD, Randall, standard model
Here is a list of noteworthy pieces I published on my new blog site in May. Those of you who have not yet updated your links to point there might benefit from it…
Four things about four generations -the three families of fermions in the Standard Model could be complemented by a fourth: a recent preprint discusses the possibility.
Fermi and Hess do not confirm a dark matter signal: a discussion of recent measurements of the electron and positron cosmic ray fluxes.
Nit-picking on the Omega_b Discovery: A discussion of the significance of the signal found by DZERO, attributed to an Omega_b particle.
Nit-picking on the Omega_b Baryon -part II: A pseudoexperiments approach to the assessment of the significance of the signal found by DZERO.
The real discovery of the Omega_b released by CDF today: Announcing the observation of the Omega_b by CDF.
CDF versus DZERO: and the winner is…: A comparison of the two “discoveries” of the Omega_b particle.
The Tevatron Higgs limits strengthened by a new theoretical study: a discussion of a new calculation of Higgs cross sections, showing an increase in the predictions with respect to the numbers used by the Tevatron experiments.
Citizen Randall: a report on the conferral of honorary citizenship of Padova on Lisa Randall.
Hadronic Dibosons seen -next stop: the Higgs: A report of the new observation of WW/WZ/ZZ decays where one of the bosons decays to jet pairs.
Tags: conferences, extra dimensions, neutrino, standard model
This post and the few ones that will follow are for experts only, and I apologize in advance to those of you who do not have a background in particle physics: I will resume more down-to-earth discussions of physics very soon. Below, a short writeup is offered of Steve King’s talk, which I listened to during day three of the “Neutrino Telescopes” conference in Venice, three weeks ago. Any mistake in these writeups is totally my own fault. The slides of all talks, including the one reported here, have been made available at the conference site.
Most of the talk focused on a decision tree for neutrino mass models. This is some kind of flow diagram to decide -better, decode- the nature of neutrinos and their role in particle physics.
In the Standard Model there are no right-handed neutrinos, only Higgs doublets of the SU(2)_L gauge symmetry, and the theory contains only renormalizable terms. If the above hypotheses all apply, then neutrinos are massless, and three separate lepton numbers are conserved. To generate neutrino masses, one must relax one of the three conditions.
The decision tree starts with the question: is the LSND result true or false? If it is true, then are neutrinos sterile or CPT-violating? Otherwise, if the LSND result is false, one must decide whether neutrinos are Dirac or Majorana particles. If they are Dirac particles, they point to extra dimensions; if they are Majorana particles, several consequences follow, tri-bimaximal mixing among them.
So, to start at the beginning: is LSND true or false? MiniBooNE does not support the LSND result, but it does support three-neutrino mixing. LSND is assumed false in this talk. One then has to answer the question: are neutrinos Dirac or Majorana? Depending on the answer, you can write down mass terms of different kinds in the Lagrangian. Majorana masses violate total lepton number as well as the three individual lepton numbers. Dirac masses couple left-handed neutrinos to right-handed neutrinos; in this case the neutrino is not equal to the antineutrino.
The first possibility is that neutrinos are Dirac particles. This raises an interesting question: they must have a very small Yukawa coupling. The Higgs vacuum expectation value is about 175 GeV, and the Yukawa coupling is about 3E-6 for the electron: this is already quite small. If we do the same for neutrinos, the Yukawa coupling must be of the order of 10^-12 for an electron neutrino mass of 0.2 eV. This raises the question of why it is so small.
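If you want to check this arithmetic yourself, it is one line of Python per coupling (a trivial sketch, using the 175 GeV vev and 0.2 eV neutrino mass quoted above):

```python
# Back-of-the-envelope Yukawa couplings, y = m / v, using the Higgs
# vacuum expectation value v ~ 175 GeV quoted above (all masses in eV).
v = 175e9              # Higgs vev, eV
m_electron = 0.511e6   # electron mass, eV
m_nu = 0.2             # electron-neutrino mass used in the talk, eV

y_electron = m_electron / v   # ~3e-6, as quoted in the talk
y_neutrino = m_nu / v         # ~1e-12
print(f"electron Yukawa ~ {y_electron:.1e}")
print(f"neutrino Yukawa ~ {y_neutrino:.1e}")
```

The six-orders-of-magnitude gap between the two couplings is the puzzle the extra-dimensional models below try to address.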
One possibility then is provided by theories with extra dimensions: first one may consider flat extra dimensions, with right-handed neutrinos in the bulk (see graph on the right). These particles live in the bulk, whereas we are trapped on a brane. When we write a Yukawa term for neutrinos we get a volume suppression, corresponding to the spread of the wavefunction outside of our world. It goes as one over the square root of the volume, so if the string scale is smaller than the Planck scale we get the right size of coupling.
The other sort of extra dimensions (see below) are the warped ones, with the standard model sitting in the bulk. The wavefunction of the Higgs overlaps with fermions, and this gives exponentially suppressed Dirac masses, depending on the fermion profiles. Because electrons and muons peak in the Planck brane while we live in the TeV brane, where the top quark peaks, this provides a natural way of giving a hierarchy to particle masses.
Some of these models address the problem of dark energy in the Universe. Neutrino telescopes studying neutrinos from gamma-ray bursts may shed light on this issue along with quantum gravity and neutrino mass. The time delay relative to low-energy photons as a function of redshift can be studied against the energy of neutrinos. The function lines are different, and they depend on the models of dark energy. The point is that by studying neutrinos from gamma-ray bursts, one has a handle to measure dark energy.
Now let us go back to the second possibility: namely, that neutrinos are Majorana particles. In this case you have two choices: a renormalizable operator with a Higgs triplet, or a non-renormalizable operator with a lepton-number-violating term, the dimension-five Weinberg operator LLHH/M. Because it is non-renormalizable you get a mass suppression, a mass M at the denominator, which corresponds to some high energy scale. The way to implement this is to imagine that the mass suppression is due to the exchange of a massive particle between the Higgs and lepton fields, either in the s-channel or in the t-channel.
We can concentrate on see-saw mechanisms for the rest of the talk. There are several types of such models: type I essentially exchanges a heavy right-handed neutrino in the s-channel with the Higgs; type II instead exchanges something in the t-channel, which could be a heavy Higgs triplet, and this also gives a suppressed mass.
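Inverting the see-saw suppression gives a feel for the scale involved. A minimal sketch, assuming the standard type-I relation m_nu ≈ v²/M and the numbers used earlier in the talk:

```python
# Type-I see-saw suppression: m_nu ≈ v^2 / M, so the heavy scale is
# M ≈ v^2 / m_nu. Numbers are illustrative, taken from the text above.
v = 175.0            # Higgs vev, GeV
m_nu = 0.2e-9        # a 0.2 eV neutrino mass, in GeV
M = v**2 / m_nu      # mass of the exchanged heavy particle
print(f"see-saw scale M ~ {M:.1e} GeV")
```

The result, around 10^14 GeV, lands suggestively close to the grand-unification scale, which is a large part of the see-saw's appeal.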
The two see-saw types can work together. One may think of a unit matrix coming from a type-II see-saw, with the mass splittings and mixings coming from the type-I contribution. In this case the type II would render neutrinoless double beta decay observable.
Moving down the decision tree, we come to the question of whether we have precise tri-bimaximal mixing (TBM). The matrix (see figure on the right) corresponds to angles of the standard parametrization of θ12 ≈ 35.3° (sin²θ12 = 1/3), θ23 = 45°, and θ13 = 0. These values are consistent with observations so far.
Let us consider the form of the neutrino mass matrix assuming the correctness of the TBM matrix. We can derive what the mass matrix is by conjugating the diagonal matrix of masses with the mixing matrix. It has three terms, one proportional to each mass, and each term is the outer product of one column of the TBM matrix with itself. When you add the three matrices together, you get the total mass matrix: it is symmetric, and its six independent entries satisfy simple relations among themselves (the standard characterization is that the matrix is mu-tau symmetric and its rows obey the "magic" equal-sum condition).
Such a mass matrix is called “form-diagonalizable”, since it is diagonalized by the TBM matrix for all values of a, b, d; a, b, d translate into the masses. There is no cancellation of parameters involved, and the whole thing is extremely elegant. This suggests something called “form dominance”, a mechanism to achieve a form-diagonalizable effective neutrino mass matrix from the type-I see-saw. Working in the basis where the right-handed Majorana mass matrix is diagonal, the Dirac mass matrix can be written as three column vectors, and the effective light neutrino mass matrix is the sum of three terms. Form dominance is the assumption that the columns of the Dirac matrix are proportional to the columns of the TBM matrix (see slide 16 of the talk). Then one can generate the TBM mass matrix. In this case the physical neutrino masses are given by a combination of parameters. This constitutes a very nice way to get a form-diagonalizable mass matrix from the see-saw mechanism.
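The “form-diagonalizable” property is easy to verify numerically: build the mass matrix as a weighted sum of outer products of the TBM columns, with arbitrary weights a, b, d, and check that the TBM matrix diagonalizes it. A minimal sketch in plain Python (the sign convention chosen for the TBM matrix is one common choice; others differ by column signs):

```python
import math

s6, s3, s2 = math.sqrt(6), math.sqrt(3), math.sqrt(2)
# Tri-bimaximal mixing matrix; its columns are the vectors the text
# refers to (one common sign convention).
U = [[ 2/s6, 1/s3,   0.0],
     [-1/s6, 1/s3, -1/s2],
     [-1/s6, 1/s3,  1/s2]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

# Mass matrix built as a sum of outer products of the TBM columns,
# weighted by arbitrary parameters a, b, d (which become the masses).
a, b, d = 0.7, 1.3, 2.9
weights = [a, b, d]
M = [[sum(weights[c] * U[i][c] * U[j][c] for c in range(3))
      for j in range(3)] for i in range(3)]

# "Form-diagonalizable": U^T M U is diagonal for ANY choice of a, b, d.
D = matmul(transpose(U), matmul(M, U))
for i in range(3):
    for j in range(3):
        target = weights[i] if i == j else 0.0
        assert abs(D[i][j] - target) < 1e-12
print("U^T M U = diag(a, b, d): TBM diagonalizes M for all a, b, d")
```

Changing a, b, d to any other values leaves the check passing, which is exactly the point: the mixing is fixed by the form of the matrix, not by the masses.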
Moving on to symmetries: clearly, the TBM matrix suggests some family symmetry. This is badly broken in the charged lepton sector; one can write explicitly what the Lagrangian is, and the neutrino Majorana matrix respects the muon-tauon interchange symmetry, whereas the charged-lepton matrix does not. So this is an example of a symmetry working in one sector but not in the other. To achieve different symmetries in the neutrino and charged lepton sectors, we need to align the Higgs fields which break the family symmetry (called flavons) along different symmetry-preserving directions (so-called vacuum alignment). We need a triplet of flavons which breaks the A4 symmetry.
A4 see-saw models satisfy form dominance. There are two such models, both with R=1. These models are economical: they involve only two flavons. Yet one must assume some cancellations of the vacuum expectation values in order to achieve consistency with experimental measurements of atmospheric and solar mixing. This suggests “natural form dominance”, less economical but involving no cancellations, in which a different flavon is associated to each neutrino mass. An extension is “constrained sequential dominance”, a special case which supplies strongly hierarchical neutrino masses.
As far as family symmetry is concerned, the idea is that there are two symmetries, two family groups. You get certain relations which are quite interesting. The CKM mixing is related to the Yukawa matrix, and you can make a connection between the down-quark Yukawa matrix and the electron Yukawa matrix. This leads to mixing sum rule relations, because the PMNS matrix is the product of a Cabibbo-like matrix and a TBM matrix. The mixing angles carry information on corrections to TBM: the deviation of θ12 from its TBM value of about 35 degrees is due to a Cabibbo-like angle coming from the charged-lepton sector. Putting the two together, one gets a physical relation between these angles: a mixing sum rule of the form θ12 ≈ 35.26° + θ13 cos δ.
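To make the sum rule concrete, here is its arithmetic content in a few lines (a sketch of the schematic form above; the precise coefficients are model-dependent, and the example angles are illustrative, not measured values):

```python
import math

# Solar mixing sum rule, schematic form:
#   theta12 ≈ theta12_TBM + theta13 * cos(delta)
# where theta12_TBM = arcsin(1/sqrt(3)) ≈ 35.26 degrees.
theta12_TBM = math.degrees(math.asin(1 / math.sqrt(3)))

def theta12_predicted(theta13_deg, delta_deg):
    return theta12_TBM + theta13_deg * math.cos(math.radians(delta_deg))

# Illustrative example: a theta13 of 9 degrees with delta = 180 degrees
# would pull the solar angle well below its TBM value.
print(f"TBM solar angle: {theta12_TBM:.2f} deg")
print(f"theta12 for theta13=9, delta=180: {theta12_predicted(9, 180):.2f} deg")
```

The practical point is the one made in the conclusions: measuring a non-zero deviation of θ12 from 35.26°, together with θ13, tests this whole class of models.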
The conclusions are that neutrino masses and mixing require new physics beyond the Standard Model. There are many roads for model building, but the answers to key experimental questions will provide the signposts. If TBM is accurately realized, this may imply a new symmetry of nature: a family symmetry, broken by flavons. The whole package is a very attractive scheme, and the sum rules underline the importance of showing that the deviations from TBM are non-zero. Neutrino telescopes may provide a window into neutrino mass, quantum gravity and dark energy.
After the talk, there were a few questions from the audience.
Q: Although it is true that MiniBooNE is not consistent with LSND in a simple 2-neutrino mixing model, in more complex models the two experiments may be consistent. King agrees.
Q: The form dominance scenario in some sense would not apply to the quark sector. It seems it is independent of A4. King’s answer: form dominance is a general framework for achieving form-diagonalizable matrices starting from the see-saw mechanism. This includes the A4 model as an example, but is not restricted to it. There is a large class of models in this framework.
Q: So it is not specific enough to extend to the quark sector ? King: form dominance is all about the see-saw mechanism.
Q: So, can we not extend this to symmetries like T’, which involve the quarks? King: the answer is yes. For lack of time this was only flashed in the talk; it would make a very good talk by itself.
Tags: CDF, DZERO, electroweak fits, Gfitter, Higgs boson, LEP, SLD, standard model, Tevatron, top quark, W boson
A recent discussion in this blog between well-known theorists and phenomenologists, centered on the real meaning of the experimental measurements of top quark and W boson masses, Higgs boson cross-section limits, and other SM observables, convinces me that some clarification is needed.
The work has been done for us: there are groups that do exactly that, i.e. updating their global fits to express the internal consistency of all those measurements, and the implications for the search of the Higgs boson. So let me go through the most important graphs below, after mentioning that most of the material comes from the LEP electroweak working group web site.
First of all, what goes in the soup? Many things, but most notably the LEP I/SLD measurements at the Z pole, the top quark mass measurements by CDF and DZERO, and the W mass measurements by CDF, DZERO, and LEP II. Let us take a look at the mass measurements, which have recently been updated.
For the top mass, the situation is the one pictured in the graph shown below. As you can clearly see, the CDF and DZERO measurements have reached a combined precision of 0.75% on this quantity.
The world average is now at 173.1 ± 1.3 GeV. I am amazed to see that the first estimate of the top mass, made with a handful of events published by CDF in 1994 (a set which did not even provide a conclusive “observation-level” significance at the time), was so dead-on: the measurement back then was 174 ± 10 (+13/−12) GeV! (For comparison, the DZERO measurement of 1995, in their “observation” paper, was 199 +19/−21 ± 22 GeV.)
As far as global fits are concerned, there is one additional point to make for the top quark: knowing the top mass any better than this has become, by now, almost useless. You can see it by comparing the constraints on the top mass coming from the indirect electroweak measurements and the W mass measurements (shown by the blue bars at the bottom of the graph above) with the direct measurements at the Tevatron (shown with the green band). The green band is already very narrow: the width of the blue error bars compared to the narrow green band tells us that the SM fit does not care much where exactly the top mass is, by now.
Then, let us look at the W mass determinations. Note, the graph below shows the situation BEFORE the latest DZERO result, obtained with 1/fb of data, which finds M_W = 80.401 ± 0.043 GeV; its inclusion would not change much of the discussion below, but it is important to point out that it is missing.
Here the situation is different: a better measurement would still increase the precision of our comparisons with the indirect information from electroweak measurements at the Z. This is apparent from the fact that the blue bars are still narrower than the world average of the direct measurements (again in green). Narrow the green band, and you can still collect interesting information on its consistency with the blue points.
Finally, let us look at the global fit, which the LEP electroweak working group displays in the by-now famous “blue band plot”, shown below for the March 2009 conferences. It shows the constraints on the Higgs boson mass coming from all experimental inputs combined, assuming that the Standard Model holds.
I will not discuss this graph in detail, since I have done so repeatedly in the past. I will just mention that the yellow regions have been excluded by direct searches for the Higgs boson at LEP II (the wide yellow area on the left) and at the Tevatron (the narrow strip on the right). From the plot you should just gather that a light Higgs mass is preferred (the central value being 90 GeV, with +36 and −27 GeV one-sigma error bars). Also, a 95% confidence-level exclusion of masses above 163 GeV is implied by the variation of the global fit χ² with Higgs mass.
I have started to be a bit bored by this plot, because it does not do the best job for me. For one thing, the LEP II limit and the Tevatron limit on the Higgs mass are treated as if they were equivalent in their strength, something which could not possibly be farther from the truth. The truth is, the LEP II limit is a very strong one -the probability that the Higgs has a mass below 112 GeV, say, is one in a billion or so-, while the limit obtained recently by the Tevatron is just an “indication”, because the excluded region (160 to 170 GeV) is not excluded strongly: there is still a one-in-twenty chance or so that the real Higgs boson mass indeed lies there.
Another thing I do not particularly like in the graph is that it attempts to pack too much information: variations of Δα_had, inclusion of low-Q^2 data, etcetera. A much better graph to look at is the one produced by the Gfitter group instead. It is shown below.
In this plot, the direct search results are introduced with their actual measured probability of exclusion as a function of Higgs mass, and not just in a digital manner, yes/no, as the yellow regions in the blue band plot. And in fact, you can see that the LEP II limit is a brick wall, while the Tevatron exclusion acts like a smooth increase in the global χ² of the fit.
From the black curve in the graph you can get a lot of information. For instance, the most likely values, those that globally have a 1-sigma probability of being one day proven correct, are masses contained in the interval 114-132 GeV. At two-sigma, the Higgs mass is instead within the interval 114-152 GeV, and at three sigma, it extends into the Tevatron-excluded band a little, 114-163 GeV, with a second region allowed between 181 and 224 GeV.
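For readers unfamiliar with how such intervals are read off a Δχ² curve: for one parameter, the n-sigma region is simply where Δχ² stays below n². A toy sketch (the parabola below is purely illustrative and symmetric, unlike the actual Gfitter curve, which is asymmetric and folds in the direct-search information):

```python
# Read n-sigma intervals off a Delta-chi^2 curve: for one parameter,
# the n-sigma region is where Delta-chi^2 < n^2. The parabola here is
# purely ILLUSTRATIVE -- the real curve is asymmetric.
def delta_chi2(mh, best=120.0, sigma=15.0):
    return ((mh - best) / sigma) ** 2

def interval(n_sigma):
    grid = [100.0 + 0.1 * i for i in range(1500)]   # scan 100-250 GeV
    allowed = [mh for mh in grid if delta_chi2(mh) < n_sigma ** 2]
    return min(allowed), max(allowed)

print("1-sigma interval:", interval(1))   # best value +/- 1 sigma
print("2-sigma interval:", interval(2))
```

In the real plot the same Δχ² < 1, < 4, < 9 slicing produces the 114-132, 114-152 and 114-163 (plus 181-224) GeV ranges quoted above; the hard 114 GeV edge is the LEP II brick wall.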
In conclusion, I would like you to take away the following few points:
- Future indirect constraints on the Higgs boson mass will only come from increased precision measurements of the W boson mass, while the top quark has exhausted its discrimination power;
- Global SM fits show an overall very good consistency: there does not seem to be much tension between fits and experimental constraints;
- The Higgs boson is most likely in the 114-132 GeV range (1-sigma bounds from global fits).
Zooming in on the Higgs March 24, 2009 Posted by dorigo in news, physics, science.
Tags: CDF, DZERO, Higgs boson, LEP, MSSM, standard model, supersymmetry, Tevatron, top quark, W boson
Yesterday Sven Heinemeyer kindly provided me with an updated version of a plot which best describes the experimental constraints on the Higgs boson mass, coming from electroweak observables measured at LEP and SLD, and from the most recent measurements of W boson and top quark masses. It is shown on the right (click to get the full-sized version).
The graph is a quite busy one, but I will try below to explain everything one bit at a time, hoping I keep things simple enough that a non-physicist can understand it.
The axes show suitable ranges of values of the top quark mass (on the horizontal axis) and of the W boson mass (on the vertical axis). The values of these quantities are functionally dependent (because of quantum effects connected to the propagation of the particles and their interaction with the Higgs field) on the Higgs boson mass.
The dependence, however, is really “soft”: if you were to change the Higgs mass by a factor of two from its true value, the effect on the top and W masses would be only of the order of 1% or less. Because of that, only recently have the determinations of top quark and W boson masses started to provide meaningful inputs for a guess of the mass of the Higgs.
Top mass and W mass measurements are plotted in the graph in the form of ellipses encompassing the most likely values: their size is such that the true masses should lie within their boundaries 68% of the time. The red ellipse shows the CDF results, and the blue one shows the DZERO results.
There is a third measurement of the W mass shown in the plot: it is displayed as a horizontal band limited by two black lines, and it comes from the LEP II measurements. The band also encompasses the 68% most likely W masses, as the ellipses do.
In addition to W and top masses, other experimental results constrain the mass of top, W, and Higgs boson. The most stringent of these results are those coming from the LEP experiment at CERN, from detailed analysis of electroweak interactions studied in the production of Z bosons. A wide band crossing the graph from left to right, with a small tilt, encompasses the most likely region for top and W masses.
So far we have described measurements. Then, there are two different physical models one should consider in order to link those measurements to the Higgs mass. The first one is the Standard Model: it dictates precisely the inter-dependence of all the parameters mentioned above. Because of the precise SM predictions, for any choice of the Higgs boson mass one can draw a curve in the top mass versus W mass plane. However, in the graph a full band is hatched instead: this corresponds to letting the Higgs boson mass vary from a minimum of 114 GeV to 400 GeV. 114 GeV is the lower limit on the Higgs boson mass found by the LEP II experiments in their direct searches, using electron-positron collisions; 400 GeV is just a reference value.
The boundaries of the red region show the functional dependence of the Higgs mass on the top and W masses: an increase of the top mass, for fixed W mass, results in an increase of the Higgs mass, as is clear by starting from the 114 GeV boundary of the red region, since one then moves into the interior of the region, to higher Higgs masses. On the contrary, for a fixed top mass, an increase in W boson mass results in a decrease of the Higgs mass predicted by the Standard Model. Also note that the red region includes a narrow band which has been left white: it is the region corresponding to Higgs masses varying between 160 and 170 GeV, the masses that direct searches at the Tevatron have excluded at 95% confidence level.
The second area, hatched in green, does not show a single model's predictions, but rather the range of values allowed by arbitrarily varying many of the parameters of the supersymmetric extension of the SM called the MSSM, its “minimal” extension. Even in the minimal extension there are about a hundred additional parameters in the theory, and the values of a few of them modify the interconnection between top mass and W mass in a way that makes direct functional dependencies impossible to draw in the graph. Still, the hatched green region shows a “possible range of values” of the top quark and W boson masses. The arrow pointing down describes what is expected for the W and top masses as the mass of the supersymmetric particles is increased, from values barely above present exclusion limits to very high values.
So, to summarize, what to get from the plot ? I think the graph describes many things in one single package, and it is not easy to get the right message from it alone. Here is a short commentary, in bits.
- All experimental results are consistent with each other (but here, I should add, a result from NuTeV which finds indirectly the W mass from the measured ratio of neutral current and charged current neutrino interactions is not shown);
- Results point to a small patch of the plane, consistent with a light Higgs boson if the Standard Model holds
- The lower part of the MSSM allowed region is favored, pointing to heavy supersymmetric particles if that theory holds
- Among experimental determinations, the most constraining are those of the top mass; but once the top mass is known to within a few GeV, it is the W mass that tells us more about the unknown mass of the Higgs boson
- One point to note when comparing measurements from LEP II and the Tevatron experiments: when one draws a 2-D ellipse of 68% contour, this compares unfavourably to a band, which encompasses the same probability in a 1-D distribution. This is clear if one compares the actual measurements: CDF (with 200/pb of data), DZERO (with five times more statistics), LEP II (average of four experiments). The ellipses look like they are half as precise as the black band, while they are actually only 30-40% worse. If the above is obscure to you, a simple graphical explanation is provided here.
- When averaged, CDF and DZERO will actually beat the LEP II precision measurement -and they are sitting on 25 times more data (CDF) or 5 times more (DZERO).
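The ellipse-versus-band comparison above can be quantified with only the standard library: for a Gaussian, 68% coverage in one dimension sits at about 1.0σ, while the 68% contour in two dimensions sits at about 1.5σ. This is the idealized Gaussian factor only (the 30-40% figure quoted above depends on the actual measurements being compared):

```python
import math

# 2-D Gaussian: the 68% contour is at radius r with 1 - exp(-r^2/2) = 0.68.
r_2d = math.sqrt(-2 * math.log(1 - 0.68))   # ~1.51 sigma

# 1-D Gaussian: solve erf(x / sqrt(2)) = 0.68 for x by bisection.
lo, hi = 0.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    if math.erf(mid / math.sqrt(2)) < 0.68:
        lo = mid
    else:
        hi = mid
r_1d = (lo + hi) / 2                        # ~0.99 sigma

print(f"68% contour: {r_1d:.2f} sigma (1-D band), {r_2d:.2f} sigma (2-D ellipse)")
print(f"apparent widening factor: {r_2d / r_1d:.2f}")
```

So an ellipse drawn at 68% 2-D coverage genuinely extends further along each axis than a 68% 1-D band from a measurement of the same precision, which is exactly why the eye underrates the Tevatron ellipses.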
Streaming video for Y(4140) discovery March 17, 2009 Posted by dorigo in news, physics, science.
Tags: B physics, CDF, discoveries, QCD, standard model, Tevatron
The CDF collaboration will present at a public venue (Fermilab’s Wilson Hall) its discovery of the new Y(4140) hadron, a mysterious particle created in B meson decays, and observed to decay strongly into a J/psi phi state, a pair of vector mesons. I have described that exciting discovery in a recent post.
From this site you can connect to the streaming video (starting at 4.00PM CDT, or 9.00PM GMT; it should last about an hour and a half).
DZERO refutes CDF’s multimuon signal… Or does it ? March 17, 2009 Posted by dorigo in news, physics, science.
Tags: anomalous muons, CDF, DZERO, new physics, standard model, Tevatron
Hot off the press: Mark Williams, a DZERO member speaking at Moriond QCD 2009 -a yearly international conference in particle physics, where HEP experimentalists regularly present their hottest results- has shown today the preliminary results of their analysis of dimuon events, based on 900 inverse picobarns of proton-antiproton collision data. And the conclusion is…
DZERO searched for an excess of muons with large impact parameter by applying a data selection very similar, and when possible totally equivalent, to the one used by CDF in its recent study. Of course, the two detectors have entirely different hardware, software algorithms, and triggers, so there are certain limits to how closely one analysis can be replicated by the other experiment. However, the main machinery is quite similar: they count how many events have two muons produced within the first layer of silicon detector, and extrapolate to determine how many they expect to see which fail to yield a hit in that first layer, comparing to the actual number. They find no excess of large impact parameter muons.
Impact parameter, for those of you who have not followed this closely in the last few months, is the smallest distance between a track and the proton-antiproton collision vertex, in the plane transverse to the beam direction. A large impact parameter indicates that a particle has been produced in the decay of a parent body which was able to travel away from the interaction point before disintegrating. More information about the whole issue can be found in this series of posts, or by just clicking the “anomalous muons” tab in the column on the right of this text.
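For concreteness, here is the geometric definition in code: a toy 2-D sketch of the transverse impact parameter of a straight-line track with respect to the collision vertex (real tracks curve in the magnetic field, and the function and numbers here are mine, purely illustrative):

```python
import math

# Transverse impact parameter: the smallest distance between a track and
# the collision vertex, in the plane transverse to the beam. Toy version
# for a straight-line track (zero-curvature limit).
def impact_parameter(point, direction, vertex=(0.0, 0.0)):
    px, py = point[0] - vertex[0], point[1] - vertex[1]
    dx, dy = direction
    norm = math.hypot(dx, dy)
    # |cross product| of the point offset with the unit direction vector
    return abs(px * dy - py * dx) / norm

# A muon track passing through (1.0, 0.5) cm and heading along (1, 1)
# misses the origin-vertex by about 0.35 cm: a "large" impact parameter.
d0 = impact_parameter((1.0, 0.5), (1.0, 1.0))
print(f"impact parameter: {d0:.3f} cm")
```

A prompt track, by contrast, points straight back at the vertex and has an impact parameter compatible with zero within the tracking resolution.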
There are many things to say, but I will not say them all here now, because I am still digesting the presentation, the accompanying document produced by DZERO (not ready for public consumption yet), and the implications and subtleties involved. However, let me flash a few of the questions I am going to try and give an answer to with my readings:
- The paper does not address the most important question – what is DZERO’s track reconstruction efficiency as a function of track impact parameter ? They do discuss with some detail the complicated mixture of their data, which results from triggers which enforce that tracks have very small impact parameter -effectively cutting away all tracks with an impact parameter larger than 0.5cm- and a dedicated trigger which does not enforce an IP requirement; they also discuss their offline track reconstruction algorithms. But at a first sight it did not seem clear to me that they can actually reconstruct effectively tracks with impact parameters up to 2.5 cm as they claim. I would have inserted in the documents an efficiency graph for the reconstruction efficiency as a function of impact parameter, had I authored it.
- The paper shows a distribution of the decay radius of neutral K mesons, reconstructed from their decay into pair of charged pions. From the plot, the efficiency of reconstructing those pions is excessively small -some three times smaller than what it is in CMS, for instance. I need to read another paper by DZERO to figure out what drives their K-zero reconstruction efficiency to be so small, and whether this is in fact due to the decrease of effectiveness with track displacement.
- What really puzzles me, however, is the fact that they do not see *any* excess, while we know there must be in any case a significant one: decays in flight of charged kaons and pions. Why is it that CDF is riddled with those, while DZERO appears free of them ? To explain this point: charged kaons and pions yield muons, which get reconstructed as real muons with large impact parameter. If the decay occurs within the tracking volume, the track is partly reconstructed with the muon hits and partly with the kaon or pion hits. Now, while pions have a mass similar to that of muons, and thus the muon practically follows the pion trajectory faithfully, for kaons there must be a significant kink in the track trajectory. One expects that the track reconstruction algorithm will fail to associate inner hits to a good fraction of those tracks, and the resulting muons will belong to the “loose” category, without a correspondence in the “tight” muon category which has muons containing a silicon hit in the innermost layer of the silicon detector. This creates an excess of muons with large impact parameter. CDF does estimate that contribution, and it is quite large, of the order of tens of thousands of events in 743 inverse picobarns of data! Now where are those events in the DZERO dataset, then ?
Of course, you should not expect that my limited intellectual capabilities and my slow reading of a paper I have had in my hands for no longer than two hours can produce foolproof arguments. So the above is just a first pass, a quick and dirty evaluation. I imagine I will be able to answer those puzzles myself, at least in part, with a deeper look at the documentation. But, for the time being, this is what I have to say about the DZERO analysis.
Or rather, I should add something. By reading the above, you might get the impression that I am only criticizing DZERO out of bitterness for the failed discovery of the century by CDF… No, it is not the case: I have always thought, and I continue to think, that the multi-muon signal by CDF is some unaccounted-for background. And I do salute with relief and interest the new effort by DZERO on this issue. I actually thank them for providing their input on this mystery. However, I still retain some scepticism with respect to the findings of their study. I hope that scepticism can be wiped off by some input – maybe some reader belonging to DZERO wants to shed some light on the issues I mentioned above ? You are most welcome to do so!
UPDATE: Lubos pitches in, and guess what, he blames CDF… But Lubos the experimentalist is not better than Lubos the diplomat, if you know what I mean…
Other reactions will be collected below – if you have any to point to, please do so.
CDF discovers a new hadron! March 13, 2009 Posted by dorigo in news, physics, science.
Tags: CDF, discoveries, QCD, standard model, Tevatron
This morning CDF released the results of a search for narrow resonances produced in B meson decays, and in turn decaying into a pair of vector mesons: namely, J/psi phi. This Y state is a new particle whose exact composition is as of yet unknown, except that CDF has measured its mass (4144 MeV) and established that its decay appears to be mediated by strong interactions, given that the natural width of the state is in the range of a few MeV. I describe the analysis succinctly below, but first let me make a few points on the relevance of this area of investigation.
Heavy meson spectroscopy appears to be a really entertaining research field these days. While all eyes are pointed at the searches for the Higgs boson and supersymmetric particles, if not at even more exotic high-mass objects, and while careers are made and unmade on those uneventful searches, it is elsewhere that the action develops. Just think about it: the Omega_b baryon, the Y(4140), those mysterious X and Y states which are still unknown in their quark composition. Such discoveries tell the tale of a very prolific research field: one where there is really a lot to understand.
Low-energy QCD is still poorly known and not easily calculable. In frontier high-energy physics we have bypassed the problem, for the sake of studying high-energy phenomena, by tuning our simulations so that their output closely resembles the result of low-energy QCD processes in all cases where we need them - such as the details of parton fragmentation, jet production, or transverse-momentum effects in the production of massive bodies. However, we have not learnt much from our parametrizations: they describe well what we already know, but they do not even come close to predicting what we do not know. Our understanding of low-energy QCD is starting to be a limiting factor in cosmological studies, such as baryogenesis predictions. So by all means, let us pursue low-energy QCD in all the dirty corners of the datasets we collect at hadron colliders!
CDF is actively pursuing this task. The outstanding spectroscopic capabilities of the detector, combined with the huge size of the dataset collected since 2002, allow searches for decays with branching ratios in the one-in-a-million range. The new discovery I am discussing today has indeed been made possible by pushing our search range to the limit.
The full decay chain which has been observed is the following: B+ → J/ψ φ K+, with J/ψ → μ+μ− and φ → K+K−. That J/ψ mesons decay to muon pairs is not a surprise, nor is the decay of the φ vector meson to two charged kaons. Also, the original decay of the B hadron into the J/ψ φ K+ final state is not new: it had in fact been observed previously. What had not been realized yet, because of insufficient statistics and mass resolution, is that the J/ψ and φ mesons produced in that reaction often “resonate” at a very definite combined mass value, indicating that in those instances the decay actually takes place in two steps, as a chain of two two-body decays: B+ → Y K+ and Y → J/ψ φ.
The new analysis by CDF is a pleasure to examine, because the already excellent momentum resolution of the charged-particle tracking system gets boosted when constraints are placed on the combined mass of multi-body systems. Take the B meson, reconstructed from two muons and three charged tracks, each of the latter assumed to be a kaon: if you did not know that the muon pair comes from a J/ψ, nor that two of the kaons come from a φ, the mass resolution of the system would be in the few-tens-of-MeV range. Instead, by forcing the momenta of the two muons to be consistent with the world-average J/ψ mass, and by imposing that the two kaons make exactly the extremely well-known φ mass, much of the uncertainty on the daughter particle momenta disappears, and the B meson becomes an extremely narrow signal: its mass resolution is just 5.9 MeV, a per-mille measurement event-by-event!
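To see why a mass constraint tightens the resolution, here is a toy example (a minimal sketch, not the actual CDF kinematic fit, which adjusts each momentum within its measured uncertainty; here I simply rescale both muon momenta by a common factor until the pair mass matches the world-average J/ψ value):

```python
import math

M_JPSI = 3096.9   # world-average J/psi mass, MeV (approximate)
M_MU = 105.66     # muon mass, MeV

def four_vec(m, px, py, pz):
    # build (E, px, py, pz) for a particle of mass m
    return (math.sqrt(m*m + px*px + py*py + pz*pz), px, py, pz)

def inv_mass(p4s):
    # invariant mass of a list of four-vectors
    E, px, py, pz = (sum(c) for c in zip(*p4s))
    return math.sqrt(max(E*E - px*px - py*py - pz*pz, 0.0))

def constrain_to_jpsi(mu1, mu2):
    # rescale both muon momenta by a common factor s (found by
    # bisection) so that the dimuon mass equals M_JPSI exactly
    def mass_at(s):
        return inv_mass([four_vec(M_MU, s*mu1[1], s*mu1[2], s*mu1[3]),
                         four_vec(M_MU, s*mu2[1], s*mu2[2], s*mu2[3])])
    lo, hi = 0.5, 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mass_at(mid) < M_JPSI:
            lo = mid
        else:
            hi = mid
    s = 0.5 * (lo + hi)
    return (four_vec(M_MU, s*mu1[1], s*mu1[2], s*mu1[3]),
            four_vec(M_MU, s*mu2[1], s*mu2[2], s*mu2[3]))

# toy: back-to-back muons from a J/psi at rest, with the track
# momenta mis-measured by about one percent
p_star = math.sqrt((M_JPSI / 2)**2 - M_MU**2)
mu1 = four_vec(M_MU, 0, 0, +1.012 * p_star)   # 1.2% too high
mu2 = four_vec(M_MU, 0, 0, -0.995 * p_star)   # 0.5% too low
print(inv_mass([mu1, mu2]))                   # ~10 MeV off M_JPSI
c1, c2 = constrain_to_jpsi(mu1, mu2)
print(inv_mass([c1, c2]))                     # back at M_JPSI
```

The momentum errors that the constraint absorbs no longer propagate into the B candidate mass, which is where the per-mille resolution comes from.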
The selection of signal events requires several cleanup cuts, including mass-window cuts around the J/ψ and φ masses, a decay length of the reconstructed B+ meson longer than 500 microns, and a cut on a log-likelihood ratio fed with dE/dx and time-of-flight information, capable of discriminating kaon tracks from other hadrons. After those cuts, the B+ signal really stands out above the flat background. There is a total of 78 ± 10 events in the sample after these cuts, and this is the largest sample of such decays ever isolated. It is shown above (left), together with the corresponding distribution in the candidate mass (right).
A Dalitz plot of the reconstructed decay candidates is shown in the figure on the right. A Dalitz plot is a scatterplot of the squared invariant mass of a subset of the particles emitted in the decay, versus the squared invariant mass of another subset. If the decay proceeds via the creation of an intermediate state, one may observe a horizontal or vertical cluster of events. Judge for yourself: do the points appear to spread evenly in the allowed phase space of the B+ decays?
The answer is no: a significant structure is seen corresponding to a definite mass of the J/ψ φ system. A histogram of the difference between the reconstructed mass of the J/ψ φ system and the J/ψ mass is shown in the plot below: a near-threshold structure appears just above 1 GeV. An unbinned fit to a relativistic Breit-Wigner signal shape on top of the expected background shape shows a signal at a mass difference of 1046.3 ± 2.9 MeV, with a width of 11.7 ± 5.7 MeV.
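The allowed band of the Dalitz plot just discussed is fixed by three-body kinematics alone: at each value of one squared pair mass, the other squared pair mass can only span a well-defined interval. A sketch of that boundary computation, using approximate PDG masses (the function name is mine, not CDF's):

```python
import math

# approximate PDG masses, MeV
M_B, M_JPSI, M_PHI, M_K = 5279.2, 3096.9, 1019.5, 493.7

def m23sq_limits(M, m1, m2, m3, m12sq):
    """Kinematic limits of m(2,3)^2 at fixed m(1,2)^2 in the decay
    M -> 1 2 3 (the standard Dalitz-plot boundary formula)."""
    m12 = math.sqrt(m12sq)
    # energies of particles 2 and 3 in the (1,2) rest frame
    e2 = (m12sq - m1*m1 + m2*m2) / (2 * m12)
    e3 = (M*M - m12sq - m3*m3) / (2 * m12)
    p2 = math.sqrt(max(e2*e2 - m2*m2, 0.0))
    p3 = math.sqrt(max(e3*e3 - m3*m3, 0.0))
    return (e2 + e3)**2 - (p2 + p3)**2, (e2 + e3)**2 - (p2 - p3)**2

# limits of m(phi K)^2 when m(J/psi phi) = 4400 MeV, a point
# inside the allowed B+ -> J/psi phi K+ phase space
lo, hi = m23sq_limits(M_B, M_JPSI, M_PHI, M_K, 4400.0**2)
```

Points clustering at fixed m(J/ψ φ)² along the full allowed range of the other axis is exactly the vertical-band pattern an intermediate Y → J/ψ φ state produces.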
The significance of the signal, after taking trial factors into account, is equal to 3.8 standard deviations. For the non-zero-width hypothesis the significance is 3.4 standard deviations, implying that the newfound structure decays strongly. The mass of the new state is thus 4143 ± 2.9 MeV.
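The trial-factor correction works in the following way: the local p-value of the fluctuation is inflated by the number of independent places in the spectrum where a peak could have appeared, and the result is converted back into a number of standard deviations. A rough numerical illustration with made-up numbers (CDF's actual assessment uses pseudoexperiments, as discussed in my Omega_b posts):

```python
import math

def p_from_z(z):
    # one-sided tail probability of a unit Gaussian
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def z_from_p(p):
    # invert p_from_z numerically by bisection
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if p_from_z(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def global_significance(z_local, n_trials):
    # inflate the local p-value by the trial factor, then
    # convert back to a significance in standard deviations
    p_global = min(1.0, n_trials * p_from_z(z_local))
    return z_from_p(p_global)

# a hypothetical local 4-sigma bump with 100 places to look:
print(global_significance(4.0, 100))   # about 2.7 sigma globally
```

The larger the search range, the more the apparent significance is diluted, which is why quoting the post-trial number, as done here, is the honest thing to do.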
The new state is above the threshold for decay to a pair of charmed hadrons. The decay of the state appears to occur to a pair of vector mesons, J/ψ φ, in close similarity to a previously found state at 3930 MeV, the Y(3930), which also decays to two vector mesons, in J/ψ ω. Therefore, the new state can also be called a Y(4140).
Although the significance of this new signal has not reached the coveted threshold of 5 standard deviations, there are few doubts about its nature. Being a die-hard sceptic, I did doubt the reality of the signal shown above for a while when I first saw it, but I must admit that the analysis was really done with a lot of care. Besides, CDF now has tens of thousands of fully reconstructed B meson decays available, with which it is possible to study and understand even the subtlest nuances of every effect: reconstruction problems, fit method, track characteristics, kinematical biases, you name it. So I am bound to congratulate the authors of this nice new analysis, which shows once more how the CDF experiment is producing stellar new results not just at the high-energy frontier, but in low-energy spectroscopy as well. Well done, CDF!
First observation of single top production from CDF!!! March 5, 2009Posted by dorigo in news, physics, science.
Tags: CDF, standard model, Tevatron, top quark
The paper, submitted to PRL yesterday evening, is here.
I will discuss the details later today…
UPDATE: a reader points out that the above link was broken. Now fixed.
Higgs decays to photon pairs! March 4, 2009Posted by dorigo in news, physics, science.
Tags: DZERO, Higgs boson, LHC, photons, standard model, Tevatron
It was with great pleasure that I found yesterday, on the public page of the DZERO analyses, a report on their new search for Higgs boson decays to photon pairs. On that quite rare decay process - along with another non-trivial channel - the LHC experiments base their hopes to see the Higgs boson if that particle has a mass close to the LEP II upper bound, i.e. not far from 115 GeV. And this is the first high-statistics search for the SM Higgs in that final state to obtain results that are competitive with the more standard searches!
My delight increased when I saw that the results of the DZERO search are based on a data sample corresponding to a whopping 4.2 inverse femtobarns of integrated luminosity. This is the largest set of hadron-collider data ever used for an analysis. 4.2 inverse femtobarns correspond to about three hundred trillion collisions, sorted out by DZERO. Of course, both DZERO and CDF have so far collected more data than that: almost five inverse femtobarns. However, it always takes some time before calibration, reconstruction, and production of the newest datasets are performed… DZERO is catching up nicely with the accumulated statistics, it appears.
The most interesting few tens of billions or so of those events have been fully reconstructed by the software algorithms, identifying charged tracks, jets, electrons, muons, and photons. Yes, photons: quanta of light, only very energetic ones: gamma rays.
When photons have an energy exceeding a GeV or so (i.e. one corresponding to a proton mass or above), they can be counted and measured individually by the electromagnetic calorimeter. One must look for very localized energy deposits which cannot be spatially correlated with a charged track: something hits the calorimeter after crossing the inner tracker, but no signal is found there, implying that the object was electrically neutral. The shape of the energy deposition then confirms that one is dealing with a single photon, and not -for instance- a neutron, or a pair of photons traveling close to each other. Let me expand on this for a moment.
Background sources of photon signals
In general, every proton-antiproton collision yields dozens, or even hundreds, of energetic photons. This is not surprising, as there are multiple significant sources of GeV-energy gamma rays to consider.
- Electrons, and in principle any other electrically charged particle emitted in the collision, can produce photons by the process called bremsstrahlung: by passing close to the electric field generated by a heavy nucleus, the particle emits electromagnetic radiation, thus losing a part of its energy. Note that this is a process which cannot happen in vacuum, since there are no target nuclei there to supply the electric field with which the charged particle interacts (one can have bremsstrahlung also in the presence of neutral particles, in principle, since what matters is the capability of the target to absorb a part of the colliding body’s momentum; but in that case one needs a more complicated scattering process, so let us forget about it). For particles heavier than the electron, the process is suppressed up to the very highest energies (where particle masses become irrelevant with respect to their momenta), and it is only worth mentioning for muons and pions in heavy materials.
- By far the most important source of photons at a collider is the decay of neutral hadrons. A high-energy collision at the Tevatron easily yields a dozen neutral pions, and these particles decay more than 99% of the time into pairs of photons, π0 → γγ. Of course, these photons would only carry an energy equal to half the neutral pion mass - 0.07 GeV - if the neutral pions were at rest; it is only through the large momentum of the parent that the photons may be energetic enough to be detected in the calorimeter.
- A similar fate to that of neutral pions awaits other neutral hadrons heavier than the π0: most notably the particle called eta, through the decay η → γγ. The eta has a mass four times larger than that of the neutral pion, and is less frequently produced.
- And other hadrons may produce photons in de-excitation processes, albeit not in pairs: excited hadrons often decay radiatively into their lower-mass brothers, and the radiated photon may display a significant energy, again critically depending on the parent’s speed in the laboratory.
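As a quick check of the numbers in the list above, the lab-frame energy of a photon from π0 → γγ spans a range fixed entirely by the pion's boost; a back-of-the-envelope sketch:

```python
import math

M_PI0 = 134.98  # neutral pion mass, MeV

def photon_energy_range(p_pi):
    """Minimum and maximum lab-frame photon energies (MeV) from
    pi0 -> gamma gamma, for a pi0 of lab momentum p_pi (MeV):
    E_gamma = (E_pi/2) * (1 +/- beta * cos(theta*)),
    extremal for decays along (+) or against (-) the boost."""
    e_pi = math.sqrt(M_PI0**2 + p_pi**2)
    beta = p_pi / e_pi
    return 0.5 * e_pi * (1 - beta), 0.5 * e_pi * (1 + beta)

lo_rest, hi_rest = photon_energy_range(0.0)       # at rest: both ~67.5 MeV
lo_fast, hi_fast = photon_energy_range(10000.0)   # a 10 GeV pion
```

A pion at rest gives two 67.5 MeV photons, i.e. the 0.07 GeV quoted above, while a 10 GeV pion can hand nearly all of its energy to one photon.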
All in all, that’s quite a handful of photons our detectors are showered with on an event-by-event basis! How the hell can DZERO then sort out, amidst over three hundred trillion collisions, the maybe five or ten which saw the decay of a Higgs to two photons?
And the Higgs signal amounts to…
Five to ten events. Yes, we are talking of a tiny signal here. To eyeball how many standard model Higgs boson decays to photon pairs we may expect in a sample of 4.2 inverse femtobarns, we make some approximations. First of all, we take a 115 GeV Higgs as a reference: that is the Higgs mass where the analysis should be most sensitive, if we accept that the Higgs cannot be much lighter than that; for heavier Higgses, their number will decrease, because the heavier a particle is, the less frequently it is produced.
The cross section for the direct-production process p p̄ → H + X (where with X we denote our unwillingness to specify whatever else may be produced together with the Higgs) is, at the Tevatron collision energy of 1.96 TeV, of the order of one picobarn. I am here purposely avoiding fetching a plot of the cross section versus mass to give you the exact number: it is in that ballpark, and that is enough.
The other input we need is the branching ratio of the H decay to two photons. This is the fraction of disintegrations yielding the final state that DZERO has been looking for. It depends on the detailed properties of the Higgs particle, which likes to couple to particles in proportion to their mass. The larger a particle’s mass, the stronger its coupling to the Higgs, and the more frequent the H decay into a pair of those: each partial width depends on the squared mass of the particle, but since the sum of all branching ratios is one - if we say the Higgs decays, then there is a 100% chance of its decaying into something, no less and no more! - any branching fraction depends on ALL the other particle masses!!!
“Wait a minute,” I would like to hear you say now, “the photon is massless! How can the Higgs couple to it?!” Right: H does not couple directly to photons, but it can nevertheless decay into them via a virtual loop of electrically charged particles. Just as happens when your US plug won’t fit into a European AC outlet: you do not despair, and insert an adaptor - something endowed with the right holes on one side and pins on the other. Much in the same way, a virtual loop of top quarks, for instance, will do a good job: the top has a large mass - so it couples aplenty to the Higgs - and it has an electric charge, so it is capable of emitting photons. The three dominant Feynman diagrams for the decay are shown above: the first two of them involve a loop of W bosons, the third a loop of top quarks.
So, how much is the branching ratio to two photons in the end? It is a complicated calculation, but the result is roughly one thousandth. One in a thousand low-mass Higgses will disintegrate into energetic light: two angry gamma rays, each roughly carrying the energy of a 2-milligram mosquito launched at the whopping speed of four inches per second toward your buttocks.
Now we have all the ingredients for our computation of the number of signal events we may be looking at, amidst the trillions produced. The master formula is just

N = σ × L × B,

where N is the number of decays of the kind we want, σ is the production cross section for the Higgs at the Tevatron, L is the integrated luminosity on which we base our search, and B is the branching ratio of the decay we study.
With σ = 1 pb, L = 4.2 inverse femtobarns, and B = 0.001, the result is, guess what, 4.2 events. 4.2 in three hundred trillion. A needle in the haystack is a kids’ game in comparison!
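The arithmetic can be spelled out in two lines, just with the round numbers quoted above, once the luminosity is converted to inverse picobarns so the units cancel:

```python
sigma_pb = 1.0        # Higgs production cross section at 1.96 TeV, ~1 pb
lumi_inv_pb = 4200.0  # integrated luminosity: 4.2 fb^-1 = 4200 pb^-1
br_gamgam = 1.0e-3    # H -> gamma gamma branching ratio, ~one per mille

# master formula: N = sigma * L * B
n_signal = sigma_pb * lumi_inv_pb * br_gamgam
print(n_signal)  # -> 4.2
```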
The DZERO analysis
I will not spend much of my and your time discussing the details of the DZERO analysis here, primarily because this post is already rather long, but also because the analysis is pretty straightforward to describe at an elementary level: one selects events with two photons of suitable energy, computes their combined invariant mass, and compares the expectation for Higgs decays - a roughly bell-shaped curve centered at the Higgs mass, with a width of ten GeV or so - with the expected backgrounds from all the processes capable of yielding pairs of energetic photons, plus all those yielding fake photons. [Yes, fake photons: of course the identification of gamma rays is not perfect - one may have failed to detect a charged track pointing at the calorimeter energy deposit, for instance.] Then, a fit of the mass distribution extracts an upper limit on the number of signal events that may be hiding there. From the upper limit on the signal size, an upper limit is obtained on the signal cross section.
Ok, the above was a bit too quick. Let me be slightly more analytic. The data sample is collected by an online trigger requiring two isolated electromagnetic deposits in the calorimeter. Offline, the selection requires that both photon candidates have a transverse energy exceeding 25 GeV, and that they be isolated from other calorimetric activity -a requirement which removes fake photons due to hadronic jets.
Further, there must be no charged tracks pointing close to the deposit, and a neural-network classifier is used to discriminate real photons from backgrounds using the shape of the energy deposition and other photon quality variables. The NN output is shown in the figure below: real photons (described by the red histogram) cluster on the right. A cut on the data (black points) of a NN output larger than 0.1 accepts almost all signal and removes 50% of the backgrounds (the hatched blue histogram). One important detail: the shape of the NN output for real high-energy photons is modeled by Monte Carlo simulations, but is found in good agreement with that of real photons in radiative Z boson decay processes, Z → ℓ+ℓ−γ. In those processes, the detected photon is 100% pure!
After the selection, surviving backgrounds are due to three main processes: real photon pairs produced by quark-antiquark interactions, Compton-like gamma-jet events where the jet is mistaken for a photon, and Drell-Yan processes yielding two electrons, both of which are mistaken for photons. You can see the relative importance of the three sources in the graph below, which shows the diphoton invariant mass distribution for the data (black dots) compared to the sum of backgrounds. Real photon pairs are in green, Compton-like gamma-jet events are in blue, and the Drell-Yan contribution is in yellow.
The mass distribution has a very smooth exponential shape, and to search for Higgs events DZERO fits the spectrum with an exponential, obliterating a signal window where Higgs decays may contribute. The fit is then extrapolated into the signal window, and a comparison with the data found there provides the means for a measurement; different signal windows are assumed to search for different Higgs masses. Below are shown four different hypotheses for the Higgs mass, ranging from 120 to 150 GeV in 10-GeV intervals. The expected signal distribution, shown in purple, is multiplied by a factor x50 in the plots, for display purposes.
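The fit-the-sidebands-and-extrapolate procedure is simple enough to sketch. Here is a toy version (my own simplification, not DZERO's actual fit): fit a falling exponential to the bins outside a blinded window, by least squares on the logarithm of the counts, then extrapolate the fitted curve into the window:

```python
import math

def fit_exponential_sidebands(bin_centers, counts, window):
    """Fit counts ~ A * exp(-b * m) using only nonempty bins outside
    the blinded signal window (linear least squares on log counts)."""
    lo, hi = window
    pts = [(x, math.log(n)) for x, n in zip(bin_centers, counts)
           if n > 0 and not (lo <= x <= hi)]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return math.exp(intercept), -slope   # (A, b)

# toy spectrum: a pure exponential background in 2 GeV bins,
# with a hypothetical signal window blinded at 120-140 GeV
bins = [100 + 2 * i for i in range(50)]
counts = [1000.0 * math.exp(-0.02 * m) for m in bins]
A, b = fit_exponential_sidebands(bins, counts, (120.0, 140.0))

# extrapolate the background model into the blinded window
expected_in_window = sum(A * math.exp(-b * m)
                         for m in bins if 120 <= m <= 140)
```

Comparing the observed counts in the window with `expected_in_window` (and its uncertainty) is what turns the extrapolation into a limit on a possible signal.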
From the fits, a 95% confidence-level upper limit on the Higgs boson production cross section is extracted by standard procedures. As is by now commonplace, the cross-section limit is displayed by dividing it by the expected standard model Higgs cross section, to show how far one is from excluding a SM-produced Higgs at any mass value. The graph is shown below: readers of this blog may by now recognize at first sight the green 1-sigma and yellow 2-sigma bands showing the expected range of limits that the search was predicted to set. The actual limit is shown in black.
One notices that while this search is not sensitive to the Higgs boson yet, it is not so far from it any more! The LHC experiments will have a large advantage over DZERO (and CDF) in this particular business, since there the Higgs production cross section is significantly larger. Backgrounds are also larger, however, so a detailed understanding of the detectors will be required before such a search can be carried out with success at the LHC. For the time being, I congratulate my DZERO colleagues for pulling off this nice new result!