
No CHAMPS in CDF data January 12, 2009

Posted by dorigo in news, physics, science.

A recent search for long-lived charged massive particles in CDF data has found no signal in 1.0 inverse femtobarns of proton-antiproton collisions produced by the Tevatron collider at Fermilab.

Most subnuclear particles we know have very short lifetimes: they disintegrate into lighter bodies through the strong, electromagnetic, or weak interaction. In the first case the particle is by necessity a hadron (one composed of quarks and gluons), and the strength of the interaction that disintegrates it is evident from the extreme shortness of its life: we are talking about a billionth of a trillionth of a second, or even less. In the second case, the electromagnetic decay takes longer, but still in most instances a ridiculously small time; the neutral pion, for instance, decays to two photons (\pi^\circ \to \gamma \gamma) in about 8 \times 10^{-17} seconds: eighty billionths of a billionth of a second. In the third case, however, the weakness of the interaction manifests itself in decay times that are typically long enough for the particle to travel for a while.

Currently, the longest-living unstable subnuclear particle is the neutron, which lives about 15 minutes before undergoing the weak decay n \to p e \bar{\nu}, the well-studied beta-decay process which is at the basis of a host of radioactive phenomena. The neutron is very lucky, however, because its long life is due not only to the weakness of virtual W-boson exchange, but also to the fact that this particle happens to have a mass just a tiny bit larger than the sum of the masses of the bodies it must decay into: this translates into a very, very small “phase space” for the decay products, and a small phase space means a small decay rate.
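To see just how little energy the decay products have to share, here is the arithmetic with PDG masses (a quick check of my own, not part of the original post):

```python
# Masses in MeV (PDG values)
m_n, m_p, m_e = 939.565, 938.272, 0.511
q_value = m_n - m_p - m_e   # energy available to the decay products of n -> p e nu
print(q_value)              # ~0.78 MeV: less than a thousandth of the neutron mass
```

Less than 0.1% of the neutron's rest energy is released, which is why the decay rate is so strongly suppressed.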

Of course, we have only discussed unstable particles so far: but the landscape of particle physics also includes stable particles, namely the proton, the electron, the photon, and (as far as we know) the neutrinos. We would be very surprised if this latter set included particles we have not discovered yet, but we should keep an open mind.

A stable, electrically-neutral massive particle would be less easy to detect than one might naively think. In fact, most dark-matter searches aimed at detecting a signal of a stable massive particle are tuned to be sensitive to very small signals: if a gas of neutralinos pervaded the universe, we might be unaware of their presence until we looked at rotation curves of galaxies and other non-trivial data; and even then, a direct signal in a detector would require extremely good sensitivity, since a stable neutral particle would typically be very weakly interacting, which means that swarms of such bodies could fly unscathed through whatever detector we cook up. Despite that, we are of course looking for such things, with CDMS, DAMA, and other dedicated dark-matter experiments.

The existence of a charged massive stable particle (CHAMP, for friends) is, however, harder to buy. An electrically-charged particle does not go unseen for long: its electromagnetic interaction is liable to betray it easily. However, there is no need to require that a CHAMP be THE source of the missing mass in the universe. These particles could be rare, or even non-existent in the Universe today, and in that case our only chance to see them would be in hadron-collision experiments, where we could produce them if the energy and collision rate were sufficient.

What would happen upon creation of a CHAMP in a hadron collision is that the particle would slowly traverse the detector, leaving an ionization trail. A weakly-interacting CHAMP (and to some extent even a strongly-interacting one) would not interact much with the heavy layers of iron and lead making up the calorimeter systems with which collider experiments are equipped, and so it would be able to punch through, leaving a signal in the muon chambers before drifting away. What we could see, if we looked carefully, would be a muon track which ionizes the gas much more than muons usually do, because massive CHAMPs are heavy, and so they kick atoms around as they traverse the gas. Also, the low velocity of the particle (to be clear, here “low” means “only a few tenths of the speed of light”!) would manifest itself in a delay in the detector signals as the particle traverses them in succession.

CDF has searched for such evidence in its data, by selecting muon candidates and determining whether their crossing time and ionization are compatible with muon tracks or not. More specifically, by directly measuring the time the track needs to cross the 1.5-meter radius of the inner tracker, together with the particle momentum, the mass of the particle can be inferred. That is easier said than done, however: a muon takes about 5 nanoseconds to traverse the 1.5 meters of the tracker, and to discern a particle moving half that fast, one is required to measure this time interval with a resolution better than a couple of nanoseconds.
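The mass inference is simple kinematics: the velocity follows from the crossing time, and the mass from m = p \sqrt{1/\beta^2 - 1}. A minimal sketch, with illustrative numbers of my own choosing:

```python
import math

C = 0.299792458   # speed of light, in meters per nanosecond
R = 1.5           # tracker radius in meters

def inferred_mass(p_gev, t_ns):
    """Mass (GeV) from momentum and crossing time: m = p * sqrt(1/beta^2 - 1)."""
    beta = R / (C * t_ns)
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

t_muon = R / C                            # ~5.0 ns for a beta ~ 1 muon
print(t_muon)
print(inferred_mass(100.0, 2 * t_muon))   # a half-speed 100 GeV/c track: m ~ 173 GeV
```

A half-speed particle arrives only 5 ns after a muon would, which is why a nanosecond-level time resolution is needed.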

The CDF Time-Of-Flight system (TOF) is capable of doing that. One just needs to determine the production time with enough precision, and then the scintillation bars mounted just outside the tracking chamber (the COT, for Central Outer Tracker) will measure the time delay. The problem with this technique, however, is that the time resolution has a distinctly non-Gaussian behaviour, which may introduce large backgrounds when one selects tracks compatible with a long travel time. The redundancy of CDF comes to the rescue: one can measure the travel time of the particles through the tracker by looking at the residuals of the track fit. Let me explain.

A charged particle crossing the COT leaves an ionization trail. This ionization is detected by 96 layers of sense wires along the path, and from the pattern of hit wires the trajectory can be reconstructed. However, each wire records the released charge at a different time, because the wires sit at different distances from the track, and the ionization charge takes some time to drift in the electric field before its signal is collected. The hit time is used in the fit that determines the particle trajectory: residuals of these time measurements after the track is fit provide a measurement of the particle velocity. In fact, a particle moving slowly creates ionization signals that are progressively delayed as a function of radius; these residuals can be used to determine the travel time.
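The way a slow particle shows up in the residuals can be seen with a toy calculation (my own illustration, not the actual CDF track fit; the wire radii are made up):

```python
# Hit-time residuals relative to a beta = 1 hypothesis grow linearly with
# radius for a slow particle.
C = 0.299792458                          # m/ns
radii = [0.4, 0.6, 0.8, 1.0, 1.2, 1.3]   # illustrative sense-wire radii, in m

def residual_ns(r, beta):
    # extra arrival delay at radius r with respect to a light-speed track
    return r / (beta * C) - r / C

print([round(residual_ns(r, 0.5), 2) for r in radii])  # up to ~4.3 ns at the outer wall
```

The linear growth of the delay with radius is what distinguishes a genuinely slow particle from a random timing fluctuation.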

The resulting measurement has a precision three times worse than that of the dedicated TOF system (fortunately, I would say: otherwise the TOF itself would be a rather useless tool); however, the uncertainty on this COT-based measurement has a much more Gaussian behaviour! This is an important asset: by requiring that the two time measurements be consistent with one another, one can effectively remove the non-Gaussian tails of the TOF measurement.

By combining the crossing time (i.e. the velocity) and the track momentum measurement, one may then derive a mass estimate for the particle. The distribution of reconstructed masses for CHAMP candidates is shown in the graph below, with the distribution expected for a 220-GeV CHAMP signal overlaid. It is easy to see that the mass resolution provided by the method is rather poor; despite that, a high-mass charged particle would be easy to spot if it were there.

One note of warning about this graph: the distribution shows masses ranging all the way from 0 to 100 GeV, but that does not mean these tracks have comparable true masses. The vast majority of them are real muons, for which the velocity is underestimated due to instrumental effects: in a sense, the very shape of the curve describes the resolution of the time measurement provided by the analysis.

The absence of tracks compatible with a mass larger than 120 GeV in the data allows one to place model-independent limits on the CHAMP mass. Weakly-interacting CHAMPs are excluded, in the kinematic region |\eta|<0.7 covered by the muon chambers and with P_T>40 GeV, if they are produced with a cross section larger than 10 fb. For strongly-interacting CHAMPs the search considers the case of a scalar top R-hadron, a particle predicted by Supersymmetric theories when a stable stop squark binds together with an ordinary quark. In that case, the 95% CL limit can be set at a mass of 249 GeV.
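For a feeling of where a limit of order 10 fb comes from, here is the standard counting-experiment arithmetic (a sketch of my own with an assumed signal efficiency, not the actual CDF statistical treatment):

```python
import math

# With zero candidates observed, the 95% CL Poisson upper limit on the
# expected signal is about 3 events, and sigma < N_up / (efficiency * luminosity).
n_up = -math.log(0.05)    # ~3.0 events at 95% CL for zero observed, no background
lumi_pb = 1000.0          # 1 inverse femtobarn, expressed in pb^-1

def xsec_limit_fb(efficiency):
    return n_up / (efficiency * lumi_pb) * 1000.0   # convert pb to fb

print(round(xsec_limit_fb(0.3), 1))   # ~10 fb for an assumed 30% efficiency
```

The 30% efficiency is purely illustrative; the actual limit depends on the measured acceptance and backgrounds.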

It is interesting to note that while this analysis does not use the magnitude of the ionization left by the track in the gas chamber (the so-called dE/dx, on which most past searches for CHAMPs were based, e.g. in CDF Run I and ALEPH) to identify the CHAMP signal candidates, it still uses the dE/dx to infer the (background) particle species when determining the resolution of the time measurement from COT residuals. So the measurement shows once more how much collider detectors benefit from the high redundancy of their design!

[Post scriptum: I discuss in simple terms the ionization energy loss in the second half of this recent post.]

It only remains to congratulate the main authors of this search, Thomas Phillips (from Duke University) and Rick Snider (Fermilab), on their nice result, which is being sent for publication as we speak. The public web page of the analysis, which contains more plots and an abstract, can be browsed here.

Some posts you might have missed in 2008 January 5, 2009

Posted by dorigo in cosmology, personal, physics, science.

To start 2009 with a tidy desk, I wish to put some order in the posts about particle physics I wrote in 2008. By collecting a few links here, I save the most meaningful of them from oblivion, or at least make them just a bit more accessible. In due time I will update the “physics made easy” page, but that is work for another free day.

The list below collects in reverse chronological order the posts from the first six months of 2008; tomorrow I will complete the list with the second half of the year. The list does not include guest posts nor conference reports, which may be valuable but belong to a different list (and are linked from permanent pages above).

June 17: A description of a general search performed by CDF for events featuring photons and missing transverse energy along with b-quark jets – a signature which may arise from new physics processes.

June 6: This post reports on the observation of the decay of J/Psi mesons to three photons, a rare and beautiful signature found by CLEO-c.

June 4 and June 5 offer a riddle from a simple measurement of the muon lifetime. Readers are given a description of the experimental apparatus, and they have to figure out what they should expect as the result of the experiment.

May 29: A detailed discussion of the search performed by CDF for a MSSM Higgs boson in the two-tau-lepton decay. Since this final state provided a 2.1-sigma excess in 2007, the topic deserved a careful look, which is provided in the post.

May 20: Commented slides of my talk at PPC 2008, on new results from the CDF experiment.

May 17: A description of the search for dimuon decays of the B mesons in CDF, which provides exclusion limits for a chunk of SUSY parameter space.

May 02: A description of the search for Higgs bosons in the 4-jet final state, which is dear to me because I worked on that signature in the past.

Apr 29: This post describes the method I am working on to correct the measurement of charged track momenta by the CMS detector.

Apr 23, Apr 28, and May 6: This is a lengthy but simple, general discussion of dark matter searches with hadron colliders, based on a seminar I gave to undergraduate students in Padova. In three parts.

Apr 6 and Apr 11: a detailed two-part description of the detectors of electromagnetic and hadronic showers, and the related physics.

Apr 05: a general discussion of the detectors for LHC and the reasons they are built the way they are.

Mar 29: A discussion of the recent Tevatron results on Higgs boson searches, with some considerations on the chances that a light Higgs boson is consistent with the available data.

Mar 25: A detailed discussion on the possibility that more than three families of elementary fermions exist, and a description of the latest search by CDF for a fourth-generation quark.

Mar 17: A discussion of the excess of events featuring leptons of the same electric charge, seen by CDF and evidenced by a global search for new physics. Can be read alone or in combination with the former post on the same subject.

Mar 10: This is a discussion of the many measurements obtained by CDF and D0 on the top-quark mass, and their combination, which involves a few subtleties.

Mar 5: This is a discussion of the CDMS dark matter search results, and the implications for Supersymmetry and its parameter space.

Feb 19: This is a popular-level description of the ways the proton structure can be studied in hadron collisions, by means of the parton distribution functions and how these affect scattering measurements in proton-antiproton collisions.

Feb 13: A discussion of luminosity, cross sections, and rate of collisions at the LHC, with some easy calculations of the rate of multiple hard interactions.

Jan 31: A summary of the enlightening review talk on the standard model that Guido Altarelli gave in Perugia at a meeting of the Italian LHC community.

Jan 13: Commented slides of the paper seminar given by Julien Donini on the measurement of the b-jet energy scale and the p \bar p \to Z X \to b \bar b X cross section, the latter measured for the first time ever at a hadron machine. This is the culmination of a twelve-year effort by me and my group.

Jan 4: An account of the CDF search for Randall-Sundrum gravitons in the ZZ \to eeee final state.

Arkani-Hamed: “Dark Forces, Smoking Guns, and Lepton Jets at the LHC” December 11, 2008

Posted by dorigo in news, physics, science.

As we’ve been waiting for the LHC to turn on and turn the world upside down, some interesting data has been coming out of astrophysics, and a lot of striking new signals could show up. This motivates theoretical investigations on the origins of dark matter and related issues, particularly in the field of Supersymmetry.

Nima said he wanted to tell the story from the top-down approach: what all the anomalies were, and what motivated his and his colleagues' work. But instead, he offered a parable as a starter.

Imagine there are creatures made of dark matter: ok, dark matter does not clump, but leaving disbelief aside, let's imagine there are these dark astrophysicists, who work hard, make measurements, and eventually see that 4% of the universe is dark to them: they cannot explain the matter budget of the universe. So they try to figure out what is missing. A theorist comes out with a good idea: a single neutral fermion. This is quite economical, and the theory surely attracts a lot of subscribers. But another theorist envisions a totally unknown gauge theory, with a broken SU(2)xU(1) group, three generations of fermions, the whole shebang… It seems crazy, but this second guy has the right answer!

So, we really do not know what is in the dark sector. It could be more interesting than just a single neutral particle. Since this is going to be a top-down discussion, let us imagine the next most complicated thing you might think of: dark matter could be charged. If the gauge symmetry were exact, there would be some degenerate gauge bosons. How does this stuff make contact with the standard model?

Let us take a mass of a TeV: everything is normal about it, and the coupling this stuff from the dark U(1) group can have with us is a kinetic mixing between our SM gauge fields and the new ones, a term of the form \frac{1}{2} \epsilon F_{\mu \nu}^{dark} F^{\mu \nu} in the Lagrangian density.

In general, any particles at any mass scale above the weak scale will induce such a mixing at loop level through their hypercharge. All SM particles then get a tiny charge under the new U(1)’, written as a kinetic mixing term and proportional to their electric charge. The size of the coupling could be in the 10^{-3} to 10^{-4} range.
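The quoted size of the coupling can be roughly reproduced by the textbook one-loop estimate \epsilon \sim (g g'/16\pi^2) \log(M/\mu) (a back-of-the-envelope sketch of my own; the two mass scales are assumptions):

```python
import math

# Order-of-magnitude estimate of loop-induced kinetic mixing.
alpha = 1.0 / 137.0
g_sq = 4.0 * math.pi * alpha                  # electromagnetic coupling squared
# assumed scales: heavy particles at 1 TeV, evaluated at 100 GeV
eps = g_sq / (16.0 * math.pi**2) * math.log(1000.0 / 100.0)
print(eps)   # ~1e-3, inside the 1e-3 .. 1e-4 range quoted in the talk
```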

This construct would mess up our picture of dark matter, and a lot about our cosmology. But if there are Higgses in this sector, we have the usual hierarchy problem, and we know the simplest solution to the hierarchy is SUSY. So we imagine supersymmetrizing the whole thing: there is then the MSSM in our sector, a whole SUSY dark sector, and a tiny kinetic mixing between the two. If the mixing is 10^{-3}, and symmetry breaking occurs in our sector at a mass scale of about 100 GeV, the breaking induced in the dark-matter world, being of radiative origin through loop diagrams, sits at a few-GeV mass scale.

So the gauge interaction in the DM sector is broken at the GeV scale: a bunch of vectors, and other particles, right next door. These particles would couple to SM ones proportionally to charge, at the 10^{-3} to 10^{-4} level. This is dangerous, since the suppression is not large. The best limits on such a scenario come from e+e- factories. It is really interesting to go back and look for these things in BaBar and other experiments: the data are on tape. We might discover something there!

All the cosmological inputs have difficulty with the standard WIMP scenario. DAMA, PAMELA, and ATIC have recently evidenced anomalies that do not fit our simplest-minded picture, but they get framed nicely in this picture instead.

The scale of these new particles is more or less fixed at the GeV region. This has an impact on every way that you look at DM. As for the spectrum of the theory, there is a splitting in masses, given by the coupling constant \alpha in the DM sector times the mass in the DM sector: a scale of the order of \alpha M. It is radiative. There are thus MeV-like splittings between the states, and there are new bosons with GeV masses that couple to them. These vectors couple off-diagonally to the DM. This is a crucial fact, simply because if you have N states, their gauge interaction is a subpart of a rotation between them. The only possible interaction these particles can have with the vector is off-diagonal. That gives a cross section comparable to the weak scale.

The particles annihilate into the new vectors, which eventually have to decay. They would otherwise be stable, but there is a non-zero coupling to our world, so what do they decay into? Not proton-antiproton pairs, but electron or muon pairs. These are features that are hard to get with ordinary WIMPs.

And there is something else to expect: these particles move slowly, have long-range interactions and geometric cross sections, and they may go into excited states. Their splitting is of the order of the MeV, which is not far from their kinetic energy in the galaxy. So with the big geometric cross section they have, you expect them not to annihilate but to get excited, and then decay back by emitting e+e- pairs. That is a source of low-energy electrons and positrons, which explains an excess of these particles in cosmic rays.

If they hit a nucleus, the nucleus has a charge, the vector is light, and thus the cross section is comparable to Z and H exchange. The collision is not elastic: it changes the nature of the particle. This changes the analysis one would do, and it makes it possible for DAMA to be consistent with the other experiments.

Of course, the picture drawn above is not the most minimal possible thing: to imagine that dark matter is charged and has gauge interactions is in fact quite far-fetched. But it can give a correlated explanation of the cosmological inputs.

Now, why does this have the potential of making life so good at the LHC ? Because we can actually probe this sector sitting next door, particularly in the SUSY picture. In fact, SUSY fits nicely in the picture, while being motivated elsewhere.

This new “hidden” sector has been studied by Strassler and friends in Hidden valley models. It is the leading way by means of which you can have a gauge sector talking to our standard model.

The particular sort of hidden valley model we have discussed is motivated if you take the hints from astrophysics seriously. Now, what does it do at the LHC? GeV-scale particles have gone unseen for thirty years… but that is because we have to pay a price: the tiny mixing.

Now, what happens with SUSY is nice: if you produce superpartners, you will always end up in this sector. The reason is simple: normally particles decay into the LSP, which is stable. But now it cannot be stable any longer, because the coupling provides a mixing between the gaugino in our sector and the photino in their sector. Thus, the LSP will decay to lighter particles in the other sector, producing other particles there. These particles are light, so they come out very boosted. They make a Higgs boson in the other sector, which decays to a pair of that sector's W bosons, and one finally ends up with the lightest vector of the other sector, which decays to an electron-positron pair in our sector.

There is a whole set of decays that gives lots of leptons, all soft in their sector but coming from the decay of a 100 GeV particle. The signature could thus be jets of leptons, and every SUSY event will contain two of them: two jets of leptons, each with at least two, if not many more, leptons with high P_T but small opening angles and invariant masses. That is the smoking gun. As for lifetime, these leptons might be somewhat displaced, but the preferred situation is that they are prompt.

Ridiculous branching fractions nailed November 28, 2008

Posted by dorigo in news, physics, science.

B hadrons are fascinating bodies. They are bound states of a bottom quark and lighter ones, and due to the smallness of a parameter called V_{cb}, the element of the Cabibbo-Kobayashi-Maskawa mixing matrix, they live more than a trillionth of a second before disintegrating.

A trillionth of a second sounds like a really short lifetime, but it is not so short in the realm of particle physics, where states with lifetime millions of times shorter yet are not uncommon. A B hadron created in a high-energy collision (one of the thousand per second produced in the core of the CDF detector at Fermilab) can travel several millimeters away from the collision point before decaying.
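That "several millimeters" is easy to check: the decay length is \beta \gamma c \tau, with \beta \gamma = p/m. A quick estimate (the momentum value is illustrative):

```python
# Rough decay-length estimate for a B hadron.
c_tau_mm = 0.45     # c times a ~1.5 ps B-hadron lifetime, in mm
m_b_gev = 5.28      # B meson mass in GeV

def decay_length_mm(p_gev):
    return (p_gev / m_b_gev) * c_tau_mm   # beta*gamma = p/m

print(round(decay_length_mm(20.0), 2))    # ~1.7 mm for a 20 GeV/c B hadron
```

Higher-momentum B hadrons, common at the Tevatron, travel correspondingly farther, well into the "several millimeters" range.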

A recent analysis by the CDF collaboration has focused on rare decays of electrically-neutral B hadrons: both mesons (b \bar q states, with q any quark lighter than the bottom) and baryons (b q q' states, where q, q' indicate light quarks). The decays searched for were those yielding just a pair of oppositely-charged lighter hadrons, neither of which contains the next-lightest quark, the charm.

The bottom quark decays by charged current interaction, emitting a W boson and transmuting into a lighter quark. As I hinted above, the transition is usually b \to c, and its “strength” is proportional to the square of the CKM matrix element V_{cb}.  However, nothing prevents a direct transition to an up quark: b \to u. This process however is much less frequent, because the ratio V_{ub}/V_{cb} is very small: bottom hadrons usually decay to ones containing charm.

Studying the two-body decays of a few B hadrons to states not containing charm is difficult because of the rarity of these phenomena. However, CDF is well equipped: thanks to the Silicon Vertex Tracker, a wonderful device capable of measuring track parameters in a time of about 10 microseconds,  events with two tracks not pointing back to the point where the beams cross (where the proton-antiproton collision must have originated) can be collected with high efficiency.

The power of the SVT is that it not only measures track momenta from the curvature of tracks in the magnetic field: it can also measure the track impact parameter in the plane orthogonal to the beam direction, with a precision practically identical to that obtained by more sophisticated, slower algorithms. Huge samples of B hadron decays are thus made available to the analysis.

A recent study by CDF has used track pairs to put in evidence charmless decays of B hadrons which have really tiny branching fractions: we go from the decay B^\circ_s \to K^- \pi^+, measured at BR=(5.0 \pm 0.7 \pm 0.8) \times 10^{-6}, to \Lambda^\circ_b \to p K^-, measured at BR=(5.6 \pm 0.8 \pm 1.5) \times 10^{-6}, and \Lambda^\circ_b \to p \pi^-, measured at BR=(3.5 \pm 0.6 \pm 0.9) \times 10^{-6}. How did CDF measure such rare decays?

The easy part is to reconstruct a mass distribution. You take the two tracks in SVT-triggered events passing an optimized data selection, and compute the track-track mass under the hypothesis that the two tracks are charged pions. You need to hypothesize some mass for the two bodies, which could be pions but could also be kaons or protons or other particles with lifetimes long enough to leave a full track in the detector. Once you do that, you get a distribution like the one shown below (for the moment, ignore the various colored distributions and concentrate on the black bullets with error bars):
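The track-track mass computation just described is straightforward four-vector arithmetic; here is a minimal sketch (the momenta in the example are made up):

```python
import math

def track_pair_mass(p1, p2, m1, m2):
    """Invariant mass (GeV) of two tracks with assumed masses m1, m2."""
    e1 = math.sqrt(sum(x * x for x in p1) + m1 * m1)
    e2 = math.sqrt(sum(x * x for x in p2) + m2 * m2)
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt((e1 + e2)**2 - px * px - py * py - pz * pz)

M_PI = 0.1396   # charged pion mass: the "minimal" hypothesis used in the plot
print(track_pair_mass((2.6, 0.3, 0.0), (-2.6, -0.3, 0.0), M_PI, M_PI))
```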

Now, the distribution of black points contains several important features, which one can see clearly even ignoring the various coloured areas under the points. There is an evident background right under the main peak and across all mass values, but also a nasty shoulder at low mass. Furthermore, the main peak appears to be the composition of different contributions. It is not too hard to figure out the origin of the different components, however, even without reading the fine print.

First of all, the flat background, visible mainly on the right, is plausibly due to random combinations of charged tracks which do not originate from a resonance decay. The flatness of their mass spectrum is in fact a trademark of the randomness with which one may associate pairs of tracks which have nothing to share.

Then, the “shoulder” on the left. This is trickier, but you can understand its source if you size it up: it is something which happens more or less as frequently as two-body B hadron decays, and yet does not produce a distinct peak at the mass of the B hadron (which is of the order of 5.3 GeV), but lower. These events are due to B hadrons which produced two charged tracks plus other particles, either charged or neutral. By picking only two tracks to compute the hadron mass in these cases, one seriously underestimates the mass of the decaying object. The effect has a “turn-off” for masses just below the B hadron mass, because it is quite infrequent to lose a track and still reconstruct a mass close to the true one: even losing a single pion results in a negative bias of about 140 MeV.

The signal peak at the center of the graph is the composition of several different ones. Here, we must remember two things. The first is that we are observing not one single particle, but at least three: the B^\circ, which has a mass of 5.279 GeV; the B^\circ_s, which is a meson containing a bottom and a strange quark in a q \bar q combination, and weighs 5.367 GeV; and the \Lambda^\circ_b, which is a udb baryon and weighs 5.620 GeV. So the “bump” is indeed the combination of three particle decays. But it is also important to remember that we arbitrarily assigned the mass of the charged pion (0.139 GeV) to the two tracks! In a two-body decay of a B hadron not only pions but also kaons and protons are produced; and since the masses of the latter particles are quite a bit larger than the pion's (0.494 and 0.938 GeV, respectively), we appreciably underestimate the hadron mass when we reconstruct it that way!
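To see how large the underestimate is, consider a B^\circ_s \to K^+ K^- decay reconstructed with the pion mass hypothesis (a back-of-the-envelope calculation in the B_s rest frame, not the full detector kinematics):

```python
import math

m_pi, m_k, m_bs = 0.1396, 0.4937, 5.367   # GeV (PDG values)

# B_s -> K+ K- at rest: each kaon carries momentum p*
p_star = math.sqrt((m_bs / 2.0)**2 - m_k**2)

# the same two tracks, reconstructed with the pion mass hypothesis
m_pipi = 2.0 * math.sqrt(p_star**2 + m_pi**2)
print(round(m_pipi, 3))   # ~5.283 GeV: about 85 MeV below the true B_s mass
```

Each decay mode thus peaks at a characteristic, slightly shifted position in the \pi \pi mass: exactly the structure visible in the central bump.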

I am sure I have managed to confuse most of you. What is the rationale, I can hear you mutter, of assuming the pion mass for tracks that are not pions? Well: look at the plot! If we reconstructed every decay at its true mass, the different decay modes would all peak in the same place, and we would lose the discriminating power of the reconstructed mass, the variable we are plotting. Besides, CDF cannot easily discriminate pions from kaons and protons, so we are somewhat forced to make an assumption: the “minimal” one, the pion mass, is what is used in such cases.

If you give another critical look at the plot, knowing now why the different decays are expected to produce peaks at different “track-track” mass values, you may well raise an eyebrow: there are eight different components contributing to the central structure, and the data points certainly cannot discriminate them all! How can CDF claim to measure each of those decays so precisely?

Well, first of all, not all of those components are determined with precision: the two smallest ones are not determined by the CDF analysis (CDF only puts a 90% CL limit on their branching fractions, in fact). But of course, there is one missing piece in the puzzle, which I have so far hidden from your view: the track ionization measurement.

Charged tracks ionize the gas filling the CDF tracking chamber at different rates per unit distance traversed, depending on their speed. We measure the momentum of tracks from their curvature in the magnetic field, but momentum is the product of speed and mass (times the relativistic factor \gamma). By determining the amount of charge freed from the gas atoms along the particle path and combining that information with the particle momentum, we thus have a handle to discriminate different particle species. See the graph below:

In the graph (admittedly, a very complicated one), you can see how different particles carrying the same momentum exhibit a different energy loss. On the horizontal axis you have the particle momentum in GeV units, on the vertical axis the energy loss per unit distance. Each detected particle is a small black dot, and you immediately see that the dots cluster along different lines. These lines, which are rather more like bands, since there is some uncertainty in the plotted quantities, characterize the different behavior of each particle species. You see that the same behavior (a rapidly falling curve, followed by a slow rise at very high momenta) is repeated for the different particles at different values of momentum, because different particles have different masses, and the ionization loss depends only on speed, not momentum. (For electrons the functional dependence is different, but that is another story, worth a separate post…)
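The fixed-momentum comparison can be illustrated with the leading 1/\beta^2 behavior of the energy loss (a toy scaling of my own, not the full Bethe-Bloch formula, which also has density and logarithmic terms):

```python
# At these momenta the ionization loss goes roughly as 1/beta^2, so at fixed
# momentum a heavier particle (smaller beta) ionizes more.
def dedx_relative(p_gev, m_gev):
    beta_sq = p_gev**2 / (p_gev**2 + m_gev**2)
    return 1.0 / beta_sq

for name, m in [("pion", 0.1396), ("kaon", 0.4937), ("proton", 0.9383)]:
    print(name, round(dedx_relative(0.7, m), 2))
# at 0.7 GeV/c the proton ionizes almost three times more than the pion
```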

Now, to make an example: say you have a 0.7 GeV/c track and you measure an ionization of 2 “units” (the quantity on the Y axis). After checking the plot above, you can be reasonably sure it is a proton, if your ionization and momentum measurements are any good. In practice, though, the discrimination is not very efficient, because CDF can only measure ionization with low resolution, through the width of the electronic pulses on the wires collecting the charge.

Now, let us go back to the problem of discriminating different B hadron decays. Each of the two tracks in these events is classified based on its measured ionization, and the information is used in a likelihood function. Another likelihood function incorporates all the information on the kinematics of the two particles, and the product of these functions is used to discriminate the different decays. In the end, the mass distribution you saw above displays the result of the fit, where the different components are constrained not just by their mass values, but by all the kinematic and energy-loss information each event possesses.
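A cartoon of that likelihood product (all numbers, resolutions, and the Gaussian shapes here are invented for illustration; the real analysis uses full templates for each decay hypothesis):

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * math.sqrt(2 * math.pi))

def hypothesis_likelihood(dedx_obs, mass_obs, dedx_pred, mass_pred,
                          dedx_res=0.3, mass_res=0.1):
    # product of independent ionization and kinematic terms for one hypothesis
    return gaussian(dedx_obs, dedx_pred, dedx_res) * gaussian(mass_obs, mass_pred, mass_res)

# an event with proton-like ionization prefers a proton-track hypothesis
l_proton = hypothesis_likelihood(2.8, 5.25, dedx_pred=2.8, mass_pred=5.28)
l_pion = hypothesis_likelihood(2.8, 5.25, dedx_pred=1.0, mass_pred=5.28)
print(l_proton > l_pion)   # True
```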

Knowing the level of detail of the analysis which lies behind the measurement, I am impressed by the accuracy with which these rare decays have been nailed by CDF, and I do not hide that it makes me proud to sign the paper, which I finished reviewing today and which is just about to be sent to PRL. But one question remains: now that we know these branching fractions so precisely, what do we do with them?

Of course: we add a line to the PDG data book! But seriously, there are implications for new physics theories. Indeed, some of these decays are predicted to be enhanced by Supersymmetric theories with R-parity violation. So these measurements are yet another small step in the same direction: kicking SUSY off the table, bit by bit. It will take a while, though!

Predictions for SUSY particle masses! September 2, 2008

Posted by dorigo in cosmology, news, science.
Tags: , , , ,
comments closed

Dear reader, if you are not a particle physicist you might find this post rather obscure. In that case I apologize, and would rather direct you to some easier discussions of Supersymmetry than attempt to shed light on the highly technical information discussed below:

  • For an introduction, see here.
  • For dark matter searches at colliders, see a three-part post here and here and here.
  • Other dark matter searches and their implications for SUSY are discussed here.
  • For a discussion of the status of Higgs searches and the implications of SUSY see here and here.
  • For a discussion of the implications for supersymmetry of the g-2 measurement, see here;
  • A more detailed discussion can be found in a report of a seminar by Massimo Passera on the topic, here and here.
  • For B \to \mu \mu searches and their impact on SUSY parameter space, see here.
  • For other details on the subject, see this search result.
  • And for past rumors on MSSM Higgs signals found at the Tevatron, have a look at these links.

If you have some background in particle physics, instead, you should definitely take a look at this new paper, which appeared on the arXiv on August 29th. Like previous studies, it uses a wealth of experimental input from precision Standard Model electroweak observables, B physics measurements, and cosmological constraints to determine the allowed range of parameters within two constrained models of Supersymmetry -namely, the CMSSM and the NUHM1. However, the new study does much more than just turn a crank for you. Here is what you get in the package:

  1. direct -and more up-to-date- assessments of the amount of data which LHC will need to wipe these models off the board, if they are incorrect;
  2. a credible spectrum of the SUSY particle masses, for the parameters which provide the best agreement between experimental data and the two models considered;
  3. a description of how much will be known about these models as soon as a few discoveries are made (if they are), such as the observation of an edge in the dilepton mass distribution extracted by CMS and ATLAS data;
  4. a sizing up of the two models, CMSSM and NUHM1 -which are just special cases of the generic minimal supersymmetric extension of the standard model. Their relative merit in accommodating the current value of SM parameters is compared;
  5. most crucially, a highly informative plot showing just how much we are going to learn on the allowed space of SUSY parameters from future improvements in a few important observables.

So, if you want to know what is currently the best estimate of the gluino mass: it is very high, above 700 GeV in the CMSSM and a bit below 600 GeV in the NUHM1. The lightest Higgs boson, instead, is -perhaps unsurprisingly- lying very close to the lower LEP II limit, in the 115 GeV ballpark (actually, even a bit lower than that, but that is a detail - read the paper if you want to know more about that). The LSP is instead firmly in the 100 GeV range. For instance, check the figure below, showing the best fit for the CMSSM (which, by the way, implies M_0 = 60 GeV, M_{1/2}=310 GeV, A_0 = 240 GeV, and \tan \beta =11).

The best plots are however the two I attach below: they represent a commendable effort to make things simpler for us - a highly distilled result of the gazillions of CPU-intensive computations which went into determining the area of parameter space that current particle physics measurements allow. In them, you can read out the relative merit of future improvements in a few of the most crucial measurements in electroweak physics, B physics, and cosmology, as far as our knowledge of MSSM parameters is concerned. The allowed area in the space of two parameters -the (m_0, m_{1/2}) plane as well as the (m_0, \tan \beta) plane, at 95% confidence level- is studied as a function of the variation in the total uncertainty on five quantities: the error on the anomalous magnetic moment of the muon, \Delta (g-2)_\mu, the uncertainty in the radiative decay b \to s \gamma, the uncertainty in the cold dark matter density \Omega h^2, the branching fraction of B \to \tau \nu decays, and the W boson mass M_W.

Extremely interesting stuff! One learns that future improvements in the measurement of the dark matter fraction will yield NO improvement in the constraints on the MSSM parameter space. In a way, dark matter does point to a sparticle candidate, but WMAP has already measured it too well!

Another point to take from the graphs above is that, of the observables listed, the W boson mass is the one whose uncertainty is going to be reduced sizably very soon -that is where we expect to improve matters most in the near future, provided of course that the LHC does not see SUSY first! Instead, the b \to s \gamma branching fraction uncertainty might actually turn out to be larger than assumed in the paper, making the allowed MSSM parameter space larger rather than smaller. As for the muon g-2, things can go in both directions there as well, as estimates of the current uncertainties are revised. These issues are discussed in detail in the paper, so I had better direct you to it rather than insert my own misunderstandings.

Finally, the current fits slightly favor the NUHM1 scenario (the single-parameter Non-Universal Higgs Model) over the CMSSM. The NUHM1 scenario includes one further parameter, governing the difference between the soft SUSY-breaking contribution to M_H^2 and that to squark and slepton masses. The overall best-fit \chi^2 is better, and this implies that the additional parameter is used successfully by the fitter. The lightest Higgs boson mass also comes up at a “non-excluded” value of 118 GeV, higher than for the best-fit point of the CMSSM.

New zoom in on the Higgs mass from Summer 2008 Tevatron results! July 31, 2008

Posted by dorigo in news, physics, science.
Tags: , , , , ,
comments closed

Many thanks to Sven Heinemeyer, who provided me this morning with a fresh update of the traditional plot summarizing the status of Standard Model measurements of top quark and W boson masses, their consistency with SM and SUSY, and their impact on the Higgs boson mass. Have a look at it below (a better version, in .eps format, is here):

As you can see, the consistency between direct determinations at the Tevatron (blue ellipse) and the LEP II (black lines) and LEP I/SLD results (hatched purple lines) is still quite good.

One detail worth mentioning: when plotting a 68% CL ellipse atop a 68% interval, the interval will look more restrictive in the variable which is measured (in the case of the blue and black lines, the W boson mass, on the Y axis), because the ellipse needs to extend well past the 1-sigma limits to enclose a total probability of 68%.
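The size of that effect is easy to compute. For a two-dimensional Gaussian, the contour enclosing probability P is where the chi-square with two degrees of freedom reaches its P-th percentile, and for two degrees of freedom the chi-square CDF has the closed form 1 - \exp(-x/2). A 68% ellipse therefore reaches about 1.51 standard deviations along each axis, compared to 1.00 sigma for a one-dimensional 68% interval:

```python
import math

# For 2 degrees of freedom the chi-square CDF is 1 - exp(-x/2), so the
# contour containing probability P sits at x = -2*ln(1-P); the half-axis
# of the ellipse, in units of sigma, is the square root of that.
def ellipse_halfaxis_sigmas(coverage):
    return math.sqrt(-2.0 * math.log(1.0 - coverage))

# A 2D 68% ellipse extends to ~1.51 sigma on each axis, which is why it
# looks wider than the corresponding 1D 68% (i.e. 1 sigma) band.
```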

The Tevatron results on the W mass are by now no worse than the LEP II ones - and they are based on only one experiment -CDF- analyzing a twentieth of the currently available data! The ultimate W mass precision within reach of CDF is estimated at 15 MeV, three times better than the current result.

So there is still a lot to squeeze out of Tevatron data, even though the update you are looking at “only” includes an improved measurement of the top quark mass, which now sits at 172.4 ± 1.2 GeV - a 0.7% accuracy on this important parameter of the Standard Model.

It remains for me to congratulate my colleagues in CDF and D0 on their continuing effort. Well done, folks!

UPDATE: a commenter asks for the 95% CL ellipse in the plot above. I advise him, and whoever else wants much more information, to visit Sven’s site.

Also, two other blogs have posted today discussing this result: Lubos Motl and Marco Frasca. NB: Lubos advertises his blog in the comment section below, and he says he did a much better job than me in discussing the new results… I believe him: I wrote mine with my kids running around, asking me to finally leave for a hike on the mountains. I believe Lubos has no kids so… Enjoy!

String theorists betting against SUSY July 23, 2008

Posted by dorigo in physics, science.
Tags: , ,
comments closed

This post contains second-hand information, but I place it here anyway, because a blog is also a record of things. So, I read with interest on Peter Woit’s blog a summary of the latest paper posted on the arXiv by Bert Schellekens. Peter’s review is worth reading from head to tail (I don’t know about the 80-something-page article), but I especially found interesting a quote from Schellekens’ paper, which says it as clearly as one can:

With the start of the LHC just months away (at least, I hope so), this is more or less the last moment to make a prediction. Will low energy supersymmetry be found or not? I am convinced that without the strange coincidence of the gauge coupling convergence, many people (including myself) would bet against it. It just seems to have been hiding itself too well, and it creates the need for new fine-tunings that are not even anthropic (and hence more serious than the one supersymmetry is supposed to solve).

Be sure to get this right: he is a die-hard landscape-enthusiast string theorist. And he is saying he would bet against SUSY at the LHC.

With the CERN machine’s start just around the corner, things are indeed getting to some accumulation point. I myself, after having bet heavily (well, by my standards) against the observation of SUSY at the LHC, am starting to think I might in the end turn out to be the happy loser.

What is worth mentioning, however -and this is my final prediction- is that as soon as protons start hitting other protons head-on at 10 TeV this fall, we will slowly relax and realize it is going to take a while before we can say anything from the mess of hadrons that will come out of the centers of CMS and ATLAS every 25 nanoseconds.
