
An update on the 2.1-sigma MSSM Higgs signal
May 29, 2008

Posted by dorigo in news, physics, science.

While doing some cleanup this afternoon, I discovered a forgotten text I had written seven months ago and put on stand-by, awaiting the proper time to post it. The reason for the delay – which is an order of magnitude longer than I had originally intended – is explained in the text highlighted in purple, while any correction or amendment I made to the original post is in green. Anyway, despite the fact that the topic of this post is no longer “new”, I think it remains interesting – and the result described is still the best so far in this channel. So please find the recovered text below. Before it, I chose to re-post the introductory explanation of the physics.

Last January [2007] readers of this blog and Cosmic Variance got acquainted with a funny effect seen by CDF in the data in which they were searching for a signal of supersymmetric Higgs boson decays to tau-lepton pairs: the data did allow for a small signal of H \to \tau \tau decays, if a higgs mass of about 150-160 GeV was hypothesized, together with a hitherto unexcluded value of some critical parameters describing the model considered in the search. The plot below shows the mass distribution of events compatible with the searched double tau-lepton final state: backgrounds from QCD, electroweak, and Drell-Yan processes are in grey, magenta and blue, respectively, and the tentative signal is shown in yellow.

Although John Conway (the writer at CV and one of the analysis authors) and I were quite adamant in explaining that the effect was most likely due to a fluctuation of the data, and that its significance was in any case very modest, the rumor of a possible discovery spread around the web, and was eventually picked up in articles which appeared in March in New Scientist and the Economist. I have described in detail the whole process and its implications time and again (check my Higgs search page), so I will not add anything about that here.

What I wish I could discuss today is the new result obtained by John and his team in the same search, which is now based on twice as much statistics. You would guess that if you double the statistics, a true signal would roughly double in size, and its significance would grow by about 40%: Correct. Further, if you also had some experience with hadron collider results, you would actually expect an even larger increase, because analyses in that environment continue to improve as time goes by and a better understanding of backgrounds is achieved. On the other hand, a fluctuation would be likely to get washed away by a doubling of the data…
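(For the record, the 40% figure comes from the usual rule of thumb that the significance of a small signal S over a background B scales like S/\sqrt{B}: doubling the data doubles both S and B, so S/\sqrt{B} \to 2S/\sqrt{2B} = \sqrt{2} \times S/\sqrt{B}, an increase of about 41%.)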

CDF has a policy of making a physics result public only after a careful internal scrutiny and several passes of review. After the result is “blessed”, there is nothing wrong in distributing it – but a nagging moral responsibility remains toward the authors themselves, who have to be given the chance of being the first to present their findings to the outside world. I used not to consider this a real obligation in the past, until I discussed the matter with a few colleagues. Among them, the same John Conway who is the mastermind behind the H \to \tau \tau analysis. I hold John in high esteem, an esteem matured during a decade of collaboration; he was instrumental in making me change my mind about the issue. For that reason, I am not able to disclose here the details of his brand new result, which was blessed last week in CDF, until I get news about a public talk on the matter.

Because of the above, this post will not discuss the details of the new result, and it will remain unfinished business for a while. I will update it with a description of the result when I get a green light; for the time being, though, I think I can still do something useful: make an attempt at putting readers in a position to understand the main nuts and bolts of the theoretical model within which the 2.1 sigma excess was found nine months ago.

I will describe the new result below, but first let me introduce the topic and put it in context.

1 – TWO WORDS ABOUT SUSY

First of all, what is the MSSM ? MSSM stands for “Minimal Supersymmetric Standard Model”. It is an extension of the Standard Model of particle physics which attempts to remedy some of its unsatisfactory features; it is the minimal version of a class of theories called SUperSYmmetric – SUSY for friends. These theories postulate a symmetry between fermions (particles having a half-integer value of the quantum number called “spin”) and bosons (particles with zero or integer spin): for every known fermion (spin 1/2) there exists a supersymmetric partner whose characteristics are the same except for the spin, which is 0; and likewise, every known boson has a spin-1/2 partner. Such a doubling of all known particles would automatically solve the problem of “fine tuning” of the Standard Model (which was excellently explained by Michelangelo Mangano recently; also see Andrea Romanino’s perspective on the issue), and it would have the added benefit of allowing a unification of the coupling constants of the different interactions at a common, very high energy scale. Some say SUSY would make the whole theory of elementary particles considerably prettier; others disagree. If you ask me where I stand, I think it just makes things messier.

Physicists have always been wary of adding parameters or entities to their model of nature, even when the model is obviously incomplete or when the addition appears justified by experimental observation. Scientific investigation proceeds well by following Occam’s principle: “entia non sunt multiplicanda praeter necessitatem“, entities should not be multiplied needlessly.

The extension of the standard model to SUSY implies the existence of not just one but a score of new, as-yet-unseen elementary particles: in order for SUSY to be there and still remain undiscovered, we need to have so far missed all these bodies, and the only way that is possible is if all SUSY particles have large masses – so large that we have so far been unable to produce them in our accelerators. Such a striking difference between particles and sparticles can be due to a “SUSY-breaking” mechanism, a contraption by which the symmetry between particles and sparticles is broken, endowing all sparticles with masses much larger than those of the corresponding particles: and funnily enough, their value has to be juuuuust right above the lower limits set by direct investigation at the Tevatron and elsewhere, in order for the coveted “unification of coupling constants” to be possible.

So if we marry the hypothesis of SUSY, we need to swallow the existence of a whole set of new bodies AND an uncalled-for mechanism which hid them from view until today. Plus, of course, scores of new parameters: mass values, mixing matrix elements, what-not. Occam’s razor is drooling to come into action. In fact, so many choices are possible for the free parameters of the theory that, in order to be sure of talking about the same model, phenomenologists have conceived some “benchmark scenarios”: choices of parameters that describe “typical” points in the multi-dimensional parameter space.

2 – THE HIGGS SECTOR OF THE MSSM

A very important subclass of these benchmarks (but some would frown at my calling it a benchmark: it is more like a space of models) is the so-called “Minimal Supersymmetric extension” of the standard model, also known as MSSM. In the MSSM the Higgs mechanism yields the smallest number of Higgs bosons: five physical particles, as opposed to a single neutral scalar particle in the standard model. Let me introduce them:

  • two neutral, CP-even states: h, H (with m_h < m_H)
  • one neutral, CP-odd state, A
  • two electrically charged states: H^+, H^-.

The CP-parity of the states need not bother you: it is irrelevant for the searches discussed in this post. However, you should take away the fact that there are three neutral scalar bosons to search for, and not just one.

Where do these five states come from ? Well, the symmetry structure of SUSY requires that two different higgs boson doublets be responsible for the masses of up-type fermions (the u, c, t quarks) and down-type fermions (the d, s, b quarks and the charged leptons e, \mu, \tau), respectively. Two (2) doublets (x2) of complex (x2) scalar fields make for a total of eight degrees of freedom – eight different real numbers, to be clear; three of these are spent to give mass to the W and Z bosons via the higgs mechanism, and five physical particles remain.
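In case that counting went by too fast, here it is spelled out: 2 doublets times 2 complex fields per doublet times 2 real numbers per complex field makes 8 real degrees of freedom; 3 of them are “eaten” by the W^+, W^-, and Z bosons, which thereby acquire their longitudinal polarization along with their mass; and 8 - 3 = 5 physical states remain, namely h, H, A, H^+, and H^-.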

There are a few interesting “benchmarks” in the MSSM. One is called the “no mixing” scenario, and it is the one most frequently used by experimentalists – mainly because it is one of the most accessible by direct searches. There are quite a few others: “Mh max”, “Gluophobic Higgs”, “Small \alpha(eff)”… but we need not discuss them here. What matters is that once the no mixing scenario or any other has been selected, just two additional parameters are necessary to calculate the masses and couplings of the five higgs bosons: the mass of the A boson, m_A, and tan(\beta), the ratio of the vacuum expectation values of the two higgs doublets.

It turns out that if tan(\beta) is large, then the production rate of higgs bosons can be hundreds of times higher than that predicted in the standard model! Of course, very large values of tan(\beta) have already been excluded by direct searches because of that very feature: if no higgs bosons have been found this far, then their production rate must be smaller than a certain value, and that translates into an upper bound on tan(\beta). Nonetheless the parameter space – usually plotted as the plane where the abscissa is m_A and the y-axis represents tan(\beta) – is still mostly unexplored experimentally. Below you can see the region excluded by the analysis of Conway et al. in January 2007.
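To get a feeling for the numbers, here is a tree-level rule of thumb (it ignores the SUSY radiative corrections to the b-quark Yukawa coupling, so take it as indicative only): the coupling of the neutral states to down-type fermions is enhanced roughly by a factor tan(\beta) with respect to the standard model, and since rates go with the square of the coupling, the production cross section scales like \sigma \propto \tan^2 \beta. A value tan(\beta) = 30 thus already buys an enhancement of order 30^2 = 900, which is where the “hundreds of times higher” above comes from.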

One thing to keep in mind when discussing the phenomenology of these theories is the following: among the three neutral scalars, two ([h,A] or [H,A]) are usually very close in mass, so that they effectively add their signals together, and those signals are to all practical purposes indistinguishable. Therefore, rather than discussing the search for a specific state among h, H, and A, experimentalists prefer to discuss a generic scalar \phi, a placeholder for the two degenerate states.

3 – MSSM HIGGS PRODUCTION AND DECAY

Higgs production in the MSSM is not too special: the diagrams producing a neutral scalar (h, H, or A) are the same as in the standard model. However, due to the highly boosted couplings of two of these three states to down-type fermions (an increase roughly equal to tan(\beta)), two diagrams contribute the most: gluon-gluon fusion via a b-quark loop (see below, left) and direct b-quark annihilation (right). The b-quark is in fact privileged by being a down-type quark AND having a large mass.

As for the decay of these particles, the same enhancement in the couplings dictates that the most likely decay is to b-quark pairs (about 85 to 90%). The remainder is essentially a 10-15% chance of decay to tau-lepton pairs, which are also down-type fermions and also have a largish mass: 1.777 GeV, to be compared to the roughly 3-4 GeV of b-quarks “photographed” at high Q^2. Decay rates scale with the square of the coupling, and the coupling scales with the mass: that explains the order-of-magnitude difference in decay rates.
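A quick back-of-the-envelope check of those percentages: partial widths to fermion pairs scale like N_c m_f^2, with a color factor N_c = 3 for quarks and 1 for leptons, so \Gamma(\phi \to b \bar{b}) / \Gamma(\phi \to \tau^+ \tau^-) \approx 3 \times (3/1.777)^2 \approx 8.5. If the two channels essentially saturate the total width, that ratio corresponds to branching fractions of roughly 89% and 11%, in line with the numbers quoted above.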

Because of the impossibility of going on to describe the analysis, I will conclude this incomplete post with a point about the parameter space. [Let’s see this anyway before I describe the result]. There is in fact one subtlety to mention. As tan(\beta) becomes large, the usually narrow higgs bosons acquire a very large width. The width of a particle is an attribute which defines how far from the nominal mass the actual mass of a produced state can be. Now, the higgs boson in the standard model has a width much smaller than 1 GeV, which is totally irrelevant when compared with the experimental precision of the mass reconstruction. The same cannot be said of MSSM higgs bosons if tan(\beta) is large: it is in fact the large coupling to down-type fermions that causes the large indetermination in the mass. As tan(\beta) grows beyond about 60, the coupling actually becomes non-calculable by perturbation theory, the width becomes really large and rather ill-determined (10 GeV and above), and the higgs resonances lose their most significant attribute, i.e. a well-defined mass.
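To put a rough number on that, here is a tree-level estimate (it neglects the tau channel and all radiative corrections, so take it as an order of magnitude only): the width to b-quark pairs grows like \Gamma(A \to b \bar{b}) \approx \frac{3 G_F m_A m_b^2}{4 \sqrt{2} \pi} \tan^2 \beta, which for m_A = 150 GeV, m_b \approx 3 GeV, and tan(\beta) = 60 gives roughly 10 GeV, to be compared with a few MeV for a standard-model-strength coupling at the same mass.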

The effect discussed above has two consequences. One is that the region of parameter space corresponding to too large values of tan(\beta) is not well-defined theoretically. The other is that if one were to perform the search carefully in that region, one would need to include the effect of the large width in the mass templates used to search for the higgs bosons. Given a mass of the A particle, a different mass template would then be needed for each value of tan(\beta), making the analysis quite a bit more complex. Physicists like to approximate, and mostly they get away with it when the neglected effects are small, but in the case of large tan(\beta) the approximation fails and a precise computation is not possible.

The bottom line is: a grain of salt is really needed when interpreting the results of an MSSM Higgs search.

That said, I think it is time for a rapid description of the analysis.

4 – THE EXPERIMENTAL SEARCH BY CDF

CDF and D0 have been searching for MSSM neutral higgs bosons for a while now. I reported on the latest CDF result, obtained from the analysis of events with three b-quark jets, [just a month ago] last fall. Now it is time for the brand new CDF search for the decay H \to \tau \tau, which in January 2007 made headlines due to the excess it was showing.

The analysis was not modified appreciably from its former incarnation. However, I took the time to read the internal analysis note describing in detail the studies which the authors performed in order to understand the data and tune the selection cuts, and I must say I was really impressed by the amount and quality of the work. I have to tip my hat to John Conway – the person who has been after this signature of higgs decay for more than a decade – and to Anton Anastassov, a tau-identification expert and a renowned Higgs hunter as well. Other authors are Cristobal Cuenca, Dongwook Jang, and Amit Lath.

I was mentioning that the analysis stayed the same during 2007: that was indeed a very good idea, in order to prevent a potential signal from being washed away by a modified analysis – although I feel urged to say that a genuine signal cannot hide forever and is bound to creep out of the data at some point!

A total of 1.8 inverse femtobarns of data were analyzed. These correspond to about a hundred thousand billion collisions, as I have grown tired of mentioning: among these, an online trigger system selected those containing a likely signal of an electron or muon. Tau leptons do in fact decay to these lighter leptons about a third of the time: \tau \to e \nu_e \nu_\tau, \tau \to \mu \nu_\mu \nu_\tau. Yes, you read it correctly: they yield an electron or a muon accompanied by two neutrinos. The latter provide no chance of detection.

And what about the remaining two thirds ? Those are “hadronic” decays: tau leptons yield a narrow jet of light hadrons in the remaining cases. These jets contain few charged particles (typically one or three) and leave a signal in the calorimeter which refined algorithms can distinguish – but only on a statistical basis – from jets originating from fragmenting quarks and gluons.

If one is looking for two tau leptons from the decay of a \phi neutral scalar boson, one has to decide which kinds of tau decays to look for. Electrons and muons provide a clean signature but are rather infrequent, while hadrons are more frequent but also background-ridden. The analysis considers three final states that are a good compromise between rate and background contamination:

  1. \tau_e \tau_\mu
  2. \tau_e \tau_h
  3. \tau_\mu \tau_h

where the subscripts indicate the decay to electrons, muons, or hadrons. Double decay to electrons and double decay to muons are neglected because of the large background from Drell-Yan production of lepton pairs, and double decay to hadrons is not considered because its background is too large.
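A bit of arithmetic with the measured tau branching fractions (roughly 18% to electrons, 17% to muons, and 65% to hadrons) shows what that choice costs and buys: the fraction of tau pairs falling in each selected category is about 2 \times 0.18 \times 0.17 \approx 6\% for \tau_e \tau_\mu, 2 \times 0.18 \times 0.65 \approx 23\% for \tau_e \tau_h, and 2 \times 0.17 \times 0.65 \approx 22\% for \tau_\mu \tau_h, so the three channels together cover roughly half of all tau-pair decays; the discarded ee, \mu\mu, and fully hadronic final states account for about 3%, 3%, and 42%, respectively.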

One thing that should be clear is that all three final states contain at least two tau neutrinos: \nu_\tau are in fact the final product of the decay chain in all cases; the semi-leptonic final states 2. and 3. also contain an additional \nu_e or \nu_\mu, respectively; and the dileptonic final state 1. yields a total of four neutrinos. Neutrinos make the event reconstruction tough, because they escape carrying away energy and momentum, making a direct reconstruction of the invariant mass of the body producing the two tau leptons impossible.

Despite that problem, a partial reconstruction of the mass of the tentative Higgs boson producing the taus is possible by using only the visible energy – that of electrons, muons, and jets – plus the so-called “missing transverse energy”, a vector equal to the imbalance in transverse momentum obtained by vectorially adding together all visible transverse momenta. The reason for considering only the transverse component is that in hadron collisions the longitudinal motion of the initial state – i.e., the speed of the center of mass of the collision along the beam direction – is not known: it is not a proton and an antiproton of 980 GeV each that collide, but rather a quark and an antiquark, each of unknown energy.
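To make the idea of “using only the visible energy” concrete, here is a minimal sketch in Python of one common definition of the visible mass: the invariant mass of the lepton, the hadronic tau (or second lepton), and the missing transverse energy treated as a massless object with no longitudinal component. It illustrates the general idea only; the exact CDF recipe may differ, and the function name and input format are my own.

```python
import math

def visible_mass(lep, tau_cand, met):
    """Invariant mass of the visible tau-decay products plus the missing
    transverse energy, the latter treated as a massless object with zero
    longitudinal momentum. Inputs: lep and tau_cand as (pt, eta, phi) in GeV,
    met as (met_pt, met_phi). All objects are approximated as massless."""
    px = py = pz = e = 0.0
    for pt, eta, phi in (lep, tau_cand):
        px += pt * math.cos(phi)
        py += pt * math.sin(phi)
        pz += pt * math.sinh(eta)
        e  += pt * math.cosh(eta)          # massless: E = |p| = pt*cosh(eta)
    met_pt, met_phi = met
    px += met_pt * math.cos(met_phi)       # MET has no longitudinal component
    py += met_pt * math.sin(met_phi)
    e  += met_pt
    m2 = e * e - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))

# Example: a 60 GeV muon, a 55 GeV hadronic tau candidate nearly back-to-back,
# and 25 GeV of missing transverse energy roughly aligned with the tau jet.
print(visible_mass((60.0, 0.3, 0.1), (55.0, -0.5, 3.0), (25.0, 2.9)))
```

The visible mass underestimates the true mass of the parent (the neutrinos carry energy away), but as the figure below shows it still separates bosons of different masses on a statistical basis.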

The figure below shows that a discrimination between Higgs boson signals of different masses is indeed possible: a Z \to \tau \tau decay yields a signal which, once reconstructed, looks like the empty histogram, while higgs bosons of 115 and 200 GeV give the distributions pictured in blue and magenta, respectively.

A selection of tau candidates in the three detection categories is performed by requiring electrons and muons to be well identified and isolated, while jets have to be narrow, contain only one or three charged tracks, and be flagged by a fine-tuned tau-identification algorithm. Then the kinematics of the event is also required to be Higgs-like, by exploiting the angular relationship between the missing transverse energy and the tau decay candidates.

After the selection, the invariant mass distributions of the candidates in the three search categories are finally understood as a sum of the different background processes contributing to the selected data. A refined likelihood fitting technique, incorporating systematic uncertainties as nuisance parameters and a template-morphing method to account for the jet energy scale uncertainty, provides a quite accurate determination of the amount of signal allowed by the data, as a function of the mass of the CP-odd state m_A. Below are shown the fits in the categories \tau_X \tau_h (top, where X = e, \mu) and \tau_e \tau_\mu (bottom). The abscissa is the reconstructed visible mass of the tau pair, the black points are experimental data, and the various histograms stacked one on top of the other show the expected amount of QCD background from fake tau signals (red), electroweak and top pair backgrounds (blue), Drell-Yan dilepton production including Z decays (white), and in yellow the amount of \phi signal that the data excludes at 95% confidence level.
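(Before moving on to the plots, a flavor of what “incorporating systematic uncertainties as nuisance parameters” means in practice. Below is a toy binned-likelihood fit in Python: one Poisson term per visible-mass bin, a signal strength to be fitted, and a single background-normalization nuisance parameter constrained by a Gaussian. It is a deliberately stripped-down sketch with invented numbers, not the CDF machinery, and the template-morphing treatment of the jet energy scale is omitted entirely.)

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, poisson

# Toy visible-mass templates (events per bin); the numbers are invented for
# illustration and have nothing to do with the actual CDF spectra.
bkg_template = np.array([120.0, 90.0, 60.0, 35.0, 18.0, 8.0])
sig_template = np.array([  1.0,  3.0,  8.0, 10.0,  6.0, 2.0])   # for mu = 1
observed     = np.array([118,   95,   63,   38,   20,   9  ])

bkg_prior_unc = 0.10   # assumed 10% prior uncertainty on the background rate

def negative_log_likelihood(params):
    """Poisson likelihood per bin, with the background normalization allowed
    to float within a Gaussian constraint (nuisance parameter theta)."""
    mu, theta = params
    expected = mu * sig_template + (1.0 + theta * bkg_prior_unc) * bkg_template
    expected = np.clip(expected, 1e-9, None)   # keep the Poisson mean positive
    return -(poisson.logpmf(observed, expected).sum() + norm.logpdf(theta))

fit = minimize(negative_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, theta_hat = fit.x
print(f"fitted signal strength: {mu_hat:.2f}, background nuisance: {theta_hat:.2f}")
```

In the real analysis there are of course many more nuisance parameters, correlated across the channels, and the limit is extracted from the full likelihood rather than from a single best-fit point.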

As is clear by looking at the plots, the data follow quite well the sum of backgrounds in both datasets, and no signal is apparent. And in fact, the cross section limit (red curve in the graph below) follows very closely the expectation calculated from pseudoexperiments (hatched curve, with 1- and 2-sigma blue and grey bands overlaid):

From the cross section limit, an exclusion limit in the plane of A mass and tan(beta) can be determined. This is much more stringent than the one John and company obtained in January 2007, and it provides a significant advance in our investigation of supersymmetric models.
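The translation works, to first approximation, because of the \sigma \propto \tan^2 \beta scaling discussed above: at each value of m_A, the excluded cross section \sigma_{95} corresponds to excluding \tan \beta \gtrsim \tan \beta_{ref} \sqrt{\sigma_{95}/\sigma_{ref}}, where \sigma_{ref} is the predicted cross section at some reference value \tan \beta_{ref}. (This is only the leading behavior: the published limit uses the full model prediction, including the width effects discussed in section 3.)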

So, one question remains to be answered at the end of this longish post, and that is: how should we interpret the 2.1 \sigma excess that Conway et al. found in their former analysis ? Of course, as a statistical fluctuation! One that happens roughly once in a hundred cases. Discovering a supersymmetric higgs, on the other hand, is something that can only happen once in a lifetime. Not in this lifetime, if you ask me.

Comments

1. Luboš Motl - June 5, 2008

Frank Shoemaker would call this noise.

2. A Novel Higgs Discovery Channel « Collider Blog - December 31, 2008

[…] mechanism with the highest cross section – gluon fusion. (For an excellent discussion, see a post by Tommaso Dorigo from May of this year.) At the Tevatron, the cross section is about 1.6 pb. So we might take […]


