
New bounds for the Higgs: 115-135 GeV! August 1, 2008

Posted by dorigo in news, physics, science.

From yesterday’s talk by Johannes Haller at ICHEP 2008, I post here today two plots showing the latest results of global fits to standard model observables, highlighting the Higgs mass constraints. The first only includes indirect information, the second also includes direct search results.

The above plot is tidy, yet the amount of information that the Gfitter digested to produce it is gigantic. Decades of studies at electron-positron colliders, precision electroweak measurements, W and top mass determinations. Probably of the order of fifty thousand man-years of work, distilled and summarized in a single, useless graph.

Jokes aside, the plot does tell us a lot. Let me try to discuss it. The graph shows the variation of the fit chi-squared -the standard quantity describing how well the data agree with the model- from its minimum value, as a function of the Higgs boson mass, interpreted as a free parameter. The fit prefers an 80 GeV mass for the Higgs boson, but the range of allowed values is still broad: at 1-sigma, the preferred range is 57-110 GeV. At 2-sigma, the range is of course even wider, from 39 to 156 GeV. If we keep the two-sigma variation as a reference, we note that the H \to WW decay is not likely to be the way by which the Higgs will be discovered.
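For readers who want to reproduce the logic: the 1-sigma and 2-sigma ranges quoted above are just the mass values at which the Delta-chi-squared curve crosses 1 and 4, respectively. Below is a minimal Python sketch of that read-off; the parabola-like curve is an invented stand-in for the real Gfitter output, chosen only to illustrate the procedure.

```python
import numpy as np

# Toy stand-in for the Gfitter Delta-chi2 curve: a parabola in log(m_H) with
# its minimum near 80 GeV. The real curve is not analytic; this only shows
# how the quoted ranges are extracted once the curve is known.
def delta_chi2(m_higgs):
    return ((np.log(m_higgs) - np.log(80.0)) / 0.35) ** 2

masses = np.linspace(20.0, 300.0, 5000)   # GeV
dchi2 = delta_chi2(masses)

def crossing_range(threshold):
    """Mass interval over which Delta-chi2 stays below `threshold`."""
    inside = masses[dchi2 < threshold]
    return inside.min(), inside.max()

for n_sigma, threshold in [(1, 1.0), (2, 4.0)]:
    low, high = crossing_range(threshold)
    print(f"{n_sigma}-sigma range: {low:.0f} - {high:.0f} GeV")
```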

Also note that the LEP II limits have not been inserted in the fit: in fact, the 114 GeV lower limit is hatched but has no impact on the curve, which is smooth because it is unaffected by direct Higgs searches.

Take a look instead at the plot below, which attempts to summarize the whole picture by including in the fit the direct search results at LEP II and at the Tevatron (without the latest results, however).

This is striking new information! I will only comment on the yellow band, which -like the one in the former plot- describes the log-likelihood ratio comparing the data with the signal-plus-background hypothesis. If you do not know what that means, fear not. Let’s disregard how the band was obtained and concentrate instead on what it means. It is just a measure of how likely it is that the Higgs mass sits at a particular value, given all the information from electroweak fits AND the direct search results, which have in various degrees “excluded” (at 95% confidence level) or made less probable (at 80%, 90% CL or below) specific Higgs mass values.
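For the technically minded: the band is built from a test statistic of the LEP-search type, a log-likelihood ratio comparing the signal-plus-background and background-only hypotheses. A single-bin Poisson toy in Python shows the mechanics (the counts and expectations below are invented, not taken from the fits):

```python
import math

def minus_two_ln_q(n_obs, b, s):
    """-2 ln Q = -2 ln [ P(n|s+b) / P(n|b) ] for one Poisson counting bin.
    Negative values mean the data prefer the signal-plus-background hypothesis."""
    # The factorials cancel in the ratio, leaving only rates and the observed count.
    ln_q = n_obs * math.log((s + b) / b) - s
    return -2.0 * ln_q

# Invented example: 10 expected background events, 4 expected signal events.
for n_obs in (8, 10, 14, 18):
    print(n_obs, round(minus_two_ln_q(n_obs, b=10.0, s=4.0), 2))
```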

In the plot you can read off that the Higgs mass now has a quite narrow preferred range: M_H = 120_{-5}^{+15} GeV. That is correct: the lower 1-sigma error is very small because the LEP II limit is very strong, while the upper side is much less constrained. Still, the above 1-sigma band is very bad news for the LHC: it implies that the Higgs is very unlikely to be discovered soon. That is because at low mass ATLAS and CMS need to rely on very tough discovery channels: the very rare H \to \gamma \gamma decay (about one in a thousand Higgses decays that way) or the even more problematic H \to \tau \tau decay. Not to mention the H \to b \bar b final state, which can be extracted only when the Higgs is produced in association with other bodies, and still with huge difficulties, given the prohibitive backgrounds from QCD processes mimicking the same signature.

The 2-sigma range is wider but not much prettier: 114.4 - 144 GeV. I think this really starts to be a strong indication: the Higgs boson, if it exists, is light! And probably close to the reach of LEP II. Too bad for the LEP II experiments - which I dubbed “a fiasco” in a former post to inflame my readers for once. In truth, the LEP II experiments appear likely to one day turn out to have been very, very unlucky!

An update on the 2.1-sigma MSSM Higgs signal May 29, 2008

Posted by dorigo in news, physics, science.

While doing some cleanup this afternoon, I discovered a forgotten text I had written seven months ago and put on stand-by, awaiting the proper time to post it. The reason for the delay -which is an order of magnitude longer than I had originally intended- is explained in the text highlighted in purple, while in green is any correction or amendment I made to the original post. Anyway, despite the fact that the topic of this post is no longer “new”, I think it remains interesting - and the result described is still the best so far in this channel. So please find the recovered text below. Before it, I chose to re-post the introductory explanation of the physics.

Last January [2007] readers of this blog and Cosmic Variance got acquainted with a funny effect seen by CDF in the data where they were searching for a signal of supersymmetric Higgs boson decays to tau lepton pairs: the data did allow for a small signal of H \to \tau \tau decays, if a Higgs mass of about 150-160 GeV was hypothesized, and for a hitherto not excluded value of some critical parameters describing the model considered in the search. The plot below shows the mass distribution of events compatible with the sought double tau-lepton final state: backgrounds from QCD, electroweak, and Drell-Yan processes are in grey, magenta and blue, respectively, and the tentative signal is shown in yellow.

Although John Conway (the writer in CV and one of the analysis authors) and I were quite adamant in explaining that the effect was most likely due to a fluctuation of the data, and that its significance was in any case quite modest, the rumor of a possible discovery spread around the web, and was eventually picked up in articles which appeared in March in New Scientist and The Economist. I have described in detail the whole process and its implications time and again (check my Higgs search page), so I will not add anything about that here.

What I wish I could discuss today is the new result obtained by John and his team in the same search, which is now based on twice as much statistics. You would guess that if you double the statistics, a true signal would roughly double in size, and its significance would grow by about 40%: Correct. Further, if you also had some experience with hadron collider results, you would actually expect an even larger increase, because analyses in that environment continue to improve as time goes by and a better understanding of backgrounds is achieved. On the other hand, a fluctuation would be likely to get washed away by a doubling of the data…
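A quick back-of-the-envelope check of that 40% figure: for a simple counting experiment the significance goes roughly as S/sqrt(B), so doubling the integrated luminosity doubles both the expected signal and the background and multiplies the significance by sqrt(2), about 1.41. In Python, with invented yields:

```python
import math

s, b = 20.0, 400.0                              # invented signal and background yields
significance_before = s / math.sqrt(b)          # naive S/sqrt(B) estimate
significance_after = 2 * s / math.sqrt(2 * b)   # both yields scale with luminosity
print(significance_after / significance_before) # sqrt(2) ~ 1.41, i.e. ~40% more
```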

CDF has a policy of making a physics result public only after a careful internal scrutiny and several passes of review. After the result is “blessed”, there is nothing wrong with distributing it - but a nagging moral responsibility remains toward the authors, who have to be left the chance of being the first to present their findings to the outside world. I used not to consider this a real obligation, until I discussed the matter with a few colleagues; among them, the same John Conway who is the mastermind behind the H \to \tau \tau analysis. I hold John in high esteem, an esteem matured during a decade of collaboration, and he was instrumental in making me change my mind about the issue. For that reason, I cannot disclose here the details of his brand new result, which was blessed last week in CDF, until I get news about a public talk on the matter.

Because of the above, this post will not discuss the details of the new result, and it will remain unfinished business for a while. I will update it with a description of the result when I get a green light; for the time being, I think I can still do something useful: make an attempt at putting readers in a position to understand the main nuts and bolts of the theoretical model within which the 2.1 sigma excess was found nine months ago.

I will describe the new result below, but first let me introduce the topic and put it in context.

1 – TWO WORDS ABOUT SUSY

First of all, what is the MSSM ? MSSM stands for “Minimal SuperSymmetric Model”. It is an extension of the Standard Model of particle physics which attempts a solution of some of its unsatisfactory features; it is the minimal version of a class of theories called SUperSYmmetric - SUSY for friends. These theories postulate a symmetry between fermions (particles having a half-integer value of the quantum number called “spin”) and bosons (particles with zero or integer spin): for every known fermion (spin 1/2) there exists a supersymmetric partner with the same characteristics except for its spin, which is zero; and likewise every known boson has a spin-1/2 partner. Such a doubling of all known particles would automatically solve the problem of “fine tuning” of the Standard Model (which was excellently explained by Michelangelo Mangano recently; also see Andrea Romanino’s perspective on the issue), and it would have the added benefit of allowing a unification of the coupling constants of the different interactions at a common, yet very high energy scale. Some say SUSY would make the whole theory of elementary particles considerably prettier; others disagree. If you ask me where I stand, I think it just makes things messier.

Physicists have always been wary of adding parameters or entities to their model of nature, even when the model is obviously incomplete or when the addition appears justified by experimental observation. Scientific investigation proceeds well by following Occam’s principle: “entia non sunt multiplicanda praeter necessitatem“, entities should not be multiplied needlessly.

The extension of the standard model to SUSY implies the existence of not just one but a score of new, as-yet unseen elementary particles; moreover, in order for SUSY to be there and still undiscovered, we need to have so far missed all these bodies, and the only way that is possible is if all SUSY particles have large masses - so large that we have so far been unable to produce them in our accelerators. Such a striking difference between particles and s-particles can be due to a “SUSY-breaking” mechanism, a contraption by which the symmetry between particles and sparticles is broken, endowing all sparticles with masses much larger than those of the corresponding particles: and funnily enough, their values have to be juuuuust right, above the lower limits set by direct investigation at the Tevatron and elsewhere, in order for the coveted “unification of coupling constants” to be possible.

So if we marry the hypothesis of SUSY, we need to swallow the existence of a whole set of new bodies AND an uncalled-for mechanism which hid them from view until today. Plus, of course, scores of new parameters: mass values, mixing matrix elements, what-not. Occam’s razor is drooling to come into action. In fact, so many choices are possible for the free parameters of the theory that, in order to be sure of talking about the same model, phenomenologists have conceived some “benchmark scenarios”: choices of parameters that describe “typical” points in the multi-dimensional parameter space.

2 – THE HIGGS SECTOR OF THE MSSM

A very important subclass of these theories (some would frown at my calling it a benchmark: it is more like a space of models) is the so-called “Minimal Supersymmetric extension” of the standard model, also known as MSSM. In the MSSM the Higgs mechanism yields the smallest number of Higgs bosons: five physical particles, as opposed to a single neutral scalar particle in the standard model. Let me introduce them:

  • two neutral, CP-even states: h, H (with m_h < m_H)
  • one neutral, CP-odd state, A
  • two electrically charged states: H^+, H^-.

The CP-parity of the states need not bother you: it is irrelevant for the searches discussed in this post. However, you should take away the fact that there are three neutral scalar bosons to search for, not just one.

Where do these five states come from ? Well, the symmetry structure of SUSY requires two different Higgs doublets: one gives mass to the up-type fermions (the u, c, t quarks), the other to the down-type fermions (the d, s, b quarks and the e, \mu, \tau leptons). Two (2) doublets (x2) of complex (x2) scalar fields make for a total of eight degrees of freedom - eight different real numbers, to be clear; three of these are spent to give masses to the W and Z bosons by the Higgs mechanism, and five physical particles remain.
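Spelled out as arithmetic (with the usual caveat that the three would-be Goldstone bosons become the longitudinal components of the W^+, W^- and Z):

```latex
\underbrace{2}_{\text{doublets}} \times \underbrace{2}_{\text{components}} \times \underbrace{2}_{\text{complex}} = 8
\quad\xrightarrow{\;3\text{ eaten by } W^\pm,\, Z\;}\quad 8 - 3 = 5:
\qquad h,\; H,\; A,\; H^+,\; H^- .
```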

There are a few interesting “benchmarks” in the MSSM. One is called the no mixing scenario, and is the one most frequently used by experimentalists - mainly because it is one of the most accessible by direct searches. There are quite a few others: “Mh max”, “Gluophobic Higgs”, “Small \alpha(eff)”… but we need not discuss them here. What matters is that once the no mixing scenario or any other has been selected, just two additional parameters are necessary to calculate the masses and couplings of the five Higgs bosons: the mass of the A boson, m_A, and tan(\beta), the ratio between the vacuum expectation values of the two Higgs doublets.
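To make the “two parameters determine everything” statement concrete, at tree level the masses of the neutral and charged states follow from m_A and tan(\beta) alone; radiative corrections (mostly from the top-stop sector) then shift m_h upward, which is precisely why the benchmark scenarios matter. The standard tree-level relations are:

```latex
m^2_{h,H} = \tfrac{1}{2}\left[\, m_A^2 + m_Z^2 \mp \sqrt{\left(m_A^2 + m_Z^2\right)^2 - 4\, m_A^2\, m_Z^2 \cos^2 2\beta}\,\right],
\qquad
m^2_{H^\pm} = m_A^2 + m_W^2 .
```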

It turns out that if tan(\beta) is large, the production rate of Higgs bosons can be hundreds of times higher than that predicted in the standard model! Of course, very large values of tan(\beta) have already been excluded by direct searches because of that very feature: if no Higgs bosons have been found this far, then their production rate must be smaller than a certain value, and that translates into an upper bound on tan(\beta). Nonetheless the parameter space - usually plotted as the plane where the abscissa is m_A and the y-axis represents tan(\beta) - is still mostly unexplored experimentally. Below you can see the region excluded by the analysis of Conway et al. in January 2007.

One thing to keep in mind when discussing the phenomenology of these theories is the following: among the three neutral scalars, a pair of them ([h,A] or [H,A]) are usually very close in mass, such that they effectively add together their signals, which are by all means indistinguishable. Therefore, rather than discussing the search for a specific state among h, H, and A, experimentalists prefer to discuss a generic scalar \phi, a placeholder for the two degenerate states.

3 – MSSM HIGGS PRODUCTION AND DECAY

Higgs production in the MSSM is not too special: the diagrams producing a neutral scalar (h, H, or A) are the same. However, due to the highly boosted couplings of two of these three states to down-type fermions (an increase roughly proportional to tan(\beta)), two diagrams contribute the most: gluon-gluon fusion via a b-quark loop (see below, left) or direct b-quark annihilation (right). The b-quark is in fact privileged by being a down-type quark AND having a large mass.

As for the decay of these particles, the same enhancement in the couplings dictates that the most likely decay is to b-quark pairs (about 85 to 90%). The remainder is essentially a 10-15% chance of decay to tau-lepton pairs, which are also down-type fermions and also have a largish mass: 1.777 GeV, to be compared to the roughly 3-4 GeV of b-quarks “photographed” at high Q^2. Decay rates scale with the square of the coupling, and the coupling scales with the mass: that explains the order-of-magnitude difference in decay rates.
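That argument can be checked with three lines of arithmetic: the partial width to a fermion pair scales as N_c m_f^2 (a colour factor N_c = 3 for quarks, 1 for leptons), so with the running b-quark mass quoted above one gets numbers in the right ballpark. Phase space and QCD corrections are ignored in this sketch:

```python
m_b, m_tau = 3.0, 1.777            # GeV: running b mass at high Q^2, tau mass
width_bb = 3 * m_b ** 2            # colour factor 3 for quarks
width_tautau = 1 * m_tau ** 2      # no colour factor for leptons
fraction_bb = width_bb / (width_bb + width_tautau)
print(f"BR(phi -> bb) ~ {fraction_bb:.0%}, BR(phi -> tau tau) ~ {1 - fraction_bb:.0%}")
```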

Because of the impossibility of going on to describe the analysis, I will conclude this incomplete post with a point about the parameter space. [Let’s see this anyway before I describe the result]. There is in fact one subtlety to mention. As tan(\beta) becomes large, the usually narrow Higgs bosons acquire a very large width. The width of a particle is an attribute which defines how close to the nominal mass the actual mass of the state can be. Now, the Higgs boson in the standard model has a width much smaller than 1 GeV, which is totally irrelevant when compared with the experimental precision of the mass reconstruction. The same cannot be said for MSSM Higgs bosons if tan(\beta) is large: it is in fact the large coupling to down-type fermions that causes the large indeterminacy of the mass. As tan(\beta) grows larger than about 60, the coupling actually becomes non-calculable by perturbation theory, the width becomes really large and rather undetermined (10 GeV and above), and the Higgs resonances lose their most significant attribute, i.e. a well-defined mass.

The effect discussed above has two consequences: one is that the region of parameter space corresponding to too-large values of tan(\beta) is not well-defined theoretically. The other is that if one were to perform the search carefully in that region, one would need to consider the effect of the large width on the mass templates used to search for the Higgs bosons. Given a mass of the A particle, a different mass template would then be needed for each value of tan(\beta), making the analysis quite a bit more complex. Physicists like to approximate, and mostly they get away with it when the neglected effects are small, but in the case of large tan(\beta) the approximation fails and a precise computation is not possible.

The bottom line is: a grain of salt is really needed when interpreting the results of an MSSM Higgs search.

That said, I think it is time for a rapid description of the analysis.

4 – THE EXPERIMENTAL SEARCH BY CDF

CDF and D0 have been searching for MSSM neutral Higgs bosons for a while now. I reported on the latest CDF result, obtained from the analysis of events with three b-quark jets, [just a month ago] last fall. Now it is time for the brand new CDF search for the decay H \to \tau \tau, which in January 2007 made headlines due to the excess it was showing.

The analysis was not modified appreciably from its former incarnation. However, I took the time to read the internal analysis note describing in detail the studies which the authors performed in order to understand the data and tune the selection cuts, and I must say I was really impressed by the amount and quality of the work. I have to tip my hat to John Conway - the person who has been after this signature of Higgs decay for more than a decade - and to Anton Anastassov, also a tau identification expert and a renowned Higgs hunter. The other authors are Cristobal Cuenca, Dongwook Jang, and Amit Lath.

I mentioned that the analysis stayed the same during 2007: indeed, that was a very good idea, in order to avoid a potential signal being washed away by a modified analysis - although I feel urged to say that a genuine signal cannot hide forever and is bound to creep out of the data at some point!

A total of 1.8 inverse femtobarns of data were analyzed. These correspond to a hundred thousand billion collisions, as I have grown tired of mentioning: among these, an online trigger system selected those containing a likely signal of an electron or muon. Tau leptons do in fact decay to these lighter leptons about a third of the time: \tau \to e \nu_e \nu_\tau, \tau \to \mu \nu_\mu \nu_\tau. Yes, you read it correctly: they yield an electron or a muon accompanied by two neutrinos. The latter provide no chance of detection.

And what about the remaining two thirds ? These are “hadronic” decays: tau leptons yield a narrow jet of light hadrons in the remaining cases. These jets contain few charged particles (typically one or three) and leave a signal in the calorimeter which refined algorithms can distinguish - but only on a statistical basis - from jets originating from fragmenting quarks and gluons.

If one is looking for two tau leptons from the decay of a \phi neutral scalar boson, one has to decide which kind of tau decays to look for. Electrons and muons provide a clean signature but are rather infrequent, while hadronic decays are more frequent but also background-ridden. The analysis considers three final states that are a good compromise between rate and background contamination:

  1. \tau_e \tau_\mu
  2. \tau_e \tau_h
  3. \tau_\mu \tau_h

where subscripts indicate the decay to electrons, muons, or hadrons. Double decay to electrons and double decay to muons are neglected because of the large background from Drell-Yan production of lepton pairs, and double decay to hadrons is not considered because of its overwhelming background.
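Using approximate tau branching fractions, one can estimate how often a tau pair falls into each category, including the discarded ones. A small sketch (the fractions below are rounded values, roughly 18% per leptonic mode and 65% for hadronic decays, which I assume from memory):

```python
branching = {"e": 0.18, "mu": 0.17, "h": 0.65}   # approximate tau branching fractions

def pair_fraction(a, b):
    """Fraction of tau pairs ending up in the (a, b) final state."""
    return branching[a] * branching[b] if a == b else 2 * branching[a] * branching[b]

for channel in [("e", "mu"), ("e", "h"), ("mu", "h"),    # used in the analysis
                ("e", "e"), ("mu", "mu"), ("h", "h")]:   # discarded channels
    print(channel, f"{pair_fraction(*channel):.1%}")
```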

One thing that should be clear is that all three final states contain at least two tau neutrinos: \nu_\tau are in fact the final product of the decay chain in all cases; the semi-leptonic final states 2. and 3. also contain an additional \nu_e or \nu_\mu, respectively; and the dileptonic final state 1. yields a total of four neutrinos. Neutrinos make the event reconstruction tough, because they escape carrying away energy and momentum, and making a direct reconstruction of the invariant mass of the body producing the two tau leptons impossible.

Despite that problem, a partial reconstruction of the mass of the tentative Higgs boson producing the taus is possible by using only the visible energy - that of electrons, muons, and jets - plus the so-called “missing transverse energy”, a vector equal to the imbalance in transverse momentum obtained by vectorially adding together all visible transverse momenta. The reason for considering only the transverse component is that in hadron collisions the longitudinal motion of the initial state - i.e., the speed of the center of mass of the collision along the beam direction - is not known: it is not a proton and an antiproton of 980 GeV each that collide, but rather a quark and an antiquark, each of unknown energy.
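To be explicit about what gets reconstructed: the “visible mass” used below is the invariant mass of the two visible tau decay products together with the missing transverse energy vector, the latter treated as a massless object lying in the transverse plane. A minimal sketch with invented four-vectors (this is the general recipe, not the actual CDF code):

```python
import math

def four_vector(pt, eta, phi, mass=0.0):
    """Build (E, px, py, pz) from collider kinematic variables."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    energy = math.sqrt(px ** 2 + py ** 2 + pz ** 2 + mass ** 2)
    return [energy, px, py, pz]

def invariant_mass(vectors):
    e, px, py, pz = (sum(v[i] for v in vectors) for i in range(4))
    return math.sqrt(max(e ** 2 - px ** 2 - py ** 2 - pz ** 2, 0.0))

# Invented event: an electron, a hadronic tau jet, and the missing-ET vector
# (eta = 0 for the latter, i.e. it is taken as purely transverse and massless).
electron = four_vector(pt=35.0, eta=0.4, phi=1.2)
tau_jet = four_vector(pt=42.0, eta=-0.7, phi=-1.8)
missing_et = four_vector(pt=25.0, eta=0.0, phi=-2.4)

print(f"visible mass = {invariant_mass([electron, tau_jet, missing_et]):.1f} GeV")
```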

The figure below shows that a discrimination between Higgs boson signals of different masses is indeed possible: a Z \to \tau \tau decay yields a signal which, once reconstructed, looks like the empty histogram, while higgs bosons of 115 and 200 GeV give the distributions pictured in blue and magenta, respectively.

A selection of tau candidates in the three detection categories is performed by requiring electrons and muons to be well identified and isolated, while jets have to be narrow, contain only one or three charged tracks, and be flagged by a fine-tuned tau-identification algorithm. Then the kinematics of the event is also required to be Higgs-like, by exploiting the angular configuration of the missing transverse energy and the tau decay candidates.

After the selection, the invariant mass distributions of candidates in the three search categories are finally interpreted as a sum of the different background processes contributing to the selected data. A refined likelihood fitting technique, incorporating systematic uncertainties as nuisance parameters and a template-morphing method to account for the jet energy scale uncertainty, provides a quite accurate determination of the amount of signal allowed by the data, as a function of the mass of the CP-odd state m_A. Below are shown the fits in the category \tau_X \tau_h (top, where X = e, \mu) and \tau_e \tau_\mu (bottom). The abscissa is the reconstructed visible mass of the tau pair, the black points are experimental data, and the various histograms, stacked one on top of the other, show the expected amount of QCD background from fake tau signals (red), electroweak and top pair backgrounds (blue), Drell-Yan dilepton production including Z decays (white), and in yellow the amount of \phi signal that the data exclude at 95% confidence level.
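For readers unfamiliar with the jargon, “incorporating systematic uncertainties as nuisance parameters” means the fit lets, say, the background normalization float within its prior uncertainty while the signal yield is extracted. A bare-bones binned Poisson likelihood of that kind is sketched below, with invented templates and pseudo-data and no template morphing; the real analysis is of course far more sophisticated:

```python
import numpy as np
from scipy.optimize import minimize

# Invented visible-mass templates (events per bin) and pseudo-data.
background = np.array([30.0, 42.0, 35.0, 20.0, 10.0, 4.0])
signal = np.array([0.5, 2.0, 6.0, 8.0, 5.0, 1.5])      # shape for one assumed m_A
data = np.array([33, 45, 38, 22, 9, 5])
bkg_uncertainty = 0.10                                   # 10% normalization prior

def negative_log_likelihood(params):
    mu, theta = params            # mu: signal strength, theta: background nuisance
    expected = mu * signal + (1.0 + theta) * background
    expected = np.clip(expected, 1e-9, None)
    nll = np.sum(expected - data * np.log(expected))     # Poisson terms (constants dropped)
    nll += 0.5 * (theta / bkg_uncertainty) ** 2          # Gaussian constraint on theta
    return nll

fit = minimize(negative_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
print("fitted signal strength:", round(fit.x[0], 2),
      " background shift:", round(fit.x[1], 3))
```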

As is clear by looking at the plots, the data follow quite well the sum of backgrounds in both datasets, and no signal is apparent. And in fact, the cross section limit (red curve in the graph below) follows very closely the expectation calculated from pseudoexperiments (hatched curve, with 1- and 2-sigma blue and grey bands overlaid):

From the cross section limit, an exclusion limit in the plane of the A mass and tan(\beta) can be determined. This is much more stringent than the one John and company obtained in January 2007, and it provides a significant advance in our investigation of supersymmetric models.
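The translation from a cross-section limit to a tan(\beta) limit rests on the fact that, at large tan(\beta), the production rate times the di-tau branching fraction grows roughly like tan^2(\beta). Very schematically, and ignoring the width effects discussed above (the numbers below are invented, chosen only to show the scaling):

```python
import math

def excluded_tan_beta(sigma_excluded, sigma_at_tan_beta_one):
    """Rough excluded tan(beta), assuming the rate scales as tan(beta)^2."""
    return math.sqrt(sigma_excluded / sigma_at_tan_beta_one)

# If a tan(beta)=1 signal would amount to 0.002 pb and the data exclude anything
# above 5 pb, then values of tan(beta) above ~50 are excluded at that mass.
print(round(excluded_tan_beta(sigma_excluded=5.0, sigma_at_tan_beta_one=0.002), 1))
```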

So, a question remains to be answered at the end of this longish post: how should we interpret the 2.1 \sigma excess that Conway et al. found in their former analysis ? Of course, as a statistical fluctuation! One that happens roughly once in a hundred cases. Discovering a supersymmetric Higgs, on the other hand, is something that can only happen once in a lifetime. Not in this lifetime, if you ask me.

Did we scr** up all the Higgs branching ratios ? May 27, 2008

Posted by dorigo in news, physics, science.

An interesting paper (arXiv:0804.1753 [hep-ph]) appeared on the arxiv last month, but I was late in noticing it. Have you ? It is titled “Higgs-dependent Yukawa couplings“, authors G.F. Giudice -formerly Padova University- and O. Lebedev. It seems that as we get closer to the time when the Large Hadron Collider (LHC) will be turned on at CERN, phenomenological papers based on LHC signatures pop up like mushrooms after a September shower.

The idea of the study is not easy to explain to outsiders, but I want to keep this post simple enough -I have drifted a bit too much toward technicalities because of the PPC 2008 conference last week- so I will not go into the details. Besides, the paper is very clear, so whoever wants more information is encouraged to click the link above.

Basically the authors imagine that the pattern of masses we observe for quarks and leptons -which range from sub-electron volt for neutrinos (their masses are in fact not yet known precisely, but they are surely less than that tiny value), to 172.6 billion electron volts for the top quark- may result from a simple formula valid at energies below a fundamental mass scale M, not yet tested by present accelerators but of the order of a trillion electron-volts, and thus accessible at the Large Hadron Collider (the highest energy proton collisions provided by the LHC will equal a dozen trillion electron-volts).

Masses of fermions in the Standard Model Lagrangian -the function which enshrines the dynamics of a physical system- depend on so-called Yukawa couplings. In Giudice’s and Lebedev’s scheme these couplings are simple formulas: powers of the Higgs boson’s vacuum expectation value divided by a suitable mass, <H>/M. This is an effective theory -one that is only valid below a certain energy, pretty much like Newtonian mechanics, which is valid for speeds much smaller than that of light- and indeed the formulation is not based on first principles. However, it has the merit of providing a reason for the observed pattern of masses and the weak interaction phenomenology, which in the Standard Model depend on further sets of unexplained numbers (describing the flavor matrix).
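A cartoon of the idea: if each Yukawa coupling is an order-one coefficient times a power of the small ratio <H>/M, then a modest spread in the integer exponents produces the huge hierarchy observed in the fermion masses. The sketch below is purely illustrative; the value of the ratio and the exponents are made up by me, not taken from the paper:

```python
epsilon = 0.1        # invented value of <H>/M, just to show the mechanism
vev = 174.0          # GeV, Higgs vacuum expectation value in one common convention

# Invented integer exponents per fermion: the mass is roughly vev * epsilon**n,
# so small steps in n span many orders of magnitude.
exponents = {"top": 0, "bottom": 2, "tau": 2, "charm": 2, "muon": 3, "electron": 5}

for fermion, n in exponents.items():
    print(f"{fermion:9s} m ~ {vev * epsilon ** n:10.4f} GeV")
```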

The nice thing is that we get very clear and testable predictions. The introduction of a reachable mass scale connected to the weak flavor matrix has direct consequences on the existence of processes whereby quarks change flavor without changing electric charge: flavor-changing neutral currents, known as FCNC for insiders. But, hear this: the most interesting, intriguing consequence is that these effective couplings modify the way the Higgs boson couples to fermions, resulting in a completely different pattern of fermion-fermion decays of the Higgs. The Higgs in this model decays more often to fermion pairs than it does in the Standard Model, and it is also produced in top quark decays, such as t \to Hc!

The only Yukawa coupling which is unaffected by the scheme described is that of the top quark, which is equal to one to very good approximation (even a suspiciously good one, as Alejandro Rivero, Chris Quigg, and others have noticed in the past) in the Standard Model: therefore, the branching fraction of Higgs bosons to top quark pairs is unaffected. So are the branching fractions of H \to WW and H \to ZZ. However, the boosts in the lighter fermion couplings mean a lot in some cases: the Higgs may decay 25 times more frequently to muon pairs, making itself observable at low mass (with enough statistics) despite the depletion in the H \to \gamma \gamma final state!
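The factor of 25 for muons has a simple origin, if I recall the paper correctly: when a Yukawa term carries n extra powers of H^\dagger H / M^2, expanding around the vacuum gives the physical Higgs a coupling to that fermion (2n+1) times larger than the Standard Model value, while the fermion mass itself is unchanged. For the muon the relevant exponent is n = 2, whence:

```latex
g_{h\mu\mu} = (2n+1)\,\frac{m_\mu}{v} \;\xrightarrow{\;n=2\;}\; 5\,\frac{m_\mu}{v},
\qquad
\Gamma(h \to \mu^+\mu^-) \propto g_{h\mu\mu}^2 \;\Rightarrow\; 5^2 = 25 \;\text{times the Standard Model rate}.
```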

Below is a picture worth a thousand words: it shows how the complex and fascinating phenomenology of Higgs decay varies as a function of its mass (on the x axis) in the Standard Model (hatched lines) and in the Giudice-Lebedev model (full lines). As you see, the gamma-gamma final state is reduced by almost an order of magnitude (since it does not receive any boost, it has to shrink to make room for the enhanced fermion decays), and the b \bar b one remains dominant up to 150 GeV: in this picture, the LHC may need to resort to dimuon final states to see a light Higgs boson!!

I must give it to Giudice and Lebedev: an extremely interesting, fresh new idea, with clear and easily testable predictions. Way to go, in times when the theoretical panorama is dominated by strings, supersymmetry, and large extra dimensions – all things that either give no predictions or have so many degrees of freedom they can accommodate almost anything new we see…