
## 3 megatons strike in central Italy (April 6, 2009)

Posted by dorigo in news, science.

A destructive earthquake struck last night in central Italy, at 3:32 AM, in a mountainous region of the Apennines close to the city of L’Aquila. The magnitude of the earthquake has been estimated at 6.3 on the Richter scale, corresponding to a release of energy of about 3 megatons of TNT (not 16 as I previously reported, a figure which corresponds to 6.7 on the Richter scale).
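For readers who want to check the scaling between magnitude and released energy, here is a small sketch using the standard Gutenberg-Richter energy relation. This is my own illustration of the scaling, not the author's calculation:

```python
def energy_ratio(m_high, m_low):
    """Ratio of radiated seismic energies for two magnitudes, from the
    Gutenberg-Richter scaling log10(E) = 1.5*M + const (the constant
    cancels in the ratio)."""
    return 10 ** (1.5 * (m_high - m_low))

# Revising the magnitude from 6.7 down to 6.3 reduces the released
# energy by a factor of about 4, consistent with scaling the initial
# 16-megaton estimate down to a few megatons.
print(round(energy_ratio(6.7, 6.3), 2))  # -> 3.98
```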

Many small towns close to the epicenter report more than half of their houses razed to the ground. The biggest worries come from L’Aquila, which has about 70,000 inhabitants; but many smaller towns scattered around the mountainous region of the Abruzzi have certainly suffered major damage. There are reports of dozens of bodies already extracted from the rubble. I will post updates here as soon as I gather more information.

UPDATE: while dead bodies continue to be drawn out of collapsed buildings, a disturbing detail emerges. It transpires that a researcher at the Gran Sasso national laboratories had predicted the event, and had warned that a disastrous seismic event would occur. Giampaolo Giuliani had recorded a large release of radon gas from the ground on March 29th, and had concluded that an earthquake would probably take place in a matter of hours. Giuliani had predicted the event would happen a week before it actually did, and on March 31st the head of civil protection, Guido Bertolaso, had bitterly criticized the prediction and “quegli imbecilli che si divertono a diffondere notizie false” (those imbeciles who enjoy spreading false news). Giuliani is facing charges of causing a false alarm, but he was right after all.

UPDATE: here are a few excerpts from an interview given by Giampaolo Giuliani this morning:

“Se commento adesso c’e’ il rischio che a me domani mi mettono in galera. Allora, non e’ vero, e’ falso, che i terremoti non possono essere previsti. Sono quasi dieci anni che noi riusciamo a prevedere eventi nel raggio d’azione di 120-150 chilometri dai nostri rivelatori.”
“Sono tre giorni che vedevamo un forte incremento di Radon. I forti incrementi di Radon, al di fuori delle soglie di sicurezza, significano forti terremoti.”
“Anche la tecnologia classica avrebbe potuto prevederlo. Se qualcuno fosse stato a lavorare, ai posti dovuti, o se qualcuno si fosse preoccupato.”
“Questa notte anche la sala sismica si sarebbe potuto accorgere che sarebbe avvenuta una forte scossa. Il mio sismografo indicava una forte scossa di terremoto e ce l’avevamo online, tutti potevano osservarlo, e tanti lo hanno osservato e si sono resi conto che le scosse crescevano.”

(“If I comment now there is the risk that tomorrow I get imprisoned. Now: it is not true, it is false, that earthquakes cannot be predicted. We have been able to predict events for almost ten years in a range of 120-150 kilometers from our detectors.”

“In the last three days we saw a large increase of Radon. Large increases of Radon, above safety thresholds, mean strong earthquakes.”

“Even classic technology could have predicted it. If somebody had been at work, at the proper posts, or if somebody had been concerned.”

“Last night even the seismic monitoring room could have realized that a strong shock was about to happen. My seismograph indicated a strong earthquake and we had it online: everybody could watch it, and many did, and realized that the tremors were growing.”)

UPDATE: Michelangelo Ambrosio, a director of research of the INFN (national institute for nuclear physics) section in Napoli, thus defends the claims of Giuliani:

“trascurare con superficialita’ le applicazioni di nuove tecnologie solo perche’ proposte da ricercatori non appartenenti allo establishment preposto a tale funzione e’ una negligenza criminale di cui oggi paghiamo le conseguenze.”

(“Superficially disregarding the application of new technologies, only because they are proposed by researchers who do not belong to the establishment appointed to that function, is a criminal negligence whose consequences we are all paying today.”)

## Just a link (April 5, 2009)

Posted by dorigo in Blogroll, news, physics, science.

I read with amusement (and some effort) a Spanish account by Francis (th)E mule of Michael Dittmar’s controversial seminar of last March 19th. I paste the link here for several reasons: because I believe it might be of interest to some of you, to have a place to store it, and because I am not insensitive to flattery:

“Entre el público se encontraba Tomasso Dorigo […] (r)esponsable del mejor blog sobre física de partículas elementales del mundo”

(“Among the audience was Tomasso Dorigo […], (r)esponsible for the best blog on elementary particle physics in the world”)

Muchas gracias, Francis, but please note: my name is spelled with two m’s and one s!

## NeuTel 09: Oscar Blanch Bigas, update on Auger limits on the diffuse flux of neutrinos (April 3, 2009)

Posted by dorigo in astronomy, cosmology, news, physics, science.

With this post I continue the series of short reports on the talks I heard at the Neutrino Telescopes 2009 conference, held three weeks ago in Venice.

The Pierre Auger Observatory is a huge (3000 km^2) hybrid detector of ultra-energetic cosmic rays, i.e. those with an energy above 10^18 eV. The detector is located near Malargüe, Argentina, at 1400 m above sea level.

There are four six-eyed fluorescence detectors: when the shower of particles created by a very energetic primary cosmic ray develops in the atmosphere, it excites nitrogen molecules, which emit fluorescence light collected by the telescopes. This is a calorimetric measurement of the shower, since the number of particles in the shower measures the energy of the incident primary particle.

The main limitation of the fluorescence detection method is statistics: fluorescence detectors can only observe on moonless nights, which amounts to a duty cycle of about 10%. They are therefore complemented by a surface detector, which has a 100% duty cycle.

The surface detector is composed of water Cherenkov tanks on the ground, which detect the Cherenkov light with photomultiplier tubes. The signal is sampled as a function of the distance from the shower core. The energy measurement depends on a Monte Carlo simulation, so the method carries some systematic uncertainties.

The array includes 1600 surface detectors (red points in the map above), overlooked by four fluorescence detectors (shown by green lines). These study the highest-energy cosmic rays: their spectra, their arrival directions, and their composition. The detector also has some sensitivity to ultra-high-energy neutrinos. A standard cosmic ray interacts at the top of the atmosphere and yields an extensive air shower whose electromagnetic component develops down to the ground; but if the arrival direction of the primary is tilted with respect to the vertical, the e.m. component is absorbed before reaching the ground, so the shower contains only muons. Neutrinos, which can penetrate deep into the atmosphere before interacting, instead produce a shower with a significant e.m. component regardless of the angle of incidence.

The “footprint” is the pattern of triggered detectors on the ground, and it encodes information on the angle of incidence. For tilted showers, the presence of an e.m. component is a strong indication of a neutrino-induced shower: an elongated footprint and a wide time structure of the signal are seen in that case.

There is a second method to detect neutrinos, based on so-called “Earth-skimming” neutrinos: a neutrino interacts in the Earth, producing a charged lepton via a charged-current interaction, and the lepton produces a shower that can be detected above the ground. This channel has a better sensitivity than that of neutrinos interacting in the atmosphere. It can be used for tau neutrinos, thanks to the early decay of the tau in the atmosphere. The distance of interaction is 500 km for a muon neutrino, 50 km for a tau neutrino, and 10 km for an electron neutrino; these figures apply to 1 EeV primaries. If you are unfamiliar with these ultra-high energies, 1 EeV = 1000 PeV = 1,000,000 TeV: this is roughly equivalent to the energy drawn in a second by a handful of LEDs.
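The LED comparison above can be checked with a one-line unit conversion; this is just a sketch of the arithmetic, with an assumed LED power draw of a few tens of milliwatts:

```python
EV_TO_JOULE = 1.602176634e-19  # CODATA value of the electron volt in joules

e_eev_joules = 1e18 * EV_TO_JOULE  # 1 EeV expressed in joules
print(round(e_eev_joules, 3))  # -> 0.16

# A small indicator LED drawing ~50 mW uses 0.05 J per second,
# so three or four of them consume about 1 EeV every second.
```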

Showers induced by emerging tau leptons start close to the detector and are very inclined. So one requires an elongated footprint and, using the available timing information, a shower front moving at the speed of light. The background to such a signature is of the order of one event every 10 years. The most important drawback of the Earth-skimming channel is the large systematic uncertainty associated with the measurement.

Ideally, one would like to produce a neutrino spectrum, or an energy-dependent limit on the flux, but no energy reconstruction is available: the observed energy depends on the height at which the shower develops, and since this is not known for penetrating particles such as neutrinos, one can only quote an integrated flux limit. The limit lies in the energy range where GZK neutrinos should peak, but its value is an order of magnitude above their expected flux. A differential limit in energy would be much less stringent.

The figure below shows the result for the integrated flux of neutrinos obtained by the Pierre Auger Observatory in 2008 (red line), compared with other limits and with expectations for GZK neutrinos.

## Neutrino telescopes 2009: Steve King, Neutrino Mass Models (April 2, 2009)

Posted by dorigo in news, physics, science.

This post and the few that will follow are for experts only, and I apologize in advance to those of you who do not have a background in particle physics: I will resume more down-to-earth discussions of physics very soon. Below I offer a short writeup of Steve King’s talk, which I listened to during day three of the “Neutrino Telescopes” conference in Venice, three weeks ago. Any mistake in these writeups is totally my own fault. The slides of all talks, including the one reported here, are available at the conference site.

Most of the talk focused on a decision tree for neutrino mass models: a sort of flow diagram to decide (or better, decode) the nature of neutrinos and their role in particle physics.

In the Standard Model there are no right-handed neutrinos, there are only Higgs doublets of the symmetry group $SU(2)_L$, and the theory contains only renormalizable terms. If all of the above hypotheses apply, then neutrinos are massless, and three separate lepton numbers are conserved. To generate neutrino masses, one must relax one of the three conditions.

The decision tree starts with the question: is the LSND result true or false? If it is true, are neutrinos sterile or CPT-violating? If instead the LSND result is false, one must decide whether neutrinos are Dirac or Majorana particles. If they are Dirac particles, they point to extra dimensions; if they are Majorana particles, this leads to several consequences, tri-bimaximal mixing among them.

So, to start at the beginning: is LSND true or false? MiniBooNE does not support the LSND result, but it does support three-neutrino mixing. LSND is assumed false in this talk. One then has to answer the question: are neutrinos Dirac or Majorana? Depending on the answer, one can write down mass terms of different kinds in the Lagrangian. Majorana masses violate total lepton number as well as the three individual lepton numbers. Dirac masses couple left-handed neutrinos to right-handed neutrinos; in this case the neutrino is not equal to the antineutrino.

The first possibility is that neutrinos are Dirac particles. This raises an interesting question: their Yukawa couplings must be very small. The Higgs vacuum expectation value is about 175 GeV, and the Yukawa coupling is 3E-6 for the electron: already quite small. If we do the same with neutrinos, the Yukawa coupling must be of the order of 10^-12 for an electron neutrino mass of 0.2 eV. This raises the question of why it is so small.
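The two couplings quoted above follow directly from $y = m/v$; here is a quick sketch of the arithmetic, using the 175 GeV value of the vacuum expectation value quoted in the text:

```python
V = 175.0  # Higgs vacuum expectation value in GeV, as quoted above

def yukawa(mass_gev):
    """Yukawa coupling for a fermion of the given mass, y = m / v."""
    return mass_gev / V

print(f"{yukawa(0.511e-3):.1e}")  # electron, m = 0.511 MeV -> ~3e-6
print(f"{yukawa(0.2e-9):.1e}")    # neutrino, m = 0.2 eV    -> ~1e-12
```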

One possible answer is provided by theories with extra dimensions. First, one may consider flat extra dimensions, with right-handed neutrinos in the bulk (see graph on the right). These particles live in the bulk, whereas we are trapped on a brane. When we write a Yukawa term for neutrinos we get a volume suppression, corresponding to the spread of the wavefunction outside of our world. It goes as one over the square root of the volume, so if the string scale is smaller than the Planck scale ($10^7/10^{19} = 10^{-12}$) we get the right scale.

The other sort of extra dimensions (see below) is the warped kind, with the Standard Model sitting in the bulk. The wavefunction of the Higgs overlaps with the fermions, and this gives exponentially suppressed Dirac masses, depending on the fermion profiles. Because electrons and muons peak near the Planck brane while we live on the TeV brane, where the top quark peaks, this provides a natural way of generating a hierarchy of particle masses.

Some of these models address the problem of dark energy in the Universe. Neutrino telescopes studying neutrinos from gamma-ray bursts may shed light on this issue, along with quantum gravity and neutrino mass: the time delay of neutrinos relative to low-energy photons can be studied as a function of redshift and of the neutrino energy. The resulting curves differ depending on the model of dark energy. The point is that by studying neutrinos from gamma-ray bursts, one has a handle to measure dark energy.

Now let us go back to the second possibility: namely, that neutrinos are Majorana particles. In this case there are two choices: a renormalizable operator with a Higgs triplet, or a non-renormalizable operator with a lepton-number-violating term, $\Delta L = 2$. Because the operator is non-renormalizable, you get a mass suppression: a mass in the denominator, corresponding to some high energy scale. The way to implement this is to imagine that the mass scale is due to the exchange of a massive particle between Higgs and leptons, in the s-channel or in the t-channel.
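The non-renormalizable $\Delta L = 2$ term mentioned above is, in standard notation (my addition, not written out explicitly in the talk summary), the dimension-five Weinberg operator, whose suppression by the heavy scale $\Lambda$ is exactly the "mass in the denominator" described in the text:

```latex
\mathcal{L}_5 = \frac{\lambda}{\Lambda}\,(L \tilde{H})(L \tilde{H}) + \mathrm{h.c.}
\quad \Longrightarrow \quad
m_\nu \sim \frac{\lambda v^2}{\Lambda}
```

With $v \approx 175$ GeV and $\lambda \sim 1$, a scale $\Lambda \sim 10^{14}$ GeV gives $m_\nu$ in the sub-eV range.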

Let us concentrate on see-saw mechanisms for the rest of the talk. There are several types of such models: type I essentially exchanges a heavy right-handed neutrino in the s-channel between the Higgs and the leptons; in type II one instead exchanges something in the t-channel, such as a heavy Higgs triplet, which would also give a suppressed mass.
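The type-I suppression can be made concrete with the usual see-saw estimate $m_\nu \sim m_D^2 / M_R$; the numbers below are common illustrative textbook choices, not values from the talk:

```python
# Illustrative type-I see-saw estimate, m_nu ~ m_D**2 / M_R.
m_dirac = 175.0  # GeV: a Dirac mass of electroweak (top-quark-like) size
m_right = 3e14   # GeV: an assumed heavy right-handed neutrino mass scale

m_nu_gev = m_dirac ** 2 / m_right
m_nu_ev = m_nu_gev * 1e9  # convert GeV to eV

print(f"{m_nu_ev:.2f} eV")  # -> 0.10 eV, the right ballpark for light neutrinos
```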

The two see-saw types can work together. One may think of a unit matrix coming from the type-II see-saw, with the mass splittings and mixings coming from the type-I contribution. In this case the type-II contribution would render neutrinoless double beta decay observable.

Moving down the decision tree, we come to the question of whether we have precise tri-bimaximal mixing (TBM). The TBM matrix (see figure on the right) corresponds, in the standard parametrization, to the angles $\theta_{12}=35^\circ$, $\theta_{23}=45^\circ$, $\theta_{13}=0$. These values are consistent with observations so far.

Let us consider the form of the neutrino mass matrix assuming the correctness of the TBM matrix. We can derive what the mass matrix is by multiplying it by the mixing matrix. It has three terms, one proportional to mass $m_1$, one to $m_2$, and one to $m_3$. These matrices can be decomposed into column vectors: the columns of the TBM matrix. When you add the matrices together, you get the total matrix, symmetric, with the six terms populating the three rows ($a\,b\,c$, $b\,d\,e$, $c\,e\,f$) satisfying some relations: $c=b$, $e=a+b-d$, $d=f$.
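The relations among the matrix entries can be verified numerically. The sketch below builds $m = U\,\mathrm{diag}(m_1,m_2,m_3)\,U^T$ with one common sign convention for the TBM matrix (the talk's figure is not reproduced here, so both the convention and the mass values are my illustrative choices):

```python
from math import sqrt

# Tri-bimaximal mixing matrix, one common sign convention.
U = [[sqrt(2.0 / 3.0), 1 / sqrt(3.0),  0.0],
     [-1 / sqrt(6.0),  1 / sqrt(3.0),  1 / sqrt(2.0)],
     [-1 / sqrt(6.0),  1 / sqrt(3.0), -1 / sqrt(2.0)]]

masses = [0.001, 0.009, 0.05]  # illustrative m1, m2, m3 in eV

# Effective mass matrix m_ij = sum_k U_ik U_jk m_k
M = [[sum(U[i][k] * U[j][k] * masses[k] for k in range(3))
      for j in range(3)] for i in range(3)]

a, b, c = M[0]
d, e, f = M[1][1], M[1][2], M[2][2]

# The three relations quoted in the text all hold to machine precision.
print(abs(c - b), abs(f - d), abs(e - (a + b - d)))
```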

Such a mass matrix is called “form-diagonalizable”, since it is diagonalized by the TBM matrix for all values of $a$, $b$, $d$, which translate into the masses. There is no cancelation of parameters involved, and the whole thing is extremely elegant. This suggests something called “form dominance”: a mechanism to achieve a form-diagonalizable effective neutrino mass matrix from the type-I see-saw. Working in the basis where $M_{RR}$ is diagonal, if $M_D$ is the Dirac mass matrix, it can be written as three column vectors, and the effective light-neutrino mass matrix is the sum of three terms. Form dominance is the assumption that the columns of the Dirac matrix are proportional to the columns of the TBM matrix (see slide 16 of the talk). Then one can generate the TBM mass matrix. In this case the physical neutrino masses are given by a combination of parameters. This constitutes a very nice way to get a form-diagonalizable mass matrix from the see-saw mechanism.

Moving on to symmetries: clearly, the TBM matrix suggests some family symmetry. This symmetry is badly broken in the charged-lepton sector, so one can write down the Lagrangian explicitly: the neutrino Majorana matrix respects mu-tau interchange symmetry, whereas the charged-lepton matrix does not. So this is an example of a symmetry working in one sector but not in the other. To achieve different symmetries in the neutrino and charged-lepton sectors, we need to align the Higgs fields which break the family symmetry (called flavons) along different symmetry-preserving directions (so-called vacuum alignment). We need a triplet of flavons which breaks the A4 symmetry.

A4 see-saw models satisfy form dominance. There are two such models, both with R=1. These models are economical, involving only two flavons; yet one must assume cancelations among the vacuum expectation values in order to achieve consistency with experimental measurements of atmospheric and solar mixing. This suggests “natural form dominance”, less economical but involving no cancelations, in which a different flavon is associated with each neutrino mass. An extension is “constrained sequential dominance”, a special case which yields strongly hierarchical neutrino masses.

As far as family symmetry is concerned, the idea is that there are two family groups coming from $SU(3)$. One gets certain relations which are quite interesting: the CKM mixing is related to the Yukawa matrix, and one can make a connection between the down-quark Yukawa matrix and the electron Yukawa matrix. This leads to mixing sum rules, because the PMNS matrix is the product of a Cabibbo-like matrix and a TBM matrix: the mixing angles carry information on the corrections to TBM. The sum rule one gets expresses the deviation of $\theta_{12}$ from 35 degrees due to a Cabibbo-like angle coming from the charged-lepton sector. Putting the two things together, one gets a physical relation between these angles: $\theta_{12} = 35^\circ + \theta_{13} \cos \delta$.
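The sum rule is easy to evaluate numerically. The values of $\theta_{13}$ and $\delta$ below are purely illustrative (neither had been measured at the time of the talk):

```python
from math import cos, radians

def theta12_sum_rule(theta13_deg, delta_deg):
    """Mixing sum rule from the talk: theta_12 = 35 deg + theta_13 * cos(delta)."""
    return 35.0 + theta13_deg * cos(radians(delta_deg))

# Illustrative inputs only: a 5-degree theta_13 with delta = 180 degrees
# would pull theta_12 down to 30 degrees.
print(theta12_sum_rule(5.0, 180.0))  # -> 30.0
print(theta12_sum_rule(0.0, 0.0))    # -> 35.0 (pure TBM limit)
```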

The conclusions are that neutrino masses and mixing require new physics beyond the Standard Model. There are many roads for model building, but the answers to key experimental questions will provide the signposts. If TBM is accurately realized, this may imply a new symmetry of nature: a family symmetry, broken by flavons. The whole package is a very attractive scheme; the sum rules underline the importance of showing that the deviations from TBM are non-zero. Neutrino telescopes may provide a window into neutrino mass, quantum gravity and dark energy.

After the talk, there were a few questions from the audience.

Q: Although it is true that MiniBooNE is not consistent with LSND in a simple two-neutrino mixing model, in more complex models the two experiments may be consistent. King agrees.

Q: The form dominance scenario in some sense would not apply to the quark sector; it seems it is independent of A4. King’s answer: form dominance is a general framework for achieving form-diagonalizable matrices starting from the see-saw mechanism. It includes the A4 model as an example, but is not restricted to it. There is a large class of models in this framework.

Q: So it is not specific enough to extend to the quark sector? King: form dominance is all about the see-saw mechanism.

Q: So, can we not extend this to symmetries like T′, which involve the quarks? King: the answer is yes. For lack of time this was only flashed in the talk; it would deserve a talk by itself.

## Variation found in a dimensionless constant! (April 1, 2009)

Posted by dorigo in cosmology, mathematics, news, physics, science.

I urge you to read this preprint by R. Scherrer (from Vanderbilt University), which appeared yesterday on the arXiv. It is titled “Time variation of a fundamental dimensionless constant”, and I believe it might have profound implications for our understanding of cosmology, as well as for theoretical physics. I quote the incipit of the paper below:

“Physicists have long speculated that the fundamental constants might not, in fact, be constant, but instead might vary with time. Dirac was the first to suggest this possibility [1], and time variation of the fundamental constants has been investigated numerous times since then. Among the various possibilities, the fine structure constant and the gravitational constant have received the greatest attention, but work has also been done, for example, on constants related to the weak and strong interactions, the electron-proton mass ratio, and several others.”

Many thanks to Massimo Pietroni for pointing out the paper to me this morning. I am now collecting information about the study, and will update this post shortly.

## Latest global fits to SM observables: the situation in March 2009 (March 25, 2009)

Posted by dorigo in news, physics, science.

A recent discussion in this blog between well-known theorists and phenomenologists, centered on the real meaning of the experimental measurements of top quark and W boson masses, Higgs boson cross-section limits, and other SM observables, convinces me that some clarification is needed.

The work has been done for us: there are groups that do exactly that, i.e. update their global fits to express the internal consistency of all those measurements, and the implications for the search for the Higgs boson. So let me go through the most important graphs below, after mentioning that most of the material comes from the LEP electroweak working group web site.

First of all, what goes in the soup? Many things, but most notably, the LEP I/SLD measurements at the Z pole, the top quark mass measurements by CDF and DZERO, and the W mass measurements by CDF, DZERO, and LEP II. Let us have a look at the mass measurements, which have recently been updated.

For the top mass, the situation is the one pictured in the graph shown below. As you can clearly see, the CDF and DZERO measurements have reached a combined precision of 0.75% on this quantity.

The world average is now at $M_t = 173.1 \pm 1.3$ GeV. I am amazed to see that the first estimate of the top mass, made with a handful of events published by CDF in 1994 (a set which did not even provide a conclusive “observation-level” significance at the time), was so dead-on: the measurement back then was $M_t = 174 \pm 15$ GeV! (For comparison, the DZERO measurement of 1995, in their “observation” paper, was $M_t = 199 \pm 30$ GeV.)
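The 0.75% combined precision quoted above follows directly from the world-average numbers; a quick check:

```python
m_top = 173.1  # GeV, world-average top quark mass
err = 1.3      # GeV, combined CDF + DZERO uncertainty

precision_percent = 100 * err / m_top
print(f"{precision_percent:.2f}%")  # -> 0.75%
```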

As far as global fits are concerned, there is one additional point to make for the top quark: knowing the top mass any better than this has become, by now, useless. You can see it by comparing the constraints on $M_t$ coming from the indirect measurements and W mass measurements (shown by the blue bars at the bottom of the graph above) with the direct measurements at the Tevatron (shown with the green band). The green band is already too narrow: the width of the blue error bars compared to the narrow green band tells us that the SM does not care much where exactly the top mass is, by now.

Then, let us look at the W mass determinations. Note, the graph below shows the situation BEFORE the latest DZERO result, obtained with 1/fb of data, which finds $M_W = 80401 \pm 44$ MeV; its inclusion would not change much of the discussion below, but it is important to stress it.

Here the situation is different: a better measurement would still increase the precision of our comparisons with the indirect information from electroweak measurements at the Z. This is apparent from the fact that the blue bars are still narrower than the world-average band of direct measurements (again in green). Narrow the green band, and you can still collect interesting information on its consistency with the blue points.

Finally, let us look at the global fit, which the electroweak working group at LEP displays in the by now famous “blue band plot”, shown below for the March 2009 conferences. It shows the constraints on the Higgs boson mass coming from all experimental inputs combined, assuming that the Standard Model holds.

I will not discuss this graph in detail, since I have done so repeatedly in the past. I will just mention that the yellow regions have been excluded by direct searches for the Higgs boson at LEP II (on the left, the wide yellow area) and at the Tevatron (the narrow strip on the right). From the plot you should just gather that a light Higgs mass is preferred (the central value being 90 GeV, with +36 and -27 GeV one-sigma error bars). Also, a 95% confidence-level exclusion of masses above 163 GeV is implied by the variation of the global fit $\chi^2$ with the Higgs mass.

I have started to be a bit bored by this plot, because it does not do the best job for me. For one thing, the LEP II limit and the Tevatron limit on the Higgs mass are treated as if they were equivalent in their strength, something which could not possibly be farther from the truth. The truth is, the LEP II limit is a very strong one (the probability that the Higgs has a mass below 112 GeV, say, is one in a billion or so), while the limit obtained recently by the Tevatron is just an “indication”, because the excluded region (160 to 170 GeV) is not excluded strongly: there is still a one-in-twenty chance or so that the real Higgs boson mass indeed lies there.

Another thing I do not particularly like in the graph is that it attempts to pack too much information: variations of $\alpha$, inclusion of low-Q^2 data, etcetera. A much better graph to look at is the one produced by the GFitter group instead. It is shown below.

In this plot, the direct search results are introduced with their actual measured probability of exclusion as a function of Higgs mass, and not just in a digital manner, yes/no, as the yellow regions in the blue band plot. And in fact, you can see that the LEP II limit is a brick wall, while the Tevatron exclusion acts like a smooth increase in the global $\chi^2$ of the fit.

From the black curve in the graph you can get a lot of information. For instance, the most likely values, those that globally have a 1-sigma probability of being one day proven correct, are masses contained in the interval 114-132 GeV. At two-sigma, the Higgs mass is instead within the interval 114-152 GeV, and at three sigma, it extends into the Tevatron-excluded band a little, 114-163 GeV, with a second region allowed between 181 and 224 GeV.

In conclusion, I would like you to take away the following few points:

• Future indirect constraints on the Higgs boson mass will only come from increased precision measurements of the W boson mass, while the top quark has exhausted its discrimination power;
• Global SM fits show an overall very good consistency: there does not seem to be much tension between fits and experimental constraints;
• The Higgs boson is most likely in the 114-132 GeV range (1-sigma bounds from global fits).

## Zooming in on the Higgs (March 24, 2009)

Posted by dorigo in news, physics, science.

Yesterday Sven Heinemeyer kindly provided me with an updated version of a plot which best describes the experimental constraints on the Higgs boson mass, coming from electroweak observables measured at LEP and SLD, and from the most recent measurements of W boson and top quark masses. It is shown on the right (click to get the full-sized version).

The graph is a quite busy one, but I will try below to explain everything one bit at a time, hoping I keep things simple enough that a non-physicist can understand it.

The axes show suitable ranges of values of the top quark mass (varying on the horizontal axis) and of the W boson masses (on the vertical axis). The value of these quantities is functionally dependent (because of quantum effects connected to the propagation of the particles and their interaction with the Higgs field) on the Higgs boson mass.

The dependence, however, is really “soft”: if you were to double the Higgs mass by a factor of two from its true value, the effect on top and W masses would be only of the order of 1% or less. Because of that, only recently have the determinations of top quark and W boson masses started to provide meaningful inputs for a guess of the mass of the Higgs.

Top mass and W mass measurements are plotted in the graphs in the form of ellipses encompassing the most likely values: their size is such that the true masses should lie within their boundaries, 68% of the time. The red ellipse shows CDF results, and the blue one shows DZERO results.

There is a third measurement of the W mass shown in the plot: it is displayed as a horizontal band limited by two black lines, and it comes from the LEP II measurements. The band also encompasses the 68% most likely W masses, as ellipses do.

In addition to W and top masses, other experimental results constrain the mass of top, W, and Higgs boson. The most stringent of these results are those coming from the LEP experiment at CERN, from detailed analysis of electroweak interactions studied in the production of Z bosons. A wide band crossing the graph from left to right, with a small tilt, encompasses the most likely region for top and W masses.

So far we have described measurements. Then, there are two different physical models one should consider in order to link those measurements to the Higgs mass. The first one is the Standard Model: it dictates precisely the inter-dependence of all the parameters mentioned above. Because of the precise SM predictions, for any choice of the Higgs boson mass one can draw a curve in the top mass versus W mass plane. However, in the graph a full band is hatched instead. This corresponds to allowing the Higgs boson mass to vary from a minimum of 114 GeV to 400 GeV: 114 GeV is the lower limit on the Higgs boson mass found by the LEP II experiments in their direct searches using electron-positron collisions, while 400 GeV is just a reference value.

The boundaries of the red region show the functional dependence of the Higgs mass on the top and W masses: an increase of the top mass, at fixed W mass, results in an increase of the Higgs mass, as is clear if one starts from the 114 GeV boundary of the red region and moves into it, toward higher Higgs masses. On the contrary, for a fixed top mass, an increase in the W boson mass results in a decrease of the Higgs mass predicted by the Standard Model. Also note that the red region includes a narrow band which has been left white: it corresponds to Higgs masses between 160 and 170 GeV, the masses that direct searches at the Tevatron have excluded at 95% confidence level.

The second area, hatched in green, does not show a single model’s predictions, but rather the range of values allowed when arbitrarily varying many of the parameters describing the supersymmetric extension of the SM called the “MSSM”, its minimal extension. Even in the minimal extension there are about a hundred additional parameters introduced in the theory, and the values of a few of them modify the interconnection between top mass and W mass in a way that makes direct functional dependencies impossible to draw in the graph. Still, the hatched green region shows a possible range of values of the top quark and W boson masses. The arrow pointing down describes what is expected for the W and top masses if the mass of supersymmetric particles is increased from values barely above present exclusion limits to very high values.

So, to summarize, what should one get from the plot? I think the graph describes many things in one single package, and it is not easy to extract the right message from it alone. Here is a short commentary, in bits.

• All experimental results are consistent with each other (although, I should add, a result from NuTeV, which indirectly determines the W mass from the measured ratio of neutral-current to charged-current neutrino interactions, is not shown);
• Results point to a small patch of the plane, consistent with a light Higgs boson if the Standard Model holds;
• The lower part of the MSSM allowed region is favored, pointing to heavy supersymmetric particles if that theory holds;
• Among experimental determinations, the most constraining are those of the top mass; but once the top mass is known to within a few GeV, it is the W mass that tells us more about the unknown mass of the Higgs boson;
• One point to note when comparing measurements from LEP II and the Tevatron experiments: a 68% contour ellipse drawn in 2-D compares unfavourably to a band encompassing the same probability in a 1-D distribution. This is clear if one compares the actual measurements: CDF $80413 \pm 48$ MeV (with 200/pb of data), DZERO $80401 \pm 44$ MeV (with five times more statistics), LEP II $80376 \pm 33$ MeV (average of four experiments). The ellipses look like they are half as precise as the black band, while they are actually only 30-40% worse. If the above is obscure to you, a simple graphical explanation is provided here.
• When averaged, CDF and DZERO will actually beat the LEP II precision measurement -and they are sitting on 25 times more data (CDF) or 5 times more (DZERO).
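The ellipse-versus-band effect can be checked numerically. This is a minimal sketch assuming Gaussian errors (generic numbers, not the actual CDF/DZERO fit): a 68% 2-D contour sits about 1.5 standard deviations out, while a 68% 1-D band has a half-width of about 1 standard deviation.

```python
import math
from statistics import NormalDist

# 1-D: a central 68% interval of a Gaussian has half-width ~1 sigma.
z_1d = NormalDist().inv_cdf(0.5 + 0.68 / 2)   # ~0.99 sigma

# 2-D: the chi-square CDF with 2 degrees of freedom is 1 - exp(-x/2),
# so the 68% contour of a 2-D Gaussian sits at chi^2 = -2 ln(1 - 0.68),
# i.e. its semi-axes are sqrt(chi^2) ~ 1.51 sigma long.
r_2d = math.sqrt(-2 * math.log(1 - 0.68))

print(f"ellipse semi-axis / band half-width = {r_2d / z_1d:.2f}")  # ~1.52
```

So an ellipse drawn at 68% probability looks roughly 50% wider than a 68% band built from a measurement of the same precision.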

## Ten photons per hour (March 23, 2009)

Posted by dorigo in astronomy, games, mathematics, personal, physics, science.
Tags: , , ,
comments closed

Every working day I walk about a mile from the train station to my physics department in Padova in the morning. I find it a healthy habit, but I sometimes fear it is also, in some sense, a waste of time: if I caught a bus, I could be at work ten minutes earlier. I hate losing time, so I sometimes use the walk to set myself physics problems, trying to see whether I can solve them in my head. It is a way to exercise my mind while I exercise my body.

Today I was thinking about the night of stargazing I treated myself to last Saturday. I had gone to Casera Razzo, a secluded place in the Alps, and observed galaxies for four hours in a row with a 16″ dobsonian telescope, in the company of four friends (and three other dobs). One thing we had observed with amazement was a tiny speck of light coming from the halo of an interacting pair of galaxies in Ursa Major, the one pictured below.

The small speck of light shown in the upper left of the picture above, labeled as MGC 10-17-5, is actually a faint galaxy in the field of view of NGC3690. It has a visual magnitude of +15.7: this is a measure of its integrated luminosity as seen from the Earth. It is a really faint object, barely at the limit of visibility with the instrument I had. The question I found myself formulating this morning was the following: how many photons per second did we get to see through the eyepiece from that faint galaxy?

This is a nice, simple question, but computing the answer in my head took me the best part of my walk. My problem was that I did not have a clue about the relationship between visual magnitude and photon fluxes. So I turned to things I did know.

Some background is needed for those of you who do not know how visual magnitudes are computed, so I will make a small digression here. The scale of visual magnitude is a semi-empirical one, which sets the brightest stars at magnitude zero or so, and defines a decrease of luminosity by a factor of 100 for every five magnitudes of difference. The faintest stars visible with the naked eye on a moonless night are of magnitude +6, which means they are about 250 times fainter than the brightest ones. On the other hand, Venus shines at magnitude -4.5 at its brightest -almost 100 times as bright as the brightest stars-, and our Sun shines at a visual magnitude of about -27, more than a billion times brighter than Venus. The magnitude difference between two objects is related to their relative brightness by a power law: $L_1/L_2 = 2.5^{M_2 - M_1}$; the factor 2.5 is an approximation for the fifth root of 100, and it corresponds to the brightness ratio of two objects that differ by one unit of visual magnitude.
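The power law above fits in two lines of code. This is just a sketch of the scale, using the exact base $100^{1/5} \approx 2.512$ rather than the rounded 2.5:

```python
# Flux ratio between two objects of visual magnitudes m1 and m2:
# every 5 magnitudes of difference corresponds to a factor of 100.
def flux_ratio(m1, m2):
    """How many times brighter object 1 (magnitude m1) is than object 2."""
    return 100 ** ((m2 - m1) / 5)

print(flux_ratio(0, 6))       # brightest stars vs naked-eye limit: ~250
print(flux_ratio(-27, -4.5))  # Sun vs Venus: ~1e9
```

The two printed numbers reproduce the factors quoted in the text: about 250 for six magnitudes, and about a billion for the 22.5 magnitudes between the Sun and Venus.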

Ok, so we know how bright the Sun is. Now, if I could get how many photons reach our eye from it every second, I would make some progress. I reasoned that I knew the value of the solar constant: that is, the power radiated by the Sun onto an area of one square meter at the Earth. I remembered a value of about 1 kilowatt (it is actually 1.366 kW, as I found out later on Wikipedia).

Now, how many photons of visible light arriving per second on that square meter of ground correspond to 1 kilowatt of power? I reasoned that I did not remember the energy of a single visible photon -I remembered it was in the electron-Volt range but I was not really sure- so I had to compute it.

The energy of a quantum of light is given by the formula $E = h \nu$, where $h$ is Planck’s constant and $\nu$ is the light frequency. However, all I knew was that visible light has a wavelength of about 500 nanometers (which is $5 \times 10^{-7} m$), so I had to use the more involved formula $E = hc/\lambda$, where now $c$ is the speed of light and $\lambda$ is the wavelength. I remembered that $h=6 \times 10^{-34} Js$, and that $c=3 \times 10^8 m/s$, so with some effort I could get $E=6 \times 10^{-34} \times 3 \times 10^8 / (5 \times 10^{-7}) = 4 \times 10^{-19}$ J, more or less.
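The same back-of-the-envelope estimate, done with a few more digits of the constants (still assuming 500 nm green light as in the text):

```python
# Photon energy E = h*c/lambda for visible (green) light.
h = 6.626e-34    # Planck constant, J s
c = 3.0e8        # speed of light, m/s
lam = 500e-9     # assumed wavelength, m

E = h * c / lam
print(E)                 # ~4e-19 J, as estimated in the text
print(E / 1.602e-19)     # ~2.5 eV: indeed in the electron-Volt range
```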

My brain was a bit strained by the simple calculation above, but I was relieved to get back an energy roughly equal to what I expected -in the eV range (one eV equals $1.6 \times 10^{-19}$ Joules -that much I do know).

Now, if the Sun delivers 1 kW of power on our square meter, which is a thousand Joules per second, how many visible photons do we get? Here there is a subtlety I did not even bother considering on my walk to the physics department: only about half of the power from the Sun is in the form of visible light, so one should divide that power by two. But I was unhindered by this in my order-of-magnitude walk-estimate. Of course, 1 kW divided by $4 \times 10^{-19}$ Joules makes $2.5 \times 10^{21}$ visible quanta of light per square meter per second.
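As a one-line check of the division (a sketch that, like the text, ignores the visible-light factor of two):

```python
# Photons per square meter per second in 1 kW of sunlight,
# using the ~4e-19 J per-photon estimate from the text.
power = 1000.0       # W, order-of-magnitude solar constant
e_photon = 4e-19     # J per visible photon
n_photons = power / e_photon
print(f"{n_photons:.1e}")   # 2.5e+21 photons per m^2 per s
```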

Now, visual magnitude is expressed as the amount of light hitting the eye. A human eye has a surface of about 20 square millimeters, which is 20 millionths of a square meter: so the number of photons you get by looking straight at the sun (do not do it) is $1.2 \times 10^{14}$ per second. That’s a hundred trillions of ’em photons per second!

I was close to my goal now: the magnitude of the speck of galaxy I saw on Saturday is +15.7, the magnitude of the Sun is -27, so the difference is 43 magnitudes. This corresponds to $2.5^{43}$, which you might throw up your hands at, until you realize that for every 5 units of the exponent the number increases by a factor of 100, so you just do $100^{43/5}$, which is $100^{8.6}$, which is $10^{17.2}$… Simple, isn’t it?
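The same shortcut in code, to confirm the exponent arithmetic above:

```python
import math

# 43 magnitudes as a flux ratio, computed the way described above:
# every 5 magnitudes is a factor 100, so 2.512**43 = 100**(43/5).
ratio = 100 ** (43 / 5)
print(f"10^{math.log10(ratio):.1f}")   # 10^17.2
```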

Now, taking the number of photons reaching the eye from the Sun every second, and dividing by the ratio of apparent luminosities of the Sun and the galaxy, I could get $N_{\gamma}=10^{14} / 10^{17} = 10^{-3}$. One photon every thousand seconds!

Let me stress this: if you watch that patch of sky at night, the number of photons you get from that source alone is a few per hour! With my dobsonian telescope, which intensifies light by almost 10,000 times, I could get a rate of a few tens of photons per second, and the detail was indeed detectable!

If you are interested in the exact number, which I worked out after reaching my office and the tables of constants in the PDG booklet, I computed a rate of $N_{\gamma}=3.4 \times 10^{-3}$ photons per second with the unaided eye, and 22 per second through the eyepiece of the telescope. Without a telescope, that galaxy sends each of us about 10 photons per hour!

UPDATE: this post will remain as one clear example of how dangerous it is to compute in one’s head! Indeed, somewhere in my order-of-magnitude conversions above I dropped a factor of $10^2$ -which, mind you, is not horrible in numbers which have 20 digits or so; but when one wants to get back to reasonable estimates for reasonably small numbers, it does count a lot. So, after taking care of some other (more legitimate) approximations, if one computes things correctly, the number of photons from the galaxy seen with the unaided eye is more like two hundred per hour, and in the telescope it is about 350 per second.

## A seminar against the Tevatron! (March 20, 2009)

Posted by dorigo in news, physics, science.
Tags: , , , ,
comments closed

I spent this week at CERN to attend the meetings of the CMS week – an event which takes place four times a year, when collaborators of the CMS experiment, coming from all parts of the world, get together at CERN to discuss detector commissioning, analysis plans, and recent results. It was a very busy and eventful week, and only now, sitting on a train that brings me back from Geneva to Venice, can I find the time to report with due dedication on some things you might be interested to know about.

One thing to report on is certainly the seminar I eagerly attended on Thursday morning, by Michael Dittmar (ETH-Zurich). Dittmar is a CMS collaborator, and he talked at the CERN theory division on a provocative subject: ”Why I never believed in the Tevatron Higgs sensitivity claims for Run 2ab”. The title did promise a controversial discussion, but I was really startled by its level, as much as by the defamation of which I felt personally a target. I will explain this below.

I should also mention that by Thursday I had already attended a reduced version of his talk, since he had given it the previous day in another venue. Both John Conway and I had corrected him on a few plainly wrong statements back then, but I was puzzled to see him reiterate those false statements in the longer seminar! More on that below.

Dittmar’s obnoxious seminar

Dittmar started by saying he was infuriated by the recent BBC article where “a statement from the director of a famous laboratory” claimed that the Tevatron had 50% odds of finding a Higgs boson in a certain mass range. This prompted him to prepare a seminar expressing his scepticism. However, it turned out that his scepticism was directed not solely at the optimistic statement he had read, but at every single result on Higgs searches that CDF and DZERO had produced since Run I.

In order to discuss sensitivity and significances, the speaker made an unilluminating digression on how counting experiments can or cannot obtain observation-level significances with their data, depending on the level of background of their searches and the associated systematic uncertainties. His statements on this issue were very basic and totally uncontroversial, but he failed to acknowledge that nowadays nobody does counting experiments any more when searching for evidence of a specific model: our confidence in advanced analysis methods involving neural networks, shape analyses, and likelihood discriminants; the tuning of Monte Carlo simulations; and the accurate analytical calculations of high-order diagrams for Standard Model processes have all grown tremendously with years of practice and study, and these methods and tools overcome the problems of searches for small signals immersed in large backgrounds. One can be sceptical, but one cannot ignore the facts, as the speaker seemed inclined to do.

Then Dittmar said that in order to judge the value of sensitivity claims for the future, one may turn to past studies and verify their agreement with the actual results. So he turned to the Tevatron Higgs sensitivity studies of 2000 and 2003, two endeavours in which I had participated with enthusiasm.

He produced a plot showing the small signal of $ZH \to l^+ l^- b \bar b$ decays that the Tevatron 2000 study believed the two experiments could achieve with 10 inverse femtobarns of data, expressing his doubts that the “tiny excess” could constitute evidence for Higgs production. On the side of that graph, he had for comparison placed a CDF result on real Run I data, where a signal of WH or ZH decays to four jets had been searched for in the dijet invariant mass distribution of the two b-jets.

He commented on that figure by saying, half-mockingly, that the data could have been used to exclude the standard model process of associated $Z+jets$ production, since the contribution from Z decays to b-quark pairs was sitting at a mass where one bin had fluctuated down by two standard deviations with respect to the sum of background processes. This ridiculous claim was utterly unsupported by the plot -which showed an overall very good agreement between data and MC sources- and by the fact that the bins adjacent to the downward-fluctuating one were higher than the prediction. I found this claim really disturbing, because it tried to denigrate my experiment with a futile and incorrect argument. But I was about to get more upset at his next statement.

In fact, he went on to discuss the global expectation of the Tevatron on Higgs searches, a graph (see below) produced in 2000 after a big effort from several tens of people in CDF and DZERO.

He started by saying that the graph was confusing, that it was not clear from the documentation how it had been produced, nor that it was the combination of CDF and DZERO sensitivity. This was very amusing, since from the far back John Conway, a CDF colleague, shouted: “It says it in print on top of it: combined thresholds!”, then adding, in a calm voice, “…In case you’re wondering, I made that plot.” John had in fact been the leader of the Tevatron Higgs sensitivity study, not to mention the author of many of the most interesting searches for the Higgs boson in CDF since then.

Dittmar continued his surreal talk with an overbid, claiming that the plot had been produced “by assuming a 30% improvement in the mass resolution of pairs of b-jets, when nobody had even the least idea of how such an improvement could be achieved”.

I could not have put together a more personal, direct attack on years of my own work myself! It is no mystery that I have worked on dijet resonances since 1992, but of course I am a rather unknown soldier in this big game; still, I felt the need to interrupt the speaker at this point -exactly as I had done at the shorter talk the day before.

I remarked that in 1998, one year before the Tevatron sensitivity study, I had produced a PhD thesis and public documents showing the observation of a signal of $Z \to b \bar b$ decays in CDF Run I data, and had demonstrated on that very signal how the use of ingenious algorithms could reduce the dijet mass resolution by at least 30%, making the signal more prominent. The relevant plots are below, directly from my PhD thesis: judge for yourself.

In the plots, you can see how the excess over the background predictions moves to the right as more and more refined jet energy corrections are applied, going from generic jet energy corrections (top) to optimized corrections (bottom), until the signal becomes narrower and centered at the true value. The plots on the left show the data and the background prediction; those on the right show the difference, which is due to Z decays to b-quark jet pairs. Needless to say, the optimization is done on Monte Carlo Z events, and only then checked on the data.

So I said that Dittmar’s statement was utterly false: we had had an idea of how to do it, we had proven we could do it, and besides, the plots showing what we had done had indeed been included in the Tevatron 2000 report. Had he overlooked them?

Escalation!

Dittmar seemed unbothered by my remark, and responded that that small signal had not been confirmed in Run II data. His statement constituted an even more direct attack on four more years of my research time, spent on that very topic. I kept my cool, because when your opponent offers you on a silver plate the chance to verbally sodomize him, you cannot be too angry with him.

I remarked that a signal had indeed been found in Run II, amounting to about 6000 events after all selection cuts; it confirmed the past results. Dittmar then said that “to the best of his knowledge” this had not been published, so it did not really count. I then explained it was a 2008 NIM publication, and asked whether he would please document himself before making such unsubstantiated allegations. He shrugged his shoulders, said he would look more carefully for the paper, and went back to his talk.

His points about the Tevatron sensitivity studies were laid out: for a low-mass Higgs boson, the signal is just too small and backgrounds are too large, and the sensitivity of real searches is below expectations by a large factor. To stress this point, he produced a slide containing a plot he had taken from this blog! The plot (see on the left), which is my own concoction and not Tevatron-approved material, shows the ratio of the observed limit on Higgs production to the expectations of the 2000 study. He pointed at the two points for 100-140 GeV Higgs boson masses to prove his claim: the Tevatron is now doing three times worse than expected. He even uttered: “It is time to confess: the sensitivity study was wrong by a large factor!”.

I could not help interrupting again: I had to stress that the plot was not approved material and was just a private interpretation of Tevatron results, but I did not deny its contents. The plot was indeed showing that low-mass searches were performing below par, but it was also showing that high-mass ones were amazingly in agreement with expectations worked out 10 years before. Then John Conway explained the low-mass discrepancy for the benefit of the audience, as he had done the day before for no apparent benefit of the speaker.

Conway explained that the study had been done under the hypothesis that an upgrade of our silicon detector would be financed by the DoE: it was in fact meant to prove the usefulness of funding the upgrade. A larger acceptance of the inner silicon tracking boosts the sensitivity to b-quark jets from Higgs decays by a large factor, because any acceptance increase gets squared when computing the overall efficiency of a double b-tag. So Dittmar could not really blame the Tevatron experiments for predicting something that did not materialize in a corresponding result, given that the DoE had denied the funding to build the upgraded detector!
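The squaring of the acceptance is easy to see with a toy number. This is a sketch with purely illustrative values (not the actual CDF/DZERO tagging efficiencies): tagging both b-jets of a $H \to b \bar b$ decay scales like the per-jet acceptance squared.

```python
# Why an acceptance gain "gets squared": a double b-tag requires
# tagging BOTH jets, so the double-tag rate scales with the per-jet
# acceptance squared. Illustrative numbers, not the real detector values.
def double_tag_gain(acc_old, acc_new):
    return (acc_new / acc_old) ** 2

print(double_tag_gain(0.50, 0.65))   # a 30% per-jet gain -> ~69% more double-tags
```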

I then felt compelled to add that by using my plot Dittmar was proving the opposite thesis of what he wanted to demonstrate: low-mass Tevatron searches were shown to underperform because of funding issues, rather than because of a wrong estimate of sensitivity; and high-mass searches, almost unhindered by the lack of an upgraded silicon, were in excellent agreement with expectations!

The speaker said that no, the high-mass searches were not in agreement, because their results could not be believed, and moved on to discuss those, taking real-data results from the Tevatron.

He said that $H \to WW$ is a great channel at the LHC.

“Possible at the Tevatron ? I believe that the WW continuum background is much larger at a ppbar collider than at a pp collider, so my personal conclusion is that if the Tevatron people want to waste their time on it, good luck to them.”

Now, come on. I cannot imagine how a respectable particle physicist could drive himself into making such statements in front of a distinguished audience (which, have I mentioned it, included several theorists of the highest caliber, no less than Edward Witten among them). Waste their time? I felt I was wasting my time listening to him, but my determination to report his talk here kept me anchored to my chair, taking notes.

So this second part of the talk was no less unpleasant than the first: Dittmar criticized the Tevatron high-mass Higgs results in the most incorrect, and scientifically dishonest, way that I could think of. Here is just a summary:

• He picked one particular sub-channel distribution from one experiment, noting that its most signal-rich region seemed to show a deficit of events. He then showed the global CDF+DZERO limit, which did not show a departure between the expected and observed limits on the Higgs cross section, and concluded that there was something fishy in the way the limit had been evaluated. But the limit is extracted from literally dozens of such distributions -something he failed to mention despite having been warned of that very issue in advance.
• He picked two neural-network output distributions from a search for a Higgs at 160 and 165 GeV, and declared they could not be correct since they were very different in shape! John, from the back, replied: “You have never worked with neural networks, have you?” No, he had not. Had he, he would probably have understood that different mass points, optimized differently, can produce very different NN outputs.
• He showed another neural-network output based on 3/fb of data, which had a pair of data points lying one standard deviation above the background predictions, and the corresponding plot for a search performed with improved statistics, which instead showed a downward fluctuation. He said he was puzzled by the effect. Again, some intervention from the audience was necessary, explaining that the methods are constantly reoptimized, and it is no wonder that adding more data can result in a different outcome. This produced a discussion when somebody from the audience speculated that the searches were maybe performed by looking at the data before choosing which method to use for the limit extraction! On the contrary, of course: all Tevatron searches for the Higgs are blind analyses, where the optimization is performed on expected limits, using control samples and Monte Carlo, and the data is only looked at afterwards.
• He showed that the Tevatron 2000 report had estimated a maximum signal-to-noise ratio of 0.34 for the $H \to WW$ search, and he picked one random plot from the many searches of that channel by CDF and DZERO, showing that the signal-to-noise there was never larger than 0.15 or so. Explaining to him that the S/N of searches based on neural networks and combined discriminants is not a fixed value, and that many improvements in data-analysis techniques have occurred in 10 years, was useless.

Dittmar concluded his talk by saying that:

“Optimistic expectations might help to get funding! This is true, but it is also true that this approach eventually destroys some remaining confidence in science of the public.”.

His last slide even contained the sentence he had previously brought himself to uttering:

“It is the time to confess and admit that the sensitivity predictions were wrong”.

Finally, he encouraged the LHC experiments to look for the Higgs where the Tevatron had excluded it -between 160 and 170 GeV- because Tevatron results cannot be believed. I was disgusted: he most definitely stakes a strong claim to the prize for the most obnoxious talk of the year. Unfortunately for all, it was just as much an incorrect, scientifically dishonest, and dilettantesque lamentation, plus a defamation of a community of 1300 respected physicists.

In the end, I am really wondering what moved Dittmar to such a disastrous performance. I think I know the answer, at least in part: he has been an advocate of the $H \to WW$ signature since 1998, and he must now feel bad about that beautiful process proving hard to see, at the hands of his “enemies”. Add to that the frustration of seeing the Tevatron producing brilliant results and excellent performances while CMS and Atlas sit idly in their caverns, and you might figure out there is some human factor to take into account. But nothing, in my opinion, can justify the mix he put together: false allegations, disregard of published material, manipulation of plots, public defamation of respected colleagues. I am sorry to say it, but even though I have nothing personal against Michael Dittmar -I do not know him, and in private he might even be a pleasant person-, it will be very difficult for me to collaborate with him for the benefit of the CMS experiment in the future.

## The say of the week (March 19, 2009)

Posted by dorigo in games, humor, internet, italian blogs, physics, science.
Tags: ,
comments closed

Questi c’hanno sistematici che fanno provincia

[These fellas have got county-wide systematics]

(Xisy, from a comment in M.Dal Mastro’s blog)