##
Physics Highlights – May 2009
*June 2, 2009*

*Posted by dorigo in news, physics, science.*

Tags: CDF, DZERO, Fermi, heavy quarks, Hess, QCD, Randall, standard model

comments closed

Here is a list of noteworthy pieces I published on my new blog site in May. Those of you who have not yet updated your links to point there might benefit from it…

Four things about four generations -the three families of fermions in the Standard Model could be complemented by a fourth: a recent preprint discusses the possibility.

Fermi and Hess do not confirm a dark matter signal: a discussion of recent measurements of the electron and positron cosmic ray fluxes.

Nit-picking on the Omega_b Discovery: A discussion of the significance of the signal found by DZERO, attributed to an Omega_b particle.

Nit-picking on the Omega_b Baryon -part II: A pseudoexperiments approach to the assessment of the significance of the signal found by DZERO.

The real discovery of the Omega_b released by CDF today: Announcing the observation of the Omega_b by CDF.

CDF versus DZERO: and the winner is…: A comparison of the two “discoveries” of the Omega_b particle.

The Tevatron Higgs limits strengthened by a new theoretical study: a discussion of a new calculation of Higgs cross sections, showing an increase in the predictions with respect to the numbers used by the Tevatron experiments.

Citizen Randall: a report on the conferral of honorary citizenship of Padova on Lisa Randall.

Hadronic Dibosons seen -next stop: the Higgs: A report of the new observation of WW/WZ/ZZ decays where one of the bosons decays to jet pairs.

##
Testing the Bell inequality with Lambda hyperons
*April 14, 2009*

*Posted by dorigo in news, physics, science.*

Tags: bell inequality, quantum mechanics, quantum optics, stern gerlach

comments closed

This morning I came back from Easter vacations to my office and was suddenly assaulted by a pile of errands crying out to be dealt with, but I prevailed, and I still found some time to get fascinated browsing through a preprint that appeared a week ago on the arXiv, 0904.1000. The paper, by Xi-Qing Hao, Hong-Wei Ke, Yi-Bing Ding, Peng-Nian Shen, and Xue-Qian Li [wow, I’m past the hard part of this post], is titled “**Testing the Bell Inequality at Experiments of High Energy Physics**”. Here is the abstract:

Besides using the laser beam, it is very tempting to directly testify the Bell inequality at high energy experiments where the spin correlation is exactly what the original Bell inequality investigates. In this work, we follow the proposal raised in literature and use the successive decays to testify the Bell inequality. […] (We) make a Monte-Carlo simulation of the processes based on the quantum field theory (QFT). Since the underlying theory is QFT, it implies that we pre-admit the validity of quantum picture. Even though the QFT is true, we need to find how big the database should be, so that we can clearly show deviations of the correlation from the Bell inequality determined by the local hidden variable theory. […]

Testing the Bell inequality with the decay of short-lived subatomic particles sounds really cool, doesn’t it? Or does it? Unfortunately, my quantum mechanics is too rusty to allow me to put together a careful post which explains things tidily, in the short time left between now and a well-deserved sleep. You can read elsewhere about the Bell inequality, and how it tests whether pure quantum mechanics rules -destroying correlations between quantum systems separated by a space-like interval- or whether a local hidden variable theory holds instead: besides, almost anybody can write a better account of it than me, so if you feel you can help, you are invited to guest-blog about it here.
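If you want a concrete feeling for what is at stake, here is a minimal sketch -mine, not from the paper- of the standard CHSH form of the inequality: for a spin singlet, quantum mechanics predicts a correlation E(a,b) = -cos(a-b) between the two analyzer directions, and at suitably chosen angles the CHSH combination exceeds the bound of 2 that any local hidden variable theory must obey.

```python
import math

def qm_correlation(a, b):
    """Singlet-state spin correlation predicted by quantum mechanics
    for analyzer directions a and b (angles in radians)."""
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    """CHSH combination; local hidden variable theories bound |S| <= 2."""
    E = qm_correlation
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Optimal analyzer settings for the singlet state:
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(abs(S))  # 2*sqrt(2) ~ 2.83, beyond the LHV bound of 2
```

In the proposal discussed above, the angles between the decay pions play the role the analyzer settings play here.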

Besides embarrassing myself, I still wanted to mention the paper today, because the authors make an honest attempt at proposing an experiment which might actually work, and which could avoid some drawbacks of all the experimental tests attempted so far, which belong to the realm of quantum optics. In their own words,

Over a half century, many experiments have been carried out […] among them, the polarization entanglement experiments of two-photons and multi-photons attract the widest attention of the physics society. All photon experimental data indicate that the Bell inequality and its extension forms are violated, and the results are fully consistent with the prediction of QM. The consistency can reach as high as 30 standard deviations. […] when analyzing the data, one needs to introduce additional assumptions, so that the requirement of LHVT cannot be completely satisfied. That is why as generally considered, so far, the Bell inequality has not undergone serious test yet.

Being totally ignorant of quantum optics I am willing to buy the above as true, although, being a sceptical son of a bitch, the statement makes me slightly dubious. Anyway, let me get to the point of this post.

Any respectable quantum mechanic could convince you that in order to check the Bell inequality with the decay chain mentioned above, it all boils down to measuring the correlation between the pions emitted in the decay of the Lambda particles, i.e., the polarization of the Lambda baryons: in the end, one just measures one single, clean angle between the observed final-state pions. The authors show that this would require about one billion decays of the J/psi mesons produced by an electron-positron collider running at 3.09 GeV center-of-mass energy (the mass of the J/psi resonance): this is because the decay chain involving the clean final state is rare: the branching fraction of the first decay in the chain is 0.013, the next step occurs once in a thousand cases, and finally, each Lambda hyperon has a 64% chance to yield a proton-pion final state. So, 0.013 times 0.001 times 0.64 squared makes a chance of about five in a million: about as frequent as a Pope appointment. However, if we *had* such a sample, here is what we would get:
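A quick sanity check on the arithmetic, using the figures quoted above (the naive count from a billion decays comes out somewhat above the 3382 pairs used in the paper’s plot, presumably because acceptance and efficiency factors are not modeled here):

```python
# Chain probability for one decay to yield the clean final state,
# using the branching fractions quoted in the text above.
br_1 = 0.013      # first branching fraction in the chain
br_2 = 0.001      # the once-in-a-thousand step
br_lambda = 0.64  # Lambda -> proton + pion fraction, needed twice

p_chain = br_1 * br_2 * br_lambda ** 2
print(p_chain)        # ~5.3e-06: about five useful events per million decays
print(1e9 * p_chain)  # of order a few thousand clean pairs from 1e9 decays
```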

The plot shows the measured angle between the two charged pions one would obtain from 3382 pion pairs (resulting from a billion decays through the double hyperon decay chain), compared with the pure quantum mechanics prediction (the blue line) and with the range allowed by the Bell inequality (the area within the green lines). The simulated events are taken to follow the QM predictions, and such statistics would indeed refute the Bell inequality -although not by a huge statistical margin.

So, the one above is an interesting distribution, but if the paper were all about showing they can compute branching fractions and run a toy Monte Carlo simulation (which even I could do in the time it takes to write a lousy post), it would not be worth much. Instead, they have an improved idea, which is to apply a suitable magnetic field and exploit the anomalous magnetic moment of the Lambda baryons to measure their polarization simultaneously along three independent axes, turning a passive measurement -one involving a check of the decay kinematics of the Lambda particles- into an active one -a direct determination of the polarization. This is a sort of double Stern-Gerlach experiment. Here I would really love to explain what a Stern-Gerlach experiment is, and even more to make sense of the above gibberish, but for today I feel really drained, and I will just quote the authors again:

One can install two Stern-Gerlach apparatuses at the two sides, with flexible angles with respect to the electron-positron beams. The apparatus provides a non-uniform magnetic field which may deflect the trajectory of the neutral Lambda due to its non-zero anomalous magnetic moment, i.e. the force is proportional to the product of the anomalous magnetic moment of the Lambda and the gradient of B, where B is a non-uniform external magnetic field. Because the Lambda is neutral, the Lorentz force does not apply, therefore one may expect to use the apparatus to directly measure the polarization […]. But one must first identify the particle flying into the Stern-Gerlach apparatus […]. It can be determined by its decay product […]. Here one only needs the decay product to tag the decaying particle, but does not use it to do kinematic measurements.

I think this idea is brilliant, and it might actually be turned into a technical proposal. However, the experimental problems connected with setting up such an apparatus, detecting the golden decays in a huge background of impure quantum states, and capturing Lambdas inside inhomogeneous magnetic fields, are mind-boggling: no wonder the authors do not have a Monte Carlo for that. Also, it remains to be seen whether such pains are really called for. If you ask me, quantum mechanics is right, period: why bother?

##
Things I should have blogged on last week
*April 13, 2009*

*Posted by dorigo in cosmology, news, physics, science.*

Tags: anomalous muons, CDF, dark matter, DZERO, Higgs boson, neutrino

comments closed

It rarely happens that four days pass without a new post on this site, but it is never for lack of things to report on: the world of experimental particle physics is wonderfully active and always entertaining. Usually hiatuses are due to a bout of laziness on my part. In this case, I can blame Vodafone, the provider of the wireless internet service I use when I am on vacation. From Padola (the place in the eastern Italian Alps where I spent the last few days) the service is horrible, and I sometimes lack the patience to find the moment of outburst when bytes flow freely.

Things I would have wanted to blog on during these days include:

- The document describing the DZERO search for a CDF-like anomalous muon signal is finally public, about two weeks after the talk which announced the results at Moriond. Having had in my hands an unauthorized draft, I have a chance of comparing the polished with the unpolished version… Should be fun, but unfortunately unbloggable, since I owe some respect to my colleagues in DZERO. Still, the many issues I raised after the Moriond seminar should be discussed in light of an official document.
- DZERO also produced a very interesting search for ttH production. This is the associated production of a Higgs boson and a pair of top quarks, a process whose rate is made significant by the large coupling of top quarks to Higgs bosons, by virtue of the large top quark mass. By searching for a top-antitop signature and the associated Higgs boson decay to a pair of b-quark jets, one can investigate the existence of Higgs bosons in the mass range where that decay is most frequent -i.e., the region where all indirect evidence puts it. However, ttH production is invisible at the Tevatron, and very hard at the LHC, so the DZERO search is really just a check that there is nothing sticking out which we have missed by forgetting to look there. In any case, the signature is extremely rich and interesting to study (I had a PhD student doing this for CMS a couple of years ago), hence my interest.
- I am still sitting on my notes for Day 4 of the NEUTEL2009 conference in Venice, which included a few interesting talks on gravitational waves, CMB anisotropies, the PAMELA results, and a talk by Marco Cirelli on dark matter searches. With some effort, I should be able to organize these notes in a post in a few days.
- And new beautiful physics results are coming out of CDF. I cannot anticipate much, but I assure you there will be much to read about in the forthcoming weeks!

##
Milind Diwan: recent MINOS results
*April 8, 2009*

*Posted by dorigo in news, physics, science.*

Tags: minos, neutrino, neutrino experiments, neutrino oscillations, Sterile neutrinos

comments closed

*I offer below another piece of the notes I took at the NEUTEL09 conference in Venice last month. For the slides of the talk reported here, see the conference site.*

Milind’s presentation concentrated on results on muon-neutrino to electron-neutrino conversions. MINOS is the “Main Injector Neutrino Oscillation Search”. It is a long-baseline experiment: the beam from the Main Injector, Fermilab’s high-intensity source of protons which feeds the Tevatron accelerator, is sent from Batavia (IL) to the Soudan mine in Minnesota, 735 km away. There are actually two detectors, a near and a far detector: this is the unique feature of MINOS. The spectra collected at the two sites are compared to measure muon-neutrino disappearance and electron-neutrino appearance. The near detector is 1 km away from the target.
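As an aside -my own illustration, not part of the talk- the reason why a 735 km baseline matches a few-GeV beam so well can be seen from the standard two-flavor oscillation formula; the oscillation parameters below are indicative atmospheric values of my choosing, not the talk’s fit results.

```python
import math

def survival_prob(L_km, E_GeV, dm2_eV2, sin2_2theta):
    """Two-flavor muon-neutrino survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# MINOS-like numbers: 735 km baseline, 3 GeV beam energy,
# indicative atmospheric parameters (my assumption).
p = survival_prob(735.0, 3.0, 2.4e-3, 1.0)
print(p)  # roughly 0.54: a large disappearance effect at the far detector
```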

The beam is a horn-focused muon-neutrino one. Horns are parabolic-shaped magnets. 120 GeV protons produce pions, which are focused by these structures; negative ones are defocused, and the beam contains predominantly muon neutrinos from the decay of the focused pions. The accelerator provides 10-microsecond pulses every 2.2 seconds. 95% of the resulting flux is muon neutrinos, 4% muon antineutrinos.

Besides the presence of two detectors in line, another unique feature of the Fermilab beam is the possibility to move the target in and out, shifting the spectrum of neutrinos that come out, because the focal point of the horns changes. Two positions of the target are used, corresponding to two beam configurations. In the high-energy configuration one can get a beam centered at an energy of 8 GeV or so, while the low-energy configuration is centered at 3 GeV. Most of the time Minos runs with the 3 GeV beam.

The detectors amount to a kiloton of steel and scintillator planes at the near site, and 5.4 kT at the far site. Scintillator strips are 1 cm thick and 4.1 cm wide, and the Moliere radius is 3.7 cm. A 1-GeV muon crosses 27 planes. The iron in the detectors is magnetized with a 1 Tesla field.

MINOS event topologies include CC-like and NC-like events. A charged-current (CC) muon-neutrino event gives a muon plus hadrons: a long charged track from the muon, which is easy to find. A neutral-current (NC) event makes a diffuse splash, since all one sees is the signal from the break-up of the target nucleus; an electron CC event leaves a dense, short shower, with a typical electromagnetic shower profile. The three processes giving rise to the observed signatures are described by the Feynman diagrams below.

The analysis challenge is to put together a selection algorithm capable of rejecting backgrounds and selecting electron-neutrino CC events. Fluxes are measured in the near detector, and they allow one to predict what should be found in the far detector. This minimizes the dependence on Monte Carlo, because there are too many factors that may cause fluctuations in the real data, and the simulation cannot possibly handle them all. They carry out a blind analysis, checking background estimates with independent samples: this serves the purpose of avoiding biasing oneself with what one should observe. They also generate many simulated samples not containing an oscillation signal, to check all analysis procedures.

Basic cuts are applied to the data sample to ensure data quality. Fiducial-volume cuts reject cosmic-ray backgrounds. These simple cuts lead to a S/N ratio of 1:12, where by “signal” one means the appearance of electron neutrinos. Candidate events are then selected with artificial neural networks, which use the properties of the shower, the lateral shower spread, etcetera. These can discriminate NC interactions from electron-neutrino-induced CC interactions. After the application of the algorithm, the S/N ratio is 1:4. At this stage, one important remaining source of background is due to muon-neutrino CC events, which can be mistaken for electron-neutrino ones when the muon is not seen in the detector.

They can select events with a “library event matching” (LEM). This matches the data event with a shower library, reconstructing the fraction of the best 50 matches which are electron-neutrino events. This is more or less like an evolved “nearest-neighbor” algorithm. As a result, they get a better separation. However, according to the speaker this method is not ready yet, since they still need to understand its details better.

[*As I was taking these notes, I observed that data and Monte Carlo simulation do not match well in the low-ANN-output region. The speaker claims that the fraction of events in the tail of the Monte Carlo distribution can be modeled only with some approximation, but that they do not need to model that region too well for their result. However, it looks as if the discrepancy between data and MC is not well understood. Please refer to the graph shown below, which shows the NN output in data and simulation at preselection level.*]

Back to the presentation. To obtain a result, the calculation they perform is simple: how many events are expected in the far detector? The ratio of far to near flux is known: 1.3E-6. This includes all geometrical factors. For this analysis they have 3E20 protons on target. They expect 27 events for the ANN selection, and 22 for the LEM analysis.
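The far-detector expectation is then just a rescaling of the near-detector rate. In this sketch the near-detector count is a hypothetical number I chose to reproduce the quoted ~27 events, not a figure from the talk:

```python
# Far-detector expectation from the near-detector rate: the quoted
# far/near flux ratio (1.3e-6) already folds in all geometrical factors.
far_over_near = 1.3e-6
near_events = 2.1e7          # hypothetical near-detector event count
expected_far = near_events * far_over_near
print(expected_far)          # ~27 events expected at the far detector
```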

They need to separate backgrounds in NC and CC, so they do a trick: they take data in two different beam configurations, then they look at the spectrum in the near detector, where they expect muon-type events to be rejected much more easily because they are more deflected. From this they can separate the two contributions.

Their final result for the predicted number of electron-induced CC events is 27 ± 5 (stat) ± 2 (syst).

A second check on the background calculation consists in removing the muon in tagged CC events, and using these events for two different calculations. One is an independent background calculation; in the other, they add a simulated electron to the raw event data after removing the muon, which checks whether the signal is modeled correctly. From these studies they conclude that the signal is modeled well.

The results show that there is indeed a small signal: they observe 35 events, when they expect 27, in the high-NN output region, as shown in the figure above. For the other method, LEM, results are consistent. The energy spectrum of the selected events is shown in the graph below.
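A back-of-the-envelope estimate -mine, not the collaboration’s- of how significant 35 observed versus 27 expected is, adding the Poisson fluctuation of the expectation and the quoted uncertainties in quadrature:

```python
import math

observed = 35.0
expected = 27.0
stat_unc = 5.0   # quoted statistical uncertainty on the prediction
syst_unc = 2.0   # quoted systematic uncertainty on the prediction

# Naive Gaussian significance: Poisson fluctuation of the expectation
# combined in quadrature with the prediction uncertainties.
sigma = math.sqrt(expected + stat_unc ** 2 + syst_unc ** 2)
z = (observed - expected) / sigma
print(z)  # ~1.1 sigma: indeed compatible with the prediction, as stated
```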

With the observation of this small excess (which is compatible with predictions), a 90% confidence level contour is set in the plane of sin^2(2θ_13) versus δ_CP, as shown in the figure on the right below. The limit goes up to 0.35, with a small oscillation depending on the value of δ_CP.

The speaker claims that if the excess they are observing disappears with further accumulated data, they will be able to reach below the existing bound.

The other result of MINOS comes from disappearance studies. The signal amounts to a deficit of several hundred events. They can put a limit on an empirical parameter which determines what fraction of the initial flux has gone into sterile neutrinos. With the 6.6E20 protons on target taken so far, the fraction of sterile neutrinos is less than 0.68 at 90% CL.

##
What is the Y(4140)? The plot thickens
*April 6, 2009*

*Posted by dorigo in news, physics, science.*

Tags: charmonium, heavy quarks, QCD, Y(4140)

comments closed

I read with interest -but it would probably be more honest to say I browsed, since I could understand less than 50% of it- a preprint released three days ago on “**The hidden charm decay of Y(4140) by the rescattering mechanism**”, by Xiang Liu, from Peking University (now at Coimbra, Portugal). The Y particle was recently discovered by CDF.

The existence of the several new resonances of masses above 3 GeV recently unearthed by B factories and by the CDF experiment poses a challenge to our interpretation of these states as simple quark-antiquark bound states, because of their properties -in particular, their decay pattern and their natural widths.

Already with the first “exotic” meson discovered a few years ago (and recently measured with great precision by CDF), the X(3872), the puzzle was evident: at a mass almost coincident with twice the mass of conventional charmed mesons (states labeled “D”, which are composed of two quarks: a charm and an up or down quark), the X was immediately suggested to be a molecular state of two D particles. I wrote an account of the studies of the nature of the X particle a few years ago if you are interested -but mind you, the advancements in this research field are quick, and I believe the material I wrote back then is a bit aged by now.

The paper by Liu tries to determine whether the interpretation of the Y particle as a pure second radial excitation of P-wave charmonium holds water once the observed branching ratio of the Y into the J/ψφ final state seen by CDF, and the measured decay width, are compared to a theoretical calculation.

The nice thing about the decay of the Y into the observed final state is that it occurs only through a so-called “rescattering” mechanism, by means of the diagrams shown below (the ones shown refer to the J=0 hypothesis; similar diagrams are discussed for the J=1 case in the paper).

As you can see, the Y produces the two final-state particles by means of a triangle loop of D mesons. These diagrams usually describe rare processes, and in fact Liu’s calculations end up finding a small branching fraction. I am unable to delve into the details of the computation, so I will just state the result: the typical values of the branching ratio depend on a parameter which, if taken in a “reasonable” range of values, provides very small estimates. This appears inconsistent with the observation provided by the CDF experiment.

Clearly, work is in progress here, so I would abstain from drawing any definite conclusion on the matter. For now, let us call this an indication that the simple interpretation of the Y as an excited charmonium state is problematic.

##
Just a link
*April 5, 2009*

*Posted by dorigo in Blogroll, news, physics, science.*

Tags: Higgs boson, science reporting, Tevatron

comments closed

I read with amusement (and some effort) a Spanish account by Francis (th)E mule of Michael Dittmar’s controversial seminar of last March 19th. I paste the link here for several reasons: because I believe it might be of interest to some of you, to have a place to store it, and because I am not insensitive to flattery:

“Among the audience was Tomasso Dorigo […] (the person) responsible for the best blog on elementary particle physics in the world”

Muchas gracias, Francis -but please note: my name is spelled with two m’s and one s!

##
NeuTel 09: Oscar Blanch Bigas, update on Auger limits on the diffuse flux of neutrinos
*April 3, 2009*

*Posted by dorigo in astronomy, cosmology, news, physics, science.*

Tags: auger, cosmic rays, GZK, neutrino

comments closed

*With this post I continue the series of short reports on the talks I heard at the Neutrino Telescopes 2009 conference, held three weeks ago in Venice.*

The Pierre Auger Observatory is a huge (3000 km^2) hybrid detector of ultra-energetic cosmic rays -that means ones with an energy above 10^18 eV. The detector is located near Malargüe, Argentina, at 1400 m above sea level.

There are four six-eyed fluorescence detectors: when the shower of particles created by a very energetic primary cosmic ray develops in the atmosphere, it excites nitrogen molecules, which emit fluorescence light that is collected by the telescopes. This provides a calorimetric measurement of the shower, since the number of particles in the shower measures the energy of the incident primary particle.

The main problem of the fluorescence detection method is statistics: fluorescence detectors have a reduced duty cycle, because they can only observe on moonless nights. That amounts to a 10% duty cycle. So they are complemented by a surface detector, which has a 100% duty cycle.

The surface detector is composed of water Cherenkov tanks on the ground, which detect light with photomultiplier tubes. The signal is sampled as a function of the distance from the shower core. The measurement depends on a Monte Carlo simulation, so the method carries some systematic uncertainties.

The assembly includes 1600 surface detectors (red points), surrounded by the four fluorescence detectors (shown by green lines in the map above). These study the high-energy cosmic rays: their spectra, their arrival directions, and their composition. The detector also has some sensitivity to ultra-high-energy neutrinos. A standard cosmic ray interacts at the top of the atmosphere and yields an extensive air shower which has an electromagnetic component at the ground; but if the arrival direction of the primary is tilted with respect to the vertical, the e.m. component is absorbed before reaching the ground, so the shower contains only muons. For neutrinos, which can penetrate deep into the atmosphere before interacting, the shower will instead have a significant e.m. component regardless of the angle of incidence.

The “footprint” is the pattern of firing detectors on the ground, and it encodes information on the angle of incidence. For tilted showers, the presence of an e.m. component is a strong indication of a neutrino-induced shower. An elongated footprint and a wide time structure of the signal are seen for tilted showers.

There is a second method to detect neutrinos, based on so-called “Earth-skimming” neutrinos: a neutrino interacts in the Earth, producing a charged lepton via a charged-current interaction, and the lepton produces a shower that can be detected above the ground. This channel has a better sensitivity than that of neutrinos interacting in the atmosphere. It can be used for tau neutrinos, thanks to the early decay of the tau in the atmosphere. The distance of interaction for a muon neutrino is 500 km, for a tau neutrino it is 50 km, and for an electron neutrino it is 10 km. These figures apply to 1 EeV primaries. If you are unfamiliar with these ultra-high energies, 1 EeV = 1000 PeV = 1,000,000 TeV: this is roughly equivalent to the energy drawn in a second by a handful of LEDs.
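In case the LED comparison sounds too cute to be true, it is easy to check (the per-LED power below is my own assumption):

```python
# 1 EeV in everyday units.
eV = 1.602e-19            # joules per electron-volt
E_joule = 1e18 * eV
print(E_joule)            # ~0.16 J

# A small indicator LED dissipates a few tens of milliwatts, so a
# handful of them draw ~0.1-0.2 W: about this much energy per second.
led_power_W = 0.04        # assumed per-LED power, ~40 mW
n_leds = 4
print(led_power_W * n_leds * 1.0)  # ~0.16 J in one second
```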

Showers induced by emerging tau leptons start close to the detector, and are very inclined. So one asks for an elongated footprint, and a shower moving at the speed of light using the available timing information. The background to such a signature is of the order of one event every 10 years. The most important drawback of Earth-skimming neutrinos is the large systematic uncertainty associated with the measurement.

Ideally, one would like to produce a neutrino spectrum, or an energy-dependent limit on the flux, but no energy reconstruction is available: the observed energy depends on the height at which the shower develops, and since this is not known for penetrating particles such as neutrinos, one can only quote an integrated flux limit. The limit lies in the range of energy where GZK neutrinos should peak, but its value is an order of magnitude above the expected flux of GZK neutrinos. A differential limit in energy would be much less constraining.

The figure below shows the result for the integrated flux of neutrinos obtained by the Pierre Auger Observatory in 2008 (red line), compared with other limits and with expectations for GZK neutrinos.

##
Neutrino telescopes 2009: Steve King, Neutrino Mass Models
*April 2, 2009*

*Posted by dorigo in news, physics, science.*

Tags: conferences, extra dimensions, neutrino, standard model

comments closed

*This post and the few that will follow are for experts only, and I apologize in advance to those of you who do not have a background in particle physics: I will resume more down-to-earth discussions of physics very soon. Below, a short writeup is offered of Steve King’s talk, which I listened to during day three of the “Neutrino Telescopes” conference in Venice, three weeks ago. Any mistake in these writeups is totally my own fault. The slides of all talks, including the one reported here, have been made available at the conference site.*

Most of the talk focused on a **decision tree** for neutrino mass models. This is some kind of flow diagram to decide -better, decode- the nature of neutrinos and their role in particle physics.

In the Standard Model there are no right-handed neutrinos, there are only Higgs doublets of the gauge group SU(2), and the theory contains only renormalizable terms. If the above hypotheses all apply, then neutrinos are massless, and the three lepton numbers are separately conserved. To generate neutrino masses, one must relax one of the three conditions.

The decision tree starts with the question: is the LSND result true or false? If it is true, then are there sterile neutrinos, or is CPT violated? Otherwise, if the LSND result is false, one must decide whether neutrinos are Dirac or Majorana particles. If they are Dirac particles, they point to extra dimensions, while if they are Majorana ones, this leads to several consequences, tri-bimaximal mixing among them.

So, to start from the beginning: is LSND true or false? MiniBooNE does not support the LSND result, but it does support three-neutrino mixing. LSND is assumed false in this talk. One then has to answer the question: are neutrinos Dirac or Majorana particles? Depending on the answer, you can write down masses of different kinds in the Lagrangian. Majorana mass terms violate the total lepton number as well as the three separate ones. Dirac masses couple left-handed neutrinos to right-handed neutrinos; in this case the neutrino is not equal to the antineutrino.

The first possibility is that neutrinos are Dirac particles. This raises interesting questions: they must have a very small Yukawa coupling. The Higgs vacuum expectation value is about 175 GeV, and the Yukawa coupling is 3E-6 for electrons: this is already quite small. If we do the same with neutrinos, the Yukawa coupling must be of the order of 10^-12 for an electron neutrino mass of 0.2 eV. This raises the question of why it is so small.
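The arithmetic behind those two numbers is a one-liner (y = m/v, with the 175 GeV vacuum expectation value quoted above):

```python
# Yukawa couplings y = m / v, using the VEV quoted in the text.
v_GeV = 175.0

y_electron = 0.511e-3 / v_GeV     # electron mass: 0.511 MeV
y_neutrino = 0.2e-9 / v_GeV       # a 0.2 eV Dirac neutrino mass

print(y_electron)  # ~3e-6, as quoted
print(y_neutrino)  # ~1e-12: the smallness the text asks about
```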

One possibility then is provided by theories with extra dimensions: first one may consider flat extra dimensions, with right-handed neutrinos in the bulk (see graph on the right). These particles live in the bulk, whereas we are trapped on a brane. When we write a Yukawa term for neutrinos we get a volume suppression, corresponding to the spread of the wavefunction outside of our world. It goes as one over the square root of the volume, so if the string scale is smaller than the Planck scale we get the right size.

The other sort of extra dimensions (see below) are the warped ones, with the Standard Model sitting in the bulk. The wavefunction of the Higgs overlaps with the fermions, and this gives exponentially suppressed Dirac masses, depending on the fermion profiles. Because electrons and muons peak near the Planck brane while we live on the TeV brane, where the top quark peaks, this provides a natural way to generate a hierarchy of particle masses.

Some of these models address the problem of dark energy in the Universe. Neutrino telescopes studying neutrinos from gamma-ray bursts may shed light on this issue, along with quantum gravity and the neutrino mass. The time delay relative to low-energy photons, as a function of redshift, can be studied against the energy of the neutrinos: the resulting curves differ, depending on the model of dark energy. The point is that by studying neutrinos from gamma-ray bursts, one has a handle to measure dark energy.

Now let us go back to the second possibility: namely, that neutrinos are Majorana particles. In this case you have two choices: a renormalizable operator with a Higgs triplet, or a non-renormalizable operator with a lepton-number-violating term. Because the latter operator is non-renormalizable, you get a mass suppression: a mass at the denominator, corresponding to some high energy scale. The way to implement this is to imagine that the mass scale is due to the exchange of a massive particle between Higgs and leptons, in the s-channel or in the t-channel.

We can concentrate on see-saw mechanisms in the rest of the talk. There are several types of such models: type I essentially exchanges a heavy right-handed neutrino in the s-channel with the Higgs; type II instead exchanges something in the t-channel, which could be a heavy Higgs triplet, and this also gives a suppressed mass.
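The type-I see-saw suppression can be made concrete in a few lines (a sketch with illustrative scales, not the talk's numbers): the light-neutrino mass matrix is $m_\nu \simeq -m_D M_R^{-1} m_D^T$, so a Dirac mass near the electroweak scale and a right-handed scale around $10^{14}$ GeV give sub-eV light neutrinos.

```python
import numpy as np

# One-generation type-I see-saw: m_nu ~ m_D^2 / M_R (illustrative numbers).
m_dirac_gev = 174.0   # Dirac mass at the electroweak scale
m_right_gev = 1.0e14  # heavy right-handed Majorana mass
m_light_ev = (m_dirac_gev**2 / m_right_gev) * 1e9
print(m_light_ev)  # ~0.3 eV

# Three-generation version: m_nu = -m_D M_R^{-1} m_D^T  (toy matrices, GeV).
m_D = np.diag([1.0, 10.0, 174.0])          # toy hierarchical Dirac matrix
M_R = np.diag([1e12, 1e13, 1e14])          # toy heavy Majorana matrix
m_nu = -m_D @ np.linalg.inv(M_R) @ m_D.T   # light-neutrino mass matrix
```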

The two types of see-saw can work together. One may think of a unit matrix coming from a type-II see-saw, with the mass splittings and mixings coming from the type-I contribution. In this case the type II would render the neutrinoless double beta decay observable.

Moving down the decision tree, we come to the question of whether we have precise tri-bimaximal mixing (TBM). The TBM matrix (see figure on the right) corresponds to angles of the standard parametrization $\theta_{12} = \sin^{-1}(1/\sqrt{3}) \simeq 35.3^\circ$, $\theta_{23} = 45^\circ$, $\theta_{13} = 0$. These values are consistent with observations so far.

Let us consider the form of the neutrino mass matrix assuming the correctness of the TBM matrix. We can derive the mass matrix by rotating the diagonal mass matrix with the mixing matrix. It is the sum of three terms, one proportional to $m_1$, one to $m_2$, and one to $m_3$, each multiplying the outer product of the corresponding column of the TBM matrix with itself. When you add the three matrices together, you get a symmetric total matrix whose entries satisfy some relations: $m_{e\mu} = m_{e\tau}$, $m_{\mu\mu} = m_{\tau\tau}$, and $m_{ee} + m_{e\mu} = m_{\mu\mu} + m_{\mu\tau}$.
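This decomposition is easy to check numerically (a sketch assuming the standard TBM matrix and arbitrary toy masses): summing the three outer products gives a matrix with mu-tau symmetric and "magic" entries, diagonalized by the TBM matrix for any mass values.

```python
import numpy as np

# Tri-bimaximal mixing matrix (standard form).
U = np.array([
    [ 2/np.sqrt(6), 1/np.sqrt(3),  0.0],
    [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
    [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)],
])

# Toy mass values: the structure below holds for ANY choice.
m1, m2, m3 = 0.01, 0.05, 0.3

# Mass matrix as a sum of outer products of the TBM columns.
m_nu = sum(m * np.outer(U[:, i], U[:, i]) for i, m in enumerate((m1, m2, m3)))

# Entry relations of the TBM mass matrix:
assert np.isclose(m_nu[0, 1], m_nu[0, 2])                            # mu-tau symmetry
assert np.isclose(m_nu[1, 1], m_nu[2, 2])                            # mu-tau symmetry
assert np.isclose(m_nu[0, 0] + m_nu[0, 1], m_nu[1, 1] + m_nu[1, 2])  # "magic" sum

# U diagonalizes it, independently of the mass values.
print(np.round(U.T @ m_nu @ U, 10))
```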

Such a mass matrix is called “form-diagonalizable”, since it is diagonalized by the TBM matrix for all values of the parameters $a$, $b$, $d$, which translate into the masses. There is no cancellation of parameters involved, and the whole thing is extremely elegant. This suggests something called “form dominance”, a mechanism to achieve a form-diagonalizable effective neutrino mass matrix from the type-I see-saw. Working in the basis where the right-handed Majorana matrix $M_{RR}$ is diagonal, if $m_D$ is the Dirac mass matrix, this can be written as three column vectors, and the effective light neutrino mass matrix is the sum of three terms. Form dominance is the assumption that the columns of the Dirac matrix are proportional to the columns of the TBM matrix (see slide 16 of the talk). Then one generates the TBM mass matrix, and the physical neutrino masses are given by combinations of the parameters. This constitutes a very nice way to get a form-diagonalizable mass matrix from the see-saw mechanism.
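Form dominance itself can be verified in a few lines (a sketch with assumed toy constants and heavy masses): take a Dirac matrix whose columns are the TBM columns scaled by free constants, a diagonal heavy Majorana matrix, and check that the see-saw combination comes out diagonalized by the TBM matrix, with light masses $c_i^2/M_i$.

```python
import numpy as np

# Tri-bimaximal mixing matrix (standard form).
U = np.array([
    [ 2/np.sqrt(6), 1/np.sqrt(3),  0.0],
    [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
    [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)],
])

# Form dominance: column i of m_D is proportional to TBM column i
# (toy proportionality constants and heavy masses, arbitrary units).
c = np.array([2.0, 7.0, 30.0])
m_D = U * c                      # scales column i of U by c[i]
M_R = np.diag([1e3, 1e4, 1e5])   # diagonal M_RR basis

m_nu = m_D @ np.linalg.inv(M_R) @ m_D.T   # see-saw (overall sign dropped)

# The result is diagonalized by the TBM matrix, with masses c_i^2 / M_i.
masses = np.diag(U.T @ m_nu @ U)
print(masses)
```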

Moving on to symmetries: clearly, the TBM matrix suggests some family symmetry. This symmetry is badly broken in the charged lepton sector: writing the Lagrangian explicitly, the neutrino Majorana matrix respects mu-tau interchange, whereas the charged-lepton matrix does not. So this is an example of a symmetry working in one sector but not in the other. To achieve different symmetries in the neutrino and charged-lepton sectors we need to align the Higgs fields which break the family symmetry (called flavons) along different symmetry-preserving directions (so-called vacuum alignment). We need a triplet of flavons which breaks the A4 symmetry.

A4 see-saw models satisfy form dominance. There are two such models, both with R=1. These models are economical, involving only two flavons; yet one must assume cancellations among the vacuum expectation values in order to achieve consistency with experimental measurements of atmospheric and solar mixing. This suggests “natural form dominance”, less economical but involving no cancellations, in which a different flavon is associated with each neutrino mass. An extension is “constrained sequential dominance”, a special case which supplies strongly hierarchical neutrino masses.

As far as family symmetry is concerned, the idea is that the neutrino and charged-lepton sectors see two different subgroups of a single family group. You get certain relations which are quite interesting: the CKM mixing is related to the Yukawa matrix, and one can make a connection between the down-quark Yukawa matrix and the electron Yukawa. This leads to mixing sum rules, because the PMNS matrix is the product of a Cabibbo-like matrix and a TBM matrix, so the measured mixing angles carry information on the corrections to TBM. One such sum rule relates the deviation of $\theta_{12}$ from its TBM value of about 35 degrees to a Cabibbo-like angle coming from the charged-lepton sector; putting the two together, one gets a physical relation among the mixing angles.
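The kind of mixing sum rule referred to here can be illustrated numerically. As a hedge: the exact form is model-dependent; one commonly quoted solar sum rule is $\theta_{12} \simeq \theta_{12}^{TBM} + \theta_{13}\cos\delta$, with $\theta_{12}^{TBM} = \sin^{-1}(1/\sqrt{3})$.

```python
import math

# Solar mixing sum rule (one commonly quoted form; model-dependent assumption):
# theta_12 ~ theta_12^TBM + theta_13 * cos(delta),
# with theta_12^TBM = arcsin(1/sqrt(3)) ~ 35.26 degrees.
THETA12_TBM_DEG = math.degrees(math.asin(1 / math.sqrt(3)))

def theta12_from_sum_rule(theta13_deg, delta_deg):
    """Predicted solar angle (degrees) for a given reactor angle and CP phase."""
    return THETA12_TBM_DEG + theta13_deg * math.cos(math.radians(delta_deg))

# Example: a reactor angle of 3 degrees with delta = 180 deg pulls theta_12 down.
print(round(theta12_from_sum_rule(3.0, 180.0), 2))
```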

The conclusions are that neutrino masses and mixing require new physics beyond the Standard Model. There are many roads for model building, but the answers to key experimental questions will provide the signposts. If TBM is accurately realized, this may imply a new symmetry of nature: a family symmetry, broken by flavons. The whole package is a very attractive scheme, and the sum rules underline the importance of showing that the deviations from TBM are non-zero. Neutrino telescopes may provide a window into neutrino mass, quantum gravity and dark energy.

*After the talk, there were a few questions from the audience.*

**Q:** Although it is true that MiniBooNE is not consistent with LSND in a simple two-neutrino mixing model, in more complex models the two experiments may be consistent. **King** agrees.

**Q:** The form dominance scenario in some sense would not apply to the quark sector. It seems it is independent of A4. **King’s answer:** form dominance is a general framework for achieving form-diagonalizable mass matrices starting from the see-saw mechanism. This includes the A4 model as an example, but is not restricted to it. There is a large class of models in this framework.

**Q:** So it is not specific enough to extend to the quark sector? **King:** form dominance is all about the see-saw mechanism.

**Q:** So, can we not extend this to symmetries like T′, which involve the quarks? **King:** the answer is yes. For lack of time this was only flashed in the talk; it would make a good talk by itself.

##
Variation found in a dimensionless constant!
*April 1, 2009*

*Posted by dorigo in cosmology, mathematics, news, physics, science.*

Tags: cosmology, mathematics, theoretical physics

comments closed

Tags: cosmology, mathematics, theoretical physics

comments closed

I urge you to read this preprint by R. Scherrer (from Vanderbilt University), which appeared yesterday on the arXiv. It is titled “**Time variation of a fundamental dimensionless constant**“, and I believe it might have profound implications for our understanding of cosmology, as well as theoretical physics. I quote the incipit of the paper below:

“Physicists have long speculated that the fundamental constants might not, in fact, be constant, but instead might vary with time. Dirac was the first to suggest this possibility [1], and time variation of the fundamental constants has been investigated numerous times since then. Among the various possibilities, the fine structure constant and the gravitational constant have received the greatest attention, but work has also been done, for example, on constants related to the weak and strong interactions, the electron-proton mass ratio, and several others.”

Many thanks to Massimo Pietroni for pointing out the paper to me this morning. I am now collecting information about the study, and will update this post shortly.