## Neutrino telescopes 2009: Steve King, Neutrino Mass Models *April 2, 2009*

*Posted by dorigo in news, physics, science.*

Tags: conferences, extra dimensions, neutrino, standard model


*This post and the few that will follow are for experts only, and I apologize in advance to those of you who do not have a background in particle physics: I will resume more down-to-earth discussions of physics very soon. Below, a short writeup is offered of Steve King’s talk, which I listened to during day three of the “Neutrino Telescopes” conference in Venice, three weeks ago. Any mistake in these writeups is totally my own fault. The slides of all talks, including the one reported here, have been made available at the conference site.*

Most of the talk focused on a **decision tree** for neutrino mass models. This is a sort of flow diagram to decide, or better, decode, the nature of neutrinos and their role in particle physics.

In the Standard Model there are no right-handed neutrinos, there are only Higgs doublets of SU(2)_L, and the theory contains only renormalizable terms. If these three hypotheses all apply, then neutrinos are massless, and the three lepton numbers are separately conserved. To generate neutrino masses, one must relax at least one of the three conditions.

The decision tree starts with the question: is the LSND result true or false? If it is true, then are neutrinos sterile or CPT-violating? Otherwise, if the LSND result is false, one must decide whether neutrinos are Dirac or Majorana particles. If they are Dirac particles, they point to extra dimensions; if they are Majorana particles, this leads to several consequences, tri-bimaximal mixing among them.

So, to start from the beginning: is LSND true or false? MiniBooNE does not support the LSND result, but it does support three-neutrino mixing. LSND is assumed false in this talk. One then has to answer the question: are neutrinos Dirac or Majorana? Depending on the answer, you can write down masses of different kinds in the Lagrangian. Majorana masses violate total lepton number as well as the three individual lepton numbers. Dirac masses couple left-handed neutrinos to right-handed neutrinos; in this case the neutrino is not equal to the antineutrino.

The first possibility is that neutrinos are Dirac particles. This raises an interesting question: they must have a very small Yukawa coupling. The Higgs vacuum expectation value is about 175 GeV, and the Yukawa coupling is about 3 x 10^-6 for the electron: this is already quite small. If we do the same for neutrinos, the Yukawa coupling must be of the order of 10^-12 for an electron neutrino mass of 0.2 eV. This raises the question of why it is so small.
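The arithmetic behind the quoted couplings is a one-liner; here is a back-of-the-envelope check, assuming the simple Dirac relation m = y·v with v ≈ 175 GeV (the 0.2 eV neutrino mass is the illustrative value used in the talk, not a measurement):

```python
# Estimate Yukawa couplings from m = y * v, with v the Higgs VEV.
v = 175e9              # Higgs vacuum expectation value, in eV

m_electron = 0.511e6   # electron mass, in eV
m_neutrino = 0.2       # assumed electron-neutrino mass, in eV

y_electron = m_electron / v
y_neutrino = m_neutrino / v

print(f"electron Yukawa ~ {y_electron:.1e}")   # ~ 3e-06
print(f"neutrino Yukawa ~ {y_neutrino:.1e}")   # ~ 1e-12
```

The six-orders-of-magnitude gap between the two couplings is what the extra-dimensional suppression mechanisms below try to explain.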

One possibility is then provided by theories with extra dimensions. First, one may consider flat extra dimensions, with right-handed neutrinos in the bulk (see graph on the right). These particles live in the bulk, whereas we are trapped on a brane. When we write a Yukawa term for neutrinos we get a volume suppression, corresponding to the spread of the wavefunction outside of our world. It goes as one over the square root of the volume, so if the string scale is smaller than the Planck scale we get the right mass scale.

The other sort of extra dimensions (see below) is the warped one, with the Standard Model sitting in the bulk. The wavefunction of the Higgs overlaps with the fermions, and this gives exponentially suppressed Dirac masses, depending on the fermion profiles. Because electrons and muons peak towards the Planck brane while we live on the TeV brane, where the top quark peaks, this provides a natural way of generating a hierarchy of particle masses.

Some of these models address the problem of dark energy in the Universe. Neutrino telescopes studying neutrinos from gamma-ray bursts may shed light on this issue, along with quantum gravity and neutrino mass. One can study the time delay relative to low-energy photons, as a function of redshift, against the energy of the neutrinos. The resulting curves differ, depending on the model of dark energy. The point is that by studying neutrinos from gamma-ray bursts, one has a handle to measure dark energy.

Now let us go back to the second possibility: namely, that neutrinos are Majorana particles. In this case there are two choices: a renormalizable operator with a Higgs triplet, or a non-renormalizable operator with a lepton-number-violating term, the dimension-five operator HHLL/M. Because the operator is non-renormalizable you get a mass suppression, a mass M in the denominator, which corresponds to some high energy scale. The way to implement this is to imagine that the mass scale is due to the exchange of a massive particle between the Higgs and lepton fields, either in the s-channel or in the t-channel.

We concentrate on see-saw mechanisms in the rest of the talk. There are several types of such models: type I essentially exchanges a heavy right-handed neutrino in the s-channel with the Higgs; type II instead exchanges something in the t-channel, which could be a heavy Higgs triplet, and this would also give a suppressed mass.
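The type-I suppression can be made quantitative with the standard estimate m_nu ≈ m_D²/M_R. Inverting it for M_R, with a top-quark-scale Dirac mass as an illustrative assumption, shows why the heavy neutrino naturally lands near the GUT scale:

```python
# Type-I see-saw scale estimate: m_nu ~ m_D^2 / M_R, solved for M_R.
m_D = 175e9       # assumed Dirac mass ~ Higgs VEV, in eV
m_nu = 0.05       # light neutrino mass scale (atmospheric), in eV

M_R = m_D**2 / m_nu
print(f"M_R ~ {M_R / 1e9:.1e} GeV")   # ~ 6e14 GeV
```

The fact that this comes out close to the grand-unification scale, with no tuning, is a large part of the see-saw's appeal.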

The two types of see-saw can work together. One may think of a unit matrix coming from a type-II see-saw, with the mass splittings and mixings coming from the type-I contribution. In this case the type-II part would render neutrinoless double beta decay observable.

Moving down the decision tree, we come to the question of whether we have precise tri-bimaximal mixing (TBM). The matrix (see figure on the right) corresponds to the standard-parametrization angles θ12 = 35.26° (i.e. sin²θ12 = 1/3), θ23 = 45°, θ13 = 0. These values are consistent with observations so far.
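The correspondence between the TBM matrix and those angles is easy to check numerically. A minimal sketch (the sign convention below is one common choice, not necessarily the one on King's slide):

```python
import math

# Tri-bimaximal mixing matrix in one common sign convention.
s = math.sqrt
U = [[ s(2/3),  1/s(3),      0 ],
     [-1/s(6),  1/s(3),  1/s(2)],
     [-1/s(6),  1/s(3), -1/s(2)]]

# Extract the standard-parametrization angles from the matrix elements.
theta13 = math.degrees(math.asin(abs(U[0][2])))            # from |U_e3|
theta12 = math.degrees(math.atan(abs(U[0][1] / U[0][0])))  # from |U_e2/U_e1|
theta23 = math.degrees(math.atan(abs(U[1][2] / U[2][2])))  # from |U_mu3/U_tau3|
print(theta12, theta23, theta13)   # 35.26..., 45.0, 0.0
```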

Let us consider the form of the neutrino mass matrix assuming the TBM matrix is exact. We can derive the mass matrix from the mixing matrix: it is the sum of three terms, one proportional to a parameter a, one to b, and one to d, each multiplying the outer product of the corresponding column of the TBM matrix. When you add the three matrices together, you get the total mass matrix, which is symmetric, with its elements satisfying the relations m12 = m13, m22 = m33, and m11 + m12 = m22 + m23.
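These element relations follow directly from the construction, and can be verified with a short sketch (the values of a, b, d are arbitrary illustrative numbers; the column signs follow one common TBM convention):

```python
import math

# Build m = a*v1 v1^T + b*v2 v2^T + d*v3 v3^T from the TBM columns and
# check the element relations quoted in the text.
s = math.sqrt
v1 = [2/s(6), -1/s(6), -1/s(6)]   # first TBM column
v2 = [1/s(3),  1/s(3),  1/s(3)]   # second TBM column
v3 = [0,       1/s(2), -1/s(2)]   # third TBM column
a, b, d = 0.005, 0.010, 0.050     # illustrative parameters (eV)

m = [[a*v1[i]*v1[j] + b*v2[i]*v2[j] + d*v3[i]*v3[j]
      for j in range(3)] for i in range(3)]

assert math.isclose(m[0][1], m[0][2])                      # m12 = m13
assert math.isclose(m[1][1], m[2][2])                      # m22 = m33
assert math.isclose(m[0][0] + m[0][1], m[1][1] + m[1][2])  # m11+m12 = m22+m23
```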

Such a mass matrix is called “form-diagonalizable”, since it is diagonalized by the TBM matrix for all values of a, b, d, which translate into the masses. There is no cancelation of parameters involved, and the whole thing is extremely elegant. This suggests something called “form dominance”, a mechanism to achieve a form-diagonalizable effective neutrino mass matrix from the type-I see-saw. Working in the basis where M_RR is diagonal, the Dirac mass matrix can be written as three column vectors, and the effective light neutrino mass matrix is the sum of three terms. Form dominance is the assumption that the columns of the Dirac matrix are proportional to the columns of the TBM matrix (see slide 16 of the talk). One can then generate the TBM mass matrix, with the physical neutrino masses given by combinations of the parameters. This constitutes a very nice way to get a form-diagonalizable mass matrix from the see-saw mechanism.
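Here is a numerical sketch of the form-dominance mechanism as just described: the Dirac columns are taken proportional to TBM columns (the normalizations A, B, C and heavy masses M1..M3 are arbitrary illustrative values), the type-I see-saw is applied in the diagonal M_RR basis, and the result is checked to be diagonalized by the TBM matrix:

```python
import math

s = math.sqrt
cols = [[2/s(6), -1/s(6), -1/s(6)],
        [1/s(3),  1/s(3),  1/s(3)],
        [0,       1/s(2), -1/s(2)]]   # TBM columns v1, v2, v3
A, B, C = 1.0, 2.0, 3.0               # Dirac-column normalizations
M = [1e3, 2e3, 4e3]                   # heavy Majorana masses (diagonal basis)

# See-saw sum: m_nu = sum_k (const_k^2 / M_k) * v_k v_k^T
consts = [A, B, C]
m = [[sum(consts[k]**2 / M[k] * cols[k][i] * cols[k][j] for k in range(3))
      for j in range(3)] for i in range(3)]

# U^T m U should come out diagonal, with entries A^2/M1, B^2/M2, C^2/M3.
U = [[cols[k][i] for k in range(3)] for i in range(3)]  # columns -> matrix
D = [[sum(U[p][i] * m[p][q] * U[q][j] for p in range(3) for q in range(3))
      for j in range(3)] for i in range(3)]
for i in range(3):
    for j in range(3):
        if i != j:
            assert abs(D[i][j]) < 1e-12   # off-diagonals vanish
print([D[i][i] for i in range(3)])        # [A^2/M1, B^2/M2, C^2/M3]
```

The diagonalization works for any A, B, C, M: this is the "no cancelation of parameters" property mentioned above.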

Moving on to symmetries: clearly, the TBM matrix suggests some family symmetry. This is badly broken in the charged lepton sector; one can write explicitly what the Lagrangian is, and the neutrino Majorana matrix respects the mu-tau interchange symmetry, whereas the charged-lepton matrix does not. So this is an example of a symmetry working in one sector but not in the other. To achieve different symmetries in the neutrino and charged lepton sectors we need to align the Higgs fields which break the family symmetry (called flavons) along different symmetry-preserving directions (called vacuum alignment). We need a triplet of flavons which breaks the A4 symmetry.

A4 see-saw models satisfy form dominance. There are two such models, both with R=1. These models are economical, involving only two flavons; yet one must assume some cancelations among the vacuum expectation values in order to achieve consistency with experimental measurements of atmospheric and solar mixing. This suggests “natural form dominance”, less economical but involving no cancelations, in which a different flavon is associated with each neutrino mass. An extension is “constrained sequential dominance”, a special case which yields strongly hierarchical neutrino masses.

As far as family symmetry is concerned, the idea is that there are two symmetries at play, a grand unified group and a family group. You get certain relations which are quite interesting: the CKM mixing is related to the Yukawa matrix, and you can make a connection between the down-quark Yukawa matrix and the electron Yukawa. This leads to mixing sum rule relations, because the PMNS matrix is the product of a Cabibbo-like matrix and a TBM matrix. The mixing angles carry information on the corrections to TBM. The sum rule one gets expresses the deviation of θ12 from 35.26° in terms of a Cabibbo-like angle coming from the charged-lepton sector. Putting the two things together, one gets a physical relation between these angles: a mixing sum rule, θ12 ≈ 35.26° + θ13 cos δ.
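A quick numerical illustration of that solar sum rule, θ12 ≈ 35.26° + θ13 cos δ (the θ13 and δ inputs below are illustrative assumptions, not values quoted in the talk):

```python
import math

# TBM solar angle: atan(1/sqrt(2)) = 35.26 degrees.
tbm = math.degrees(math.atan(1 / math.sqrt(2)))

# Predicted theta12 for a few illustrative (theta13, delta) choices.
for theta13, delta in [(8.0, 180.0), (8.0, 90.0)]:
    theta12 = tbm + theta13 * math.cos(math.radians(delta))
    print(f"theta13={theta13} deg, delta={delta} deg -> theta12 ~ {theta12:.2f} deg")
```

The point of the sum rule is visible here: a measured deviation of θ12 from 35.26°, combined with θ13, constrains the CP phase δ.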

The conclusions are that neutrino masses and mixing require new physics beyond the Standard Model. There are many roads for model building, but the answers to key experimental questions will provide the signposts. If TBM is accurately realized, this may imply a new symmetry of nature: a family symmetry, broken by flavons. The whole package is a very attractive scheme, and the sum rules underline the importance of showing that the deviations from TBM are non-zero. Neutrino telescopes may provide a window into neutrino mass, quantum gravity and dark energy.

*After the talk, there were a few questions from the audience.*

**Q:** Although it is true that MiniBooNE is not consistent with LSND in a simple two-neutrino mixing model, in more complex models the two experiments may be consistent. **King** agrees.

**Q:** The form dominance scenario in some sense would not apply to the quark sector; it seems it is independent of A4. **King’s answer:** form dominance is a general framework for achieving form-diagonalizable matrices starting from the see-saw mechanism. This includes the A4 model as an example, but is not restricted to it: there is a large class of models in this framework.

**Q:** So it is not specific enough to extend to the quark sector? **King:** form dominance is all about the see-saw mechanism.

**Q:** So, can one not extend this to symmetries like T′, which involve the quarks? **King:** the answer is yes. For lack of time this was only flashed in the talk; it would make a very good talk by itself.

## Comments


Nice matrices!! A shame though that people focus so much on symmetry groups and Lagrangians when the beyond SM physics involved probably won’t be framed in that language. And a pity he didn’t mention the quantum Fourier transform.

Form diagonalizability is interesting. Since TBM = (F3)(F2) it makes sense to think of it as a diagonalisation operator involving both mass and spin, in the same way that the Fourier transform diagonalises circulants – ie. the Koide mass matrices.

Although “neutrino telescope” detectors have been used in looking for proton decay, I did not see any talks about proton decay. Did anyone at the meeting discuss proton decay, or is it something that is not fashionable now?

Personally, I would like to see significant effort toward observing a proton decay lifetime with well-understood background.

Tony Smith

PS – For perspective, in a talk last year – see http://www.e15.physik.tu-muenchen.de/fileadmin/downloads/LENA/Talks/Marrodan_NNN08_Paris.pdf

Teresa Marrodan Undagoitia said:

“… Super-Kamiokande best limits:

tau( p to e+ pi0 ) …[is at least]… 5.4 x 10^33 y (90% C.L.)

tau( p to K+ nubar ) …[is at least]… 2.3 x 10^33 y (90% C.L.)

…

Potential of LENA (10 y measuring time)

For Superkamiokande current limit: tau = 2.3 x 10^33 y

About 40 events in LENA and …[no more than]… 1 background

Limit at 90% (C.L.) for no signal in LENA:

tau greater than 4.1 x 10^34 y with …[epsilon]… = 65% …”.

Dorigo: Great post! Thank you.

Regarding the questions about incorporating the flavor scenario into the quark sector, it is in fact an interesting issue. It is not at all trivial, and as King mentioned, T′ can be used in this way. I have worked on this problem and found out that it is doable. The challenge is to incorporate the basic (minimal) model in a complete grand unified scenario. You can take a glance at this from http://arxiv.org/abs/0707.3661.

T’ is also interesting because there is the possibility of using it as a local discrete symmetry (although concrete realizations need an extra Z3 symmetry). Cheers.

Kea, of course new attempts and formalisms can only become meaningful when the old ones fail. So I do not see it as a bad thing that the old formalisms try to encode and metabolize results that tend to go outside the box, as some neutrino results appear to be starting to do.

Tony, proton decay was mentioned in a few talks, but only as a by-product of detectors designed with neutrino physics as their main goal. This was not the case for Super-Kamiokande, for which proton decay was indeed a primary goal, nor for a few others; however, the failure of the SU(5) predictions seems to have dampened the enthusiasm for these searches.

Fefino, thank you for your note. I will have a look!

Cheers,

T.

I respect the flow chart based on the SM, but I would like to try going off the grid for a second (please bear with me):

Using a formula that seems (hypothetically) to relate rest mass (m) to force range (x):

mc^2 = (1/2)kx^2

where m = mass of particle and k = 7.18 x10^17 (you will see why in a second), some interesting relations are found for the boundary conditions using k and the formula above:

x = est. radius of universe = 1.9×10^26 meters, m = est. mass of universe = 1.44×10^53 Kg.

x = Planck Length, m = 10^-69 Kg ??,

x = strong force range = 10^-15 meters, m = mass of pi-meson 139.6 MeV

x = weak force range = 10^-18 meters , m = predicted mass of sum(neutrinos) = 2.24 eV/c2 .

The 2.24 eV/c^2 is about where the Mainz upper limit for the neutrino mass is estimated. Any thoughts?

Thanks,

Mike

Hi Mike,

there is nothing wrong in your exercise, but we know that not everything in the world works like an elastic potential. If you get numbers that can be associated, with some imagination, to “physical” masses by taking physical length scales, it is through a combination of chance, real facts (the Yukawa coincidence is a well-established one), and speculation.

Until one attaches some underlying theoretical meaning to this framework, and becomes capable of some genuine predictions, this remains just what it is – an exercise.

Cheers,

T.