
Neutrino telescopes 2009: Steve King, Neutrino Mass Models April 2, 2009

Posted by dorigo in news, physics, science.

This post and the few ones that will follow are for experts only, and I apologize in advance to those of you who do not have a background in particle physics: I will resume more down-to-earth discussions of physics very soon. Below, a short writeup is offered of Steve King’s talk, which I listened to during day three of the “Neutrino Telescopes” conference in Venice, three weeks ago. Any mistake in these writeups is totally my own fault. The slides of all talks, including the one reported here, have been made available at the conference site.

Most of the talk focused on a decision tree for neutrino mass models: a kind of flow diagram to decide (better: decode) the nature of neutrinos and their role in particle physics.

In the Standard Model there are no right-handed neutrinos, only Higgs doublets of the symmetry group SU(2)_L, and the theory contains only renormalizable terms. If the above hypotheses all apply, then neutrinos are massless, and three separate lepton numbers are conserved. To generate neutrino masses, one must relax one of the three conditions.

The decision tree starts with the question: is the LSND result true or false? If it is true, then are neutrinos sterile or CPT-violating? If instead the LSND result is false, one must decide whether neutrinos are Dirac or Majorana particles. If they are Dirac particles, they point to extra dimensions; if they are Majorana particles, this leads to several consequences, tri-bimaximal mixing among them.

So, to start from the beginning: is LSND true or false? MiniBooNE does not support the LSND result, but it does support three-neutrino mixing. LSND is assumed false in this talk. One then has to answer the question: are neutrinos Dirac or Majorana particles? Depending on the answer, mass terms of different kinds can be written down in the Lagrangian. Majorana masses violate total lepton number as well as the three individual lepton numbers. Dirac masses couple left-handed neutrinos to right-handed neutrinos; in this case the neutrino is not equal to the antineutrino.

The first possibility is that neutrinos are Dirac particles. This raises an interesting puzzle: they must have a very small Yukawa coupling. The Higgs vacuum expectation value is about 175 GeV, and the Yukawa coupling is about 3 x 10^{-6} for the electron: this is already quite small. Doing the same for neutrinos, the Yukawa coupling must be of order 10^{-12} for an electron-neutrino mass of 0.2 eV. Why is it so small?
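These numbers are easy to check with a back-of-the-envelope estimate, m = y v, using the 175 GeV vacuum expectation value quoted above (a sketch, with rough input masses):

```python
# Back-of-the-envelope Yukawa couplings for Dirac masses, m = y * v,
# using the v ~ 175 GeV Higgs vacuum expectation value quoted in the talk.
v = 175e9   # Higgs VEV in eV

def yukawa(mass_ev):
    """Yukawa coupling needed to generate a given fermion mass."""
    return mass_ev / v

y_e  = yukawa(0.511e6)   # electron, m ~ 0.511 MeV -> y ~ 3e-6
y_nu = yukawa(0.2)       # electron neutrino, m ~ 0.2 eV -> y ~ 1e-12
print(f"{y_e:.1e} {y_nu:.1e}")
```

The six-orders-of-magnitude gap between the two couplings is the puzzle the extra-dimensional models below try to explain.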

One possibility then is provided by theories with extra dimensions: first one may consider flat extra dimensions, with right-handed neutrinos in the bulk (see graph on the right). These particles live in the bulk, whereas we are trapped on a brane. When we write a Yukawa term for neutrinos we get a volume suppression, corresponding to the spread of the wavefunction outside of our world. It goes as one over the square root of the volume, so if the string scale is smaller than the Planck scale (10^{7}/10^{19} = 10^{-12}) we get the right size of the coupling.

The other sort of extra dimensions (see below) are the warped ones, with the standard model sitting in the bulk. The wavefunction of the Higgs overlaps with fermions, and this gives exponentially suppressed Dirac masses, depending on the fermion profiles. Because electrons and muons peak in the Planck brane while we live in the TeV brane, where the top quark peaks, this provides a natural way of giving a hierarchy to particle masses.

Some of these models address the problem of dark energy in the Universe. Neutrino telescopes studying neutrinos from gamma-ray bursts may shed light on this issue, along with quantum gravity and neutrino mass. The time delay relative to low-energy photons, as a function of redshift, can be studied against the energy of the neutrinos; the resulting curves differ depending on the model of dark energy. The point is that by studying neutrinos from gamma-ray bursts, one has a handle to measure dark energy.

Now let us go back to the second possibility: namely, that neutrinos are Majorana particles. In this case there are two choices: a renormalizable operator with a Higgs triplet, or a non-renormalizable operator with a lepton-number-violating term, \Delta L = 2. Because the operator is non-renormalizable you get a mass suppression, a mass in the denominator, corresponding to some high energy scale. The way to implement this is to imagine that the mass scale is due to the exchange of a massive particle between the Higgs and the leptons, either in the s-channel or in the t-channel.

We can concentrate on see-saw mechanisms in the rest of the talk. There are several types of such models: type I essentially exchanges a heavy right-handed neutrino in the s-channel with the Higgs, while in type II you exchange something in the t-channel, which could be a heavy Higgs triplet; this too gives a suppressed mass.
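To see the suppression at work, here is a minimal numerical sketch of the type-I relation m_nu ~ m_D^2 / M_R, with an illustrative electroweak-scale Dirac mass (the numbers are my own choices, not from the talk):

```python
# Type-I see-saw estimate: exchanging a heavy right-handed neutrino of mass
# M_R suppresses the light neutrino mass as m_nu ~ m_D**2 / M_R.
# The input numbers are illustrative choices, not taken from the talk.
m_D  = 175.0       # electroweak-scale Dirac mass, GeV
m_nu = 0.05e-9     # target light-neutrino mass, GeV (0.05 eV)

M_R = m_D**2 / m_nu
print(f"M_R ~ {M_R:.1e} GeV")
```

With these inputs the heavy mass comes out around 6 x 10^{14} GeV, intriguingly close to the grand-unification scale.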

The two types of see-saw types can work together. One may think of a unit matrix coming from a type-II see-saw, with the mass splittings and mixings coming from the type-I contribution. In this case the type II would render the neutrinoless double beta decay observable.

Moving down the decision tree, we come to the question of whether we have precise tri-bimaximal mixing (TBM). The matrix (see figure on the right) corresponds to angles of the standard parametrization, \theta_{12}=35^\circ, \theta_{23}=45^\circ, \theta_{13}=0. These values are consistent with observations so far.
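For reference, the quoted angles can be extracted numerically from the TBM matrix; the sketch below assumes one common sign convention (the signs are convention-dependent, the angles are not):

```python
import numpy as np

# The tri-bimaximal mixing matrix, in one common sign convention.
U = np.array([[ np.sqrt(2/3), 1/np.sqrt(3),             0],
              [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)],
              [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)]])

# Angles of the standard parametrization, in degrees.
theta13 = np.degrees(np.arcsin(abs(U[0, 2])))            # exactly 0
theta12 = np.degrees(np.arctan(abs(U[0, 1] / U[0, 0])))  # arctan(1/sqrt(2)) ~ 35.26
theta23 = np.degrees(np.arctan(abs(U[1, 2] / U[2, 2])))  # exactly 45

print(theta12, theta23, theta13)
```

Note that the "35 degrees" in the text is really arctan(1/sqrt(2)) = 35.26 degrees, the value the sum rule below deviates from.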

Let us consider the form of the neutrino mass matrix assuming the correctness of the TBM matrix. We can derive the mass matrix by multiplying it by the mixing matrix. It has three terms, one proportional to the mass m_1, one to m_2, and one to m_3. These terms can be decomposed into outer products of column vectors: the columns of the TBM matrix. When you add the three together, you get the total matrix, which is symmetric, with the six terms populating the three rows (a b c, b d e, c e f) and satisfying the relations c=b, e=a+b-d, d=f.
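These relations can be verified directly by building the mass matrix from the TBM columns (illustrative masses, same sign convention as above):

```python
import numpy as np

# Columns of the TBM matrix, in one common sign convention.
U = np.array([[ np.sqrt(2/3), 1/np.sqrt(3),             0],
              [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)],
              [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)]])

m = [0.01, 0.05, 0.3]   # illustrative neutrino masses, arbitrary units

# Total mass matrix: sum of the three outer products, weighted by the masses.
M = sum(mi * np.outer(U[:, i], U[:, i]) for i, mi in enumerate(m))

a, b, c = M[0]
d, e, f = M[1, 1], M[1, 2], M[2, 2]
print(np.isclose(c, b), np.isclose(e, a + b - d), np.isclose(d, f))
```

All three relations hold for any choice of the masses, which is exactly the "form-diagonalizable" property discussed next.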

Such a mass matrix is called “form-diagonalizable” since it is diagonalized by the TBM matrix for all values of a, b, d, which translate into the masses. There is no cancelation of parameters involved, and the whole thing is extremely elegant. This suggests something called “form dominance”, a mechanism to achieve a form-diagonalizable effective neutrino mass matrix from the type-I see-saw. Working in the basis where M_{RR} is diagonal, the Dirac mass matrix M_D can be written as three column vectors, and the effective light neutrino mass matrix is the sum of three terms. Form dominance is the assumption that the columns of the Dirac matrix are proportional to the columns of the TBM matrix (see slide 16 of the talk). Then one generates the TBM mass matrix, and the physical neutrino masses are given by combinations of the parameters. This constitutes a very nice way to get a form-diagonalizable mass matrix from the see-saw mechanism.
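A minimal numerical sketch of form dominance (the proportionality constants and heavy masses below are arbitrary illustrations, not values from the talk):

```python
import numpy as np

# Columns of the TBM matrix, in one common sign convention.
Phi = np.array([[ np.sqrt(2/3), 1/np.sqrt(3),             0],
                [-np.sqrt(1/6), 1/np.sqrt(3),  1/np.sqrt(2)],
                [-np.sqrt(1/6), 1/np.sqrt(3), -1/np.sqrt(2)]])

# Form dominance: in the basis where M_RR = diag(M1, M2, M3), the columns
# of the Dirac mass matrix are taken proportional to the TBM columns.
k = np.array([1.0, 2.0, 0.5])     # illustrative proportionality constants
M_R = np.array([1e3, 4e3, 2e2])   # heavy Majorana masses, arbitrary units

M_D = Phi * k                            # scales column i by k[i]
m_eff = M_D @ np.diag(1 / M_R) @ M_D.T   # type-I see-saw (up to sign convention)

# m_eff is diagonalized by the TBM matrix, with masses m_i = k_i**2 / M_i:
masses = np.diag(Phi.T @ m_eff @ Phi)
print(np.allclose(masses, k**2 / M_R))
```

The effective matrix comes out TBM-diagonalized by construction, with each physical mass set by one column's parameters, independently of the others.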

Moving on to symmetries: clearly, the TBM matrix suggests some family symmetry. This symmetry is badly broken in the charged-lepton sector: writing the Lagrangian explicitly, the neutrino Majorana matrix respects the mu-tau interchange symmetry, whereas the charged-lepton matrix does not. So this is an example of a symmetry working in one sector but not in the other. To achieve different symmetries in the neutrino and charged-lepton sectors we need to align the Higgs fields which break the family symmetry (called flavons) along different symmetry-preserving directions (called vacuum alignment). We need a triplet of flavons which breaks the A4 symmetry.

A4 see-saw models satisfy form dominance. There are two such models, both with R=1. These models are economical, involving only two flavons; yet one must assume some cancelations among the vacuum expectation values in order to achieve consistency with the experimental measurements of atmospheric and solar mixing. This suggests “natural form dominance”, less economical but involving no cancelations, in which a different flavon is associated with each neutrino mass. A special case is the extension called “constrained sequential dominance”, which supplies strongly hierarchical neutrino masses.

As far as family symmetry is concerned, the idea is that there are two symmetries, two family groups from the group SU(3). One gets certain quite interesting relations: the CKM mixing is related to the Yukawa matrix, and a connection can be made between the down-quark Yukawa matrix and the electron Yukawa. This leads to mixing sum rules, because the PMNS matrix is the product of a Cabibbo-like matrix and a TBM matrix. The mixing angles thus carry information on the corrections to TBM: the deviation of \theta_{12} from 35 degrees is due to a Cabibbo-like angle coming from the charged-lepton sector. Putting the two together, one gets a physical relation between these angles, the mixing sum rule \theta_{12} = 35^\circ + \theta_{13} \cos \delta.
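The sum rule can be checked numerically. In the sketch below a real Cabibbo-like 1-2 rotation of the charged leptons (illustrative angle 0.1 rad, which corresponds to cos delta = -1 in this convention) multiplies the TBM matrix:

```python
import numpy as np

U_TBM = np.array([[ np.sqrt(2/3), 1/np.sqrt(3),             0],
                  [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)],
                  [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)]])

lam = 0.1   # illustrative Cabibbo-like 1-2 angle from the charged-lepton sector
U_e = np.array([[ np.cos(lam), np.sin(lam), 0],
                [-np.sin(lam), np.cos(lam), 0],
                [           0,           0, 1]])

U_PMNS = U_e.T @ U_TBM   # PMNS = (charged-lepton correction) x TBM

theta13 = np.degrees(np.arcsin(abs(U_PMNS[0, 2])))
theta12 = np.degrees(np.arctan(abs(U_PMNS[0, 1] / U_PMNS[0, 0])))

# For this real rotation cos(delta) = -1, so the sum rule predicts
# theta12 ~ 35.26 - theta13, up to O(theta13^2) corrections:
print(theta12, 35.264 - theta13)
```

The charged-lepton correction simultaneously generates a non-zero theta13 and shifts theta12 away from 35.26 degrees by exactly theta13 cos delta, to leading order.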

The conclusions are that neutrino masses and mixing require new physics beyond the Standard Model. There are many roads for model building, but the answers to key experimental questions will provide the signposts. If TBM is accurately realized, this may imply a new symmetry of nature: a family symmetry, broken by flavons. The whole package is a very attractive scheme, and the sum rules underline the importance of showing that the deviations from TBM are non-zero. Neutrino telescopes may provide a window into neutrino mass, quantum gravity, and dark energy.

After the talk, there were a few questions from the audience.

Q: Although it is true that MiniBooNE is not consistent with LSND in a simple two-neutrino mixing model, in more complex models the two experiments may be consistent. King agrees.

Q: The form dominance scenario in some sense would not apply to the quark sector. It seems to be independent of A4. King's answer: form dominance is a general framework for achieving form-diagonalizable matrices starting from the see-saw mechanism. It includes the A4 model as an example, but is not restricted to it: there is a large class of models in this framework.

Q: So it is not specific enough to extend to the quark sector ? King: form dominance is all about the see-saw mechanism.

Q: So, can we not extend this to symmetries like T', which involve the quarks? King: the answer is yes. Because of time constraints this was only flashed in the talk; it would make a very good talk by itself.

Hooman Davoudiasl: Extra dimensions and the LHC May 25, 2008

Posted by dorigo in physics, science.

Let me continue the long string of posts describing what I heard at the PPC 2008 conference last week with a report on a talk by Hooman Davoudiasl, who gave a very clear and entertaining overview of the issue of large extra dimensions, and their testability in the near future.

He started by saying that the topic he was given to report on is narrow, but hundreds of papers have been written on the matter of large extra dimensions (LED) theories. He thus had to pick a few things to discuss about this widely studied topic.

Extra dimensions are not a new idea: they date back to an attempt by G. Nordström in 1914, who tried to unify pre-general-relativity gravity and electromagnetism in a 5-dimensional world. This was followed by the work of Kaluza in 1921 and Klein in 1926. More recently, string theory has been recognized to require 10 or 11 dimensions, the extra ones compactified at a fundamental scale. The modern interest is motivated by the hierarchy problem, that is, the very small ratio between the electroweak scale and the Planck mass: M_W/M_P = 10^{-17}.

Arkani-Hamed, Dimopoulos, and Dvali in 1998 studied the case of N compact extra dimensions to stabilize the hierarchy: the fundamental scale is now of the order of a TeV, and the extra dimensions are large in units of the fundamental scale, with sizes ranging from a fermi to a millimeter. The Standard Model particles are localized on a “3-brane”, a four-dimensional sheet in the multi-dimensional space. Gravity propagates in all dimensions, and therefore gets diluted by the extra ones. Kaluza-Klein modes are quantized momenta in the extra dimensions. They correspond to our picture of particles in a box: if you took a course in quantum mechanics, you know that particles confined in a box get their energy levels quantized.
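The quoted sizes follow from the relation between the 4D Planck mass and the fundamental scale, M_Pl^2 ~ M_*^{n+2} R^n; a rough numerical sketch, dropping O(1) factors:

```python
# ADD-type relation between the 4D Planck mass and the fundamental scale
# M_*: M_Pl^2 ~ M_*^(n+2) R^n (O(1) factors dropped), so the common size
# of n extra dimensions is R ~ (M_Pl^2 / M_*^(n+2))^(1/n).
M_Pl = 1.22e19      # Planck mass, GeV
M_star = 1e3        # fundamental scale ~ 1 TeV, GeV
hbar_c = 1.973e-16  # GeV * m, converts 1/GeV to meters

radii = {}
for n in (2, 4, 6):
    radii[n] = (M_Pl**2 / M_star**(n + 2)) ** (1.0 / n) * hbar_c
    print(f"n = {n}: R ~ {radii[n]:.1e} m")
```

With a TeV fundamental scale, two extra dimensions come out at roughly a millimeter and six at well below a picometer, which is why sub-millimeter gravity tests already constrain the n = 2 case.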

Hooman said that the key signal for LED detection is, what do you know, missing energy! [Apparently, if the LHC does not find anything in its missing-energy spectrum it will send string theorists, SUSY phenomenologists, and LED aficionados packing all together: quite a democratic turn of events, if you ask me.] Kaluza-Klein (KK) gravitons escape into the “bulk” (the extra dimensions) and leave behind the energy bill to pay. KK gravitons could be produced by quark-antiquark annihilation. Also, spin-2 “towers” of KK gravitons can give rise to spin-2-mediated angular distributions of the final-state particles. A further possibility is black hole production: when you bring the Planck scale down, you can create black holes in reasonably sized particle accelerators such as the LHC (it does not fit in your living room, but it is smaller than the solar system after all).

The signature of black hole production would be potentially spectacular: energetic multi-jet and multi-lepton events. However, this picture is under debate: Meade and Randall argue that such a turn of events is difficult at the LHC.

For large extra dimensions, searches have been done at the Tevatron and at LEP. LEP has the better bound for a few extra dimensions (well above a TeV), while for many extra dimensions (four and above) the Tevatron wins, with limits just below one TeV. These are extracted from both the jet plus missing energy and the photon plus missing energy signatures.

A more generic framework is that of universal extra dimensions (UED). This scenario entails Lorentz violation along the extra dimensions, and the lightest Kaluza-Klein particle is stable. The resulting phenomenology has the potential of mimicking supersymmetry at the LHC. If you are a believer, you may expect a huge debate to break out between UED and SUSY aficionados as soon as ATLAS and CMS start observing missing-energy signatures.

Hooman pointed out that the latest UED limit was obtained at CDF using Run 1 data! The lower limit is at 280 GeV. I rushed to check and by jove, he is right: what a jolly gathering of lazy bums CDF is! No results from Run II have been produced [and may I say, I see none in preparation either… Maybe D0 does ?]

The Randall-Sundrum model with a 4-dimensional Standard Model (1999) has its pros: a natural Planck-weak hierarchy, and striking signals. However, the fundamental cut-off is of the order of a TeV, and this is dangerous. [There follows a passage I cannot make much sense of… Explanations by experts are appreciated here:] Standard Model flavor from a warped bulk: it was realized by placing the SM in the 5-dimensional bulk; the bulk has all the SM particles in it. One wants to keep the Higgs boson localized on the 4-D brane. Localizing the zero-modes of the fermions, which have a fundamental scale, gives a higher effective scale. This modifies the RS phenomenology quite a bit because couplings now get diluted. To put the collider reaches in perspective, assume bulk profiles for the fermions and realistic flavor: the KK gluon exchange contribution requires M_{KK} > 20 TeV, so 1-2 TeV is not favored by these bulk models. If you want to explain flavor and other things in these models, you are pushed to higher scales. [Ok, back to better understood sentences.]

In conclusion, Hooman explained that extra dimensions offer the possibility to solve the hierarchy problem, and shed light on the flavor sector of the SM. New phenomena can be discovered at TeV scale. He asked the audience to acknowledge that a discovery of extra dimensions would be a fundamental revolution in science, and as far as I could detect, nobody objected. He concluded by saying that various scenarios can be tested at the LHC.  The original Randall-Sundrum model had rosier signals, but once one introduces more sophistication, signals may become more elusive. This is true also for LED, where black hole signals could be less obvious or likely.