## New bounds for the Higgs: 115-135 GeV! *August 1, 2008*

*Posted by dorigo in news, physics, science.*

Tags: electroweak fits, higgs, ICHEP 08


From yesterday’s talk by Johannes Haller at ICHEP 2008, I post here today two plots showing the latest results of global fits to standard model observables, highlighting the Higgs mass constraints. The first includes only indirect information; the second also includes direct search results.

The above plot is tidy, yet the amount of information that Gfitter digested to produce it is gigantic: decades of studies at electron-positron colliders, precision electroweak measurements, W and top mass determinations. Probably of the order of fifty thousand man-years of work, distilled and summarized in a single, useless graph.

Jokes aside, the plot does tell us a lot. Let me try to discuss it. The graph shows the variation from its minimum value of the fit chi-squared (the standard quantity describing how well the data agree with the model) as a function of the Higgs boson mass, treated as a free parameter. **The fit prefers an 80 GeV mass for the Higgs boson**, but the range of allowed values is still broad: at 1-sigma, the preferred range is 57-110 GeV. At 2-sigma, the range is of course even wider, from 39 to 156 GeV. If we keep the two-sigma variation as a reference, we note that the decay is not likely to be the way by which the Higgs will be discovered.
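For readers who want to play with the numbers: here is a minimal Python sketch of how those ranges follow from a Δχ² curve. I use a toy asymmetric parabola in place of the real Gfitter output, with widths chosen by hand to reproduce the quoted 1-sigma range, so the 2-sigma numbers come out only roughly right (the real curve is flatter at high mass).

```python
# Toy asymmetric parabola standing in for the Gfitter Delta-chi^2 curve:
# minimum at 80 GeV, with different widths below and above the minimum.
# The widths (23 and 30 GeV) are picked to reproduce the quoted
# 1-sigma range 57-110 GeV; the real curve is not exactly parabolic.
M_MIN, SIG_DOWN, SIG_UP = 80.0, 23.0, 30.0

def delta_chi2(m):
    sig = SIG_DOWN if m < M_MIN else SIG_UP
    return ((m - M_MIN) / sig) ** 2

def interval(threshold, lo=0.0, hi=400.0, step=0.01):
    """Mass range where Delta-chi^2 stays below `threshold`."""
    masses = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    inside = [m for m in masses if delta_chi2(m) <= threshold]
    return min(inside), max(inside)

print("1-sigma (Delta-chi2 <= 1):", interval(1.0))  # roughly (57, 110)
print("2-sigma (Delta-chi2 <= 4):", interval(4.0))  # roughly (34, 140)
```

The toy 2-sigma upper edge (140 GeV) sits below the true 156 GeV precisely because the real curve is not parabolic above the minimum.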

Also note that the LEP II experimental limits have not been inserted in the fit: the 114 GeV lower limit is hatched but has no impact on the curve, which is smooth because it is unaffected by direct Higgs searches.

Take a look instead at the plot below, which attempts to summarize the whole picture by including in the fit the direct search results from LEP II and the Tevatron (without the latest results, however).

**This is striking new information!** I will only comment on the yellow band, which, like the one in the former plot, describes the deviation of the log-likelihood ratio between the data and the signal-plus-background hypothesis. If you do not know what that means, fear not. Let’s disregard how the band was obtained and concentrate instead on what it means. It is just a measure of how likely it is that the Higgs mass sits at a particular value, given all the information from electroweak fits AND the direct search results, which have to various degrees “excluded” (at 95% confidence level) or made less probable (at 80%, 90% CL or below) specific Higgs mass values.

In the plot you can read off that the preferred range of Higgs masses is now quite narrow! That is correct: the lower 1-sigma error is very small because the LEP II limit is very strong, while the upper limit is much less constrained. Still, **the above 1-sigma band is very bad news for the LHC**: it implies that the Higgs is very unlikely to be discovered soon. That is because at low invariant mass ATLAS and CMS need to rely on very tough discovery channels: the very rare decay (one in a thousand Higgses decays that way) or the even more problematic decay. Not to mention the final state, which can be extracted only when the Higgs is produced in association with other bodies, and even then with huge difficulty, given the prohibitive amount of background from QCD processes mimicking the same signature.

The 2-sigma range is wider but not much prettier: 114.4 – 144 GeV. I think this really starts to be a strong indication after all: the Higgs boson, if it exists, is light! And probably close to the reach of LEP II. Too bad for the LEP II experiments, which I dubbed “a fiasco” in a former post to inflame my readers for once. In truth, the LEP II experiments appear likely to turn out, one day, to have been very, very unlucky!

## Comments



On the other hand, suppose LEP II had found the Higgs. Politically, would that have made it easier or harder to get the go-ahead for the LHC?

“So guys, you’re telling me that you have found all the pieces of this ‘Standard Model’ of yours, and they fit all experiments ever performed – and now you want 10 billion to run one which you hope will be different, somehow?”

Conspiracy theorists, take notice…😉

How can the 115-135 GeV Higgs range described in this post

be reconciled with

the Tevatron ellipse showing an upper bound of 114 GeV as described in a post of the day before?

Tony Smith

Be careful Tony, the bound is a lower one. The blue ellipse in the other post is a direct measurement of the W and top masses, which does enter the fit shown in the two plots of today’s post. In fact, it pushes the minimum of the chi-squared down to 80 GeV. However, once one accounts for the 114.4 GeV minimum value obtained from the direct LEP II limit, what remains is the small region discussed in the post.

Cheers,

T.

GW, so you claim LEP II did find the Higgs and then… Hmmm. Naah.

Tommaso, I see that today’s bound of 115 is a lower bound,

but

I also see that yesterday’s blue Tevatron ellipse lies entirely above the line labelled M_H = 114 GeV,

and that the area above that line corresponds only to lighter Higgs

and that the area below that line corresponds only to heavier Higgs

so

it seems to me that yesterday’s plot shows clearly a 114 GeV upper bound

and that on its face the two bounds are inconsistent.

I know that there might be some sort of statistical relationships that might sort of try to reconcile the two, such as incorporation of yesterday’s data as part of the stuff of the global fits of today’s plot,

but

to my simple mind it looks like the apparent inconsistency may be a red flag pointing to some possibly interesting physics.

Tony Smith

PS – Also, with respect to today’s plot, I note that yellow delta chi squared range seems to have three local minima:

1 – centered around 120 GeV, from 114 up to 144 GeV or so

2 – around 180 GeV

3 – around 200 GeV

and

that seems to me to be interesting with respect to my (unpopular to say the least) 3-state model with 3 Higgs masses:

1 – around 143 GeV

2 – around 180 GeV

3 – around 239 GeV

although I suspect that there may also be conventional reasons for the two higher-energy local minima, such as the onset of ZZ phenomena around 180 GeV etc.

Heh heh heh!!! Cool!

Tommaso, what is the signature of a Higgs? When can we say: “we found it!”?

That’s OK if the Higgs needs years. Who cares about the Higgs who is somewhere over there anyway. The same numbers and graphs make it likelier for SUSY to be found who is, frankly speaking, much prettier and younger than Mr Higgs.😉

That’s because you misread yesterday’s graph.

The reddish SM band was spanned by Higgs masses between 114 (the upper edge) and 400 GeV (the lower edge).

The green (MSSM) band has a completely different parametrization.

In fact, this raises a question about the plots in today’s post. I assume that “Theory” in both cases includes just the SM radiative corrections to the ρ-parameter.

As we saw in yesterday’s graph, the data are mildly incompatible (at the 1 σ level) with the pure SM. It would be interesting to see what today’s plots looked like with (say) MSSM radiative corrections folded in.

So 135 GeV is 1 sigma and 144 GeV is 2 sigma. This means that 95% CL exclusion is at what, 150 GeV?

Hi all, I am happy to see these plots do generate some interest. Indeed, I did find them informative.

Daniel, finding a new particle entails demonstrating a few things. For the Higgs in particular I’d say:

1) show that there is an excess of events over backgrounds, and that this excess is significant;

2) show that the excess has the right characteristics, i.e. that it is compatible with the hypothesis of being due to Higgs decay

3) show that their rate is in agreement with expectations

4) measure a mass.

There are many different final states for the combination of the processes of Higgs production and decay. Their applicability depends on the mass of the boson. In a nutshell, at low mass the Higgs preferentially decays to b-quark pairs, at high mass to W boson pairs. At low mass one needs to search for the H associated with some other boson because the H->bb decay by itself is utterly invisible (backgrounds are a million times larger). At high mass it’s ok to search for the WW signature, but then the mass becomes hard to measure.
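If it helps, the channel logic above can be caricatured in a few lines of Python. The 135 GeV crossover between bb-dominated and WW-dominated searches is an illustrative round number of mine, not a precise boundary:

```python
# Rough sketch of the search-channel logic described above. The 135 GeV
# crossover between the bb-dominated and WW-dominated regimes is an
# illustrative round number, not a measured boundary.
def search_strategy(m_higgs_gev):
    if m_higgs_gev < 135:
        # H -> bb alone is swamped by QCD backgrounds, so one searches
        # for the Higgs produced in association with a W or Z boson.
        return "associated production, H -> bb"
    else:
        # H -> WW is a viable signature at high mass, but with neutrinos
        # in the final state the mass becomes hard to reconstruct.
        return "H -> WW"

print(search_strategy(120))  # associated production, H -> bb
print(search_strategy(170))  # H -> WW
```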

Cheers,

T.

Lubos, don’t be misled by the fact that Sven’s ellipse falls in the green band. The likelihood of SUSY is not enhanced. Indeed, both SM and MSSM received a considerable shrinking in their allowed parameter space with the LEP II limit. But SUSY suffered more: I would not take the fact that global fits now prefer the region of masses just above the LEPII limit as an indication of anything.

Hi Jacques,

maybe Sven Heinemeyer will produce an update of his analysis using the latest data. I remember a careful analysis of the allowed range of SUSY in a couple of his recent papers.

PS: how are my 750$ doing?😉

Thomas, 2-sigma and 95% are more or less the same, due to the steepness of the yellow curve in that region.
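For the record, the near-coincidence of 2-sigma and 95% is a property of the Gaussian itself, easy to check with a couple of lines of Python:

```python
import math

# Two-sided coverage of a +-n*sigma interval for a Gaussian variable:
# P(|x - mu| < n*sigma) = erf(n / sqrt(2))
def coverage(n_sigma):
    return math.erf(n_sigma / math.sqrt(2.0))

print(f"1 sigma: {coverage(1):.4f}")  # 0.6827
print(f"2 sigma: {coverage(2):.4f}")  # 0.9545
```

So a symmetric 2-sigma interval actually covers 95.45%, a hair more than the conventional 95% CL.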

Cheers all,

T.

Jacques Distler said that I “… misread yesterday’s graph.

The reddish SM band was spanned by Higgs masses between 114 (the upper edge) and 400 GeV (the lower edge).

The green (MSSM) band has a completely different parametrization. …”.

Although my comment was incomplete in that I should have explicitly said that it was with respect to the SM without supersymmetry,

I did not misread yesterday’s graph,

because it is true (with respect to the SM) that, as I said

“… yesterday’s blue Tevatron ellipse lies entirely above the line labelled M_H = 114 GeV,

and that the area above that line corresponds only to lighter Higgs

and that the area below that line corresponds only to heavier Higgs …”.

To make my point more explicitly, I then should have said:

“… so, with respect to the Standard Model,

it seems to me that yesterday’s plot shows clearly a 114 GeV upper bound

and that on its face the two bounds are inconsistent. …”.

Since supersymmetry (MSSM) might resolve the inconsistency, as Lubos Motl pointed out,

it seems to me that the inconsistency might be resolved in at least two ways:

1 – by supersymmetry;

2 – by “some possibly interesting physics” based on the SM, as I suggested.

If Tommaso is correct that: “… the likelihood of SUSY is not enhanced. Indeed, both SM and MSSM received a considerable shrinking in their allowed parameter space with the LEP II limit. But SUSY suffered more …”,

thus rejecting the supersymmetry possibility,

then

I stand by my suggestion that the two results taken together look “… like the apparent inconsistency may be a red flag pointing to some possibly interesting physics. …”.

Tony Smith

nevertheless somebody recently put on the web really nice pics of LHC

http://www.boston.com/bigpicture/2008/08/the_large_hadron_collider.html

Cheers, Alex


“PS: how are my 750$ doing?” At this rate you’ll owe him money by the time that you win the bet!

Come on, Tommaso. I am surely not claiming that SUSY has been proven but your claim “[the very light Higgs] doesn’t increase the probability of SUSY” is absurd, from a phenomenology viewpoint.

Let me give you a link that explains why I consider your comment ill-informed, Tommaso:

http://arxiv.org/abs/hep-ph/0009355

“What if the Higgs boson weighs 115 GeV?” The potential in pure SM would develop another, new minimum separated from the correct one – it would become unstable – before 10^6 GeV. New physics has to kick in and SUSY is the most natural answer. But even if you had several proposals that can tame the new Higgs instability, it’s still true that a very light Higgs makes SUSY more *likely*, by any sensible kind of inference, than a heavy Higgs, simply by showing that new physics is needed.
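The instability argument can be caricatured numerically. The sketch below (my own toy, not the calculation of Ellis et al.) runs the Higgs quartic coupling upward in energy with a truncated one-loop beta function, freezing the top Yukawa and dropping the gauge terms; with those crude approximations the coupling turns negative around 10^4-10^5 GeV, below the ~10^6 GeV scale of the full calculation, but the qualitative point survives: for a 115 GeV Higgs the potential goes bad far below the Planck scale.

```python
import math

# Toy one-loop running of the Higgs quartic coupling for mH = 115 GeV.
# Only the lambda and top-Yukawa terms of the beta function are kept,
# and y_t and the gauge couplings are frozen; real analyses run the
# full system of couplings at two loops.
V = 246.0    # Higgs vev in GeV
MH = 115.0   # benchmark light Higgs mass
YT = 0.94    # rough top Yukawa at the weak scale (assumed constant)

lam = MH**2 / (2 * V**2)   # tree-level quartic coupling, ~0.11

mu, dt = 200.0, 0.01       # start near the top mass; dt is a step in ln(mu)
while lam > 0 and mu < 1e19:
    beta = (24 * lam**2 + 12 * lam * YT**2 - 6 * YT**4) / (16 * math.pi**2)
    lam += beta * dt
    mu *= math.exp(dt)

print(f"lambda turns negative near mu ~ {mu:.1e} GeV")
```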

The paper above discussed the MSSM features for these light masses, hinted by the LEP. The paper was written because there used to be a near-signal at 115 GeV at LEP near the collider’s death, and with the new data you told us about, the signal may be becoming somewhat real and it may become very real in a year or so. The paper by Ellis et al. is not the newest one but it is a poster boy of the link between light Higgs and SUSY.

Incidentally, can’t you show a graph where the data that led to the 115 GeV signal are incorporated?

One more obvious comment. Some people tend to imagine “something like compositeness, preons, technicolor” when we say “new physics”. In my opinion, such a class of ideas has always been heavily overrated. But in this particular case, there’s much more than aesthetics that helps me to show that these ideas are no good.

Composite models, including technicolor, generically predict a heavy Higgs boson and they are close to being falsified by the data you provide us with. Look into your phenomenology literature what are the alternatives to SUSY that stabilize the Higgs potential for a very light Higgs. I don’t really know one. I believe that even if there were one, it would be “damn like SUSY” in certain key aspects, so why not SUSY itself?

Hi Lubos,

I stand by my point: the LEP II limit, among the various inputs of the second graph by Gfitter, is WAY the strongest one. To give you some perspective, the limit at 95%CL is 114.4 GeV, which of course means that there is a 5% chance to find it below that value… But if you were to ask what is the likelihood to find the Higgs below 112 GeV, that would be one in a million!

So, the LEP II limit is a brick wall. By contrast, the direct Tevatron determinations of the W and top masses are only a soft preference in one direction, which by themselves would leave the full table of possibilities open.

It is because of those facts that I say: sure, you can look at the Tevatron ellipse and come away saying that it favors SUSY, but that is a myopic assessment, because it means cherry-picking from a large sample of results. The important point is that, of a universe of possible values, those which survive today are between 114.4 and 150 GeV or so (take the 95%CL limits of the full Gfitter result as a reference), with a preference for the 115-135 GeV region. Now, is that a preference for SUSY with respect to the universe of possibilities that existed before LEP II produced its stringent bound? I say it is not.
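To see how a brick wall plus a soft preference yields the narrow window, here is a toy chi-squared combination in Python. The widths are illustrative guesses of mine, not the actual Gfitter inputs, but the output lands close to the range discussed in the post:

```python
# Toy combination of the two constraints: a shallow indirect-fit
# parabola preferring ~80 GeV, plus a very steep penalty below the
# LEP II limit at 114.4 GeV (the "brick wall"). Widths are
# illustrative numbers, not the actual Gfitter inputs.
LEP_LIMIT, FIT_MIN, FIT_SIGMA = 114.4, 80.0, 35.0

def chi2(m):
    c = ((m - FIT_MIN) / FIT_SIGMA) ** 2   # soft indirect preference
    if m < LEP_LIMIT:
        c += (LEP_LIMIT - m) ** 2          # steep direct-search wall
    return c

masses = [100 + 0.1 * i for i in range(1001)]   # scan 100-200 GeV
best = min(chi2(m) for m in masses)
allowed = [m for m in masses if chi2(m) - best <= 1.0]
print(f"toy 1-sigma range: {min(allowed):.1f} - {max(allowed):.1f} GeV")
```

The lower edge hugs the LEP II limit while the upper edge is set by the soft parabola, just as in the real plot.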

The paper you quote was written, as you correctly mention, after the LEP II “hint”, and not before. Using the argument that the SM has the hierarchy problem is like pulling yourself up by your own bootstraps: it was used to construct SUSY, and you cannot use it a second time.

Instead, I think you should have a look at the papers which fit all data (also including B physics observables and cosmological bounds) for MSSM models, finding the relative likelihood of Higgs mass values before and after LEP II bounds are included. See Sven’s papers (oh well, I now decree he owes me a beer due to the mass of citations I am making of his works)🙂

Cheers,

T.

About your last comment Lubos, I agree: there is not really much on the market as an alternative to SUSY to explain away the instability of a light Higgs. Still, if SUSY were found I would be really surprised. I am old-fashioned with these things… I stand by Ockham’s razor, and after putting together the data, it looks like the bonuses do not justify invoking 24 new particles and 80 more parameters.

What makes me especially wary is the fact that SUSY needs to have hidden extremely well for all this time. Loops of SUSY particles have not touched the phenomenology of particle physics in any meaningful way so far… When will we finally throw in the towel?

Cheers,

T.

Dear Lubos, regarding your comment #19:

I am a little puzzled by it. The data provided in this post is about bounds on fundamental Higgs bosons. In technicolor models, electroweak symmetry is broken by technifermion condensates which have no particular reason to be electroweak eigenstates. So it is not at all obvious that they feature something which behaves like a fundamental Higgs boson, even at low energy, and that these bounds are directly applicable to them.

For both technicolor and preon models, I believe suppression of unwanted byproducts like FCNCs and exotic bound states without ending up with something much more complicated than the SM is the real problem.

dear Lubos, the paper you quote does not reach a correct conclusion. If the Higgs mass is 115 GeV, the correct implication is that the Standard Model vacuum is metastable, see for example fig. 2 of http://arxiv.org/abs/0712.0242.

Today string theorists agree that more than one vacuum can exist.

Dear Colleagues.

The Higgs mechanism becomes unnecessary in the framework of non-equilibrium field theory, see for example:

http://www.iop.org/EJ/abstract/0295-5075/82/1/11001

Regards,

Ervin Goldfain

Dear Guess Who #22,

thanks for your comment. I feel you are making the situation more difficult than it is. Composite Higgses are still Higgses at energies below their masses (or the inverse radii) – they can only differ by their width etc. But don’t the bounds for elementary Higgses still apply here?

There are many other problems of technicolor with high-precision physics, including FCNCs, too strong corrections to the W-Z mass relations, difficulties to make top-quarks heavy enough, pseudoGoldstone bosons, etc. – see e.g. chapter 8 (?) of Dine’s book dedicated to technicolor. These are way more serious problems than what we encounter with SUSY.

Dear Vacuum decay #23,

I am confused why you think that there is a contradiction. Isn’t Ellis et al. talking exactly about the same instability? Metastable means that the decay has to be nonperturbative because the physically relevant minimum is still a local minimum. But it is not a global one.

You might argue that the metastability is long-lived and the new minimum is pretty much harmless because the lifetime is long etc. but I still think it is OK to say that it implies the existence of new physics at the scale where the new global minimum develops, 10^6 GeV, even if the only role of the new physics were to determine the actual (exponentially suppressed) decay rate.

I am surely not questioning that there are many local minima in the configuration space of the full (string) theory. I am just saying that the effective field theory description breaks down if new unphysical global minima emerge at some energy scale, and this gap of the QFT framework has to be completed by new physics. Indeed, you may be right that the new physics could be non-SUSY string physics.

But the decision what is more likely is a matter of inference methods then. I think that even if one uses non-low-energy-SUSY stringy vacua, it is OK to approximate them by non-SUSY field theories at low enough energies, and the conclusions from field theory will still apply. SUSY prefers lighter Higgses and non-SUSY prefers heavier Higgses. I believe that all “stringy” methods to revert this logic are artifacts of anthropic miscalculations (“many vacua”, like the working class, have more votes – a paradigm I will never accept without a glimpse of proof) but yes, I realize that some people consider these miscalculations to be parts of their own inference.

Hi Tommaso #20,

I haven’t contradicted your statement that LEP II is a brick wall. But unlike you, I also see the softer wall on the opposite side.😉 If someone tells you that a woman certainly shouldn’t give birth before the 115th day of pregnancy, does it mean that you shouldn’t listen to other people who say that it may also be somewhat bad after the 15th month or so?😉

I didn’t quite understand your question “Now, is that a preference for SUSY…” In fact, I think that there is something linguistically and logically wrong about the question.😉 But if you ask whether the Tevatron data told us something new that we didn’t know right after LEP II, of course the answer is yes. The very heavy Higgses – hundreds of GeV or more – are now close to being ruled out even though they were not ruled out back then.

You wrote: “Using the argument that the SM has the hierarchy problem is like picking yourself up from your own bootstraps: it was used to construct SUSY, you cannot use it a second time.”

You don’t really misunderstand these things, do you? The SM has a hierarchy problem but whether or not one views this need for fine-tuning as a real problem is a scientifically aesthetic question with no definitive answer. However, with the Tevatron data, we know more than we did before: it actually seems that because of the *observations*, if you care about them, the Higgs is light, indeed.

And once again, my justification for SUSY in the light Higgs context wasn’t a theoretical desire to solve the hierarchy problem but the existence of the new minimum in the Higgs potential that occurs at some intermediate Lambda if the Higgs is light. These are two different things. I am not using anything twice.

Moreover, your statement that the hierarchy problem was “used to construct SUSY” is historically untrue. SUSY was “constructed” by the Russians who were looking for possible new spacetime symmetries and by Pierre Ramond who was incorporating fermions to string theory. It turned out that his model also had spacetime supersymmetry, besides worldsheet supersymmetry, and this spacetime SUSY was picked by phenomenologists (Wess, Zumino and others) who also found other reasons why it was a promising idea about effective field theories.

But there are many justifications for supersymmetry. Two major ones – the existence of a dark matter candidate plus the gauge coupling unification in SUSY GUT – haven’t yet been mentioned in this thread. It is completely wrong if you tell us that there only exists 1 justification for SUSY.

I had N justifications for SUSY before you told us about the recent Tevatron data. And whether you like it or not, when you tell us that Higgs doesn’t seem to be (observationally) heavier than 154 GeV or so, it is another justification for SUSY that is clearly independent from the previous justifications simply because a few weeks (or months) ago, we didn’t know whether the Higgs was that light in reality.

So whatever the value of N was, now I have N+1 justifications for SUSY and the probability that SUSY is right has certainly increased. I can’t tell you whether it is really higher than 50% right now but it is certainly higher than it was before your postings.😉 But SUSY usually implies lighter Higgses than non-SUSY, so of course an observation that the Higgs is light is a new argument in favor of SUSY, how it could not be?

Best wishes

Lubos

Lubos, while I agree that SUSY is the most likely thing happening at the TeV scale, I don’t think you understand the result here. It’s an electroweak fit, which means it says nothing about composite Higgs, technicolor, or little Higgs scenarios, for instance. In such theories there are many new particles at the TeV scale that contribute to all the higher-dimension SM operators, and these will certainly modify the fit. In the SM the Higgs mass is the only free parameter, so such a fit clearly prefers a particular mass; beyond the SM there can be many parameters, and the fit prefers some region which might include quite heavy Higgses. Model-builders work very hard to construct such scenarios consistent with all current bounds. This fit can only tell us that we expect the Higgs to be light if there is nothing but Standard Model physics contributing. The other plot with the two bands scans the MSSM parameter space and looks at electroweak results there. One could in principle make additional bands for specific little Higgs models, for instance, and they would not be ruled out.

Some real problems with composite Higgs/technicolor/etc. are that generically one must tune to eliminate new contributions to the Peskin-Takeuchi S parameter, and that there is still no very compelling way to give fermions masses (especially given how huge the top quark mass is).

Dear phenomenologist #26,

excellent. So imagine I thank you for the (meta-new) warning that the little-Higgs-like models need separate and completely different evaluation of the high-precision predictions, by including their characteristic spectra of high-dimension operators etc.😉

But isn’t it fair to say, in light of the evidence you provide us with (e.g. making top quark heavy enough is hard; I wrote the same thing above), that what you wrote reaffirms my “prejudice” or “expectation” that composite Higgs theories including little Higgs require a high value of the mass measured in a similar way as the Tevatron high-precision bounds?

This may be about the competition in speed between the formulation (and accurate analysis) of new composite-Higgs theories and their falsification🙂 but whether or not it is so, the current state of affairs is that the falsification team is ahead. So it seems correct to say that there’s no known little Higgs-like model that is known to be compatible with all the requirements concerning masses and high-precision measurements.

Indeed, it could be that it is just due to a shortage of man-hours on little-Higgs phenomenology, but maybe not. We know places in the MSSM parameter space that look healthy, so the situation seems more acceptable than for technicolor, and I would find it strange if this asymmetry were not taken into account when rationally evaluating the likelihood of different scenarios.

In the future, people may find fully realistic composite Higgs models compatible with all the data, including the hi-precision ones. But in the future, people may also safely rule out all composite Higgs models. You don’t want everyone to automatically assume one of these futures already now, do you?😉 The situation now is that SUSY looks more alive than composite Higgses because there are known regions of it that haven’t been falsified yet. Known regions of the technicolor or little Higgs landscape that are compatible with everything we need are a matter of wishful thinking, aren’t they?

Now, yes, wishful thinking may often become true🙂 but it is still not right to assume that wishful thinking is as true as established facts, is it?😉

Best

Lubos

By the way, phenomenologist #26, I would bet that you are an anthropic person because you implicitly use the anthropic reasoning for the composite Higgses, too.🙂

What do I mean? You seem to say that there are so many types of composite Higgs models that some of them must surely be compatible with everything we need, right?

I have two problems with such a formulation. First, whether or not there are many “flavors” of a particular scenario doesn’t increase the probability that the scenario is correct. If there are many variations of a model, you should reduce their priors accordingly, so that the whole qualitatively distinct ideas – scenarios – still have comparable priors, regardless of the number of “theories” or “vacua” in each class.

Second, if there are zillions of little Higgs-like theories, it still doesn’t mean that one of them must be viable. There can exist sensible universal bounds that are satisfied by all of them and that contradict observations.

To summarize, the argument that “one can construct many versions of something” makes no impact on my appraisals of the likelihood of a scenario. Loop quantum gravity can be written in infinity^infinity different versions, by adjusting the infinitely many continuous couplings for all the terms in the spin foam Hamiltonian. But despite this high number, it is still easy to see that none of these theories is compatible with basic things such as Lorentz symmetry.

Now, the little Higgs theories are surely doing much better than LQG because they respect the established principles of QFT. But some of their more specific features, the very same ones that make them members of the little Higgs family, could also very well be the features that make them incompatible with some general patterns of reality.

If there are many observational constraints that must be satisfied and it is not known whether a little Higgs model satisfies all of them, I think it is reasonable to expect that “probably not”. There are just very many conditions, and the little Higgs theories clearly don’t cover the whole “space of Fermilab/LHC possibilities”, so whatever the number of “flavors” of the little Higgs theories is, I think that the huge number of independent constraints wins.

In other words, a randomly chosen class of theories whose compatibility with data is not known is probably incompatible with them because it is still a large-codimension or a small-fraction surface/region of the full parameter space of everything we can measure. It may still contain a lot of points but that’s not really enough.
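The prior-counting point can be made concrete with a few lines of Python (the numbers are of course arbitrary placeholders):

```python
# Splitting a scenario's prior over its variants: however many
# variants a scenario has, the class as a whole keeps the same
# total prior weight. Numbers here are arbitrary placeholders.
def variant_priors(scenario_prior, n_variants):
    return [scenario_prior / n_variants] * n_variants

susy_like = variant_priors(0.5, 3)        # a few healthy regions
little_higgs = variant_priors(0.5, 1000)  # "zillions" of variants

# Each little-Higgs variant carries a tiny prior, but the totals match:
print(round(sum(susy_like), 10), round(sum(little_higgs), 10))  # 0.5 0.5
```

Proliferating variants dilutes each one’s prior without buying the class any extra probability.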

Best

Lubos

Dear Lubos, Dine’s chapter on technicolor (all six pages of it) is almost content-free.🙂 For a better introduction, there is http://arxiv.org/abs/hep-ph/9401324

and for a longer review, http://arxiv.org/abs/hep-ph/0203079

Regarding “composite Higgses”, the terminology can be misleading. There need be nothing like the minimal SM Higgs *particle*, i.e. a single technimeson carrying the same electroweak quantum numbers as what we usually mean when we talk about “the Higgs”, or “the Higgs boson”. Since I’m a lazy dog, let me just quote Georgi (this is from his famous review of effective field theory):

The same may or may not be true of preon models. A few years ago, I spent more time than I like to admit digging up and reading just about every paper written on that subject. What I found was that the subject had pretty much been mined dry by the mid-80s, at least with regard to models built using standard QFT techniques (specify symmetry groups and fermion representations, construct composites). By then, there was an impressive list of theoretical constraints to be satisfied; when they were applied systematically, the only plausible candidates left were more complicated than the SM, so the whole idea of finding a simpler underlying theory was kind of shot. It may still be possible to do something non-perturbatively (really fun stuff like getting fermions from monopoles) but that’s at a whole different level of difficulty, with no easy-to-use model-building toolkits for the hoi polloi.😉

Dear Guess Who #29,

I will probably stick to the “economical content” of Dine’s and similar texts (to use a less aggressive description of Dine’s work than yours) because what Howard writes in the text above doesn’t make much sense to me.

SU(2) x SU(2) in QCD is an approximate global symmetry. So various things may break it, and if they do (and indeed, they do), you don’t have to know how things transform under this broken symmetry.

But SU(2) x U(1) in the electroweak theory is a local symmetry. Breaking of a gauge symmetry is always an inconsistency, leading (at least) to non-decoupling of the time-like ghost modes of the gauge (W,Z,gamma) bosons. Once you admit that this symmetry (or any other gauge symmetry) can’t really be broken, I can always ask how states transform under it, and with spacetime-dependent gauge transformations acting upon the vacuum, I obtain Goldstone bosons. Also, whatever parameterization of the “configuration space” of the theory I use, there must be a degree of freedom/direction that distinguishes the physical vacuum from the idealized gauge-symmetric vacuum that can be identified at very high energies.

You may try to “reduce” the bad impact of the W,Z ghosts (their interactions with the rest) but if you do so, you are also reducing whatever new you added to the electroweak theory. That’s a trade-off: what you’re adding to have “new physics” is proportional to the “bad features” you’re adding. It follows that this type of “new physics” is just bad – the only question is how much of this bad stuff you have to add to realize that it’s bad.🙂

So I believe that Georgi makes these things more mysterious than they are and argues that things can be “completely different” even though it seems obvious that they can’t really be different if one looks carefully enough. Is it fair to say, at least, that there is no good, physically acceptable analysis in the literature of models where the technifermions have no well-defined transformation rules under the electroweak gauge group (but the models still avoid all things like ghosts and non-unitary WW amplitudes etc.)?

A loophole from the conclusions above would be to have all other objects, including the W,Z gauge bosons, composite in some incomprehensible new physics. Spin-1 massive particles may behave as gauge bosons even though they’re generic technihadrons blah blah blah. I don’t know how to falsify it off the top of my head but it just doesn’t fit my intuition. Even if it were possible to describe the electroweak theory as a field theory without the electroweak gauge group altogether, there would probably have to exist some kind of duality with the normal description.

Once the duality exists, I can use the normal electroweak degrees of freedom, with the gauge group, to shoot down the unusual idea that things don’t transform under SU(2) x U(1) at all. This is based on the assumption that models with “similar” spin-1 massive particles that interact as needed for electroweak phenomena can’t violate the other constraints of the electroweak theory (and gauge symmetry).

I can’t quite prove the assumption now but I would argue that the burden is yours, to prove that a “completely unexpected new kind of description” of the electroweak interactions (inequivalent to anything one can write down, starting with the electroweak gauge group) is possible.

Best wishes

Lubos

Hi Tommaso,

I think there are several Italian physicists involved in the construction of SUSY: Zumino and Ferrara, to cite just a few, and I like to think that something of it will be seen at the LHC. This may in some way forget that physics is a worldwide enterprise, but have you ever thought what such a success would mean for our country?

Ciao,

Marco

Dear Lubos, I would think that the reason why Georgi chose chiral SU(2)xSU(2) of QCD as an example is obvious: it’s the template which Weinberg and Susskind used for the original technicolor proposal. Essentially all they did was take QCD and posit a scaled-up copy of it.

In both cases, ordinary QCD and technicolor, the chiral symmetry is broken by the formation of a condensate; electroweak interactions only see the f_L part of it, so the condensate breaks electroweak symmetry. No need to go looking for a “completely unexpected new kind of description”, as you put it. This is how the plain standard model works too.

As an exercise, you could write down all standard quark bilinears and check how they transform under electroweak SU(2)_LxU(1).

I see silly WordPress is eating my condensates (interpreting stuff between VEV brackets as HTML tags). The previous comment was meant to say “formation of a f_L f_R condensate”, with the “f_L f_R” part between VEV brackets.

Dear Guess Who #32,

it’s OK to use your analogy but I think that you’re mapping the things incorrectly and deriving incorrect conclusions. The correct conclusion is that in both cases, QCD and electroweak theory (or any other QFT, for that matter), the symmetry that we call “gauge symmetry” is never really broken. Even the normal phrase “breaking by the Higgs field” is a misnomer – the SU(2)_L x U(1)_Y symmetry is nonlinearly realized but effectively unbroken and it has the same power to constrain amplitudes.

The symmetries that can be broken must be global symmetries (in all cases) and in the class of phenomena where they’re broken, you should completely forget that these symmetries ever “existed”.

Howard’s speculation where the quantum numbers (=transformation rules) under SU(2)_L x U(1)_Y are ill-defined would be the first example in any theory where the transformation rules under a gauge symmetry are ill-defined: do you agree? I am not questioning that a condensate may break (or make nonlinearly realized) a symmetry. I am only disagreeing with Georgi’s assertion that the transformation properties of physical objects under a symmetry that is required to be a gauge symmetry – e.g. SU(2)_L x U(1)_Y – can be ill-defined.

The transformation properties under the full SU(3)_color x SU(2)_L x U(1)_Y must be well-defined for all objects that interact with the W,Z,gamma,gluon gauge bosons, otherwise these gauge bosons would have non-decoupled ghost polarizations. Do you agree with it?

At low energies, the known particles – photons, W, Z (and gluons, if we look inside hadrons) – must be described by Lorentz-vector fields, for Lorentz invariance, and gauge symmetries – or something equivalent to them – are required to remove the ghost polarizations.

This may be confusing or vacuous for SU(2) x SU(2) in QCD because there’s no remnant symmetry with corresponding gauge bosons, except for SU(2)_L x U(1) (that only becomes relevant at the higher electroweak scale). But in the electroweak theory, you actually need to predict the right gauge bosons.😉 So the analogy with SU(2) x SU(2) in QCD is only useful up to the moment when you realize that there are actually also the W,Z gauge bosons – that you see once you arrive at the electroweak scale.

At the QCD scale, you may say that there are no W,Z bosons (not sure what you want to do with the photon) and you have “no problem”. But by avoiding this problem that only arises at the EW scale, you also failed to describe the EW phenomena.😉 I think that if you want a theory that has these gauge bosons – that have been observed – it is inevitable for consistency with observations (and internal consistency) to have well-defined transformation rules of all objects under the gauge group, at least in one of the equivalent descriptions of your physics.

In other words, every ghost-free theory with properly interacting electroweak-like W,Z,gamma gauge bosons admits an equivalent definition with the corresponding exact gauge symmetries as redundancies. If you know a counterexample, I am interested in it. Of course, you may gauge-fix it and describe it in all kinds of ways in which the gauge redundancy is removed, but the consistency constraints (unitarity etc.) on your description without any gauge symmetries will be equivalent to the requirement of gauge symmetry in the conventional description with the gauge symmetry.😉

Best

Lubos

Let me summarize these comments differently. I think that the very “exact electroweak gauge symmetry” (in at least some equivalent formulations of physics) is an experimentally established fact. (Do you disagree?)

Why? The W,Z,gamma gauge bosons are known to exist and non-derivatively couple to other (charged) matter. By the term “non-derivative” I don’t need a particular description of the theory – it is a statement about the power-law dependence of the amplitudes in a certain regime: the low-spatial-momentum regime of charged massive fields (e.g. fermions) is enough.

The Lorentz symmetry acting on the fields that are able to produce the spin-1 bosons allows me to extend the field to full 4-vectors, and the proper decoupling of its unphysical, ghostly time-like polarizations from the rest of the matter is equivalent to the condition of gauge symmetry in the normal covariant definition of the theory.

You may obscure this fact but this fact will continue to be true, and by using the normal degrees of freedom, people can see why it is true. So when you start with objects with no transformation rules under SU(2)_L x U(1)_Y, you may still construct something that you may call a “Higgs replacement” but you will never be able to generate properly interacting gauge bosons so it won’t really be an electroweak theory.

BTW, the little Higgs theory is in no way an example of Howard’s speculations written above: it has a lot of gauge groups at the lattice sites of the theory space and the familiar gauge group is simply a diagonal group of a sort. So of course, all fields have well-defined transformation properties under this diagonal group. There are many new 4D fields – cousins of the Higgs, if you wish – but there’s still a “normal Higgs” there, too. By making the theory space more or less complicated, one can add factors like “N” to various relationships, but qualitatively speaking, for N comparable to one, it’s still the same breaking by the Higgs.

Ah, semantics. Love the stuff. The way I would put it is that you are interpreting my mapping and conclusions incorrectly.😉

In my lingo, what matters is that you don’t break the gauge symmetry explicitly, by inserting non-invariant terms into the Lagrangian. If you do that, you destroy Ward identities and renormalizability. You must also make sure that any symmetries spoiled by quantisation (anomalies) are not gauged, or you have the same problem.

But spontaneous symmetry breaking is not about any of that. It’s about the combination of two things:

1) You have a system filled with stuff which is not invariant. Surely you agree that’s fine. You don’t break electrodynamics by presenting it with a capacitor full of electric charges.😉

2) Your system actually LIKES being filled with stuff which is not invariant. No problem here either, as long as the cause is an invariant potential, without any explicit symmetry breaking terms in it.

There is no problem here, because the symmetry of the interactions is unbroken. Only the symmetry between possible ground states is broken. That’s what makes spontaneous symmetry breaking useful.

I can not agree that Georgi was engaging in “speculation” when he wrote about transformations under SU(2)_L x U(1)_Y in 1993. He was stating an obvious fact. The chiral quark condensates are held together by QCD, which knows nothing about electroweak interactions (product group). Why then would you expect it to produce bound states that also happen to be electroweak eigenstates? How could it? Write down a few standard quark bilinears (try the two-flavor singlet and triplet) and check how they transform under SU(2)_L x U(1)_Y. What do you find?

I agree that all objects must have “well-defined” gauge transformation properties, but this is not synonymous with them having to be eigenstates. As long as you can combine them mathematically to form a complete representation, i.e. as long as they transform into each other without glitches, that’s enough. Nothing requires the physically observed states to be gauge eigenstates.

We discussed another well known example of physically observed states differing from gauge eigenstates on this blog only days ago: mass vs. gauge eigenstates (the CKM matrix, remember?).

‘The fit prefers a 80 GeV mass for the Higgs boson.’

Hi Tommaso, thanks – that’s excellent news! The argument that lepton and hadron masses are quantized with masses dependent on the weak boson masses is pretty neat because it agrees with the prediction of my model, in which mass arises from the coupling of a discrete number of massive Higgs-type bosons to fermions, through the polarized vacuum, which weakens the mass coupling by a factor of 1/alpha and a geometric factor of an integer multiple of Pi.

However, this scheme requires the Z_0 mass of 91 GeV as the building block, not the 80 GeV mass of the W+/- weak boson. (These two masses are related by the Weinberg mixing angle for the two neutral gauge bosons of U(1) and SU(2).)

The model is pretty simple. The mass of the electron is the Z_0 mass times alpha squared divided by 3*Pi:

91000 MeV * (1/137^2)/(3*Pi) = 0.51 MeV

All other lepton and hadron masses are approximated by the Z_0 mass times alpha times n(N + 1)/(6*Pi):

91000 MeV * (1/137) * n(N+1)/(6*Pi)

= 35n(N+1) MeV

= 105 MeV for the muon (n = 1 lepton, N = 2 Higgs bosons)

= 140 MeV for the pion (n = 2 quarks, N = 1 Higgs boson)

= 490 MeV for kaons (n = 2 quarks, N = 6 Higgs bosons)

= 1785 MeV for tauons (n = 1 lepton, N = 50 Higgs bosons)

The model also holds for other mesons (n = 2 quarks) and baryons (n = 3 quarks); e.g. the eta meson has N=7, while for baryons the relatively stable nucleons have N=8, lambda and sigmas have N=10, xi has N=12, and omega has N=15.
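For anyone who wants to check the arithmetic, here is a short Python sketch of the formula above (the values of alpha = 1/137, the Z_0 mass, and the (n, N) assignments are those proposed in this comment, not established physics):

```python
import math

M_Z = 91000.0        # Z_0 mass in MeV, the proposed building block
alpha = 1.0 / 137.0  # approximate fine-structure constant, as used above

def electron_mass():
    # The commenter's electron formula: M_Z * alpha^2 / (3*Pi)
    return M_Z * alpha**2 / (3 * math.pi)

def particle_mass(n, N):
    # The commenter's general formula: M_Z * alpha * n*(N+1)/(6*Pi) ~ 35*n*(N+1) MeV
    return M_Z * alpha * n * (N + 1) / (6 * math.pi)

print(f"electron: {electron_mass():.2f} MeV")       # 0.51
print(f"muon:     {particle_mass(1, 2):.0f} MeV")   # 106 (n=1 lepton, N=2)
print(f"pion:     {particle_mass(2, 1):.0f} MeV")   # 141 (n=2 quarks, N=1)
print(f"kaon:     {particle_mass(2, 6):.0f} MeV")   # 493 (n=2 quarks, N=6)
print(f"tauon:    {particle_mass(1, 50):.0f} MeV")  # 1797 (n=1 lepton, N=50)
```

The rounded values agree with those quoted in the comment to within a few MeV.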

The physical picture of the mechanism involved and of the reasons for the choice of N (Higgs boson) values is as follows. First, the electron is the most complex particle in terms of vacuum polarization; there is a double polarization (hence alpha squared – see appendix for this) shielding the electron core from the single Higgs type boson which it gains its mass from.

For all other leptons and hadrons, there is a single vacuum polarization zone between the electromagnetically charged fermion cores and the massive bosons which give the former their mass.

Instead of the Higgs like bosons giving mass by forming an infinite aether extending throughout the vacuum which mires down moving particles (which is the mainstream picture), what actually occurs is that just a small discrete (integer) number of Higgs like massive bosons become associated with each lepton or hadron; the graviton field in the vacuum then does the job of miring these massive particles and giving them inertia and hence mass. Gravitons are exchanged between the massive Higgs type bosons, but not between the fermion cores (which just have electromagnetic charge, and no mass). (This is why mass increases and length contracts as a particle moves: it gets hit harder by gravitons being exchanged when it is accelerated, and changes shape to adjust to the asymmetry in graviton exchange due to motion.)

Now the clever bit. Where multiple massive Higgs-like bosons give mass to fermions, they surround the fermion cores at a distance corresponding to the distance of collisions at 80-90 GeV, which is outside the intensely polarized, loop-filled vacuum. The configuration the Higgs-like bosons take is analogous to the shell structure of the nucleus, or to the shell structure of electrons in an atom. You get stable configurations as in nuclear physics, with N = 2, 8, and 50 Higgs-like quanta. These numbers correspond to closed shells in nuclear physics. So when we want to predict the integer number N in the formula above, we can use N = 2, 8, and 50 for relatively stable configurations (closed shells).

E.g., the muon is the most stable particle (longest half-life) after the neutron, and the muon has N = 2 (high stability). Nucleons are relatively stable because they have N = 8. And the tauon is relatively stable (forming the last generation of leptons) because it has N = 50 Higgs-like bosons giving it mass.

I’ve checked the model in detail for all particles with lifetimes above 10^-29 seconds (the data in my databook). It works well. Like the periodic table of the elements, there are a few small discrepancies, presumably due to effects analogous to isomers: for unstable particles, a certain percentage has one number of Higgs field quanta, and the remainder has a slightly different number, so the overall value looks like a non-integer; this is analogous to the problem of chlorine having a mass number of 35.5, and there may be further detailed analogies to atomic mass theory in terms of binding energy and related complexities.

Appendix: justification for vacuum polarization shielding by a factor of alpha.

Heisenberg’s uncertainty principle (momentum-distance form):

ps = h-bar (minimum uncertainty)

For relativistic particles, momentum p = mc, and distance s = ct.

ps = (mc)(ct) = t*mc^2 = tE = h-bar

This is the energy-time form of Heisenberg’s law.

E = h-bar/t

= h-bar*c/s

Putting s = 10^-15 metres into this (i.e. the average distance between nucleons in a nucleus) gives us the predicted energy of the strong nuclear exchange radiation, about 200 MeV. According to Ryder’s Quantum Field Theory, 2nd ed. (Cambridge University Press, 1996, p. 3), this is what Yukawa did in predicting the mass of the pion (140 MeV), which was discovered in 1947 and which causes the attraction of nucleons. In Yukawa’s theory, the strong nuclear binding force is mediated by pion exchange, and the pions have a range dictated by the uncertainty principle, s = h-bar*c/E. He found that the potential energy in this strong force field is proportional to (e^-R/s)/R, where R is the distance of one nucleon from another and s = h-bar*c/E, so the strong force between two nucleons is proportional to (e^-R/s)/R^2, i.e. the usual inverse-square law with an exponential attenuation factor.

What is interesting to notice is that this strong force law is exactly what the old (inaccurate) LeSage theory predicts with massive gauge bosons which interact with each other and diffuse into the geometric “shadows”, thereby reducing the force of gravity faster with distance than the inverse-square law observed (thus giving the exponential term in the equation (e^-R/s)/R^2). So it’s easy to suggest that the original LeSage gravity mechanism with limited-range massive particles, and their “problem” of the shadows getting filled in by the vacuum particles diffusing into the shadows (and cutting off the force) after a mean free path of radiation-radiation interactions, is actually the real mechanism for the pion-mediated strong force.

Work energy is force multiplied by the distance moved due to the force, in the direction of the force:

E = Fs = h-bar*c/s

F = h-bar*c/s^2

which is the inverse-square geometric form for force. This derivation is a bit oversimplified, but it allows a quantitative prediction: it predicts a relatively intense force between two unit charges, some 137.036… times the observed (low-energy physics) Coulomb force between two electrons; hence it indicates an electric charge of about 137.036 times that observed for the electron. This is the bare-core charge of the electron (the value we would observe for the electron if it wasn’t for the shielding of the core charge by the intervening polarized vacuum veil, which extends out to a radius on the order of 1 femtometre). What is particularly interesting is that it should enable QFT to predict the bare-core radius (and the grain-size vacuum energy) for the electron simply by setting the logarithmic running-coupling equation to yield a bare-core electron charge of 137.036 (or 1/alpha) times the value observed in low-energy physics.

(The mainstream, and Penrose in his book ‘Road to Reality’, use a false argument that the shielding factor is the square root of alpha, instead of alpha. They get the square root of alpha by seeing that the equation for alpha contains the electronic charge squared, and then they argue that the relative charge is proportional to the square root of alpha. They’re wrong because they’re doing numerology; the gravitational force between two equal fundamental masses is similarly given by an equation which contains the square of that mass, but you can’t deduce from this that in general force is proportional to the square of mass! Newton’s second law tells you the relationship between force and mass is linear. Doing actual formula derivations based on physical mechanisms, as demonstrated above, is a very good way of avoiding such errors, which are all too easy for people who ignore physical dynamics and just muddle around with equations.)
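As a numerical sanity check of the uncertainty-principle estimate in the appendix, here is a small Python sketch (h-bar*c = 197.327 MeV·fm is the standard conversion constant; the Yukawa potential shape is the one quoted from Ryder above):

```python
import math

hbar_c = 197.327  # MeV * fm (standard value of the conversion constant)

# Exchange-energy estimate E = hbar*c / s for s = 10^-15 m = 1 fm,
# the average inter-nucleon distance quoted in the appendix:
s = 1.0  # fm
E = hbar_c / s
print(f"E ~ {E:.0f} MeV")  # ~197 MeV, i.e. "about 200 MeV"

# Yukawa potential shape quoted from Ryder: V(R) proportional to exp(-R/s)/R,
# so the force goes like exp(-R/s)/R^2 - an inverse-square law with an
# exponential cutoff at distances beyond a few times s.
def yukawa_potential(R, s=1.0):
    return math.exp(-R / s) / R

# The exponential factor makes the interaction short-ranged:
for R in (0.5, 1.0, 2.0, 4.0):
    print(f"V({R} fm) ~ {yukawa_potential(R):.4f}")
```

Note how quickly the potential dies off past a couple of femtometres, which is the whole point of the finite-range pion exchange.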

[…] Above: Dr Tommaso Dorigo’s illustration of the 80 GeV (with – 23, +30 GeV standard deviation) preferred Higgs mass in his new post: […]

Dear Guess Who #27,

as you may guess, I have nothing whatsoever against spontaneous symmetry breaking (except for some incorrect statements about it made by you, to be discussed later) so you are attacking a straw person.🙂

Instead, I have something against Georgi’s statement that one might not be “able to identify the transformation properties of the Goldstone bosons under the full SU(2) x U(1) symmetry. This is not always possible.”

This statement by Georgi is wrong and I believe that in your comment #37, you partly confirm that it is wrong. Georgi didn’t say anything about “eigenstates” (which is your new fashionable topic in #37) as you can easily check by making a search for “eigenstate” in Georgi’s quote: eigenstates are just some carefully crafted combinations of other objects that happen to satisfy the eigenstate equation. But even non-eigenstates still have “transformation properties that we are able to identify”, in contradiction with Georgi’s assertion.

Moreover, even this new “eigenstate loophole” can be seen to be redundant. There’s nothing new about the transformation properties under SU(2)_L x U(1)_Y of the “stuff that fills space” (the Higgs sector) that one can get that would differ from the rules of the normal Higgs.

Why? The “stuff” must be invariant under U(1)_{electromagnetic}, otherwise photons will fail to be massless, while the transformation properties under the remaining 3 generators of the electroweak group may be traced because this group has to continue to exist. So the “stuff”, whatever it is, transforms as representations of the electroweak group. And I can even always write down a “nice basis of eigenstates” under some Cartan generators.

So quite obviously, I also disagree with the middle (and therefore also the last) sentence of this new paragraph of yours from #37:

“There is no problem here, because the symmetry of the interactions is unbroken. Only the symmetry between possible ground states is broken. That’s what makes spontaneous symmetry breaking useful.”

That’s just not true. Spontaneous symmetry breaking doesn’t mean that ground states – or the stuff filling the vacuum – don’t transform under the broken symmetry. Quite on the contrary, spontaneous symmetry breaking means that this stuff *does* nontrivially transform under this symmetry. Until the symmetry got broken, the vacuum was a singlet – invariant under the symmetry. Once it’s broken, it is a non-singlet, but by transforming the vacuum (or the Higgs boson state or anything), you can always complete it to the multiplet.

In the case of Goldstone bosons, this full multiplet is an infinite-dimensional family of objects from different superselection sectors because the symmetry is nonlinearly realized. But I still know what the transformation properties are. And I am confident that no “qualitatively new” situation regarding what the symmetry can be doing in a consistent theory can ever occur.

And one more time: the main reason why your comment #37 is irrelevant for this discussion is that you keep on making no distinction between gauge groups and global symmetries. Gauge symmetry is just a redundancy so it is never even literally broken. We know that the electroweak group is a gauge symmetry because we have observed the gauge bosons. You seem to be hiding from this fact, or is it just my feeling?😉

The nonzero W,Z masses mean that this symmetry is nonlinearly realized, and there is only one qualitative method to prepare nonlinearly realized symmetries – and the situation is always qualitatively analogous to the case of the minimal electroweak theory (with possible extra stuff that has nothing to do with the symmetry breaking).

Best wishes

Lubos

Dear Lubos, this discussion between us started with my comment #22, wherein I pointed out that

So this is not a

, as you now would have it (comment #40). It is what the discussion has been about all time.

You now point out that Georgi never explicitly talked about eigenstates in the quote I provided. True, but what he did explicitly say is that

. As any particle physicist will tell you, even ones who never read Georgi’s “Lie Algebras in Particle Physics”, to classify particles under their transformation properties is to label them with their eigenvalues. Which is obviously only possible if they are eigenstates under those transformations.

Georgi was writing for particle physicists, so I suppose he did not bother to state this perfectly obvious point explicitly. Everybody got it anyway.

I may be willing to chalk that one up to your not being a particle physicist, but your statement that

looks too much like intentional semantic fog for my liking. There is an obvious difference between (1) having a single object which carries the quantum numbers of the SM Higgs, and (2) having a whole bunch of objects, none of which is individually classifiable under SU(2)_L x U(1)_Y, as Georgi says. The phenomenology is completely different, which is why the reviews which I pointed you to in comment #29 are a little longer than the six pages allocated to the subject in your reference, Dine’s book about supersymmetry and strings.

To get back to the original point of comment #22, this means that experimental bounds on an SM or MSSM Higgs are not directly applicable to technicolor models.

I may be willing to agree with the statements further down your comment #40 that there is nothing “qualitatively” different to minimal electroweak symmetry breaking, under some yet-to-be-specified definition of “qualitatively”, but this post was about QUANTITATIVE differences between different models. You seem to have no problem with the existence of such differences between SM and MSSM; surely then you have no problem with the existence of such differences between those and technicolor models?

Going down the rest of your comment, you write that

Of course not! Where did you get that strawman from? I spelled out this part at toddler level in comment #37. As I said there, the symmetry is broken when

(point number 1 in comment #37).

You do understand what it means to say that stuff (e.g. a condensate) is not invariant, right? It means that it DOES transform under the symmetry group. If you fill your system with something which does transform under the symmetry group, it quite obviously breaks the symmetry between possible ground states.

So it gets really odd when you first proclaim that

and then go right on to restate exactly the same thing I said – in the same paragraph (!):

But… That’s what I said!

The only explanation of your going down this path which I can think of and which does not involve intentional obfuscation is that you somehow managed to miss the “not” in “not invariant”. Dear Lubos, could it be that you are just reading too fast to catch all words?

Dear Guess Who #41,

great. It was the comment #22 where you first wrote the word “eigenvalue” but it doesn’t change anything about the fact that this is a straw man topic that has nothing to do with Georgi’s comments; that it doesn’t justify any qualitatively new kinds of “symmetry breaking” (a point you finally seem to agree with); and that it is also wrong by itself because one can *always* organize states into eigenstates of a Cartan subalgebra.

I insist that Georgi’s speculative “Very likely…” comment was wrong and still is wrong. Georgi was speculating about a new kind of a theory that could have been ruled out long before he wrote those sentences – and surely, an expert on symmetries e.g. Coleman could have done so.

But even if a superficial reader was unable to see that Georgi’s comment was false, he or she should be able to see that even 15 years later, no theories of the type he proposed exist, so the “Very likely…” proclamation is surely less likely now than it was in 1994. They will never exist because they are mathematically impossible and they can be rather easily seen to be mathematically impossible.

If a symmetry can be talked about at all, whether it’s broken or not, it is always possible to organize the full (=including all sectors) Hilbert space into its representations, and all representations can always be organized according to the eigenvalues of a commuting subset of generators. You haven’t found any counterexamples, most likely because they don’t exist.

You may have made a linguistic error but I insist that your sentence “Only the symmetry between possible ground states is broken.” referring to spontaneous symmetry breaking is incorrect, whenever the language is used in any acceptable way. The symmetry between possible ground states is never broken. For example, possible ground states always have – and must have – exactly the same energy (or energy density) and the same spectrum of excitations. Spontaneous symmetry breaking means that the ground state is not a singlet, but it is still perfectly symmetric with (physically indistinguishable from) other possible ground states in the same multiplet that can be obtained by transformations using the symmetry group.

Finally, because you spent your time on misleading formulations and speculations about non-existent structures in group theory, let me inform you that you have also done exactly 0 to convince me that the bounds can’t be applied to composite Higgs theories.

You say that different calculations would have to be made, much like in the SM and MSSM case. For general quantities and high accuracy, it is the case. But if you look e.g. at the graph by Heinemeyer, Hollik, Stockinger, Weber, and Weiglein, in Tommaso’s “New zoom” article, your statement is wrong even for SM and MSSM. They use a single graph for SM and MSSM because SM can always be viewed as MSSM in a region of the parameter space with very high superpartner scale. That’s why it is possible for them to draw SM and MSSM into one graph, and depending on the Higgs mass, they divide the region into the SM, MSSM, and “both” strips. So they do these things simultaneously.

The Standard Model is an umbrella word for all models that look like the Standard Model up to the electroweak scale. It is an effective field theory. So if one does a calculation of low-energy quantities for such a model, it is always the same thing. The differences may only occur when one looks at all kinds of new stuff – or new substructure – that matters at arguably higher energy scales.

There can be many subtleties that correct various quantities a bit but I would still bet that these data, if correct, are (together with known theoretical results) pretty much falsifying composite Higgs theories and I think that no arguments written above against this point have really been justified.

Best wishes

Lubos

TD, could you please check that the IP address of the “Lubos” posting in this thread really does match Pilsen? There’s just too much that does not add up here.

Hah! An anonymous poster asks me to check the IP of a former professor at Harvard ?!

Jokes aside, all the comments in this thread from Lubos are genuinely his own. Or those of some crafty hacker, but that is quite unlikely!

Cheers,

T.

Thanks TD. See, one of the advantages of being an anonymous poster is that trying to smear “me” with fake posts in my name containing abject nonsense would be rather pointless. Whereas somebody using his own name could be attacked this way. As we all know, Lubos is controversial in some circles, so I would not have been overly surprised to see him being surreptitiously attacked this way.

Unfortunately, this time he’s doing the damage all by himself.

Really Lubos, to repeatedly invent strange misinterpretations of what Georgi says and what I say and what others have already told you in this thread, rather than just saying “oh, right, didn’t think of that”, is not wise.

I am only controversial in the circles of crackpots, politically correct Nazis, and anonymous cowards who are not willing or able to discuss actual serious topics but who want to promote fake authorities and stupid quotes substantiated by 0 science and who are, in reality, extremely satisfied with the system where the world is controlled by crackpots and PC Nazis.

Let me summarize what this thread is actually about. This article and the following thread is about the preferred Higgs masses as seen by the Tevatron. At the level of reliability of these measurements, they favor supersymmetry and disfavor all theories with a genuine Higgs substructure. Whether or not some phenomenologists invent some verbal exercises that dilute these points is much less important than the actual scientific results.

Lubos, regardless of whether you are controversial or not, in your opinion or in your arguing skills, I thank you for your interesting remarks in this thread and elsewhere. Of course we disagree on several things, but you are always welcome here.

Cheers,

T.

…apart from when you try to patronize and call people names, it goes without saying😉

Tommaso, consider these three facts:

1 – The Gfitter analysis indicates Higgs mass at 80 GeV, with “… at 1-sigma, the preferred range … within 57-110 GeV …”.

2 – The Tevatron 68% CL ellipse – if you consider it with respect to the Standard Model – lies entirely above the line for Higgs mass = 114 GeV,

which is equivalent (again, with respect to the non-supersymmetric standard model) to 114 GeV being a 68% CL upper bound for the Higgs mass.

3 – LEP, by direct search, has excluded Higgs mass below 15 GeV.

If you are inclined toward the Standard Model without supersymmetry, do you worry at all about the apparent conflict (at the confidence levels indicated) between the Gfitter and Tevatron analyses of the data and the LEP lower bound of 115 GeV for the Higgs mass?

Does it not indicate to you that, if the non-supersymmetric Standard Model is correct, then the Gfitter and Tevatron analyses need to be examined and perhaps corrected in some way?

Is anyone at Fermilab or CERN working on such revisions, and if so are there any available publications of their work?

Tony Smith
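(An editorial aside on the numbers Tony quotes: for a fit with one free parameter, a Δχ² curve maps onto confidence bands through the standard relation P = erf(sqrt(Δχ²/2)), so Δχ² = 1 gives the 1-sigma (68.3%) range and Δχ² = 4 the 2-sigma (95.4%) range. A minimal sketch, assuming the usual Gaussian approximation; this is not Gfitter’s actual code:)

```python
import math

def coverage(delta_chi2):
    """Two-sided coverage of a Delta-chi^2 cut for ONE fitted parameter
    (here, the Higgs mass): P = erf(sqrt(Delta-chi^2 / 2))."""
    return math.erf(math.sqrt(delta_chi2 / 2.0))

print(round(coverage(1.0), 4))  # 1-sigma band -> 0.6827
print(round(coverage(4.0), 4))  # 2-sigma band -> 0.9545
```

This is why the 57-110 GeV and 39-156 GeV ranges quoted above are read off the curve at Δχ² = 1 and Δχ² = 4 respectively.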

Sorry about my typo in which I should have said

“… 3 – LEP, by direct search, has excluded Higgs mass below 115 GeV. …”.

I hope my typing 15 instead of 115 was, in context, obviously a typo.

Since I often make typos and am not very good at finding them and correcting them, I will here apologize in advance for other typos that I may have missed or not corrected.

Tony Smith

[…] electroweak measurements. Tommaso Dorigo has a nice explanation of this story in a new posting here. Don’t miss the comment section, which has a hilarious exchange between Lubos and an […]

Dear Tommaso,

thanks for your welcoming messages. You’re normally welcome to TRF, too – but please kindly allow me not to use the word “always” because it is perhaps too strong.😉

Guess Who might superficially be a generic anonymous guy but both of us pretty much know that it is a real particle physicist and you probably know the institution well.😉 Guess Who’s physics differs from some non-professional commenters in rather visible ways!

But I guess that he has dedicated a part of the recent years to saying that supersymmetry is collapsing – on a long-term downward trend (regardless of any “short-term” flukes!) – which is probably the reason why we’ve seen this fog plus these arguments.

I probably normally admire physicists like him, but this behavior doesn’t look quite fair to me. Light Higgses favor SUSY; heavy Higgses favor (or would favor) non-SUSY and compositeness. I am convinced this is not a truly controversial thing. Instead, it is a textbook example of a basic deduction that the readers should know, and I can’t understand objective, non-egotist reasons why someone would prefer to cover this simple interpretation of the experiments with fog.

One can perhaps create special models and situations where composite models can be saved from downright falsification (or at least sketch these models or dream about them), but that’s not quite enough to keep them as likely as models where the observed patterns are natural. This fog would be pure propaganda.

At any rate, Tommaso, your blog is clearly inspiring for many people, including your big role model, Peter Woit. Note in the link above that he found the “right” portion of this discussion, appropriate to his own preferences and skills, too.😉 He’s not too interested in the model (in)dependence of the deduction of m_H from m_W and m_t but if someone is asking for my IP address, that’s his cup of tea!

Best wishes

Lubos

Lubos said (comment 52) “… heavy Higgses favor (or would favor) non-SUSY and compositeness … composite models can … not …[be]… as likely as models where the observed patterns are natural …”.

There is at least one example of a composite model in which “the observed patterns are natural”:

Hashimoto, Tanabashi, and Yamawaki in hep-ph/0311165 say:

“… … The idea of the top quark condensate explains naturally the large top mass of the order of the electroweak symmetry breaking (EWSB) scale. In the explicit formulation of this idea often called the “top mode standard model” (TMSM), the scalar bound state of tbar-t plays the role of the Higgs boson in the SM. …

we can easily predict the top mass mt and the Higgs mass mH by using the renormalization group equations (RGEs) for the top Yukawa and Higgs quartic couplings, and the compositeness conditions …

we predict the top quark mass mt = 172 – 175 GeV

for D = 8 and R^(-1) = 1-100 TeV …

We also predict the Higgs boson mass as mH = 176 – 188 GeV …”.

Note:

the prediction of a Higgs mass state around 180 GeV is near a local minimum of the Gfitter plot of mH vs. delta chi squared;

the D = 8 dimensionality is consistent with the M4xCP2 Kaluza-Klein structure that I use in my model described in my web book at http://tony5m17h.net/E8physicsbook.pdf ;

and the predicted compactification scale R^(-1) of around 1 to 100 TeV is interesting.

Jacques Distler (comment 9, to which I replied in comment 13) said “… It would be interesting to see what today’s plots looked like with (say) MSSM radiative corrections folded in …”.

I think that it would be interesting to see what the Gfitter and Tevatron plots would look like with the Hashimoto-Tanabashi-Yamawaki model folded in,

particularly since it seems that LHC data might be useful in deciding among alternative physics models.

Maybe in a few years LHC data, if analyzed with respect to all alternative models, might be able to show which is better:

supersymmetry, favored by Lubos, Jacques, et al.,

or

composite models such as the Hashimoto-Tanabashi-Yamawaki model

or

something else.

Tony Smith

Dear Tony #53,

unfortunately, exactly their predicted mass around 170 GeV has just been excluded by the Tevatron’s combined teams – today.😉

http://www.kcchronicle.com/articles/2008/08/05/news/local/doc4897f01724373094939415.txt

http://cosmiclog.msnbc.msn.com/archive/2008/08/04/1246154.aspx

Search for 170 in the articles above. Also, 170+ is above the 95% confidence limit discussed in this article.

I don’t want to study HashiMotl et al.’s unusual extra-dimensional calculations, but let me say that by imagining the Higgs to be a top-antitop bound state, one obtains a class of models physically equivalent to the normal models where the Higgs is elementary, up to different conventions and maps between renormalization schemes.

Such such as top-antitop model of the Higgs would be constrained by the same constraints discussed in this article, and I even think that Guess Who would agree with this statement.

Best

Lubos

I had a similar experience with Guess Who in an earlier discussion concerning the calculated value of sin^2(theta_W) in SUSY SU(5) vs. non-SUSY SU(5). Despite the fact that it is well known that the Weinberg angle comes out correctly in SUSY SU(5), GW insisted on arguing that this is no better than the non-SUSY case, where the value comes out close but not quite correct. His argument seemed to be that the reason it doesn’t come out right in the non-SUSY case is that the unification scale from which one should run the gauge couplings down to the electroweak scale is unknown, since the gauge couplings don’t unify in that case. So you can always add corrections in order to get the right value, he argued. And this is supposed to be better than the SUSY case, where it comes out correctly without any additional corrections and where the couplings do unify. It sounds like he advocates corrections to the SM which have exactly the same effect as SUSY, but let’s pretend it isn’t SUSY.
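(The unification story Eric and Guess Who are arguing about is easy to reproduce at one loop. Here is a toy sketch, with my own rough inputs: the standard one-loop beta coefficients, approximate inverse couplings at M_Z in GUT normalization, and no threshold corrections. It shows the three inverse couplings nearly meeting near 2×10^16 GeV with MSSM field content but never getting close with SM content alone.)

```python
import math

M_Z = 91.19  # GeV

# Rough inverse couplings at M_Z in GUT normalization (alpha_1 = 5/3 alpha_Y),
# from alpha_em(M_Z) ~ 1/128, sin^2(theta_W) ~ 0.231, alpha_s(M_Z) ~ 0.118:
alpha_inv_MZ = [59.0, 29.6, 8.47]

# One-loop coefficients: d(alpha_i^-1)/d(ln mu) = -b_i / (2 pi)
b_SM   = [41 / 10, -19 / 6, -7]  # Standard Model field content
b_MSSM = [33 / 5, 1, -3]         # MSSM field content

def alpha_inv(mu, b):
    """Inverse couplings run from M_Z up to scale mu at one loop."""
    t = math.log(mu / M_Z)
    return [a - bi * t / (2 * math.pi) for a, bi in zip(alpha_inv_MZ, b)]

def spread(vals):
    return max(vals) - min(vals)

mssm = alpha_inv(2e16, b_MSSM)  # near the conventional GUT scale
sm_best = min(spread(alpha_inv(10.0 ** k, b_SM)) for k in range(3, 19))

print("MSSM alpha_i^-1 at 2e16 GeV:", [round(a, 2) for a in mssm])
print("MSSM spread:", round(spread(mssm), 2))
print("best SM spread anywhere in 10^3..10^18 GeV:", round(sm_best, 2))
```

With these inputs the MSSM spread at 2×10^16 GeV comes out around 0.1 on inverse couplings of about 24, while the SM lines never come within several units of each other at any scale; that gap is what the threshold-correction debate above is trying to explain away.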

So you’re right, he’s most likely someone who’s invested himself in alternatives to SUSY and who is now in denial.

“Such such as” should have been “Such a” only.😉

Dear Eric #55,

I completely agree with your appraisal of the contrived non-SUSY explanations of the gauge coupling unification. But let’s not forget – SUSY has not yet actually been found, and if it is, it will still be amazing – in my opinion – even though perhaps less unexpected than at some moments in the past.

If a numerical coincidence is naturally confirmed, with good enough accuracy, by a minimal or nearly minimal (and thus more natural and more likely) version of a model, it always makes it reasonable to expect that the model is more likely to be true than other models where it doesn’t work as naturally and where one needs fine-tuning to achieve a similarly accurate agreement as a coincidence.

It is extremely natural to assume that one can follow the MSSM running from the TeV scale up to the GUT scale – many cute models in string theory predict this desert, too – and if this assumption leads to a nontrivial successful prediction for the couplings, it is circumstantial evidence that this version of the big desert may work in Nature.

The fact that one can mimic it by more contrived corrections and extra fields etc. in more generic theories may be true but it is not enough to erase the advantage of the SUSY GUT. Special relativity could have been mimicked by carefully crafted properties of a non-relativistic aether, too. That couldn’t prevent a sensible person from seeing that there had been a lot of evidence supporting relativity in the early 20th century.😉 Newton’s orbits could have been mimicked by epicycles, and so on and on and on.

One more comment about the unification. It may also be mimicked by split supersymmetry. Fine, except that this comparable agreement doesn’t make split SUSY as likely as the SUSY GUT. Why? Simply because the scenario with all superpartners near a TeV is much more natural – and therefore more “a priori likely” – than the scenario in which a particular gaugino is the only light particle (also a dark matter candidate). So even if both of these theories (SUSY GUT and split SUSY) pass the test, they don’t end up with the same probabilities at the end, because the priors were different.

Achieving this new (split) gap between the different superpartner masses means adding a new unnatural hierarchy into the mechanism of SUSY breaking – and this contrived SUSY breaking is just “less a priori likely” than the natural classes of SUSY breaking that have a uniform scale. One needs a very special corner of this parameter space where the “right” gauginos (I forgot which of them they are) are the only light particles. It could have been other particles or combinations that would end up light. There are many scenarios like that, and one cannot say that each of them is separately as likely and natural as a SUSY GUT with all light superpartners – because all these “differently split” cases are “small corners” of a big parameter space, while the natural region with comparable superpartner masses is the “bulk” of this space.

All the best

Lubos

Hi Tommaso:

Can you explain how an experimentalist would interpret the 95% CL for a particle mass? According to P. Woit’s latest posting the Higgs mass has been excluded at 170 GeV. But does this mean there is still a chance that the mass is above 170 GeV but that the experiment was not conducted at this energy level? In other words, if nothing is found in the mass range from 115 to 170 GeV, is the SM still viable at the 5% CL while some other model would have a higher CL?

I was not going to add anything more to this thread until Eric barged in.

Sorry Eric, but our discussion was not about SU(5) SUSY GUT vs SU(5) non-SUSY GUT. The latter has been dead for a long, long time, because of little things like proton decay. Our discussion was about SUSY vs. non-SUSY GUTs in general, and the fact that the low-energy value of a running quantity does not tell you anything in isolation, since you can always make it fit experiment by adjusting thresholds and unification scale. You need multiple constraints to reach conclusions about models involving multiple parameters (duh!).

Regarding technicolor, I remain puzzled that anyone can have a problem understanding that bound states produced by a gauge interaction which is not one of the SM ones need not, and generally will not, be anything like eigenstates of the latter; such a theory therefore will not generally have bound states which can be identified with the SM Higgs boson, and must be analyzed on its own terms.

For what it’s worth, I have absolutely nothing “invested” in either non-SUSY GUTs, preons or technicolor. I’m just trying to keep the zealots somewhat honest.

Oddly, they do seem to cluster though. I can’t remember anyone getting all worked up when I recently reminded TD that even not seeing SUSY at the LHC would not rule out something like split SUSY. Where were you then, Eric?

Hi Everyone,

Check out Lubos Motl’s new work. It’s really cool!

Also check out this document, featuring Lubos’ best from his blog:

http://prime-spot.de/Bored/bolubos_short.doc

There you’ll find some of Lubos’ best insults as well as some racist and sexist remarks. For example,

“I will probably always consider Smolin and Woit to be moral scum for having done these nasty Nazi-style things. Smolin included the comment that my review was sexist because I didn’t appreciate the intellectual quality of his postmodern feminist bitch friend from the MIT”

and

“LM: Wow, Prescod-Weinstein seems to be a real hardcore. It’s an extremely serious problem that people like that are penetrating into the intellectual spheres. It’s not hard to see http://scholar.google.com/schola…escod- weinstein that she has 0 citations in total right now but she already feels qualified enough to be firing virtually all non-leftist professors from the Ivy League I know. If I knew, back in the 1950s, that this is what would inclusion of women into the Academia lead to, I would have definitely opposed such a step because this is potentially about a complete liquidation of a scholarly, rational discourse. Incidentally, Chanda might be obsessed with these progressive things because she is not just black and female but also lesbian,…”

and

“LM: Of course that I can. It’s been explained many times. A group of people, in this case blacks, whose mean IQ is 1.1 standard deviations below the average of another group, in this case whites, simply cannot be expected to have a proportional representation in the Academia. The larger-than-sensible departments of African and African American studies are partly designed to reduce the white-black gap in the universities by this specifically engineered field that selectively attracts blacks. ”

finally,

“After years of heavy interactions with people as stupid, as aggressive, and as arrogant as yourself, i.e. with the underclass, have you completely lost your mind, Ms Hossenfelder? I don’t know how to explain you these matters comprehensibly enough but let me try again: relatively to real top physicists, you are just a tiny piece of a waste product of metabolism who got into the system mainly because of quotas on the female reproductive organs. Whether you would like a certain kind of change is completely irrelevant.”

Dear Cecil #58,

I noticed your news, too. That’s just way too funny and I couldn’t resist writing a text about it.

http://motls.blogspot.com/2008/08/tevatron-falsifies-connes-model-of.html

They exclude exactly one possible figure for the Higgs mass, 170 GeV. Someone would have to be very unlucky to predict exactly this value. It would be like being hit by an asteroid.😉

Needless to say, Alain Connes predicted the Higgs mass to be 170 GeV.🙂 Good bye, noncommutative standard models, we loved you even though you were a bit dumb models!

Best wishes

Lubos

Dear Guess Who #59,

the technical subtlety with SU(5) vs something else that Eric slightly misinterpreted is completely irrelevant for this big issue.

Even if you consider non-SUSY E6 GUT or any other gauge group with any mass spectrum, you can’t claim to “match” the SUSY SU(5) GUT’s successful prediction of the coupling unification just by saying that the “threshold corrections could conspire to make it work, too”.

They could conspire to give you the exact agreement, but it is much more likely that they will *not* conspire for a given theory in your class. I can’t believe that you don’t understand this point. Pure SUSY SU(5) GUT up to the GUT scale is an extremely robust theory that can easily come from a UV-complete theory. And if you’re assigning this model the same priors as some specific individual fine-tuned complicated non-SUSY theories with non-minimal gauge groups and non-minimal contrived spectra, I am afraid that you could also claim that it hasn’t yet been decided whether Newton’s theory or the epicycles are the correct explanation of the planetary orbits. Qualitatively, it’s the very same question.

Feel free to call this “discrimination” between simple and complicated theories “zeal” but I will continue to call it a “fundamental pillar of rational reasoning”.

Concerning your “non-falsifiability” claim about split SUSY, it’s just not true. Split SUSY predicts (more precisely, has to assume) light Higgsinos and gauginos (needed for dark matter etc.), so the LHC will surely decide about split SUSY one way or (more likely) another. Whether split SUSY is counted as a SUSY theory is a matter of convention, but the LHC won’t leave a physically material question unanswered, at least not after many years, and at least not if one doesn’t redefine the meaning of “split SUSY” midstream (once it becomes more popular for most model builders not to make any predictions).

Best

Lubos

Dear Lubos, I forgot that the spam-eater hates short comments with links. Lost one, let me try to write a slightly better one.

First: “the technical subtlety” is what Eric’s and my discussion was about. If you now want to discuss something else, like whether a complicated non-SUSY GUT is more “unlikely” than SU(5) SUSY, that’s a different (and ill-defined) question from whether it is ruled out. Your fine-tuning argument sounds almost a little anthropic, so I’d rather not go there. Those discussions never lead anywhere.

Second: since you are so sure that the LHC will be able to see split SUSY (if that is the correct model), would you mind listing which split SUSY particles it will be able to see, through which channels, up to which thresholds, and whether split SUSY will definitely be ruled out, as in “unable to do the job with regard to dark matter and unification”, if those particles are not seen?

Dear Guess Who,

Whether or not we were discussing SU(5) or another GUT is beside the point. SU(5) is the simplest possible case, and it still makes the point that sin^2(theta_W) comes out correct when SUSY is included, whereas it does not for non-SUSY, unless you invoke threshold corrections which have exactly the same effect as SUSY. The main point is: which case is more natural and believable? If you can provide a set of threshold corrections that are well motivated and non-contrived, then this would be a different story.

With regard to Hashimoto, Tanabashi, and Yamawaki in hep-ph/0311165 saying:

“… we predict the top quark mass mt = 172 – 175 GeV

…

We also predict the Higgs boson mass as mH = 176 – 188 GeV …”,

Lubos said (comment 54) “… unfortunately, exactly their predicted mass around 170 GeV has been just excluded by the Tevatron’s combined teams – today …[links to two articles]… Search for 170 in the articles above. Also, 170+ is above the 95% confidence limit discussed in this article. …”.

As one of the articles (msnbc) said,

“… An analysis of collisions shows (to a 95 percent confidence level) that the Higgs boson can’t have a mass around 170 GeV/c2 … Fermilab’s scientists soon expect to widen the no-Higgs zone, going down to 165 GeV/c2 and up to 175 GeV/c2 …”,

but even the wider zone is quite consistent with the prediction of Hashimoto, Tanabashi and Yamawaki of a Higgs mass in the range 176-188 GeV.

So it seems to me that Lubos misread their predictions and confused their 172-175 GeV top quark mass with their 176-188 GeV Higgs mass, and that the Hashimoto-Tanabashi-Yamawaki composite model has NOT been ruled out by the Fermilab analysis discussed in the msnbc and Kane County Chronicle articles cited by Lubos.

Tony Smith

With respect to Lubos’s (comment 54) statement that the Hashimoto-Tanabashi-Yamawaki Higgs mass prediction 176-188 GeV is

“… above the 95% confidence limit discussed in this article …”,

that 95% CL is based on the Gfitter analysis, which might not be an appropriate analysis with respect to composite Higgs models,

and also has the (in my opinion) suspect characteristic that it prefers a Higgs mass of 80 GeV, which is inconsistent with LEP experimental results.

That is why in my earlier comment 49 I asked Tommaso whether “… the Gfitter and Tevatron analyses need to be examined and perhaps corrected in some way? Is anyone at Fermilab or CERN working on such revisions, and if so are there any available publications of their work? …”.

Tony Smith

PS – It may be relevant that the mH vs delta chi squared plot shows a local minimum in the region around 180 GeV predicted by Hashimoto-Tanabashi-Yamawaki for the Higgs mass,

although that local minimum may have other causes, such as for example ZZ phenomena.

Dear Eric, since the value of sin^2 (theta_W) depends on the GUT, I cannot agree that which one we are discussing is “beside the point”.

You are fixating on the failure of the simplest non-SUSY GUT to get one particular low-energy value right to the second decimal point, using a particular value of M_GUT which fails to produce coupling unification anyway (because they all do, so the choice is really arbitrary), not to mention a proton lifetime within the right ballpark, even. That’s called “beating a corpse”. Spectacular for propaganda purposes, maybe, but not much good for anything else.

A meaningful discussion of what is “more natural and believable” would require an assessment of whether the enlargement of parameter space and particle spectrum required by SUSY is “more natural and believable” than the enlargement required by a more complicated non-SUSY GUT. I’m undecided about that. I believe others are too: https://dorigo.wordpress.com/2008/07/23/string-theorists-betting-against-susy/

Dear Guess Who #62,

of course I can tell you whether the LHC will see split SUSY, with all the required details. The answer is that it will not see split SUSY, and if you agree with a 50:50 bet, I am ready to join you.😉

If there were split SUSY, which is not the case, it would see the new light scalar particles as four jets plus missing energy in most cases, but it would not see anything else (no other superpartners). Satisfied? This is a silly game, because you surely know the correct channels better than I do, and you know that whatever the mass of the light scalars may be – so long as they’re still light enough to play the role they should play in split SUSY – they can always be discovered in *a* channel after a sufficient amount of integrated luminosity.

It’s only about the number of years that you could possibly need in difficult scenarios, but it is surely not about a complete impossibility to test these things. These things will be decided, and probably much earlier than most people think.

Your comment that one only cares whether a scenario is “ruled out” and not whether it is “unlikely” is internally inconsistent. Nothing in the real world is ever 100.0000000000000000000000000% ruled out (please add a few zeros, to be sure what I mean). Models are only ruled out at certain confidence levels, and I am just saying that you are incorrectly manipulating the rules for calculating or estimating those confidence levels when you say that errors in predictions don’t matter because they can always be fixed by some unknown mysterious new physics that you don’t have to describe.

By your standards, the epicycle theories haven’t yet been ruled out.

Of course these new effects can hypothetically remove discrepancies, but when these new effects have to be assumed to have particular quantitative properties in order to avoid contradictions with observations, such an assumption always lowers the probability that the theory is correct, strengthens the net arguments that it is wrong, and moves the theory closer to being ruled out. You don’t seem to agree with this basic thing, because you implicitly argue that any correction to a theory is “for free” and can be made without affecting the probability that the model has been ruled out.

There is nothing anthropic about my fine-tuning argument whatsoever, and I completely disagree that this point of mine belongs to any “controversial” speculative segment of physics governed by the anthropic discussions. Instead, my argument about fine-tuning reducing the probability that a model is correct is a basic and inevitable component of logical inference that is necessary in most cases in physics to determine whether a theory has been ruled out or not. I am sure you are using it in dozens of other cases – you just don’t seem to be willing to apply logic in this case, too.
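(The fine-tuning point above has a standard Bayesian caricature: two models may fit a datum equally well at their best-fit points, yet the one that fits only in a tiny corner of its parameter space receives a smaller marginal likelihood, the so-called Occam factor. A toy sketch with made-up widths, not anyone’s actual calculation:)

```python
import math

def evidence(fit_width, prior_width):
    """Marginal likelihood of a datum under a model: a Gaussian likelihood of
    width fit_width in the parameter, integrated against a uniform prior of
    width prior_width (assumed much wider than fit_width)."""
    return math.sqrt(2 * math.pi) * fit_width / prior_width

# Both models reach likelihood 1 at their best-fit point; only the size of
# the "good-fit" region differs (made-up numbers).
E_natural = evidence(fit_width=1.0, prior_width=10.0)   # broad good-fit region
E_tuned   = evidence(fit_width=0.01, prior_width=10.0)  # tiny fine-tuned corner

print("evidence ratio, natural : tuned =", round(E_natural / E_tuned, 1))
```

The ratio of the two widths is exactly the factor by which the fine-tuned model is penalized before any sociological argument enters.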

Be sure that Eric and myself aren’t the only people who see this contrived approach of yours in its full nudity.

Best

Lubos

my god, i had no idea what you physicists talked about … fabulous theater …

me, i am attracted to the graphs as art, thinking of doing them as large oil paintings ..,

ok, good luck with reality … meditation helps, by the way

enjoy, gregory lent

Hi all,

sorry to everybody for being unable to contribute to this thread, but my internet connection does not allow me to do that today or tomorrow.

I want just to answer Cecil here.

The Tevatron experiments have searched for the Higgs boson in a wide range of masses – all those that could in principle provide a visible signature given their available datasets. From 100 to 200 GeV, all the useful final states have been considered.

The fact that the limit is most stringent at 170 GeV means nothing about the possibility that the Higgs is elsewhere. The limit obtained at the Tevatron is in fact a confidence level quoted as a function of mass.

The confidence level for the non-existence of a SM Higgs reaches 95% at 170 GeV, while it is of the order of 70-80% from 140 to 190 GeV, if I remember correctly, and lower for lower masses.

Please read https://dorigo.wordpress.com/2007/03/18/95-confidence-level-watch-your-language/ for an explanation of what CL really means.

In the case of Higgs searches at the Tevatron, it means little. The Higgs can still hide at 170 GeV and the experiments just noticed a downward fluctuation of backgrounds there, a 1-in-20 chance.
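(To make that 1-in-20 statement concrete, here is a toy counting experiment, my own sketch rather than the actual Tevatron statistical machinery: with 100 expected background events and 20 expected signal events, a 95% CL exclusion rule fires on a downward background fluctuation roughly 5% of the time even when the signal is really there.)

```python
import math

def poisson_cdf(n, mu):
    """Exact Poisson CDF P(N <= n) for mean mu, built from the iterated pmf."""
    p = math.exp(-mu)  # P(N = 0); fine in floating point for mu ~ 100
    cdf = p
    for k in range(1, n + 1):
        p *= mu / k
        cdf += p
    return cdf

b, s = 100.0, 20.0  # toy expected background and signal counts

# "Exclude the signal at 95% CL" when the observed count is improbably low
# under signal+background: largest critical count c with P(N <= c | s+b) <= 0.05.
c = 0
while poisson_cdf(c + 1, b + s) <= 0.05:
    c += 1

false_exclusion = poisson_cdf(c, b + s)
print("exclude if n <=", c)
print("chance of wrongly excluding a real signal:", round(false_exclusion, 3))
```

By construction the wrong-exclusion probability sits just below 5%: a real Higgs at that mass would be “excluded” about once in twenty such experiments.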

If we only look at direct limits, the Higgs can in fact be anywhere above 114 GeV. Below that, the LEP II limits are really stringent.

If one instead considers all the information together, then there are indications that the Higgs is lighter than 140 GeV (and heavier than 114).

That is about all there is to say about the Higgs mass so far – in the Standard Model, as many have noted above.

Cheers,

T.

Dear Guess Who #66,

the controversy in “String theorists betting against SUSY” is about a binary question whether low-energy supersymmetry is “natural” in string theory or not. Indeed, this question might be controversial, at least socially. There are two “big” possible answers to a question and these two “big” answers have comparable prior probabilities (even though the people who say that low-energy SUSY is more sensible a prediction probably have a better intuition).😉

But the “controversy” that you are trying to create is of a completely different character. You are not trying to say that the pure nice “SUSY SU(5) GUT” is perhaps comparably natural to another natural model that you can describe in words.

You are saying that SUSY SU(5) GUT is as natural as a model that is so complicated, in fact, that you can’t even tell us what it is. It is almost certainly a model whose gauge group and spectrum must be rather carefully adjusted to generate the same accuracy in the gauge coupling unification as we get from SUSY SU(5) GUT.

But unless very unlikely things occur, this proposition of yours is clearly unlikely and to assume that it is likely is ludicrous. SUSY SU(5) GUT is surely not as natural as non-SUSY SUEG($#%)#$() with #()$#(&)@$*&@#%@, so even if the latter happened to predict the same low-energy Weinberg angle – QCD coupling relations as SUSY SU(5) GUT, it is certainly not equally likely, regardless of the big paradigms about low-energy or high-energy SUSY in string theory one can choose to believe.

If you wanted to argue that you have a model as likely as SUSY SU(5) GUT with the same accurate gauge coupling unification, you would actually have to write it down, check the numerical tests, and justify that your model hasn’t been engineered only to generate one number but has priors, as a sensible model, comparable to those of SUSY SU(5) GUT.

Until we actually see something like that, the correct logical inference based on the existing evidence leads to the conclusion that we don’t know any scenario where the gauge coupling unification works as well as it does in SUSY SU(5) GUT.

Best

Lubos

Dear Guess Who,

Perhaps a stupid question, but I do not follow what you are trying to get at here:

“The chiral quark condensates are held together by QCD, which knows nothing about electroweak interactions (product group). Why then would you expect it to produce bound states that also happen to be electroweak eigenstates? How could it? Write down a few standard quark bilinears (try the two-flavor singlet and triplet) and check how they transform under SU(2)_L x U(1)_Y. What do you find?”

If q_L is an SU(2)_L doublet and q_R is an SU(2) singlet, then a q_L q_R condensate transforms as an SU(2)_L doublet. What’s the point?

Hi gs. Well, two points. The first, obvious one is that the condensate is not invariant, so it breaks electroweak symmetry; that’s how technicolor works. The second one becomes apparent if you do write out the mesons in two-component spinor form and check how they transform under SU(2)_L x U(1)_Y. Are they eigenstates?

“If there were split SUSY, which is not the case, it would see the new light scalar particles as four jets plus missing energy in most cases, but it would not see anything else (other superpartners).”

You probably meant light gauginos, not scalars!

Dear Guess Who,

The operators \bar{u}d, \bar{u}u-\bar{d}d, \bar{d}u form a linear representation of the unbroken SU(2), and a non-linear realization of the full chiral SU(2)xSU(2). I’m still unclear as to how this is relevant for the original question of whether there exists something that can be called a massive Higgs particle in technicolor models.

More specifically, suppose q_L^a = (u_L, d_L) and q^R_b = (u_R, d_R). Then the composite scalar field operator q^R_b q_L^a transforms as a fundamental (doublet) of SU(2)_L and an antifundamental of SU(2)_R. If the strong technicolor interactions give this scalar field a VEV, it acts as a Higgs field for spontaneous electroweak symmetry breaking. The question is then whether it is sensible to identify certain associated massive particle(s) as “Higgs particle(s)”.

As I’m writing and thinking, perhaps the answer might be “no”, since below the technicolor scale, only the Goldstone parts of this composite field operator appear in the Lagrangian, and these Goldstone modes do not transform linearly under the full SU(2)xSU(2), so they cannot be taken to be conventional Higgs fields. I hope I’m not completely out to lunch.

Minimal versions of technicolor were already ruled out a long time ago, so I don’t see what the fuss is about, or why these particular plots shed any new light on what’s already been known to be wrong for a decade or more.

You have to make the models rather complicated and contrived to possibly evade the preexisting bounds, in which case they require a completely different set of degrees of freedom, and the above plots won’t give you much of a refinement in the allowed parameter space (or rather the refinements will be highly nontrivial and nonlinear). It’s been known for a while that, say, a Fermilab discovery of a Higgs would not necessarily preclude the theory. Only exhaustive searches at the LHC can possibly rule the full nonminimal theory out.

This whole dispute is an unfortunate case of clashing terminology! When Lumo says “the data support SUSY” he just means “thank god the latest data do not force even more extreme fine-tuning in order to save SUSY!”

With this clarification I think that all sides are in agreement.

Dear Tom,

There is only a very small and certainly acceptable amount of fine-tuning involved with SUSY. In fact, one would expect the Higgs mass to be less than about 130 GeV in the MSSM, so the latest data does support SUSY. There seems to be a lot of misinformation out there regarding the actual status of low-energy supersymmetry.

Dear Guess Who #73,

I still don’t follow the point of your non-eigenstate comment either.

If the left-handed 2-spinors transform as R (e.g. doublet) and the right-handed 2-spinors transform as R’ (e.g. singlet), then the left-right condensates will transform as the tensor product R (x) R’ which is another doublet in your example.

Is it an “eigenstate” of SU(2)_L x U(1)_Y? Well, it depends on what you call an eigenstate of a whole non-commutative group which is, strictly speaking, a meaningless phrase. We can talk about “representations” and not just “eigenstates”. In general, the condensate representation will be larger (reducible) and under the relevant subgroup, it will decompose. Some components will transform nonlinearly and they won’t be helpful for the Higgs mechanism, others will transform linearly under SU(2)_L and they can be used as a Higgs.

But when you look at the latter, is the doublet made out of eigenstates of J_3 in SU(2)_L? You bet.😉 It is the very same doublet as the normal Higgs; how could it not be? The eigenvalues are +1/2 and -1/2. You seem to be talking about some very new, very mysterious possibilities for how the Higgs could transform or not transform, but these possibilities don’t seem to exist outside your mind.
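This tensor-product bookkeeping can be checked in a few lines; a minimal numpy sketch (my own illustration, not part of the thread): since q_R is an SU(2)_L singlet, the condensate carries SU(2)_L only through its q_L index, so its J_3 eigenvalues are the familiar ±1/2.

```python
import numpy as np

# J_3 generator acting on the SU(2)_L fundamental (doublet) index
T3_doublet = np.diag([0.5, -0.5])
# q_R is an SU(2)_L singlet, so it carries the trivial (zero) generator
T3_singlet = np.zeros((1, 1))

# The condensate transforms as the tensor product 2 (x) 1; the generator on
# the product space is T3 (x) 1 + 1 (x) T3
T3_condensate = np.kron(T3_doublet, np.eye(1)) + np.kron(np.eye(2), T3_singlet)

# The product is still a doublet, with J_3 eigenvalues -1/2 and +1/2
print(np.linalg.eigvalsh(T3_condensate))  # eigenvalues -0.5 and +0.5
```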

Dear Tom #77, not exactly: what I meant by “the data favors SUSY” is that “for any chosen self-consistent method to estimate the probability that SUSY is correct, the probability is higher after the data are taken into account than it was before.” The overall normalization of the probability is a tough thing, but I happen to care how probabilities change as a result of new experimental data. You don’t?

I surely don’t see any “extreme fine-tuning” needed in SUSY. One might call it “fine-tuning” if one is very critical, but because the tuning is not really fine, at most an order of magnitude or so, “tuning” would probably be more appropriate. If you ask me why various model builders use these clearly incorrect phrases such as “extreme fine-tuning” in SUSY, here’s an explanation.

In the past, many of them worked on SUSY and they always preferred “as spectacular predictions as possible” that occur at the lowest possible energies one can imagine. It is just so hot to make spectacular, “easily and soon testable” predictions. But it is only so until the experiments are actually approaching. They will kill over 99% of such papers.😉

Because the model builders wanted to be so spectacular, they focused on an unnatural region of the parameter space where the masses are much lower than generically expected or needed. It is not hard to recalibrate this question from scratch.

Let me remind you that the Higgs vev is 246 GeV so if the Higgs mass and superpartner masses are in hundreds of GeV or a few TeV, which is still possible for most cases, there’s simply no fine-tuning to talk about, surely not an extreme one. What should be done is that the phenomenologists should fine-tune – and never repeat – their sensationalist methods to push their language in a particular direction that suits them at particular moments. That can backfire and that probably will backfire.

Best wishes

Lubos

Well, to be fair, when people refer to fine-tuning in SUSY, they usually don’t have the little hierarchy problem in mind but rather something else. E.g. the mu parameter or the soft SUSY breaking terms would be what we call fine-tuned, because the cutoff scale is arbitrarily high relative to the electroweak scale.

Hence the need for things like the nMSSM and variants thereof.

Dear Guess Who,

Correction: it appears the three operators in my earlier post do not form a non-linear realization of the full SU(2)xSU(2), my mistake. This however does not affect the other statements in the post.

Dear Lubos,

Here’s my attempt to understand the issue, though I’m not sure if it’s correct. Consider a QCD-like technicolor sector with a global chiral SU(2)xSU(2) symmetry, and for the moment turn off the couplings to the electroweak sector. The spectrum includes a set of Goldstone bosons coming from chiral SU(2)xSU(2) breaking, plus massive techni-mesons or techni-hadrons.

Now turn on the coupling to the electroweak sector. The original global chiral symmetry breaking now gives rise to spontaneous electroweak symmetry breaking, and the original Goldstone bosons are eaten by the massless gauge bosons to form the W and Z bosons. If we are working up from low energy, we have to get to the technicolor scale to see the mechanism of electroweak symmetry breaking, and at this scale we find a whole bunch of massive particles (the techni-mesons or techni-hadrons), none of which can be naturally considered as “the Higgs” particle(s).

In models of this type, the point seems to be that there is no intermediate mass scale (between the W/Z mass scale and the technicolor scale) at which the system is described by an effective field theory using only the gauge fields + fermions and a scalar Higgs field in some representation of the gauge group.

I hope I haven’t said anything too stupid.

Dear Haelfix #80,

the cutoff scale is certainly not “arbitrarily high”. The high value of the Planck scale is an empirically observed fact – because we observe the tiny strength of gravity (if expressed in particle physics units) – and the Planck scale, the ultimate cutoff of all effective low-energy field theories, is really the “natural” scale.

The question might be why the electroweak scale is so low relative to the Planck scale, and this question is *naturally* answered by supersymmetry. It boils down to a single parameter of the Higgs potential that is low because of a “classical reason” and protected by SUSY cancellations.

Analogously, the GUT scale, if viewed as a cutoff, is also natural, not arbitrary. It is the scale where the couplings unify, something that can also be demonstrated by an extrapolation of the known low-energy data. As a bonus, the GUT scale and Planck scale are close.

The mu-term etc. is a legitimate constraint on allowed models and in those that I consider viable, this low value is explained by a qualitative mechanism so it is no fine-tuning. There are all types of stringy etc. models that naturally solve the mu-problem.

They always have some extra physics besides MSSM, which is ultimately needed even for the very SUSY breaking, but it is completely misleading to say that these effects and (solvable) problems favor “something like nMSSM”. There is no convincing evidence in favor of the particular modifications of MSSM that we call nMSSM. Clearly, the MSSM is not the full story either but that’s very different from picking nMSSM.

Best

Lubos

Dear gs #82,

thanks for your explanation but I still find the described physical phenomena confusing. If the techni-QCD scale is e.g. 10 TeV, to push it slightly away (and this lower bound on the compositeness scale is what can be demonstrated by some other high-precision experiments), may I still ask what the effective field theory below 1 TeV is in your setup?

If I can’t, do you really want to claim that there is no effective field theory up to this scale?

If I can, isn’t it obvious that it must be “using only the gauge fields + fermions and [a] scalar Higgs field[s] in some representation of the gauge group”, by the renormalizability conditions etc.? Would you say that the little Higgs bosons in various little Higgs theories are not scalars or that they are not in a representation of the gauge group? If you would, why?

I think that this intuition of mine is really supported by the data – which are needed to make these arguments. The “little hierarchy” that must exist between the electroweak scale and compositeness scale is experimentally observed and it is a kind of hierarchy, after all.

So it allows us to say that an effective field theory of the electroweak phenomena should exist and not break down up to something like the (significantly higher) compositeness scale. Because the theory has to behave well over this order of magnitude, to say the least, one should be able to argue that the low-energy effective field theory should be close to a renormalizable, UV quasi-complete one. And then, I think, it follows that the description must be in terms of gauge groups, fermions, and scalars in various representations of the groups.

The only loophole I could see would be to fill the whole interval between the electroweak scale and compositeness scale by a dense network of new phenomena but I would still expect that this would contradict the high-precision experiments that require the compositeness scale to be high.

Am I doing something silly?

Thanks & best wishes

Lubos

Hi Lubos,

Very well, poor word choice on my part; we are of course in agreement on the subject of cutoffs -shrug-

However the pure MSSM still does have fine-tuning in the mu parameter (amongst others) no matter how you cut it. It needs to be ‘solved’ if you are into naturalness, because ratios on the order of 10^-16 or so are hard to swallow. I’m sure there are stringy constructs that solve the problem, and I have no idea how to qualitatively weigh the odds on whether the nMSSM is ‘favored’ or not relative to them (I never said it was).

But generically, the point is there is always some amount of fine-tuning in SUSY, which only goes away as you add more and more layers of extra model building to the problem, so it’s a little disingenuous to claim that SUSY removes all fine-tuning (which I erroneously thought you were claiming). Indeed, if we had such a perfect model, all of hep-ph would file for unemployment =)

“Obvious” or not, it’s wrong. The effective theory, below the Technicolour scale, is a gauged nonlinear sigma model, consisting of the techni-pions coupled to the SM gauge fields.

There is no scalar Higgs in that theory.
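For reference, the gauged nonlinear sigma model mentioned here has a standard textbook form; schematically (my own sketch of the electroweak chiral Lagrangian, not quoted from the comment):

```latex
\mathcal{L}_{\text{eff}} = \frac{f^2}{4}\,\mathrm{Tr}\!\left[(D_\mu \Sigma)^\dagger D^\mu \Sigma\right],
\qquad
\Sigma = \exp\!\left(\frac{i\,\pi^a \tau^a}{f}\right),
```

where D_mu contains the SU(2)_L x U(1)_Y gauge fields, the pi^a are the three (techni-)pions eaten by the W and Z, and f sets the symmetry-breaking scale; no scalar Higgs field appears anywhere in this Lagrangian.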

And, if one goes on to include the various (other!) techni-mesons one may, but need not, find a neutral scalar among them, whose couplings to the W,Z are such that one might interpret it as a Higgs.

I would have thought that, even from the 6 pages in Michael Dine’s book, this much would be clear.

Dear Haelfix #85,

I have already agreed that in pure MSSM, viewed as an effective field theory, the mu-problem is/was a problem. However, this problem is more solvable e.g. than the original hierarchy problem, and it is actually being solved in pretty much every viable model that someone proposes these days. The solutions began to materialize in a 1988 SUGRA-based paper by Giudice and Masiero

http://ccdb4fs.kek.jp/cgi-bin/img/allpdf?200035631

that currently has 466 citations and that solves it by separating the SUSY breaking from the EW symmetry breaking. There are other methods, too. In other words, there are (increasingly) important (and natural) classes of models that don’t suffer from the mu problem.

I think that your word “disingenuous” is a silly attempt to intimidate me. I certainly did say – and I still say (because it is true) – that there’s no obvious fine-tuning that remains a universal problem of supersymmetry. At most, there is some “tuning” and it is questionable whether the word “fine-tuning” may be applied to these cases. If we knew of such a real fine-tuning, SUSY would pretty much be killed, and 35% of hep-ph could file for unemployment, too. Note that unlike you, I don’t overhype the numbers.😉

But concerning your last half-sentence, yes, you are completely right about a significant portion of the activity in hep-ph these days. People invent pseudoproblems that they subsequently “solve” and the only goal of this theater, in many cases, is for them to be employed and paid. Much of the stuff has no lasting value because many problems to be solved are not real problems and many models used to solve them are just random guesses, not well-motivated physical theories.

There is no “clear” fine-tuning associated with supersymmetry as a framework. I have to write this sentence very explicitly so that there won’t be further misunderstandings and attempts to make the obviously correct proposition that “supersymmetry solves the fine-tuning problems of the electroweak theory” politically incorrect, something that you clearly try to make. Whether it is politically incorrect is much less important than the fact that the proposition is *correct* and actually important for modern particle physics.

Best

Lubos

Dear Jacques #86,

I have defined the setup rather clearly – and you have even copied most of the relevant parts to your comment – for everyone to be able to see why your comment is incorrect, so a very short reminder is enough here.

A non-linear sigma model is an effective field theory that breaks down at a low cutoff scale close to the meson masses, and the only way to preserve the gap required by the high-precision bounds on compositeness is to make all the nonlinear terms small, i.e. the model becomes linear and the scalars transform as nice reps of the gauge group, as expected.

If the model were “heavily” nonlinear, with a large “f”, the compositeness scale would be close to the meson scale and be excluded by the high-precision data.

Best wishes

Lubos

Yesterday I wrote a comment here, with a link to arxiv, and I don’t see it now. Maybe lost in the spam filter?

Which is, indeed, pretty much the consensus about Technicolour.

Technicolour models, invariably, seem to lead to positive values for the Peskin-Takeuchi “S” parameter, which is excluded by precision electroweak data.

What we’ve seen above didn’t exactly look like any kind of consensus – more like a fine 50:50 split about every single question – but I am happy that this particular question seems to breathe with harmony, Jacques. For a while, maybe, but I will try to extend it.😉

Hi Lubos and all.

I really am enjoying this thread and am pleased I can still understand a non-zero amount. Let’s say I am getting the gist.

Lubos wrote:

“…. you are completely right about a significant portion of the activity in hep-ph these days. People invent pseudoproblems that they subsequently “solve” and the only goal of this theater, in many cases, is for them to be employed and paid. Much of the stuff has no lasting value because many problems to be solved are not real problems and many models used to solve them are just random guesses, not well-motivated physical theories.”

Could not agree more with your wording. THIS is PRECISELY the feeling I get! I do hope something interesting happens at the LHC. This playing with models without new data is done because people do have to keep themselves busy, don’t they? Publish or perish; stand up, give talks, go to conferences, meet others just as anxious to emerge or stay afloat, else fall into shadows.

I left HEP several years ago. I sincerely hope something interesting is found at the LHC. If yes then who knows. Maybe I could even drift back or, much more likely, simply help pay some youngster to start off in the field. Else I would be a criminal to encourage him to enter it.

Dear Lubos,

Thanks for your comments in #82. I agree with your statement “A non-linear sigma model is an effective field theory that breaks down at low cutoff scale close to the meson masses..” When I wrote my comments I was only trying to make sense of the Georgi quote, and was not thinking of how to get a model that is still phenomenologically viable given the current high precision data. I think you and Jacques have already discussed the relevant points, and I don’t have much to add. Nice talking with you, cheers

Regarding Lubos’ comment: “One more obvious comment. Some people tend to imagine “something like compositeness, preons, technicolor” when we say “new physics”. In my opinion, such a class of ideas has always been heavily overrated. But in this particular case, there’s much more than aesthetics that helps me to show that these ideas are no good.”

I should probably mention that the Koide mass formulas for the neutrinos, mesons and baryons, as well as the rewriting of the MNS matrix into circulant form, were motivated by a preon theory; more recently Kea and I have been assuming that the coincidences arise simply from using a discrete complex Fourier transform on experimental measurements that depend on generation, such as the masses of the leptons, etc.

Consequently, the results really don’t depend on a preon content. You can read them that way, but you can also read them as a statement on the utility of Fourier transforms in elementary particles.

Dear Goffredo #92,

thanks for your agreement. But I would like if my statement were not interpreted (or abused) as an anti-scientific sentiment. It is about the balance between quantity and quality, if you wish.

If quantity is too low, progress can be slow – for individuals or the whole community. But if quantity is too high, it is almost inevitable that the quality will drop. Of course, this is almost inevitable if one wants to keep those thousands of particle theorists in the game – and thousands is not such a huge number for this Earth!

But still, even assuming that the overall numbers are not changed, it is still important to realize that it matters a lot whether someone contributes something that has a lasting value or just something to display an activity that is likely to be irrelevant after the first measurement.

I am conservative in this respect: a good scientist should not only have a large number of correct results but even a high proportion of correct (published) results. Wrong papers shouldn’t count as a “good thing” and they should even count negatively. That’s how I look at things.

What I really mean by my comment is that the refinement of the pillars and general conceptual issues that might be used in future work is probably more valuable than hundreds of random, specific (but unlikely to be true) guesses that use the existing technology.

Dear gs #93,

great, thanks for your explanations. I kind of understand the philosophy of technicolor. The complicated strongly coupled physics or non-linear sigma models are very attractive for some people but I am just not among them. In my opinion, such complicated things are only relevant for narrow intervals on the energy-scale axis. Their (technicolor etc.) effect is qualitatively similar to adding a lot of particles with comparable masses into this window.

Over long intervals on the energy-scale axis, nearly conformal, scale-invariant theories rule, and most of them are weakly coupled.

For example, the strongly interacting physics of QCD really explains objects whose masses are comparable to hundreds of MeV or several GeV. At much higher energies, the nearly free quark and gluon fields replace everything. In this sense, almost by definition, these strongly coupled phenomena are not directly relevant for understanding the hierarchies. Moreover, I think that the idea of compositeness was important and new in the past but it ceased to be original after the 1970s. It may be used again in Nature but it wouldn’t be that cool, anyway. It was much cooler in the first cases.
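The pattern described here, strong coupling near the GeV scale giving way to nearly free quarks and gluons at high energies, is visible already in the one-loop running of the strong coupling; a hedged sketch (one-loop formula only, with nf = 5 held fixed, so the numbers are illustrative rather than precision values):

```python
import math

def alpha_s(mu, alpha_mz=0.118, mz=91.1876, nf=5):
    """One-loop running of the strong coupling from the Z mass to the scale mu (GeV)."""
    b0 = 11 - 2 * nf / 3  # one-loop QCD beta-function coefficient
    return alpha_mz / (1 + alpha_mz * b0 / (2 * math.pi) * math.log(mu / mz))

# The coupling grows toward low energies (strongly coupled hadronic regime)
# and shrinks toward high energies (asymptotic freedom):
print(alpha_s(10.0))    # larger than 0.118
print(alpha_s(1000.0))  # smaller than 0.118
```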

So the “coolness” and “originality” test really fails and one should look at the truth. And a reasonable look at the reality creates a bleak picture for compositeness theories. String theory shows that there is no good reason for the Higgs etc. to be composite. Fundamental scalars (that are only made out of 1 string, so to say) are extremely natural (there are many scalars to choose from) while composite ones are just contrived.

In string theory, one really doesn’t build models with composite Higgses in most cases. I don’t know to what extent this is a “no-go” feature of string theory and to what extent it is sociological or cultural – because string theorists realize, just like me, that it would be very contrived to “engineer” Higgses if there are so many candidate scalar modes in the theory without any work.

Dear Carl #94,

I am afraid that using some numerological Ansätze for mass matrices as an argument for preons or technicolor wouldn’t work as a PR directed at me because my opinion about these numerological Ansätze is probably even much lower than my opinion about technicolor and preons.😉 Sorry and please don’t take it personally (especially not Kea).

Best wishes

Lubos

Well now, Lubos, what if we told you that the Fourier transform was an arithmetic operator in a von Neumann algebra implementation of Langlands duality, in the same sense that Witten’s j invariant is.

I don’t know about Lubos, but I would say, “What the heck is ‘Witten’s’ J-invariant?”

Come on, I’m sure you’ve looked at Witten’s recent work on 3d gravity. I was merely using a short description of it.

His work on 3D gravity? Then I know what “J-invariant” you’re talking about. But, no, it’s not “Witten’s J-invariant.” However, I have no idea what “a von Neumann algebra implementation of Langlands duality” might be, let alone what such a beast might have to do with Witten’s work on 3D gravity. Care to elucidate?

Hi,

I’m fairly new to this blog, but I must say it’s a terrific blog. I’m enjoying this discussion and I very much look forward to the LHC results!

I want to point you to a few things. First, is this Lubos Motl work, which is extremely intriguing:

Also, a collection of Lubos’ greatest works from his very own blog:

http://prime-spot.de/Bored/bolubos_short.doc

It’s a good collection of his insults, as well as his racist and sexist remarks. Very interesting.

For example,

“you are just a tiny piece of a waste product of metabolism who got into the system mainly because of quotas on the female reproductive organs.”

“LM: Wow, Prescod-Weinstein seems to be a real hardcore. It’s an extremely serious problem that people like that are penetrating into the intellectual spheres. It’s not hard to see http://scholar.google.com/schola…escod- weinstein that she has 0 citations in total right now but she already feels qualified enough to be firing virtually all non-leftist professors from the Ivy League I know. If I knew, back in the 1950s, that this is what would inclusion of women into the Academia lead to, I would have definitely opposed such a step because this is potentially about a complete liquidation of a scholarly, rational discourse. Incidentally, Chanda might be obsessed with these progressive things because she is not just black and female but also lesbian,…”

“Anonymous: Can you explain us why a deparment like African Studies can not be compare to other deparment and can only create a “false impression of balanced” while “balance cannot exist because of facts of biology”? Speak your mind, you are a free man now.

LM: Of course that I can. It’s been explained many times. A group of people, in this case blacks, whose mean IQ is 1.1 standard deviations below the average of another group, in this case whites, simply cannot be expected to have a proportional representation in the Academia. The larger-than-sensible departments of African and African American studies are partly designed to reduce the white-black gap in the universities by this specifically engineered field that selectively attracts blacks. “

Jacques, it seems clear that Kea is secretly just an algorithm that strings together buzzwords to create phrases that are meaningless but sounds impressive to outsiders; why demand explanations?

Dear Kea #96,

if you told me that, I would laugh a bit.

Witten’s j-invariant would be news for me, too. If you want to know, the j-invariant was discovered by Felix Klein in the 1870s which is about 130 years ago, not by Witten a year ago.😉

However, Fourier transform is something else. It is an even more basic mathematical operation. There are four basic mathematical operations: addition, subtraction, Feynman’s path integral, and Fourier transformation.

Whoever looks for mystery behind these things is ill-informed.

On the other hand, the Langlands duality or j-invariants have a lot of non-trivial information in them. So it’s perfectly possible that by some huge oversimplification, j-invariants and Langlands issues have something to do with the Fourier transform, but it is surely not too interesting because Fourier transform is a trivial thing.

So even without seeing your notes about relationships that you probably want to look “mysterious”, one can see that there is nothing mysterious about them because they’re just nonsensical.

Best

Lubos

The use of the Fourier transform is pretty simple and it doesn’t leave a lot of room for manipulation. Take the discrete Fourier transform of the leptons and you end up with the Koide formula, which is exact. The only manipulation you have to perform is to use the square roots of the masses rather than the masses themselves. The resulting formula for the square roots of the masses is

1 + sqrt(2) cos(2/9 + 2n pi/3),

times a constant, and is quite exact. See Modern Physics Letters A, Vol. 22, No. 4 (2007) 283-288. That formula extends to the neutrinos, with a different scale as

1 + sqrt(2) cos(2/9 + 2n pi/3 + pi/12),

see hep-ph/0605074.
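The charged-lepton relation quoted above is easy to check numerically; a minimal sketch (the parametrization as quoted, plus PDG charged-lepton masses that I am supplying as inputs, not taken from the comment):

```python
import math

# Quoted parametrization: sqrt(m_n) proportional to 1 + sqrt(2) cos(2/9 + 2n pi/3)
sqrt_m = [1 + math.sqrt(2) * math.cos(2 / 9 + 2 * n * math.pi / 3) for n in range(3)]
masses = [s ** 2 for s in sqrt_m]

# Koide ratio (sum of masses) / (sum of sqrt-masses)^2 equals 2/3 identically,
# because the three cosines sum to zero and their squares sum to 3/2
koide = sum(masses) / sum(sqrt_m) ** 2
print(koide)  # 2/3 up to rounding

# The same ratio from measured charged-lepton masses (MeV, PDG values):
m_e, m_mu, m_tau = 0.51099895, 105.6583755, 1776.86
koide_exp = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2
print(koide_exp)  # close to 2/3
```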

Discrete Fourier transform theory tells you that any 3×3 unitary matrix is equivalent to the sum of a 1-circulant and a 2-circulant matrix. Writing the MNS matrix in this form turns it into a stunningly simple form built from sqrt(2) and +- 1 and i.

What’s more, the weak hypercharge and weak isospin quantum numbers of the quarks and leptons are the solutions of a simple equation involving a 3×3 1-circulant matrix A and a 2-circulant matrix B. The simple equation is (A+B)(A+B) = (A+B).

Ignore away, we’re busily writing it up.

Dear Carl #102,

don’t tell me that you don’t really see that everything you write is completely meaningless numerology.

It’s not possible for complicated quantities such as the low-energy lepton masses to be expressed by similar childish formulae because these quantities are obtained by non-integrable RG running differential equations from some high-energy values that are the only ones that have a chance to be of a relatively “analytical” form.

The Modern Physics Letters paper by Gerald Rosen unfortunately made one more mistake: it offered a prediction. That’s a mistake because any numerological paper that is as stunningly silly as this one and that offers a prediction is going to die instantly. Rosen’s predicted top quark mass was 177.698 GeV. That’s about 5 sigma from the currently measured central value 172.4 GeV. Dead. To a large extent, it was already dead in 2006 when he wrote the nonsense. That’s like a 3% error. What about the advertised 0.001% accuracy? It never works. Even at the level of numerology, these are very lousy attempts.

Moreover, if you think that these funny cosine-sqrt formulae have something to do with compositeness (or any other semi-meaningful concept in particle physics), a physician could be a more appropriate solution.

Best

Lubos

Dear Lubos

I am again very with what you write regards quantity vs quality and need for good solid work on pillars and concepts rather than guesses with theoretical technology.

Personally I do think it is worth spending time thinking about why these things happen. But I am only a scientist, not a sociologist. I do feel there are many theorists, not too many for the earth but maybe too many for the discipline. All of them need and try to emerge (in all senses of the word, from the most profound ones to the superficial need to get a job). Inevitably the quality suffers. Maybe there is a bland hostility here towards the present situation in theory, but it is not an anti-scientific sentiment and certainly not an anti-theory sentiment. It is instead an anti-guessing and too-little-pillar-and-conceptual-work sentiment. The present guessing and theoretical tinkering has an artificial smell to it that I don’t like. As a pro-science and pro-physics person I hope that large quantities of good data will flow in to bring fresh air and set things straight.

But I am repeating myself.

Jeff

OOPs.

I forgot to write the key word.

“I am again very PLEASED with what you write…”

Freudian slip?

Yes, the discrete Fourier transform is a very simple thing, as is its relation to particle masses and mixing matrices. Numerology? We shall see.

I am not claiming to understand very much about Langlands correspondences. By the way, it is not geometric Langlands that is of particular interest here, but rather the classical case. Since the motivation for studying simple Fourier operators (and braid group representations) comes from the higher operad combinatorics that generalise vertex operator algebras, one leaves the difficulty of axiomatising the continuum geometry in a topos theoretic way until later. This would only be essential for, say, recovering General Relativity, which has nothing to say about mass quantum numbers anyway. Instead one focuses on the finite field case, and one still expects to study a physical electric-magnetic duality which is not trivial. Perhaps the best argument for this involves the study of full U duality, and its connection with quantum computation as studied by Kallosh, Duff et al. But I’m sure you know more about that than I do.

As for the j invariant … I was assuming you were both intelligent enough to understand what I meant. Apologies for being mistaken. Anyway, the j invariant is important as the Belyi map which characterises the Grothendieck-Teichmuller tower. This is of course far more fundamental than Witten’s use of it, but his connection to CFT is illustrative (for d=2) of the correspondence between cardinality and categorical dimension that we have in mind when we study (Schwinger type) measurement algebras for d outcomes. That is, d=2 is associated with QM spin, and d=3 with (rest) mass. One expects all stringy dimensions to obey the same rules. For example, the 11 dimensions of M theory come from (i) 6 punctures on a sphere, (ii) 3 punctures on a torus (1 hole) and (iii) 2 holes on the remaining moduli of twistor dimension. The point being that stringy space is not really space in the naive sense, but degrees of freedom for measurement algebras. I’m guessing there are many string theorists who think along these lines anyway, although one doesn’t hear about it and it is a lot more like NCG.

Sorry, I’ve been waitressing all day long, and I’m very tired. If you were interested, you could follow our blogs anyway.

Dear Goffredo #104,

my comments were meant to be pro-theory, anti-phenomenology, in the particle physics sense. More precisely, I think that scientists in an “ideal world” should study things where the expectation value of the value of the results is maximized. So they must choose sufficiently ambitious but sufficiently realistic goals where they have a chance.

If a model is extremely likely to be wrong, e.g. because it is too arbitrary and there are too many similar models that have a similar chance, its details contribute pretty much zero to the expectation value of the value of the paper about it.😉

On the other hand, if one analyzes some qualitative or other features of an unrealistic model – a matrix model, for example – that won’t describe the world accurately but that has a good chance to teach us something universal about physics that will be useful later, it may be a less spectacular approach for the public but the expected value may be higher.

There are all kinds of waves where different behavior, later seen as irrational, is encouraged. The analogy with the bubbles in financial markets is obvious. When phenomenologists don’t find better topics and there are no new experiments, it’s obvious that they will be building and calculating random models that “could” be true.

Of course, the probability that a specific model like that is strictly true is very tiny while the authors always “overhype” the actual probability that their paper will be relevant for physics. This bubble bursts at the moment when the new experiments actually falsify these papers – it’s like a full-fledged bankruptcy of many companies at the same moment. A very healthy thing, in the given context.

I feel that once the LHC data start coming, people will realize that we wouldn’t have lost too much if we had simply waited for the experimental results. Historically, people learned many or most (but not all) important things from the experiments. When the experiments become difficult and expensive, it is obvious that the fraction of theoretically predicted results should increase, because theory becomes relatively “cheaper”.

But still, one shouldn’t forget that the experiments are not “infinitely expensive”. In a year or so, we may have data that can be followed by a short calculation that will tell us everything relevant about the TeV phenomenology and that will make thousands of papers about the TeV phenomenology written in the recent era useless. Only a tiny fraction survives, if any.

So I am confident that the LHC will burst a hep-ph bubble while the importance of most theoretical results in string theory won’t be affected much because this topic is studied for more lasting purposes.

But even if I am right, science can only be done by real people, much like the financial markets. People are not immune to bubbling temptations. People are not 100% objective. If someone (or many people) is rewarded for meaningless guesswork, of course the number of people who will be guessing will keep on increasing. The same threat can occur with other types of assets and commodities, too.

If people are rewarded for formal work that is actually disconnected from the observable phenomena (and I surely have to emphasize, on blogs like this one, that this doesn’t include most of the work on string theory, which is extremely physical), they will do more formal work, too. Some irrationality always controls the medium term, but in the long run the actual events and experiments should restore the balance (although sometimes they overshoot, too).

I am confident that it makes no sense to try to regulate these things because the regulators typically have a much worse idea about the value of different things than the imperfect but good enough experts.

Best wishes

Lubos

Lubos

VERY well said. Agree 100% (I would like to violate unitarity).

Ciao and good work

Let me underline this part of Lubos #103:

“… these quantities are obtained by non-integrable RG running differential equations from some high-energy values that are the only ones that …”

Indeed any low-energy fitting (aka “numerology”) reneges on the RG GUT unification, at least for the Yukawa coupling. I think that Carl is aware of it, and it is not so bad as it seems. One could assume that GUT fixes the Yukawas to be 1 for the top and 0 for all the others, and that some other mechanism at low energy produces the contributions to the masses. Or one could abjure GUT as it is and bet on TeV-scale unification… some part of the business of large extra dimensions is related to this, isn’t it?

Tony #49:

“Does it not indicate to you that, if the non-supersymmetric Standard Model is correct, then the Gfitter and Tevatron analyses need to be examined and perhaps corrected in some way ?”

No, I do not think so. Statistically, the results are still compatible at a very good level. Look at the first plot above: the curve has a delta-chisquared value of about 1 to 2 with respect to the minimum, for Higgs masses of 115-135 GeV. That is really nothing to worry about.

Cheers,

T.

Phenomenologist #100: Kea is a regular here, a very esteemed colleague if you ask me, and a charming lady. You instead are unknown here, although you did leave an email which is probably real. I suggest you treat Kea with some respect… unless your claim was some sort of twisted humor. I love twisted humor, as long as I understand it.

Cheers,

T.

Dear gregory #69,

thanks for visiting – yes, we must look like monkeys in a cage. But it is a very nice cage. It has no bounds but plus and minus infinity, and it is carved in hundreds of years of pure thought. There is no door, so enter and exit at your will.

Cheers,

T.

Dear Alejandro #110,

your statement about “renegading of unification” is far too weak. Any numerology or simple formulae “predicting” low-energy parameters disagrees not only with the GUT unification but also with the basic thesis that the laws of physics follow analytical rules at short distances while the long distance physics is “derived”.

In order for any kind of such numerology (predicting exact masses etc.) to have any chance to be true, you would have to abandon not only grand unification but any other idea from high energy physics, too.

I am afraid that this is an unpopular idea with Carl, you, Kea, and many others – that you would prefer it if someone told you how to change sqrt into rqst in your formulae, or something like that (the shape of the wooden earphones in a cargo cult). What I say is unpopular with you, but it is also obviously true.😉

Best wishes

Lubos

Let me emphasize this point once again. “High energy physics” is called “high energy physics” exactly because these numerological things directly involving low-energy physics are impossible. If you think that they are possible, you would have to establish a new discipline of physics, namely “low energy physics that wants to become a theory of everything”. I am sure that not only charming Kea but also Robert Laughlin and others would happily join you.🙂

Unfortunately, such a discipline could never describe this universe, because the phenomena studied with a better resolution (at short distances) always contain more information, while the long-distance phenomena and effective laws are always derived, or “approximations” of the full laws. And this asymmetric role of long and short distances can’t be reversed.

It is the short-distance laws that are fundamental. The longer distances one considers, the more derived, chaotic, and mathematically intractable the effective laws will become – starting from low-energy effective field theories and continuing through chemistry and biology to economics and sociology, among many other steps.

Why, but Lubos, string theory teaches us that physics at small scales can look exactly like physics at large scales.

Dear Kea #116,

you probably refer to T-duality, UV/IR connections, etc.

These things teach us that phenomena at distances or radii R are equivalent or related to phenomena at distances or radii Lstring^2 / R. And Lstring is close to Lplanck.

So it tells us that sub-stringy or sub-Planckian distances are as non-fundamental as low energies. But of course, because it is a theory of quantum gravity, physics at the stringy or Planckian distances *is* fundamental, and that is a far shorter distance scale than the scale where you evaluate your Standard Model parameters.
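A toy numerical illustration of that statement (my own sketch, not anything from the discussion): in units where the string length is 1, the momentum/winding spectrum of a closed string on a circle of radius R coincides with the spectrum at radius 1/R, with momentum and winding modes exchanged. Oscillator contributions are dropped for simplicity.

```python
# Closed-string mass formula on a circle of radius R, in units where
# the string length is 1 (oscillator contributions dropped):
#   M^2 = (n / R)^2 + (w * R)^2,  n = momentum mode, w = winding mode.
# T-duality maps R -> 1/R while swapping n and w, leaving the spectrum unchanged.

def spectrum(radius, nmax=5):
    """Set of M^2 values over a small grid of momentum/winding modes."""
    return {(n / radius) ** 2 + (w * radius) ** 2
            for n in range(-nmax, nmax + 1)
            for w in range(-nmax, nmax + 1)}

R = 2.0
assert spectrum(R) == spectrum(1.0 / R)  # dual radii give identical spectra
```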

Best wishes

Lubos

I’m a bit confused about where you get the Planck scale from. With respect to what do you define this scale?

Lubos, one can of course make tree-level mass approximations using lattices and in theory make improvements to the approximations with new ideas.

“It’s not possible for complicated quantities such as the low-energy lepton masses to be expressed by similar childish formulae because these quantities are obtained by non-integrable RG running differential equations from some high-energy values that are the only ones that have a chance to be of a relatively “analytical” form.” – Lubos Motl, comment #103

Supposedly the dynamics of quantum gravity involves Feynman diagrams in which gravitons are exchanged between Higgs-type massive bosons in the vacuum, which swarm around and give rest mass to particles. In the string theory picture, where spin-2 gravitons carry gravitational charge and interact with one another to increase the gravitational coupling at high energy, it is assumed – then forced to work numerically by adding supersymmetry (supergravity) to the theory – that the gravitational coupling increases very steeply at high energy from its low-energy value and becomes exactly the same as the electromagnetic coupling around the Planck scale. So in that case, particle masses (i.e. gravitational charges) at the highest energy are identical to electromagnetic charges.

Hence, if forces unify at the Planck scale, as forced by string-theory assumptions about supersymmetry, then mass and electric charge have the same (large) value at the Planck scale, and you can predict the masses of particles at very high energy (bare gravitational charge).

So even if string theory were true, you could predict lepton masses by taking the unified interaction charge (coupling) at the Planck scale and correcting it down to low energy by using the renormalization group. This is what your comment seems to be saying, given that you’re a string theorist.

The running coupling for quantum gravity (gravitational charge, i.e. mass, increasing with collision energy, i.e. increasing as you get very close to a particle), which hasn’t been observed experimentally, is supposed to work by gravitons (being gravitational charges themselves) exchanging gravitons with one another in strong fields approaching the Planck scale. You get a dramatic rise in effective gravitational charge (mass) as a result of the feedback effect which multiplies the effective mass of a particle at high energy, just because of the gravitons producing more gravitons and so on. This relies on the graviton having spin-2. E.g., in electromagnetism you get forces mediated by *uncharged* spin-1 gauge bosons. Allegedly you need spin-2 to get universal attraction, but this argument is false because the calculations defending it assume wrongly that there are only two masses in the universe! Duh! Obviously any two masses will be exchanging gravitons with all the other masses, and the convergence of these gravitons from distant immense masses will produce effects that exceed the small exchange of gravitons between two relatively small masses nearby. So you don’t need spin-2: masses get pushed together because they exchange gravitons more strongly with distant immense masses in the universe than with nearby relatively small masses! Detailed calculations show that gravity (on scales up to galaxy clusters) results from spin-1 graviton exchange between masses, while over greater distances you get repulsion between masses (similar charges): this accurately predicts the acceleration of the universe as well as the strength of gravitation.

So the renormalization group calculations for gravitational charge, mass, based on spin-2 gravitons are wrong. The only way anyone in the mainstream can get anywhere is by censoring the calculations out of publication in the proper places.

“You can see that the mass of 170 GeV has just been excluded by the combination of the two experiments’ results.”

Mon Dieu! Zees ees, lack, mah worst nahtmairrrrrre!! Ah need to go to PahRIS and drink some wahn! Haw haw haw haw haw!

Lubos #114 explains the problem with the Koide formulas, etc., beautifully and succinctly thus: “Any numerology or simple formulae “predicting” low-energy parameters disagrees not only with the GUT unification but also with the basic thesis that the laws of physics follow analytical rules at short distances while the long distance physics is “derived”.”

Lubos #115 continues: “It is the short-distance laws that are fundamental. The longer distances one considers, the more derived, chaotic, and mathematically intractable the effective laws will become – starting from low-energy effective field theories and continuing through chemistry and biology to economics and sociology, among many other steps.”

Apparently Lubos missed some of his undergraduate classes in physics. Consider the excited states of the hydrogen atom: their energies (and therefore the mass of the excited atom) follow a very simple law, first noticed well before the invention of quantum mechanics. What I’ve been working on is the algebra of quantum bound states.
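That pre-quantum law is just the Rydberg formula, E_n = -R_H / n², and it is quick to check; the constants below are standard values, and the reduced-mass correction is ignored in this sketch.

```python
# Hydrogen levels from the pre-quantum Rydberg/Bohr formula: E_n = -R_H / n^2.
# Standard constants; reduced-mass (and other) corrections are ignored.
RYDBERG_EV = 13.605693    # Rydberg energy R_H in eV
HC_EV_NM = 1239.84193     # h * c in eV * nm

def energy_ev(n):
    return -RYDBERG_EV / n ** 2

def line_nm(n_upper, n_lower):
    """Wavelength of the photon emitted in the n_upper -> n_lower transition."""
    return HC_EV_NM / (energy_ev(n_upper) - energy_ev(n_lower))

print(round(line_nm(3, 2), 1))  # Balmer H-alpha, close to the observed 656 nm
```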

What I’m working on is quite different from high energy field theory where one assumes that the initial and final states are free particles. Instead, I assume that the initial and final states are the particles still in the bound state. Unlike standard high energy particle theory, my representations of bound states ARE energy eigenstates with no time dependence. From there, it’s simply a matter of defining states so that they can be used recursively to create bound states that, algebraically, act just like single particle states. Kea has a more complicated way of explaining it, but that’s the core idea.

Lubos continues: “In order for any kind of such numerology (predicting exact masses etc.) to have any chance to be true, you would have to abandon not only grand unification but any other idea from high energy physics, too.”

As far as logic goes, this seems a bit of a stretch. One might as well have argued in 1900 that if the Rydberg formula really did apply to atomic spectra, we would have had to dump all of classical physics. So, does that mean that exploring these formulas was a mistake? Grand unification using the usual techniques has absorbed something like a billion physicist-hours. Maybe it’s time to abandon it. They say that doing the same thing over and over, despite getting the same negative result, is a sign of insanity. With so many millions of hours of failed work already poured down that rat hole, I think you have to be a nut case to continue down Lubos’ path.

No. Advances in physics are sometimes made by observing bizarre coincidences that later end up explained by models. In the case of our coincidences involving the discrete Fourier transform, I think the implication is that the elementary particles are composite. Others are free to think up other ideas.

For example, Gerald Rosen took my formula and applied it in a way that I disagree with, but my point in linking to him is to show that the formulas are interesting to real physicists other than the usual suspects. The formulas are accurate to a few parts per million and are stunningly simple. As far as accuracy goes, that’s similar to the Rydberg formula before later corrections.
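For readers who have not seen these relations: the best known of them, the Koide formula for the charged leptons, can be checked in a few lines. The sketch below is my own; the masses are approximate 2008-era PDG values in MeV.

```python
# The Koide relation for the charged leptons:
#   Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2
# empirically sits remarkably close to 2/3.
from math import sqrt

m_e, m_mu, m_tau = 0.510999, 105.658, 1776.84  # MeV, approximate PDG values

Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2
print(Q)  # close to 2/3
assert abs(Q - 2.0 / 3.0) < 1e-4
```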

Dear Lubos #114, finally with your comment I have understood your interpretation of “cargo cult” here! I tend to think that your interpretation differs both from Feynman’s and from actual anthropology; well, it is a topic that I am sure will resurface on your blog from time to time. But yes, my homepage (which I am linking above in the “name” field) could be called an “imitation of science” instead of science. I prefer to say that I am lazy.

On GUT and predictions: I do not agree with your definition of HEP either. To me, High Energy is up to 300 GeV nowadays, and it will be up to 1 TeV or 10 TeV soon. Beyond that, we can do Gedankenexperiments.

And I would not say that it is impossible to derive mass relationships in the GeV–TeV range. For a counterexample, look at QCD, where it is done routinely. But if, for the sake of argument, you want an example involving elementary entities only, there is the mass ratio between the W and the Z, which is perfectly predicted from the coupling constants of SU(2) and U(1).
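A back-of-the-envelope version of that W/Z example, at tree level: m_W / m_Z = cos θ_W = g / √(g² + g′²). The coupling values below are assumed, illustrative numbers (roughly their running values at the Z mass), and radiative corrections are ignored.

```python
# Tree-level electroweak relation: m_W / m_Z = cos(theta_W) = g / sqrt(g^2 + g'^2),
# fixed by the SU(2) coupling g and the U(1) hypercharge coupling g'.
# Coupling values are assumed, illustrative numbers; radiative corrections ignored.
from math import sqrt

g, g_prime = 0.652, 0.357
ratio_predicted = g / sqrt(g ** 2 + g_prime ** 2)

m_W, m_Z = 80.40, 91.19  # GeV, approximate measured masses
ratio_measured = m_W / m_Z

assert abs(ratio_predicted - ratio_measured) < 0.01  # agreement at the percent level
```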

Then of course you have riskier predictions. Remember the papers of Fritzsch and Minkowski in the early seventies. Or recheck Zee’s collection “Unity of Forces in the Universe, Vol. I”; a lot of the papers were not GUT but smaller unifications (I expect that you still have access to a decent physics library, or at least to “backup CD” versions).

Last, you have the “extra dimensions at a TeV” business which, while still inspired by the GUT running, changes the perspective a bit.

Dear Lubos #114 and Nigel Cook # 119,

I wish to bring to your attention that the hierarchical structure of some Standard Model parameters (masses and gauge couplings) can be recovered either from the nonlinear dynamics of gauge field theory or from the flow equations of the Renormalization Group. These results point to universal behavior underlying the approach to chaos and criticality; see for example:

doi:10.1016/j.chaos.2006.01.117

doi:10.1016/S0960-0779(02)00092-9

doi:10.1016/j.cnsns.2006.02.006

http://www.iop.org/EJ/abstract/0295-5075/82/1/11001

Regards,

Ervin Goldfain

Dear Tommaso,

I want to say that I love your blog. Your posts are very interesting and informative.

I would like to point you to the following post from a PI researcher:

http://backreaction.blogspot.com/2007/08/lubo-motl.html

Be careful when you’re interacting with Lubos online. Try not to disagree with him and make sure that, in your dealings with him, you satisfy the demands of his overly inflated ego and his delusional sense of grandeur and importance. If you follow these steps, then you MAY not have to write a post like the one linked to above. He continues to believe that his ‘fall from grace’ last year is the result of physicists conspiring against him and trying to silence him, and not the result of his own ridiculous behavior online, as the facts clearly show.

By the way, I suggest you read the comments (especially the last 8 or so). Cheers!

But, if this were true, general relativity would be impossible to formulate in mathematical terms. Since it deals with the longest distance scales, it would be the most derived, chaotic and mathematically intractable theory. This is patently wrong.

There is actually no reason at all why an effective low-energy theory should not be simple. Conceptually, this follows from the notion of Wilsonian RG flow. Practically, it is confirmed by the fact that astronomy, classical mechanics, atomic physics, etc., are formulated in terms of simple basic laws.

Good point, low-energy theorist. I wonder what Lubos has to say about it.

Cheers,

T.

Hi Roberto,

thank you for your concerns. What you probably do not know is that the original name-calling of Lubos against Bee happened in the comments thread of a post in this very blog. I do know Lubos, and although I do find some of his habits despicable, I also like some other traits of his personality – among them, his wits and the fact that he has views opposite to mine on several issues, which makes talking to him very interesting to me.

Cheers,

T.

Dear low-energy theorist #127 and Tommaso #128,

general relativity is indeed a rather accurate and mathematically tractable science that applies to very long distances.

But the only reason why it is so mathematically tractable is that it only describes empty space – or space with a very simple representation of matter. And that’s also the reason why it works up to long distances.

However, I find it more fair to say that GR is the correct approximate description of empty space at distances (much) longer than the Planck scale😉, so in this sense, its typical distance scale is the shortest scale one can have.

The reason the simple description survives to long distances is that one only looks at those effects where it does😉 – the empty curved space, so to say.

My comment shouldn’t have been extracted from its context. I was clearly talking about large, “composite” systems that are inevitably mathematically intractable. There was no reason to expect low-energy parameters to be simple.

If you want to look at GR as a possible counterexample, it is not a counterexample. GR is an effective description valid at longer-than-Planckian distances and its constants are whatever they are. Unless protected by symmetry laws, Newton’s constant could not have “easy” values either. This is a somewhat subtle statement to make because the constant is dimensionful. But one can say that all the dimensionless ratios involving Newton’s constant are “difficult” numbers, and my comment that the derivation of the numerical values of low-energy parameters can’t be “analytical” therefore holds for GR, too.

Best

Lubos

So Lubos can defend his bizarre claim that large, “composite” systems are “inevitably mathematically intractable” against the example of GR, but he ignores the obvious large composite systems such as atoms, one of the early great successes of QM.

Let’s see, a U-238 atom has, uh, let’s see, 92 electrons, 384 down quarks, and 330 up quarks, a total of 806 fundamental fermions, and that’s only counting the valence quarks. And yet, atoms are sufficiently simple that the periodic table of the elements is taught to children in high school. Intractable mathematics?

The problem with Lubos’s analysis is that it assumes that the only possible way of making progress in understanding elementary particles is through perturbation theory. And sure, if he were looking at Feynman diagrams that involved 806 fermions he’d be completely lost. But nevertheless, these bound states have a simple symmetry.
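Tallying the valence fermions explicitly (note the total comes to 806):

```python
# Valence-fermion count for a neutral uranium-238 atom:
# Z = 92 protons (each uud), N = 146 neutrons (each udd), plus Z electrons.
Z, N = 92, 146

up_quarks = 2 * Z + N      # 2 per proton, 1 per neutron
down_quarks = Z + 2 * N    # 1 per proton, 2 per neutron
electrons = Z

total = up_quarks + down_quarks + electrons
print(up_quarks, down_quarks, total)  # -> 330 384 806
```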

There’s a simple lesson in this. A good way of avoiding the intractable mathematics that has mired elementary particles for 30 years is to look for a way of analyzing the elementary particles as basic bound states.

The lesson of the Lamb shift is that the underlying description of the quantum state should be a QM bound state. Then QFT perturbation theory can be used to work out corrections to the basic bound states.

This is how the elementary particles should be described in a preon theory. Not as a mathematically intractable QFT perturbation series, but as a mathematically simple QM bound state calculation. And if there is to be universality between the leptons and quarks, such a theory should also apply to the mesons and baryons.