
On the supremacy of US over Europe in HEP July 20, 2008

Posted by dorigo in physics, politics, science.

I just finished reading a very interesting piece by Donald Perkins (professor of Physics at Oxford University, and author of one of the most appreciated books on particle physics ever written) on the discovery of weak neutral currents. He discusses in detail the events that brought CERN to announce the discovery in 1973, and the ensuing debate following a negative result by the HPWF experiment, operating at Fermilab.

The history of the discovery is complex, so my quick and dirty summary below is going to leave much wanting. I do it anyway because I wish to introduce some considerations on the inferiority complex which plagued CERN in those years.

A paper by Weinberg and Salam in 1967 hypothesized a unification of the weak and electromagnetic interactions by postulating the existence of both charged and neutral weak currents. The latter, however, had never been observed, while their effect was predicted to be comparable in size to that of charged currents. Because of that, the paper by Weinberg and Salam was basically ignored for four years. In 1971, however, Gerard ’t Hooft proved that the unified electroweak theory was renormalizable, and things started to change. Physicists started believing in the existence of neutral currents, and set out to actively seek them.

To search for neutral current interactions one could look for neutrino collisions with atomic nuclei. In a charged current interaction, the neutrino would change into a charged lepton - typically a muon, given that neutrino beams were predominantly composed of muon neutrinos. In a neutral current interaction, instead, one would not observe any lepton downstream, but just the remnants of the nucleus and other light hadrons. These events were studied with the Gargamelle bubble chamber at CERN, which used a neutrino beam obtained from a 26-GeV proton beam. The typical signal, the appearance of a star of hadronic tracks, could be mocked by neutrons produced upstream, and the difficulty in calculating the rate of those events made the discovery of true neutral current events hard.

Another way one could observe neutral current interactions of neutrinos was through the collision of the neutrino with an electron: from the reaction \nu_\mu e \to \nu_\mu e one would only observe an energetic electron coming out of the blue, at a very small angle from the beam direction. Those events were however very rare, and in 1973 only three were found in more than a million pictures of the bubble chamber at CERN.

At the Bonn conference in August 1973 Gargamelle reported ratios between neutral and charged current interactions from the neutrino and antineutrino beams of 0.21±0.03 and 0.45±0.09, respectively. The different behavior of antineutrinos was expected in the unified electroweak theory, but the results were initially greeted with skepticism, and for a while the CERN experimentalists were under a considerable amount of heat.

The reason was that the Fermilab experiment, which had initially reported (by Rubbia, at the same conference) a value of R=0.29±0.09 for the combined effect of neutrino and antineutrino interactions (which were not separable in the wide-band beam of Fermilab), had later claimed (although not published) a result consistent with zero contribution from neutral currents.

Let me now quote Perkins, because I find his account of the situation enlightening:

Today, CERN prides itself on being the world’s leading high-energy physics laboratory. Whether or not this is so, it is clear that 20 years ago [the article by Perkins was written in the nineties -TD], things were quite different […] CERN unfortunately did not have a similar reputation in its physics, and was still recovering from disasters […]. And during the 1960s it had been repeatedly beaten to the ground, for example, over the discovery of the \Omega^- hyperon, the two types of neutrinos, and CP violation in K^0 decay. All these things could and should have been found first at CERN, with its far greater technical resources, but the Americans had vastly more experience and know-how. Even today, the scoreline in Nobel laureates in high-energy physics (counting from the end of WWII) tells the story: United States 26, Europe 6.

It is important to understand this legacy of inferiority in considering the attitudes at the time of people in CERN over the Gargamelle experiment. When the unpublished (but widely publicized) negative results from the HPWF experiment started to appear in late 1973, the Gargamelle group came under intense pressure and criticism from the great majority of CERN physicists. […] many people believed that, once again, the American experiments must be right. One senior CERN physicist bet heavily against Gargamelle, staking (and eventually losing) most of the contents of his wine cellar! […] It is indeed a dramatic testimony to the rapidly changing fortunes in the world of high-energy physics that what was undoubtedly the principal discovery during the first 25 years of the CERN laboratory was to be greeted initially with total disbelief by the vast majority of CERN physicists.

I wonder how the matter is perceived nowadays, fifteen years after the above words were written. In the meantime the top quark has been discovered, B_s mixing has been measured, new baryons have been found: all of that at Fermilab. By contrast, the LEP II experiments have basically been a fiasco, adding little to our knowledge of subnuclear physics except maybe a precise W mass measurement which is going to be surpassed by CDF alone very soon.

Comments

1. nige cook - July 20, 2008

Thanks for the interesting history of neutral currents. Maybe you might take the space to explain what neutral currents actually are physically, e.g., the exchange of neutral weak gauge bosons (Z0) between quarks or leptons? The motion of a charged weak boson (W+ or W-) constitutes an electric current in Maxwell’s equations, so a deflection or scattering of the particle implies a modification to the symmetry of the electric current, which changes its energy, implying a gauge interaction with an exchange of field quanta by Noether’s theorem.

The 1967 theory predicting weak neutral currents was just a new modification of the interpretation of the Yang-Mills SU(2) theory. Glashow’s Nobel Lecture of 8 December 1979 states:

‘Schwinger, as early as 1956, believed that the weak and electromagnetic interactions should be combined into a gauge theory. The charged massive vector intermediary and the massless photon were to be the gauge mesons. As his student, I accepted his faith. … We used the original SU(2) gauge interaction of Yang and Mills. Things had to be arranged so that the charged current, but not the neutral (electromagnetic) current, would violate parity and strangeness. Such a theory is technically possible to construct, but it is both ugly and experimentally false [H. Georgi and S. L. Glashow, Physical Review Letters, 28, 1494 (1972)]. We know now that neutral currents do exist and that the electroweak gauge group must be larger than SU(2).’

So the Schwinger and Glashow 50s work used SU(2) with its two charged vector bosons mediating charged weak interactions, and the neutral vector boson mediated electromagnetism. So they were trying to use SU(2) in place of what is now U(1) x SU(2). The essential problem was that they omitted the neutral weak gauge bosons, believing that only one kind of neutral gauge bosons existed, the photons of electromagnetism.

However, when you look at U(1) x SU(2) in today’s standard model, it’s not a neat theory in which U(1) is electromagnetism and SU(2) is the weak force. The photon doesn’t come directly from U(1) and the Z0 doesn’t come directly from SU(2). Instead, there is an epicycle-type fiddle called the Weinberg mixing angle, which mixes up the properties of the B gauge boson from U(1) with those of the W0 gauge boson from SU(2) to give the electromagnetic photon and the weak neutral gauge boson Z0.
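[For reference, the mixing being described here is the standard textbook rotation by the Weinberg angle \theta_W, turning the unobserved B (from U(1)) and W^0 (from SU(2)) into the physical photon A and the Z^0:

A = B \cos\theta_W + W^0 \sin\theta_W

Z^0 = -B \sin\theta_W + W^0 \cos\theta_W

so the photon and Z0 are orthogonal linear combinations of the two neutral gauge fields. -TD]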

I was disappointed to discover this. As a kid I read Feynman’s book QED, where in Chapter 4 he describes the U(1), SU(2) and SU(3) gauge theories as being separate, not mixed up! E.g., on page 141 he states that the Z0 is a particle ‘which we could think of as a neutral W’, and he repeats this error in figures 87-88. In Figure 88, he suggests that because the coupling for the Z0 is similar to that for the photon, they ‘may be different aspects of the same thing’.

This completely ignores the Weinberg mixing angle, which is exactly how the two neutral gauge bosons are related in the standard model: they’re products of a mixture.

What’s really a mess here is that you have to mix up the neutral gauge bosons of two separate symmetry groups, U(1) and SU(2), to get two mixture products which fit observations. There is no reason given for why this should be so, and most popular expositions cover up this ad hoc fiddle.

The real solution is quite different. If you look at SU(2), you see that it actually has three massless gauge bosons, which acquire mass at low energy by a separate unobserved mass-giving (Higgs) field.

Instead of adding on U(1) for electromagnetism and having all the problems of electroweak symmetry breaking and the ad hoc Weinberg mixing angle of neutral gauge bosons from the two gauge groups, why not replace the ad hoc Higgs field model with something more predictive and falsifiable, so that at low energy mass is only given to a portion of the gauge bosons of SU(2)? This would mean that at low energy, some of the three gauge bosons of SU(2) would remain massless, and would then generate infinite-range electromagnetic and gravitational interactions. The two massless charged gauge bosons would be able to propagate as exchange radiations (in two opposite directions along each path between charges as they are exchanged) because when passing both ways along a path the magnetic field curl of each propagating massless charged component cancels out that of the other, thus overcoming the usual problem with the propagation of charged massless radiation (i.e. magnetic self-inductance). These massless charged gauge bosons respectively produce positive and negative electric fields, and where those fields move relative to an observer you see magnetic fields appear (because the cancellation is then imperfect). The neutral massless gauge boson is not the electromagnetic field quantum, but rather the graviton. I’ve compiled evidence that the idea that gravity is a massless version of the weak neutral current does lead to a lot of falsifiable predictions, some of which have already been confirmed over the last decade.

2. dorigo - July 20, 2008

Hi Nige,

thank you for your long and useful message, especially for the quote by Glashow. As for generating the observed pattern of the Standard Model with something else than a mixing between U(1) and SU(2), I have my doubts that anything different from the Standard Model -i.e. something that does not incorporate it as a low-energy limit or in some other way- can produce the level of agreement with data that we have seen since the seventies.

Cheers,
T.

3. nige cook - July 20, 2008

Hi Tommaso, thanks, but the reason for the excellent agreement between the electroweak portion of the Standard Model and empirical facts is precisely the ability to fine-tune the Weinberg mixing angle so that it works.

The only reason why the Standard Model’s electroweak theory works well is because the Weinberg mixing angle is 26 degrees (i.e. sin^2 theta = 0.2).

If U(1) described electromagnetism and SU(2) described the weak interaction, then the Weinberg mixing angle would be precisely zero. This seems to be the basis for Feynman’s misleading claim that the Standard Model is three separate field theories.

There is no theoretical reason or prediction that the Weinberg mixing angle is 26 degrees. It’s just a fudge factor in the Standard Model that makes it work. It’s not a rigorous theory, but an empirical model containing a mixing of gauge bosons to ensure that couplings and interaction rates match those of observations.

The mixing angle 26 degrees is the difference in vector contributions of the Z0 and the W0. If they were the same thing (as Feynman claimed) then the Z0 would be the W0. It isn’t. Both the observed photon and the observed weak neutral Z0 can only be produced in the Standard Model by mixing together the unobserved B predicted by U(1) with the unobserved W0 predicted by SU(2).

Because there is no physical or theoretical reason for this mixing other than forcing the Standard Model to agree with experimental data, I disagree with your claim that it would be hard for another theory to be as successful as the Standard Model. If you’re allowed ad hoc adjustment factors such as mixing angles to mix up the gauge bosons predicted from different symmetry groups, then it’s easy to model things. In any case, all I’m suggesting is a simplification of the Standard Model: get rid of U(1) and replace the unobserved Higgs field model that currently gives 100% of the weak gauge bosons mass at low energy with one that gives only half of them mass (e.g., only the left-handed spinor interactors, modelling the left-handedness of weak interactions); then the massless SU(2) gauge bosons remaining present at low energy will generate two long range interactions, electromagnetism and gravity. Easy.

It’s now just a matter of formalizing the theory by finding the lagrangian, which should be possible by correcting the lagrangian of the Standard Model, in particular the Higgs sector.

4. LuboΕ‘ Motl - July 20, 2008

The LHC may change it. Or not. Meanwhile, there’s a lot of mess in Europe, especially Italy.

The Italian archives are responsible e.g. for the translation and distribution of the EU documents. It just happened that the new EU members after 2004 have not yet been given the ratification treaties – so we can’t really ratify our membership.

That’s because of a mess caused by the gipsies or what’s the name of the inhabitants of Italy. 😉

5. superweak - July 20, 2008

LEP II was a fiasco? Really? I can make the same claim about the Tevatron Run II program – if you think the best you can claim is the Bs mixing measurement, that’s pretty sad. LEP I’s exploration of the Z pole was absolutely superb and doesn’t deserve to be dismissed so lightly. Besides, if you think that LEP is all that CERN has contributed in recent times, NA48 would like to have a talk with you.

6. dorigo - July 20, 2008

Thank god, somebody is still awake at the end of my posts! 😉

I admit it, mine was a provoking statement. By the way, I said nothing of the other experiments at CERN. And what’s the matter with LEP? I did not mention it, and its results were part of the reason why Perkins was claiming that CERN had the best results to show - 15 years ago.

In any case, you would be alone if you made the claim that Run II of the Tevatron was a fiasco. Besides the Bs, new B baryons were found and measured. The B meson lifetime measurements rival those of the B factories. The W and top mass errors are twice as good as predicted. WZ and ZZ production is measured. I could go on….

Instead, back to my original, and only, statement: what did LEP II (mind the II) really add to our knowledge of HEP? Take your time to answer.

Cheers,
T.

7. dorigo - July 20, 2008

Lubos, so sorry for the Czech Republic to have its troubles with the EU. You should console yourself with Groucho Marx: “I would never join a club that accepted me as a member”.

Cheers,
T.

8. Guess Who - July 20, 2008

Come on “superweak”, surely it was clear from the lead-in calling Perkins “one of the most appreciated books on particle physics ever written” that this post was written very much in jest? 🙂

Nigel clearly got the spirit of the thing. I mean, “just a matter of formalizing the theory by finding the lagrangian, which should be possible by correcting the lagrangian of the Standard Model, in particular the Higgs sector”… yeah, child’s play. 😀

Nige, you jester you.

9. superweak - July 21, 2008

You’re seriously claiming that the Higgs mass limit didn’t add anything to HEP? Or the many SUSY searches? The large extra dimensions searches? Measurements of the Ds decay constant? The meson spectroscopy with two photon collisions? And you’re claiming that because Tevatron experiments can do precision W mass now, LEP II running was pointless? Clearly since LHCb will supersede many of your results in short order, you should disband your B group at once!

10. Kea - July 21, 2008

Nigel, assuming you are at least partly serious, which I strongly suspect, why would anybody be looking for a Lagrangian? People have tried LOTS of Lagrangians. Why not take the hint from more modern perspectives on Feynman diagrams, involving noncommutative geometry, that the correct formulation is not written in terms of a classical Lagrangian? Lie groups are pretty specific objects, after all, that were inevitably used given the options available to mid 20th century physicists. But now we have so many more options …

11. dorigo - July 21, 2008

Hi Superweak,

no, the Higgs mass limit was an important result. The others, well, much less so. Even the W mass results improved the world average by a quite small amount, objectively. But the point, in my opinion, is whether ground was broken or not. It wasn’t. LEP II will not win any Nobel prizes, while the Tevatron should at least win a couple IMO – now whether it will or not has to do with other factors, extraneous to the objective value of the results.

Please also note that the perception of a “fiasco” (but you certainly have appreciated that I have written that tongue-in-cheek) is enforced by the fact that many really thought that the Higgs would be found. I remember many discussions with LEP II physicists at the end of the nineties: their expectations were huge. This, if you will, is the reason for the disappointment. And, as I mentioned in a recent post about Lederman, it again shows that success in HEP requires, first and foremost, a huge amount of luck.

Cheers,
T.

12. dorigo - July 21, 2008

Ah, and, to be sure: my bids for Nobel prizes in Physics are the following:

– Luciano Ristori, for conceiving, designing, building, and operating the Silicon Vertex Trigger, an innovative, daring new device, which has allowed CDF to obtain first-class results in the B-physics sector, furthering our understanding of hadronic decays and allowing the precise measurement of B_s mixing.

– Giorgio Bellettini, for the discovery of the top quark (along with maybe two or three other prominent figures of the Tevatron in the eighties and nineties).

Cheers,
T.

13. Tony Smith - July 21, 2008

Nige said:
“… there is no physical or theoretical reason for … mixing together the unobserved B predicted by U(1) with the unobserved W0 predicted by SU(2) …[to]… produce… in the Standard Model … the observed photon and the observed weak neutral Z0 …
the Weinberg mixing angle is … just a[n]… ad hoc adjustment factor… in the Standard Model that makes it work …”.

There is a theoretical calculation of the Z0 and W+ and W- masses, and therefore a theoretical reason for the magnitude of the Weinberg angle, that is substantially consistent with experimental observation.
A key underlying geometric idea is to look at SU(2) as a 3-sphere S3,
and then decompose S3 by the fibration S1 -to- S3 -to- S2
with the S2 representing the W+ and W- and the S1 representing the W0,
the W0 then combining with what Nige designates as B to produce the photon and the Z0.
The details of the calculation are complicated,
but can be found in my pdf web book on my E8 physics model at
http://www.tony5m17h.net/E8physicsbook.pdf
particularly pages 82 to 86 of the pdf (81 to 85 of the text pages).
(It is a free web book of 377 text pages plus a cover, about 10 MB pdf file.)
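[As a quick numerical cross-check of the mass relation quoted here, one can evaluate the on-shell definition sin^2(theta_W) = 1 - (M_W/M_Z)^2 directly with approximate present-day measured boson masses (these are rounded experimental values, not outputs of the E8 model) -TD:]

```python
# On-shell definition of the weak mixing angle:
#   sin^2(theta_W) = 1 - (M_W / M_Z)^2
# Masses below are approximate measured values in GeV.
M_W = 80.40   # W boson mass
M_Z = 91.19   # Z boson mass

sin2_theta_W = 1.0 - (M_W / M_Z) ** 2
print(round(sin2_theta_W, 3))  # ~0.223
```

[The on-shell number (~0.223) differs from the ~0.231 usually quoted at the Z pole because the latter is defined in a different renormalization scheme; the two are related by radiative corrections. -TD]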

I know that my E8 model is not widely understood or accepted,
and it is fine with me if Nige does not like it,
but it does exist as a “theoretical reason” with an underlying geometric picture, and in my opinion it should be acknowledged, subject to any substantive criticism that anyone may have of it.

Tommaso said that one of his “bids for Nobel prizes in Physics” is
“… Giorgio Bellettini, for the discovery of the top quark (along with maybe two or three other prominent figures of the Tevatron in the eighties and nineties). …”.
I substantially agree (although I would like to give the prize to everybody who built the machine and did the brilliant experimental work),
but
it is puzzling that, although the T-quark has pretty obviously been discovered
(whether it is a simple single-state quark or a 3-state quark as I think should be irrelevant to recognition of the brilliant experimental work,
although I think that the Fermilab practice of excluding public access to significant data is reprehensible),
there is as yet no Nobel for it.
Nobel politics can be very Byzantine/Machiavellian (for example, think of the absence of a Nobel for Wheeler) – Tommaso, do you have any ideas about why the T-quark Nobel is so late?

Tony Smith

14. Eric - July 21, 2008

Nigel,
Quantum mechanically, two U(1)’s generically are going to mix together whether you like it or not. In the SM, the Weinberg angle may seem ad hoc in the sense that it is a free parameter. However, the value of the Weinberg angle comes out automatically when one goes to SUSY GUTs.

15. MM - July 21, 2008

was the Michelson-Morley experiment a fiasco because it failed to detect the aether drag?

16. Guess Who - July 21, 2008

Eric: any GUT actually, not just SUSY ones. Plain SU(5) gives you sin^2(theta_W) = 3/8 (plus logarithmic scale-dependent corrections).

17. Eric - July 21, 2008

Hi Guess Who,
Non-SUSY GUTs come close to getting the right value for sin^2(theta_W), but they miss a little. When you go to SUSY SU(5), it comes out exactly right.

18. Guess Who - July 22, 2008

Dear Eric, the value runs, so it is affected by uncertainties in higher order corrections, mass thresholds and “known” parameters like the QCD scale. If you assume conveniently placed thresholds, SUSY GUTs can give you the low energy value you want. So can non-SUSY GUTs.

19. Eric - July 22, 2008

Dear Guess Who,
The point is that it still works better in SUSY GUTs as opposed to non-SUSY ones, precisely because the values run and the superpartners make extra contributions to the beta functions.

20. Guess Who - July 23, 2008

Dear Eric, the point was actually that the value of sin^2(theta_W) is no longer an arbitrary parameter when you embed the SM in a GUT, whether SUSY or not.

Regarding SUSY or non-SUSY, I don’t think that the statement that the former work better in this respect is meaningful in isolation. You can always tune a running value to fit low energy measurements e.g. by moving the GUT scale. Maybe you are thinking of the convergence of the couplings? That certainly works better in simple SUSY GUTs than in non-SUSY ones, and can reasonably be considered a hint in favor of SUSY.

21. DB - July 23, 2008

T.
I hardly think the discovery of the top quark, which was a completely expected and almost routine thing once the bottom quark had been found back in 1978, is ever going to win a Nobel for anybody. Its mass made it a challenge to find, but hardly a breakthrough.

The Higgs particle is another matter. It is by no means obvious or certain that it must exist.

And supersymmetry is another matter again. It’s always just around the corner, and has been for 35 years. Now that the next corner is about to be turned we’re hearing a lot about split supersymmetry, which is designed to save string theory in case the LHC doesn’t find supersymmetry.

22. Eric - July 23, 2008

Guess Who,
It is well known that non-SUSY SU(5) GUTs give a value of sin^2(theta_W) = 0.206 at the Z mass, while SUSY SU(5) GUTs give a value of sin^2(theta_W) = 0.23, which is very close to the experimental value of 0.2312 without requiring any corrections. I think this speaks for itself.

23. Tony Smith - July 23, 2008

DB said “… I hardly think the discovery of the top quark, which was a completely expected and almost routine thing – once the bottom quark had been found back in 1978, is ever going to win a Nobel for anybody …”.

If you take DB’s position, and you realize that what made the T-quark “completely expected” was the success of the Kobayashi-Maskawa 3-generation mixing matrix,
you would think that Kobayashi and Maskawa would have won a Nobel for brilliant theoretical work confirmed by experiment.

Why do you think that they are not Nobel laureates?

Tony Smith

PS – Further, weren’t Nobel prizes given for both theoretical models AND subsequent experimental discovery of the then-completely-expected W-bosons?

24. Guess Who - July 23, 2008

Dear Eric, what is well known is that non-SUSY SU(5) GUT gives you sin^2(theta_W) ~ (3/8)/(1 + 55*alpha(M_GUT^2)*ln(M_Z/M_GUT)/(9 pi)). As the astute reader may notice, you need to plug in M_GUT and M_Z to extract a number. You know the latter from experiment, but the former? See, that’s what “in isolation” means. You read off M_GUT from the crossing of the running gauge couplings… but in non-SUSY SU(5) GUT they don’t meet…

25. dorigo - July 23, 2008

Hi all,

sorry for letting this thread grow without contributing. A few things.

1) MM, the Michelson-Morley experiment was little more than a table-top experiment, it could only have two outcomes, and both were interesting. I would not compare it to four giant detectors running for years, 2000+ physicists, and an O(1 billion dollars) investment, with collaborations that went head down for the Higgs and argued endlessly that they should continue running on the basis of a 1.7-sigma excess at 115 GeV (which they presented as more than 3-sigma back then!).

2) GW, Eric, I think you do not need my input… I am happy that you clarified the matter however.

3) DB, the W and Z discovery earned Van der Meer a Nobel prize - quite rightly also according to your metric, since it was a breakthrough in the technology necessary to advance the field - and it won Rubbia another. Now, W and Z bosons were not only known to exist well before 1983 (after 1978 nobody in the field believed they did not exist), but their mass was also quite well known beforehand, with clear implications for the possibility to “tune” the experiment in their search.
The top quark was thought to exist by most, but it was not quite as firmly called for. Sure, after the measurement of I_3^b by LEP nobody doubted. But the mass could be anywhere between 100 and 200 GeV by 1992. So I think that discovery deserves a Nobel prize at least as much as Rubbia’s did.

Cheers,
T.


26. Guess Who - July 23, 2008

Tony, I suppose you couldn’t very well give a Nobel prize to Kobayashi and Maskawa without also giving it to Cabibbo, at which point Ms. Carlucci might raise serious objections. 😉

In hindsight I find it hard to judge how brilliant the CKM construct was. From a mathematical point of view it’s a simple example of singular value decomposition, which was completely worked out for the general complex case sometime in the 30s. Once you realize that you have multiple generations and that nothing prevents them from mixing, it’s unavoidable. Not sure why you say it was central to making the top completely expected; I would say that followed from anomaly cancellation, which happens in group space, no need to consider mass eigenstate mixing. Am I missing something?

I think I can answer why Wheeler never got the prize: no experimentally confirmed prediction. Seems to be an unwritten rule of the Nobel committee that only such theoretical work is eligible.

27. Tony Smith - July 23, 2008

Guess Who “… find[s] it hard to judge how brilliant the CKM construct was …”.

A detailed account of it can be found in the book “The Evidence for the Top Quark” by Kent W. Staley (Cambridge 2004), particularly his chapter 1 entitled “Origins of the Third Generation of Matter”. It indicates to me that the work of Kobayashi and Maskawa was quite brilliant (even if embedded in political controversy about Japan, the Nagoya group, and dialectical materialism).

As to Wheeler’s lack of a Nobel, the story is not as simple as “no experimentally confirmed prediction”. The whole story has not been published as far as I know, but I will start with Kip Thorne saying in his book “Black Holes and Time Warps”:
“… in 1958 … there arrived in Moscow an issue of the Physical Review with an article by David Finkelstein … Landau … read the article … It was a revelation … Finkelstein’s insight and the bomb code simulations fully convinced Wheeler that the implosion of a massive star must produce a black hole …”.

IIRC:
There was a push (backed by Princeton, Cambridge, etc) to get a Nobel prize for Wheeler based on his Black Hole idea,
but
Landau had a seat on the Nobel committee and black-balled Wheeler using the grounds that Finkelstein had priority.
However, Landau had a further reason to block Wheeler:
The Soviets felt that the Black Hole basis for the Princeton-Cambridge push for a Nobel for Wheeler was a sham, and that their real basis was to give Wheeler a Nobel for his bomb code work (which obviously had an abundance of “experimentally confirmed prediction”)
and the Soviets felt that if a Nobel were to go for bomb code work then Sakharov should also get a Nobel for his bomb code work.
However,
the Nobel committee did not want to give prizes for war-weapons work, so a joint Wheeler-Sakharov bomb code prize would not happen,
so
the Soviets, through Landau’s black-ball, made sure that a Nobel would not go to Wheeler without Sakharov.

Human politics is interesting, but sort of sad.

Tony Smith

28. nige cook - July 23, 2008

Guess Who, comment 8: I don’t think it’s that hard to do. 🙂 The SU(2) weak isospin lagrangian is known. Delete the Higgs field which gives SU(2) bosons mass at low energy, and then set up a suitable path integral formulation for the massless SU(2) bosons. The half that do get mass are the left-handed interacting weak gauge bosons, and the other half that don’t acquire mass are the long-range bosons for gravity and electromagnetism. Massless neutral currents are graviton exchanges.

Kea, comment 10: people haven’t tried to find a lagrangian for massless SU(2) gauge boson interactions where the massless neutral current is quantum gravity and the massless charged currents mediate electromagnetic interactions. The nearest thing was the 1957 incorrect Schwinger-Glashow theory in which the massless neutral current of SU(2) was used to replace U(1) for electromagnetism, and the two charged field quanta of SU(2) were used for weak interactions. This is not what I’m arguing; my argument is that the two charged massless field quanta of SU(2) are electromagnetic field quanta (the extra polarizations of the virtual photon are electric charge) and the massless neutral field quantum is the graviton.

If there is a way of using noncommutative geometry to deal with Feynman diagrams in place of a path integral involving an amplitude containing an action which is an integral of a lagrangian equation for the gauge theory, then please let me know. I’m interested in path integrals because for low-energy approximations (representing the Newtonian and Coulomb laws), the only significant contribution to the interaction histories summation is from the simplest Feynman diagram, i.e., a simple exchange of a photon between charges. All of the loop diagrams can be ignored at low energy for the classical approximations. So the only summing of histories in the path integral is for that single simple type of interaction integrated over all possible paths in spacetime, each with an equal contribution but a different phase vector that determines interference. The first priority is to check how this theory works for low-energy physics.

Tony Smith, comment 13: thanks for the information that you have the Weinberg mixing angle calculated from your model, pages 82-86 in http://www.tony5m17h.net/E8physicsbook.pdf. Your physics writings are always fascinating. Your calculation of the Weinberg mixing angle on page 85, namely Sin(theta_w)^2 = 1 – (M_W+/- / M_Z0)^2 = 1 – ( 6452.2663 / 8438.6270 ) = 0.235, is impressive. However, I don’t physically understand how the B gauge boson of U(1) gets mixed with the neutral W_0 boson of SU(2) to produce the observed photon and the observed weak boson Z_0. Mixing of gauge bosons would physically make more sense to me if it were between massless and massive versions of gauge bosons within the same symmetry group such as SU(2), i.e., mixing of massive and massless versions of the SU(2) gauge bosons might occur via a physical mechanism in which the Higgs-type field gives mass to SU(2) gauge bosons as a function of handedness and energy.
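A one-line sanity check of the arithmetic quoted above (the quoted numerator and denominator appear to be the squared W and Z masses, since the ratio is taken without a further square):

```python
# Checking the quoted arithmetic: sin^2(theta_W) = 1 - M_W^2 / M_Z^2
sin2_theta_w = 1 - (6452.2663 / 8438.6270)
print(round(sin2_theta_w, 3))  # 0.235
```

The quoted value of 0.235 indeed follows from the quoted numbers.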

Eric, comment 14: ‘Quantum mechanically, two U(1)’s generically are going to mix together whether you like it or not.’ The mixing is between the gauge boson of one U(1) and the neutral gauge boson of SU(2). It’s not the U(1) weak hypercharge gauge boson B and the U(1) electromagnetic photon that are being mixed together. Rather, the unobserved weak hypercharge U(1) gauge boson, ‘B’, and the unobserved W_0 of SU(2) get mixed to produce the two observables, the photon and the Z_0.

‘In the SM, the Weinberg angle may seem ad hoc in the sense that it is a free parameter. However, the value of the Weinberg angle comes out automatically when one goes to SUSY GUTs.’

Thanks for this news, which is a competitor to Tony Smith’s calculation, but SU(5) GUTs, whether SUSY or not, don’t include gravity. Gravity and electromagnetism are similar in both being long-range interactions, so gravity needs to be incorporated into a symmetry group.

29. Eric - July 23, 2008

Nigel,
Whenever SU(2)_L is broken, it becomes a U(1) gauge group which then mixes with the hypercharge U(1)_Y to become U(1)_em.

Regarding your other point, it is highly unlikely that gravitational interactions have anything whatsoever to do with the Weinberg angle.

30. Guess Who - July 24, 2008

Well, if the SM ultimately comes out of string theory, I would say that gravitational interactions have everything to do with the Weinberg angle. Guess I had you pigeonholed all wrong Eric. Here I thought you were a militant stringista. 😉

Seriously, you are simplifying a bit too much here. If you have a Higgs in the fundamental representation of SU(2), it breaks the symmetry completely. All three massive gauge bosons acquire a mass, no unbroken U(1) survives. To get an unbroken U(1), you need to step up to the vector representation, which gives you two massive charged gauge bosons and an uncharged one. Georgi and Glashow tried that one, but without a Z, it’s now a historic curiosity.

The real reason that the U(1) of hypercharge mixes with one of the U(1) subgroups of SU(2)_L is that the Higgs is engineered to get that result, by carrying both hypercharge and weak isospin charge. So it breaks both, but since there are only three independent Higgs components (fixing the VEV eliminates one degree of freedom) and four gauge fields, a linear combination of the latter remains massless.
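The counting in that paragraph can be made concrete by diagonalizing the neutral gauge boson mass-squared matrix in the (B, W_3) basis. This is a minimal numerical sketch: the couplings g, g′ and the VEV v below are rough SM-like round numbers chosen for illustration, not fitted values:

```python
import numpy as np

# Illustrative values only: SU(2) coupling g, hypercharge coupling g', Higgs VEV v (GeV)
g, gp, v = 0.65, 0.35, 246.0

# Neutral gauge boson mass-squared matrix in the (B, W3) basis, generated after
# the Higgs (which carries both hypercharge and weak isospin) acquires its VEV.
M2 = (v**2 / 4.0) * np.array([[gp**2, -g * gp],
                              [-g * gp, g**2]])

# One eigenvalue is exactly zero: a linear combination remains massless (the photon);
# the orthogonal combination is the massive Z.
masses = np.sqrt(np.abs(np.sort(np.linalg.eigvalsh(M2))))
m_photon, m_Z = masses

print(m_photon)                # ~0
print(m_Z)                     # ~91 GeV for these inputs
print(gp**2 / (g**2 + gp**2))  # sin^2(theta_W) = g'^2 / (g^2 + g'^2), ~0.22 here
```

The matrix has zero determinant by construction, which is the algebraic statement that three of the four gauge fields eat the three available Higgs components and one combination stays massless.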

Nige, I think what you are getting at is something which probably crosses the mind of everybody who starts pondering the structure of electroweak interactions. I think it’s even a pet puzzle of Not Even Woit. 😉

Basically, you are thinking that the Lorentz group is SU(2)_LxSU(2)_R, the electroweak group is SU(2)_LxU(1), and it really looks like one ought to be able to find a deeper underlying explanation for this similarity. Like maybe get the U(1) of hypercharge by breaking an SU(2)_R with a Higgs in the vector representation, a la Georgi and Glashow, and then let the two massive R gauge bosons form a spin 2 bound state, to be identified with the graviton. Make the SU(2)_R symmetry breaking scale low enough (the dark energy scale, eh?) and this “graviton” might be almost exactly massless, while enough confinement might survive to explain why you never see the two charged gauge bosons in isolation. Who knows, you might even be able to get the right running to explain the rotation curves of galaxies.

As you probably guessed, I’m being a little facetious here. In words, it all sounds very easy. The devil is in the details. You could start with the equivalence principle: how would you make this “gravity” act equally on all kinds of matter, independently of composition, and bend light as observed? Remember, most mass around us is not due to the Yukawa couplings between Higgs and fermions. Some 99%+ of it is the energy of quarks and gluons bound together and furiously running around inside hadrons. How exactly would your gravity couple to that, and how would it couple to light, all in a manner that reproduces the experimentally verified predictions of general relativity?

Work that one out, and fame (but probably not riches) will be yours.

31. beginner - July 24, 2008

I know this may sound like a silly question, but what is a particle?

32. goffredo - July 25, 2008

“A particle is a particle is a particle.”

Hi BEGINNER.
For beginners, here is what I still remember of what Weinberg wrote 20 years ago: more or less, he wrote that a particle is a representation of its symmetry group(s). Once you have said how a particle behaves with respect to symmetry transformations (translations, rotations, labeling, gauge, …), then you’ve said everything you can about that particle.

The hot, updated and professional theoretical contributors to the forum will certainly correct me or set me straight. Let’s see what they say. Remember that particles are “just” (*) excitations of fields.

jeff

(*) BEGINNER. Try asking a theoretical physicist to explain how a field excitation can leave a track of ionization in a detector. If the field (2nd quantization) context is too abstract for you, then ask him how a quantum mechanical particle/wave can leave a nice track.

33. beginner - July 25, 2008

Is it okay to say a particle is a creation operator?

Then is a square root of a creation operator half a particle?

34. dorigo - July 25, 2008

Hi beginner, you do not look like a beginner to me… A particle, for an experimentalist, is a body whose phenomenology can be described without making reference to its dimensions and inner structure. Of course, this depends on the kind of probing one does, unless the particle is elementary.

As for creation operators, I suggest other blogs might be a better place to discuss them 😉

Cheers,
T.

35. Guess Who - July 25, 2008

Hm, yeah, or you risk getting confused by me trying to edit a paragraph for clarity while seriously distracted and undercaffeinated. I suppose it was obvious anyway, but just in case, the third and fourth sentences in the second paragraph of comment #30 were meant to say

All three gauge bosons acquire a mass, no unbroken U(1) survives. To get an unbroken U(1), you need to step up to the vector representation, which gives you two massive charged gauge bosons and an uncharged massless one.

Sorry about that. I need an edit button. Or more coffee and fewer distractions. Or all of the above.

P.S. Try a creation operator acting on the vacuum. 🙂

36. Guess Who - July 25, 2008

See, I’m undercaffeinated again. Forgot the slash in the closing blockquote tag.😦

37. goffredo - July 26, 2008

A creation operator is an operator. It creates quanta. The quanta are the particles. Don’t confuse a projectile with the gun.
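The gun-versus-projectile distinction above can be made concrete in a truncated Fock space. This is a toy sketch, not tied to any particular field theory; the dimension N is an arbitrary truncation:

```python
import numpy as np

# Truncated Fock space of dimension N; basis vector n represents the state |n>
# with n quanta.
N = 5

# The "gun": the creation operator, a matrix with a† |n> = sqrt(n+1) |n+1>
a_dag = np.diag(np.sqrt(np.arange(1.0, N)), k=-1)

vacuum = np.zeros(N)
vacuum[0] = 1.0

# The "projectiles": states with quanta, obtained by acting on the vacuum
one_quantum = a_dag @ vacuum      # a†|0> = |1>
two_quanta = a_dag @ one_quantum  # a†|1> = sqrt(2)|2>

print(one_quantum)  # [0, 1, 0, 0, 0]
print(two_quanta)   # [0, 0, 1.414..., 0, 0]
```

The operator is the fixed matrix; the particles are the states it produces, which is exactly the distinction being drawn.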

The transformation properties of the field, of which the quanta are excitations, specify the essential properties of the particles: mass, spin (spacetime properties), statistics (fermion, boson), and a host of internal quantum numbers according to the symmetries the field is assumed to have. The particle nature of these quanta is somewhat straightforward but also uninteresting if the field is free (non-interacting). If the field interacts, with itself or with other fields, the picture gets quite complicated.

In the perturbative approach to studying interactions, Feynman introduced a nice computational device that allows one to write down the terms of the perturbation expansion with a graphic shorthand notation rather than the complex (hard to remember) field-theoretic symbols. The free-field current (a free quantum) is depicted as a line. Interacting currents, at the coarsest perturbation level, are represented as two asymptotically free currents that interact by exchanging another type of line, a propagator of the interaction field, that correlates the two currents. The concept of “virtual particles” is used to describe this line, but these “particles” are far less particle-like than the free particles and create all kinds of conceptual problems for non-professional amateurs.

38. Luis Sanchez - August 3, 2008

This whole idea of which findings deserve a nobel award is a sure way to have endless discussions, nonetheless I feel like I should add a bit to the discussion:

-IMHO the top quark will not earn a Nobel. Of course this is arguable, but it was totally expected: its big mass was anticipated from ARGUS results (more on that later…). Anyway, let’s say it is Nobel-worthy. What amazes me is that nobody mentions the direct detection of the tau neutrino by the DONUT experiment in 2000, which was also totally expected… My point is: if the top detection deserves a Nobel, so does the tau neutrino.
-Of course the CKM trio deserves a Nobel; they have deserved it for a long time, and it’s just a little taste of how outrageous these decisions can be. For example, why didn’t Chien-Shiung Wu receive it? And the list can go on (Edwin Hubble, Zeldovich, Stueckelberg…).
-Why doesn’t anyone mention DESY!? Some of Europe’s biggest achievements were realized there: gluons were discovered there (now, that’s Nobel-worthy), and B meson oscillations were also discovered at DESY (which opened the field for so many collaborations like Hera-b, Babar, Belle, LHCb…). And talking about “fiascos”, HERA was filled with lots of expectations (and in 1997 caused such a big stir with the leptoquark issue) and seemed to come out empty-handed, although at least its precision measurements of QCD (especially deep inelastic scattering) contributed to the Nobel for the asymptotic freedom trio.

