
Tevatron excludes chunk of Higgs masses! March 13, 2009

Posted by dorigo in news, physics, science.

This just in – the Fermilab site has the news on the new exclusion in a range of Higgs masses. At 95% C.L., the Higgs boson cannot have a mass in the 160-170 GeV range, as shown in the graph below. The new limit is shown by the orange band.

This is the first real exclusion range on the Higgs boson mass from CDF and DZERO. I will have more to say about this great new result during the weekend.

UPDATE: maybe the most interesting thing is not the limit shown above, but the information contained in the graph shown below. It shows how well the combination of CDF and DZERO searches for the Higgs boson agrees with the background-only hypothesis (black hatched curve) or the background-plus-signal hypothesis (red curve), as a function of the unknown value of the Higgs boson mass. The full black line seems to favor the signal-plus-background hypothesis, although only marginally and at just the 1-sigma level, at a mass of around 130 GeV:

However, as the saying goes, those who like sausages or respect the law should never watch either being made. The same goes for global limits, to some extent. In this case it is not a criticism of the limit by itself, but rather of the interpretation that one might be led to give to it. In fact, the width of the green band should put you en garde against wild speculations: it would be extremely suspicious if the black line did not venture outside of the green band somewhere, even if the Higgs boson does not exist!

That is because the band shows the expected range of 1-sigma fluctuations (due to statistical effects, not to systematic ones such as the real presence of a signal!), and since the black curve is extracted from the data by combining many datasets, and each individual point of the line (in, say, 5-GeV intervals) has little correlation with the others, it is entirely appropriate for the curve not to be fully contained in the green area! So, the fact that the black curve overlaps with the signal-plus-background hypothesis at 130 GeV really -really!- means very, very little.
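
To get a feeling for how easily that happens, here is a minimal toy simulation (my own sketch, not the collaborations' machinery), assuming a dozen fully independent mass points and pure background:

```python
import numpy as np

# Toy check: how often does a curve built from N independent mass points
# stay entirely inside a +-1 sigma band when no signal is present at all?
rng = np.random.default_rng(42)
n_points = 12        # e.g. masses from 100 to 155 GeV in 5-GeV steps
n_toys = 100_000     # background-only pseudo-experiments

toys = rng.standard_normal((n_toys, n_points))
stays_inside = np.all(np.abs(toys) < 1.0, axis=1).mean()
print(f"P(curve never leaves the 1-sigma band) = {stays_inside:.3f}")
# ~0.683**12, i.e. about 0.01: an excursion somewhere is almost guaranteed
```

The real mass points are partially correlated, so the true containment probability is somewhat larger, but the message stands.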

What does mean something is that the hatched black and red curves appear separated by about one sigma (the width of the green band surrounding the background-only black hatched curve) over a wide range of Higgs masses. This says that the two Tevatron experiments have by now reached a sensitivity of about 1 sigma to the signal with the data they have analyzed so far. Beware: they are already sitting on about twice as much data (most analyses rely on about 2.5/fb of collisions, but the Tevatron has already delivered over 5/fb to the experiments). So they expect new, significantly improved results by this summer.
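
As a rough rule of thumb (my own assumption here, not an official projection), the statistical sensitivity should grow like the square root of the integrated luminosity:

```python
import math

# Crude scaling: statistical significance grows like the square root of
# the integrated luminosity, everything else kept equal.
current_sigma, current_lumi = 1.0, 2.5      # ~1-sigma sensitivity with ~2.5/fb
for lumi in (5.0, 10.0):
    sigma = current_sigma * math.sqrt(lumi / current_lumi)
    print(f"{lumi:4.1f}/fb -> ~{sigma:.1f} sigma")
# 5/fb -> ~1.4 sigma, 10/fb -> ~2.0 sigma: improvement, but a slow one.
```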

It does seem that at last, the game of Higgs hunting is starting to get exciting again, after a hiatus of about 7 years following the tentative signal seen by the LEP II experiments!


Comments

1. Kea - March 13, 2009

Hurrah! Excellent news!

2. Higgsterics - March 13, 2009

This is a talk scheduled for next week at CERN:

“Why I never believed the Tevatron Higgs sensitivity claims for Run2a and b”
by Michael Dittmar (Zurich/ETH)
TH PhenClub
Thursday 19 March 2009 from 11:00 to 12:00

Description:

For about 10 years, sensitivity claims for the SM Higgs at the Tevatron, for Higgs masses below 200 GeV, have been presented. Most recently even a preliminary 2-sigma exclusion limit has been distributed by the world media. The BBC published an article (17th of February) about a recent AAAS meeting under the title `Race for God particle heats up’. In this presentation I will describe the “origins and the evolution” of these surprising statements, and why a Higgs sensitivity close to nothing remains after a critical analysis of the Tevatron simulations and the existing few fb-1 data analysis.

3. Marcos - March 13, 2009

Sweet! Although, there’s something I’m not getting: the range of the second plot is 100-155 GeV, but the first plot shows the exclusion zone as 160-170 GeV at 95% CL. Is there something obvious I’m missing? (I admit it is)

4. Tony Smith - March 13, 2009

What about the 145 GeV peak (actually a valley) of the black curve?

Would the 5/fb data be able to get close to a real conclusion,
such as (for example – not necessarily what will happen)
ruling out the 130 GeV peak (valley) as Higgs
but
being evidence (maybe not discovery) of a Higgs at 145 GeV?

If the 130 GeV peak (valley) does not go away with the 5/fb data,
and the 145 GeV one looks like a Higgs with 5/fb,
could the 130 GeV one be something other than a Higgs?

Tony Smith

5. Filippo Ottonieri - March 14, 2009

I am just an amateur, but I can’t help following the latest developments in particle physics with eager attention. However, I wish to pose a methodological question.
Let’s assume that we have no idea, from theory, about what the Higgs mass should be (do we? I don’t really think so). So we take a fairly broad mass interval (say, 100 GeV wide), and just check by experiment. Let’s also assume that the LHC finds a peak at 130 GeV, 99% CL, with 1 GeV sensitivity. One would say “Wow, they got it!”, right?
Now, let’s look at it from another point of view, and let’s assume that the Higgs just doesn’t exist. If I run an ideal experiment over a 100 GeV interval, with 1 GeV sensitivity, the expected number of intervals where I will find a signal at 99% CL is exactly one, or am I wrong? In other words: if I take a mass range wide enough, I will statistically find a signal somewhere even if there is nothing there.
So, my final question is: does this quest for the Higgs make sense at all? Or, better: what CL should one require before concluding that the Higgs has “really” been found? Because I would say 99.99%…

6. dorigo - March 14, 2009

Hi Filippo,

you are perfectly right. What you describe is called the “look-elsewhere effect”, and it is something that any search based on what is dubbed a “flat prior” -a sort of anything-goes hypothesis for the mass of the sought particle- has to take into account.

In general, the accounting is more or less like the one you suggest, but to do it correctly one nowadays runs pseudo-experiments where no signal is present: by taking a million random background spectra and trying to interpret them as signal plus background, one understands in what fraction of cases a signal appears at any given level of significance.
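
A minimal sketch of that accounting, using Filippo's numbers above (100 independent 1-GeV intervals and a local 99% CL threshold) and plain Gaussian significances in place of the real multi-variate machinery:

```python
import numpy as np

# Background-only pseudo-experiments: in how many does at least one of
# the 100 mass intervals show a fake "99% CL signal"?
rng = np.random.default_rng(0)
n_bins, n_toys = 100, 100_000
z_local = 2.326                  # one-sided 99% CL threshold, in sigma units

z = rng.standard_normal((n_toys, n_bins))
fake_rate = np.any(z > z_local, axis=1).mean()
print(f"P(a fake signal somewhere) = {fake_rate:.2f}")
# ~1 - 0.99**100, i.e. about 0.63: more likely than not, as Filippo argued
```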

Cheers,
T.

8. G. Sardin - March 14, 2009

Those interested in a critical viewpoint may have a look at:

the article: http://arxiv.org/ftp/hep-ph/papers/0102/0102268.pdf

the presentation: http://personales.ya.com/sardin/lhc-i.pps

the YouTube load: http://www.youtube.com/watch?v=fQrIuu31pFA

9. Tony Smith - March 15, 2009

Tommaso, your description of a “flat prior” search for Higgs is interesting.

It might also be interesting to compare with the same search from a different point of view:
the point of view of trying to confirm or refute a specific model.

Since I am familiar with my model, I will use it in this comment. I am not asking that you believe in my model, just that you use it as an example of looking at the Fermilab search results in the context of trying to refute or confirm a model.

My model has Higgs as a T-Tbar condensate and (in the relevant energy range, below 200 GeV) has two Higgs states and two T-quark states, which match up with the Fermilab March 2009 search exclusion zones as follows:

conventional SM Higgs at 176-188 GeV:
consistent with the Fermilab search 180-185 GeV permitted zone

conventional T-quark at 172-175 GeV:
since it is conventional, it is included as background for the Higgs search, and therefore consistent with the Fermilab search results

low-mass Higgs at 143-160 GeV:
consistent with the Fermilab search 114-157 GeV permitted zone
and
consistent with the LLR solid black curve valley at 145 GeV

low-mass T-quark at 130-145 GeV
since it is not conventional, it is not included as background for the Higgs search, and therefore shows up in the signal of the Fermilab search,
which signal is consistent with the Fermilab search 114-157 GeV permitted zone
and
consistent with the LLR solid black curve valley at 130 GeV

My view is that so far the Fermilab search has neither refuted nor confirmed my model,
so that my model should be considered a viable candidate with concrete predictions
that can and should be tested by further data and analysis.

Since my model has 4 predictions,
one of which (T-quark at 172-175 GeV) is pretty much universally agreed to be correct,
it is interesting to me that the other 3 predictions (which are controversial) closely match:

the narrow permitted zone at 180-185 GeV
and
the only two sharp valleys in the 114-157 GeV permitted zone,
at 145 GeV
and
at 130 GeV

Those circumstances encourage me,
but I do understand that refutation/confirmation of my model is not yet here,
so I await with interest future results of analysis of more Fermilab data.

Tony Smith

10. Rafael - March 15, 2009

Dear Tommaso,
Aren’t Higgs constraints and evidence such as:
http://lanl.arxiv.org/abs/0812.2030v1
http://xxx.lanl.gov/abs/0810.5357
already enough to “prove” non-Standard-Model effects?
Regards,

Rafael.

11. dorigo - March 16, 2009

Hi Rafael,

no, I do not think so. The two papers you quote do show some intriguing effects that deviate from SM predictions, but they are very different both in the physics and in the most likely explanation. The first is most likely a statistical effect, while the second is systematic in nature, and most likely an ill-understood background.

In any case, what I say above does not mean we should not investigate those effects better. And in fact, I have started an analysis in CMS to search for events of the same kind as those discussed in the second paper.

Cheers,
T.

12. dorigo - March 16, 2009

Dear Sardin,

I do not think your effort has much to do with the Standard Model, being a way to describe protons and neutrons as elementary bodies. You discuss beta decay in your paper: can your theory explain the energy spectrum of beta electrons (the Michel parameter)? That would be a start. The Standard Model started with that simple aim: understanding the energy spectrum of beta electrons led to the V-A theory.

Cheers,
T.

13. dorigo - March 16, 2009

Hi Tony,

I value the fact that your theory predicts mass ranges that are likely to be reached in the near future by the Tevatron experiments. We shall see what that brings us. However, I would tend to think that any such extension of the Standard Model would require us to re-fit our data under its assumptions, because the presence of several new states -such as those you suggest- would change background predictions and decay modes, and ultimately make the limits (or signals) not adequately justifiable.

In other words, to really test your model, one would need to plug it into a toy Monte Carlo, see what distributions and what acceptances it predicts, and then compare these to observed counts in the many kinematical distributions which are tested in the current analyses.

So in a way, your model will probably out-live any limits that would exclude Higgses at 170 GeV or above ;-)

Cheers,
T.

14. Tony Smith - March 16, 2009

Tommaso, hypothetically
– and not explicitly about my model or any other specific model –
what would you think if
the Fermilab additional data extended BOTH of the LLR black curve valleys down to signals at 130 GeV and 146 GeV,
say at the 3 sigma level
(maybe enough for evidence, but not for discovery)?

Do the people at the labs do serious thinking/modelling/planning for such contingencies?

Tony

15. dorigo - March 16, 2009

Tony, I would think that one is a true SM Higgs and the other is a fluke! …But I am a blue sceptic, as you know. I would probably consider the chance that both are flukes!

In any case, no: people doing these analyses doggedly run code; they leave interpretations and speculations to lazier bums like us :)

Cheers,
T.

16. chris - March 16, 2009

What I find most interesting is that the former peak at around 115 GeV seems to have vanished. Ah, the joys of sub-1-sigma statistics!

17. Tony Smith - March 16, 2009

Tommaso, if you consider flukes to be unlikely,
then how about the following bet, for a bottle of Strega:

IF Fermilab’s sausage formula for the black LLR Obs curve remains unchanged
AND IF Fermilab’s criteria for data input remain unchanged
THEN
when Fermilab analyses the 5/fb of data
there will still be two valleys in the black LLR Obs curve pointing down to within +/- 3 GeV of 130 GeV and of 146 GeV.

In other words, if the two valleys are still there in roughly the same place after doubling from 2.5/fb to 5/fb, then I win Strega,
but if at least one of them moves or goes away, then you win,
assuming that sausage factory workings and data selection criteria remain the same.

Tony Smith

PS – I don’t know enough about how the sausage factory works to include in the bet anything about the valleys deepening to, say, 2 sigma or 3 sigma, but I would (against your stated advice) be interested in learning more about the sausage factory.

18. G. Sardin - March 16, 2009

Dear Dorigo,

I haven’t treated the energy spectrum of beta electrons within the dual orbital structure of the neutron, which is considered to be formed by a core proton and a shell that, in the decay, mutates into an electron. An anti-neutrino is emitted in the energy transition of the shell into an electron. This model is much more straightforward than the standard one.

But I have proposed an approach for the evaluation of the proton mass. You may have a look at it:

“Nature and Quantization of the Proton Mass: An Electromagnetic Model”
http://www.citebase.org/abstract?id=oai%3AarXiv.org%3Aphysics%2F0512108

I am presently working on new developments, such as the relation between magnetic moment and mass, from the structuring-orbital method for particles.

In my viewpoint physicists rely too much on maths. With some perseverance and skill one may make them predict almost anything desired. I put models such as the Standard one, and other modern theories, in the category of chameleon approaches, i.e. they are structured to fit almost any environment. So, their reliability as conceptual guides may be questioned, since they are sufficiently ductile (with an excessive number of parameters) to fit almost any experimental data. Evidently this is just one opinion.

19. dorigo - March 17, 2009

Hi Chris,
yours is a meaningful observation, but unfortunately the Tevatron data are not enough to provide a solid hint yet. We are still at the level of less-than-1-sigma preferences for one interpretation or the other…

Tony, yes, let’s bet a Strega. Only, I think you should consider (or you already have) that there isn’t so much difference between 3/fb and 5/fb, and since the data already collected are not going to change, what we are betting on is actually whether 66% additional data will be enough to wash out a 1-sigma deviation of the former dataset or not. Since 1-sigma corresponds to a 63% fluctuation, I think the bet is a fair one -having a roughly 50% chance of going one way or the other due to mere chance. However, by the same token, its result is not to be attached any meaning -other than a free toss, of course.
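
For what it is worth, here is a back-of-the-envelope toy of that wash-out (my own sketch, with the square-root-of-luminosity weighting as the only ingredient; the real answer depends on the analyses' details and on the exact criterion of the bet):

```python
import numpy as np

# Combine a frozen 1-sigma dip from the first 2.5/fb with 1.65/fb of fresh,
# signal-free data; each dataset enters with a sqrt(luminosity) weight.
rng = np.random.default_rng(1)
lumi_old, lumi_new = 2.5, 1.65
w_old = np.sqrt(lumi_old / (lumi_old + lumi_new))
w_new = np.sqrt(lumi_new / (lumi_old + lumi_new))

z_new = rng.standard_normal(100_000)        # fluctuations of the added data
z_combined = w_old * 1.0 + w_new * z_new    # old deviation fixed at 1 sigma

print(f"P(dip survives at >= 1 sigma) = {(z_combined >= 1.0).mean():.2f}")
print(f"P(some dip remains at all)   = {(z_combined > 0.0).mean():.2f}")
```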

Dear Sardin,
doing science requires one to deal with numbers. The SM explains, quantitatively, how the energy spectrum of beta-decay electrons near the endpoint energy arises. Any competitive model must explain that feature as well; otherwise it may be a very nice, elegant theory, but it has a flaw: it is refuted by observation.
There are hundreds of quantities that are measurable by experiment, and on which theories may get the right number or fail. That is the way to do science: constantly refining a model, or switching to a better one, based on observations.
Good luck!
T.

20. G. Sardin - March 17, 2009

Dear T,

Of course any model must quantitatively fit experimental data. The point is whether the way it has been reached is credible. The SM may be experimentally disproved in the near future. Already it has a lot of flaws. As an experimental physicist, the crucial point for me is centred on the actual existence of quarks, gluons, Higgs bosons, etc… If they don’t exist, then the SM will have been just a fancy mathematical game. If the need emerges to switch, as you say, to a different model, it will mean that the SM is wrong, in spite of its fittings. Fitting is a necessary condition, but not a sufficient one: the fundamentals must also be right. Good luck to you in your enthusiastic belief in the SM.

21. dorigo - March 17, 2009

Dear Sardin,

I gave you a very clear and simple example of an experimental observation which motivated the V-A theory, and which is not explained in your model of neutron decays.

The Standard Model may one day be surpassed, as Newtonian mechanics was by Einstein’s theory of special relativity, but it will never be proven wrong, since in its domain of applicability it works perfectly well. It is, in that sense, an effective theory. Saying it is “wrong” puts you in an awkward position, no less than if you were to say that the laws of Newtonian motion are “wrong”.

Best,
T.

22. Luboš Motl - March 17, 2009

I am always and repeatedly surprised by all those intellectual ants making their little steps under the marvelous structure of Nature, holding together because of the precious and precise laws of mathematics, when these ants say that mathematics is “just” mathematics, ignoring that mathematics is 10^{600} times more majestic and important than they are.

Quite universally, they think that they don’t need maths because they know almost nothing about the world of phenomena. Of course, the people who have seen at least dozens of dirty graphs of cross sections know that some maths is needed, and that it may be pretty deep and complex if it is supposed to cover all these things.

On the other hand, the people who have only heard that there are neutrons and there could be 3 quarks live in a much smaller intellectual world (or miniworld). They don’t need much maths to calculate “1” neutron or something like that, so they can spend 99% of their time drawing childish pictures and smearing the marvelous structure that mathematics is - and those who actually live in the grand world where mathematics is paramount.

23. dorigo - March 17, 2009

I agree Lubos, but I wonder how you chose the 10^600 number… Any connection to the number of foldings of the CY? Or just an idiosyncratic slip? ;-)
Cheers,
T.

24. Luboš Motl - March 17, 2009

Dear Tommaso, I just wanted to be sure that if someone would incorrectly try to dilute my argument by referring to the landscape of 10^{500} stringy vacua, I would still win by a factor of a googol. ;-)

25. G. Sardin - March 17, 2009

Dear Dorigo,

I don’t question your example, but that is not enough of a proof. The Standard Model has been developed worldwide for 40 years. Since its emergence I have followed its developments, the first 20 years silently. The more it was supposedly improved, the more I was disappointed by an endless increase in artificiality. Since I have unfortunately been working alone on the Orbital model, I just cannot compete with the means and efforts put into the SM. But feel free, as anyone else, to try to solve quantitatively the neutron decay from the decay of its dual orbital structure.

About your point on what is considered wrong or not: the static universe of Einstein is said to be wrong (even by himself), the absolute Newtonian space is also said to be wrong, and I am sorry to say that, depending on future experimental results, the SM may also be said to be wrong. If the model I have proposed ends up experimentally disproved, I will just consider it wrong. Evidently this applies as well to the SM.

If the SM were a model from one or a few scientists, the eventuality of being wrong would not create any problem. Anyone has the right to be wrong (we are in a democracy). But the SM is the official model, so if it is counterfeit, what a huge misuse of intellect for 40 long years. As someone answered to someone else defending that life exists only on Earth in spite of the vastness of the Universe: “If this is the case, what a waste of means on the part of God”. If the SM ends up being experimentally disproved, what a waste of investments and intellectual resources, apart from the inhibition of different approaches and the loss of trust in official standpoints.

Anyway, I am not interested in further expressing my disagreement with the SM; only experimental results may bring some light to that question. I was just enquiring about the degree of intellectual freedom of those involved with the SM, with the remote hope of finding some helpful contribution. So, let’s end it here. Anyway, thanks for your replies.

26. Tony Smith - March 17, 2009

Tommaso, thanks for accepting the Strega bet.

You may be correct that as it stands “… its result is not to be attached any meaning ….”,
so,
to try to make it more meaningful, I would like to amend the bet by changing “5/fb” to “10/fb”.
Please let me know if you agree to the change to 10/fb.

With the change to 10/fb,
you can still win if at least one of the valleys moves or goes away even at 5/fb (maybe this summer)
but
I will have to wait until 10/fb (maybe late 2010) to claim a win.

On a related matter, but not part of the bet, do you think that by 10/fb any valley (moved or not) might deepen to 3 sigma? If so, would you consider that to be evidence (but maybe not discovery) of the Higgs?

Tony Smith

PS – For the record, here are the terms of the bet if you agree to the amendment:

For a bottle of Strega:

IF Fermilab’s sausage formula for the black LLR Obs curve remains unchanged
AND IF Fermilab’s criteria for data input remain unchanged
THEN
when Fermilab analyses data going up to and including 10/fb
there will still be two valleys in the black LLR Obs curve pointing down to within +/- 3 GeV of 130 GeV and of 146 GeV.

In other words, if the two valleys stay in roughly the same place while going from 2.5/fb to 10/fb, then I win Strega,
but if at least one of them moves or goes away, then you win,
assuming that sausage factory workings and data selection criteria remain the same.

27. pal benko - March 18, 2009

Tommaso, I have a question about the Higgs exclusion. Particularly, what exactly has been excluded - is it a Standard Model Higgs, SUSY, or what? Presumably there could still be a scalar particle sitting at 170 GeV which happens not to decay in the way the SM predicts.

So basically, how much of all this, if any, is model-independent?

cheers for your time.

28. strings - March 18, 2009

The Fermilab exclusion at 160-170 GeV seems to depend on a downward fluctuation in the background - that is, the observed sensitivity in this mass range was better than the computed sensitivity.
It is OK to sometimes get lucky like that, so as far as I know this isn’t a problem with the result; the downward fluctuation is small enough not to suggest any problem with the analysis, at least to an outsider.

But I do wonder about something. Assuming the analysis is correct,
should we expect the sensitivity to improve when more data are analyzed — say in the summer when maybe 5 fb^{-1} will be analyzed? Normally, one says yes, but in a case where sensitivity has benefited from a downward fluctuation in the background, it isn’t clear to me that the most likely outcome with doubled statistics will be a better limit.

29. dorigo - March 18, 2009

Dear strings, it is always a pleasure to have you here.

I can assure you that to the best of my knowledge the CDF and DZERO analyses on the Higgs are really the best these groups can produce -they have worked as one man on these results. I have promised an update here on the most important searches, but I am lagging behind…

(Despite the above paragraph, there is a talk scheduled for tomorrow by Michael Dittmar who tries to sell that the Tevatron results are crap: stay tuned, I will report on it).

The global limit by CDF and D0 is summarized in this graph, which shows how the actual computed limit is within the 1-sigma band over the low-mass region, and below it for a part of the high-mass region. All the searches entering the average are independent, in the sense that they are separately optimized at each mass point; statistical overlaps do make the results correlated, however.

Your question is interesting, because indeed one does not usually focus on the “memory” issue; but the answer is straightforward. Since all analyses making up the combined limit re-use the old data when they produce updates, and since these analyses are -at least for the high-mass range- quite mature and difficult to improve further, a downward fluctuation in the first 3/fb will remain there forever.

In other words, if backgrounds have been computed correctly, on average CDF and DZERO are condemned to keep some deficit there. Say that in the first 3/fb they saw 250 events and expected 300 (the searches are complicated and not just counting experiments, but for the sake of argument let’s keep it simple): add 2/fb, and on average this will add 200 events from background, which will make it 450 observed and 500 expected. Sure, an upward fluctuation may still make up for the deficit, but this is less likely to happen, on average.
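
In numbers, with the same illustrative figures (a plain Gaussian approximation, nothing like the real likelihood):

```python
import math

# The toy counting experiment above: 250 observed vs 300 expected in the
# first 3/fb; 2/fb more add 200 events, falling exactly on expectation.
def deficit_sigma(obs, exp):
    # Gaussian approximation to the Poisson significance of a deficit
    return (exp - obs) / math.sqrt(exp)

print(f"3/fb: {deficit_sigma(250, 300):.1f} sigma deficit")              # ~2.9
print(f"5/fb: {deficit_sigma(250 + 200, 300 + 200):.1f} sigma deficit")  # ~2.2
# The 50-event hole never fills in; it only gets diluted in significance.
```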

It is just the lack of memory of quantum effects…

Cheers,
T.

30. dorigo - March 18, 2009

Dear Pal, yes, it is strictly a SM Higgs which has been disfavored at 95% CL. For different scalars the limit ranges from less stringent to meaningless, depending on their couplings.
SUSY searches for Higgs bosons depend on a number of parameters, not just the masses of those particles. So you usually see exclusions of swaths of two-dimensional plots, with ten lines of specifications on the free parameters assumed for that particular plot…

Cheers,
T.

31. dorigo - March 18, 2009

And strings, reading back your comment I find I underestimated it. There is indeed a further subtlety to account for, which might change the conclusions of my former comment -but in an unknown way, because we do not have enough information to know.

The thing is, this limit is not a simple counting experiment, so complications which are only resolved with complex multi-variable likelihood functions are to be expected. But even in the simple counting-experiment kind of approach I exemplified above, there is a point to make.

Indeed, it cannot be taken for granted that, once a deficit is observed in some data, the addition of other data on average yields a similar exclusion (a similar result on the upper limit on Higgs events, in our case). That is because a downward fluke gets diluted as more data are added (everything being correct, no Higgs, etcetera). So, if 250/300 events produce a certain exclusion, it is perfectly meaningful to ask whether 450/500 give a better or a worse one, on average.

I am afraid I am unable to answer this question, because it entails running the analysis code on pseudo-experiments. But I now see the point you made (and I should have guessed that this was the real culprit).

The expected limit (the black curve in the plot I linked above) acts like an “attractor” for the observed limit. It moves down as dictated by the total statistics one processes, and any fluctuation, while remaining in the data, gets slowly washed out by the added statistics, so the observed limit moves towards the expected one; and if the expected limit does not move down fast enough, the deficit -i.e. the better-than-expected limit- moves up as it gets washed away!

So yes, the Tevatron might be expected to worsen its limits, under certain circumstances, by just adding statistics. How that would manifest itself depends on the details of the likelihood.
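
A toy version of that “attractor” behaviour (again my own sketch, a bare Poisson counting experiment with an assumed background rate in place of the real multi-variable likelihood):

```python
import numpy as np

# Freeze the early deficit (250 seen vs 300 expected in 3/fb) and keep
# adding signal-free data at an assumed 100 background events per fb^-1:
# on average the observed deviation drifts back toward expectation.
rng = np.random.default_rng(7)
b_rate, n_toys = 100.0, 50_000

for extra_fb in (0, 2, 7):
    b_new = b_rate * extra_fb
    new_obs = rng.poisson(b_new, n_toys) if extra_fb else np.zeros(n_toys)
    b_tot = 300.0 + b_new
    z = (250 + new_obs - b_tot) / np.sqrt(b_tot)
    print(f"3+{extra_fb}/fb: mean deviation = {z.mean():+.2f} sigma")
# ~ -2.89, -2.24, -1.58 sigma: the frozen deficit is slowly washed away,
# and with it the better-than-expected limit it produced.
```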

Hope that helps,
Cheers,
T.

32. strings - March 19, 2009

Thanks – that second explanation is what I was inquiring about.

33. Higgsterics - March 19, 2009

Sardin,

when I read people like you I always ask myself a question that I now address directly to you: deep inside you know you are talking crackpottery, don’t you?

34. Daniel de França MTd2 - March 19, 2009

Hi Tommaso,

Someone famous, whom John Baez really likes, is asking for an experimental value here: http://golem.ph.utexas.edu/category/2009/03/the_algebra_of_grand_unified_t_1.html#c022611

I am curious to know the answer. Would you mind helping him?

Cheers,

Daniel.

35. G. Sardin - March 19, 2009

Dear Higgsterics,

If you really need to convince yourself that a critical viewpoint about the SM is synonymous with crackpottery, just feel free. Already Luboš Motl has put me in the category of “intellectual ants, ignoring that mathematics is 10^{600} times more majestic and important than they are”. After all, if I may choose, I would rather be an “intellectual ant” than “talking crackpottery”; it sounds better. But, since you already have the issue quite clear, you shouldn’t mind my critical views; just worry about the rightness of the SM, independently of my thoughts. Or, “deep inside, you know you are fearing the SM to be wrong, don’t you?” Just work hard on the experimental data, because you absolutely must find the Higgs boson. It is too late to move backwards after 40 years of dogmatic propaganda.

I have proposed a model based on a single fundamental system whose allowed quantum states would generate all elementary particles. Apparently, it is an inexcusable heresy! Are we going to revive the Inquisition, this time in the hands of the scientific establishment? I have the impression that some poorly mastered fears about future experimental results make Higgs hunters somewhat tense. I naively thought that science was a field in which ideas could be exchanged freely! By the way, never fall into the temptation of paying any respectful attention to the orbital model of elementary particles, and still less of applying some skillful math to it, just to demonstrate how wrong it is. This way I will not bother you anymore, Higgs believers! For your information, I am 67 years old, retired, with long experience of physicists’ intellectual archetypes.

36. Clyde - March 19, 2009

G Sardin,

everybody already knows that the SM is wrong when applied to high energies – because it doesn’t include gravity and there is something funny going on with electroweak symmetry breaking. It is an effective theory, to use the lingo. But it is undeniably, categorically right at lower energies, where it predicts quantitatively precisely what is observed. That is the definition of being correct!

There is no controversy here, and no cover-up. As Tommaso pointed out to you, saying the SM is wrong is like saying Newton’s laws are wrong - i.e. only in the obvious sense that it is not valid in all energy domains.

You should check that your theory gives the same predictions as the standard model at low (sub TeV scale) energies, because otherwise it is wrong. If it passes this test, then you should tell us all what it predicts at higher energies.

37. G. Sardin - March 20, 2009

To Clyde,

What I mostly complain about is that all the efforts have been put exclusively into the development of the SM. Anything else is just junk. There is not the remotest official will to test any of this “trash”.

I quote: “As Tommaso pointed out to you, saying the SM is wrong is like saying Newton’s laws are wrong”. I cannot fully agree with this point on an intellectual basis; it is just too simplistic a way to put it. Newton’s laws do not predict particles, just behaviour, although they are considered inadequate in their conceptual basis regarding absolute space. In this respect I refer you to the following paper: “Full Nexus between Newtonian and Relativistic Mechanics” http://arxiv.org/ftp/arxiv/papers/0806/0806.0171.pdf

I know that theoretical physicists like to reduce the grasp of the physical world to just a mathematical affair. From a broad intellectual standpoint this is just too reductive. Physical reality is not made of math; math is a very useful language on quantitative grounds. But you may manage to make it fit the experimental data while the conceptual foundations are still wrong. For an approach to the actual physical reality to be ascertained, it must fulfil many more requirements.

You say: “It is an effective theory, to use the lingo. But it is undeniably, categorically right at lower energies, where it predicts quantitatively precisely what is observed. That is the definition of being correct!”. That is the point: it is just not a good definition of being correct. It must also be right in its material predictions. Quarks, gluons, fractional charges are not observed; they are presumably deduced, and that is not the same. This reminds me of mythologies that would make people believe in weird beings. For a theory to be ascertained, its mythological beings must also be real.

You also say: “You should check that your theory gives the same predictions as the standard model at low (sub TeV scale) energies”. Let’s be serious, this is not fair. Do you really think that a single person can do it? How many of you have been working on the SM for 4 decades? And it is still a draft.

To conclude, I’ll restate my main complaint. For 40 years you have been confined to a single approach, disdaining any alternative. This is not quite an honourable scientific attitude, nor a good policy. A broader scope should have been officially tested. I know that the SM is for the moment the politically correct cruise, but the huge ship may sink if the mythological beings supposedly seen during the journey are experimentally proved to be just sirens.

38. Tony Smith - March 20, 2009

G. Sardin said, about the Standard Model,
“… disdaining any alternative … is not quite an honourable scientific attitude, nor a good policy. …”.

On his web site, G. Sardin calculates the muon mass to be 105.55 MeV, which is quite close to the experimentally observed mass of the muon (the second-generation charged lepton),
and he says:
“… As a pictorial analogy, if the electron orbital is compared to a steady string, then the muon corresponds to a vibrating string. …”.

Giving credit to G. Sardin’s muon mass calculation,
I think that it is fair to explore G. Sardin’s model further by asking him to calculate the mass of the third-generation charged lepton, the tauon (roughly 1.8 GeV),
and to give a similar pictorial analogy for how it is related to the electron.

Tony Smith

39. G. Sardin - March 20, 2009

To Tony Smith,

Thanks for paying some credit to the orbital model. The difficulty in dealing with the Tauon arises from its high mass: it cannot then be approximated by a quantum harmonic oscillator, as I have done for the muon. The slight deviation from its experimental mass might already suggest a slight departure of the muon from a perfect harmonic oscillator. The way I see it, the Tauon would be deeply anharmonic, due to its quite high mass. In view of this difficulty I just went on to a more interesting particle, the neutron, due to its role in nuclear cohesion. But anyone disposed to take the risk of spending some time dealing with the quantum anharmonic oscillator would be welcome.

Presently, I am busy trying to finish a paper dealing with the inner dynamics of the structuring orbital, applied to the proton, and based on two simultaneous kinetics: a gyratory motion that generates the magnetic moment, and an oscillatory motion related to the Compton wavelength and hence to the mass. This approach provides a deeper meaning for the g-factor, relating it directly to the orbital structure. So, I will first try to finish this task.

Look at the orbital model as an open one; anyone should feel free to pursue any development of it. I could provide some hints in case of interest.

40. G. Sardin - March 20, 2009

To anyone:

In case someone would be tempted to try to quantize the Tauon mass from the OM perspective, let me specify some fundamentals. For mathematical convenience the particle may be seen as a spring, or a string of size within the Fermi scale. For the electron and the muon the string has the length of the classical electron radius, and probably for the Tauon too. The electron, seen as a string, vibrates at a frequency corresponding to its mass. The muon string vibrates at two frequencies: the electron one plus a new one, corresponding to a quantum harmonic oscillator. The muon string is just an excited state of the electron string. Most probably the Tauon would vibrate at three frequencies, the two previous ones plus an extra one. This would hold provided the length of the string remained constant.

Another fundamental point to discern is why the electron string is vibrating at a low frequency corresponding to an energy of 0.51 MeV. From the OM fundamentals, the primordial element is constituted by an electric dipole, and thus it would be made of two strings. The electron and the positron arise from the rupture of the dipole, leading to two free strings. To somewhat shortcut the explanation, let us assimilate the electric dipole to a photon. The minimal energy to be transferred to it for its dissociation is known to be 1.02 MeV. This energy is shared between the two strings, and thus each one carries 0.51 MeV, one being an electron and the other a positron. Therefore the ground state of any charged particle cannot be lower than 0.51 MeV, and is thus represented by the electron. So, the electron is the lowest decay state of any elementary particle seen as a string.

When dealing with the magnetic moment, elementary particles should instead be seen as orbitals spun by a massless electric charge. When both mass and magnetic moment are considered at once then they should be seen as vibrating orbitals.

I hope no one has gotten too frightened!

41. Gianni - August 27, 2009

The Tevatron has only used part of its data in this analysis. Is there a date set when the full analysis will be published? Will they, at least in principle, be able to exclude the whole energy range?

dorigo - August 27, 2009

Hi Giovanni,

the blog you visited has moved to http://www.scientificblogging.com/quantum_diaries_survivor . Please visit me there for updates on the Higgs and other particle physics news.

As for your question, no, there is no date, because the analysis always lags behind data taking: calibrations have to be applied, and the validation of new data, processing, and reconstruction take some 4-6 months before a new event is usable for analysis. What CDF and DZERO do is get as much data as they can into their updates for the winter and summer analyses.

Cheers,
T.

42. G. Sardin - September 26, 2009

To anyone,

Those interested in new paradigms may read the article:

Structural and dynamical significance of the proton g-factor

http://personales.ya.com/sardin/articles/proton-g-factor.pdf

