## Fine tuning and numerical coincidences

*July 1, 2008*

*Posted by dorigo in Blogroll, cosmology, games, internet, physics, science.*

Tags: crackpots, dark energy, fine tuning


The issue of fine tuning is a headache for today’s theorists in particle physics. I reported here some time ago the brilliant and simple explanation of the problem with the Higgs boson mass given by Michelangelo Mangano. In a nutshell, the Higgs boson mass is expected to be light in the Standard Model, and yet it is very surprising that it be so, given that there are a dozen very large contributions to its value, each of which could make the Higgs hugely massive: but they altogether magically cancel. They are “fine-tuned” to *nullify one another like gin and Martini around the olive in a perfectly crafted drink.*

A similar coincidence -and actually an even more striking one- happens with dark energy in Cosmology. Dark energy has a density which is orders and orders of magnitude smaller than what one would expect from simple arguments, calling for an explanation which is still unavailable today. Of course, the fact that as of today there is no solid experimental evidence for either the Higgs boson or dark energy is no deterrent: these entities are quite hard to part with, if we insist that we have understood, at least in rough terms, what exists in the Universe and what causes electroweak symmetry breaking in particle physics. Yet, we should not forget that there might not be a problem after all.

I came across a brilliant discussion of fine tuning in this paper today by sheer chance -or rather, by that random process I entertain myself with every once in a while, called “checking the arXiv”. For me, that simply means looking at recent hep-ph and hep-ex papers, browsing through every third page, and getting caught by the title of some other article quoted in the bibliography, then iterating the process until I remember I have to run some errand.

So, take the two numbers **987654321** and **123456789**: could you imagine a more random choice for two 9-digit integers? Well, what if I argued that it is by no means a random choice but an astute one, by showing that their ratio is 8.000000073, which deviates from a perfect integer by only nine parts in a billion!
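For the skeptical, the arithmetic is easy to check; here is a quick sketch of my own (not from the original post):

```python
# Check the "coincidence": the ratio of the two 9-digit numbers
# is an integer to within about nine parts in a billion.
a, b = 987654321, 123456789
ratio = a / b
deviation = abs(ratio - 8) / 8
print(ratio)      # ≈ 8.0000000729
print(deviation)  # ≈ 9.1e-9, i.e. nine parts in a billion
```

Note that 8 × 123456789 = 987654312, which misses 987654321 by exactly 9: that is where the tiny deviation comes from.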

Another more mundane and better known example is the 2000 US elections: the final ballots in Florida revealed that the Republican party got 2,913,321 votes, while the Democratic votes were only 2,913,144: a difference of sixty parts in a million.

Numerical “coincidences” such as the first one above have always had a tremendous impact on the **standard crackpot**: a person enamoured with a discipline but missing, at least in part, the institutional background required to be regarded as an authoritative source. A crackpot physicist, if shown a similarly odd coincidence (imagine if those numbers represented two apparently uncorrelated measurements of different physical quantities), would certainly start to build a theory around it with the means at his or her disposal. This would be enough for him or her to be tagged as a true crackpot. But there is nothing wrong with trying to understand a numerical coincidence! The only difference is that acknowledged scientists only get interested when those coincidences are really, really, really odd.

Yes, the feeling of being fooled by Nature (the bitch, not the magazine) is what lies underneath. You study electroweak theory, figure that the Higgs boson cannot be much heavier than 100 GeV, and find out that to be so there has to be a highly unlikely numerical coincidence in effect: this is enough for serious physicists to build new theories. And sometimes it works!

The guy in the picture on the right, Johann Jakob Balmer, got his name in all textbooks by discovering the *ratio* (in the Latin sense) of the measured hydrogen emission lines. He was no crackpot, but in earnest all he did to become textbook famous was to find out that the wavelengths of the hydrogen lines in the visible part of its emission spectrum could be obtained with a simple formula involving an integer number **n** -none other than the principal quantum number of the hydrogen atom.
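In modern form, Balmer's observation is the Rydberg formula with the lower level fixed at 2. A small sketch of my own (the Rydberg constant and line identifications are standard values I am supplying, not from the post):

```python
# Balmer series: vacuum wavelengths of the visible hydrogen lines,
# i.e. transitions n -> 2 with n = 3, 4, 5, ...
R_H = 1.0968e7  # m^-1, Rydberg constant for hydrogen

def balmer_wavelength_nm(n):
    """Vacuum wavelength (in nm) of the n -> 2 transition."""
    return 1e9 / (R_H * (1 / 4 - 1 / n**2))

for n, name in [(3, "H-alpha"), (4, "H-beta"), (5, "H-gamma"), (6, "H-delta")]:
    print(f"{name}: {balmer_wavelength_nm(n):.1f} nm")
# H-alpha ≈ 656.5 nm, H-beta ≈ 486.3 nm, H-gamma ≈ 434.2 nm, H-delta ≈ 410.3 nm
```

A one-parameter formula reproducing four measured lines to a fraction of a nanometre: that is the kind of "coincidence" worth building a theory around.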

So, is it a vacuous occupation to try and find out the underlying reason -the ratio- of the Koide mass formula or other coincidences? I think it only partly depends on the tools one uses, and much more on the likelihood that these observed oddities are really random or not. And, since a meaningful cut-off in probability is impossible to determine, we should not laugh at the less compelling attempts.
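For readers unfamiliar with it, the Koide formula states that (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2 equals 2/3 to remarkable accuracy. A quick numerical check of my own (the lepton masses are round PDG-style values I am supplying):

```python
import math

# Charged lepton masses in MeV (approximate modern values)
m_e, m_mu, m_tau = 0.5109989, 105.6583745, 1776.86

Q = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2
print(Q)  # ≈ 0.666661, i.e. 2/3 to about one part in 10^5
```

Remarkably, the formula predated the precise measurement of the tau mass and survived it.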

As far as the numerical coincidence I quoted above is concerned, you might have guessed it: **it is no coincidence!** Greg Landsberg explains in a footnote to the paper I quoted above that one could in fact demonstrate, with some skill in algebra, that

“It turns out that in the base-N numerical system the ratio of the number composed of digits N−1 through 1 in the decreasing order to the number obtained from the same digits, placed in the increasing order, is equal to N−2 with the precision asymptotically approaching . Playing with a hexadecimal calculator could easily reveal this via the observation that the ratio of FEDCBA987654321 to 123456789ABCDEF is equal to 14.000000000000000183, i.e. 14 with the precision of .”

Aptly, he concludes the note as follows:

“Whether the precision needed to fine-tune the SM [Standard Model] could be a result of a similarly hidden principle is yet to be found out.”
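Landsberg's footnote is easy to verify for any base with exact rational arithmetic. A sketch of my own (the construction of the two numbers is my reading of the footnote):

```python
from fractions import Fraction

def deviation_from_integer(N):
    """Ratio of the base-N number with digits N-1..1 (descending) to the one
    with digits 1..N-1 (ascending), minus the integer N-2, computed exactly."""
    desc = sum(d * N ** (d - 1) for d in range(1, N))      # 987654321 for N = 10
    asc = sum(d * N ** (N - 1 - d) for d in range(1, N))   # 123456789 for N = 10
    return float(Fraction(desc, asc) - (N - 2))

print(deviation_from_integer(10))  # ≈ 7.3e-8  (the nine-parts-in-a-billion case)
print(deviation_from_integer(16))  # ≈ 1.8e-16 (the hexadecimal example)
```

The deviation shrinks rapidly with the base, which is why the hexadecimal version looks even more "fine-tuned" than the decimal one.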

Ah, the beauty of Math! It is so reassuring to know the absolute truth about something… Alas, too bad for Gödel’s incompleteness theorem. On the other hand, whether one can demonstrate that the Florida elections were fixed remains to be shown.

## Comments


Remember how people tried to explain the fine structure constant by combining physical constants? Who was it that obtained 1/136 and felt he was on the right track? I don’t remember. Ah, the power of math. It can blow people’s minds away.

Jeff

p.s. I remember a “joke” about Pauli. More or less it goes like this:

Pauli passes away and heads off to paradise. After the formalities at the gate he enters and is asked if he has any wish. He says “Why YES! … I wish God would explain to me the fine structure constant!”

God appears and starts explaining using a heavenly blackboard. God concludes and says “THIS is why the fine structure constant has the value it has!”. Pauli stands up and says… “That is false!”

For someone who devoted his life to studying the laws of Nature it seems both inappropriate and odd to call ‘her’ a bitch.

Best wishes, P

I think “bitch” is used as a term of endearment in recognition that she seems to give up her secrets so unwillingly at times. Mind you, with the frequent natural disasters that occur, “bitch” might seem an apt description from the perspective of life forms which suffer the consequences of these.

jeff mentions 1/136.

In their book “Combinatorial Physics” (World 1995) Bastin and Kilmister said:

“… Eddington did … in effect, assume that the calculations would be combinatorial … Eddington saw … 10×10 + 6×6 = 136 … as the origin of the fine-structure constant …

[When the]… experimental value …[was shown to be]… much nearer to the reciprocal of … 137 than to 136 … Eddington offered reasons for adding 1 …”.

Eddington’s method was rejected, and he was ridiculed by being called Sir Arthur Adding-One.

In the late 1960s and early 1970s, Armand Wyler (then a young Swiss mathematician with a 1966 PhD from the FIT) calculated the fine structure constant as 1/137.03608 … based on the geometry of bounded complex domains and their Shilov boundaries.

Wyler then spent a year at Stanford and a year at MIT, and then spent 1971-72 at IAS Princeton under Freeman Dyson.

Unfortunately, Wyler was not an effective salesman for his work, and it was rejected by the physics community. According to a report attributed to a Swiss physicist, Wyler had “lost it” while at IAS Princeton and was sent home and locked away in an institution for the insane.

Hostility to Wyler and his work continued even decades later. For example, in 1989 David Gross wrote a Physics Today article attacking Wyler’s work.

Only a few now write papers using and acknowledging Wyler’s work, and some of us (Carlos Castro, me, …) are blacklisted by the Cornell arXiv.

One whose papers have been allowed to be posted on hep-th is Walter Smilga, in Germany ( see hep-th/0304137 and hep-th/0508152 ) whose work produces a result that effectively “… reproduces Wyler’s formula for the fine-structure constant …”.

Tony Smith

“In a nutshell, the Higgs boson mass is expected to be light in the Standard Model, and yet it is very surprising that it be so, given that there are a dozen very large contributions to its value, each of which could make the Higgs hugely massive: but they altogether magically cancel. They are “fine-tuned” to nullify one another like gin and Martini around the olive in a perfectly crafted drink.”

Actually, for the SM Higgs to stay light it requires fine-tuning at every order in perturbation theory, which is normally called extraneous. Most theorists would consider this to be highly unnatural. Extensions to the SM such as supersymmetry solve this problem automatically without fine-tuning, provided that the superpartner masses are no greater than a few TeV.

Thank you Eric. I am trying to trivialize things for readers without a PhD obtained with professor N., but I appreciate your attention to detail. I actually did not know it was called extraneous – having a blog is a very good way to learn new things!

Anyway, it was precisely my point that the fine tuning is unnatural, be it at a single order or at all orders, or without mentioning perturbation theory altogether.

One point. Please, you can stay anonymous to readers if you wish so, but do not change your name every other day!

Cheers,

T.

Tommaso, thanks for the link. I should update that home page so it shows more recent stuff.

The Koide formula is based on circulant matrices. The basic idea is to write masses as the eigenvalues of 3×3 circulant matrices. Kea has long been beating the drum for “2-circulant” matrices, which are sort of the opposite of circulant matrices (which are called “1-circulant”). I pretty much ignored this, but a few days ago she pointed out that the CKM (quark mixing) matrix is approximately the sum of a 1-circulant and a 2-circulant. That got my attention so I wrote the MNS (lepton mixing) matrix as such a sum. It turns out to be eerily simple. And I already have the quark and lepton states written in 1-circulant + 2-circulant form (i.e. weak hypercharge and weak isospin), so I think we are close to getting a decent theory put together, maybe this summer.

Great post, Tommaso. Carl, of course I believe there is a quite sophisticated Abstract Nonsense interpretation of the circulant sums. Recall that if we view a triangle as the 1-circulant objects, and the dual trivalent graph as the dual 2-circulant objects, then the basic CKM sum is a kind of self-dual cohomological object in the diagram algebra based on B3 braids.

Tommaso

How could the Florida elections be fixed? Surely no one would go to all the trouble and risk of fixing an election and leave the result to a mere 60 ppm. No political operative has the skill to even attempt to cut it that close.

Only a theorist has that level of confidence!

Paul Neilson said “… Surely no one would go to all the trouble and risk of fixing an election and leave the result to a mere 60 ppm. No political operative has the skill to even attempt to cut it that close. …”.

I agree that Florida 2000 was not fixed in that sense.

Gore lost because his dumb lawyers failed to ask for a full recount of all the Florida votes until it was too late, giving the US Supreme Court grounds to bar a full recount. (Gore’s lawyers’ dumbness was in trying to be too selfishly clever and recount only areas favorable to Gore.)

However, Ohio 2004 may indeed have been fixed against Kerry in favor of Bush. According to Freeman and Bleifuss in their book “Was the 2004 Presidential Election Stolen?” (Seven Stories Press 2006):

“… Ohio …

Bush Official Count 50.8%

Kerry Official Count 48.7%

…

Bush Exit Polls 45.4%

Kerry Exit Polls 54.2%

…

In November 2004, concern about a possible fraudulent election based on exit-poll discrepancy made headlines across America. However, it was not that month’s U.S. presidential election …

but … an election in … Ukraine …

Yet the U.S. media never questioned why a comparable discrepancy …[in the U.S.]… meant nothing …”.

If you want details of how likely it was that computer fraud may have given Ohio 2004, and the 2004 presidential election, to Bush, read the book.

Tony Smith

PS – I should say that I am not judging whether Bush or Kerry should have won the election, because in some sense elections are contests like love and war in which anything is fair if you get away with it.

IIRC, Kerry bragged that he would win Ohio by buying votes in the Cleveland area, using the same tactics used by Kennedy in defeating Nixon (another likely case of decisive vote fraud, although old-fashioned without computers).

My view is that if Bush (or more precisely his adviser Karl Rove) were smart enough to rig Ohio computer fraud to more than offset Kerry old-fashioned fraud, and Kerry was too dumb to counter it,

then maybe the Bush team was smarter than the Kerry team and so deserved to win the fight-in-which-all-is-fair.

Lubos a few years ago condemned the book ‘The Final Theory’, which was being sold on Amazon with lots of positive reviews, after reading the first chapter which was available free ( http://motls.blogspot.com/2005/08/amazoncom-controlled-by-crackpots.html ). I later found that book quite by accident beside Feynman’s lectures on physics in a library! It’s based on the discovery of the numerical ‘coincidence’:

G = (1/2)g(r^3)/(Rm) = 6.82*10^{-11} (m^3)/(kg*s^2),

(2% higher than the observed constant), where g is the acceleration due to gravity on the surface of the Earth, R is Earth’s radius, r is mean orbital radius of the ground state of hydrogen, and m is the mass of a hydrogen atom. The book tries to justify this by an argument that the atoms, planets, etc., are expanding at an accelerative rate proportional to their radii. The idea in the book is that the ground accelerates upwards as the Earth expands, creating the illusion of gravity since everything expands.
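Plugging in round textbook values does reproduce the number quoted above (the constants below are my own round figures, not taken from the book):

```python
# Numerical check of the book's "coincidence": G ≈ (1/2) g r^3 / (R m)
g = 9.81        # m/s^2, surface gravity of the Earth
R = 6.371e6     # m, mean Earth radius
r = 5.29e-11    # m, Bohr radius (ground-state hydrogen orbit)
m = 1.674e-27   # kg, mass of a hydrogen atom
G_book = 0.5 * g * r**3 / (R * m)
print(G_book)   # ≈ 6.8e-11, versus the measured G ≈ 6.674e-11
```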

This idea fails to hold water ( http://nige.wordpress.com/2007/06/20/the-mathematical-errors-in-the-standard-model-of-particle-physics/ ) for the simple reason that it would cause gravity to be solely dependent on the radius of the planet, instead of the mass of the planet. Also, his ‘theory’ doesn’t explain the convenient arbitrary factor of 1/2 in his ‘coincidence’, and the expansion rate is higher by an astronomical factor than the observed acceleration of the universe. So it is numerology, and can’t explain the facts of gravitation. It doesn’t tie in to the observed expansion of the universe, or to the universal law of gravity where the force depends on the product of the masses involved, not the radii of the planets.

Feynman explained sarcastically why coincidence can be a misunderstood concept: ‘On my way to campus today, I saw a car with the licence plate XRT-375 in the parking lot – isn’t that amazing? What are the odds of seeing that exact licence?’ The point is, probability just indicates your uncertainty given your ignorance of the situation. Once you have observed something, you are no longer uncertain. If something occurs, it’s no longer unlikely (to those who know about it); it’s just a certain fact. So ‘coincidences’ alone are not really too impressive. You expect some numerical coincidences when juggling with different values in a physics data book, so it’s only impressive if you have a theory that will predict a ‘coincidence’ in advance of finding empirical evidence. If you predict you will see a car with licence plate XRT-375 in the parking lot, then confirm the prediction, that would be a useful coincidence.

By comparison, Koide’s formula is relatively impressive because it has survived improvements to the data, and has various extensions. It is great that it is leading towards a theory which may be able to predict other particle masses.

Hi Paul,

my point about fixing the FLA elections was just a provoking statement. Of course, there were news of people lining up to vote and being denied to, in counties where Gore was thought to have a majority. And a suspicion of that kind must be automatic in a democracy where the governor of a state is the brother of the would-be president…

If you try to fix elections, you are going to do it in swing states, and where better than FLA for Bush ? Of course you’d have no control on micromanaging a 0.00x% difference, so this is a vacuous discussion, for which I apologize.

Cheers,

T.

Yikes

I thought I was just making a joke about the confidence of theorists!

Now, for real skill in stealing elections take a look at Cook County IL, where the dead just keep on voting and voting and voting …

“…the feeling of being fooled by Nature (the bitch, not the magazine) ”

Readers in awe! Congratulations.

Only a crackpot would look for science in numerical coincidences? But then what does that make P.A.M. Dirac, with his large number hypothesis? Let no one call the father of notation a crackpot.

I enjoyed reading the paper you linked to on Koide’s mass formula; surely it must mean something deep. What does a democratic mass matrix mean for the Higgs sector?

It means we don’t need fairy fields.

Ah, Paul, that is a very common trick. Interesting it is still used.

;-) Tulpoeid

Dr Adams, PAM Dirac was an eagle, but that wouldn’t mean all he did was peak high. In any case, calling people crackpots is a very annoying thing in our field, that people like Lubos Motl like to entertain themselves with. I still think it is doing science, if a bit less rigorously so.

Cheers,

T.

I thought that I had read a comment in a thread on Woit’s blog a few years back that Lumo had written what was perhaps the shortest paper in the history of the arXiv, one that predicted the mass of the hydrogen atom or something, via numerology.

Anyway, history will prove that Dirac was right, (almost)… and so was Florida!!!… ;)

Dirac’s hypothesis included a gravitational constant that decreased as the universe aged, while the electric force remained constant, and this is how he explained the large dimensionless number which appears in physics, 10^40, through its relation to the size of the universe expressed in atomic units.

He noted that the electric force between the proton and the electron is inversely proportional to the square of the distance, while the gravitational force is also inversely proportional to the square of the distance, but the ratio of those two forces, about 10^39, does not depend on the distance. He reasoned that, being a pure ratio, the number does not depend on any choice of units, and its sheer magnitude makes it stand out on its own.
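That force ratio is easy to reproduce; a quick check of my own (the constants are standard CODATA-style round values I am supplying):

```python
import math

# Coulomb vs Newton between a proton and an electron: the separation r
# cancels in the ratio, so only the constants matter.
e = 1.602e-19      # C, elementary charge
eps0 = 8.854e-12   # F/m, vacuum permittivity
G = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
m_p = 1.673e-27    # kg, proton mass
m_e = 9.109e-31    # kg, electron mass

k = 1 / (4 * math.pi * eps0)          # Coulomb constant
ratio = (k * e**2) / (G * m_p * m_e)  # F_electric / F_gravity
print(f"{ratio:.1e}")                 # ≈ 2.3e39
```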

If you express the age of the universe in atomic units of time, then you get a number of about 10^39, approximately the same as the previously mentioned number.

In 1937, Dirac proposed an explanation of the two large numbers in terms of a third one, which was the age of the universe t_U, (the epoch), measured in units of a typical atomic time e^2/mc^3. Using the present age of the universe, it turns out that…

e_3=t_U/(e^2/mc^3)=e_1

From this, Dirac postulated that the number of nucleons in the universe must increase with t_U^2, and the gravitational “constant” G must decrease as t_U^-1.

Dirac said:

That means that the electric force compared with the gravitational force is not a constant, but increases in proportion to the age of the universe. [...] The most convenient way of describing this is to use atomic units, which make the electric force constant; then, referred to these atomic units, the gravitational force will be decreasing. The gravitational constant, usually denoted by G, when expressed in atomic units, is thus not a constant any more, but is decreasing inversely proportional to the age of the universe.

Dirac also mentions the possibility that hc/e^2 and/or m_p/m_e might vary proportionally to the logarithm of t_U.

But gravity is cumulative, and so real particle pair production should increase the gravity of the universe, unless the creation of particles from vacuum energy leaves real holes in the vacuum that serve to counter-balance the increased gravitational effect, by increasing vacuum tension, via the increase in -rho that occurs with further rarefaction of the vacuum, where…

P is related to vacuum energy density, by P = -rho.

The number of nucleons in the universe must increase with t_U^2 as the universe ages, per the second law of thermodynamics in an expanding universe, which requires that the breakdown of energy include the isolation of high-energy photons that are known to interact with virtual particles in the quantum vacuum to create real particle pairs.

So the number of nucleons in the universe increases, while G remains constant, since the decrease which occurs with t_U^-1 represents an increase in -rho, which is immediately offset by the increase in the matter density that comes about when you make a real massive particle from virtual particles in the vacuum.

I say, “further” rarefaction because real particle pair production should also increase with the increase in negative energy density, (DE), that comes about as a result of increasing tension between the vacuum and ordinary matter.

The square of the age of the universe equals the number of particles in it, because the size of the universe in astronomical units is proportional to the number of particles that have been created, meaning that vacuum expansion is inversely proportional to real and virtual particle pair production as the negative pressure component increases in proportion to the holes that get made in the vacuum by way of the condensation of its energy.

General Relativity can be accommodated to this idea fairly easily.

An Interview with P.A.M.D.

And just an FYI, but Dirac was a Florida Cracker when he died!!!… lol

QUACK!!!

Island, don’t tell me you liked the way the election went in 2000… Bush is the worst president the US has had in a long, long time!

Thank you for the quotes by Dirac btw. But are you arguing he was right on the whole thing ?

Cheers,

T.


My comments about numerical coincidences are here:

http://motls.blogspot.com/2008/07/excitement-about-numerical-coincidences.html

Hi Lubos,

I liked your post… Better than mine. Take this comment and save it though, it does not happen frequently ;-)

Cheers,

T.

Dear Tommaso, thanks for your kind words. For these things to happen more frequently, I will have to write about the same topics as you more frequently. It is no numerical coincidence that the frequency of your compliments about my texts on your topics is an increasing function of how often I write about your topics. ;-)

How boring – I am forced to agree with you again. Please do not reply with anything meaningful below, I don’t think I could stand having to agree with you three times in a single day !

Cheers,

T.

Hi Tommaso,

I’d forgotten all about this post of yours. It came up from doing a search for Walter Smilga to see if he’d actually managed to publish any of his stuff on the fine structure constant.

His latest paper shows a bit of a coincidence with the paper I submitted to Phys Math Central and I would like to reference it. His calculation (which I’ve just glanced at) uses the assumption that, in QED, one should rewrite the theory so that it is one of electrons only, without the photons, while my paper has the same philosophy about the quarks and gluons. There are a couple of little leaps in his paper (see “Probing the mathematical nature of the photon field”, Walter Smilga, 0901.4917) which I need to understand better.