
The Oklo reactor and a varying fine structure constant April 18, 2007

Posted by dorigo in Blogroll, books, physics, science.

In the process of slowly absorbing Ray Kurzweil’s enlightening book The Singularity is Near, I learned of the existence of a natural nuclear reactor in Oklo, Gabon, which was active until about 1.5 billion years ago and was discovered in 1972 by Francis Perrin.

A natural nuclear reactor! I remember thinking about the possibility that something like that could occur in a uranium ore while reading The Making of the Atomic Bomb, an entertaining account of the history of the first years of nuclear physics by Richard Rhodes.

The physics of nuclear reactions is complex, but a few basic facts suffice to understand what happens with uranium, which in nature is a mixture of the relatively stable U-238 and a small percentage (0.72%) of the more reactive isotope U-235. U-235 undergoes spontaneous fission into two nuclear fragments, emitting a few energetic neutrons; the process can also be triggered by the capture of a neutron. A nuclear chain reaction occurs spontaneously in uranium if the neutrons emitted in the fission of a U-235 nucleus have a sizable probability of producing another fission before being captured by a U-238 nucleus or escaping the fissile material. The probability that a neutron initiates another fission of course grows with the fraction of U-235 in the uranium mass and with the amount of material, but it also depends critically on other, subtler conditions.
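One can picture the criticality condition with a toy multiplication factor k, the average number of next-generation fissions that each fission spawns. The sketch below is purely illustrative: the fission probability is a made-up placeholder rather than a measured cross section, and only nu ≈ 2.4, the average neutron yield per U-235 fission, is a real figure.

```python
# Toy model of a neutron chain reaction. Each generation, every neutron
# either induces a new U-235 fission (releasing nu fresh neutrons) or is
# lost to U-238 capture or escape. The reaction is self-sustaining when
# the multiplication factor k = nu * p_fission exceeds 1.
# NOTE: p_fission is a hypothetical placeholder, not a real cross section;
# nu ~ 2.4 is the actual average neutron yield per U-235 fission.

nu = 2.4          # average neutrons released per U-235 fission
p_fission = 0.45  # hypothetical probability a neutron induces another fission

k = nu * p_fission
print(f"multiplication factor k = {k:.2f}")

neutrons = 1.0
for generation in range(10):
    neutrons *= k
print(f"after 10 generations: {neutrons:.1f} neutrons "
      f"({'super' if k > 1 else 'sub'}critical)")
```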

I am not a nuclear physicist, so I will abstain from attempting a thorough explanation of the many details here. One fact suffices: water is a substance with the potential to slow down the neutrons emitted by U-235 to a speed which is just about right for maximum fission probability when they hit another U-235 nucleus.
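To put a number on “just about right”: a neutron thermalized by room-temperature water ends up around 0.025 eV, the textbook 2200 m/s “thermal” neutron, which is where the U-235 fission cross section is largest. A quick back-of-the-envelope check (standard constants, nothing Oklo-specific):

```python
import math

# Speed of a neutron in thermal equilibrium at T ~ 293 K: E ~ kT ~ 0.025 eV,
# v = sqrt(2E/m). This is the textbook "2200 m/s" thermal neutron.
k_B = 1.380649e-23    # Boltzmann constant, J/K
m_n = 1.675e-27       # neutron mass, kg
eV  = 1.602176634e-19 # joules per eV

T = 293.0             # room temperature, K
E = k_B * T
v = math.sqrt(2 * E / m_n)

print(f"E = {E / eV * 1000:.1f} meV, v = {v:.0f} m/s")  # ~25 meV, ~2200 m/s
```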

What is thought to have happened two billion years ago in Oklo and in a few other sites nearby is that water infiltrated the uranium ore, effectively moderating the neutrons continuously produced by the sub-critical mass of U-235 and turning on a chain reaction. The heat generated would boil the water away, turning the reaction off until new water seeped through the vein.

Physics is fascinating… And the dynamical equilibrium that prevented a meltdown of the Oklo reactor, generating heat in periodic bursts, is intriguing indeed. One thing to note: such a thing cannot happen nowadays, because the faster decay of U-235 (its half-life is about 700 million years, versus 4.5 billion years for U-238) has made it nearly five times less abundant in uranium ore than it was two billion years ago… Conditions were just about right back then, with a U-235 fraction of about 3.5%, which closely matches that of the enriched uranium used in fission reactors. Also, the pressure deep underground certainly helped.
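That abundance figure is easy to check from the two half-lives alone. A back-of-the-envelope sketch, using the standard values of about 704 million years for U-235 and 4.47 billion years for U-238:

```python
import math

# Back-extrapolate today's 0.72% U-235 fraction to 2 billion years ago.
# U-235 decays roughly six times faster than U-238, so its relative
# abundance was much higher in the past.
t_half_235 = 7.04e8   # U-235 half-life, years
t_half_238 = 4.468e9  # U-238 half-life, years
t = 2.0e9             # years before present

ratio_now = 0.0072 / 0.9928   # U-235 / U-238 number ratio today
ratio_then = ratio_now * math.exp(math.log(2) * t * (1 / t_half_235 - 1 / t_half_238))
fraction_then = ratio_then / (1 + ratio_then)

print(f"U-235 fraction 2 Gyr ago: {fraction_then:.1%}")  # roughly 3.7%
```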

Physicists who analyzed the substances collected at the Oklo site believe that the reaction went on for several hundred thousand years, and they are actually able to measure many details of those ancient events – the duration of each reaction cycle, the temperature generated, and more. They do so by looking at the relative abundance of several telltale isotopes: ones which are produced as secondary products of the reaction (such as Nd-142 or Ru-99) and ones which disappear from the ore because of it (such as, of course, U-235, whose concentration in uranium ore was found to be as low as 0.4%, against the usual 0.72% of present-day uranium ore).

What is interesting is that the amount of specific isotopes (such as Samarium-149) appears to allow a measurement of the value that the fine structure constant, labeled by the Greek letter alpha, had two billion years ago. This is possible because the cross section for neutron capture by that nuclide changes with the fine structure constant, so the concentration of Sm-149 becomes a yardstick for the latter.
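The mechanism deserves a one-line sketch (this is the standard argument, originally due to Shlyakhter and later refined by Damour and Dyson; the figures below are the commonly quoted ones). The Sm-149 capture cross section is dominated by a resonance sitting at E_r ≈ 97.3 meV, just above thermal energies, and the position of the resonance results from a near-cancellation between large Coulomb and nuclear contributions to the energy of the compound nucleus. A tiny fractional change in alpha therefore shifts the resonance by a hugely amplified amount, roughly

$$ \delta E_r \approx -1.1\ \mathrm{MeV} \times \frac{\delta\alpha}{\alpha} , $$

so a bound of a few tens of meV on the resonance shift, inferred from the measured Sm-149 abundance, translates into a bound on delta(alpha)/alpha at the 10^-8 level.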

What if alpha was different back then? Well, since alpha is the square of the electron charge divided by the product of the reduced Planck constant and the speed of light, finding a smaller alpha would imply that the speed of light was larger, unless you were to buy the even steeper hypothesis of a changing electron charge or a changing Planck constant.
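In symbols, in the Gaussian units where that definition reads most cleanly:

$$ \alpha = \frac{e^2}{\hbar c} \approx \frac{1}{137.036} $$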

Measurements of Oklo ore were actually used for a while to demonstrate that the speed of light had been constant over the last two billion years, but a 2004 paper by Steve Lamoreaux and Justin Torgerson of Los Alamos National Laboratory [PRD 69 (2004), 121701] appeared to show a decrease of alpha by 45 parts in a billion. They used a more realistic spectrum for the energy of neutrons in the reactor than previous studies had, and claimed their result was thus more precise.

The whole thing is certainly fascinating! For a longer and better account of the story, and some additional insight into other measurements of alpha in the far past, see this fine article on the New Scientist site. The most recent material on the issue, however, appears to be nucl-ex/0701019, where Steve Lamoreaux and collaborators revise their own previous estimates and now put a more stringent (although admittedly model-dependent) bound on the variation of alpha,

-0.11 < delta(alpha)/alpha < 0.24

in units of 10^-7, at the 2-sigma level. So the latest data are consistent with the fine structure constant being, in fact, a constant, but small changes are not ruled out yet. Indeed, the matter is complex, and the final word on the Oklo reactors might not have been said yet…
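To get a feel for how tight the bound is, one can translate it into an average drift rate. Simple arithmetic, under the crude assumption that any change was linear over the roughly two billion years since the reactor ran:

```python
# Convert the Oklo 2-sigma bound, -0.11e-7 < delta(alpha)/alpha < 0.24e-7,
# into an average yearly drift rate, assuming a linear change over the
# ~2 Gyr elapsed since the reactor was active.
bound = 0.24e-7   # the larger of the two limits, in absolute value
t = 2.0e9         # years elapsed

rate = bound / t
print(f"|alpha_dot / alpha| < {rate:.1e} per year")  # ~1.2e-17 per year
```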

While we wait for it, why not consider that a sizable number of respected, mainstream physicists have been devoting years of research time to attempts at detecting a variation of alpha, which is basically at the same level of heresy as what Louise Riofrio has been proposing for a while?

What I mean to say is: let the data speak one way or the other rather than preemptively throwing the first stone! No true scientist (bureaucrats and lackeys do not belong to the category, of course) can ever be completely crackpot-free. We all, in fact, share an attraction to the bold, revolutionary idea. Be it a varying speed of light or a tiny but nonzero amount of energy filling empty space, it still fascinates us, until it is proven false.

Comments

1. andy - April 18, 2007

“No true scientist (bureaucrats and lackeys do not belong to the category, of course) can ever be completely crackpot-free. We all, in fact, share an attraction to the bold, revolutionary idea.”

I’m glad that you said that. Being crotchety about new ideas is bad for the process of science.

2. Alexander W. Janssen - April 19, 2007

I first learned about the Oklo reactor from Terry Pratchett’s very funny and insightful book “The Science of Discworld”. It is darn funny: it starts out with a very cold winter in Ankh-Morpork, and one of the wizards sees a possibility to get funds for his “thaumic reactor” – a device to split the magical thaum particle. Heat is a byproduct of the thaumic reactor (which is built on the squash court of the Unseen University ;)

However, the experiment goes all wrong and they need to divert all the magical energy onto some target which uses it up, so that they don’t blow up – they put all the energy into the “Roundworld Project” (the scene takes place on the Discworld, a world run by magic rather than by the laws of nature). The Roundworld Project was a Gedankenexperiment by the wizards to find a place where the laws of magic have no effect.

The Roundworld is a small orb, probably 2 feet in diameter, but it’s dark inside… And suddenly, after some bored wizard pokes it with his finger, the Big Bang happens 😉

The book is about how the wizards use the Roundworld to figure out how their magic works by observing how a place *without any magic* works. Sounds familiar, doesn’t it?

I’ll stop spoiling it now; you should just read it. Every other chapter is not fiction but a scientific explanation of what happened in the previous chapters. Terry had two co-authors as scientific consultants: Ian Stewart, a mathematician, and Jack Cohen, a biologist.

http://en.wikipedia.org/wiki/The_Science_of_Discworld

Go and get it, you’ll enjoy it.

OK, back to work.
Cheers, Alex.

3. DB - April 19, 2007

Why is the interpretation of a changing electron charge “steeper” than that of a changing speed of light? Certainly if you work in units where c=1 and hbar=1, a change in “e” is the only possible interpretation.

4. Louise - April 19, 2007

Thanks, it is very promising that changing c is discussed in so many places now.

5. Andrea Giammanco - April 19, 2007

I first heard of the Oklo “reactor” in a very amusing article in a popular science magazine, when I was a teenager. I remember it said that at first, when the isotopic anomaly was found and confirmed, the first hypotheses were, in order:
– somebody had stolen some tons of raw material from the ore and replaced it with depleted uranium (maybe to sell the good stuff to wannabe nuclear powers);
– an alien spacecraft (powered by nuclear energy, of course) had crashed there billions of years ago, contaminating the ore with its spent fuel.

I don’t know if these hypotheses were ever taken seriously, or were just mentioned to amuse the reader. But I would have found the alien hypothesis very attractive, although the reality is almost as amazing, for a nuclear physicist 🙂

(I am a nuclear physicist by training; I converted to particle physics from the laurea onward, after several exams on low-energy stuff, including a couple of months of data analysis at a cyclotron.)

“it is very promising that changing c is discussed in so many places now.”

Well, the Oklo reactor was already a textbook example (of the constraining of alpha) some years ago; I know it for sure, because I found a detailed explanation in some photocopies of a text I can’t remember, which I inherited from my former room-mate.
But I don’t know if at the time it was felt to be important for constraining a variation of c, or of the electron charge.
Anyway, I believe it is important to test ALL the possible fundamental parameters, although there is not yet any theory predicting deviations from standard knowledge.

6. Andrea Giammanco - April 19, 2007

I just realized that my poor English may lead to a misinterpretation of my last sentence.
I meant EVEN WHEN there is no theory predicting them, etc.
(I know, indeed, that there are several theories that imply the non-constancy of several “constants”.)

7. dorigo - April 19, 2007

Hi all,

thanks Andy, I said the above because I really believe it.

Alex, I will check the links during the weekend…

DB, I think the electron charge – being quantized – is psychologically harder to dislodge from its status as a constant.

Hi Louise, you’re welcome… Of course controversial ideas should get more attention these days… Cosmology is at risk of ending up on a dead track, particle physics is in a coma… We need something to liven things up.

Andrea, I did not know you were a nuclear physicist by training. Interesting idea, the one about somebody stealing the enriched uranium and leaving U-238 behind, LOL!

Cheers,
T.

8. Quantoken - April 20, 2007

Dorigo:

You cannot have a light speed, or a Planck constant, or an alpha that changes over time, for two very good reasons:

1. It establishes an absolute time for the universe, which is against the basic philosophy on which Einstein built his theory of relativity. You sit in a lab which does not interact with the rest of the universe, you do a precise experiment to determine alpha, and you know what year you live in. That’s impossible. Physics laws do not change over time.

2. The reason light speed cannot change is that to define a unit of speed, you need to define exactly how long one meter is, and how brief a time period one second is. You must find a natural yardstick that you can rely on to define your length and time units. Light speed is itself one such natural yardstick. You might use a physical object as your yardstick, like a metal yardstick. But then the length of the metal is decided by the lattice constant, the gap between atoms, and that is decided by the Bohr radius, which ultimately depends on light speed. So you are still measuring light speed using light speed.

3. Likewise, the Planck constant cannot change. It is needed to define exactly how long one second is. The Planck constant is a natural clock, and you cannot say whether this clock goes faster or slower because you do not have a better clock to check it against.

4. You need one more thing to fix the three basic measurements of length, time, and mass. I propose that the mass of the electron be used as the basic mass unit.

9. dorigo - April 20, 2007

Hey, thanks! You promised two good reasons and listed four.

You are listing things that make sense provided one sticks to a dogmatic view of physics. Physical laws are assumed to show no change over time, but we do not know that for a fact, because we can only test physical laws in a small time window: so of course tests such as the one the Oklo ore makes possible are welcome, to check whether that dogma holds water after all.

If string theorists talk about ten dimensions (we know there are three, don’t we? Well, I am not so sure) and a lot more which we have never measured nor experienced, cosmologists model their universe with an inflationary period when space expanded way faster than c (and inflation is impossible to test), and they now explain the acceleration of the universe with an unnatural value of the vacuum energy, I wonder whose dogmas are stronger.

Cheers,
T.

10. Quantoken - April 20, 2007

Dorigo:

Sorry, I did not count them clearly. Reasons 2, 3, and 4 that I listed are actually one reason: we need to fix those few fundamental physical constants in order to obtain the basic units with which to make meaningful measurements.

I don’t read too much into the Oklo “reactor” theory. There is some discrepancy in the U-235/U-238 proportion, but it could possibly have a much more natural explanation. I remember reading somewhere that someone concluded that the uranium on earth came from at least 7 different sources at different times during the evolution of the solar system, instead of coming from the same primordial soup.

Do you know the interesting fact that if you consider the proportions of U-238/U-235/U-234, the ratio of the abundances is extremely close to 1 : alpha : alpha^2, with alpha being the fine structure constant? Another fact: the ratio of the CMB background temperature to the water boiling temperature is also exactly alpha.

Talking about the primordial soup that formed the solar system: don’t you feel it is ODD that the lighter gas, like hydrogen, sank to the center and formed the sun, while the heavier rocks got ejected to the outer layers and formed the solid planets? Doesn’t it make more sense that most of the rocks should sink to the center of the solar system, and hence form the bulk of the sun’s mass? Someone is suggesting that the sun is mainly composed of iron instead of hydrogen. We only observe hydrogen because we cannot see through to the center of the sun. What do you think?

11. dorigo - April 20, 2007

Hi,

I find it hard to believe that while the fraction of U-235 is constant everywhere and only smaller in a few sites around Oklo, its origin is seven-fold.

As for the other numerology, it is a nice coincidence, but do you mean to speculate that intelligent life on earth is only possible when uranium isotope ratios and alpha are the same? I find that quite hard to buy too.

Hydrogen is almost 100% of the universe. It is no wonder that it aggregates into stars. Instead, the fact that planetesimals form from heavy elements means there are complex phenomena in the early phases of the formation of solar systems.

The sun being made of iron is instead a real howler. Helioseismology studies allow us to know extremely well the composition of the sun’s interior.

But at this point I think you are just putting together all the weirdest ideas to see if you win any support here. Sorry… I am in favor of bold ideas when there are things hard to explain by ordinary means, but I am an Occamist by nature after all…

Cheers,
T.

12. Quantoken - April 21, 2007

Dorigo:

It is not a weird thought, but conventional wisdom, that heavy things sink to the bottom and light things float. It is hard to imagine that the light hydrogen gas sank to the center and formed the sun, while the heavy rocks floated out to form the planets. It just does not make sense.

True, if you analyze the photons coming from the sun or any star, it’s 75% hydrogen and 25% helium (you said almost 100% hydrogen; that’s incorrect). But the fact is we can only see the surface of the sun, not below it.

Another example of how easily they jump to conclusions is Jupiter. They see only hydrogen on the surface of Jupiter, and thus concluded Jupiter is mostly just hydrogen gas. But it’s odd: why are planets either further out, like Pluto, or further in, like Mars and Earth, all solid rock, while in between, hydrogen condenses to form a gaseous planet? Besides, there’s extreme radiation around Jupiter and an extremely strong magnetic field. If it is mostly just hydrogen, where do the magnetic field and radiation come from? We know for a fact that Jupiter emits more heat than it absorbs from the sun. The source of the heat has got to be either nuclear fusion or nuclear fission. Fusion is impossible because Jupiter is not big enough or hot enough for fusion to occur. Then it must have a very big solid core containing lots of radioactive elements. The same rocks that formed the earth and Mars must form the bulk of Jupiter’s mass, with the hydrogen then trapped in by Jupiter’s huge mass.

I am not trying to propose anything weird. I am trying to propose something that is reasonable, logical, and does not defy conventional wisdom. Unfortunately, nowadays the weirdest ideas (like string theory) are considered standard textbook physics, while the most logical and reasonable ideas are considered crackpottery.

Do you happen to know the Eddington Scandal?

I read somewhere that people concluded the uranium on earth has 7 different origins. I shall find my source and give you the link if I find it.

13. Quantoken - April 21, 2007

I am not the one who first proposed that the sun has a solid core. I never thought about it that way. But once I discovered this idea, I immediately found it reasonable and in keeping with conventional logic. See:

http://www.thesurfaceofthesun.com/

http://www.omatumr.com/

I must say I disagree with his explanation of how the heat of the sun is generated. But I really cannot dispute that the sun MUST have a solid core making up the bulk of its mass, if you consider the simple fact that heavy rocks must sink to the bottom.

Regardless, without a rocky core to start with, if it is just hydrogen gas, there is no way the hydrogen can condense into a star purely due to gravity. It wouldn’t work; the hydrogen gas would diffuse away. You must have a solid core to start with, to trap the hydrogen.

Now, you may want to do a little bit of calculation to see how big a solid core you must start with in order to trap enough hydrogen to form a star the size of the sun, especially starting from an extremely dilute gas cloud in the galaxy.

14. Quantoken - April 21, 2007

Here is one link regarding cosmic origin of uranium on earth:

http://www.uic.com.au/nip78.htm

15. dorigo - April 21, 2007

Hi,

Helium was present after the big bang, but most of it now is due to hydrogen fusion in the stellar cores.
We can “see” inside the sun by studying the wave oscillations of its surface, much the same way as we understand the earth’s interior with seismic waves. And I am afraid a “solid” core makes no sense at all in the sun, given the enormous pressures involved.

Cheers,
T.

16. Quantoken - April 21, 2007

Dorigo said:

“Helium was present after the big bang, but MOST of it now is due to hydrogen fusion in the stellar cores.”

Excellent answer! I completely agree with the notion that MOST of the helium in the universe today is due to hydrogen fusion in stellar cores. This has got to be true because the so-called first-generation stars all blew up in supernovae, and that’s how they say heavy elements are generated (in supernova explosions). A supernova explosion happens when a considerable portion of the hydrogen fuel has been burned up into helium. I am happy that you agree with that.

That is exactly the problem with Big Bang Nucleosynthesis. They ignored helium generation from stellar hydrogen fusion. They believe the currently observed helium abundance, about 24%, all came from the primordial soup after the big bang; they have a model that gives that 24% abundance, which matches the observed one. But that requires ignoring helium of fusion origin. That’s completely wrong.

My calculation shows that helium generated from hydrogen fusion in stars alone accounts for all the known helium abundance in the universe, no more and no less. There is no need for a primordial helium abundance to explain today’s helium.

Detailed calculations cannot be discussed here. But here is the rule of thumb: our sun has burned about 8% of its hydrogen into helium since its birth 4.6 billion years ago. The sun’s age is about 1/PI of that of the universe. So for the whole universe, about PI*8% = 25% of the hydrogen mass has been turned into helium. The calculation assumes the sun is an average star, which it is. Do the exercise yourself if you don’t agree with me.

Not only that. Calculate the total luminous mass of the whole universe, most of it hydrogen to start with, which is about 5.4% of the total mass of the whole universe. Assume 25% of it was converted into helium by hydrogen fusion, and calculate the total energy released. That energy exactly accounts for the total energy of the 2.725 K cosmic microwave background radiation. No more and no less. Exactly equal. As a matter of fact, I used that assumption and arrived at a CMB temperature of 2.7243 K, which exactly matches the experimental value.

Again you can disagree with me. But try to do your own calculation.

17. dorigo - April 21, 2007

Hi Quantoken,

I see you are quite interested in the matter. So am I, but I am afraid I cannot help you if you seek a confirmation of your non-mainstream ideas on nucleosynthesis, solar dynamics, or the like. I am just not the right person for that…

Cheers,
T.

18. island - April 22, 2007

That’s ‘Sir Quantoken Hoyle’, and that’s not a put-down, but I can’t make a call either.

“That is exactly the problem with Big Bang Nucleosynthesis.”

“There is no need for a primordial helium abundance to explain today’s helium.”

So, the density of neutrons and protons could have been different from what the standard big bang model predicts was the case at the time of nucleosynthesis, right?

So that maybe all dark matter *can* be baryonic, and other more exotic junk isn’t necessary???

19. Alexander W. Janssen - April 23, 2007

Quantoken:
You said: “Besides, there’s extreme radiation around Jupiter and an extremely strong magnetic field. If it is mostly just hydrogen, where do the magnetic field and radiation come from?”

A widely accepted theory is that the inner core consists of highly compressed hydrogen. At very high pressures (some gigapascals) hydrogen behaves pretty much like a metal, causing a magnetic field. Jupiter’s core seems to be bigger than previously thought, bringing the metallic hydrogen core closer to the surface and producing a stronger magnetic field.

Cheers, Alex.

20. Quantoken - April 24, 2007

Alex:

Give it up. Metallic hydrogen is too stretched an explanation for the radioactivity of Jupiter; no one can believe it. Jupiter is known to EMIT heat, and that heat must come from the natural decay of radioactive heavy elements. Hydrogen does not decay.

Plus, everyone has directly or indirectly witnessed Shoemaker-Levy 9 plunging into Jupiter. It happened. It was a fact. The rocks remain in Jupiter. During the early evolution of the solar system many more rocks must have plunged into Jupiter and formed the bulk of Jupiter’s mass – the same rocks that formed the other planets. It’s a crackpot theory to believe that Jupiter is 100% gas just because we cannot see the rocky core directly.

21. Thomas D - April 24, 2007

I have worked on the possible implications of ‘varying alpha’ for quite some time now and I STRONGLY resent the implication that it is somehow on a level with proposing some arbitrary ‘variation of the speed of light’.

Why?

Well, for one thing there are significant astrophysical data, obtained specifically in order to test the constancy of alpha, which instead point to a nonzero variation at more than 3 sigma significance. You ought at least to know about that: work by Murphy, Webb, Flambaum and other authors, published in several journals.

But the main reason is that alpha is DIMENSIONLESS and therefore unambiguously measurable. No matter what type of ruler or clock or calorimeter you use, you always get the same numerical value of alpha.

By contrast, ‘varying c’ is physically ill-defined because the numerical value of c depends on the way you choose your units. With SI units, you can’t measure any variation of c at all, because the meter and the second are linked to each other by … the speed of light in vacuo!
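(Concretely: since 1983 the SI metre has been defined as the distance light travels in vacuum in 1/299 792 458 of a second, so in SI units c = 299 792 458 m/s holds exactly, by construction. An experiment that ‘measures c’ in SI units is really just calibrating its ruler.)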

As an experimentalist one ought to know that dimensionful quantities cannot be measured – except in relation to some dimensionful units.

If someone wants to propose that ‘c is varying’, then they should say in what units it should be measured, and also which dimensionless quantities are varying (for example, alpha itself).

If dimensionless quantities are NOT varying, then you can always choose your units so that c = constant (or even c = 1!), and ‘varying c’ has no physical content at all. Zero!

So do you understand now why one thing is physics and the other thing is not?

22. Thomas D - April 24, 2007

Oh, and you misquoted the limit from the latest Gould et al paper.

They work in ‘units of 10^-7’ concerning variations, so the limits on alpha are actually extremely tight, at the few x 10^-8 level!

(Actually I think it is not good practice to leave factors of 10^-7 out of one’s equations, but that is what they chose to do, explaining it in the text instead…)

23. Quantoken - April 24, 2007

Thomas:

All measurements are fundamentally dimensionless quantities. If you measure the mass of some object to be 2.5 kilograms, for example, it only means a dimensionless numerical ratio of 2.5 between the mass of your object and the mass of that piece of standard kilogram alloy stored somewhere in Paris. I agree with you that it is meaningless to discuss a varying c, because you need to fix c to be able to measure any speed.

Since alpha is related to the electron charge, hbar and c, one of them has got to be variable for alpha to be variable. My opinion is that none of them is variable, and neither is alpha. I was fully aware of the Webb result when it first came out. There has been no independent confirmation, and the credibility of the result is being challenged. It is just another paper rushed out because the authors were too eager to announce something that would impress their colleagues.

When measuring light from tens of billions of light years away, you are literally collecting just a handful of photons over many, many hours; it’s hard to say how reliable the data are, and it is even harder to determine how big the sigma is. If the actual sigma is 10 times larger than what you think it is, what you thought was 3 sigma is actually only 0.3 sigma. Of course, if one is eager to show 3 sigma, it isn’t hard at all to tweak a few things or ignore a few systematic errors to get a smaller sigma, like Eddington did.

24. dorigo - April 24, 2007

Dear Thomas,

please go back and read what I wrote in the post, because in your comments above you give the impression of having misunderstood it a bit.

In particular, your personal attack:

“As an experimentalist one ought to know that dimensionful quantities cannot be measured – except in relation to some dimensionful units. […] So do you understand now why one thing is physics and the other thing is not?”

is unmotivated and a bit over the top. What can I say? Thank you for the lesson. Now please tell me why a ten-dimensional world with branes, cosmic strings, and all the bells and whistles is physics to you, any more than some speculation about a possible variation of physical laws.

Oh, and I did not misquote the result. I said that in 10^-7 units the limit is between -0.11 and +0.24, which is indeed at the few x 10^-8 level. Quite a tight limit, as you say.

Cheers,
T.

25. island - April 24, 2007

I’ll take Quantoken’s failure to reply to my attempts to help him explain to mean that he doesn’t know what it means.

26. dorigo - April 24, 2007

Hi Island,

I have no idea what Quantoken knows or doesn’t. Besides, since he is an anonymous entity, we can say whatever we want about him or her, and it will still be true – as long as we can find a typing entity with those characteristics. Sorry Quantoken, but anonymity comes with a price tag too.

Cheers all,
T.

27. island - April 24, 2007

No, I only meant to *try* to help, and have no desire to start a war with anyone over it, so I’m sorry that I said anything.

28. dorigo - April 24, 2007

Don’t be sorry… Your contribution here is always welcome.
T.

29. Quantoken - April 24, 2007

Island:

Your question wasn’t phrased in an easily understandable way, so I wasn’t sure exactly what you were trying to ask, and I did not know how to respond. But if you really want me to say something: I have studied the Big Bang theory pretty well, and I know all the predictions it made and the measurements that “confirm” it. But it is still a wrong theory. The cosmology I am developing is a more natural theory, without the need for a singularity; it gives more precise predictions and fits much better with observations. Let me stop here and not discuss it further, since this is not my place. If you are interested, look at my old posts:

http://quantoken.blogspot.com/

And here is a prediction of the neutron mass, precise to 9 decimal places and completely within experimental error. No one else has predicted any particle mass so precisely:

http://quantoken.blogspot.com/2005/02/proton-and-neutron-mass-from-guitar.html

