
Hats off to Lubos Motl. November 21, 2008

Posted by dorigo in internet, personal, physics, science.

No kidding, this post is here to finally thank Lubos, who today shows he is well above average on a very important quality: the ability to acknowledge a mistake. Not just that: he is also surprisingly capable of putting aside the bad feelings, anger, and enmity he sometimes displays. The comment he left this morning in my thread (which comes from his private email address and has the right IP) shows just that, plus more. We must tip our hat to him today.

This is a lesson that truth is worth something to him, too. I had lost hope of that on this particular issue. For this I myself apologize, and I also apologize to Lubos for the sarcasm I directed at him in my former posts. It was perhaps understandable given what he had written about me in his blog, but still excessive.

UPDATE: To clarify (see a comment by Tripitaka below), in the second paragraph above I meant to say that I had incorrectly interpreted the reactions of Lubos to my explanations as an unwillingness to admit his mistake. On this I was wrong, and I apologize for my suspicions.

Comments

1. tripitaka - November 21, 2008

Great to see, guys! Just want to say there seems to be a typo in the post (I think?) where you say you didn’t expect this; I guess you mean that you didn’t expect this apology (the alternative reading would seem to make the post insincere indeed, and we know you’re not insincere, T).
Regards

2. goffredo - November 21, 2008

I like Lubos. I think he did more GOOD than harm:
1) Lubos provoked Tommaso into working hard. In the end TD did a very good job of illuminating points that needed to be explained better. We all profited from this.
2) In the end Lubos did show that it is possible, in science, for all of us to learn from mistakes (overlap with point 1) and that indeed there is, in science, a sound basis for admitting errors without shame. Mistakes are made by all, no one excluded. (Even Einstein made MANY mistakes and seems even to have been quite stubborn and insistent in making them. See Hans C. Ohanian’s “Einstein’s Mistakes: The Human Failings of Genius”.) Once a certain point or boundary is reached and then crossed, it is of course human to insist on denying any error, but it is not scientific. A person must, at that point or boundary, freely decide what to do.
3) Papers may always be written better.

3. mfrasca - November 21, 2008

The best of all possible endings. Very good!

Marco

4. Luboš Motl - November 21, 2008

Congratulations to my scalp, Tommaso, surely something that your blogging colleagues can’t usually boast.

It was a clever plot you devised: to write a probably wrong paper that moreover says something other than what it means,😉 to get 369/600 additional signatures so that it would actually be published, and to lure me into reading the paper – by combining the abstract, a set of paragraphs, and tables – in the same way as Matt Strassler.

But I would only publish a similar apology-style comment on my blog when higher standards are followed, namely when CDF officially and collectively realizes and states that they are really saying that they have 200 pb of unusual events – because, as we now probably agree, it is 3 times worse than the team’s previous statement, which was already pretty bad or, in fact, extremely bad!😉

I suspect that the paper only got the support it did because many of those 370 people were misled in the same way Matt and I were, i.e. that the confusing nature of the presentation was deliberate.

The way (and the reason why) the 2100/pb of data are discussed in the paper remains obscure to me, even after hours of reading. I don’t understand why the 2100/pb portion of the paper is so different, why the clear counting is only discussed for 742/pb, and why you need a high luminosity at all.

If you claim to have hundreds of thousands of events, which is more than the production of a few Higgses, why do you need to go to high luminosities, and why is the 2100/pb mentioned in the abstract at all? The correct explanation (probably a subtlety), when it is eventually found, will almost certainly be understandable even with the 742/pb of data.

And yes, now I understand that the experimenters never use reduced luminosity figures for any subsets of events chosen by triggers. This would only be legitimate if the selected events remained “representative”, which is pretty much impossible if the trigger carries any information. Still, it would be easier for me to believe that they chose such a convention than to believe that they claim to produce an effect similar to a new particle at 200 pb, which is really a lot.

It is also strange that the 742, 1426, 2100 luminosities are pretty much in the same ratio as the tight-SVX-filtered, loose-SVX-filtered, and SVX-unfiltered events. And yes, I still think that the paper explicitly says what I think it says and what you apparently didn’t mean. For example, Table I explicitly says that there were only 143,000 QCD events in the 742/pb sample. It’s just bad that such a caption – even when it’s huge – says nothing about the filters (SVX tight) applied to the events listed in the table and/or their efficiency. In the very same way, it is just bad that Table II doesn’t say anything about the integrated luminosity for which the event numbers are listed.

The abstract is wrong, too, because the ghost events that were explicitly – and visibly – isolated in the paper were not at 2100/pb but only at 742/pb.

5. Gaugino - November 21, 2008

I don’t think you’ve been excessive at all. This guy called you “crackpot”, “liar”, “dumb”, “fool”, “moron”, among many other beautiful things. Insulting people you don’t agree with is not an option in science, even if you’re right, and that wasn’t the case.

There’s no reason to use such language, even if he pretends to be the new “enfant terrible” of physics.

6. dorigo - November 21, 2008

Hi Tripi, yes, my text was somewhat ambiguous, and I clarified it.

Hi Jeff, I agree on all counts.

Lubos, I will take some time to answer about the physics (on a rush now).

Gaugino, the thing is, I know I have to use a different scale with Lubos, when evaluating adjectives. And I can take a lot of that without actually getting upset. Thank you for your support, anyway.

Cheers all,
T.

7. Luboš Motl - November 21, 2008

Otherwise, of course, I have updated the text on my blog to reveal that I was wrong about what CDF meant.

8. Lubos Motl’s apology « A Quantum Diaries Survivor - November 21, 2008

[…] Lubos apologized! And he did using the very same words I suggested above, plus more. See here. He finally understood […]

9. Luboš Motl - November 21, 2008

Incidentally, these would be my recommendations for some enrichment and standardization of tables and graphs of events.

Tables with numbers of events, such as Table I, would say that it is the Tevatron, CDF, integrated luminosity 742/pb, and list as acronyms the filters/triggers (tight SVX) that were applied to all events whose numbers appear in the table (special rows/columns could have extra filters). That’s only one additional line, but a huge prevention of misunderstandings and an increase (and speedup) in readability.

Redundantly, there would be an extra column in Table I etc. that would translate the numbers of events into [estimated] cross sections, to make sure that important factors, efficiencies, etc. are not forgotten when readers interpret the figures.

For the sake of simplicity, the full error margins could be omitted in these lists. Instead, one could say e.g. 207~ pb, meaning that the error margin is between 1 and 5 percent. (Or has someone already introduced a similar notation?) A similar tilde notation would be introduced for other relative error margins. The error margins carry less useful information than the central value.
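To make this concrete, here is a minimal sketch of such a formatter. A single “~” marks a 1–5% relative error, as proposed here; the other bands are invented purely for illustration:

```python
# One possible rendering of the tilde shorthand floated above. A single
# "~" marks a relative error between 1% and 5%, as proposed in the
# comment; the other bands are invented here purely for illustration.
def tilde_notation(value, rel_err):
    if rel_err <= 0.01:
        suffix = ""       # better than 1%: no marker
    elif rel_err <= 0.05:
        suffix = "~"      # 1-5%: the case described above
    elif rel_err <= 0.25:
        suffix = "~~"     # hypothetical 5-25% band
    else:
        suffix = "~~~"    # hypothetical band for anything worse
    return f"{value:g}{suffix} pb"

print(tilde_notation(207, 0.03))   # -> "207~ pb"
```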

Figures with numbers of events, such as Figure 4, would be given a FNAL/CDF logo plus a standardized label with the integrated luminosity and the acronyms of the triggers/filters used, too. In many cases, this would let you drop the same text from the main body. Also, the y-axis could carry two scales – the histogram’s number of events per bin as well as the differential cross section per unit (invariant mass, in this case).

Cross sections are closer to the quantities considered by the phenomenologists – the key readers of such papers – and it would be good for the experimental papers to do this part of the work: it’s one division. Experimenters get a feeling for how many events of a certain kind they expect in various papers – because they know how the luminosity and efficiency are normalized – but theorists don’t want to learn a new normalization with each experimental paper. They think in terms of objective, experiment-independent values such as cross sections, and they should get them.
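A minimal sketch of the conversion in question; strictly it is a division by the product of efficiency and integrated luminosity, and the input figures are taken from elsewhere in this thread only to show the mechanics:

```python
# Sketch of the proposed extra column: converting a raw event count
# into an estimated cross section. The inputs below are figures quoted
# elsewhere in this thread, used here only to show the mechanics.
def cross_section_pb(n_events, efficiency, lumi_pb):
    """sigma = N / (efficiency * integrated luminosity), in picobarns."""
    return n_events / (efficiency * lumi_pb)

print(f"{cross_section_pb(143_743, 0.244, 742.0):.0f} pb")   # ~794 pb
```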

10. Imam Yahya - November 21, 2008

LM said, “I suspect that the paper only got the support it did because many of those 370 people were misled in the same way Matt and I were, i.e. that the confusing nature of the presentation was deliberate.”

So much for the sincere apology. He’s still full of bullshit. No happy ending that I can see.

11. Luboš Motl - November 21, 2008

Dear Imam, sorry to disappoint you, but what’s been established is that Tommaso correctly interpreted the cross section in their paper – not that the paper is accurately and clearly written, uncontroversial, or even that it avoids mistakes and discovers new physics.

To appreciate the reasons to seriously consider the possibility that the team is just doing something silly and fooling itself – as the most likely explanation – one should take certain numbers into account.

The production cross section of 200 pb for something that looks like a new particle is a truly stunning statement whose apparent absurdity you don’t seem to appreciate. For example, the top quark at 1.8 TeV was produced with a cross section of 6.4 pb at the Tevatron. So the new CDF paper silently suggests that they may be producing a new particle that is 30 times more visible than the top quark – yet no one has ever noticed it was there.

All kinds of upper bounds on production cross sections of new particles are around a few pb or less. This held for LEP at its energies but also for p-pbar collisions at the Tevatron. 75 pb already looks shocking, but 200 pb is really something extra (even though, at these high numbers, everything is probably equally shocking – what one needs is nothing less than a “paradigm shift” anyway). It just sounds pretty unlikely that such a visible particle/resonance/something-particle-like could have remained invisible so far.

To impress phenomenologists, the statement “the production cross section for a new particle that we almost see is 200 pb” would be really powerful. If no loopholes that go beyond the paper exist, it actually follows from the paper. Still, this powerful statement didn’t appear anywhere in the paper. Not even the term “cross section” occurs. (And it is likely that CDF chose not to formulate the result in this way – for example, Matt’s 2 colleagues who are CDF members – and who didn’t sign the paper – may have read his paper, but they didn’t tell him about the mistake with his cross section.) One might ask why. It’s not the most important question in the world, but it is legitimate to ask.

OK, I don’t know 100% about the sociology here, but I am pretty certain that the way the statements were broadly formulated has almost everything to do with the sociology of CDF. I am not accusing them of anything bad whatsoever. They objectively face a situation that may turn out to be a source of fame or intimidation, and no one is *quite* sure which it is at this moment, yet they have to decide in *some* way despite this ignorance. That’s what often happens in science, and it is another face of the reasons why science is often exciting.

But I find it obvious that even with the unusual setup, the paper could have been written much more clearly, and there might be reasons – such as unjustified, excessive cautiousness – why it was written in this way and why the seemingly necessary conclusions were not articulated in their full glory but rather as a mumbo jumbo of mostly incomprehensible and not-fully-interpreted tables.

12. dorigo - November 21, 2008

Lubos, I have no time today so I need to be telegraphic, but please understand that the “ghost sample” includes about 45% background according to the CDF paper. The cross section of the tentative new signal is thus about half of what you mention, i.e. 100 pb, not 200.
So please stop insisting on 200 pb.

Cheers,
T.

13. dorigo - November 21, 2008

And another thing, Lubos (#7): although I tipped my hat, you of course realize that all the insults mentioned by gaugino in #5 above are still there on your blog, for everybody to read. People who read your blog and not mine (and there are certainly a few) did not get to know that you apologized in a comments thread here, nor that the cross section was not 75 pb. Fine print at the bottom of now-old posts does not totally repair the damage. Of course that’s life, but I leave it to your honesty to do something about it.

Cheers,
T.

14. dorigo - November 21, 2008

Ok, while a ROOT macro is running, I can address some of Lubos’ comments in #4 above.

I already said this about the 200 pb: experimentalists do not speak of mixed cross sections. N in N = σL is usually the number of events after background subtraction. Now, in the CDF analysis there is a first background subtraction – or rather an “extrapolation” from the tight SVX sample to the whole sample – done by dividing the 143k events by 0.244. However, there are additional backgrounds remaining in the “ghost sample” after the subtraction: among the 153,000 events, about 70,000 are accounted for by hadronic backgrounds. N is thus some 80,000, which divided by 742 makes some 110 pb, not 200 as you keep repeating.
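For concreteness, the same arithmetic as a short Python sketch; every input is a rounded figure quoted in this comment, so the output is approximate too:

```python
# Back-of-envelope rendering of the arithmetic above. All inputs are
# the rounded figures quoted in this comment, so the output is only
# approximate.
n_tight   = 143_000     # events passing the tight SVX selection
eff_tight = 0.244       # average tight-SVX efficiency used by CDF
n_total   = 743_000     # all dimuon events in the 742/pb sample
lumi_pb   = 742.0       # integrated luminosity in inverse picobarns

qcd_extrapolated = n_tight / eff_tight     # ~586k expected QCD events
ghost  = n_total - qcd_extrapolated        # ~157k (paper quotes ~153k)
signal = ghost - 70_000                    # minus hadronic backgrounds

# N = sigma * L, hence sigma = N / L:
print(f"sigma ~ {signal / lumi_pb:.0f} pb")  # ~117 pb with these rounded
                                             # inputs; ~110 pb in the text
```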

The support the paper got within CDF is something I cannot comment on, but many of those who did not sign abstained because they wanted more studies done.

Then, when you say “The way (and the reason why) the 2100/pb of data are discussed in the paper remains obscure to me, even after hours of reading. I don’t understand why the 2100/pb portion of the paper is so different, why the clear counting is only discussed for 742/pb, and why you need a high luminosity at all.”, you ignore what I kept repeating: the counting-experiment part of the study is done with the data which allow it, because having been collected by an unprescaled trigger allows easy cross section estimates. The rest is invaluable additional data for modeling some kinematic distributions as well as possible. No mystery here.

I also already acknowledged that the paper is written in a cryptic way in several places.

Cheers,
T.

15. Luboš Motl - November 21, 2008

Dear Tommaso, I hope you’re not serious in talking about your “damages” while you’re encouraging anonymous fake supersymmetric particles and anonymous Yemeni terrorists to launch attacks against me.

What has been established is that, as far as the cross section estimate goes, you’re probably on the same ship as the remaining 370 members of the unknown subset of the CDF collaboration. Whether you are doing serious physics here (perhaps together with the other 369 folks) remains an open question.

Also, I called you a “liar” for a legitimate reason, as you admitted: namely, because you claimed that I started to talk about the cross section “out of the blue”. In fact, I was answering your polite question. So I will surely not apologize for that.

Incidentally, I have a more interesting thing to say about the character of the “ghost” events but I am afraid you’re not that interested in these things.

16. dorigo - November 21, 2008

Hmm, I fail to understand what you are talking about. Really. Anonymous fake SUSY particles? Anonymous Yemeni terrorists? I do not know what you are talking about.

I will be reading with interest what you have to say about the ghosts.

Cheers,
T.

17. goffredo - November 21, 2008

human nature

18. Luboš Motl - November 21, 2008

The anonymous fake SUSY particle is “gaugino”. If you believe that the “gaugino” who contributes to your blog is a real Majorana fermion, then maybe we should congratulate you on the discovery of supersymmetry.

The Yemeni terrorist is the former king of Yemen, Imam Yahya. Again, if you think that it is the real king who is contributing, let me warn you that he died sometime in the 1940s.

Fine. So I was trying to reconstruct the 0.244 figure for the average efficiency of the tight SVX selection. I suppose it is obtained as some weighted average of the efficiencies of the events listed in Table I. As both of us know, the correctly calculated weighted average is 143,743/743,006 = 0.193. This sentence was formulated in such a way that it would hold even if you discovered new physics! The only condition is that the weighted average must be calculated with the exactly correct efficiencies for all kinds of processes.😉

So the task is to fix Table I and obtain the correct 0.193 efficiency as the weighted average. I believe the table is not right, and if it is, the average efficiency is not computed properly or at least not transparently.

As far as I understand, Table I (with the decomposition of the tight-SVX events) is right now assumed to contain only two types of events as far as the efficiency goes, at either 0.257 or 0.237. If that is not the case then, again, the vicinity of page 12 fails to declare all the input that was used.

Because the weighted average ends up being 0.244, it doesn’t look like substantial numbers of event types whose efficiency is well below 0.24 were included in the calculation of the weighted average.

Yet, on page 23, it is claimed that 8% of in-flight decays of pions survive the tight SVX filter, which should amount to up to 5,000 SVX-tight events. So these events with decaying pions (and kaons) should be somewhere in Table I, right, assuming that Table I lists all SVX-tight dimuon events? Are these 5,000 or so in-flight decays among the PP events (and BP and CP for the mixed ones)? It looks as if the QCD efficiency was used for all the PP events, even though the tight-SVX efficiency e.g. for pairs of pions is only 3%, if I understand correctly that it is roughly 0.16 times 0.16.

With 5,000 events among 143,000 whose efficiency is close to zero, the average efficiency should drop by 1 percentage point or so, from 24 to 23, right? If the starting value was pretty much in the middle between 23.5 and 25.5, the average had to be close to 24.5%, and I just reduced it by 1 point, so 24.4 can’t be right. We might be at 23.5 now.
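As a toy illustration of this reverse-engineering: the group sizes below are invented stand-ins tuned to reproduce the quoted 0.244; only the 0.257/0.237 efficiencies, the ~3% in-flight-decay efficiency, and the 143,743/743,006 benchmark come from the discussion above.

```python
# Toy check of the weighted-average argument. The group sizes are
# invented stand-ins tuned to reproduce the quoted 0.244 average; they
# are NOT the actual Table I numbers.
def weighted_eff(groups):
    """groups: list of (pre-filter event count, tight-SVX efficiency)."""
    total = sum(n for n, _ in groups)
    return sum(n * eff for n, eff in groups) / total

two_groups = [(260_000, 0.257), (483_006, 0.237)]
print(round(weighted_eff(two_groups), 3))    # 0.244, as in the paper

# Shift some events into a ~3% efficiency class (in-flight pi/K decays):
with_decays = [(260_000, 0.257), (423_006, 0.237), (60_000, 0.03)]
print(round(weighted_eff(with_decays), 3))   # ~0.227, pulled toward...

print(round(143_743 / 743_006, 3))           # ...the observed 0.193
```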

Is there a way to fix the table? It seems to me that some events were misattributed to the wrong groups even here, or that the efficiency used for some groups was exaggerated. It would be great to have a corrected Table I that not only says what sample the data are supposed to represent but also attributes the events to more accurate groups, including the in-flight meson decays, and shows how the weighted average is calculated.

Another concern is the trimuon (or more) events, which also seem to indicate that Table I is incorrect or, to say the least, incomplete. What is the algorithm for dividing the 3 or more muons of a trimuon event into the 2 muons and the “additional” ones?

Again, Table I seems to classify the events only according to 2 of the muons, and it is not really explained which 2 muons are chosen in the case of trimuon (or more) events, or at least I don’t see it. This possibly affects 10% of the events, because 10% of the dimuon-or-more events are trimuon (or more) events.

Do I understand correctly that the tight-SVX efficiency drops for trimuon events because all three (or more) muons must leave nice tracks to be SVX tight? If I do, well, then the sentence in the paper that “this observation anticipates that a fraction of the ghost events contain more additional muons than QCD events” is pretty much a tautology – a reformulation of the previous sentence, not an additional argument in favor of new physics. It’s just the statement that there are more trimuon events that fail the tight SVX test, and these trimuon events are another reason why the observed efficiency, 143/743 = 0.193, is smaller than the one produced by the incorrect calculation.

If, e.g., 10% of the remaining, QCD-like, above-24% events in the 143,000 sample had their efficiency reduced from 24% to 16%, I would get another percentage point down in the weighted average.

There are probably more concerns of this type – different “ordinary” sources of the “ghost” events will have different percentages and influence the splitting of the 143,743 differently. At any rate, isn’t it possible, please, to write a more detailed version of Table I, splitting the 143,743 events into subgroups, with the estimated efficiency for each subgroup and the weighted average? It would also be great to include the error margins for the efficiencies and for the numbers of events, and to propagate the error margins correctly when the weighted average is calculated.

I predict that when all these things are done, the weighted average will be the correct 0.193, plus or minus something that doesn’t exceed a few standard deviations or error margins.

19. Luboš Motl - November 21, 2008

Let me mention one more example of why I think the apparent sloppiness about the identity of the “two muons” in trimuon events could be important.

The paper says that about 96% of dimuon events have both muons originating less than 1.5 cm from the beam line, according to the SM simulations.

Now, which are the “both muons” in trimuon events? Of course, there are two extreme answers (and interpolations between them). These two extreme answers are “the two closest muons” and “the two furthest muons”.

Imagine that the simulations choose the first definition. Then the sentence means that 96% of the simulated dimuon (or more) events contain at least two muons that are at most 1.5 cm from the beam line.

But the negation of the sentence might be subtle.😉 For dimuon-only events, the negation is that at least one muon is more than 1.5 cm from the beam line. But for exactly-three trimuon events, it says that at least two muons are further than 1.5 cm from the beam line. That’s a difference.

The last group, with at least two muons more than 1.5 cm from the beam line, is probably negligibly small for normal purposes (let me assume SM now).

So there can be a whole systematic discrepancy in classifying many trimuon events. The SM simulation may say that most of them belong to the 96% because at least two muons are close to the beam (and the third is far). But the experimenters may choose the other two muons, either the distant ones or a random two, and classify the event as outside the 96% group.
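A small sketch of how that plays out on a single event (the muon distances below are invented for illustration; only the 1.5 cm cut comes from the paper):

```python
# Sketch of the "which two muons" ambiguity: the same trimuon event can
# fall inside or outside the "both muons close" group depending on which
# pair is taken as "the" dimuon. The distances (cm from the beam line)
# are invented for illustration; 1.5 cm is the cut quoted above.
CLOSE_CM = 1.5

def both_close(pair):
    return all(d < CLOSE_CM for d in pair)

muons = [0.3, 0.9, 4.2]            # two close muons plus one far one
two_closest  = sorted(muons)[:2]   # one extreme definition
two_furthest = sorted(muons)[-2:]  # the other extreme

print(both_close(two_closest))     # True  -> inside the 96% group
print(both_close(two_furthest))    # False -> outside it
```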

The intuition is that 1 muon has a 98% chance of being close. Two muons have a 96% chance of both being close. 94% of trimuon events have all three muons close, i.e. 6% of them have at least one muon far. They can still be included in the 96% “close 2” group in the simulations, but you may interpret them as belonging to the other group, because of the sloppy treatment of “which two muons”.
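And the same intuition in numbers, under the extra assumption that the muons are independent:

```python
# The intuition above in numbers, assuming each muon is independently
# within 1.5 cm of the beam line with probability p = 0.98 (the figure
# quoted in this comment; independence is an extra assumption).
p = 0.98
print(f"one muon close:        {p:.1%}")     # 98.0%
print(f"two muons both close:  {p**2:.1%}")  # ~96.0%
print(f"three muons all close: {p**3:.1%}")  # ~94.1%
print(f"trimuons with a far muon: {1 - p**3:.1%}")   # ~5.9%, the '6%'
```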

If this whole group is misclassified, that’s about 6% of 10% of all events, i.e. about 0.6% of events. Among 700,000 events, I could explain away an additional 40,000 events. This is also 1/4 of the ghost sample, and recall that 1/2 of it (69,000) was already explained in the CDF paper.

Isn’t it possible, please, simply to do all these calculations correctly, and to be careful about how the groups are defined and how the events are divided, especially when the membership is ambiguous?

20. dorigo - November 21, 2008

Lubos, I appreciate your comments, but today I am deep in a ROOT macro and, as much as I would like to, I cannot answer with the amount of care your questions deserve. Maybe this evening.

Cheers,
T.

21. almidda - November 21, 2008

And what about your own terrorist attacks, Lubos? You are full of complaints when someone speaks harshly of you, but you are ten times harsher and more malicious toward those you dislike. What makes you so special that you are allowed to violently attack anyone you dislike (in many cases for the most childish reasons), while those who attack you (and usually for good reasons) are immediately classified by you as terrorists? No one is asking you to be politically correct. No one is asking you to change your opinions in professional matters. You are only being asked to be human, and as open-minded as you can be. And if our “humble correspondent” were indeed humble, that would be a wonderful achievement.

22. jimmy - November 22, 2008

Hey, Lubos, now your posts are even longer than those of Tony Smith. As they say, if you’re in a hole, stop diggin’.

23. Philipe - November 22, 2008

Has anyone read a post from Tony Smith that didn’t contain loads and loads of quotes from other people? How does he do it? I’m impressed.

24. Philipe - November 22, 2008

Lubos,

Good job. Now you must go apologize to Peter Woit, Lee Smolin, and Sabine Hossenfelder.

Hey look at this link:

http://backreaction.blogspot.com/2007/08/lubo-motl.html

and this one too (the best of Lubos):

http://prime-spot.de/Bored/bolubos_short.doc

Oh, this one’s really funny!

http://www.math.columbia.edu/~woit/wordpress/?p=412

25. Tony Smith - November 22, 2008

I am not sure why jimmy brought up my name in this thread,
but
I am flattered that jimmy thinks that, now that Lubos is worthy of a “Hats off” from Tommaso, his posts have begun to resemble mine.

As to jimmy saying “if you’re in a hole, stop diggin'” ,
that is not how I was raised.
I grew up in iron mines,
and
the deeper you dug the hole the more iron ore you got.
Even if you dug down to where the iron ore ran out,
you learned more about the local geology so that you would do a better job of picking a spot to dig the next hole.

You also learned that whatever theoretical model you had about the local iron ore geology,
the experimental observation of what was underground (which could only be done by digging) ruled,
and theory was no good unless it was consistent with real experimental results.

Actually,
that might explain my mindset mentioned by Philipe

(and why I don’t propound theoretical models that do not produce calculations that are consistent with observation).

Tony Smith

26. Iphigenia - November 22, 2008

Tommaso, I don’t think you quite understand what is going on here. Put yourself in LM’s position [difficult, I know]. He realises that he has screwed up big time, and that all of his many enemies are laughing their asses off. But on the other hand, he has to reckon with the very real possibility that Prof Strassler will retract and apologise at any moment. If that happened, LM would look even more of a fool than he does already — if that is possible. So he had *no choice* but to apologise. True, that must have been intensely painful, but it is better than the alternative. Furthermore, he is enough of a politician to know that no apology is final — you can always slowly take it back if you are slimy enough. And that is exactly what all these long posts are about: “Sure, I made a stupid mistake, but IT’S NOT MY FAULT — it’s all a conspiracy by you evil experimentalists.”

It’s good to be simpatico and all that, but sometimes in life you come across people who are just piles of shit, and this sad fact has to be recognised. It may help if you think of LM as Berlusconi with a very incomplete physics education. And without the money.

27. Haelfix - November 22, 2008

Incidentally, being a phenomenologist, I also thought the paper was cryptic in certain parts. But it is still serious science and worthy of publication. This isn’t exactly standard material here, and the backgrounds are extremely dirty, so the CDF people may be excused a few lapses in clarity.

I’m actually rather interested in what it may mean down the line, assuming this isn’t new physics (and that’s odds-on the best bet at the moment). If there are some uncontrolled backgrounds, is there potential for altering past results and/or statistics to a significant degree?

28. dorigo - November 22, 2008

Hi Iphigenia,

I know Lubos has his reasons to apologize, which may not be the same as those we believe he should have. I have my reasons for being kind to him, too. In general, I do not actively seek enmity. Also, I am usually not too affected by insults or other allegations. I cannot change how Lubos Motl behaves, and I do not care too much; the world is a nice place because it is so varied…

Cheers,
T.

29. dorigo - November 22, 2008

Haelfix, of course. That is the whole reason for publishing the paper regardless. Many in CDF wanted more checks done, to fortify the claim of new physics or to rule it out. The paper was published overruling their wishes because it affects past measurements of the bb cross section, of the integrated B mixing parameter, and others too. This is clearly stated in the conclusions.

Cheers,
T.

30. jimmy - November 22, 2008

Tony, no offense meant at all. It’s only that anyone who has read you knows that synthesis is not your strongest skill. You may take it as constructive criticism: making your posts/papers shorter without sacrificing content may make them more appealing to many more readers.

Regarding Lubos’ posts, on the Internet, unlike in mines and on oil rigs, the best thing to do when one is in a hole is to stop digging.

31. Chris Oakley - November 26, 2008

Lubos apologising for something. Come on! That’s about as unlikely as a black man becoming president of the United States of America.

Oh, but wait! Oh my God!

Maybe LM could be the Republican candidate in 2012 – might at least help to give Obama a decent shot at a second term.

32. dorigo - November 26, 2008

Chris, Chris, come on… Obama will have to earn his shot at a second term by himself. I am confident he will.

Cheers,
T.


34. A chat with Arkani-Hamed at CERN « A Quantum Diaries Survivor - December 9, 2008

[…] The third was the trap into which poor Lubos Motl fell head first, when I asked him to justify a wrong cross section estimate in Strassler’s paper (admittedly an error caused by the cryptic way the CDF paper is written, but Lubos would have had several occasions to understand he was fighting a lost cause, had his huge ego not hindered him). This caused an endless exchange, stopped only by my request for an official apology – which ultimately rang a bell and forced Lubos to ask independent sources, after which he came back to indeed apologize. […]



