
A top mass measurement technique for CMS and ATLAS - July 8, 2008

Posted by dorigo in news, physics, science.

The Tevatron experiments CDF and D0 have been producing more and more precise measurements of the top quark mass in the last few years, in an impressive display of technique, inventiveness, and completeness. The top quark mass, one of the fundamental parameters of the Standard Model, is now known with 0.8% precision – a value way beyond the most optimistic pre-Run II estimates of technical design reports and workshops.

What is interesting to me at this point is not so much how precise the Tevatron determination can ever get, but how long CMS and ATLAS (and I, within CMS) will have to wrestle with their huge datasets if they are ever to surpass the Tevatron in this very particular race. The top pair production cross section at the LHC is a hundred times larger than it is at the Tevatron, so the statistical uncertainty on whatever observable quantity one cooks up to determine the top mass is going to become negligible very soon after the start-up of the new accelerator. Instead, the systematic uncertainty from the knowledge of the jet energy scale will be a true nightmare for the CERN experiments.

Was I a bit too cryptic in the last paragraph ? Ok, let me explain. Measuring the top mass entails a series of tasks.

1 – One first collects events where top quarks were produced and decayed, by selecting those which most closely resemble the top signature; for instance, a good way to find top pair candidates is to require that the event contains a high-energy electron or muon and missing energy, plus three or four hadronic jets. The electron (or muon) and missing energy (which identifies an escaping neutrino) may have arisen from the decay of a W boson in the reaction t \to W b \to e (\mu) \nu b; the hadronic jets are then the result of the fragmentation of another W boson (as in \bar t \to W^- \bar b \to q \bar q' \bar b) plus the two b-quarks directly coming from the top and antitop lines.

2 – Then, one establishes a procedure whereby from the measured quantities observed in those events – the energy of the particles and jets produced in the decay – a tentative value of the top mass can be derived. That procedure can be anything from a simple sum of all the particles’ energies, to a very complicated multi-dimensional technique involving kinematical fits, likelihoods, neural networks, whatchamacallits. What matters is that, at the end of the day, one manages to put together an observable number O which is, on average, proportional to the top mass. This correlation between O and M is finally exploited to determine the most likely value of M, given the sample of measurements of O in the selected events (a toy sketch of this calibration step is shown right after this list). Of course, the larger the number of top pair candidate events, the smaller the statistical uncertainty on the value of M we are able to extract [and on this one count alone, as mentioned above, the LHC will soon have a 100:1 lead over the Tevatron].

3 – You think it is over ? It is not. Any respectable measurement carries not only a statistical, but also a systematic uncertainty. The very method you chose to determine M is likely to have biased it in many different ways. Imagine, for instance, that the energies of your jets are higher than you think. That is, what you call a 100 GeV jet has instead -say- 102.5 GeV: if you rely on an observable quantity O which depends strongly on jet energies, you are likely to underestimate the top mass M by 2.5%! So, a careful study of all possible effects of this kind -systematic shifts that may affect your final result- is necessary, and it usually involves a lot of work.
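As an aside, here is the toy sketch of the calibration step of point 2, in Python. It is my own illustration, not the procedure of any of the analyses discussed below, and every number in it is invented: one fits the average observable predicted by simulated samples generated at known top masses, then inverts the fit for the value measured in data.

```python
import numpy as np

# Toy calibration of an observable O against the top mass M (illustration only).
# The MC points and the data value below are invented numbers.
mc_masses = np.array([165.0, 170.0, 175.0, 180.0])   # generated top masses (GeV)
mc_mean_O = np.array([55.1, 56.8, 58.4, 60.1])       # mean observable in each MC sample

# Fit the (assumed linear) calibration <O> = a*M + b
a, b = np.polyfit(mc_masses, mc_mean_O, 1)

# Invert the calibration for the mean O measured in the selected data sample
data_mean_O, data_err_O = 57.5, 0.4                  # hypothetical measurement and its error
m_top = (data_mean_O - b) / a
m_top_stat = data_err_O / abs(a)                     # statistical error propagated through the slope

print(f"M_top = {m_top:.1f} +- {m_top_stat:.1f} GeV (stat. only)")
```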

Unfriendly systematic uncertainties: the jet energy scale

The example I made with jet energies is not a random one: the jet energy scale (JES) -the proportionality between the energy we measure and the true energy of the stream of hadrons- is the single largest source of uncertainty in most determinations of the top quark mass. Alas: that is because top quarks always produce at least one hadronic jet when they decay, and we usually cannot avoid using their energy in our recipe for O: we tend to think that the more information we use, the more accurate our final result will be. This is not always the case! In the limit of very high statistics, what matters is instead to use the measurements which carry the smallest possible systematic uncertainties.

Let us work through a simple exercise. You have two cases.

Case 1. Imagine you know that the top mass is correlated with the Pt of a muon from its decay, such that on average M=3 \times P_t + 20 GeV, but the RMS (the root-mean-square, a measure of its width) of the distribution of M is 50% of its average: if you measure P_t=50 GeV then M=170 \pm85 GeV. The correlation is only weak, and the wide range of possible top masses for each value of Pt makes the error bar large. Also, you have to bear in mind that your Pt measurement carries a 1% systematic uncertainty. So it is actually P_t=50 \pm0.5 GeV, and the complete measurement reads M=170 \pm 85 \pm 1.5 GeV (where the latter number is three times 0.5, as dictated by the formula relating Pt and M above), from a single event.

Case 2. The top mass is also correlated with the sum S of transverse energy of all jets in the event, such that on average M=0.8 \times S-54 GeV. In this case, the RMS of M for any given value of S is only 10%: if you measure S=280 GeV, then M=0.8 \times 280-54=170 \pm 0.10 \times 170, which turns out to be equal to M=170 \pm 17 GeV. This is remarkably more precise, thanks to the good correlation of S with M. You should also not forget the systematic uncertainty on the jet transverse energy determination: S is known with 2.5% precision, so in the end you get M=170 \pm 17 \pm 0.025 \times 0.8 \times 280, which equals M=170 \pm 17 \pm 5.6 GeV, for the event with S=280 GeV.

Now the question is: which method should you prefer if you had not just one top event, but 100 events ? And what if you had 10,000 ? To answer the question you just need to know that the error on the average decreases with the square root of the number of events.

With 100 events you expect that the muon Pt method will result in a statistical uncertainty of 8.5 GeV, while the systematic uncertainty remains the same: so M=170 \pm 8.5 \pm 1.5 GeV. Case 2 instead yields M=170 \pm 1.7 \pm 5.6 GeV, which is significantly the better determination. You should prefer method 2 in this case.

With 10,000 events, however, things change dramatically: Case 1 yields M=170 \pm 0.85 \pm 1.5 GeV, while Case 2 yields M=170 \pm 0.17 \pm 5.6 GeV: a roughly three times larger error bar overall! This is what happens when systematic uncertainties are allowed to dominate the precision of a measurement method.
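If you want to play with the arithmetic yourself, here is a minimal sketch using the toy numbers of the two cases above; the only ingredient is that the statistical error scales like one over the square root of the number of events, while the systematic one does not:

```python
import math

def total_uncertainty(stat_per_event, syst, n_events):
    """Statistical error scales as 1/sqrt(N); the systematic error does not."""
    stat = stat_per_event / math.sqrt(n_events)
    return stat, math.hypot(stat, syst)

# Toy numbers from the two cases above (GeV): per-event resolution and systematic error
cases = {"Case 1 (muon Pt)": (85.0, 1.5), "Case 2 (jet Et sum)": (17.0, 5.6)}

for n in (1, 100, 10_000):
    for name, (res, syst) in cases.items():
        stat, total = total_uncertainty(res, syst, n)
        print(f"N={n:>6}  {name}: +-{stat:6.2f} (stat) +-{syst:4.1f} (syst) -> +-{total:6.2f} GeV")
```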

Well, it turns out that in their Run II measurements of the top quark mass, CDF and D0 have almost reached the point where their jet energy measurement, something that took tens of dedicated physicists years of work to perfect, does not help the final result much. So large is the jet energy scale uncertainty compared to all the others that it makes sense to try alternative procedures which ignore the energy measurement of jets.

Now, I would get flamed if I did not point out here, before getting to the heart of this post, that there are indeed cunning procedures by means of which the jet energy scale uncertainty can be tamed: by imposing the constraint that the two jets produced by the W \to q \bar q' decay yield a mass consistent with that of the W boson, a large part of the uncertainty on jet energies can be eliminated. This alleviates, but ultimately does not totally solve, the problem with the JES uncertainty. In any case, until now the excellent results of the Tevatron experiments on the top quark mass have not been limited by the JES uncertainty. Still, it is something that will happen one day. If not there, then at CERN.

Let me give a simple example of one analysis in CDF that does not apply the “self-calibrating” procedure mentioned above: it is a recent result based on 1 inverse femtobarn of data, whose details I need not discuss today. Here is the table with the systematic uncertainties:

The total result of that analysis is M = 168.9 \pm 2.2 \pm 4.2 GeV. Since uncertainties add up in quadrature (a somewhat arguable statement, but let’s move on), the total uncertainty on the measured top mass is 4.7 GeV. Without the JES uncertainty (3.9 GeV alone), it would be 2.7 GeV!
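For the record, the arithmetic behind those two numbers fits in a few lines, taking the quadrature combination at face value:

```python
import math

stat, syst, jes = 2.2, 4.2, 3.9              # GeV: statistical, total systematic, JES component
total = math.hypot(stat, syst)               # ~4.7 GeV
syst_no_jes = math.sqrt(syst**2 - jes**2)    # remove the JES part in quadrature
total_no_jes = math.hypot(stat, syst_no_jes) # ~2.7 GeV
print(f"total: {total:.1f} GeV, without JES: {total_no_jes:.1f} GeV")
```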

The new result

Ok, after I made my point about the need for measurement methods which make as little use as possible of the energy measurement of jets in future large-statistics experiments, let me briefly describe a new analysis by CDF, performed by Ford Garberson, Joe Incandela, Roberto Rossin, Sue Ann Koay (all from UCSB), and Chris Hill (Bristol U.). Their measurement combines previously employed techniques into a determination that is essentially free of the JES uncertainty.

[A note of folklore – I should maybe mention that Roberto Rossin was a student in Padova: I briefly worked with him on his PhD analysis, an intriguing search for B_s \to J/\psi \eta decays in the \mu \mu \gamma \gamma final state. On the right you can see a picture I took of him for my old QD blog three years ago… The two good bottles of Cabernet were emptied with the help of Luca Scodellaro, now at Cantabria University.]

They use two observable quantities which are sensitive to the top mass. One is the transverse decay length of B hadrons produced by b-quark jets: the b-quark emitted in top decay is boosted thanks to the energy released in the disintegration of the top quark, and the distance traveled by the long-lived B hadron (which contains the b-quark) before in turn decaying into lighter particles is correlated with the top mass. The second quantity is the transverse momentum of the charged lepton – we already discussed its correlation with the top mass in our examples above.

After a careful event selection collecting “lepton+jets” top pair candidate events in 1.9 inverse femtobarns of Run II data (ones with an electron or a muon, missing Et, and three or more hadronic jets, one of which contains a signal of b-quark decay), an estimate of all contributing backgrounds is performed with simulations. The table on the right shows the number of events observed and the sample composition as interpreted in the analysis. Note that for the L2d variable there are more entries than events, since a single event may contribute twice if both b-quark jets contain a secondary vertex with a measurable decay length.
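Schematically, the selection boils down to a filter like the one below; I stress that the thresholds are placeholders of my own choosing, not the actual cuts of the CDF analysis:

```python
def passes_lepton_plus_jets(event,
                            lepton_pt_min=20.0,   # GeV, placeholder threshold
                            missing_et_min=20.0,  # GeV, placeholder threshold
                            jet_et_min=20.0,      # GeV, placeholder threshold
                            n_jets_min=3):
    """Schematic 'lepton + jets' selection: one electron or muon, missing Et,
    three or more jets, at least one of them b-tagged."""
    leptons = [p for p in event["leptons"]
               if p["flavour"] in ("e", "mu") and p["pt"] > lepton_pt_min]
    jets = [j for j in event["jets"] if j["et"] > jet_et_min]
    has_btag = any(j.get("b_tagged", False) for j in jets)
    return (len(leptons) >= 1 and
            event["missing_et"] > missing_et_min and
            len(jets) >= n_jets_min and
            has_btag)
```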

The mean values of P_t and L_{2d} are then computed and compared to the expectations for different values of the top mass. Below you can see the distributions of the two variables for the selected data, compared to the mixture of processes already detailed in the table above.

Of course, a number of checks are performed in control samples which are poor in top pair decay signal. This strengthens the confidence in the understanding of the kinematical distributions used for the mass measurement.

For instance, on the right you can see the lepton Pt distribution observed in events with only one jet accompanying the W \to l \nu signal: this dataset is dominated by W+jets production, with some QCD non-W events leaking in at low lepton Pt (the green component): especially at low Pt, jets may sometimes mimic the charged lepton signature. All in all, the agreement is very good (similar plots exist for events with two jets, and for the L2d variable), and it confirms that the kinematic variables used in the top mass determination are well understood.

To fit for the top mass which best agrees with the observed distributions, the authors have used a technique which strongly reminds me of a method I perfected in the past for a totally different problem, the one I dubbed the “hyperball algorithm”. They only have two variables (I used as many as thirty in my work on the improvement of jet energy resolution), so theirs are balls – ellipses, actually – and there is little if anything hyper about them. In any case, the method works by comparing, in the plane of the L2d and lepton Pt variables, the “normalized distance” between observed data points and points distributed within ellipses in that plane, obtained from Monte Carlo simulations with different values of the generated top quark mass: they compute the L2d and Pt distances \delta L_{2d}=L_{2d}^{data} - L_{2d}^{MC}, \delta P_t = P_t^{data} - P_t^{MC}, and using the RMS of the MC distributions \sigma (L_{2d}), \sigma (P_t) they define the quantity

D = \sqrt{ (\delta P_t/\sigma (P_t))^2 + ( \delta L_{2d}/\sigma (L_{2d}))^2}.

D is then used in a fit which compares different hypotheses for the data -ones with different input top masses- to extract the most likely value of the top mass.
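In code, D is just a few lines; the following is a literal transcription of the formula above, with the sigmas taken to be the RMS of the Monte Carlo distributions as described, and not the authors’ actual implementation:

```python
import math

def normalized_distance(pt_data, l2d_data, pt_mc, l2d_mc, sigma_pt, sigma_l2d):
    """Normalized distance D in the (lepton Pt, L2d) plane between a data point
    and a Monte Carlo template generated with a given input top mass."""
    d_pt = (pt_data - pt_mc) / sigma_pt
    d_l2d = (l2d_data - l2d_mc) / sigma_l2d
    return math.hypot(d_pt, d_l2d)
```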

Having worked for some time with distances in multi-dimensional spaces, I have to say I have a quick-and-dirty improvement to offer to the authors: since they know that the two variables used have a slightly different a-priori sensitivity to the top mass, they could improve the definition of D by weighting the two contributions accordingly: this simple addition would make the estimator D slightly more powerful in discriminating different hypotheses. Anyway, I like the simple, straightforward approach they have taken in this analysis: the most robust results are usually those with fewer frills.
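For concreteness, the weighting I have in mind would look something like the hypothetical variant below, where the weights would have to be tuned on simulation to reflect each variable’s a-priori sensitivity; nothing of the sort appears in the CDF analysis itself:

```python
import math

def weighted_distance(d_pt, d_l2d, w_pt=1.0, w_l2d=1.0):
    """Variant of D with per-variable weights (d_pt, d_l2d are the normalized
    residuals defined above); w_pt and w_l2d are free parameters to be tuned
    on simulation, not numbers from the analysis."""
    return math.hypot(w_pt * d_pt, w_l2d * d_l2d)
```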

Having said that, let me conclude: the final result of the analysis is a top mass M = 175.3 \pm 6.2 \pm 3.0 GeV, where the second uncertainty is systematic, and it is almost totally free of the JES uncertainty. This result is not competitive with the other determinations of CDF -some of which have been extracted from the very same dataset; however, this will be a method of great potential at the LHC, where jets are harder to measure for a number of reasons, and where statistics will be large enough that the attention of experimenters will soon turn to the optimization of systematic uncertainties.

Comments

1. Robert I. Marsh II - July 9, 2008

With half of the ‘Standard Model’ missing, shrouded within a mathematical haze of pure speculation, and the LHC being built upon these antiquated precepts, there is absolutely no way of telling what awaits CERN! It will take these experiments to extricate the physics community from their stagnated, depressing, and quagmired current positions! At least one sector of the ‘Standard Model’ shall receive a tsunami of change, that will send the mathematicians and physicists scrambling wildly in their ‘clique’ groups, to hurriedly install these new, very much needed corrections! There is no doubt, that the future world’s desperate energy needs lie in LHC technologies; however, the production course should be traveled with extreme caution! The LSAG ‘safety report’ covers only lower energy 2008 ‘start-up’ operation projections, and speaks nothing toward the pre-planned decade of precision energy upgrades to come; set to begin in 2009! This same report only covers previous public dockets of concern, and nothing of the ‘new’ emerging risk assessment meetings, that are in progress ‘Behind Closed Doors’! CERN is grappling with multiple variance-calculation paradoxes, even as Michelangelo Mangano (and others) penned the expedited, now famous ‘quiet the public’ ‘Safe-Status’ safety report! Two such situations are known: #1). CERN Uncertainty RE: Quantum Time-Dilation Contraction-Calibration Equations; used for particle beam timing/focus, to maximize the optimum collisions per/second during ‘Impact Moment’, being detector analyzed. This line of equations must be precise, or facility damage may result! #2). RE: ALICE heavy (Pb) ion collisions, scheduled to begin (once financed) in 2009! This project generates hyper-density plasmatic fields, that could affect a gravitational curvature within a forced equilibrium state; thus possibly producing a compression singularity vortex, creating an event-horizon expansion! This is known as the expanded: Einstein-Rosen Bridge Wormhole: QUANTUM WORMHOLE! Director General Robert Aymar, Catherine Decosse (ALICE), Michelangelo Mangano, Stephen Hawking, CERN Theory Unit, and LSAG have entered discussions, at this time!

2. nige cook - July 9, 2008

Hi Tommaso, thanks for this interesting article. The way you estimate the top quark mass looks very complicated and indirect. I’m just a bit surprised by discussions everywhere of ‘quark masses’ as if such masses are somehow fundamental properties of nature. Because quarks can’t be isolated, giving them masses as if they were isolated doesn’t make sense: the databook quoted masses of the three quarks in a baryon add up to only about 10 MeV which is just 1% of the mass of the baryon. So most of the mass in matter is due to mass associated with virtual particle vacuum fields, not intrinsically with the long-lived quarks. Therefore the whole exercise in assigning masses to particles which can’t be isolated looks like nonsense? Surely the most striking fact here is that the observable masses of particles (hadrons) containing long-lived quarks have nothing much to do with the masses associated with those long-lived quarks? So why discuss imaginary (not isolated in practice) quark masses at all? What matters in physics is what we can observe, and since quarks can’t be isolated, the calculations have no physical correspondence to the mass of anything that really exists!

On p102 of Siegel’s book Fields, http://arxiv.org/PS_cache/hep-th/pdf/9912/9912205v3.pdf , he points out:

‘The quark masses we have listed are the “current quark masses”, the effective masses when the quarks are relativistic with respect to their hadron (at least for the lighter quarks), and act as almost free. But since they are not free, their masses are ambiguous and energy dependent, and defined by some convenient conventions. Nonrelativistic quark models use instead the “constituent quark masses”, which include potential energy from the gluons. This extra potential energy is about .30 GeV per quark in the lightest mesons, .35 GeV in the lightest baryons; there is also a contribution to the binding energy from spin-spin interaction. Unlike electrodynamics, where the potential energy is negative because the electrons are free at large distances, where the potential levels off (the top of the “well”), in chromodynamics the potential energy is positive because the quarks are free at high energies (short distances, the bottom of the well), and the potential is infinitely rising. Masslessness of the gluons is implied by the fact that no colorful asymptotic states have ever been observed.’

I first heard about quarks when I was an A-level physics student in 1988, and couldn’t understand why the masses of the three quarks in a proton weren’t each about a third of the proton mass. I did quantum mechanics and general relativity later, no particle physics or quantum field theory, so it’s only recently that I’ve come to understand the enormous importance of field energy (binding energy) in particle masses. So from my perspective, I can’t see why anybody cares about quark masses. Because quarks can’t be isolated, such masses are just a mathematical invention; quarks will always really have different masses because of the fields around them when they are in hadrons. In the Standard Model, quarks don’t have any intrinsic masses anyway; the mass is supplied externally by the Higgs field. Whether it is 99% or 100% of the mass of quark composed matter which is in the field that is binding quarks into hadrons, surely the masses of quarks are not important. Surely, to predict observable particle masses, a theory needs to predict the binding energy tied up in the strong force field, not just add up quark masses. This seems to indicate that the mainstream is off in fantasy land when trying to estimate quark masses and present them as somehow ‘real’ masses, when they aren’t real masses at all.

3. Quark masses are pseudoscience because you can’t isolate quarks, and at least 99% of hadron masses are strong force field energy, not quark masses « Gauge theory mechanisms - July 9, 2008

[…] energy, not quark masses Filed under: About — nige @ 11:55 pm In my previous posts here and here, I’ve given evidence for the composition of all observable […]

4. dorigo - July 9, 2008

Dear Nige,

it is of course true that quarks cannot be isolated, and that their “current mass” values are not measurable with infinite precision. In that sense, you could even say that those quantities are ill-defined, to a certain extent (larger for light quarks, O(100 MeV) for the top quark).

What should surprise you is that that kind of fuzzy definition does happen with most physical quantities we measure and use, to some extent: that is a source of systematic uncertainty which is usually neglected. What does it mean, for instance, when we say that the inner temperature of a living human body is 100+-x °F ? Define body, define alive, define inner, define temperature from a microscopic point of view… What does it mean to say that the gravitational acceleration at sea level is 9.8 m/s^2 ? Define acceleration, define sea level, explain how it depends on whether we are at the north pole or on the equator, whether there’s water or rock around.

The message is the following: physical quantities we measure and use have a meaning in a certain context, less so if absolutized. The fine structure constant is a very good quantity to use in low-energy electrodynamics, but it is no constant in high-energy physics.

Quark masses are crucial to perform calculations of cross sections, which are proportional to clicks in our detectors, and in a number of other theoretical calculations. You should be careful to avoid equivocating the meaning of pseudo-science, which certainly does not apply to the case you mentioned.

Cheers,
T.

5. dorigo - July 9, 2008

Dear Robert,

The LSAG ’safety report’ covers only lower energy 2008 ’start-up’ operation projections, and speaks nothing toward the pre-planned decade of precision energy upgrades to come; set to begin in 2009!

Please don’t get confused between energy and luminosity. The report considers the full center-of-mass energy of 14 TeV that the LHC will deliver when fully commissioned. Luminosity will instead keep increasing, with absolutely no change in the quality of microscopic processes created, but just on their rate.

This same report only covers previous public dockets of concern, and nothing of the ‘new’ emerging risk assessment meetings, that are in progress ‘Behind Closed Doors’!

You must be kidding, right ? The safety issues connected with operating the beam and the facilities have been dealt with well before the LHC was built. And in any case you are talking about things that are not different from managing a large construction area, not about destroying the world.

…affect a gravitational curvature within a forced equilibrium state; thus possibly producing a compression singularity vortex, creating an event-horizon expansion…

Sorry, too much sci-fi and too little physics for me to handle properly your comment.

Cheers,
T.

6. Luboš Motl - July 9, 2008

Dear Tommaso,

it’s very nice of you that you have joined cook Nigel in arguing that quark masses do not exist. Finally, your blog has a clear message: quarks are crap!😉

The people who don’t want to be taught by crackpots may want to be assured that quark masses are well-defined parameters of effective field theories that can, in principle, be measured up to the accuracy of powers of (energy/cutoff) where energy is the energy scale where we measure things and the cutoff is the maximum energy where an effective field theory is valid.

The effective field theories such as the Standard Model depend on a few parameters – bare quark masses (or Yukawa couplings) are a subset – and they predict pretty much all the phenomena we can see, so you may bet that it is possible, not only in principle, to measure all these parameters.

Bare quark masses shouldn’t be confused with various masses in very-low-energy effective quark models etc. The bare masses of light quarks are much lighter than 1/3 of the proton mass.

Greetings to all crackpots here, greetings to Tommaso, too.

Best
Lubos

7. dorigo - July 9, 2008

Hi Lubos,

thank you for explaining in complicated terms what I was unable to explain in simple terms😉

I still prefer the experimental message, which is rather clear, to the theoretical one, which is full of caveats. The experimental message is that the top quark mass can be measured with an error not smaller than a few hundred MeV from its decay, due to unavoidable QCD effects. The pole mass, on the other hand, can be measured by determining the cross section of production as a function of energy in an electron-positron collider, but again with a precision not significantly smaller than \Lambda_{QCD}, i.e., again, a couple hundred MeV.

As for the theoretical ambiguities, check this paper for more information: M. Smith, S. Willenbrock, “Top-quark Pole Mass”, PRL 79, 3825 (1997) –

“The top quark decays more quickly than the strong-interaction time scale, \Lambda_{QCD}^{-1}, and might be expected to escape the effects of nonperturbative QCD. Nevertheless, the top-quark pole mass, like the pole mass of a stable heavy quark, is ambiguous by an amount proportional to \Lambda_{QCD}.”

Exactly that: ambiguous.

Cheers,
T.

8. goffredo - July 9, 2008

Dear crackpots and not. A poll question:
Isn’t all physics “effective”?

Where do you stand?

9. Robert I. Marsh II - July 9, 2008

Dear dorigo; I have had direct one on one conversations (May 08 – to date), regarding my information from LSAG, and they are in communications, with the other aforementioned groups, and individuals! I have particle beam experience, from CEBAF (Jefferson Labs, Newport News VA., on Jefferson Ave.), and my comments are based upon discussions between the two agencies. This is now an ongoing set of debates, that will delay LHC, but more importantly ALICE 2009! The sci-fi you mention, can be found within a culmination of Einstein’s General and Special Theory of Relativity, papers on the Einstein-Rosen Bridge, combined with the Heisenberg Uncertainty Principle, in regard to particle location within Space/Time. By the way, I never said anything about it destroying the Earth, because at the point of the initiation phase, there would be macrocosmic Time-Dilations, which would cause an emergency LHC shut-down! The residual effects would last an estimated 12-30 Hours, with an outside perimeter of 30-60 Miles (this should sound familiar). Alternate effects: If quantum pathways become disturbed, this can alter nuclear positionings, changing elements and chemical compounds; thus a postulated loss of structural integrity of the facility, during testing! I actually do appreciate the counter-comment in regard to the ‘Luminosity’ reference, and will adjust my written composition for clarity! Thank you, dorigo!

10. nige cook - July 9, 2008

‘Quark masses are crucial to perform calculations of cross sections, which are proportional to clicks in our detectors, and in a number of other theoretical calculations.’ – Tommaso

Hi Tommaso,

Isn’t that a circular argument, because you’re defining quark masses on the basis of a calculation based on measuring cross-sections, and then using that value to calculate cross-sections?

Since cross-sections are the effective target areas for specific interactions, I don’t see how mass comes into it. Whatever cross-section you are dealing with, it will be for a standard model interaction, not gravitation. Since mass would be a form of gravitational charge, mass is only going to be key to calculating cross-sections for gravitational interactions in quantum gravity.

Clearly mass can come into calculations of other interactions, but only indirectly. E.g., the masses of different particles produced in a collision will determine how much velocity the particles get, because of conservation of momentum.

Re: the analogy to the temperature of the human body. In comment 2 I’m not denying that quarks have mass, just curious as to why so much mainstream attention is focussed on something that’s ambiguous. The internal temperature of the human body is easily measurable: a thermometer can be inserted into the mouth.

You can’t isolate quarks, so whatever mass you calculate for an isolated quark, you aren’t calculating a mass that exists. The actual mass will always be much higher because of the mass contribution from the strong force field surrounding the quark in a hadron.

Hi Lubos,

Quark masses can be well-defined in various different ways. They just can’t be isolated, so while they are useful parameters for calculations, they aren’t describing anything that can be isolated (even in principle). Nobody has ever measured the mass of an isolated quark, they have made measurements of interactions and inferred the quark mass, which doesn’t physically exist because quarks can’t exist in isolation. Other masses, such as lepton and hadron masses, may also involve indirect measurements, but at least there you end up with the mass of something that does exist on its own.

In any case, their masses are negligible compared to the masses of the hadrons containing quarks. So hadron masses are accounted for by the strong field energy between the 2 or 3 quarks in the hadron, not the masses of the quarks themselves.

11. Luboš Motl - July 9, 2008

Dear goffredo,

no, it’s not quite true that all physics is effective. String theory, for example, is (technically speaking) not an effective theory but an exact one.

Dear Tommaso,

that’s really great to prefer experimental definitions of quark masses, but because the Standard Model actually describes the reality, the subtleties one needs to refine in order to know what he means by a quark mass are exactly as large in the experiment as they are in the Standard Model as a theory.

You need to have a renormalization scale and scheme or, equivalently, you need to have some agreed upon experiment that measures these masses. It’s the very same thing simply because the theory works.

Best
Lubos

12. Gauge theory errors corrected by facts, giving tested predictions « Gauge theory mechanisms - July 9, 2008
13. Robert I. Marsh II - July 9, 2008

My reference to: LHC technologies, and our future world desperate energy needs; refers to the fact that LHC, if long-term successful, may produce the manipulation of matter, luminosity (plasma), and energy, that could conceivably render the concealed knowledge for controlled and sustainable nuclear fusion, for our future! It is even probable, that a ‘new’ branch of energy-physics could be born, with unimaginable outgrowth!!!

14. goffredo - July 9, 2008

Hi Lubos
hmmm. I am absolutely not knowledgeable about string theory so forgive my dumb questions and comments. Normally theories (all physics?) are effective because they assume energy cut-offs (thresholds) beyond which the theory breaks down (assumptions for calculations are not valid). What are the assumptions of string theory? Are there any or is it a truly assumptionless theory? Doesn’t string theory too assume something (take something for granted), or does it really boot-strap itself into existence? If it “really” can boot itself into full blossom out of nothing then I would indeed agree it would be a nice example of a down-to-the-root theory of “reality”. I would be deeply impressed. If instead it does need some-thing or some-form to start from then it is effective like all run-of-the-mill theories.
In the latter case (effective) I would still be impressed, but only once there was some experimental prediction and a successful matching with some nice data, and I would be less impressed than I would be if it were truly assumptionless.

jeff

15. dorigo - July 9, 2008

Hi Jeff,

Concerning your question #8, I think I understand what you mean. We are experimentalists, and our approach is to use our understanding of physics in a given regime, within certain boundaries, and according to some model. We are not accustomed to “absolutes”. String theory is indeed an attempt at creating things from scratch. However, it’s been unsuccessful so far, and there are strong doubts it can ever be.

Of course Lubos is welcome to give the informed opinion of a string theorist – which of course is worth more than mine on this particular issue.

Cheers,
T.

16. dorigo - July 9, 2008

Hello Robert,

ok – you did not claim that the LHC will destroy our planet. But you are supporting a scenario which is -to be euphemistic- unlikely. In any case, rest assured that the LHC will not be delayed by what you mention.

And, concerning your second comment: we do not need the LHC to ignite nuclear fusion. We need ITER whose funds, alas, have been cut.

Cheers,
T.

17. dorigo - July 9, 2008

Nige, no, no circularity. You need a top mass as a parameter if you want to determine the theoretical prediction for the number of top-antitop events you collect. The top mass is needed because the parton luminosities depend on the fraction of momentum of the parent proton or antiproton they carry. There are fewer partons at larger momentum, so the cross section decreases with the top mass, because a higher top mass “fishes out” the rarer high-momentum partons.

Cheers,
T.

18. Robert I. Marsh II - July 10, 2008

TOKAMAK: magnetic toroidal field plasma containment!

19. Luboš Motl - July 10, 2008

Dear Jeff,

yours are great questions and suggestions. First of all, you are absolutely correct that a defining feature of the effective theories is a cutoff below which they should be valid. String theory doesn’t have a cutoff (above which it should break) – it is exactly meant to describe the phenomena below and above every conceivable cutoff, including the ultimate cutoff, the Planck scale. Of course, there are approximate descriptions of string theory that have “cutoffs” – e.g. perturbative string theory becomes useless for very strong coupling – but the whole string theory, and we know it pretty well these days, doesn’t have such a limitation.

In quantum gravity in general, there are no “new layers” of knowledge above the Planck scale = there are no meaningful distances shorter than the Planck length. A valid theory of quantum gravity (and string theory is an example, the only fully consistent one) has to “peacefully truncate” the physics near this point, exactly say what the transition from the known low-energy effective theories into this new “nothingness” is, and how the “nothingness” influences low-energy physics by corrections. At very high, trans-Planckian center-of-mass energies, all generic states must be black hole microstates whose macroscopic properties are again encoded in low-energy physics. For various backgrounds (superselection sectors), string theory inserts many new intermediate energy scales, such as one where excited strings occur (below black hole masses, but above light particle masses).

Now, no one knows what the most “general” assumptions of string theory should be – those that could be used to deduce all of the theory’s features from them. Of course, there are many ways how the path towards our present understanding of the theory could be found. For example, assume that elementary particles are 1D strings with a conformal field theory describing their 2D world sheets. Assume that the theory is otherwise treated as a generalization of a quantum field theory in spacetime. Everything else we know follows from basic rules of physics – such as postulates of quantum mechanics – and the requirements of consistency. In this sense, people have been discovering string theory much like Columbus was discovering America. He had to start somewhere near Cuba (?).

There’s no known way to “construct” all of string theory artificially – in other words, we don’t know what the ultimate “defining” principles of the whole structure are. We are finding a pre-existing structure and its properties always seem to be uniquely determined in every regime by consistency. It wasn’t guaranteed to be so from the beginning but it happens to be so.

You are right that this is close to the full bootstrap paradigm (also historically, string theory arose from the – later mostly failed – bootstrap approaches) except that we still need *a* hardcore constructive starting point. In the history, the first ones (constructive approaches) were one-dimensional objects that are as specific and constructive as e.g. quarks (non-bootstrap approach) – even though people found out that strings are “fundamental” in the weakly-coupled limit only and many other objects such as branes become equally/more fundamental at strong coupling.

There exist new starting points such as AdS/CFT duals of gauge theories or a Matrix theory definition, and their complete analysis leads to the same string theory. One can start in many corners – and one has to start in one corner. It would be great to have a full description of the structure that doesn’t require you to start from any particular corner/description – like a Columbus who discovers all of America simultaneously (by dropping a lot of Columbuses from space?)😉 but we don’t have it yet. A constructive, very explicit feature (such as 2D CFT) is still needed.

So there has to be one more assumption besides the general consistency “bootstrap” assumptions so far – string theory is not quite assumptionless today.😉 Regardless of the way you formulate it, it always has one assumption too many.

Best
Lubos

20. Luboš Motl - July 10, 2008

Tommaso,

what the hell are you talking about concerning “unsuccesses”? String theory so far has been completely successful. It describes all known features of the real world in detail in a fully consistent picture that has no continuous adjustable parameters or other arbitrary assumptions.

The only unsuccess I am aware of is that the laymen have not been really able to grasp it – but this unsuccess has been occurring for pretty much every theory of physics for many centuries and string theory was always guaranteed to be much more extreme in this because it incorporates all of previous theories of physics.

Everyone who thinks that there has been something fundamentally “unsuccessful” about string theory is a completely ignorant layperson.

Best
Lubos

21. dorigo - July 10, 2008

Dear Lubos,

I have my own ideas, but as Groucho once said, “if you don’t like them – well, I have others”. I think we perceive string theory as unsuccessful because we expect so much out of a TOE. I will be totally sold only once I get to see a fundamental parameter or two predicted by a model. Until then, I will not agree that a theory has been successful.

Cheers,
T.

22. nige cook - July 10, 2008

‘Nige, no, no circularity. You need a top mass as a parameter if you want to determine the theoretical prediction for the number of top-antitop events you collect. The top mass is needed because the parton luminosities depend on the fraction of momentum of the parent proton or antiproton they carry. There are fewer partons at larger momentum, so the cross section decreases with the top mass, because a higher top mass “fishes out” the rarer high-momentum partons.’ – Tommaso

Thanks for this explanation, but nobody measures the isolated mass of any quark, since quark masses can’t be isolated. The derivation of the mass of the quark comes from reaction cross-sections to make the theory work, and then you use that calculated quark mass to calculate something else. At no point has the isolated quark mass been measured, because it has never been isolated.

By analogy, the original 1960s string theory of strong interactions requires that the strings have a tension of something like 10 tons weight (100 kiloNewtons of force). This figure is required to make the theory describe the strong force, and using this parameter other things about the nucleus can be calculated. However, this isn’t the same thing as ‘measuring’ the tension of strings which bind nuclei together. Just because you can indirectly use experimental data to quantify some parameter and then use that parameter to make checkable calculations, doesn’t mean that it is a real parameter or that it is a real ‘measurement’. In the case of hadronic string theory, it was soon realised that exchange of gluons caused the strong interaction, not string tension.

I think it’s fundamentally misleading for properties of quarks to be quoted where those properties aren’t observable even in principle because of the impossibility of isolating a quark. It’s against Mach’s concept that physics be based on observables. Once you start popularising values for isolated quark masses when isolated quarks never exist even in principle, you break away from Mach’s conception of physics. Hadron masses are directly observable, and they are only 1% constituent quark masses, and 99% mass associated with hadron binding energy. I think that people should be analyzing the latter as a priority, to understand masses.

Hadron masses can be correlated in a kind of periodic table summarized by the expression

M= mn(N+1)/(2*alpha) = 35n(N+1) MeV,

where m is the mass of an electron, alpha = 1/137.036, n is the number of particles in the isolatable particle (n = 2 quarks for mesons, and n = 3 quarks for baryons), and N is the number of massive field quanta (such as Higgs bosons) which give the particle its mass. The particle is a lepton or a pair or triplet of quarks surrounded by shells of massive field quanta which couple to the charges and give them mass, then the number of massive particles which have a highly stable structure might be expected to correspond to the ‘magic numbers’ of nucleon shells in nuclear physics: N = 1, 2, 8 and 50 are such numbers for high stability.

For leptons, n=1 and N=2 gives: 35n(N+1) = 105 MeV (muon)
Also for leptons, n=1 and N=50 gives 35n(N+1) = 1785 MeV (tauon)
For quarks, n=2 quarks per meson and N=1 gives: 35n(N+1) = 140 MeV (pion).
Again for quarks, n=3 quarks per baryon and N=8 gives: 35n(N+1) = 945 MeV (nucleon)

I’ve checked this for particles with lifespans above 10^{-23} second, and the model does correlate well with the other data: http://quantumfieldtheory.org/table1.pdf Obviously there’s other complexity involved in determining masses. For example, as with the periodic tables of the elements you might get effects like isotopes, whereby different numbers of uncharged massive particles can give mass to a particular species, so that certain masses aren’t integers. (For a long while, the mass of chlorine was held by some people as a disproof of Dalton’s theory of atomic weights.)

It’s just concerning that emphasis on ‘measuring’ and ‘explaining’ unobservable (isolated) quark masses deflects too much attention from the observable masses of leptons and hadrons.

23. goffredo - July 10, 2008

Hi Cook
I suggest leaving Mach out of discussion.

Mach was a man of his age. He threw a nice stone in the pond and created waves. But things went quickly forward and did not stop there. Modern post-mach Physics has and will continue to do very well to introduce and deal with primitive concepts that might even be unobservable as long as there are observable consequences; i.e. it is ok to start off with weird unobservable concepts but the obligation, to be scientific, is that you construct observables. Period.

If an unobservable concept is too vague to be of any help and confuses to the point of being sterile then it shouldn’t be used. If instead it has a teaching or heuristic value, where one learns much from using it to the point of not only learning how inadequate it is but it also points the way for deeper concepts, then it is ok to use it. It will have served its purpose and allowed for progress. Not even at this point should it be dropped.

Modern physicists are opportunists and not fanatical Machians that believe that physics just boils down to seeing dials move, counters count and blinking lights blink. Mach and his fanatical parrots appeared and disappeared in a very short time span.

24. nige cook - July 10, 2008

‘… Modern post-mach Physics has and will continue to do very well to introduce and deal with primitive concepts that might even be unobservable as long as there are observable consequences…’

Hi goffredo,

There aren’t any observable consequences of calculating and using a value for the isolated mass of non-isolatable quarks.

All the other features of quarks apart from the calculated ‘isolated’ mass are real features that quarks have when in hadrons, not hypothetical values for isolated quarks, when quarks can’t be isolated. E.g., colour charge and spin are properties that quarks actually have when in hadrons. These are perfectly scientific because they refer to properties of quarks when in pairs or triplets inside mesons and baryons. What is less helpful in teaching the subject is to specify isolated masses for things that can’t be isolated.

25. nige cook - July 10, 2008

A pretty good example of the issue of the lack of observable consequences is epicycles. Ptolemy used observational measurements to calculate the sizes and speeds of epicycles of planets. Those parameters were solid numbers, based on observations. He then calculated the positions of planets based on this model. However, despite the sizes of the epicycles being based on fitting the model to real world data, and despite the calculations based on the epicycle parameters being checked by observations, at no point was the epicycle size parameter measuring anything real. (It’s the same fallacy as where theorists use hadronic string theory to work out that the strong force is due to strings with a given amount of tension, and then use that parameter to calculate other things. At no point is that parameter anything measurable or real, even though it is indirectly based on measured data, and predicts measurable data.)

26. goffredo - July 10, 2008

Hi Cook
nice points. I must agree. Liked Ptolemy example

27. Luboš Motl - July 11, 2008

Dear haters of science and crackpots,

the parameters describing epicycles are absolutely real and every theory that had or has at least an infinitesimal chance to survive must be able to account for their values and, if it goes beyond the geometrical description by epicycles, it must also be able to explain and calculate these values. Better theories also give us a more accurate description than one using epicycles but the epicycles are a legitimate and historically useful order-by-order expansion of the orbits’ shape.

The notion that there is something “unobservable” about the features of epicycles proves that the speaker has no idea what he is talking about.

Moreover, cook Nigel has also screwed the analogy with high-energy physics. His map is upside down. The epicycle description is a phenomenological description not explaining “why” that is analogous to the effective field theory – e.g. the Standard Model – while Newton’s laws of motion explain the shape of the orbits much like string theory explains the origin of the effective field theory.

In both cases, the effective, older theory views some parameters as predetermined assumptions (epicycle parameters; masses and couplings). In both cases, the newer theory (Newton’s theory; string theory) gives a more accurate description that is valid in a broader class of contexts but a description that reduces to the older description whenever it should.

In both cases, the newer theory shows that some of the parameters or assumptions of the older description are not God-given fixed constants but rather features of the history of the Universe (planetary orbits were first created together with the solar system; the low-energy couplings were chosen together with the correct stringy vacuum on the “landscape” at the beginning of this Universe’s life).

I am always amazed how basic things can be so severely misunderstood by the laymen, including the laymen who claim that they are interested in physics.

Best
Lubos

28. goffredo - July 11, 2008

Hi Lubos
hmmm. Interesting view. Let me think about this.

29. goffredo - July 11, 2008

Hi Lubos
I just re-read your last comments and I have absolutely no problem with them.

Cook writes of “reality”. He writes “…at no point was the epicycle size parameter measuring anything real.” While it seems to me that you do without a notion of “reality”.

You emphasize the effective (practical) aspect, something I cherish much too. Feynman liked to compare the modern physics grad student and the Mayan apprentice priest. The Mayan’s goal is to calculate Venus’ motion; the grad student’s goal is to calculate cross-sections. As long as the goal is met there is no problem and each culture is happy. If the way the method is taught is heuristic, mysterious and esoteric, whether it uses semi-intuitive graphs alla Feynman or cryptic methods alla Schwinger for cross-sections, or who-knows-what methods for Venus, it all doesn’t matter when the goals are met.

That is fine to me, and indeed that is why I feel all of physics is effective. But I also feel that science and physics above all is reality driven. By that I do not naively think that our representations are more or less faithful to how “reality” is, but that we are not free to represent “reality” any way we choose. In this sense I can imagine that a significant parameter (full of meaning), from some theoretic point of view, be unreal because it is in hard conflict with what we have learned about nature. No relativism here. All ideas are not equal!
If a concept or parameter is unreal then there is little reason to use it unless there is a very short term goal. If I need to build a rigid planetarium then I could use Ptolemy epicycles even if I knew from the outset that they are artifacts (unreal). But if I wanted to flexibly include new features (comets, space-shit travel) then I would use Newton, because the epicycles are fitting parameters “without meaning” (unreal); i.e. they can’t be used successfully to describe a broader context. And the attitude towards the theory would change because we would be representing “reality” in a different way.

Mental representations are not empty superstructures that can be ignored when really considering what physics does, but they are ways we actively look at the world; they suggest how to take hold of the world and act (experimentally) to explore it further. Along the way we learn that some handles are illusory while others are robust and keep us on a fertile path.

30. nige cook - July 11, 2008

“… the parameters describing epicycles are absolutely real and every theory that had or has at least an infinitesimal chance to survive must be able to account for their values… ” – Lubos Motl

This is sadly incorrect, Lubos. Those parameters aren’t real. The epicycle theory did manage to fit and predict the apparent positions of planets which could be observed in Ptolemy’s time (150 A.D.), but it fails today to predict the distances of the planets from us, which can now be measured. It assumes that all the planets, the moon, and the sun orbit the Earth in circles and go around small circles (epicycles) centred on that path as they orbit, in order to resolve the problems with circular orbits around the Earth.

http://everything2.com/title/Ptolemaic%2520system

“Ptolemy’s model was finally disproved by Galileo, when, using his telescope, Galileo discovered that Venus goes through phases, just like our moon does. Under the Ptolemaic system, however, Venus can only be either between the Earth and the Sun, or on the other side of the Sun (Ptolemy placed it inside the orbit of the Sun, after Mercury, but this was completely arbitrary; he could just as easily swapped Venus and Mercury and put them on the other side, or any combination of placements of Venus and Mercury, as long as they were always colinear with the Earth and Sun). If that was the case, however, it would not appear to go through all phases, as was observed. If it was between the Earth and Sun, it would always appear mostly dark, since the light from the sun would be falling mainly where we can’t see it. On the other hand, if it was on the far side, we would only be able to see the lit side. Galileo saw it small and full, and later large and crescent. The only (reasonable) way to explain that is by having Venus orbit the Sun.”

Other specific points on the error of epicycles:

1. The Moon was always a serious problem for the theory of epicycles. In order to predict where in the sky the moon would be at any time using epicycles (instead of an elliptical orbit of the moon as it goes around the earth), Ptolemy’s epicycles for the Moon have the unfortunate problem of making the moon recede and approach the earth regularly to the extent that the apparent diameter of the Moon would vary by a factor of two. Since the Moon’s apparent diameter doesn’t vary by a factor of two, there is a serious disagreement between the correct elliptical orbit theory and Ptolemy’s epicycle fit to observations of the path of the Moon around the sky. You can get epicycles to fit the positions of planets or the Moon in terms of latitude and longitude on the map of the sky, but it doesn’t accurately model how far the planet or the Moon is from the Earth. The problem for the moon also exists with the other planets, whose diameters were too small to check against the epicycle theory in Ptolemy’s time when there were no telescopes.

2. If you knew the history of the solar system, you’d also be aware that the classical area of physics including Newton’s laws stems ultimately from Tycho Brahe’s observations on the planets. He obtained many accurate data points on the position of Mars, and Kepler tried analysing that data according to epicycles and gave up when it didn’t provide a suitably accurate model. This is why he moved over to elliptical orbits. With lots of epicycles and many adjustable parameters adjusting these, the landscape of possible models made from epicycles is practically infinite, so you can use the anthropic principle to select suitable epicycles and parameters to model planetary positions of longitude and latitude on the celestial sphere, and once you have determined nice fits to the empirical data by selecting suitable parameters for epicycle sizes (by analogy to the selectable size and shape moduli of the Calabi-Yau manifold, when producing the string landscape), you can then “predict” the paths of planets and the Moon around the sky as seen from the Earth. But it is not a good three-dimensional model; it fails to predict accurately how far the planets are from the Earth.

“Moreover, cook Nigel has also screwed the analogy with high-energy physics. His map is upside down.” – Lubos.

Maybe my map just appears to up to be upside down to you, because you’re standing on your head? Thanks for giving us your wisdom on how we can go on using epicycles and string theory.

31. nige cook - July 11, 2008

sorry for my typing errors above

32. Jeff - July 17, 2008

typing errors?
I am plagued with them and some must be freudian.
I wanted to write “space-ship” but I typed “space-shit”.

33. Another pro-LHC top mass measurement « A Quantum Diaries Survivor - October 3, 2008

[…] in news, physics, science. Tags: CDF, LHC, top mass, top quark trackback A few months ago I reported here on a CDF technique to measure the mass of the top quark without relying on hadronic jets, whose […]

34. Some posts you might have missed in 2008 - part II « A Quantum Diaries Survivor - January 6, 2009

[…] July 8: a report of a new technique to measure the top quark mass which is very important for the LHC, and the results obtained on CDF data. For a similar technique of relevance to LHC, also check this other CDF measurement. Possibly related posts: (automatically generated)My talk on new results from CDFLooking Ahead – Schedule – NYTimes.com […]

