
3/fb reached! June 13, 2007

Posted by dorigo in mathematics, news, physics, science.

The Tevatron has been painstakingly producing proton-antiproton collisions for six years now, at an accelerating pace. The milestone of 3 inverse femtobarns of collisions has now been passed, as shown in the graph below (an up-to-date version is always available at http://www.fnal.gov/pub/now/tevlum.html ).

To understand what the heck an integrated luminosity of three inverse femtobarns is, just think about shooting a lot of bullets with a handgun at a dime placed ten yards away. You do not expect to hit the dime, do you? In fact, your bullets will cover a wide area around the dime rather randomly – say an area of about a square foot (if you are good). In order to hit the dime you will on average need to shoot a number of bullets equal to the ratio between one square foot and the area covered by the dime (about a sixth of a square inch, or one cm^2) – that is, about a thousand of them.

Let us put this in a tidier form: shooting 1000 bullets spread over one square foot is exactly the concept physicists have in mind when they discuss the amount of collisions they managed to produce: an “integrated luminosity” L = 1000/sq ft, or about 1/cm^2, since a square foot is about 1000 cm^2. Since the dime has a cross section S of one cm^2, you expect to have made N = SL = 1 cm^2 x 1/cm^2 = 1 hit on average!
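For those who like to see the arithmetic spelled out, here is a minimal sketch in Python of the N = SL bookkeeping, using the rounded numbers above:

```python
# Expected number of hits: N = S * L, where S is the target's cross
# section and L the integrated luminosity (bullets per unit area).
bullets = 1000                  # bullets fired, spread over one square foot
square_foot_cm2 = 1000.0        # one square foot is roughly 1000 cm^2
L = bullets / square_foot_cm2   # integrated luminosity: 1 bullet per cm^2
S = 1.0                         # the dime's cross section, about 1 cm^2

N = S * L
print(N)                        # 1.0 -> one hit on average
```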

Now, physicists use the same line of reasoning to estimate the chance of producing a rare process when colliding particles. Rare processes have a very small cross section, and so to produce them we need to shoot many bullets! The dreaded “inverse femtobarn” is nothing but a measure of the number of bullets we shot per unit area. A femtobarn is the impossibly small area of 10^-39 cm^2: the area of a square whose side is about 3 x 10^-22 meters, far smaller than a proton itself. So three inverse femtobarns means having “illuminated” every such tiny square of the incoming antiprotons with three protons, on average.

With three inverse femtobarns, one can produce really rare processes. The total cross section of a proton-antiproton collision is about S = 10^-25 cm^2, so with L = 3/fb = 3/(10^-39 cm^2) we have actually produced N = SL = 3 x 10^14 collisions (three hundred thousand billion)! Now, a really rare process such as Higgs boson production has a cross section of a few hundred femtobarns: with 3 inverse femtobarns of data we expect to have produced several hundred of them!
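The same one-line formula works with the Tevatron numbers just quoted. This sketch uses the rough, order-of-magnitude cross sections from the text (with 300 fb as a stand-in for “a few hundred femtobarns”), not precise values:

```python
# N = S * L with the rough figures quoted in the text.
FB_TO_CM2 = 1e-39                  # one femtobarn in cm^2

L = 3.0 / FB_TO_CM2                # 3 inverse femtobarns, in cm^-2
S_total = 1e-25                    # total p-pbar cross section, ~10^-25 cm^2
S_higgs = 300 * FB_TO_CM2          # "a few hundred femtobarns" (illustrative)

print(f"collisions produced: {S_total * L:.1e}")  # ~3e14
print(f"Higgs bosons:        {S_higgs * L:.0f}")  # ~900, i.e. several hundred
```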

The problem, then, is finding these few hundred Higgs bosons among the three hundred thousand billion collisions… Hehm. It is much harder than finding the dime after it was hit by your bullet!

Not to worry. CDF and D0 are up to the task. And the Tevatron is expected to more than double the total amount of collisions it has produced thus far by the end of 2009… More dimes to find, more chances to get rich.

Comments

1. Peter Woit - June 13, 2007

Hi Tommaso,

Congratulations to Fermilab on getting the Tevatron to this point!

One thing I’ve always wondered about: CDF and D0 seem to still be mostly reporting results on the first fb^-1 of data. Once you have figured out how to make sense of the first fb^-1, one might naively think that you could just run the same code on the rest of the data, so why does it take so long to get it analyzed? I’m assuming it’s not because you guys are lazy or because your collaboration is holding up the release of results for silly reasons…

2. dorigo - June 13, 2007

Hello Peter,

good question. The time it takes to produce results with new data seems excruciatingly long from the outside, but it is even more so from the inside!

The problem is that there are several things that need to happen in series before one can “turn the crank” of an already well-oiled machine.

The following applies to CDF, but D0 is probably very similar in all phases:

1) As collisions are delivered to CDF, the data is stored on tape.

2) Once stored, the data needs to be reconstructed by software algorithms and then “produced” using calibration data, which is derived for every few chunks of data taking. The steps leading to final production events are multiple, and it may take up to four to five months to get from the moment of collection to the moment when the data can be used for analyses.

3) Then another problem starts: the data has to be validated – every analysis wants to check that the newly collected events are no different from the earlier ones (a toy sketch of such an old-versus-new comparison appears after this list). Here one may also encounter the problem of different running conditions: the instantaneous luminosity with which the first 1/fb of data was taken was on average lower, and many (bad) things happen to data reconstruction as L increases. So analysis cuts may have to be retuned, cleanup revisited, etc. Moreover, the trigger collecting the data of interest for one particular analysis may (and often will) have been modified – it too because of problems connected to the changing luminosity. Many triggers, in fact, get changed frequently. And modeling the changes takes time.

4) On a much longer timescale there is the issue of obtaining new jet energy corrections, new lepton efficiency numbers, new determinations of the b-tagging scale factor, and other things: sets of numbers that depend on the data, and which are only partly obtained by turning a crank – the methods are there, but tweaks are always necessary. Also, during this phase a suitable amount of Monte Carlo events needs to be compared to the data, and producing large amounts of Monte Carlo takes time. Given that the Monte Carlo simulation in CDF reflects the details of the data-taking conditions, it needs to be produced fresh.

5) Finally, one can add the new data to the old data, and redo the whole analysis. More data means more CPU time for things such as optimization, running pseudoexperiments, etcetera.

6) Then the result needs to be blessed anew. And this also requires repeated scrutiny by the collaboration, given that things are not so trivial when one redoes the analysis (as per points 1–5).
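To give a flavor of point 3, here is a toy sketch of the kind of old-versus-new comparison I mean – not actual CDF code, just a generic two-sample compatibility check in Python with made-up inputs:

```python
# Toy illustration of data validation: check that a kinematic
# distribution in newly collected events is compatible with the same
# distribution in older, already-validated data.
# Hypothetical example with simulated inputs, not CDF software.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
old_runs = rng.exponential(scale=25.0, size=100_000)  # e.g. a jet E_T spectrum
new_runs = rng.exponential(scale=25.0, size=20_000)   # same variable, new data

stat, p = ks_2samp(old_runs, new_runs)
print(f"KS statistic = {stat:.4f}, p-value = {p:.3f}")
if p < 0.01:
    print("New data looks different: re-tune cuts, revisit cleanup.")
else:
    print("New data compatible with old data for this variable.")
```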

All in all, it may take one year from collisions to blessing… Alas, experiments in particle physics are not a trivial matter!

Cheers,
T.

3. Peter Woit - June 13, 2007

I see, basically the problem is that the behavior of the detector keeps changing.

Related to this, I remember reading a few years ago that sooner or later radiation damage would seriously degrade the performance of the inner parts of D0 and CDF. Is this still a problem and will it put a limit on how long the Tevatron can keep running?

4. dorigo - June 13, 2007

Yes, in some sense the detector needs constant calibration if one wants to obtain the best resolutions, despite the fact that, in general, these things are quite stable (some parts of CDF are 25 years old, and still going strong!).

The silicon sensors of CDF and D0 are expected to withstand the radiation dose from 8/fb of data. The degradation thereafter is minor, and these detectors are quite redundant in design, so they can outlive some of their parts without appreciably jeopardizing performance. I believe we would be able to make very good use of twice as much luminosity as the amount we can collect until 2009.

About the silicon, I am more worried by the intrinsic dangers of high-luminosity running. Beam aborts may cook our microstrip detectors, as can high losses! For that reason, in fact, after a new beam has been injected into the Tevatron, we turn on the silicon in CDF only once the beam has stabilized (following a phase called “scraping”).

In the end, you never know… A few years ago we killed quite a few sensors because the wire bonds went into resonance with the L1 data-taking frequency above 30 kHz, and broke. This was due to the Hall effect!

5. ana - June 14, 2007

Could you please output the full text in your RSS feed? It would make your blog even better!

6. dorigo - June 14, 2007

Hi Ana,

how would my blog be better if syndication feeds received the full text rather than a summary? I tend to think that by broadcasting only a summary, I encourage those who read the feeds and are interested in the topic to visit my site.

Cheers,
T.

7. questioner - June 14, 2007

Can I ask a dumb theorist’s question? The 3 fb^-1: is this 3 fb^-1 per experiment or 3 fb^-1 in total? So does this mean CDF now has 1.5 fb^-1 of Run II data to analyse, or 3 fb^-1? Thanks.

8. dorigo - June 14, 2007

Hi Questioner,

it is 3/fb per experiment.

Actually, a sordid detail is that CDF and D0 each measure the instantaneous luminosity with their own devices, and D0’s luminosity always appears to be smaller by a few percent, despite the fact that the beam optics are the same at the center of the two experiments. As far as I know the Tevatron trusts the CDF number more and uses that one, so when 3/fb are delivered, D0 is probably seeing 2.95/fb or so…

Cheers,
T.

9. gordonwatts - June 14, 2007

Another thing that takes a long time: we are always upgrading our detector. For example, D0 added a new layer of silicon. Getting that tuned up takes a lot of work!

10. gordonwatts - June 14, 2007

T – The beam optics are definitely not the same at the center of both CDF and D0. -Gordon.

11. gordonwatts - June 14, 2007

BTW — if you want to see D0’s collected luminosity as a function of time, you can, here: http://d0server1.fnal.gov/projects/operations/D0RunII_DataTaking_files/image006.png — that is a public plot and it is updated about once a week (give or take). We have 3.04 at the moment. You’ll notice there are two numbers: the 3.04 and a 2.58. The larger number is what we think the Tevatron has delivered, and the 2.58 is what we have managed to write to tape (the difference is due to downtime and other problems).

12. gordonwatts - June 14, 2007

I agree with Ana, btw. I read almost all my blogs in a feed reader, going out of the reader means starting a web browser — so I don’t read your blog nearly as often as I do others that give the full feed.🙂 If you weren’t a blog about physics, I would have removed you from my blog reader already.😉 -G.

13. dorigo - June 14, 2007

Gordon, thanks for the information – I think I now remember having been told that once (about the beam optics).

And I will modify the feed.

Cheers,
T.

14. Fred - June 15, 2007

Good and swift move, T., concerning the RSS suggestions by Ana and Gordon. Blogs and inter-corporate communications are the 2 lucky stiffs of this developed tool so far. Strangely, many but not all news services appear to use web feeds as a sleight-of-hand tactic. An interesting study can be found at: http://www.icmpa.umd.edu/pages/studies/rss_study_details/rss_study.html Of course, this is probably distorted in its own way. It’s simply a matter of filtered trust. Oh, congrats to you and your cronies for being dedicated workers. You should be proud of that nice curve, but don’t forget to count your blessings.

15. gordonwatts - June 15, 2007

Thanks! I appreciate that; makes it much easier to read your blog (along with others).

16. ana - June 15, 2007

As for the full content feeds, I have the same feelings as Gordon.
Here I recommend a good article that lists the reasons for, and makes the comparison between, abstract and full content feeds:

For the Love of the Web, Please Use Full Content Feeds!
link here: http://www.devlounge.net/articles/for-the-love-of-the-web-please-use-full-content-feeds

17. dorigo - June 15, 2007

Ana, I made the change today. Your feed should now include my posts in all their glory😉

Cheers,
T.

18. Jeffrey Scofield - June 16, 2007

Thanks for the explanation of 1/fb. I have been wondering what this means for a couple of months, after seeing it in Peter Woit’s blog.

(Hmm, I just found a definition by googling for “inverse barn.” But yours is far superior.)

Obeisances and regards,
Jeff S.

19. dorigo - June 17, 2007

You are welcome Jeffrey. Always glad to explain particle physics jargon away…

Cheers,
T.

20. Luminosity Profile « Life as a Physicist - June 19, 2007

[…] person, showed this plot. I like this plot because it really made me rethink what is going on with our experiments luminosity. I like plots that jar me out of some preconceived notion I […]

21. A particle mass from its production rate « A Quantum Diaries Survivor - June 26, 2007

[…] of the quarks and gluons inside the proton, we can compute the probability of a collision – the cross section for a given process. Now, it so happens that PDF functions are very large at low values of x, […]

