## ICHEP blog (July 12, 2010)

Posted by dorigo in astronomy, Blogroll, cosmology, internet, news, physics, science.

Just one line here to mention that since May there has been a new blog out there – a temporary blog that will cover the late-July event in Paris, the International Conference on High Energy Physics: how we get there, and the aftermath. The effort includes several well-known bloggers in high-energy physics, and it is definitely worth following.

You can visit it here.

## Post summary – April 2009 (May 1, 2009)

Posted by dorigo in astronomy, Blogroll, cosmology, internet, news, personal, physics, science, social life.

As the less distracted among you have already figured out, I have permanently moved my blogging activities to www.scientificblogging.com. The reasons for the move are explained here.

Since I know that this site continues to be visited -because the 1450 posts it contains draw traffic regardless of the inactivity- I am providing monthly updates on the pieces I write at my new blog. Below is a list of the posts published last month at the new site.

The Large Hadron Collider is Back Together – announcing the replacement of the last LHC magnets

Hera’s Intriguing Top Candidates – a discussion of a recent search for FCNC single top production in ep collisions

Source Code for the Greedy Bump Bias – a do-it-yourself guide to study the bias of bump fitting

Bump Hunting II: the Greedy Bump Bias – the second part of the post about bump hunting, and a discussion of a nagging bias in bump fitting

Rita Levi Montalcini: 100 Years and Still Going Strong – a tribute to Rita Levi Montalcini, Nobel prize for medicine

The Subtle Art of Bump Hunting – Part I – a discussion of some subtleties in the search for new particle signals

Save Children Burnt by Caustic Soda! – an invitation to donate to Emergency!

Gates Foundation to Chat with Bloggers About World Malaria Day – announcing a teleconference with bloggers

Dark Matter: a Critical Assessment of Recent Cosmic Ray Signals – a summary of Marco Cirelli’s illuminating talk at NeuTel 2009

A Fascinating New Higgs Boson Search by the DZERO Experiment – a discussion on a search for tth events recently published by the Tevatron experiment

A Banner Worth a Thousand Words - a comment on my new banner

Confirmed for WCSJ 2009 – my first post on the new site

## NeuTel 09: Oscar Blanch Bigas, update on Auger limits on the diffuse flux of neutrinos (April 3, 2009)

Posted by dorigo in astronomy, cosmology, news, physics, science.

With this post I continue the series of short reports on the talks I heard at the Neutrino Telescopes 2009 conference, held three weeks ago in Venice.

The Pierre Auger Observatory is a huge (3000 km^2) hybrid detector of ultra-energetic cosmic rays -that is, those with an energy above 10^18 eV. The detector is located in Malargue, Argentina, at 1400 m above sea level.

There are four six-eyed fluorescence detectors: when the shower of particles created by a very energetic primary cosmic ray develops in the atmosphere, it excites nitrogen atoms, which emit energy as fluorescent light that is collected by the telescopes. This is a calorimetric measurement of the shower, since the number of particles in the shower provides a measurement of the energy of the incident primary particle.

The main problem of the fluorescence detection method is statistics: fluorescence detectors have a reduced duty cycle, because they can only observe on moonless nights. That amounts to a 10% duty cycle. So they are complemented by a surface detector, which has a 100% duty cycle.

The surface detector is composed of water Cherenkov detectors on the ground, which detect light with photomultiplier tubes. The signal is sampled as a function of the distance from the shower core. The measurement depends on a Monte Carlo simulation, so the method carries some systematic uncertainties.

The assembly includes 1600 surface detectors (red points), surrounded by four fluorescence detectors (shown by green lines in the map above). These study the high-energy cosmic rays: their spectra, their arrival directions, and their composition. The detector also has some sensitivity to ultra-high-energy neutrinos. A standard cosmic ray interacts at the top of the atmosphere and yields an extensive air shower with an electromagnetic component at the ground; but if the arrival direction of the primary is tilted with respect to the vertical, the e.m. component is absorbed before reaching the ground, so the shower contains only muons. For neutrinos, which can penetrate deep into the atmosphere before interacting, the shower will instead have a significant e.m. component regardless of the angle of incidence.

The “footprint” is the pattern of firing detectors on the ground; it encodes information on the angle of incidence. An elongated footprint and a wide time structure of the signal are seen for tilted showers, and for those, the presence of an e.m. component is a strong indication of a neutrino shower.

There is a second method to detect neutrinos, based on so-called “Earth-skimming” neutrinos: the Earth-skimming mechanism occurs when a neutrino interacts in the Earth, producing a charged lepton via a charged-current interaction. The lepton produces a shower that can be detected above the ground. This channel has better sensitivity than that of neutrinos interacting in the atmosphere. It can be used for tau neutrinos, thanks to the early tau decay in the atmosphere. The distance of interaction at 1 EeV is 500 km for a muon neutrino, 50 km for a tau neutrino, and 10 km for an electron neutrino. If you are unfamiliar with these ultra-high energies, 1 EeV = 1000 PeV = 1,000,000 TeV: this is roughly equivalent to the energy drawn in a second by a handful of LEDs.
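The LED comparison is easy to check in code; the per-LED power below is my own assumed round number, not a figure from the talk:

```python
EV_TO_JOULE = 1.602e-19  # joules per electron-volt

one_eev = 1e18 * EV_TO_JOULE         # 1 EeV expressed in joules
led_power_w = 0.05                   # assumed draw of a small indicator LED, in watts
led_seconds = one_eev / led_power_w  # how many LED-seconds 1 EeV buys
```

With these numbers, 1 EeV is about 0.16 J, i.e. a few small LEDs running for one second.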

Showers induced by emerging tau leptons start close to the detector and are very inclined. So one asks for an elongated footprint and, using the available timing information, a shower front moving at the speed of light. The background to such a signature is of the order of one event every 10 years. The most important drawback of Earth-skimming neutrinos is the large systematic uncertainty associated with the measurement.

Ideally, one would like to produce a neutrino spectrum, or an energy-dependent limit on the flux, but no energy reconstruction is available: the observed energy depends on the height at which the shower develops, and since this is not known for penetrating particles such as neutrinos, one can only give an integrated flux limit. The limit is in the energy range where GZK neutrinos should peak, but its value is an order of magnitude above the expected flux of GZK neutrinos. A differential limit in energy has a much worse reach.

The figure below shows the result for the integrated flux of neutrinos obtained by the Pierre Auger Observatory in 2008 (red line), compared with other limits and with expectations for GZK neutrinos.

## Ten photons per hour (March 23, 2009)

Posted by dorigo in astronomy, games, mathematics, personal, physics, science.

Every working day I walk about a mile from the train station to my physics department in Padova in the morning. I find it a healthy habit, but I sometimes fear it is also, in some sense, a waste of time: if I caught a bus, I could be at work ten minutes earlier. I hate losing time, so I sometimes use the walking time to set myself physics problems, trying to see whether I can solve them in my head. It is a way to exercise my mind while I exercise my body.

Today I was thinking about the night of stargazing I treated myself to last Saturday. I had gone to Casera Razzo, a secluded place in the Alps, and observed galaxies for four hours in a row with a 16″ dobsonian telescope, in the company of four friends (and three other dobs). One thing we had observed with amazement was a tiny speck of light coming from the halo of an interacting pair of galaxies in Ursa Major, the one pictured below.

The small speck of light shown in the upper left of the picture above, labeled MGC 10-17-5, is actually a faint galaxy in the field of view of NGC 3690. It has a visual magnitude of +15.7, a measure of its integrated luminosity as seen from the Earth. It is a really faint object, barely at the limit of visibility with the instrument I had. The question I found myself formulating this morning was the following: how many photons per second did we get to see through the eyepiece from that faint galaxy?

This is a nice, simple question, but computing its answer in my head took the best part of my walk. My problem was that I did not have a clue about the relationship between visual magnitude and photon flux, so I turned to things I did know.

Some background is needed for those of you who do not know how visual magnitudes are computed, so I will make a small digression here. The scale of visual magnitude is a semi-empirical one, which sets the brightest stars at magnitude zero or so, and defines a decrease of luminosity by a factor of 100 for every five magnitudes of difference. The faintest stars visible with the naked eye on a moonless night are of magnitude +6, which means they are about 250 times fainter than the brightest ones. On the other hand, Venus shines at magnitude -4.5 at its brightest -almost 100 times as bright as the brightest stars-, and our Sun shines at a visual magnitude of about -27, more than a billion times brighter than Venus. The magnitude difference between two objects is related to their relative brightness by a power law: $L_1/L_2 = 2.5^{M_2-M_1}$; the factor 2.5 is an approximation of the fifth root of 100, and it corresponds to the brightness ratio of two objects that differ by one unit of visual magnitude.
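The power law above is simple enough to sketch in a few lines; `brightness_ratio` is just a name I chose for it:

```python
def brightness_ratio(m1, m2):
    """Apparent brightness ratio L1/L2 of two objects with visual
    magnitudes m1 and m2: five magnitudes = a factor of exactly 100."""
    return 100 ** ((m2 - m1) / 5)

# one magnitude of difference is the fifth root of 100, about 2.512
per_magnitude = brightness_ratio(0, 1)

# the Sun (-27) versus Venus at its brightest (-4.5): about a billion
sun_vs_venus = brightness_ratio(-27, -4.5)
```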

Ok, so we know how bright the Sun is. Now, if I could get how many photons reach our eye from it every second, I would make some progress. I reasoned that I knew the value of the solar constant: the power radiated by the Sun onto an area of 1 square meter on the ground of the Earth. I remembered a value of about 1 kilowatt (it is actually 1.366 kW, as I found out later on Wikipedia).

Now, how many photons of visible light arriving per second on that square meter of ground correspond to 1 kilowatt of power? I did not remember the energy of a single visible photon -I recalled it was in the electron-volt range, but I was not really sure- so I had to compute it.

The energy of a quantum of light is given by the formula $E = h \nu$, where $h$ is Planck’s constant and $\nu$ is the light frequency. However, all I knew was that visible light has a wavelength of about 500 nanometers (which is $5 \times 10^{-7} m$), so I had to use the more involved formula $E = hc/\lambda$, where $c$ is the speed of light and $\lambda$ is the wavelength. I remembered that $h = 6 \times 10^{-34} Js$ and that $c = 3 \times 10^8 m/s$, so with some effort I could get $E = 6 \times 10^{-34} \times 3 \times 10^8 / (5 \times 10^{-7}) = 4 \times 10^{-19}$ Joules, more or less.
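The same arithmetic, done with slightly more precise constants than the walk-estimate:

```python
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 500e-9  # meters, mid-visible (green) light

energy_joules = h * c / wavelength      # about 4e-19 J
energy_ev = energy_joules / 1.602e-19   # about 2.5 eV, as expected
```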

My brain was a bit strained by the simple calculation above, but I was relieved to get back an energy roughly equal to the one I expected -in the eV range (one eV equals $1.6 \times 10^{-19}$ Joules -that much I do know).

Now, if the Sun radiates 1 kW of power per square meter, which is a thousand Joules per second, how many visible photons do we get? Here there is a subtlety I did not even bother considering on my walk to the physics department: only about half of the power from the Sun comes in the form of visible light, so one should divide that power by two. But I was unhindered by this in my order-of-magnitude walk-estimate. Of course, 1 kW divided by $4 \times 10^{-19} J$ makes $2.5 \times 10^{21}$ visible quanta of light per square meter per second.

Now, visual magnitude is expressed as the amount of light hitting the eye. A human eye has a pupil area of about 20 square millimeters, which is 20 millionths of a square meter: so the number of photons you get by looking straight at the Sun (do not do it) is $1.2 \times 10^{14}$ per second. That’s a hundred trillion of ‘em photons per second!

I was close to my goal now: the magnitude of the speck of galaxy I saw on Saturday is +15.7, the magnitude of the Sun is -27, so the difference is 43 magnitudes. This corresponds to a factor of $2.5^{43}$, which you might throw up your hands at, until you realize that for every 5 units of the exponent the number increases by a factor of 100, so you just compute $100^{43/5} = 100^{8.6} = 10^{17.2}$… Simple, isn’t it?

Now, taking the number of photons reaching the eye from the Sun every second, and dividing by the ratio of apparent luminosities of the Sun and the galaxy, I could get $N_{\gamma}=10^{14} / 10^{17} = 10^{-3}$. One photon every thousand seconds!

Let me stress this: if you watch that patch of sky at night, the number of photons you get from that source alone is a few per hour! With my dobsonian telescope, which intensifies light by almost 10,000 times, I could get a rate of a few tens of photons per second, and the detail was indeed detectable!

If you are interested in the exact number, which I worked out after reaching my office and the tables of constants in the PDG booklet, I computed a rate of $N_{\gamma} = 3.4 \times 10^{-3}$ photons per second with the unaided eye, and 22 per second through the eyepiece of the telescope. Without a telescope, that galaxy sends each of us about 10 photons per hour!

UPDATE: this post will remain as one clear example of how dangerous it is to compute in one’s head! Indeed, somewhere in my order-of-magnitude conversions above I dropped a factor of 10^2 -which, mind you, is not horrible in numbers with 20 digits or so; but when one wants to get back to reasonable estimates for reasonably small numbers, it does count a lot. So, after taking care of some other (more legitimate) approximations, if one computes things correctly, the number of photons from the galaxy seen with the unaided eye is more like two hundred per hour, and in the telescope it is about 350 per second.
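For the curious, the whole chain of the estimate can be redone in a few lines. The inputs below (solar constant, visible fraction, pupil area, magnitudes) are round-number assumptions of mine, not necessarily the choices behind the corrected figures, so the output only agrees with them at the order-of-magnitude level:

```python
SOLAR_CONSTANT = 1366.0        # W/m^2 reaching the Earth
VISIBLE_FRACTION = 0.5         # roughly half the solar power is visible light
PHOTON_ENERGY = 4e-19          # J, a ~500 nm photon
PUPIL_AREA = 2e-5              # m^2, about 20 mm^2
M_SUN, M_GALAXY = -26.7, 15.7  # apparent visual magnitudes

# visible solar photons entering the pupil each second
sun_rate = SOLAR_CONSTANT * VISIBLE_FRACTION / PHOTON_ENERGY * PUPIL_AREA

# Sun-to-galaxy apparent brightness ratio from the magnitude difference
ratio = 100 ** ((M_GALAXY - M_SUN) / 5)

galaxy_rate_per_hour = sun_rate / ratio * 3600  # hundreds per hour
```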

## Neutrino Telescopes day 2 notes (March 12, 2009)

Posted by dorigo in astronomy, cosmology, news, physics, science.

The second day of the “Neutrino Telescopes XIII” conference in Venice was dedicated to, well, neutrino telescopes. I have written down, in stenographic fashion, some of the things I heard, and I offer them to those of you who are really interested in the topic, without much editing. Besides, making sense of my notes takes quite some time, more than I have tonight.

So, I apologize for spelling mistakes (the ones I recognize myself, post-mortem), in addition to the more serious conceptual ones coming from missed sentences or from errors caused by my poor understanding of English, of the matter, or of both. Also, I apologize to those of you who would have preferred a more succinct, readable account. As Blaise Pascal once put it, “I have made this letter longer than usual, because I lack the time to make it short.”

NOTE: the links to slides are not working yet – I expect that the conference organizers will fix the problem tomorrow morning.

Christian Spiering: Astroparticle Physics, the European strategy (slides here)

Spiering gave some information about two new European organizations: ApPEC and ASPERA. ApPEC has two committees that offer advice to national funding agencies and improve links and communication between the astroparticle physics community and the scientific programmes of organizations like CERN, ESA, etc. ASPERA was launched in 2006 to produce a roadmap for astroparticle physics in Europe, in close coordination with ASTRONET and with links to CERN strategy bodies.

Roadmapping involves the science case, an overview of the status, and some recommendations for convergence; then a critical assessment of the plans and a calendar of milestones, coordinated with ASTRONET.

For dark matter and dark energy searches, Christian displayed a graph showing the cross section of WIMPs as a function of time, i.e. the reach of present-day experiments. In 2015 we should reach cross sections of about 10^-10 picobarns; we are now at a sensitivity of some 10^-8. The reach depends on background, funding, and infrastructure. The idea is to move toward 2-ton-scale zero-background detectors. Projects: Zeplin, Xenon, others.

In an ideal scenario, LHC observations of new particles at the weak scale would place these observations in a well-confined particle-physics context, and direct detection would be supported by indirect signatures. In case of a discovery, smoking-gun signatures of direct detection, such as directionality and annual variations, would be measured in detail.

Properties of neutrinos: the direct mass measurement efforts are KATRIN and Troitzk. Double beta decay experiments are Cuoricino, Nemo-3, Gerda, Cuore, et al. The KKGH group claimed a signal corresponding to masses of a few tenths of an eV, but a normal hierarchy implies a mass of order 10^-3 eV for the lightest neutrino. Experiments are already in operation (Cuoricino, Nemo-3) or expected to start by 2010-2011. SuperNEMO will start in 2014.

A large infrastructure for proton decay is advised. For charged cosmic rays, depending on which part of the spectrum one looks at, there are different kinds of physics contributing and worth exploring.

The case for Auger-North is strong: high-statistics astronomy with reasonably fast data collection is needed.

For high-energy gamma rays, the situation has seen enormous progress over the last 15 years, mostly through imaging atmospheric Cherenkov telescopes (IACTs): Whipple, Hegra, CAT, Cangaroo, Hess, Magic, Veritas; also, wide-angle devices. Among existing air Cherenkov telescopes, Hess and Magic are running, and very soon Magic will become Magic-II. Whipple runs a monitoring telescope.

There are new plans for MACE in India, something between Magic and Hess. CTA and AGIS are in their design phase.

ASPERA’s recommendation: the priority of VHE gamma astrophysics is CTA. They recommend the design and prototyping of CTA and the selection of sites, proceeding decidedly towards a start of deployment in 2012.

For point neutrino sources, there has been tremendous progress in sensitivity over the last decade: a factor of 1000 in flux sensitivity within 15 years. IceCube will deliver what it has promised within 2012.

For gravitational waves, there are LISA and VIRGO. The frequency range tested by LISA is around 10^-2 Hz; VIRGO will go to 100-10000 Hz. The reach is of several to several hundred sources per year. The Einstein Telescope, an underground gravitational-wave detector, could access thousands of sources per year; its construction would start in 2017. The conclusions: Einstein is the long-term future project of ground-based gravitational-wave astronomy in Europe. A decision on funding will come after first detections with enhanced LIGO and VIRGO, most likely after collecting about a year of data.

In summary, the budget will increase by a factor of more than two in the next decade. Km3Net, megaton detectors, CTA, and ET will be the experiments taking the largest share. We are moving into regions with a high discovery potential, with an accelerated increase of sensitivity in nearly all fields.

K. Hoffmann, Results from IceCube and Amanda, and prospects for the future (slides here)

IceCube will become the first instrumented cubic-kilometer neutrino telescope. Amanda-II consists of 677 optical modules embedded in the ice at depths of 1500-2000 m; it has been a testbed for IceCube and for deploying optical modules. IceCube has been under construction for the last several years: strings of photomultiplier tubes have been deployed in the ice, and 59 of them are now operating.

The rates: IC40 sees 110 neutrino events per day, and is getting close to 100% livetime (94% in January). IceCube has the largest effective area for muons, with long track lengths. The energy sensitivity is in the TeV-PeV range.

Ice properties are important to understand. A dust logger measures the dust concentration, which is connected to the attenuation length of light in ice. There is a thick layer of dust sitting at a depth of 2000 m, clear ice above it, and very clear ice below. They have to understand the light yield and propagation well.

Of course one of the most important parameters is the angular resolution. As the detector got larger, it improved. One of the more exciting things this year was to see the point spread function peak at less than one degree, with long track lengths for muons.

Seeing the Moon is always reassuring for a telescope. They did it: a >4-sigma deficit of cosmic rays from the direction of the Moon.

Thanks to the waveforms available in IceCube, the energy reconstruction can exploit the fact that high-energy muons are not minimum-ionizing: they reconstruct the energy from the number of photons along the track. They can achieve some energy resolution, and there is progress in understanding how to reconstruct energy.

First results from point-source searches: the 40-string configuration data will be analyzed soon. Point sources are sought with an unbinned likelihood search, taking into account an energy variable, since point sources are expected to have a harder energy spectrum than atmospheric neutrinos. From 5114 neutrino candidates in 276 days, they found one hot spot in the sky, with a significance, after accounting for the trial factor, of about 2.2 sigma. Next year there will be variables less sensitive to the dust model, so they might be able to say more about that spot soon.

For seven years of data, with a 3.8-year livetime, the hottest spot has a significance of 3.3 sigma. With one year of data, IceCube-22 will already be more sensitive than Amanda. IceCube and Antares are complementary, since IceCube looks at northern declinations and Antares at southern ones. The point-source flux sensitivity is down to 10^-11 GeV cm^-2 s^-1.

For GRBs, one can use a triggered search, which is an advantage, and the latest results give a limit for 400 bursts. From IceCube-22, an unbinned search similar to that of the point-source search gives an expected exclusion power of 10^-1 GeV per cm^2 (in E^2 dN/dE units) in most of the energy range.

The naked-eye GRB of March 19, 2008 caught the detector in test mode, with only 9 of 22 strings taking data. Bahcall predicted a flux peaking at 10^6 GeV with a flux of 10^-1, but the limit found is 20 times higher.

Finally, they are looking for WIMPs. A search with 104 days of livetime was recently sent for publication by the 22-string IceCube; it can reach down well in cross section.

Atmospheric neutrinos are also a probe for violations of Lorentz invariance -possibly from quantum-gravity effects. The survival probability depends on energy; assuming maximal mixing, their sensitivity is down to one part in 10^28. They are looking for a change in what one would expect for flavor oscillations: depending on where atmospheric neutrinos are produced, they traverse more or less of the core of the Earth. So one gets a neutrino beam with different baselines as a function of energy, and one would see a difference in the neutrino oscillation probability: the oscillation parameter would become energy dependent.
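For context, this is a sketch of the standard two-flavor vacuum oscillation formula that such a test would modify (the textbook expression with illustrative parameter values, not IceCube's actual analysis):

```python
import math

def p_survival(l_km, e_gev, sin2_2theta=1.0, dm2_ev2=2.4e-3):
    """Two-flavor muon-neutrino survival probability in vacuum:
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with L in km, E in GeV, dm2 in eV^2."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_ev2 * l_km / e_gev) ** 2

# a vertically up-going atmospheric neutrino sees the full Earth diameter,
# while a nearly horizontal one has a much shorter baseline
p_up = p_survival(l_km=12700, e_gev=25.0)
p_horizontal = p_survival(l_km=500, e_gev=25.0)
```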

In the future they would like to see a high-energy extension. Ice is the only medium where one can see a coherent radio signal together with an optical one, and an acoustic one too. The past season was very successful, with the addition of 19 new strings. Many analyses of the 22-string configuration are complete. Analysis techniques are being refined to exploit the size, energy threshold, and technology used. Work is underway to develop the technology to build a GZK-scale neutrino detector after IceCube is complete.

Vincenzo Flaminio, Results from Antares (slides here)

Potential sources of galactic neutrinos are SN remnants, pulsars, and microquasars; extragalactic ones are gamma-ray bursts and active galactic nuclei. A by-product of Antares is an indirect search for dark matter; results are not ready yet.

Neutrinos from supernovae: these act as particle accelerators, and can give hadrons and gammas from neutral pion decays. Possible sources are those found by Auger, or for example the TeV photon sources associated with molecular clouds.

Antares is an array of photomultiplier tubes that look at the Cherenkov light produced by muons crossing the detector. The site is off the southern coast of France; the galactic center is visible 75% of the time. The collaboration comprises 200 physicists from many European countries. The control room in Toulon is more comfortable than the Amanda site (and this wins the understatement prize of the conference).

The depth in water is 2500 m. All strings are connected via cables on the seabed; a 40-km-long electro-optical cable connects to shore. The time resolution is monitored by a LED beacon in each detector storey. A sketch of the detector is shown below.

Deployment started in 2005; in 2006 the first line was installed, and construction finished one year ago. In addition there is an acoustic storey and several monitoring instruments. Biologists and oceanographers are interested in what is done, not just neutrino physicists.

The detector positioning is an important point, because the lines move with the sea currents. A large number of transmitters is installed along the lines, and their information is used to reconstruct, minute by minute, the precise position of the lines.

They collect triggers at a 10 Hz rate with 12 lines. They detected 19 million muons with the first 5 lines, 60 million with the full detector.

The first physics analyses are going on. They select up-going neutrinos; this way, the poor signal-to-noise ratio due to atmospheric muons is avoided. The rate is of the order of two per day using the multi-line configuration.

Conclusions: Antares has successfully reached the end of construction phase. Data taking is ongoing, analyses in progress on atmospheric muons and neutrinos, cosmic neutrino sources, dark matter, neutrino oscillations, magnetic monopoles, etcetera.

David Saltzberg, Overview of the Anita experiment (slides here)

Anita flies at 120,000 ft above the ice. It is the eyepiece of the telescope; the objective is the large amount of ice of Antarctica. The detection principle was tested at SLAC with 8 metric tons of ice: one observes radio pulses from the ice. A wake-field radio signal is detected, which goes up and down in less than a nanosecond, due to its Cherenkov nature. This is called the Askaryan effect. One can observe the number of particles in the shower, and the measured field does track that number. The signal is 100% linearly polarized. The wavelength is bigger than the size of the shower, so the emission is coherent: at a PeV there are more radio quanta emitted than optical ones.

They will use this at very high energy, looking for GZK-induced neutrinos. The GZK mechanism converts protons into neutrinos within about 50 Mpc of the sources.

The energy is at the level of 10^18 eV or higher, and the proper time is 50 milliseconds: the longest-baseline neutrino experiment possible.

Anita has a GPS antenna for position and orientation, which needs a fraction-of-a-degree resolution. It is solar powered. The antennas are pointed down by 10 degrees.

This 50-page document describes the instrument.

Lucky coincidences: 70% of the world’s fresh water is in Antarctica, and it is the radio-quietest place on Earth. The place selects itself, so to speak.

They made a flight with a livetime of 17.3 days, but this one never flew above the thickest ice, which is where most of the signal should come from.

The Askaryan signal gets distorted by the antenna response, the electronics, and thermal noise. The triggering works like any multi-level trigger: sufficient energy in one antenna, and the same for its neighbors; L2 does a coincidence between adjacent L1 triggers; L3 goes down to 5 Hz from a start of 150 kHz.

They put a transmitter underground to produce pulses to be detected. Cross-correlation between antennas does interferometry and gets the position of the source. The resolution obtained on elevation is an amazing 0.3 degrees, and in azimuth it is 0.7 degrees. The ground pulsers make even very small effects stand out: even a 0.2-degree tilt of the detector can be spotted by looking at the errors in elevation as a function of azimuth.
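The pointing principle can be illustrated with a toy cross-correlation between two antenna waveforms (a synthetic example of mine, not Anita code): the lag that maximizes the correlation is the relative arrival time, and from the delays between antenna pairs the source direction follows.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.arange(n)

# a toy impulsive pulse seen by antenna A, and the same pulse
# arriving at antenna B five samples later, both with a little noise
pulse = np.exp(-0.5 * ((t - 60) / 4.0) ** 2)
a = pulse + 0.01 * rng.standard_normal(n)
b = np.roll(pulse, 5) + 0.01 * rng.standard_normal(n)

# full cross-correlation; the peak lag is the arrival-time delay
xc = np.correlate(b, a, mode="full")
delay = int(np.argmax(xc)) - (n - 1)  # recovers the 5-sample shift
```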

First pass of the data analysis: 8.2 million hardware triggers, 20,000 of which point well to the ice. After requiring up-coming plane waves, isolated from camps and other events, a few events remain. These could be some residual man-made noise. Background estimates: thermal noise, which is well simulated and gives less than one event after all cuts; and anthropogenic impulsive noise, like Iridium phones, spark plugs, and discharges from structures.

Results: having seen zero vertical-polarization events surviving the cuts, they set constraints on GZK production models -the best result to date in the energy range from 10^10 to 10^13 GeV.

Anita-2 has 27 million triggers of better quality, collected over deeper ice during 30 days afloat; they are still to be analyzed. Anita-1 is doing a second-pass deep analysis of its data. Anita-2 has better data; expect a factor of 5-10 more GZK sensitivity from it.

Sanshiro Enomoto, Using neutrinos to study the Earth: Geoneutrinos (slides here)

Geoneutrinos are generated by the beta decay chains of natural isotopes (U, Th, K), which all yield antineutrinos. With an organic scintillator, they are detected via the inverse beta decay reaction, yielding a neutron and a positron. The threshold is at 1.8 MeV: Uranium and Thorium contribute above this energy, while the Potassium yield is below it. Only U-238 can be seen.
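The 1.8 MeV figure follows from the kinematics of inverse beta decay on a proton at rest, as a quick check with PDG masses shows:

```python
# particle masses in MeV/c^2 (PDG values)
M_P = 938.272  # proton
M_N = 939.565  # neutron
M_E = 0.511    # positron

# minimum antineutrino energy for  nu_bar + p -> n + e+  (target at rest)
e_threshold = ((M_N + M_E) ** 2 - M_P ** 2) / (2 * M_P)  # ~1.806 MeV
```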

Radiogenic heat dominates the Earth’s energetics. The measured terrestrial heat flow is 44 TW, and the radiogenic heat is 3 TW. The only direct geochemical probes: the deepest borehole reaches only 12 km, and rock samples come from down to 200 km underground. Heat release from the surface peaks off the Americas in the Pacific and in the southern Indian Ocean. The estimate is 20 TW from radioactive heat: 8 from U, 8 from Th, 3 from K. Core heat flow from solidification etc. is estimated at 5-15 TW, secular cooling at 18±10 TW.

KamLAND has seen 25 events above backgrounds, consistent with expectations.

I did not take further notes of this talk, but I was impressed by some awesome plots of Earth planispheres with all the sources of neutrino backgrounds, meant to figure out the best place for a detector studying geoneutrinos. Check the slides for them…

Michele Maltoni, Synergies between future atmospheric and long-baseline neutrino experiments (slides here)

A global six-parameter fit of neutrino parameters was shown, including solar, atmospheric, reactor, and accelerator neutrinos, but not yet SNO-III. There is a small preference for a non-zero theta_13, coming entirely from the solar sector; as pointed out by G. Fogli, we do not find a non-zero theta_13 angle from atmospheric data. All we can do is point out that there might be something interesting, and suggest that experiments do their own analyses fast.

The question is: in this picture, where many experiments contribute, is there space left for atmospheric neutrinos? What is the role of atmospheric neutrino measurements? Do we need them at all?

At first sight, there is not much left for atmospheric neutrinos: the mass determination is dominated by MINOS, and theta_13 is dominated by CHOOZ. Atmospheric data dominate the determination of the mixing angle -atmospheric neutrino measurements have the highest statistics- but with the coming of the next generation this is going to change. There is a symmetry in the sensitivity shape of the other experiments to some of the parameters. On the other hand, when atmospheric data are included, the symmetry in theta_13 is broken, which distinguishes between the normal and inverted hierarchies.

The determination of the octant of $\sin^2 \theta_{23}$, and of $\Delta m^2_{31}$, also benefits. Moreover, the introduction of atmospheric data introduces a modulation in the $\delta_{CP} - \sin \theta_{13}$ plot. Will this usefulness continue in the future?

Sensitivity to theta_13: apart from the hints mentioned so far, atmospheric neutrinos can observe theta_13 through matter (MSW) effects. In practice, the sensitivity is limited by statistics: at E = 6 GeV the atmospheric flux is already suppressed, and background comes from $\nu_e \to \nu_e$ events, which strongly dilute the $\nu_\mu \to \nu_e$ events. Also, the resonance occurs only for neutrinos OR antineutrinos, not both.

As far as resolution goes, megaton detectors are still far in the future, while long-baseline experiments are starting now.

One concludes that the sensitivity to theta_13 is not competitive with dedicated LBL and reactor experiments.

The situation is completely different for the other properties, since the resonance can be exploited once theta_13 is measured: there is a resonant enhancement of neutrino (antineutrino) oscillations for a normal (inverted) hierarchy, mainly visible at high energy, >6 GeV. The effect can be observed if the detector can discriminate charge or, failing that, if the numbers of neutrinos and antineutrinos differ.
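For reference, the MSW resonance underlying this discussion can be sketched with the standard two-flavor matter-oscillation formula (my own addition, not from the talk): in matter of electron density $N_e$, the resonance energy is

$$E_{\rm res} \simeq \frac{\Delta m^2_{31} \cos 2\theta_{13}}{2\sqrt{2}\, G_F N_e},$$

and since the matter potential flips sign for antineutrinos, the resonance occurs for neutrinos if $\Delta m^2_{31}>0$ (normal hierarchy) and for antineutrinos if it is negative (inverted hierarchy), which is exactly why the effect discriminates between the two hierarchies.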

Sensitivity to the hierarchy depends on charge discrimination for muon neutrinos. Sensitivity to the octant: in the low-energy region (E<1 GeV), for theta_13=0 there is an excess of $\nu_e$ flux for theta_23 on one or the other side. Otherwise, there are lots of oscillations, but the effect persists on average, and it is present for both neutrinos and antineutrinos. At high energy, E>3 GeV, for non-zero theta_13 the MSW resonance produces an excess of electron-neutrino events; the resonance occurs only for one kind of neutrino (neutrino vs antineutrino).

So, in summary, one can detect many features with atmospheric neutrinos, but only with particular detector characteristics (charge discrimination, energy resolution…).

Without atmospheric data, only K2K can say something on the neutrino hierarchy for low theta_13.

LBL experiments have poor sensitivity due to parameter degeneracies; atmospheric neutrinos contribute in this case. The sensitivity to the octant is almost completely dominated by atmospheric data, with only minor contributions from LBL measurements.

One final comment: there might be hints of neutrino hierarchy in high-energy data. If theta_13 is really large, there can be some sensitivity to neutrino mass hierarchy. So the idea is to have a part of the detectors with increased photo-coverage, and use the rest of the mass as a veto: the goal is to lower the energy threshold as much as possible, to gain sensitivity to neutrino parameters with large statistics.

Atmospheric data are always present in any long-baseline neutrino detector: ATM and LBL provide complementary information on neutrino parameters, information in particular on hierarchy and octant degeneracy.

Stavros Katsanevas, Toward a European megaton neutrino observatory (slides here)

Underground science: interdisciplinary potential at all scales. Galactic supernova neutrinos, galactic neutrinos, SN relics, solar neutrinos, geo-neutrinos, dark matter, cosmology -dark energy and dark matter.

LAGUNA: aimed at defining and realizing this research programme in Europe. It includes a majority of the European physicists interested in the construction of very massive detectors, realized in one of three liquid-based technologies: water, liquid argon, and liquid scintillator.

Memphys, Lena, Glacier. Where could we put them? The muon flux goes down with overburden, so sites have to be examined by their depth. In Frejus there is the possibility to put a detector between the road and train tunnels. The Frejus rock is neither hard nor soft; hard rock can become explosive because of stresses, and is not good. Another site is Pyhasalmi in Finland, but there the rock is hard.

Frejus is probably the only place where one can put water Cherenkov detectors. For liquid argon, we have ICARUS (hopefully starting data taking in May) and others (LANNDD, GLACIER, etc.). GLACIER is a 70 m tank with several novel concepts: a safe LNG tank of the kind developed over many years by the petrochemical industry. R&D includes readout systems and electronics, safety, HV systems, and LAr purification. One must also think about building an intermediate-scale detector first.

The physics scope is a complementary program: Memphys has a lot of reach in searches for positron-pizero decays of protons, while liquid argon is better for the kzero modes. Proton lifetime expectations are at 10^36 years.

By 2013-2014 we will know whether $\sin^2 \theta_{13}$ is larger than zero.

The European megaton detector community (three liquids), in collaboration with its industrial partners, is currently addressing common issues (sites, safety, infrastructures, non-accelerator physics potential) in the context of LAGUNA (EU FP7). Cost estimates will be ready by July 2010.

D.F. Cowen, The physics potential of IceCube’s Deep Core sub-array (slides here)

A new sub-array of IceCube, called Deep Core: ICDC. It was originally conceived as a way to improve the sensitivity to WIMPs. Denser sub-arrays lower the energy threshold, giving an order-of-magnitude decrease in the low-energy reach. There are six special strings plus seven nearby IceCube strings. The vertical spacing is 7 meters, with 72-meter horizontal inter-string spacing: ten times the density of the standard IceCube array.

The effective scattering length in the deep ice, which is very clear, is longer than 40 meters. This offers a better possibility of doing a calorimetric measurement.

The Deep Core is at the bottom center. The top modules in each string are used as an active veto against backgrounds from down-going muon events; on the sides, three layers of IceCube strings also provide a veto. These beat down the cosmic background a lot.

The ICDC veto algorithms: one runs online, finding the event light intensity, the weighted center of gravity, and the time. They do a number of things and come up with a 1:1 S/N ratio. So ICDC improves the sensitivity to WIMPs, to neutrino sources in the southern sky, and to oscillations. For WIMPs, annihilation can occur in the center of the Earth or the Sun: annihilations into b-bbar or tau-tau pairs give soft neutrinos, while those into W boson pairs yield hard ones. This way, the reach is extended to masses below 100 GeV, at cross sections of 10^-40 cm^2.

In conclusion, ICDC can analyze data at lower neutrino energy than previously thought possible. It improves the overlap with other experiments. It provides powerful background rejection, and it has sufficient energy resolution to do a lot of neutrino oscillation studies.

Kenneth Lande, Projects in the US: a megaton detector at Homestake (slides here)

DUSEL at Homestake, in South Dakota. The design includes four water Cherenkov tanks. Nearby is the old site of the chlorine experiment; the shafts are a km apart.

DUSEL will be an array of 100-150 kT fiducial-mass Cherenkov detectors, at 1300 km distance from FNAL. The beam power goes from 0.7 MW to 2.0 MW as the project proceeds; eventually 100 kT of argon will be added. The picture below shows a cutaway view of the facility.

The accelerator-based goals are theta_13, the neutrino mass hierarchy, and CP violation through delta_CP. The non-accelerator program includes studies of proton decay, relic SN neutrinos, prompt SN neutrinos, atmospheric neutrinos, and solar neutrinos. They can build tanks up to 70 m wide, but settled on 50-60 m. The plan is to build three modules.

Physics-wise, the FNAL beam has oscillated and disappeared at energies around 4 GeV. The rate is 200,000 CC events per year assuming 2 MW power (raw events, without oscillation). Electron-neutrino appearance for nu and antinu as a function of energy yields the oscillation parameters and the mass hierarchy.

The reach in theta_13 is below 10^-2. For nucleon decay they are looking in the range of 10^34 years: 300 kT for 10 years means 10^35 proton-years. The detector is also sensitive to the K-nu decay mode, at the level of 8×10^33.
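As a back-of-envelope sanity check of the exposure quoted above (my own sketch, not from the talk), one can count the free protons, i.e. hydrogen nuclei, in 300 kT of water:

```python
# Back-of-envelope exposure for proton decay: count free protons
# (hydrogen nuclei) in 300 kT of water, then multiply by 10 years.
N_A = 6.022e23               # Avogadro's number, molecules/mol
mass_water_g = 300e3 * 1e6   # 300 kT in grams (1 t = 1e6 g)
molar_mass_h2o = 18.0        # g/mol
molecules = mass_water_g / molar_mass_h2o * N_A
free_protons = 2 * molecules  # two hydrogen nuclei per water molecule
exposure_proton_years = free_protons * 10
print(f"{exposure_proton_years:.1e} proton-years")  # prints "2.0e+35 proton-years"
```

This lands at about 2×10^35 proton-years, matching the order of magnitude in the notes.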

DUSEL can choose the overburden. A deep option can go deeper than Sudbury.

US power reactors are far from Homestake: the typical distance is 500 km, and the reactor neutrino flux is 1/30 of that at SK.

For a SN in our galaxy they expect about 100,000 events in 10 seconds. For a SN in M31 they expect about 10-15 events in a few seconds.
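The factor between the two numbers follows from simple inverse-square scaling of the neutrino flux with distance; a sketch, where the Galactic SN distance (~10 kpc) and the M31 distance (~780 kpc) are my own assumed values:

```python
# Supernova neutrino event counts scale as 1/d^2 with the source distance.
d_galactic_kpc = 10.0  # typical Galactic supernova distance (assumed)
d_m31_kpc = 780.0      # distance to Andromeda (assumed)
n_galactic = 100_000   # events quoted for a Galactic SN
n_m31 = n_galactic * (d_galactic_kpc / d_m31_kpc) ** 2
print(f"{n_m31:.0f} events expected from an M31 supernova")  # prints "16 events ..."
```

This reproduces the 10-15 event ballpark quoted above.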

Detector construction: excavation, installation of a water-tight liner… The financial timetable is uncertain. At the moment the water level is being pumped down; rock studies can start in September.

And that would be all for today… I heard many other talks, but cannot bring myself to comment on those. Please check the conference site (http://neutrino.pd.infn.it/NEUTEL09/) for the slides of the other talks!

## Neutrino Telescopes XIII (March 8, 2009)

Posted by dorigo in astronomy, cosmology, news, personal, physics, science, travel.
Tags: , , , ,

The conference “Neutrino Telescopes” has arrived at its XIII edition. It is a very nicely organized workshop, held in Venice every year toward the end of winter or the start of spring. For me it is especially pleasant to attend, since the venue, Palazzo Franchetti (see picture below), is a ten-minute walk from my home: a nice change from my usual hour-long train commute to Padova.

This year the conference will start on Tuesday, March 10th, and will last until Friday. I will be blogging from there, hopefully describing some new results heard in the several interesting talks that have been scheduled. Let me mention only a few of the talks, pasted from the program:

• D. Meloni (University of Roma Tre)
CP Violation in Neutrino Physics and New Physics
• K. Hoffman (University of Maryland)
AMANDA and IceCube Results
• S. Enomoto (Tohoku University)
Using Neutrinos to study the Earth
• D.F. Cowen (Penn State University)
The Physics Potential of IceCube’s Deep Core Sub-Detector
• S. Katsanevas (Université de Paris 7)
Toward a European Megaton Neutrino Observatory
• E. Lisi (INFN, Bari)
Core-Collapse Supernovae: When Neutrinos get to know Each Other
• G. Altarelli (University of Roma Tre & CERN)
Recent Developments of Models of Neutrino Mixing
Next Challenge in Neutrino Physics: the θ13 Angle
• M. Cirelli (IPhT-CEA, Saclay)
PAMELA, ATIC and Dark Matter

The conference will close with a round table: here are the participants:

Chair: N. Cabibbo (University of Roma “La Sapienza”)
B. Barish (CALTECH)
L. Maiani (CNR)
V.A. Matveev (INR of RAS, Moscow)
H. Minakata (Tokyo Metropolitan University)
P.J. Oddone (FNAL)
R. Petronzio (INFN, Roma)
C. Rubbia (CERN)
M. Spiro (CEA, Saclay)
A. Suzuki (KEK)

Needless to say, I look forward to a very interesting week!

## Comet Lulin is a naked-eye object! (February 19, 2009)

Posted by dorigo in astronomy, news, science.
Tags: , , ,

Comet Lulin (C/2007 N3) is approaching its minimum distance from our planet -the closest approach will occur on February 24th at a distance of 61 million kilometers- and is already a naked-eye object in the sky, glowing at a visual magnitude of +5.6 with what is described as a bright green colour. The coma has a diameter of 20 arcminutes (two-thirds of the Moon’s diameter). As you can see from Jack Newton’s picture below, the comet shows both a tail and an anti-tail, with a bright oval coma.

The timing of the closest approach is very convenient, given the absence of moonlight and the comet’s position in the sky, almost exactly opposite the Sun. A pair of binoculars, even low-power ones, will reveal the comet easily from your back yard even in light-polluted areas, while under dark skies you should be able to detect it with the unaided eye; a telescope should be used at low magnification to show the comet in all its glory. The object moves quickly in the sky, and its apparent motion is easy to detect if you have the patience to observe it for a while.

You can find the comet in Libra today and tomorrow (check the map below -click to enlarge), while at closest approach on Feb 24th it will be in Leo, just a few degrees due South of Saturn. In a few days its brightness could increase by another magnitude (the magnitudes in the chart are not necessarily correct).
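For readers unfamiliar with the magnitude scale, "another magnitude" is a well-defined brightness factor: five magnitudes are defined as a factor of 100 in flux, so one magnitude is a factor of about 2.5.

```python
# The astronomical magnitude scale is logarithmic: 5 magnitudes = factor 100
# in flux, so a brightening of delta_m magnitudes is a factor 100**(delta_m/5).
delta_m = 1.0
flux_ratio = 100 ** (delta_m / 5)
print(f"1 magnitude brighter = {flux_ratio:.3f}x the flux")  # prints "... 2.512x ..."
```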

For a beautiful gallery of images of this beautiful comet, I advise you to visit the Spaceweather site.

## Guest post: Marco Vedovato, “Jupiter: a little analysis about the GRS-LRS encounter” (February 15, 2009)

Posted by dorigo in astronomy, news, physics, science.
Tags: , , ,

Marco Vedovato is, in his daily life, a structural engineer. As an amateur astronomer, when his children allow it, his main interest is the atmosphere of Jupiter, the giant planet of the Solar System, and he participates as a measurer in the Jupos Project, an international investigation of Jupiter. He is also the vice-manager of the Jupiter Program of the Italian Amateur Astronomic Union. When I saw his extremely interesting analysis of the Jovian atmosphere I begged him to write about it for this site. You can find the resulting piece below.

Last year I amused myself by analyzing one aspect of the encounter between two Jupiter spots. For this purpose I used WinJupos, a piece of software for measuring Jupiter images (see here). In the following picture, a map composed from several very good images, the reader can meet the protagonists of this tale (click on the picture to get the full image!):

The first is the famous Great Red Spot (GRS), a long-lived anticyclonic circulation centered around -22.5° South latitude, existing at least since the second half of the 18th century. The second is a smaller reddish spot (LRS, Little Red Spot), probably born between the end of 2007 and the beginning of 2008 as a residue of a previous “Tropical Disturbance” observed during 2007.

It is well known that the GRS has a 90-day oscillation around its mean motion in Jupiter’s outer atmosphere. Looking at the map above, the GRS is moving very slowly in longitude (at constant latitude) from left to right, toward increasing longitudes (retrograde motion). This lazy motion is not constant but presents an oscillation around the main drift. In the following graph the red points are the GRS center, while the ones on the left (blue) and right (green) are the two ends of the GRS; it is easy to note a period close to 90 days.
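To make the procedure concrete, here is a sketch (my own illustration, not Marco’s actual analysis) of how a linear drift plus a fixed 90-day oscillation can be fit to longitude measurements: with the period held fixed, the model is linear in its parameters, so an ordinary least-squares fit suffices.

```python
import numpy as np

# Synthetic longitude measurements: linear drift + 90-day oscillation
# (made-up drift and amplitude, for illustration only).
period = 90.0                       # days, held fixed in the fit
t = np.arange(0.0, 360.0, 5.0)      # observation epochs in days
lon = 0.05 * t + 0.4 * np.sin(2 * np.pi * t / period)

# Design matrix: offset, drift, and the two quadrature components of the
# oscillation; the amplitude is sqrt(c_sin**2 + c_cos**2).
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period)])
coef, *_ = np.linalg.lstsq(A, lon, rcond=None)
drift, amplitude = coef[1], np.hypot(coef[2], coef[3])
print(f"drift = {drift:.3f} deg/day, amplitude = {amplitude:.3f} deg")
```

On these noise-free data the fit recovers the drift (0.05 deg/day) and amplitude (0.4 deg) exactly; real measurements would of course scatter around the model.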

The LRS instead moved in the opposite direction (prograde motion) from the GRS, and with higher speed, so an encounter was inevitable. The picture below shows a graph I obtained before the encounter, using a few points from very good images (i.e. those of C. Go, F. Carvalho, A. Wesley, G. Grassman and others). I noted, in the LRS case too, an oscillation around the interpolating line.

After the encounter the LRS was quickly destroyed. The following graph documents the collision.

I was interested to see whether this LRS oscillation was similar to the GRS one, with the same period, and whether or not it was in phase. So I matched the relative motions “in parallel” (by using a modified reference system for the LRS, artificially changing its speed so as to have roughly the same slope for both drifts); a slight correlation between the two oscillations seems noticeable. I do not know whether the effect is coincidental or real. In the latter case, are the two oscillations determined by the same cause, hidden in atmospheric currents embedded in deeper layers?

John Rogers, Jupiter director of the British Astronomical Association, wrote me this comment: “Very interesting. Perhaps the oscillation of the GRS has an effect on the nearby LRS? Or perhaps the synchrony is a coincidence — it is difficult to say!”

I will have to prepare further analyses when similar opportunities arise.

(Marco Vedovato)

## What’s hot around (February 10, 2009)

Posted by dorigo in astronomy, Blogroll, cosmology, internet, italian blogs, mathematics, news, physics, science.
Tags: , , , , ,

For lack of interesting topics to blog about, I refer you to a short list of bloggers who have produced readable material in the last few days:

• The always witty Resonaances has produced an informative post on Quirks.
• My friend David Orban describes the recently-instituted Singularity University.
• Stefan explains other types of singularities, those you can find in your kitchen!
• Dmitry has an outstanding post out today about the physics of turbulence, with four mini-pieces on the Reynolds number, viscosity, universality and intermittency. Worth a visit, if even just for the pics!
• Marco discusses the long winter of the LHC. Sorry, it’s in Italian.
• Peter discusses the same issue in English.
• Marni points out a direct explanation of the Pioneer anomaly based on the difference between atomic clock time and astronomical time. Or, if you will, a change of the speed of light with time!

## CMS and extensive air showers: ideas for an experiment (February 6, 2009)

Posted by dorigo in astronomy, cosmology, physics, science.
Tags: , , , , , , ,

The paper by Thomas Gehrmann and collaborators that I cited a few days ago has inspired me to take a closer look at the problem of understanding the features of extensive air showers - the localized streams of high-energy particles originated when a very energetic proton or light nucleus hits the upper atmosphere.

While the topic of cosmic rays, their sources, and their study is largely terra incognita to me -I only know the very basic facts, having learned them like most of you from popularization magazines- I do know that a few of their features are not yet well understood. Let me mention just a few issues below, with no fear of showing how ignorant I am on the topic:

• The highest-energy cosmic rays have no clear explanation in terms of their origin. A few events with energy exceeding $10^{20}$ eV have been recorded by at least a couple of experiments, and they are the subject of an extensive investigation by the Pierre Auger observatory.
• There are a number of anomalies in their composition, their energy spectrum, and the composition of the showers they develop. The data from PAMELA and ATIC are just two recent examples of things we do not understand well, and which might have an exotic explanation.
• While models of their formation suppose that only light nuclei -iron at most- compose the flux of primary hadrons, some data (for instance this study by the Delphi collaboration) seem to imply otherwise.

The paper by Gehrmann addresses the latter point in particular. There appears to be a failure in our ability to describe the development of air showers producing very large numbers of muons, and this failure might be due to modeling uncertainties, to heavy nuclei as primaries, or to the creation of exotic particles with muonic decays, in decreasing order of likelihood. For sure, if an exotic particle like the 300 GeV one hypothesized in the interpretation paper produced by the authors of the CDF study of multi-muon events (see the tag cloud in the right column for an extensive review of that result) existed, the Tevatron would not be the only place to find it: high-energy cosmic rays would produce it in sizable amounts, and the observed multi-muon signature from its decay in the atmosphere might show up in those air showers as well!

Mind you, large numbers of muons are by no means a surprising phenomenon in high-energy cosmic-ray showers. What happens is that a hadronic collision between the primary hadron and a nucleus of nitrogen or oxygen in the upper atmosphere creates dozens of secondary light hadrons. These in turn hit other nuclei, and the hadronic shower develops until the hadrons fall below the energy required to create more secondaries. The created hadrons then decay, and in particular the $K^+ \to \mu^+ \nu_{\mu}$ and $\pi^+ \to \mu^+ \nu_{\mu}$ decays create a lot of muons.
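These are two-body decays, so the muon momentum in the parent’s rest frame is fixed by the masses alone; a quick check (standard kinematics, PDG mass values):

```python
# Two-body decay kinematics: for X+ -> mu+ nu with a massless neutrino, the
# muon momentum in the parent rest frame is p* = (m_X^2 - m_mu^2) / (2 m_X).
m_pi = 139.57  # MeV, charged pion mass
m_mu = 105.66  # MeV, muon mass
m_K = 493.68   # MeV, charged kaon mass

def p_star(m_parent, m_daughter):
    return (m_parent**2 - m_daughter**2) / (2 * m_parent)

print(f"pi -> mu nu: p* = {p_star(m_pi, m_mu):.1f} MeV")  # ~29.8 MeV
print(f"K  -> mu nu: p* = {p_star(m_K, m_mu):.1f} MeV")   # ~235.5 MeV
```

The boost of the parent in the lab frame then turns these modest rest-frame momenta into the multi-GeV muons that reach the ground.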

Muons have a lifetime of about 2.2 microseconds, and if they are energetic enough they can travel many kilometers, reaching the ground and whatever detector we set there. In addition, muons are very penetrating: a muon needs just 52 GeV of energy to make it 100 meters underground, through the rock lying on top of the CERN detectors. Of course, air showers include not just muons, but electrons, neutrinos, and photons, plus protons and other hadronic particles. But none of these particles, except neutrinos, can make it deep underground. And neutrinos pass through unseen…
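The 52 GeV figure can be checked with a minimum-ionizing approximation: a constant energy loss of about 2 MeV per g/cm² in standard rock of density 2.65 g/cm³ (ionization only, so a rough estimate of my own):

```python
# Minimum muon energy to cross 100 m of standard rock, assuming a constant
# (minimum-ionizing) energy loss of ~2 MeV cm^2/g and density 2.65 g/cm^3.
dedx_mev_cm2_g = 2.0   # approximate minimum-ionizing energy loss
rho_rock = 2.65        # g/cm^3, "standard rock"
depth_cm = 100 * 100   # 100 m expressed in cm
e_min_gev = dedx_mev_cm2_g * rho_rock * depth_cm / 1000.0  # MeV -> GeV
print(f"minimum muon energy: ~{e_min_gev:.0f} GeV")  # prints "~53 GeV"
```

The result, ~53 GeV, agrees nicely with the 52 GeV quoted above.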

Now, if one reads the Delphi publication, as well as the information from other experiments which have studied high-multiplicity cosmic-ray showers, one learns a few interesting facts. Delphi found a large number of events with so many muon tracks that they could not even count them! In a few cases, they could only quote a lower limit on the number of muons crossing the detector volume. One such event is shown in the picture on the right: they infer that an air shower passed through the detector by observing voids in the distribution of hits!

The number of muons seen underground is an excellent estimator of the energy of the primary cosmic ray, as the Kascade result shown on the left demonstrates (on the abscissa is the logarithm of the energy of the primary cosmic ray, and on the y axis the number of muons per square meter measured by the detector). But to compute the energy and composition of cosmic rays from the characteristics we observe on the ground, we need detailed simulations of the mechanisms creating the shower - and these simulations require an understanding of the physical processes at the basis of the production of secondaries, which are known only to a certain degree. I will get back to this point; here I just mean to point out that a detector measuring the number of muons gets an estimate of the energy of the primary nucleus. The energy, but not the species!

As I was mentioning, the Delphi data (and those of other experiments, too) showed that there are too many high-muon-multiplicity showers. The graph on the right shows the observed excess at very high muon multiplicities (the points on the far right of the graph). This is a 3-sigma effect, and it might be caused by modeling uncertainties, but it might also mean that we do not understand the composition of the primary cosmic rays: indeed, a heavier nucleus of a given energy usually produces more muons than a lighter one.
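The “heavier nucleus, more muons” statement can be illustrated with the Heitler-Matthews toy model plus the superposition principle: the muon number grows as E^β with β ≈ 0.85, and a nucleus of mass number A behaves roughly like A independent nucleons of energy E/A. The parameter values below are typical textbook choices, not a real shower simulation:

```python
# Heitler-Matthews toy model: N_mu ~ (E / xi_c)^beta for a proton shower,
# with beta ~ 0.85 and pion critical energy xi_c ~ 20 GeV. Superposition:
# a nucleus of mass A is ~A nucleons of energy E/A, so
# N_mu(A) = A * (E / (A * xi_c))^beta = A^(1 - beta) * N_mu(proton).
beta = 0.85
xi_c_gev = 20.0

def n_mu(e_primary_gev, A=1):
    return A * (e_primary_gev / (A * xi_c_gev)) ** beta

E = 1e8  # a 10^17 eV primary, expressed in GeV
ratio = n_mu(E, A=56) / n_mu(E, A=1)
print(f"iron/proton muon ratio at fixed energy: {ratio:.2f}")  # prints "1.83"
```

So in this toy picture an iron primary yields ~80% more muons than a proton of the same energy, which is why the muon excess can be read as a hint about composition.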

The modeling uncertainties are due to the fact that the very forward production of hadrons in a nucleus-nucleus collision is governed by QCD at very small energy scales, where we cannot calculate the theory to a good approximation. So we cannot really compute with the precision we would like how likely it is that, say, a 1,000,000-TeV proton produces a forward-going 1-TeV proton in a collision with a nucleus of the atmosphere. That is, the energy distribution of the secondaries produced forward is not well known, and this reflects into an uncertainty on the shower composition.

Enter CMS

Now, what does CMS have to do with all the above? Well, for one thing, last summer the detector was turned on in the underground cavern at Point 5 of the LHC, and it collected 300 million cosmic-ray events. This is a huge data sample, made possible by the large extension of the detector and the beautiful performance of its muon chambers (which, by the way, were designed by physicists of Padova University!). Such a large dataset already includes very high-multiplicity muon showers, and some of my collaborators are busy analyzing that gold mine. Measurements of the cosmic-ray properties are ongoing.

One might hope that the collection of cosmic rays will continue even after the LHC is turned on. I believe it will, but only during the short periods when there is no beam circulating in the machine. The cosmic-ray data thus collected are typically used to keep the system “warm” while waiting for more proton-proton collisions, but they will not represent an orders-of-magnitude increase in statistics with respect to what was already collected last summer.

The CMS cosmic-ray data can indeed provide an estimate of several characteristics of air showers, but it will not be capable of providing results qualitatively different from the findings of Delphi - although, of course, it might confirm the simulations, disproving the excess observed by that experiment. The problem is that very energetic events are rare, so one must actively pursue them rather than turning on cosmic-ray data collection only when not in collider mode. But there is one further important point: since only muons are detected, one cannot really tell whether the simulation is tuned correctly, and one cannot access a critical piece of additional information: the amount of energy that the shower produced in the form of electrons and photons.

The electron and photon component of the air shower is a good discriminant of the nucleus which produced the primary interaction, as the plot on the right shows. It is in fact crucial information to rule out the presence of nuclei heavier than iron, or to pin down the composition of the primaries in terms of light nuclei. Since the number of muons in high-multiplicity showers is connected to the nuclear species as well, by determining both quantities one would really be able to understand what is going on. [In the plot, the quantity Y is plotted as a function of the primary cosmic-ray energy. Y is the ratio between the logarithms of the numbers of detected muons and electrons. You can observe that Y is higher for iron-induced showers (the full black squares).]

Idea for a new experiment

The idea is thus already there, if you can add one plus one. CMS is underground. We need a detector at ground level sensitive to the “soft” component of the air shower - the one due to electrons and photons, which cannot punch through more than a meter of rock. So we may take a number of scintillation counters, layered alternately with lead sheets, all sitting on top of a thicker set of lead bricks, underneath which we may place some drift tubes or, even better, resistive plate chambers.

We can build a 20- to 50-square-meter detector this way with a relatively small amount of money, since the technology is really simple and we can even scavenge material here and there (for instance, we can use spare chambers of the CMS experiment!). Then we just build a simple logic of coincidences between the resistive plate chambers, requiring that several parts of our array fire together at the passage of many muons, and send the trigger signal 100 meters down, where CMS may receive an “auto-accept” to read out the event regardless of the presence of a collision in the detector.
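The coincidence logic itself is trivial to prototype; below is a toy majority-trigger sketch, where the pad layout, time window, and threshold are all made-up illustrative values:

```python
# Toy majority-coincidence trigger: fire if at least n_min distinct pads of
# the surface array record a hit within a common time window.
def majority_trigger(hits, n_min=4, window_ns=100.0):
    """hits: list of (pad_id, time_ns) pairs."""
    hits = sorted(hits, key=lambda h: h[1])
    for i in range(len(hits)):
        t0 = hits[i][1]
        # distinct pads firing within window_ns of this hit
        pads = {pad for pad, t in hits[i:] if t - t0 <= window_ns}
        if len(pads) >= n_min:
            return True
    return False

air_shower = [(1, 10.0), (2, 12.5), (5, 30.0), (7, 55.0), (3, 80.0)]
lone_muon = [(1, 10.0), (1, 500.0)]
print(majority_trigger(air_shower), majority_trigger(lone_muon))  # True False
```

A real implementation would of course live in trigger electronics rather than software, but the decision being made is exactly this one.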

The latter is the most complicated part of the whole idea: modifying existing things is always harder than creating new ones. But it should not be too hard to read out CMS parasitically, and collect at very low frequency those high-multiplicity showers. Then the readout of the ground-based electromagnetic calorimeter would provide us with an estimate of the (local) electron-to-muon ratio, which is what we need to determine the weight of the primary nucleus.

If the above sounds confusing, it is entirely my fault: I have dumped here some loose ideas, with the aim of coming back to them when I need them. After all, this is a log. A web log, but still a log of my ideas… But I wish to investigate the feasibility of this project further. Indeed, CMS will for sure pursue cosmic-ray measurements with the 300M events it has already collected. And CMS does have spare muon chambers. And CMS does have plans of storing them at Point 5… Why not just power them up and build a poor man’s trigger? A calorimeter might come later…