
Good stuff around August 15, 2008

Posted by dorigo in Blogroll, computers, cosmology, games, humor, internet, news, physics, science.
comments closed

Here are a few links you might be interested in following. They lead to posts in blogs I read and you should too:

  • Louise has news of the appearance of a new puzzling “ghost” green galaxy.
  • Marco explains the disappearance of a ghost propagator instead.
  • Jester unexplains the unhiggs in an undeniably understandable, yet unserious way.
  • Alex challenges you to pilot a submarine – and to find a Windows bug while you’re at it.
  • Bee explains the equivalence principle and why general relativity is sexy.
  • Roberto offers some astounding pictures of the most astounding thing ever built.

To xp or not to xp July 2, 2008

Posted by dorigo in computers, personal.
comments closed

I recently upgraded my Sony Vaio laptop to a newer model – the old one had taken quite a beating and the DVD reader was not working anymore, preventing the crucial task of installing new useless software.

Being a Sony fan, the choice fell on another 11″ ultra-light model, the Sony Vaio TZ31 MN. It is a nice little thing, with a really bright display and a Core Duo processor. At less than 3 pounds of weight, it allows me to carry it everywhere without straining my back. And the battery really lasts seven hours straight! Ok, enough advertisements.

The bad thing is that the machine came with Windows Vista Business pre-installed. It also had an XP downgrade disk included in the package, but I decided I’d give the newer operating system a try.

Bad idea.

Vista sucks. It asks too many questions, which are hard to suppress because the relevant settings are hard to locate. It slows down any operation you try to do by performing obscure actions which leave you staring at a blank screen every time you launch a program or try to close a window by clicking the x sign at the top right. It makes things hard for you in several ways. I admit I might just be inept and the cause of my troubles with Vista might be just my own ignorance: but if Micro$oft wanted to create an easy-to-use system, then I’d have to say it failed miserably.

Now, my problem is that as much as I hate this new system, I hate even more to uninstall all the software I have taken pains to put on the new computer. It took me well over a day of work to copy my folders, reorganize them in a better logical structure, add mwsnap, irfanview, winhttrack, the worldwide telescope, ghostview, cygwin, firefox, blitzin, ssh, open office… Plus, I had to fix IP addresses, register the computer with the wireless network of my department, add printers, save passwords, blah, blah.

Computers… I love them, but they make my life miserable at times! To xp or not to xp?

…And please don’t say you own a Mac. I hate Macs even more!

The Worldwide telescope May 13, 2008

Posted by dorigo in astronomy, computers, cosmology, internet, science.
comments closed

Jeff pointed out to me today the remarkable WorldWide Telescope, a site where you can download software created by Microsoft to browse the heavens as if you were commanding a powerful telescope. The constellations are not maps, but actual pictures, into which you can zoom as much as the images of the digital sky surveys (SDSS and others) allow.

My jaw dropped as I started using the software, which you can download and install on your computer, and which works pretty much like Google Earth – downloading the region you are visualizing from the internet. A nice feature is the appearance of a frame of thumbnail pictures around the zoomed area, highlighting the most interesting celestial objects present there. If you click once on a thumbnail the relevant object is highlighted on the map; clicking twice allows you to download a full-resolution image of the object directly from the online databases, including Hubble images.

What I find amazing, however, is the fact that browsing the night sky becomes a thrilling experience at your fingertips in front of the computer. The realism is perfect – these are pictures, in pure Google Earth style. However, while in real life we never need to find a feature on the Earth’s surface by hovering over it, that is exactly what we do when we observe the night sky: so the learning experience the program provides for a user who wants to get better at locating celestial objects is invaluable.

Above you can see a screenshot of part of the WWT window, which I centered on the Deer Lick group of galaxies – NGC7331, a Milky Way-like galaxy which is the largest member of the group, is on top. Below you can see Stephan’s Quintet – a group of five small galaxies of 13th-14th magnitude which is among my favorite targets in deep-sky observing sessions. By zooming in (below), you get to see stars fainter than 18th magnitude, at a resolution comparable to that of a meter-class instrument. Amazing!

I highly recommend downloading the software. Learning to locate objects will become a wonderful pastime!

SCI(bzaar)NET April 15, 2008

Posted by dorigo in Blogroll, computers, internet, italian blogs, news, personal, physics, science, travel.
comments closed

I have been invited by David Orban, a friend and fellow blogger, to speak on science popularization next May 17th at the Scuola Politecnica di Design in Milano, at a meeting called SCI(bzaar)NET. The event, organized by Gianandrea Giacoma, is described on its web site as follows (my translation):

“Subjects active on the net meet in a new way to ponder the challenges that the Internet poses to science popularization, the production of knowledge, and Open Culture in the academic world.”

The meeting will have three main threads:

  1. The hunger for scientific outreach: scientific research and rapid technological evolution are, as is evident to all, increasingly among the main factors of change in the world and in our daily lives. For these reasons a growing number of people, fascinated and awed, feel the need to understand and form their own opinion on the matter.
  2. Production of knowledge: while the Internet is historically connected to the academic world, one cannot claim that the majority of individual researchers and the Italian university institutions have adopted these new instruments for a more advanced online presence and a more effective handling of knowledge, students, researchers, and professors.
  3. Open Culture: the growing legal, economic, organizational, and cultural impact of a diffusion of Open Culture in universities under the pressure of the Internet.

I will contribute with a video, because I unfortunately cannot be there in person… On the following morning I am leaving for New Mexico for PPC 2008. I am planning to post the video here, with a transcription (the language of the meeting is Italian…). The subject of my talk will be “Fare divulgazione scientifica con un blog: opportunita’ e limiti” (doing scientific outreach with a blog: opportunities and limits).

UPDATE - the name of this post was modified at the request of G. Giacoma on 4/23, to reflect the final name of the event.

The Say of the Week February 14, 2008

Posted by dorigo in Blogroll, computers, physics, science.
comments closed

“Rather than assuming quantum behavior as a nuisance to shield from, we have to accept it as an intrinsic and powerful element of reality, and learn to exploit it in our designs. Once we do that, everything in engineering is going to change radically: from design, to construction, to building, to project management.”

David Orban

The Say of the Week February 7, 2008

Posted by dorigo in computers, games, humor, physics, science.
comments closed

“One reason we like supersymmetry is that we haven’t seen any of the particles.”

Michael Weinberger (interviewed here). What can I say… “So far so good” :)

Scientific Bang for the Buck January 5, 2008

Posted by dorigo in computers, mathematics, news, physics, politics, science.
comments closed

A concept worth a preprint, specifically Bruce Knuteson’s “A Quantitative Measure of Experimental Scientific Merit“, arXiv:0712.3572v1 [physics.data-an]. And certainly a preprint worth a look, if only for making up one’s mind on the scientific merit of working at MIT. It came out on Christmas day on the arXiv.

Jokes aside, I found the paper quite entertaining, and at times indeed surprising. While I find Bruce’s approach to the problem of assessing the scientific merit of a proposed experiment or analysis rather dangerous, and the explicit formulation of priors for the probability of discovering new physics in this or that experiment vaguely reactionary, I admit the paper brings home a point, which is however its premise rather than its thesis: review committees, as well as search committees, move in the dark. I am still in doubt on whether the exercise of endlessly debating over priors is a valid substitute for good old preconceptions and biases.

Bruce is quite up-front from the very beginning in stating the main purpose of his study:

“In the context of determining which research program to pursue, review committees often must decide the relative scientific merits of proposed experiments. Within large experiments, deciding which analyses to emphasize requires similar decisions”.

Which gets me to raise the first objection – or rather a comment: it is remarkably radical to talk about “which analyses to emphasize”. I find the concept, in fact, a bit too business-like a way of doing physics in a large experiment. At the Tevatron we certainly need to emphasize the top mass measurement, the B mixing, and the Higgs searches these days, but we do not need a computation of entropy decrease to know it; emphasizing some analyses (which means, please note, de-emphasizing others) because of some pre-arranged prior (the estimated probability that a gluino is there, for instance) smells of a covert way of depriving scientists working in the collaboration of their wonderful inventiveness, of their freedom to be guided by their nose, by their intuition.

It is no accident, it seems, that Knuteson is one of the authors of a complex automated machinery for new physics searches, a device producing hundreds of histograms of kinematical variables describing any combination of physics objects (high-Pt electrons and muons, jets, missing Et, photons, etcetera) in search of discrepancies with the standard model: is number-crunching winning its battle with scientific minds as much as it has won the chess challenge with our best grandmasters?

The paper starts with a definition of the surprise content of the result of an experiment. It does so by using information theory, arriving at the desired measure of the merit of an experimental result: the decrease in entropy of the state of knowledge relative to the particular physics question investigated. Here is the synopsis of the discussion up to Section II, in Knuteson’s words:

“The essential thesis of this article is summarized in two sentences.

  • The appropriate quantification of scientific merit of a proposed experiment or analysis (before it is performed and its outcome is known) is the reduction in information entropy the experiment or analysis is expected to provide [...].
  • The appropriate quantification of scientific merit of an experiment or analysis after the result is known is the information gained from the result [...].”

Fair enough: if one knew the chance of the Tevatron discovering new physics in Run II, or of the LHC finding something beyond the Higgs, one could certainly tell how well the money was spent in building those experiments. Using the reduction in information entropy is a principled way to quantify the appropriateness of the investments.
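To make the arithmetic concrete, here is a minimal sketch of my own – not Knuteson’s actual figure of merit, which involves more ingredients than a single yes/no question: the expected entropy reduction for a binary “new physics: yes or no?” question with prior probability p, divided by a cost. The priors are the ones quoted further down; the costs and the normalization are placeholders.

```python
import math

def prior_entropy_bits(p: float) -> float:
    """Shannon entropy (in bits) of a yes/no question with prior probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def merit_per_cost(p_discovery: float, cost: float) -> float:
    """Expected entropy reduction per unit cost, assuming an experiment that
    settles the yes/no question either way (placeholder normalization)."""
    return prior_entropy_bits(p_discovery) / cost

# Priors quoted in the post; the costs here are arbitrary placeholders.
for name, p, cost in [("Tevatron Run II", 0.20, 1.0), ("LHC", 0.90, 10.0)]:
    print(f"{name}: H = {prior_entropy_bits(p):.2f} bits, "
          f"merit/cost = {merit_per_cost(p, cost):.3f}")
```

With these placeholder numbers, the near-certain LHC discovery carries less Shannon surprise per unit cost than the Tevatron’s 20% long shot – which is the flavor of the comparison discussed next.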

But here, in fact, comes the nice part: the paper goes on to delve into the question by explicitly working out priors. In Section III, Knuteson uses priors derived in the Appendix to estimate the “scientific bang for the buck” (SBFB) of existing experiments, and even that of past experiments discovering the Psi, the W and Z bosons, and so on. One learns that the probability of the Tevatron Run II finding new physics is 20%, and that the probability that the LHC will see something new is 90%.

Using those numbers and the cost of the experiments, the SBFB of the LHC is computed at a mere 0.001, while the Tevatron stands as a giant at 5.0! Also worth noting is the specific search for single top production at the Tevatron, which – due to the low surprise factor – has an SBFB of 0.00001. Ironically, in the same table Knuteson includes the SBFB of flipping a coin: it is zero, not that different from the global search for new physics at the LHC! Although, to be fair, zero and 0.001 are indeed quite different when you take the logarithm.

As far as completed experiments go, one learns instead that the tau discovery stands at an SBFB of 5.0, soundly beating the runner-up J/psi discovery at 0.2, with the top quark discovery at an amateurish 0.0004. The table is long, and you can search for your favorite HEP result, and judge for yourself whether the Nobel Prize to Rubbia wasn’t indeed a bit hasty.

In fairness, the summary of Bruce’s paper is very direct in clarifying the rather limited scope of the proposed quantification method:

“Use of information content or information gain to evaluate the scientific merit of experiments requires the estimation of the probabilities of qualitatively different outcomes, and the reader may object that the problem of quantifying an experiment’s scientific merit has simply been reformulated in terms of the estimation of the probabilities of possible experimental outcomes. At worst, this reformulation significantly changes and focuses the discussion. The fact that there is not a well-developed literature to point to for the justification of these a priori probabilities emphasizes the fact that until now the importance of these probabilities has not been properly recognized [...]”

However, he argues that

“The reader may object to the very idea of constructing an explicit figure of merit [...] Such a reader misses the point that this is done (implicitly, if not explicitly) every time a decision of resource allocation is made. It is surely in the field’s best interest for such evaluations to be made in the sharpest, most open, most quantifiable, and scientifically best motivated framework possible”.

Which, to my biased ears, sounds like, “come on, we all know that the allocation of funding to science is made by fools, so let’s give ‘em some only partially random numbers to base their decisions upon and we will contain the damage”.

I do not mean to criticize the paper too much. It is quite a principled and tidy study of the problem. I think one cannot do much better in terms of finding a suitable figure of merit than what Knuteson did. I disagree with the very concept, though. But maybe I am too old-fashioned and I miss the point: scientific funds are not allocated wisely. On that, I think, we all agree.

Update: being away on vacation obviously does not help one stay in touch with what happens elsewhere on the web. I only now became aware of two other posts on this same topic: one at Superweak and one at Collider Blog. Backreaction also discusses it briefly.

Update 2: a detailed discussion of the statistical aspects of Knuteson’s paper is also available at Deep Thoughts and Silly Things.

Meccablog September 29, 2007

Posted by dorigo in computers, internet, italian blogs, news, physics, science.
comments closed

Just two lines to mention a new entry in my blogroll, Meccablog: the blog of the Mechatronic group in Trento. They discuss automated systems for driving assistance, mechanical studies for LISA, and other research topics from projects their group is involved in. Among the contributors is Mauro Da Lio, a fellow amateur astronomer and the head of the department of mechanical engineering in Trento.

Light pollution and visual astronomy September 19, 2007

Posted by dorigo in astronomy, computers, internet, personal, science, travel.
comments closed

I have been an amateur astronomer for most of my life, ever since the late Giancarlo De Carlo, a world-class architect and a good friend of mine, sent me a very nice map of the Heavens he had gotten with a 1970 copy of the National Geographic magazine (see below). 

I was seven years old then, and the starry sky at night was glorious to stare at with the unaided eye, even from the terrace of my house in the city of Venice. I quite well remember I could see the Milky Way back then… Now, a third of a century later, such a view would require a massive black-out: in fact, I am sure many youngsters have never seen our galaxy in all the glory it can display under a moonless, non-light-polluted sky.

Pollution of our atmosphere by artificial lights might look to you like a ridiculous issue compared with the problems of our waters, air, and soil: I could not agree more. On the other hand, a more efficient use of luminous energy would have several beneficial effects on the environment. And one of these would be a happier bunch of amateur astronomers.

If you look at a map of the intensity of light pollution over north-eastern Italy (see above), you notice that by now there are really few regions close to where I live which have so far been spared from intense pollution at night. The picture has been obtained with Google Earth, which allows you to map the nighttime appearance of the Earth from satellites, and with it you get some indication of the amount of light pollution from cities and roads.

Better, however, is the map on the left [P. Cinzano, F. Falchi (University of Padova), C. D. Elvidge (NOAA National Geophysical Data Center, Boulder), copyright 2001 ISTIL, Thiene, reproduced from www.lightpollution.it], which has been generated by taking into account the combined effect of known sources, secondary scattering, and terrain elevation – including screening from natural obstacles such as mountains – in a model which has been shown to predict rather accurately the darkness of a site at night. The model, however, has the drawback of a coarse spatial resolution (about 3 km on the ground). Small-scale screening effects are thus ignored, even though locally they can make a lot of difference!

Cinzano’s maps are a valuable instrument for visual astronomers, who need to locate the most rewarding sites for deep sky observations – the best compromise between the darkness of the site and the ease of reaching it with their equipment. But they cannot tell the whole story, because of the importance of the screening effects mentioned above. Direct measurements on the ground have to be made.

Enter the Sky Quality Meter (SQM, see picture on the right), a 21st-century instance of the good old exposure meter our fathers used to take pictures. A well-calibrated SQM should tell, with an error not larger than about 10%, the visual magnitude of the sky per square arcsecond: a number on a logarithmic scale which describes the actual darkness of the sky. A reading of 22.0 on an SQM is a virtually perfect sky; a reading of 21.0 is a dark sky with moderate to low light pollution; a reading of 20.0 is already wanting for observing deep sky objects, and lower readings mean one had better watch TV. The SQM is used by pointing it at the sky and pressing a button: duh! However, the device is sensitive over a rather wide cone (about 80 degrees), so one has to be careful to avoid including in it parts of the surrounding scenery, which can significantly alter the result. Also, the inclusion of significant parts of the Milky Way may change the readings by as much as 50%.
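Since the SQM scale is logarithmic (magnitudes per square arcsecond, just like stellar magnitudes), a small difference in reading corresponds to a large difference in sky background brightness. A minimal sketch of the conversion, using the standard magnitude-flux relation (the function name is mine):

```python
def sky_brightness_ratio(sqm_darker: float, sqm_brighter: float) -> float:
    """Return how many times brighter the sky background is at the second
    (numerically lower) SQM reading. Readings are in magnitudes per square
    arcsecond; 5 magnitudes correspond to a factor of 100 in brightness."""
    return 10.0 ** (0.4 * (sqm_darker - sqm_brighter))

print(sky_brightness_ratio(22.0, 21.0))  # ~2.5: one magnitude brighter
print(sky_brightness_ratio(22.0, 20.0))  # ~6.3: two magnitudes brighter
```

So the step from a pristine 22.0 site to a 20.0 site is not a 10% change: the background is more than six times brighter.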

So the SQM is a useful tool, but it is not enough. Expert amateurs know how to estimate the visual limiting magnitude of stars, by counting their number in well-defined patches of the sky. A visual limiting magnitude near 7.0 is expected if the sky has an SQM reading of 22, and a rather linear relationship exists between those two numbers. However, the visual magnitude is affected by layers of clouds, even very thin ones, which – if not lit from below – have no influence on the SQM readings. Transparency of the sky, that is, is a critical factor that has nothing to do with light pollution, although haze catches light impinging on it even from very distant sources, and may cause a dramatic brightening of the background.

One way around the coarse spatial resolution and lack of sensitivity to small-scale screening of Cinzano’s map is to use it in combination with Google Earth to produce an aerial view of a site, overlaid with the color-coding of the terrain. By placing the view about 3000 meters above ground, one thus “looks” at the site from a point of the atmosphere which may or may not receive light from distant sources. You thus get to look at the parts of the surrounding terrain which contribute to lighting up a point at the zenith of the observing site, and the color coding allows one to understand better how much light will affect the sky darkness. The picture below shows my favourite observing site in the Dolomites, Casera Razzo, from a point above it. You see that far to the south there are indeed strong sources of light – evidenced by the yellow terrain, while green is mostly dark and blue is quite dark – but most of them are screened by nearby mountains (courtesy Mauro Da Lio – source here).

As you can see, amateur astronomers these days have all sorts of toys to play with to plan their observations and discover promising dark sites. In the end, however, the best judge is an instrumental test: you take your favourite telescope, bring it to the site on a clear night, and try a few faint objects. But the variability of sky clarity and observing conditions (among them, the turbulence of the atmosphere, which “defocuses” pointlike sources, decreasing the signal-to-noise ratio and the detection threshold) means that you will need multiple tests at each site. A full-time job!

Intelligent cars: automated systems on display in Versailles September 15, 2007

Posted by dorigo in computers, internet, news, science, travel.
comments closed

I received from my friend Mauro da Lio, who is director of the Department of Mechanical Engineering in Trento, a press release on the new drive assist systems they have helped develop, which will be on display in Versailles (France) next week. I thought it useful to make it available here, so I proceeded to translate it. Please find it below.

PReVENT: the “intelligent” automobile. The University of Trento among the partners of the European project.

The best European brains in a joint effort on sensors and technologies of the future for driving safety. A support for the driver.

Trento, September 14th 2007 – More than 40 thousand deaths per year in road accidents in the 15 member states, and a social cost that reaches 2% of the GNP of the European Union. A toll that has brought the topic of road and transport safety – also the subject of a recent white paper published by the European Union – to the daily agenda in the priority programs of research and technological development of the Seventh Framework Programme.

The European project PReVENT, started in February 2004, addresses this issue. Its main objective is to give life to an in-vehicle system supporting the driver, with the aim of preventing road accidents or mitigating their effects. Among the 52 partners is the University of Trento, with the mechanics and automation laboratory of its department of mechanical and structural engineering, which has developed techniques of automatic drive planning for the European project. The 32 vehicles that will demonstrate live the various technologies of active and preventive safety developed in the project will be on display in Versailles from the 18th to the 22nd of September.

The new generation of transport means taking advantage of automatic driving systems has the following goals: inform the driver in real time of impending danger, prevent errors, suggest corrective actions, assist actively or even take over control before an accident. In particular, the new technologies will facilitate keeping a safe distance and a safe speed in curves and with respect to other vehicles; they will help with driving within a lane, assist during lane changes, help avoid hitting pedestrians or cyclists, and improve safety at intersections; and they will anticipate the action of lifesaving systems (airbags, safety belts, etc.) in the case of imminent collisions.

The intelligent systems developed in PReVENT are organized in three functional blocks: understanding, planning, action. Understanding of the surroundings is realized in a scene-reconstruction phase, in which information coming from a large number of sensors is “fused” into a model of the area. Planning consists in working out the driving plan best suited to the situation. Action consists above all in issuing suitable warnings to the driver when the actual behavior departs from the ideal one (mostly through tactile interfaces), or directly in an automatic intervention by the system.
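Purely as an illustration of the understanding-planning-action decomposition described above, here is a toy sketch in Python; every class, function, threshold, and message below is hypothetical and has nothing to do with the actual PReVENT software:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SceneModel:
    """Fused model of the surroundings (the 'understanding' block)."""
    obstacles: List[str]
    lead_distance_m: float
    lane_offset_m: float

def understand(readings: Dict) -> SceneModel:
    """Fuse raw sensor readings (radar, cameras, GPS, ...) into one scene model."""
    return SceneModel(
        obstacles=readings.get("radar_targets", []),
        lead_distance_m=readings.get("lead_distance_m", float("inf")),
        lane_offset_m=readings.get("lane_offset_m", 0.0),
    )

def plan(scene: SceneModel, speed_kmh: float) -> Dict[str, bool]:
    """Compare the current situation with an 'ideal' driving plan."""
    safe_distance_m = 2.0 * speed_kmh / 3.6  # crude two-second rule
    return {
        "too_close": scene.lead_distance_m < safe_distance_m,
        "lane_departure": abs(scene.lane_offset_m) > 0.5,
    }

def act(discrepancies: Dict[str, bool]) -> List[str]:
    """Turn discrepancies into tactile/visual warnings for the driver."""
    warnings = []
    if discrepancies["too_close"]:
        warnings.append("haptic: stiffen gas pedal")
    if discrepancies["lane_departure"]:
        warnings.append("haptic: vibrate seat; visual: mirror LED")
    return warnings

# Example: tailgating at 90 km/h while drifting out of the lane.
readings = {"radar_targets": ["car"], "lead_distance_m": 15.0, "lane_offset_m": 0.7}
print(act(plan(understand(readings), speed_kmh=90.0)))
```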

For the reconstruction of the scenery the new systems will take advantage of various types of sensors (inertial, acoustic, proximity, virtual – that is, connected with those of other vehicles – with maps and systems such as GPS), infrared beams, radars, or video cameras. These technologies will allow the surrounding area to be read out in real time, detecting the presence of obstacles, pedestrians, or other potential dangers, also by collecting information (on traffic jams, accidents, work zones, speed limits) via wireless from other vehicles or from the infrastructure itself (roads and street signs equipped with sensors). In the planning phase, once the scenery is determined, the risks are evaluated and a safer driving plan is worked out, which is transmitted to the system interfacing with the driver. The last link foresees an interaction with the driver: a still unavoidable part, which must be handled in the most immediate and natural way. To that aim tactile interfaces are used (an increase in the resistance of the steering wheel or the gas pedal, a vibration of the seat belt) and/or visual and acoustic ones (LEDs or luminous pointers on the dashboard or the mirrors) that will inform the driver of the danger.

The PReVENT project has been divided, in its first phase, in a number of subprojects each addressing a specific part of the safety plan:

  • long-distance safety: project WILLWARN (communication between vehicles, obstacles behind a curve, poor visibility, road work);
  • medium-distance safety: projects SASPENCE (keeping a safe distance and speed, longitudinal dynamics), SAFE LANE and LATERAL SAFE (avoiding unintended lane changes and/or assisting in intentional lane changes);
  • pre-accident safety and pedestrian collisions: projects APALACI, COMPOSE and UseRCams sensors (braking amplification, belt pre-tensioning, airbag pre-arming);
  • safety at intersections: project INTERSAFE with sensors aboard the vehicle (communication with traffic lights, prevention of hazardous maneuvers).

The second phase of the PReVENT project, which is still ongoing, aims instead at the integration of all these areas of intervention. Specifically, in the INSAFES project several measures designed to increase the longitudinal and lateral safety of vehicles, tuned in several previous studies, have been integrated. The developments researchers are working on will allow drivers, for instance, to stay on the road, overtake, and change lane in full safety.

For more details contact Mauro da Lio, Department of mechanical and structural engineering – Laboratory of mechanics and automation, mauro.dalio(at)ing.unitn.it

IP Exhibition, Versailles: see http://www.prevent-ip.org/en/prevent_subprojects/horizontal_activities/ip_exhibition/ip_prevent_exhibition.htm
PReVENT project: http://www.prevent-ip.org
