
Guest post: Paolo Schiavone – “Who said Deep-Sky astronomy is impossible from urban sites?” May 30, 2008

Posted by dorigo in astronomy, internet, science.

Paolo Schiavone was born in Luino, near Varese, on August 18th, 1963, to a Lombard mother and a father from Lucania (it is often by mixing different things that interesting things arise). He has always been an amateur astronomer, having started at age 14 with a 4.5″ Newtonian scope. In 1982 he got a diploma in telecommunications, and he now works in the public administration. He lives alone, on a modest salary, in a condominium.

Paolo’s hobby is astro-imaging. He has always considered it important to share the knowledge one gains by experience, so, even without being a real expert, he has written a few short guides, freely browsable on two web sites, which are useful for those starting to photograph the sky (“Viaggio nel mondo dell’astrofotografia”, “Guida alla scelta del telescopio”, “Guida di IRIS in italiano”). Paolo participates actively in astro-imaging forums. He does not like dogmas in science and prefers to cultivate doubt and curiosity as tools for interpreting the reality around us.

I fell in love with an image of M13 he posted in an astronomy forum last week (see below – click here to get to the full-res image), and I asked him to describe how he takes these wonderful pictures. So let us hear it from him!

Astronomy has fascinated me ever since my childhood, and in particular I have always had a soft spot for galaxies. Galaxies are elusive targets – indeed, until the early twentieth century their real nature had not been understood, and these celestial objects were erroneously classified as nebulae. The mistake was mainly due to the limitations of the instruments used to observe them, which could not resolve the stellar composition of what appeared as just a faint nebulosity. Despite this limitation, every time I could detect one of those faint nebulosities through an eyepiece I would feel a particular euphoria. Unfortunately I lived in a city, and to detect the nebulosity of galaxies I had to carry my telescope on vacation under much darker skies, because from my home the brightness of the night sky would devour these elusive subjects, if one excludes M31 and very few others.

I still remember with enthusiasm the sight of Messier 51 from the island of Giglio, off the coast of Tuscany, in the eyepiece of my 8” Schmidt-Cassegrain Meade telescope at 100x magnification. It showed the nuclei of both galaxies and a hint of the bridge of matter which connects them. Such enthusiasm, however, was dampened every time I looked at published pictures of the galaxy I had just observed, since these showed what, despite my efforts, continued to remain invisible to me, while my own view stayed devoid of significant detail.

(Right: a detail of the previous picture showing the center of the globular cluster Messier 13)

About four years ago I finally decided to add astrophotography to my visual approach to the sky’s wonders. After reading several books and articles on the technology of digital sensors (just to try to understand the roots of the problems I would have to deal with), I bought a digital reflex camera (a Canon EOS 300D) and an adaptor to connect it to the telescope.

After familiarizing myself with the camera and with some software for processing astronomical images (I use IRIS, which is very complete besides being free), I started to take my first pictures of the planets and the Moon. Living in a city, I thought that deep-sky imaging was confined to those very few weekends when I could combine a clear sky with the time needed to reach, after long and winding roads, mountainous places offering decidedly darker skies.

(Left: NGC891 and satellite galaxies in Andromeda, taken by Paolo. Click to get the full-res image.)

After the first deep-sky images obtained during a vacation in the Cilento park, when galaxies finally started to show their true appearance, curiosity made me wonder whether galaxies really were an impossible challenge from places seriously compromised by light pollution. Every time I saw a starry sky outside, with my telescope acting as a piece of furniture because of the hateful vice of our civilization of destroying the marvels that surround us by illuminating what is only visible in darkness, I felt a pang in my heart. This pushed me to make a few “experiments” to verify what could be obtained from such tough skies.

I had better clarify from the outset that for those who love deep-sky observations a dark sky is mandatory, and those who are lucky enough to live under dark skies should take care of them. Now, before getting to the problems one has to deal with in deep-sky imaging, it is fundamental to dispel another legend: one does not need custom, costly equipment!

Two words on the equipment needed

To produce a deep-sky image I personally use the following equipment:

  • A modified Canon EOS 300D (its original filter has been replaced), connected to the telescope at prime focus with a T2 adapter;
  • A Schmidt-Cassegrain telescope, diameter 8”, F/10, with an 8×50 finder;
  • A motorized Vixen GP mount, without automatic pointing or auto-guide;
  • A 0.63x focal reducer, which brings the focal length of the telescope from 2000mm to 1260mm;
  • An LPR filter to reduce light pollution.

The above is not really costly or fancy equipment, and it goes without saying that mine is only one among the many possible configurations one can use. The choice of instruments depends, besides one’s financial means, on what one wants to obtain: wide-field pictures of extended objects or details of galaxies and nebulae, for instance. It is thus worth recalling some general rules useful for understanding what to expect from one’s instrument.

(Right: a detail of M57 and a nearby galaxy. Click to get the full-field image).

First of all, the telescope must have a focal point that extends sufficiently far past the tube to allow mounting the digital camera at prime focus. Of course, the larger the telescope diameter, the greater the amount of light we collect. The maximum resolution that can be obtained also depends on the diameter; to compute it one can use the following formula: R = 116/D, where D is in millimeters and R is in arc-seconds. For my instrument I thus get R = 116/203=0.57”. The focal length one uses determines the image scale and the field of view on the sensor. By applying a 0.63x focal reducer I obtain a focal length of 1260mm on my instrument (2000 \times 0.63=1260).
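
For readers who like to play with the numbers, a minimal Python sketch of these two formulas might look like this (the function names are only illustrative, not part of any astrophotography package; the aperture and reducer values are the ones quoted in the text):

```python
# Back-of-the-envelope optics numbers for the setup described above.

def resolving_power_arcsec(aperture_mm: float) -> float:
    """Approximate resolving power R = 116 / D, with D in mm and R in arcsec."""
    return 116.0 / aperture_mm

def reduced_focal_length_mm(native_focal_mm: float, reducer: float) -> float:
    """Effective focal length after applying a focal reducer."""
    return native_focal_mm * reducer

if __name__ == "__main__":
    D = 203.0          # 8" Schmidt-Cassegrain aperture in mm
    F_native = 2000.0  # native focal length in mm (F/10)
    print(f"Resolving power: {resolving_power_arcsec(D):.2f} arcsec")                  # ~0.57"
    print(f"Reduced focal length: {reduced_focal_length_mm(F_native, 0.63):.0f} mm")   # 1260 mm
```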

To compute the sampling and the field of view one uses the following simple formulas: starting from F_{eq}=d_p \times 206265/C, where F_{eq} is the focal length, d_p is the physical size of the side of a single pixel, 206265 is the number of arcseconds in one radian, and C is the sampling, i.e. the number of arcseconds of sky sampled by a single pixel, we get C=d_p \times 206265/F_{eq}. The side of a pixel of the Canon 300D measures 0.0074mm, so for my equipment the equivalent focal length of 1260mm yields C=0.0074 \times 206265/1260=1.2”. Finally, to obtain the field one multiplies the value of C by the number of pixels of the sensor. With a raw image composed of 3088 by 2056 pixels one thus gets 3088 \times 1.2=3705.6”, or 1°1’45”, by 2056 \times 1.2=2467.2”, or 41’7”.
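
The sampling and field-of-view arithmetic can be sketched the same way; again this is only an illustration of the formulas above, with the Canon 300D values plugged in:

```python
# Pixel sampling and field of view from the formulas above (Canon 300D numbers).

ARCSEC_PER_RADIAN = 206265.0

def sampling_arcsec_per_pixel(pixel_size_mm: float, focal_length_mm: float) -> float:
    """C = d_p * 206265 / F_eq : sky angle covered by one pixel, in arcsec."""
    return pixel_size_mm * ARCSEC_PER_RADIAN / focal_length_mm

def field_of_view_arcsec(n_pixels: int, sampling_arcsec: float) -> float:
    """Field of view along one sensor axis, in arcsec."""
    return n_pixels * sampling_arcsec

if __name__ == "__main__":
    C = sampling_arcsec_per_pixel(0.0074, 1260.0)        # ~1.2 arcsec/pixel
    fov_x = field_of_view_arcsec(3088, C) / 3600.0        # ~1.03 degrees
    fov_y = field_of_view_arcsec(2056, C) / 3600.0        # ~0.69 degrees
    print(f"Sampling: {C:.2f} arcsec/pixel, field: {fov_x:.2f} x {fov_y:.2f} degrees")
```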

As for the resolution that can be achieved with the above equipment, the Nyquist criterion tells us that a point source must fall on at least two pixels, so the effective resolution will be about 2.4”. However, it is important to note that with pictures taken with exposures of a few minutes the average seeing is usually about 3” to 4”, so a value of 2.4” is good enough, since a better value would be wasted on atmospheric turbulence.

Besides the focal reducer, two critical accessories are an LPR filter and a good finder scope. LPR filters cut off some light frequencies typical of artificial lighting, at the cost of blocking part of the light from the celestial sources as well, galaxies included. Despite this drawback, from sites as strongly polluted as mine there is no choice: without the filter, in a 180-second exposure the sky’s brightness would drown any celestial object. As for the finder, if you are fascinated by the deep sky, and especially if your telescope lacks an automatic pointing system, I strongly advise at least an 8×50 finder, which allows one to locate deep-sky objects by star-hopping.

As far as digital sensors go, the size of the sensor and of the individual pixels is very important, as well as the quantum efficiency and the ability to limit noise. Even if any digital camera can be mounted to a telescope one way or another, if one wants to reach some depth in the images it is mandatory that the camera allow exposures with no time limit and interchangeable lenses. There are many digital SLRs on the market, but not all of them can take pictures in RAW format (i.e., unmodified by the internal firmware); they should also allow the replacement of the original internal filter placed in front of the sensor, which usually cuts part of the red spectrum. Even if this latter feature matters mostly for images of emission nebulae (which emit mainly in H-alpha, strongly attenuated by the original filters), the filter significantly worsens the signal to noise ratio of the red channel.

Besides the optics, the other fundamental component required to take deep-sky images is the mount, which has to be motorized and sufficiently stable and precise for the weight it must carry. The longer the focal length used, the more precise the drive will have to be. Automatic pointing systems are convenient, but certainly not necessary, while the possibility of autoguiding is a great invention, which for now I cannot afford (I would also have to change the mount).

The galaxy M33 in Triangulum. Click to get a full-resolution image.

Evaluating problems

I live in a condo next to the center of a town of 50,000 inhabitants, and I only have a small window facing west. The zenith is right above my head, but in between there is my roof, and not even the pole star is visible. Due west, 5km away, there is Malpensa airport, so the challenge is a tough one! The impossibility of using the polar scope for alignment was easily overcome with an alignment procedure known as the Bigourdan (drift) method, which is in any case mandatory in astrophotography and much more precise than simple polar pointing.

Precise alignment is fundamental if one wants to take many images with exposures above a minute. After personally overhauling, greasing, and tuning the drive of the mount, I manage to take 180-second exposures at 1260mm of focal length without guiding, although two thirds of those images have to be discarded because of trailing.

To make alignment easier I traced some marks on the window, so whenever I mount the telescope the alignment with the north celestial pole is already fair, and it is thus easier to complete the operation with the Bigourdan method. The real problem, however, is the sky brightness, which considerably reduces the reachable magnitude, both in visual and in photographic observations. It is a complex problem, only partially solvable. To begin with, the limiting magnitude does not depend only on the brightness of the sky, but on its transparency as well. Even at the same site the latter may change from one night to the next – from a slightly veiled sky to a clean one swept by winds, which moreover also carry away part of the dust that reflects artificial light and increases the sky’s brightness. Moreover, the brightness of the sky in northern Italy is by no means uniform: it increases considerably as one gets closer to the horizon.

Since a tracked object moves westward, over the course of several hours of imaging the frames will show a non-uniform sky, which tends to brighten as the object nears the horizon. This, however, is a problem that can be solved during post-processing. The true limit which cannot be overcome is the absolute brightness of the sky background. In simple terms, for a point source to emerge from the background in a single image it must be brighter than the background itself; and this is still an optimistic statement, since in reality the difference between these two brightness levels must also be detectable above the non-uniform response of different pixels. The heart of the matter, in the end, is this: how to get as close as possible to that limit, which changes from night to night.

It is also possible to express how large this brightness variation has to be in order to be considered a real signal. To understand this it is necessary to discuss what is defined as the signal to noise ratio. Every image contains both real information (from the object one is portraying) and noise (which has many sources: sky background, thermal, electronic, etc.). Between information and noise there is a band of uncertainty hardly attributable to one or the other, but which can be estimated. For instance, let us assume that the sky background has a mean value of 100 ADU: the typical spread of this value is given by its square root, \sqrt{100}=10, so to be considered as containing a signal, a luminous point has to measure at least 110 ADU.
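
A minimal sketch of this detection threshold, treating the ADU counts directly as Poisson-distributed (a simplification that ignores the camera gain), might read:

```python
import math

def detection_threshold_adu(sky_background_adu: float, n_sigma: float = 1.0) -> float:
    """Minimum pixel value for a source to stand n_sigma above the sky fluctuation,
    assuming the counts fluctuate like a Poisson variable (spread = sqrt(mean))."""
    return sky_background_adu + n_sigma * math.sqrt(sky_background_adu)

if __name__ == "__main__":
    print(detection_threshold_adu(100.0))   # 110.0 ADU, the figure quoted in the text
```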

How to solve problems

If we took a single long exposure (say 20 to 30 minutes) from an urban sky, we would only obtain a flat field: a grey image where the sky brightness is at a very high level or has even saturated the sensor. To solve this problem, thanks to the linearity of response of digital sensors, it is possible to take a series of shorter exposures and then add them up while processing. The duration of each exposure, besides the tracking precision of the mount, depends on how long it takes the sky and the thermal noise to irrecoverably wash out part of the objects one wants to portray.

Using exposure times that are too short (10 or 15 seconds), one risks instead not recording the faintest parts of the deep-sky objects, which remain below the noise threshold. Of course, even adding up a thousand images, what is below the noise threshold will not appear in the final image. With the LPR filter, my technique is to take 180-second exposures at a sensitivity of ISO 800, but I could probably extend the time to 300 seconds if the mount allowed it. The choice is thus not absolute, but the most reasonable compromise given the instrumentation. There is instead no limit on the number of images to take, which can also be collected on different nights (being careful to orient the sensor at the same angle and to center the object in the same position, or as close to it as possible). Actually, the more images one has, the better the final result.

For those who use digital SLR cameras, especially on nights with temperatures above 15°-20° C, it is inadvisable to push the sensitivity too far (beyond ISO 400 or 800), because it then becomes extremely hard to reduce the thermal noise. I usually take 12 dark frames and 12 flat fields to calibrate the images during processing. In my opinion, especially if one does not own a cooled, very low-noise CCD sensor, it is unthinkable to work at the limit of the sky background without having at least subtracted the dark signal from the images.
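
As a rough illustration of what this calibration amounts to (IRIS performs these steps internally; the NumPy version below is only a simplified sketch with hypothetical array names, and it skips the dark/offset correction of the flat frames themselves):

```python
import numpy as np

def make_master(frames: list[np.ndarray]) -> np.ndarray:
    """Average a stack of calibration frames (e.g. 12 darks or 12 flats) into a master frame."""
    return np.mean(np.stack(frames, axis=0), axis=0)

def calibrate(light: np.ndarray, master_dark: np.ndarray, master_flat: np.ndarray) -> np.ndarray:
    """Subtract the dark signal from a light frame, then divide by the normalized flat field."""
    dark_subtracted = light - master_dark
    flat_norm = master_flat / np.mean(master_flat)   # normalize the flat to unit mean
    return dark_subtracted / flat_norm
```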

Processing

In this phase one can solve many of the problems encountered so far. First of all, one needs to calibrate the images (subtracting the dark and the offset, and applying a flat field), after which one has to align them before adding them up. To perform these operations I use IRIS, but there are many astrophotography software packages that perform the same operations.

By adding many images, the signal to noise ratio increases along with the dynamic range, but it is important to understand what happens when executing that simple mathematical operation. Even if the images have been calibrated, in reality part of the noise present in the single images survives the operation. In fact, part of the noise is systematic and thus can be subtracted, while another part is random and thus does not get canceled by the subtraction of the “master dark” (the average of the 12 dark frames collected). By adding a sequence of aligned images, the signal from the deep-sky object adds up coherently, while the random noise contributions add incoherently, reinforcing each other only when they happen to overlap from one image to the next. It is thus almost intuitive that the more images we use, the better the signal to noise ratio becomes.

As the signal improves and its average fluctuations are reduced, even objects confused with the background light start to emerge, rising above the standard deviation of the background. Let us take an example. In a single image the background measures on average 100 ADU, while the arms of a distant galaxy measure 102 ADU. The Poisson fluctuation of the background is \sqrt{100}=10, so the galaxy does not reach a brightness sufficient to be perceived atop the background. By stacking 100 images, however, the background reaches a value of 10,000 ADU with a standard deviation now equal to 100 ADU, while our galaxy arms are now at 102×100=10,200 ADU, and are distinguishable from the background! It is quite evident that with some more effort even details that in the original single image showed only 101 ADU (only one count of signal above the average background value of 100 ADU) will be able to emerge.
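
This back-of-the-envelope argument is easy to verify with a small simulation. The sketch below is only illustrative: it draws Poisson counts for a 100 ADU sky and a 102 ADU “galaxy arm”, stacks 100 frames, and measures how many background standard deviations the excess amounts to in each case:

```python
import numpy as np

# Toy simulation of the 100-image stacking example above.
rng = np.random.default_rng(0)
n_images, sky, arm = 100, 100.0, 102.0

single_sky = rng.poisson(sky, size=10000)                       # one frame, sky pixels
single_arm = rng.poisson(arm, size=10000)                       # one frame, galaxy-arm pixels
stacked_sky = rng.poisson(sky, size=(n_images, 10000)).sum(axis=0)   # 100 frames added up
stacked_arm = rng.poisson(arm, size=(n_images, 10000)).sum(axis=0)

def significance(signal, background):
    """How many background standard deviations the mean signal excess amounts to."""
    return (signal.mean() - background.mean()) / background.std()

print(f"single frame: {significance(single_arm, single_sky):.1f} sigma")    # ~0.2 sigma: invisible
print(f"100 stacked : {significance(stacked_arm, stacked_sky):.1f} sigma")  # ~2 sigma: emerges
```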

(Right: Messier 81 and Messier 82, in Ursa Major. Click to get the full-resolution image).

Despite the limitations imposed by the sky brightness, we can still try to trick it. When one works with a sequence of images in which the sky changes its brightness during the exposures (as when the Moon is rising, or when the object gets closer to the horizon, or when the transparency varies), the sky does not have the same value in all images. The trick consists in using the most favourable background as a limit, that is, the background value of the image with the darkest sky and the highest signal. To do this, before adding the images it is advisable to flatten the background level of the sequence to be added, using as a reference a value slightly lower than the background measured in the best image (in IRIS, the command “offset normalization of sequence” in the Processing menu). I usually perform this operation for each color channel. With this procedure, even if some signal is lost, the background of all the other images will end up at a value smaller than the signal of the best image, thus preventing it from washing out an object at the limit of detection.
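
A rough equivalent of this offset normalization can be sketched in a few lines: estimate each frame’s sky level (here with the median) and shift every frame to a common reference chosen just below the darkest frame’s background. This is a hypothetical stand-in for the idea, not the IRIS implementation:

```python
import numpy as np

def normalize_background(frames: list[np.ndarray], margin_adu: float = 2.0) -> list[np.ndarray]:
    """Shift every frame so its sky level sits at a common reference,
    taken slightly below the background of the darkest (best) frame."""
    sky_levels = [np.median(f) for f in frames]       # crude sky estimate for each frame
    reference = min(sky_levels) - margin_adu          # just below the best frame's sky level
    return [f - (level - reference) for f, level in zip(frames, sky_levels)]
```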

Another small benefit of summing many images is that, if the background gradient varies from one image to the next, it is partially averaged out in the sum. Even so, the sky in the final image is not very uniform. With IRIS, to solve this problem I use the “remove a gradient” function, applying a binary mask to protect the deep-sky objects.
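
The idea behind gradient removal can also be sketched simply: fit a smooth surface (here just a plane) to the pixels outside a binary object mask and subtract it. IRIS’s actual algorithm is more sophisticated; this is only a toy version:

```python
import numpy as np

def remove_plane_gradient(image: np.ndarray, object_mask: np.ndarray) -> np.ndarray:
    """Fit a plane a*x + b*y + c to the pixels where object_mask is False (background only)
    and subtract it, adding back the mean sky level so the background keeps a sensible value."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    bg = ~object_mask                                  # True where there is no deep-sky object
    A = np.column_stack([xx[bg], yy[bg], np.ones(bg.sum())])
    coeffs, *_ = np.linalg.lstsq(A, image[bg], rcond=None)
    plane = coeffs[0] * xx + coeffs[1] * yy + coeffs[2]
    return image - plane + plane.mean()
```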

Finally, before adding many images it is important to take some precautions to avoid saturating the brightest zones of the image. Before adding, one can normalize the gain of the sequence by choosing a rather low value, which allows one to reduce the saturated zones (acceptable, in my opinion, only at the very centers of bright stars). The rest of the processing depends mostly on one’s experience and taste, and in any case it makes the images unusable for astronomical measurements, such as studies of variable stars and other photometric determinations.

After the addition of the images I use an RL2 filter (a variation of the Richardson-Lucy filter which protects nebulosity and non-stellar objects) to correct small defects due to seeing or tracking. This very powerful filter acts in the frequency domain and has to be used with caution: it often amplifies the electronic noise. After the RL filter, a low-pass filter (blur) may be used to soften the image. I usually process the stars and the galaxies separately, applying different amounts of stretch to each. To recompose the images and calibrate the colors I mostly use Photoshop and the levels technique. Some final touches can be made at the end, and finally, even from the window of my apartment I can satisfy my passion and enjoy images of many galaxies and other deep-sky objects.

When imaging galaxies, one may be lucky enough to occasionally capture some particularly bright new supernova. In that case, it would be a pity to fail to notice it. Therefore, after processing an image it is always useful to compare it with a map or with other images of the same region of sky, in order to identify the objects portrayed (especially the most elusive ones). To this end, I find very useful the interactive map, with images of the requested region, browsable for free at wikisky (http://server1.wikisky.org).

I hope this account of my experiences may prove useful in inspiring other amateurs not to turn their telescope into a piece of furniture, used only on those few occasions when they manage to take it under dark skies, but rather to use it from their home window!

A collection of other images taken by Paolo is available here.

Comments

1. goffredo - May 30, 2008

great post Paolo. Admire your images! Thanks Tommaso for hosting him.

2. John DiNardo - April 7, 2009

Hi,

I’m trying to estimate the position of Planet X, Nibiru below the ecliptic. Ancient historians wrote of fireworks seen on Jupiter prior to Nibiru’s passage by Earth. I suspect that the Sun is currently in an unusually quiescent state because Nibiru is drawing electrical discharges to one of the outer planets. I wonder if sightings of storms or pole shifts on one of the outer planets during 2008 would be a clue as to X’s position. Anyway, it’s a long story, but my next email contains an essay I wrote which begins to describe the complexity of this subject and the most grave implications Nibiru has for planet Earth. I know virtually nothing about astronomy, and so I am seeking help from someone who can view a map of the Solar System relative to the Zodiac. The late Dr. Robert Harrington, chief astronomer of the U.S. Naval Observatory had determined that Planet X is intruding into our Solar System from the constellation Scorpius, and that it is coming in from below the ecliptic at an angle of about 40 degrees to the ecliptic. I’m wondering if, during the past year, if it has been beneath either Jupiter, Saturn, or Uranus. Thank you for any help you might offer. I’m forwarding to you my article which further describes my quest.
John DiNardo
http://adsbit.harvard.edu/cgi-bin/nph-iarticle_query?bibcode=1988AJ…..96.1476H

3. John DiNardo - April 7, 2009

Planet X: Sun Inhaling for a Major Blow
——————————–

Public awareness of danger fosters thoughtful preparation, and
thus prevents some loss of life. I have not yet contemplated
protective measures against the solar scorching which appears
to be on the horizon, because the immediate priority would be
to elevate public awareness of this coming solar blasting – to
show evidence that the Sun will become more like a blowtorch
than a heat lamp. Scientists today appear to be strangely quiet
and contented with the recent drop off in solar hyperactivity.
Yet, (tell me if I have missed any explanatory reports from the
science community) why don’t I hear them explaining WHY our
Sun — which has been described, for years, by one conspicuous
scientist as “behaving like a popcorn popper” — has suddenly
and markedly calmed down? No one is thinking of the “why,”
but rather, they are enjoying complacent relief from the past
decade’s threatening solar hyperactivity. Today’s scientists are
like the foolish children who ran to scoop up fish from the bared
ocean floor as the tide dramatically drew back in preparation
for a mountainous tsunami sucker punch? Because we live in
the age of the approach of Planet X, solar hyperactivity is
paradoxically the norm, while this current (and, I think,)
transient state of solar quiescence is an aberration. Therefore,
it is imperative that we strive to solve this solar mystery
which would seem to portend the gravest imaginable threat to
humankind: a scorching global annihilation.

Let’s look at this issue logically. We have a vast cluster of
celestial objects being drawn into our Solar System, and, over
the past decade or so, extraordinary earthly events have
consistently resulted from this celestial shotgun blast into the
belly of our Solar System: events geological, meteorological,
geomagnetic, seismic, volcanic, biological, and solar, all
extraordinary in magnitude, and all in the glare of a Sun so
hyperactive that new and greater classes of flares had to be
appended to the charts, just to describe and document these
unprecedented flares. Then, suddenly, the Sun dies down.
And everyone says, “hooray,” or “no need for concern anymore,”
or “if it ain’t broke, don’t fix it.” Yet, Planet X and its massive
field of gravitationally gripped comets, meteors, and asteroids
has not turned around and headed back out into deep space.
No, the X-gang is still being gravitationally drawn toward our
Sun, and is, in fact, approaching at an ever-increasing velocity,
as basic Newtonian physics principles would confirm.

There is a reason for everything that happens. I surmise that
Planet X is in a fleeting interlude in its incoming travel, wherein
it is gravitationally, electrically, and electromagnetically
interacting with Jupiter, Saturn, or Uranus, thus diverting its
previous fire away from the Sun, but only temporarily.
Remember, the late Dr. Robert S. Harrington, chief astronomer
of the United States Naval Observatory, determined that Planet X
is coming at us from beneath our table, so to speak, and off to
the side of our Solar System’s ecliptic plane (our revolving table),
at an angle of about 40 degrees to the ecliptic. This means that,
each and every day, that bullet is traveling closer to the underside
of our table, and therefore, its flirtation with the Sun ought to be
growing ever more torrid. Where are the astronomers and celestial
physicists out there who will address the issue of this mysterious
aberration of solar quiescence, and offer information as to a current
positional interaction between Planet X and our Solar System’s outer
planets, an interaction based upon the assumption that X is coming
in from the direction of the constellation Scorpius,
as Dr. Harrington had calculated.

http://adsbit.harvard.edu/cgi-bin/nph-iarticle_query?bibcode=1988AJ…..96.1476H

John DiNardo

4. dorigo - April 7, 2009

John,

sorry but your unsolicited explanation does not seem to hold much water, and your lamentation against scientists sets off the standard crackpot alert. If you want to understand the world, you have to learn to digest the language it speaks. It is tough, but there are no shortcuts. If you have experimental data to support your theories, do present them in a coherent way and you might be considered; otherwise it will remain a dead letter.

Cheers,
T.

5. Neptune's Mass Recalculated - Bad Astronomy and Universe Today Forum - January 1, 2010

[…] I came across it while reading one of two off-topic comments posted by John DiNardo to the "A Quantum Diaries Survivor" […]


