Saturday, July 12, 2014

M82 - Image Stacking

Now that I have a working routine to read the images, I can process them. I have written some home-grown image-stacking routines to show how simply this can be done by an amateur.

This post will be long. If you want to see the end result, just scroll to the bottom.
Please click the images to see the actual photo in full quality.

The problem
Astronomers are always starved for light. We want to capture as much light (signal) as possible to help reduce the noise. It turns out the first easy solution to capturing as much light as possible is just to leave the camera exposed to the image for hours on end. However, this does not work for two reasons:
  1. Tracking systems are imperfect, so the image will have shifted during the course of the measurement. This ends up producing the well-known star-trail effect. I will explain this shortly.
  2. Your detector saturates at a maximum value. You can think of each pixel in your camera as a mini detector, or better, as a bucket for light: it can only capture so many photons before the bucket starts to overflow. Past that point it makes no sense to keep filling the bucket, because you won't be able to count the extra photons you've caught.
Now let's visit each of these caveats before we move on to the solution.
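To make the "bucket" picture concrete, here is a minimal sketch in Python. The 14-bit full-well value is an assumption for illustration (many DSLR sensors record 14-bit raw values); the point is only that the sensor clips everything above its maximum count:

```python
import numpy as np

# Assumed for illustration: a 14-bit sensor counts at most 2**14 - 1 = 16383 ADU per pixel.
full_well = 16383

# Simulated "true" incoming signal for three pixels: faint, bright, too bright.
true_signal = np.array([500, 12000, 40000])

# The sensor clips anything above its full-well capacity -- the bucket overflows.
recorded = np.minimum(true_signal, full_well)

print(recorded)  # [  500 12000 16383] -- the brightest pixel is saturated
```

The faint and bright pixels are recorded faithfully, but the third pixel reads the maximum value no matter how much brighter the source actually was.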

Tracking systems aren't perfect
The first thing all amateur astronomers learn is that tracking systems aren't perfect. Pristine tracking takes a lot of work (even with the more modern GOTO mounts), and time and resources are scarce for hobbyists; this is a well-known problem.
Take a look at M82 here. The second image was taken about 1-2 min after the first. I tried my best to perfect the tracking, and I used a Celestron C8 AVX, which is an equatorial GOTO mount. Still no luck, there's always a shift... You can see them below.

M82 - one image

M82 - another image. The shift relative to the previous image is marked by the arrow.
So we see there is a shift and we'd like to correct for it. It turns out this shift is not that big, approx 1/5th of a pixel per second. Thus if I expose the camera for 10 seconds, the image will have moved at most 2 pixels or so. This can actually be corrected for quite easily. Before explaining the correction, let's discuss the second problem: that of image saturation.
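Since the drift over a single 10-second exposure is only a couple of pixels, correcting a frame is just a whole-pixel shift. A minimal sketch with NumPy (the `shift_image` helper and the toy single-star frame are my own illustration, not the original code):

```python
import numpy as np

def shift_image(img, dy, dx):
    """Shift an image by whole pixels using np.roll.

    np.roll wraps pixels around the edges; for shifts that are tiny
    compared to the frame size this barely matters, and the wrapped
    border can be cropped afterwards if needed.
    """
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

# Toy example: a single bright "star" on a dark 5x5 frame.
frame = np.zeros((5, 5))
frame[1, 1] = 100.0

# Undo a drift of 2 pixels in y and 1 pixel in x.
corrected = shift_image(frame, 2, 1)
print(np.argwhere(corrected == 100.0))  # star now sits at row 3, column 2
```

Finding the right `(dy, dx)` for real frames is exactly the alignment problem discussed below.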

Image Saturation
A good example of image saturation (that isn't a star) is one of our planets: Saturn. The planets are some of the brightest objects in the sky. No wonder the word comes from the Greek astēr planētēs, which means "wandering star". They're some of the brightest "stars" in the sky but oh... such rebels!

So what happens if I overexpose the image? See for yourself (units are arbitrary):
 
Overexposed Saturn Image

A cross section of the overexposed image.
Take a look at the image. Near the center of the object, all the values appear to have the same color. Even worse, the cross section shows that they really are the same! This is the result of overexposure. The little "buckets" of light in your camera can only gather so much energy before they spill over; each bucket here filled up and could not report any more signal. So how do you correct for this issue? Easy: you read out multiple images before these buckets "spill over," as will be described next.

How do you correct for this?
Now how would one go about correcting for this? The correction is actually a two-piece process:
  1. Take multiple images of shorter exposure before the light buckets spill over, and before the image shifts appreciably.
  2. Re-shift the images back in position and then average them.
The first piece is simple: I have taken 10-second exposures here, short enough to avoid both saturation and star trails. The second piece is explained in two steps:
1. The shifting process
2. The averaging process

First Step: The image shifting process
One method is simple: you can manually shift each image until it overlays on top of a reference image. But how do you know when you've shifted by just the right amount? Easy: just take the absolute value of the difference of the two images. Here is the absolute difference of the two M82 images shown earlier, zoomed in on just one star:




M82 difference of two images, zoom in on a star.

The shift is now pretty clear. It looks to be 5 pixels in x and 23 pixels in y. Now, let's shift these images by this amount and take the difference again:


Difference of two images of M82, where the second is shifted by 5 pixels in x and 23 pixels in y.
We can see the difference now looks almost zero (although not perfect). So a sensible solution would be to have a program take the absolute value of the difference of the images for a range of candidate shifts, and check where there is a minimum. The shift at that minimum is the amount the image has most likely drifted.
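That brute-force search is a few lines of NumPy. This is a sketch of the idea under my own naming (`find_shift` is not from the original post), using circular `np.roll` shifts for simplicity:

```python
import numpy as np

def find_shift(ref, img, max_shift=30):
    """Brute-force search for the integer (dy, dx) that best aligns img to ref.

    For every candidate shift, sum the absolute difference between the
    reference and the shifted image; the minimum marks the likely drift.
    """
    best, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            score = np.abs(ref - shifted).sum()
            if score < best:
                best, best_shift = score, (dy, dx)
    return best_shift

# Toy test: drift a random frame by a known amount and recover it.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(np.roll(ref, -5, axis=0), -3, axis=1)  # drifted frame
print(find_shift(ref, img, max_shift=6))  # (5, 3)
```

The cost grows as the square of `max_shift`, which is why restricting the search window (we know the drift is only a few pixels per exposure) matters.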

It turns out there is a slightly better and quicker trick using Fourier transforms, but I have run out of time; I will post it later with code and examples. For now, we have a way of determining the shift between images numerically.
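For the curious, one common version of this Fourier trick is phase correlation: a translation in real space becomes a phase ramp in Fourier space, so the inverse transform of the normalized cross-power spectrum peaks at the shift. A minimal sketch (my own illustration, not the author's promised code):

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer shift between two frames via phase correlation."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(img)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12  # keep only the phase
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
img = np.roll(np.roll(ref, -5, axis=0), -3, axis=1)  # drifted frame
print(phase_correlation_shift(ref, img))  # (5, 3)
```

Unlike the brute-force search, this finds the shift in a single pass of two FFTs, whatever the drift size.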


Second step: average them!
So now, you have a sequence of images, and you also have a method of shifting them back in place. In other words, you have a sequence of the same image. So what next?
Images will always contain noise, like it or not, which amounts to graininess in your images. There is an easy way to reduce this graininess. It is well known in statistics that to reduce the noise in any measurement, all you need to do is make the same measurement again and again and average the results together. Say you averaged 10 measurements together. If each measurement is truly independent, then your noise (graininess) will be reduced by a factor of 1/sqrt(10), where "sqrt" means "the square root of". If you have N images, your noise is reduced by 1/sqrt(N). Remember, this only holds for independent measurements.
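The 1/sqrt(N) rule is easy to check numerically. This sketch simulates N = 20 noisy "frames" of the same pixel row (the sigma = 10 Gaussian read noise is an assumed toy value) and compares the noise before and after averaging:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 20
true_value = 100.0

# N independent "frames" of the same pixels, each with Gaussian noise (sigma = 10).
frames = true_value + 10.0 * rng.standard_normal((N, 100_000))

single_noise = frames[0].std()             # noise of one frame, ~10
stacked_noise = frames.mean(axis=0).std()  # noise of the N-frame average

print(single_noise / stacked_noise)  # close to sqrt(20), about 4.5
```

The measured ratio sits near sqrt(20) ≈ 4.5, exactly as the statistics predict for independent measurements.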

Here is an example. I take a cross section of M82 and plot it below. I then take the average of 20 independent measurements of M82 and plot its cross section in red. The graininess (noise) appears to have been reduced by a factor of 1/sqrt(20), i.e. by roughly 4.5 times. So it works!

 



The Final Result
So there you have it. Now let's see what happens when we apply this to M82 as below:

M82 - One image 800ISO 10 s exposure with Nikon D90, converted to grayscale, byte-scaled between 60 and 100 ADU. Click to see the actual photo!

 And shift+stack it:

M82 - Stacked images at 800ISO 10 s exposure with Nikon D90, converted to grayscale, byte-scaled between 60 and 100 ADU. Click to see the actual photo!
Voila! A much nicer image. Note that this has been done with only 20 images, in grayscale, and from an extremely light-polluted area (the McGill Observatory). It would be interesting to compare the same measurements (on the same day, with the same moon phase) both in the heart of light-polluted Montreal and under its darker rural skies.
