Thursday, September 25, 2025

Why Sunrises And Sunsets Change At Uneven Rates

TL;DR: Sunrises and sunsets don't change at the same rate, a phenomenon affected by Earth's axial tilt and its elliptical orbit. This blog presents an intuitive picture to understand why.

The Problem

Just a few days ago we had the autumnal equinox. The autumnal equinox is when the rate of change in daylight is at its maximum. Here in NYC, we are losing about 3 min a day. However, if you look at the schedule, you'll notice something peculiar:


Sunrise and sunset times for NYC. Note that sunrises move later more slowly than sunsets move earlier.



At this time, we're losing about 1 min a day of morning (later sunrise) and 2 min of evening (earlier sunset). Why are our evenings getting shorter much faster?

Existing explanations mention axial tilt and elliptical orbit, but don't provide an intuitive understanding. This post will.

The Plan

In order to gain an intuition of sunrise and sunset times, we'll need to go through a few topics in order:

  • How the sun moves across our sky per day
  • How the sun moves across the sky per year
  • How this translates to sunrise and sunset changes

How The Sun Moves Across Our Sky Per Day

We all know that the Earth revolves around the sun. But not all of us have thought about how this translates into the motion of the stars and the sun across our sky. As a day goes by, an observer on Earth will see the sky rotate around a point. This is mostly due to Earth's rotation about its axis, with a small contribution from its motion around the sun. It will look like this:

The rotation of stars across the night sky, as seen from the Equator. Over an hour, the star Vega has moved along a circle (green arrow), whose center is Earth's axis of rotation.

You can see here that, over about an hour, the star Vega has moved along this circle. The center of rotation is the north celestial pole, the point toward which Earth's axis of rotation points. Note that this is also what an observer at the Equator would see.

For an observer anywhere other than the Equator, this center of rotation simply shifts up from the horizon by the latitude. Here is New York City, at a latitude of about 40°, for example:

The rotation of stars across the night sky for NYC. The motion is the same as one would observe at the Equator, with the center of rotation shifted. This is because Earth's axis of rotation for this observer has moved.

All objects very far from Earth will follow this path.

What About The Sun?

The sun's position relative to the background stars shifts over the year, while the stars themselves appear to be in fixed positions relative to each other. This is because the sun is close, relative to the stars. However, for short times (about a day), we can assume that it follows the same path as the stars:

The sun's path over the course of a day on Sept 23rd in NYC, showing how it follows a circular arc centered on the Earth's axis of rotation. The Sky has been removed for effect (obviously, it's daytime, so you wouldn't normally see these stars!).

Wherever the sun is in our sky, you can safely assume that it will follow a circle around the Earth's axis of rotation.

What Is A Day?

If you actually studied the motion of the stars, you'd notice a peculiar thing. A full day later, they're not quite in the same place. This is because the sky actually completes one rotation every 23h 56min, not every 24 hours. Why? Well, that's because Earth undergoes a full rotation about its axis every 23h 56min! However, our concept of a day is aligned to the sun. Because we orbit it, the apparent motion of the sun is a combination of Earth's rotation about its axis plus its motion along its orbit around the sun. The result is that the sun appears to circle the Earth every 24 hours. This will be important, because sunrise and sunset times are defined relative to our notion of a day, not to the actual rotation period of the Earth.
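To put a rough number on this, here's a quick back-of-the-envelope check (a minimal sketch assuming a circular orbit and round numbers for the sidereal day and the year):

# Sidereal vs. solar day, back of the envelope (circular-orbit approximation)
sidereal_day_h = 23 + 56/60 + 4/3600   # one full rotation of Earth, ~23.93 h
days_per_orbit = 365.25

# Each day Earth advances about 360/365.25 degrees along its orbit, so it must
# rotate that much extra before the sun returns to the same spot in the sky
extra_deg = 360 / days_per_orbit
solar_day_h = sidereal_day_h * (360 + extra_deg) / 360

print(solar_day_h)                          # ~24.0 hours
print((solar_day_h - sidereal_day_h) * 60)  # ~3.9 extra minutes per day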

How much will it move by? Since we're also interested in how the sun moves from day to day, we'll select a frame of reference aligned to a day. An easy way to do that is simply to take a picture of the sun at the same time of day each day. For a circular orbit, our frame of reference would roughly be the red arrow:

Every day at solar noon for a circular orbit, we're looking radially inwards.

Our orbit is not circular, but rather elliptical. In order to define a day, we use the mean position of the sun, averaged over the year. For low-eccentricity (less stretched out) elliptical orbits, this is a sensible approximation. Note that for Earth, the day-to-day variation is very small, see here. (Note for the interested reader: calculating where noon is on average, and how it deviates from day to day, is called the Equation of Time. Please see M. Tirado's blog post, mentioned later in this post, for more details.)



Sample elliptical orbit. Higher eccentricity stretches it out; lower eccentricity compresses it until it looks like a circle. Note that the planet moves faster when it is closer to the sun and slower when it is farther away (Kepler's second law).




For this analysis, we assume the Sun's daily motion is consistent, an assumption that holds true for Earth but not necessarily for planets with more eccentric orbits or more chaotic star systems (multiple suns).

"Looks like we're going to be at peak intensity for a few Dubniums" (Dubnium-270's half life is ~1hour. How else would these creatures measure time?) Image courtesy of Gemini

How The Sun Moves Across Our Sky Per Year

If you took a photo of the sky at noon every day for a year, you'd notice that the sun moves. As a matter of fact, it moves like this:

Sun's path over a year. Position of the sun taken on the 22nd of each month at NYC's latitude at 11:56 EST (12:56 EDT during daylight saving time).

This picture in particular is called an analemma. It is what you would see if you took a photo of the sun at mean solar noon each day. You can see plenty of them online, and here is a nice example.

This is taken from the latitude of New York City, but note that it would look similar at any other point on Earth (well, except for places like the North Pole in winter, where the sun stays below the horizon). Don't worry about exactly what causes this shape. We'll get to that later. Let's first take a look at how this motion ties into changes in sunrise and sunset.

How Does This Tie Into Sunrise and Sunset Time Changes?

I presented this analemma without really describing how we can use it to understand how our sunrise / sunset times might change. This section will explain why we care about it.

Notice that the sun can move up/down (a change in declination) or left/right (a change in right ascension). What happens to sunrises/sunsets in each case?

Vertical Motion: Day Shortens/Lengthens As Sun Moves Down/Up

Imagine that the sun's apparent position in the sky moved up or down. What would happen? Well, if it moved down, the day would obviously get shorter, as the sun would follow a smaller arc above the horizon. Jump up and the day would get longer.

The day shortens when the sun moves down. This is because the angle decreases. 

The diagram above shows an example where the sun moves down in the sky. Because the sun will always follow a circle centered on the Earth's axis (green and blue arcs), by moving down, it will sweep out a smaller angle above the horizon. Note that if we were at the equator (so that the horizon passes through the center of rotation), $\theta_1$ would equal $\theta_2$. This is exactly what is expected, since the length of a day is always 12 hours at the equator!
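To put numbers on this picture, there is a standard result (the "sunrise equation", which ignores refraction and the finite size of the solar disk): the half-day arc H satisfies cos(H) = -tan(latitude) * tan(declination). Here is a quick sketch for NYC's latitude:

import numpy as np

# Day length as a function of the sun's declination, for an observer at
# NYC's latitude (sunrise equation; ignores refraction and the sun's size)
def day_length_hours(latitude_deg, declination_deg):
    lat = np.radians(latitude_deg)
    dec = np.radians(declination_deg)
    cos_H = -np.tan(lat) * np.tan(dec)
    H = np.degrees(np.arccos(np.clip(cos_H, -1, 1)))  # half-day arc, in degrees
    return 2 * H / 15.0                               # the sky turns 15 degrees per hour

print(day_length_hours(40.7, 0))     # ~12 h: sun on the celestial equator (equinox)
print(day_length_hours(40.7, -10))   # ~10.8 h: the sun has moved "down", the day is shorter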

Horizontal Jump: Day Lengthens/Shortens And Shifts

If there was horizontal motion, we would both see a change in the length of a day and the day would shift.

The former again moves sunrise and sunset in opposite directions, lengthening or shortening the day. The latter, however, moves the sunrise and sunset times together, in tandem, shifting the whole day earlier or later.

When the sun moves right, it moves onto a new arc of rotation where it is no longer at the arc's highest point. This arc is larger, so the length of the day increases. However, the day also shifts, since it will now end sooner.

Putting It All Together

We've seen now that the motion of the sun at mean solar noon tells us what happens to sunrises and sunsets. For vertical motions, the day lengthens/shortens; for horizontal motions, the day lengthens/shortens and also shifts earlier/later.

So what can cause uneven sunrise/sunset changes? It comes from combining a change in day duration with a shift of the day. Let's say the length of day changes by $\Delta_{\mathrm{length}}$ and the day shifts by $\Delta_{\mathrm{shift}}$; the shifts in sunrise/sunset will then be:
  • $\Delta_{\mathrm{sunrise}} = - \frac{\Delta_{\mathrm{length}}}{2} + \Delta_{\mathrm{shift}}$
  • $\Delta_{\mathrm{sunset}} = \frac{\Delta_{\mathrm{length}}}{2} + \Delta_{\mathrm{shift}}$
Let's put some numbers in to get a feel for this. Let's say the length of day decreased by 4 min ($\Delta_{\mathrm{length}}= -4 \mathrm{min}$), but the day also shifted earlier by 1 min ($\Delta_{\mathrm{shift}} = -1 \mathrm{min}$).
We would then have:
  • $\Delta_{\mathrm{sunrise}} = 2 \mathrm{min} -1 \mathrm{min} = +1 \mathrm{min}$
  • $\Delta_{\mathrm{sunset}} = - 2 \mathrm{min} - 1 \mathrm{min} = -3 \mathrm{min}$
We see that sunrise shifts later by 1 minute, while sunset shifts earlier by 3 minutes.
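As a minimal sketch of this bookkeeping (the function name here is just for illustration):

def sunrise_sunset_shift(delta_length_min, delta_shift_min):
    # Split a change in day length and a shift of mean solar noon into the
    # corresponding sunrise and sunset changes (all values in minutes)
    delta_sunrise = -delta_length_min / 2 + delta_shift_min
    delta_sunset = delta_length_min / 2 + delta_shift_min
    return delta_sunrise, delta_sunset

# The worked example above: the day shortens by 4 min and shifts 1 min earlier
print(sunrise_sunset_shift(-4, -1))   # (1.0, -3.0): sunrise 1 min later, sunset 3 min earlier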

So What Would Happen If...

Now that we understand how the movement of the sun affects sunrises and sunsets, let's get a feel for what would happen in a few extreme examples. We can do so thanks to M. Tirado's Analemma calculator here (please also read their blog post if you're mathematically inclined; it's worth a read and complementary to this post).

Circular Orbit

For a circular orbit (with no axial tilt), the analemma is just a point. This makes sense, as the sun's position at mean solar noon never changes from one day to the next.

The analemma for a circular orbit with no axis tilt. As you can see, the sun's apparent position does not change over the year.



An observer in this case would never see sunrises and sunsets change!

No Tilted Axis, Just An Ellipse


The analemma for an elliptical orbit with no axis tilt. As you can see, the sun's apparent position only moves left and right.

 

This case is interesting. All we would see is movement left and right. This is because, unlike in a circular orbit, the sun appears to move faster or slower depending on where the planet is in its orbit (for a good reference, see here). However, this still results in lengthening/shortening of days as well as a shift in time. So we would still observe sunrise/sunset asymmetries.


Tilted Axis But Not Elliptical Orbit

If Earth only had an axis tilt and a nice circular orbit, its analemma would look like a nice symmetrical figure 8 shape.


The analemma for a circular orbit with an axis tilt. As you can see, the sun's apparent position traces a figure-8 shape. An observer here would see sunrises and sunsets change asymmetrically, just as we observed for Earth above.

The up/down motion can be easily understood by thinking about what happens in the summer versus the winter. In the summer, the sun is higher in the sky than in the winter. This can easily be seen by looking at the angle the sun makes with respect to Earth's axis for an observer in each season.

The angle with respect to Earth's axis in the summer ($\theta_s$) versus winter ($\theta_w$).



The sun in the summer versus winter for NYC. The sun moves down from summer to winter.

The left/right motion can be understood by changing the frame of reference to be oriented with the axis tilt:
The orbit of a planet around the sun with an axis tilt. The orbit is assumed to be circular.

If you look at this orbit from the top, it begins to look elliptical!

The projected circular orbit as viewed overhead.

The stretched section will appear to move slower, and the non-stretched portion will appear to move faster. This will result in a shift of the solar mean (a left/right shift), as explained in the ellipse section. Note that this is different from an elliptical orbit, where the sun is not at the center and the planet moves faster on the portions of the orbit closer to the sun.
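For the geometrically inclined, a standard projection result: a circle of radius $r$ whose plane is inclined by the axial tilt $\epsilon \approx 23.4°$ projects onto the equatorial plane as an ellipse with semi-axes $r$ and $r\cos\epsilon \approx 0.92\,r$, which is the mild flattening sketched above.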


Earth's Case

Earth's case is a combination of both. The elliptical orbit warps the analemma for a tilted axis a little: 

The analemma for Earth. As you can see, the sun's apparent position has a figure 8 shape, and it is asymmetric (due to the elliptic orbit).

Notice that this looks just like the tilted-axis simulation above, except warped.


Conclusion

We were able to gain an intuitive understanding of how sunrise/sunset times are affected by Earth's orbit. We did this by translating the complex motion of the Earth's orbit into an analemma, and then using that analemma to understand what happens to sunrise/sunset times. As we saw, the Earth's tilt alone is enough to cause this phenomenon, but the Earth's non-circular orbit does contribute to changing it.

We did not delve into calculating the numbers, as this would have made this blog post too long. However, I hope this gives you an intuitive enough understanding to be able to explain why sunrises/sunsets change unevenly, and the nuances involved, the next time someone asks!

Addendum

Originally, I asked Gemini to answer this question. It was unable to give me a satisfactory answer. See my interaction here. For example: 
Because the solar day is longer, the time of local noon shifts, and this shift is not divided equally between sunrise and sunset.
 
Given Google likely trained Gemini on the whole web (and more), this suggests to me there likely isn't a good public explanation (please comment if you know of one, thanks!).
 
For fun, I asked ChatGPT (GPT-5) as well. You may see the interaction here. Its answer was quite circular; for example:

3. How the pieces fit

  • Imagine the Sun’s path as a straight line crossing the horizon at an angle.

  • As the Sun’s declination drifts southward in fall, the morning intersection of that line with the horizon slides only a little each day (≈1 min), while the evening intersection slides a lot more (≈2 min).

  • The combined result is the 3-min/day shrink in daylight.


It even went out of its way to draw a confusing diagram (please correct me if you think this makes any sense):

ChatGPT GPT-5 Response

 

Don't get me wrong, I heavily use AI to help me with researching topics, with work, and with learning new things. It's a wonderful force multiplier when used as a co-intelligence. But to me this serves as a reminder to be grateful for all the writers who spent months or years of their lives writing something cohesive that makes sense, which these models have been trained on.

I also hope the next models can be trained on this post and provide a better explanation.

Sunday, March 19, 2017

Sun Photos

Testing my new AVX mount. With a bubble level, a compass and GPS coordinates, I was able to get it aligned fairly accurately, enough to capture the sun and have it remain in the field of view for minutes at a time.

The setup. A 127mm Maksutov-Cass on a Celestron AVX mount, with a Baader solar filter. The camera used is a Canon Rebel T3 DSLR.
When I took the image, I was a little disappointed: there were some defects (which at first I thought were sunspots :-( ). These are due to dust motes.
The Sun. You can see the defects in the image.
Unfortunately, I didn't take a flat field. However, I did have a few images of the sun that were shifted, since my tracking wasn't perfect. So I masked out the dust motes:

Masking defects

I then chose two images shifted decently far apart and re-shifted them back into alignment. There are sophisticated methods out there for this, but for this application I decided just to shift manually and look at the absolute difference between the first image and the re-shifted second image. When they align, you should see a uniformly noisy sun. (Otherwise, you'll see some bright regions where they fail to overlap.)
Two images (top left and top right) where the sun moved within the field of view. If I shift them back by the correct amount and take the difference, we get a nice, uniformly noisy image (lower left).
After creating a mask and knowing the shift, you can then write some quick code to average the two together, as below:
import numpy as np

def combinephotos(imgs, shifts, mask):
    ''' Shift each masked image back into place and average them.
        imgs   : list of 2D arrays (the sun images)
        shifts : list of [row, column] shifts to apply to each image
        mask   : the dust-mote mask from above
    '''
    imgresults = np.zeros_like(imgs[0], dtype=float)
    imgcts = np.zeros_like(imgs[0], dtype=float)

    for i in range(len(imgs)):
        # accumulate the shifted, masked image and the shifted mask
        imgresults += np.roll(np.roll(imgs[i]*mask, shifts[i][0], axis=0),
                              shifts[i][1], axis=1)
        imgcts += np.roll(np.roll(mask, shifts[i][0], axis=0),
                          shifts[i][1], axis=1)

    # divide by how many images contributed to each pixel
    # (no pixel is masked in every image here, so no division by zero)
    imgresults /= imgcts

    # parts that average more than one image, add poisson noise
    #w = np.where(imgcts > 1)
    #imgresults[w] = np.random.poisson(imgresults[w])
    return imgresults, imgcts

shifts_array = [
        [0, 0],
        [-28, -51],
        ]

# Grey1 and Grey2 are the two greyscale sun images; mask is the dust-mote mask
img_final, imgcts = combinephotos([Grey1, Grey2], shifts_array, mask)

There are much more efficient methods of course, but this allows more control over your data, and better understand exactly what it is you want. For example, I don't like inpainting. This method does not require any inpainting or blurring whatsoever. This is the raw data. The only potential issue is the loss of pixel resolution from the error in the estimation of your shifts. However, there are ways to actually get subpixel resolution, something I'll ignore here because this image is already quite large!
The resultant number of images averaged per pixel. White represents two, while black represents one. There are no zero regions.

The final image after combining.

Image saved to a JPEG.

Getting Hands Dirty in Python

I learned Python about a year and a half ago and it's been a huge time saver. I've moved from a language where you had to write a library for everything to a language where a library for everything already exists. Here's something I wrote quickly this morning. It's very easy, and you can do it too. If you're interested in better understanding your astronomy data for educational purposes, this post will help get you started.
Before we begin, you'll need to have installed these libraries:
  • Python
  • pylab
  • rawkit
  • Pillow
  • LibRaw (without this library we couldn't do any of this; thanks to the developers for all their hard work!)
I'm happy to provide more instructions if need be.
I finally bought a mount (Celestron AVX) and I'm beginning to take my own images.



Here is some quick code to do this. Note, some steps are not very 'pythonic'. I'm a strong believer in breaking conventions to better understand data, and worrying about conforming to them only when packaging code.
This allows for much quicker analysis and a deeper understanding of the data, which is *much* more important than the code!!! (Coding is easy; making sense of the data is the challenge...)

In this code, you'll learn to:
1. Extract the raw data from your image
2. Extract each color from your image (if needed), and make a grey scale image
3. Rescale your image in different ways to try to better emphasize features without bringing out the noise

This code should be fairly straightforward and is meant just as a motivational exercise. There are better ways to do these things, and nice statistics you can use to better understand your data (like histograms). I hope this motivates you in the same way it excites me to have so much control over my data!
''' 
    Astronomy grey image analysis
'''
# This imports functions into global namespace
# useful for quick hacking, NOT recommended for writing programs
from pylab import *
# makes plots appear if in interactive mode (comment out if not)
ion()

# used for saving images
from PIL import Image
# used for reading Raw images
from rawkit.raw import Raw



# choose your filename here
import os.path
filename = os.path.expanduser("~/pictures/17-03-18-custer-whirlpool/IMG_3787.CR2")


#  Grab the data, really easy
im = Raw(filename)
data = np.array(im.raw_image())


# The data is in a Bayer Color Matrix here for Canon, so let's
# just quickly extract each color by choosing the right pixels
# quick way to get from Bayer matrix. Note there are way more
# sophisticated algorithms out there than this method... But this
# yields the rawest untouched data possible
#
# On Rebel T3, pixels are:
# [ R G ]
# [ G B ]
# so index every other row/column from the right start (0 or 1)
# start:end:skip means start at start, end to end and skip every other row
# if "end" not specified, then just go to end of array
# so 0::2 means start at 0, go to the end, and skip every other row/column
# data is indexed as data[row, column] (row goes up/down in image, column goes
# left/right)
R = data[0::2,0::2]
# for green, you average the two together
G = (data[0::2,1::2]+data[1::2,0::2])*.5
B = data[1::2,1::2]

# sum colors to get greyscale (note this isn't 100% correct, you need
# to weight colors appropriately, but this is just messing around first)
Grey = R + G + B
# take logarithm, sometimes this could look better
LGrey = np.log10(Grey)
# try other combinations
Grey_squared = Grey**2

# so now, let's plot the image, plot it first without "vmin" and "vmax"
# and mouse over the plot to see what values you see
# when happy, choose a good "vmin" and "vmax", this worked for this image


set_cmap("Greys_r")
figure(4);clf();
imshow(LGrey[200:900,600:1300],vmin=3.8,vmax=3.9)

figure(5);clf();
imshow(Grey[200:900,600:1300],vmin=10**3.8,vmax=10**3.9)

figure(6);clf();
imshow(Grey_squared[200:900,600:1300], vmin=4e7, vmax=6e7)

# now, if we want to save image, it needs to have its pixel values go from
# 0 to 2**8-1 (255). You don't want a larger range since your eye can't really tell 
# the difference beyond that, roughly

# normalizing function:
def normalize(img, mn, mx, logimg=True):
    ''' normalize to a uint8 
        This is also known as byte scaling.
    '''
    dynamic_range = 2**8-1
    img = img-mn
    img /= (mx-mn)
    img *= dynamic_range
    img = np.minimum(np.maximum(img, 0), dynamic_range)
    img = img.astype(np.uint8)
    return img


img_grey = normalize(Grey[200:900, 600:1300], 10**3.8, 10**3.9)
limg_grey = normalize(LGrey[200:900, 600:1300], 3.8, 3.9)
img_grey_squared = normalize(Grey_squared[200:900, 600:1300], 4e7, 6e7)

# Image comes from the PIL library (imported at top)
img = Image.fromarray(img_grey)
limg = Image.fromarray(limg_grey)
img_grey_squared = Image.fromarray(img_grey_squared)
# saving 
img.save("../storage/01-whirlpool-oneimage.png")
limg.save("../storage/01-whirlpool-log-oneimage.png")
limg.save("../storage/01-whirlpool-squared-oneimage.png")
Finally, you can run the code in ipython (interactive mode):
$ ipython
In [1]: %run mycode.py
or just with python:
$ python mycode.py
The Whirlpool Galaxy. 5 min exposure taken with a Canon Rebel T3 on an Orion Maksutov-Cass 127mm on a Celestron AVX mount, autoguided.
The Whirlpool Galaxy, logarithm of the image. 5 min exposure taken with a Canon Rebel T3 on an Orion Maksutov-Cass 127mm on a Celestron AVX mount, autoguided.

The Whirlpool Galaxy, square of the image. 5 min exposure taken with a Canon Rebel T3 on an Orion Maksutov-Cass 127mm on a Celestron AVX mount, autoguided.


Right now, all images look the same, but there will be cases where different transformations help (this is similar to playing with the linearization curve). We'll mess around with colors later.





Saturday, December 5, 2015

Quick optics 101

Here's a quick fun thing to do with a telescope on a rainy day.

Try this:
- point your telescope at something bright
- remove the eyepiece and place a piece of cardboard where it should be. Move the cardboard back and forth and you'll resolve an image!



Scroll over to 30 seconds or so to see the end result.

How does it work? We know from simple optics (the thin-lens equation) that if you have an object a distance d0 away, its image will form a distance d1 away, given by

1/d0 + 1/d1 = 1/f

where f is the focal length of your telescope. Here is a schematic of the setup. The object is labeled "O" and the image "I".

Analyzing my setup, we see that d0 = 2.5 m and f = 700 mm. With some algebra, we should expect an image at d1 ≈ 0.97 m.
The object is at S1 and the image at S2+S3
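As a quick sanity check of that arithmetic, a throwaway Python snippet (distances in meters):

# Thin-lens estimate of where the image forms for my setup
f = 0.700    # focal length of the telescope, meters
d0 = 2.5     # distance to the object, meters
d1 = 1 / (1/f - 1/d0)
print(d1)    # ~0.97 m from the objective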

What happens with images from the sky?


Images from the sky are actually located so far away that d0 is very large, and as a consequence 1/d0 goes away. This means that:
1/d1 = 1/f, i.e. d1 = f

So if you did the same thing with the moon, you could use it to find the focal length of your mirror!



Sunday, July 19, 2015

Weather Data - Part II : The reader

So how do we extract cloud data? One method is simply by observation. However, why observe by eye when we have satellites to help us?

The data

The data can be found here although I currently see that the server is down (hopefully this will change soon). 

The data format

The data format is sort of summarized in this document here.

The reader

The reader, written in C, can be found here. However, beware: this code only works on a big-endian machine. For a little-endian machine, make this substitution:
data.Preptr->npix becomes ntohl(data.Preptr->npix)


The reader (Python version)

At a recent hackathon in Montreal, I took the liberty of rewriting the library in Python, to help with the visibility of this data and, well, also to learn Python.
Here is the source.

I would like to point out that there is also a Java version, written by Andre Mas, here. We wrote the libraries in parallel just to challenge ourselves and better understand this data at a lower level.

Anyway, so what's the result?
Here is a quick result on the infrared radiance.

Data set ISCCP.DX.0.GOE-7.1991.01.01.0000: the measured infrared radiance.
Why did I choose the infrared radiance? Because infrared can be measured during both day and night. (For those who are not familiar with infrared, click this wiki link. It is like light, but with a longer wavelength; it is radiation we can't see with our eyes, but your camera can see it.) The satellite data also contains what the sky would look like to our eyes (the visible spectrum), but then I would be limited to data taken during the local daytime of each place measured.


So now we have a database that can tell us the cloudiness of a certain region. Will we be able to do anything more with this? We'll see.

Saturday, July 18, 2015

Weather Data - Part I

(This will be a fairly simple post, but it will be useful for the next one, which involves estimating the probability of success mentioned below.)

As amateur astronomers, we all share the problem of unwanted "nebulosities" in the sky; yes, I'm talking about clouds. Well, actually, if you live in a truly light-pollution-free environment, then maybe it's an unwanted Bok globule.

Even worse, if you want to introduce and share your passion with others, you will find those others quickly dissuaded by your failed attempts to show them the night sky, because of a combination of these unwanted objects and Murphy's law.

This is where statistics comes in. First let's introduce the problem and see how it is related to a simple probability distribution.

Binomial Distribution
Let's say you have observed over a few years that in the month of January, it is cloudy 2/3rds of the time. If you organize an astronomical event, you have a 1/3rd (33%) chance of success. 

Rain Date
That's difficult because most likely you'll fail. How do you improve the odds? Let's say you choose a rain date. Your chance of success is now:
(chance of success on the 1st day) + (chance of failure on the 1st day)*(chance of success on the 2nd day) = 1/3 + 2/3*1/3 ≈ 56%

You have almost doubled your chance but not quite. What about a 2nd rain date?
Chance of success = 1/3 + 2/3*1/3 + (2/3)^2*1/3 ≈ 70%. Your chances are increasing, but with diminishing returns.


The importance here is that by doubling your dates, you haven't doubled your chances. It's obvious, but we need to formalize it for the next step.
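In general, if the chance of clear skies on any one day is p and you schedule n days in total (n-1 rain dates), the chance that at least one of those days works out is P(n) = 1 - (1-p)^n; the sums above (and the code at the end of this post) are just this quantity written out term by term.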
Probability of success for n-1 rain dates, where the percentage success for one day is 10% (black), 33% (red), 50% (green), 75% (blue) and 80% (magenta). The horizontal black line is a 99% chance of success.


So let's calculate this for multiple rain dates. The calculation is plotted above for n-1 rain dates; the red curve is for a one-day probability of success of 1/3. The flat black line near the top of the plot marks a 99% chance of success. We see that for a region with a 33% chance of clear skies, we would have to schedule about 11 rain dates (12 days in total) to beat Murphy's law with 99% confidence! Even with a 50% chance of clear skies, you need 6 rain dates to achieve this confidence! And you can forget about it if your chance of clear skies is 10%.


What does this all mean? Basically, if you're an amateur astronomy club in a region where the chance of clear skies is 50% or lower, you'll have to think a little carefully before planning an event.

side note
I would like to note that this is an extreme simplification of the problem and that two things are important: 
1. The calculation assumes the cloudiness on the chosen dates is independent from one date to another. If the dates are correlated, this just worsens your chances. 
2. The probability of success p will probably vary month to month. The easiest simplification is to assume the change is not large and base your calculations on the worst of the chances.

The solution
Basically, the solution to this problem (a depressing one) as an astronomy club organizer is to make sure to hold at least the number of events throughout the year that would guarantee you one event with a 99% chance of success... Holding 11 rain dates (in Montreal, where we believed the success probability to be at worst 33% for any given month) isn't really popular for any club. It's best just to hold those ~12 separate events throughout the year.


If you're doing this for your friends, well at least now you have a chance of warning them what they're getting into.

I'll look next at how to estimate this value of the chance of success, which I'll just call P1.

Feel free to look at the code below just to see how easy Yorick is (a nice alternative to MATLAB). However, the user base is quite small, so I would recommend using Python if you're just starting. (I use Python for long code I share with others, and Yorick when I want to quickly program something.)


Yorick code:
func binom(p,n){
    /* DOCUMENT Compute the binomial prob for each
    *    element n with prob p
    */
    res=array(0., numberof(n));
    for(j = 1; j <= numberof(n); j++){
        for(i = 1; i<=n(j);i++){
            res(j) += p*(1-p)^(i-1);
        }
    }
    return res;
}
n = indgen(20);
window,0;fma;
plg,binom(.10,n);
plg,binom(.33,n),color="red";
plg,binom(.5,n),color="green";
plg,binom(.75,n),color="blue";
plg,binom(.8,n),color="magenta";
pldj,n(1),.99,n(0),.99;
lbl,,"n","P(n)";

Saturday, July 12, 2014

M82 - Image Stacking

Now that I have a successful routine to read the images, I can process them. I have written some homemade image stacking routines to show how relatively simply this can be done by an amateur.

This post will be long. If you want to see the end result, just scroll to the bottom.
Please click the images to see the actual photo in full quality.

The problem
Astronomers are always starved for light. We want to capture as much light (signal) as possible to help reduce the noise. It turns out the first easy solution to capturing as much light as possible is just to leave the camera exposed to the image for hours on end. However, this does not work for two reasons:
  1. Tracking systems are imperfect, and the image will have shifted during the course of the measurement. This ends up producing the well-known star-trail effect. I will explain this soon.
  2. Your detector saturates at a maximum value. You can think of each pixel in your camera as a mini detector. Now imagine each pixel as a bucket for light. It can only capture so many photons until the bucket starts to overflow. After this point it makes no sense to fill your bucket further, because you won't be able to count the extra photons you've caught.
Now let's visit each of these caveats before we move on to the solution.

Tracking systems aren't perfect
The first thing all amateur astronomers learn is that tracking systems aren't perfect. If you want pristine tracking, it takes a lot of work (even with the more modern GOTO mounts). Time and resources are limited for hobbyists, and this is a known problem.
Take a look at M82 here. The second image was taken about 1-2 min after the first. I tried my best to perfect the tracking, and I used a Celestron C8 AVX, which is an equatorial GOTO mount. Still no luck; there's always a shift... You can see them below.

M82 - one image

M82 - another image. The shift relative to the previous image is marked by the arrow.
So we see there is a shift and we'd like to correct for it. It turns out this shift is not that big, approx 1/5th of a pixel per second. Thus if I expose the camera for 10 seconds, the image will have moved at most 2 pixels or so. This can actually be corrected for quite easily. Before explaining the correction, let's discuss the second problem: that of image saturation.

Image Saturation
A good example of image saturation (that isn't a star) is one of our planets: Saturn. The planets are some of the brightest objects in the sky. No wonder the word comes from the Greek astēr planētēs, which means "wandering star". They're some of the brightest "stars" in the sky but oh... such rebels!

So what happens if I overexpose the image? See for yourself (units are arbitrary):
 
Overexposed Saturn Image

A cross section of the overexposed image.
Take a look at the image. Near the center of the object, all the values look to have the same color. Even worse, by looking at a cross section, you see that they are the same! This is the result of overexposure. The little "buckets" of light in your camera can only gather so much energy before they spill over. Each little bucket here could only hold so much before it could not report any more signal. So how do you correct for this issue? Easy: you read out multiple images before these buckets "spill over," as will be described next.

How do you correct for this?
Now how would one go about correcting for this? The correction is actually a two-piece process:
  1. Take multiple images of shorter exposure before the light buckets spill over, and before the image shifts appreciably.
  2. Re-shift the images back in position and then average them.
The first piece is simple. I have taken here 10 second exposures. This is enough to avoid both saturation and star trails. The second piece is explained in two steps:
1. The shifting process
2. The averaging process

First Step: The image shifting process
One method is simple: you can manually shift each image so that it overlays on top of a reference image. But how do you know when you've shifted by just the right amount? Easy: just take the absolute value of the difference of the two images. Here is the absolute difference of the two M82 images shown earlier, zoomed in on just one star:




M82 difference of two images, zoom in on a star.

The shift is now pretty clear. It looks to be 5 pixels in x and 23 pixels in y. Now, let's shift these images by this amount and take the difference again:


Difference of two images of M82, where the second is shifted by 5 pixels in x and 23 pixels in y.
We can see the difference image now looks almost zero (although not perfect). So a sensible automated solution would be to have a program compute the absolute difference of the two images for a range of trial shifts, and check where there is a minimum. The shift that gives the minimum is the amount by which the image has most likely moved.
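Here is a minimal sketch of that brute-force search (img_ref and img2 stand in for two of the M82 frames as numpy arrays; find_shift is just an illustrative name):

import numpy as np

def find_shift(img_ref, img2, max_shift=30):
    # Try every integer shift in a window and keep the one that minimizes
    # the mean absolute difference. np.roll wraps pixels around the edges,
    # which is fine as long as the true shift is small compared to the image.
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img2, dy, axis=0), dx, axis=1)
            err = np.mean(np.abs(img_ref - shifted))
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best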

It turns out there is a slightly better and quicker trick using Fourier transforms, but I have run out of time. I will post it later with code and examples. Anyway, for now we have some way of determining the shift of the images numerically.


Second step: average them!
So now, you have a sequence of images, and you also have a method of shifting them back in place. In other words, you have a sequence of the same image. So what next?
Images will always contain noise, like it or not, which will amount to graininess in your images. There is an easy way to reduce this graininess. It is well known in statistics that to reduce the noise in any measurement, all you need to do is make the same measurement again and again and average the results together. Say you averaged 10 measurements together. If each measurement was truly independent, then your noise (graininess) will be reduced by a factor of 1/sqrt(10) where "sqrt" means "the square root of".  If you have N images, then your noise is reduced by 1/sqrt(N). Remember, this only holds for independent measurements.

Here is an example. I take a cross section of M82 and plot it below. I then take the average of 20 independent measurements of M82 and plot its cross section in red. The graininess (noise) appears to have been reduced by a factor of 1/sqrt(20), down to roughly a quarter of what it was. So it works!
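If you want to convince yourself of the 1/sqrt(N) rule without a telescope, here is a quick synthetic check (made-up numbers, not the M82 data):

import numpy as np

rng = np.random.default_rng(0)
N = 20
# 20 fake "exposures" of a flat patch of sky: mean level 100, noise sigma 5
frames = 100.0 + rng.normal(0, 5, size=(N, 1000))

single_noise = frames[0].std()
stacked_noise = frames.mean(axis=0).std()
print(single_noise, stacked_noise, single_noise / np.sqrt(N))
# the stacked noise comes out close to single_noise / sqrt(20)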

 



The Final Result
So there you have it. Now let's see what happens when we apply this to M82 as below:

M82 - One image 800ISO 10 s exposure with Nikon D90, converted to grayscale, byte-scaled between 60 and 100 ADU. Click to see the actual photo!

 And shift+stack it:

M82 - Stacked images at 800ISO 10 s exposure with Nikon D90, converted to grayscale, byte-scaled between 60 and 100 ADU. Click to see the actual photo!
Voila! A much nicer image. Note that this has been done with only 20 images, in grayscale, and in an extremely light-polluted area (the McGill Observatory). It would be interesting to compare the same measurements (on the same day, with the same moon phase) both in the heart of light-polluted Montreal and under its darker rural skies.