[Question] How to produce a timelapse with fixed stars and "moving" earth?

Recently I took multiple shots of a starry sky for a school project that my daughter volunteered for. We want to show the effect of changing the reference viewpoint: the earth, or the sky.

For the first part, having the camera fixed on a tripod, it’s easy to make a timelapse video from images with the stars “moving” across the sky.
But for the second part, “fixing” the stars is trickier than I thought. Ideally, I would put the camera on a tracking mount so that it “follows the stars”. But with the camera fixed on a tripod, I failed.

I have tried Siril, but the problem is that it requires a large number of stars for its registration process, and many of my frames don’t show enough stars.
I’ve tried Hugin as well, but the result wasn’t good (it has trouble detecting stars).

Do you guys have any ideas, tools, or advice?

Random idea: how about video stabilization algorithms? There should be techniques for holding a certain object (or group of objects) fixed in position.

Bonus: I have seen a lot of wacky uses of stabbot on Reddit.

How many exposures, and at what interval between each? What was the focal length, e.g. 50 mm equivalent for a 35 mm camera?

The camera is fixed, so we need rotation but not scaling or translation, and certainly not a perspective transformation. You will be rotating images about an axis that has the same coordinates in each image (e.g. the pole star). If the interval is close to constant, then the rotation will be an integer multiple of the individual angular interval.

If the interval is not constant, you can find each angle by finding the location of one star in adjacent images. You know approximately where it is, so searching should be quick. Then some trigonometry tells you the rotation angle required to make one image match the other.
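The trigonometry is just a difference of two arctangents. A minimal sketch, in Python rather than the thread's BAT scripts, with made-up pixel coordinates (the pivot would be the pole star's fixed pixel position):

```python
# Illustrative sketch: rotation angle about a fixed pivot, computed
# from one star's pixel position in two adjacent frames. All
# coordinates here are invented, not measured from the real frames.
import math

def rotation_angle(pivot, star_a, star_b):
    """Angle (degrees) that rotates star_a about pivot onto star_b."""
    ax, ay = star_a[0] - pivot[0], star_a[1] - pivot[1]
    bx, by = star_b[0] - pivot[0], star_b[1] - pivot[1]
    return math.degrees(math.atan2(by, bx) - math.atan2(ay, ax))

# A star 1000 px from the pivot that moves about 17.45 px sideways
# corresponds to roughly a 1 degree rotation.
print(rotation_angle((0, 0), (1000, 0), (999.85, 17.45)))
```

(Note that image coordinates usually have y pointing down, so the sign of the angle depends on your convention.)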

A couple of sample images might help.

(EDIT: With hindsight, this was entirely wrong. Rotation doesn’t help. A perspective transformation is exactly what we need. See below.)

I don’t quite see how vid.stab can help me for this.

The images were shot with a Samyang 12mm lens on APS-C (so there’s some lens distortion).
It’s not as easy as simple rotation. The camera was shooting towards the south, but being in the northern hemisphere, the celestial south pole is below the horizon and outside of the frames.

The frames can be found here for those who want to see: https://drive.google.com/open?id=1C7xE_KBZ4e7HjuZaycnWdw9Pq1wYeqz5 (licensed under Creative Commons BY-NC-SA).

Anything done between images (registration) should theoretically be possible in video. E.g., see: https://www.mathworks.com/help/vision/examples/video-stabilization-using-point-feature-matching.html

Maybe try labeling the stars and making them visible to Siril or Hugin. What I mean by that is artificially make them brighter or larger or draw constellation lines. Align them and then apply this alignment data to the real images. That way the algorithms won’t go in blind.

G’MIC has `register_nonrigid` and `register_rigid`. However, I have never used either.

Here is one version, aligning two stars so they don’t move:

Sadly, the video is too small to actually see the stars.

Upthread, I said we could simply rotate the images (about an axis that is below the image). That was wrong, because an angular distance between two stars at the centre of the image is a smaller pixel distance than the same angular distance when the stars are at an edge.

To put it another way, stacking to get the lightest pixel from all images shows that, to the camera, stars do not describe arcs of concentric circles, so simple rotation cannot be used to “freeze” all stars:

So we need some transformation that is beyond my brain-power today. I’ve ignored that problem.
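The “lightest pixel” stack mentioned above is just a per-pixel maximum over all frames. A toy NumPy stand-in for the ImageMagick `-compose Lighten` step, using invented 3×3 frames:

```python
# Sketch of a "lightest pixel" stack: the per-pixel maximum over a
# sequence of frames, which turns a moving star into a trail.
import numpy as np

def lightest(frames):
    """Per-pixel maximum over a sequence of equally-sized frames."""
    return np.maximum.reduce(frames)

# Toy frames: one bright pixel that moves; the stack keeps both positions.
f1 = np.zeros((3, 3)); f1[0, 0] = 1.0
f2 = np.zeros((3, 3)); f2[1, 1] = 1.0
trail = lightest([f1, f2])
# trail is 1.0 at both (0,0) and (1,1), zero elsewhere.
```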

The movie does a simple affine transform that aligns (“freezes”) two stars: the brown one that starts near top-left, and the blue one that starts near the centre. To make the movie, I did two chain searches to track the motion of those stars.
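Two point correspondences fully determine a similarity transform (rotation, uniform scale, and translation), which is what “freezing” two stars amounts to. A sketch of that solve in Python, with invented later-frame coordinates (the actual scripts do this via ImageMagick's `-distort`); treating pixel coordinates as complex numbers makes it two lines:

```python
# Sketch: a similarity transform w = a*z + b (a, b complex) is pinned
# down by two point pairs. The "later frame" coordinates below are
# invented for illustration; the first-frame targets are from the thread.

def similarity_from_two_points(z1, z2, w1, w2):
    """Return (a, b) so that a*z + b maps z1 -> w1 and z2 -> w2."""
    a = (w1 - w2) / (z1 - z2)
    b = w1 - a * z1
    return a, b

# Star positions in some later frame (z) and in the first frame (w):
z1, z2 = complex(720, 830), complex(3140, 1815)
w1, w2 = complex(722, 834), complex(3142, 1809)
a, b = similarity_from_two_points(z1, z2, w1, w2)
# Applying a*z + b to either tracked star now lands it on its
# first-frame position; every other pixel gets the same rotation,
# scale and shift.
```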

(In a chain search, we crop a subimage from the first image in the series and search for it in the second. Then we crop around that point in the second image and use that subimage to search the third. In this way, we track the feature through the series of images even though it changes slightly in each image.)

Each star doesn’t move far between frames, so we only search a small area. I did a supersampling search (aka sub-pixel search), at a factor of 4, to get the locations at a precision of 0.25 pixels. Without this, the small errors accumulate over the images, and we eventually search the wrong area. With this supersampling, there is still a small drift, but only by a couple of pixels.
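One link of that chain search can be sketched as a small template match. This is a plain-NumPy illustration of the idea, not the actual ImageMagick-based scripts, and it stops at integer-pixel precision (the supersampling refinement described above is omitted for brevity):

```python
# Sketch of one chain-search step: crop a template around the star's
# last known position in the previous frame, then scan a small window
# in the current frame for the smallest sum-of-squared-differences.
import numpy as np

def track_step(prev, curr, pos, tmpl=15, search=7):
    """Return the star's integer-pixel (row, col) position in `curr`,
    given its position `pos` in `prev`."""
    r, c = pos
    h = tmpl // 2
    template = prev[r-h:r+h+1, c-h:c+h+1]
    best, best_pos = np.inf, pos
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            patch = curr[r+dr-h:r+dr+h+1, c+dc-h:c+dc+h+1]
            ssd = np.sum((patch - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r + dr, c + dc)
    return best_pos

# Toy example: a single bright "star" that shifts by (2, -1) pixels.
prev = np.zeros((64, 64)); prev[30, 30] = 1.0
curr = np.zeros((64, 64)); curr[32, 29] = 1.0
print(track_step(prev, curr, (30, 30)))  # (32, 29)
```

Feeding each result back in as the next search centre is what keeps the chain on target even as the star's appearance drifts slightly from frame to frame.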

Hey Alan,

I’m impressed by your result, the movement of the frames is really smooth.
However, I still don’t understand how you did it: which tool, or script?

Yeah, I forgot to mention that the celestial equator is in the frame, meaning that stars describe different arcs above and below that line.

Hi @sguyader , maybe Blender’s object tracking can help you - fix the camera on a star:
Lots more on YouTube.


I used ImageMagick, with some Windows BAT scripts. The scripts do the chain search, (searching forwards and backwards from any frame), and do the alignment using IM’s “-distort”.

Making the “lightest” image (which I’ve realised you did, on another thread):

del lightest.miff

for %%F in (src\*.jpg) do (
  echo %%F

  if exist lightest.miff (
    %IMG7%magick ^
      lightest.miff ^
      %%F ^
      -compose Lighten -composite ^
      lightest.miff
  ) else (
    %IMG7%magick ^
      %%F ^
      lightest.miff
  )
)

%IMG7%magick ^
  lightest.miff ^
  lightest.jpg

For the movie, the overall process is:

call %PICTBAT%csFiles src\ DSCF*.JPG . venice

echo 722 834 >venice_features.lis
echo 3142 1809 >>venice_features.lis

set csSUPSAMP=4
set csDEBUG=1

call %PICTBAT%chainSrchMult venice venice_features.lis 25x15 51x31

md outframes

set venice.OutFrames=outframes\

call %PICTBAT%csaAlign venice

The two stars are at (722,834) and (3142,1809) in the first frame. I chose to align all frames after the first to the first frame. I could have chosen instead to align to a central frame, or the final frame, or whatever.

Now we have the distorted and aligned frames. For the movie, we make smaller JPEGs, numbered from 0000:

setlocal enabledelayedexpansion
set I=0
for %%F in (outframes\DSCF*.MIFF) do (
  echo %%F

  set LZ=000000!I!
  set LZ=!LZ:~-4!

  rem copy /Y %%F %%~dpFforffm_!LZ!.jpg

  %IMG7%magick %%F -resize 600 %%~dpFforffm_!LZ!.jpg

  set /A I+=1
)

From those JPEGs, make the movie:

ffmpeg -i outframes\forffm_%%04d.jpg venice.mp4

The BAT scripts are part of a larger package for performing and using chain searches. I haven’t yet written this up. Meanwhile, here is a zip of the scripts used here:

Here are three full-size distorted frames (1725, 1825 and 1935), so the two stars align:

The solution to this problem was shown in the link I posted in your other thread.

I would try the following: point your camera at the celestial north pole, i.e. at Polaris, since you are in the northern hemisphere. If the pole and the horizon are both in the images, you should be able to rotate the images around the pole in such a way that the stars stay put with respect to the frame. But then the horizon will move, demonstrating the rotation of the earth.


@Jossie, the link you mentioned is about using a tracker with an equatorial mount. I don’t have such a thing. My question is thus about how to achieve a similar result with alignment on the stars.

Also, I chose to shoot towards the south because, as I mentioned in my other post, this is a school project and I wanted to take advantage of being in Venice to use the San Marco campanile as a reference viewpoint: it is where Galileo gave his first public demonstration of his telescope (just to add some history to the story).

Thanks a whole lot for that, I’ll take a look!

Shooting to the south makes things complicated.

If you point towards Polaris, this star should remain fixed at a given pixel position in the frames. The degree of rotation can be calculated easily from the time difference between the shots. So it should be straightforward to do the rotation and alignment in a script.
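The rotation rate is fixed by the sidereal day: the sky turns 360 degrees in about 86164 seconds, roughly 15 degrees per hour. A back-of-envelope sketch, with an assumed 30 s interval between shots (the thread doesn't state the actual interval):

```python
# Back-of-envelope: rotation of the sky per frame. The 30 s interval
# is an assumption for illustration, not a value from the thread.
SIDEREAL_DAY_S = 86164.1           # one sidereal day, in seconds
deg_per_second = 360.0 / SIDEREAL_DAY_S
interval_s = 30                    # assumed interval between exposures
angle = deg_per_second * interval_s
print(round(angle, 3))             # about 0.125 degrees per frame
```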


I don’t do astrophotography, so this might be wrong, but I think the only way for the stars to describe concentric circles on the sensor is by placing the sensor perpendicular to the earth’s axis. That way, as the earth spins, the sensor remains in the same plane. Pointing north or south doesn’t matter as such, but an ordinary camera would need to point either up or down.

The overall problem is easier than I thought: align four stars instead of just two. Then all the stars will align.

The movie is bigger this time, so we can see some stars.

snibgo.com/imforums/veniceb with http at the start and .mp4 at the end.

How does that work? We would like the sensor to be parallel to the earth’s equator. If it were, and we ignore the earth’s radius (because it is a tiny fraction of the distance to the stars), then the camera merely rotates about its axis.

In theory we can transform each source frame to what the camera would have photographed if the sensor had been parallel to the earth’s equator. This is a simple perspective transformation, equivalent to rotating the camera by some angle about some axis. It needs four points of alignment. It would freeze all the stars. But we need a different transformation for each image, and sadly we don’t know what any of the transformations are.

But here’s the insight: if we did those unknown perspective transformations, and then applied any constant perspective transformation to each result, the stars would still be frozen. And we can do this double transformation, by aligning four stars in each image with their positions in the first frame (or any frame we want).
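Why four stars suffice: each correspondence (x,y) → (u,v) gives two linear equations in the eight unknown coefficients of a perspective transform, so four pairs give a solvable 8×8 system. This is what ImageMagick's `-distort Perspective` solves internally; here is a NumPy sketch, using the four first-frame star positions from the script below as targets and invented “later frame” positions as sources:

```python
# Sketch: solving the perspective (homography) coefficients from four
# point pairs. The "later frame" source coordinates are invented for
# illustration; the targets are the first-frame positions from the script.
import numpy as np

def homography(src, dst):
    """Coefficients (a..h) of the map
    (x, y) -> ((a*x + b*y + c)/(g*x + h*y + 1),
               (d*x + e*y + f)/(g*x + h*y + 1))
    fitted to four point pairs."""
    A, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u*x, -u*y]); rhs.append(u)
        A.append([0, 0, 0, x, y, 1, -v*x, -v*y]); rhs.append(v)
    return np.linalg.solve(np.array(A, float), np.array(rhs, float))

def apply_h(coef, x, y):
    a, b, c, d, e, f, g, h = coef
    w = g*x + h*y + 1
    return ((a*x + b*y + c) / w, (d*x + e*y + f) / w)

dst = [(722, 834), (3142, 1809), (168, 1966), (1991, 347)]   # first frame
src = [(725, 830), (3139, 1815), (172, 1960), (1988, 352)]   # invented
coef = homography(src, dst)
# apply_h(coef, *src[i]) now reproduces dst[i] for each of the four stars,
# and every other star is carried along by the same transform.
```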

So we modify the script I gave, adding two lines:

echo 722 834 >venice_features.lis
echo 3142 1809 >>venice_features.lis
echo 168 1966 >>venice_features.lis
echo 1991 347 >>venice_features.lis

Here are the full-size sample frames as before:

The hardest part to automate would be the selection of the four stars. I used the “lightest” image to find four trails that were far apart, and had visible starts and ends, without being occluded by buildings etc.

Thanks for sharing this problem, @sguyader. I’ve learned stuff doing this.