Does Earth count as a planet? Terrestrial use of stacking and registration

I shoot landscape astro, so there is almost always a foreground. The foreground images are often shot when sky glow, or the moon, is present to illuminate the scene. I typically have a large sequence of these. I want to process them in Siril.

Mostly this is convenience - I am already using Siril for the stars, so why not the foreground too? But it is also true that terrestrial photography does not have great tools for registering and stacking lots of images.

I think the way to do this is to use the “Enhanced Correlation Coefficient Maximization” method in Registration. It says it is for “planetary surfaces” and Earth is a planet.

I assume that to use this I will need to select a region that only includes foreground.

Yes, either of the two planetary methods should work if you have enough detail or pattern in the landscape.

But if you’re using a tripod you should not need to align them.

If siril doesn't work, process your foreground images, align them with Hugin's align_image_stack, then median-stack them using imagemagick or gmic. This should also produce nice results.
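As a rough sketch of that pipeline driven from Python (the file names here are just placeholders, and your input format may differ):

```python
import glob
import subprocess

# Assumed input: a set of foreground TIFFs exported from your raw converter.
frames = sorted(glob.glob("foreground_*.tif"))

# Hugin's align_image_stack writes aligned copies using the given prefix.
subprocess.run(["align_image_stack", "-a", "aligned_", *frames], check=True)

# ImageMagick: median-combine the aligned frames into a single image.
aligned = sorted(glob.glob("aligned_*.tif"))
subprocess.run(
    ["convert", *aligned, "-evaluate-sequence", "median", "stacked_median.tif"],
    check=True,
)
```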


Imagemagick’s stacking tends to get a little problematic for large numbers of images.

I’ve found myself having to do weird things like stack in groups of 8-16, then stack the results, etc - and most imagemagick builds still have fairly low internal precision.
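As an illustration of that group-then-stack workaround, here is a rough numpy sketch (file names and group size are placeholders) that keeps the accumulation in float, so the low-precision problem is avoided:

```python
import glob
import numpy as np
import tifffile  # assumed to be available for reading/writing the frames

files = sorted(glob.glob("aligned_*.tif"))
group_size = 16
group_means = []

# Stage 1: average in groups of 16, reading one frame at a time.
for i in range(0, len(files), group_size):
    group = files[i:i + group_size]
    acc = tifffile.imread(group[0]).astype(np.float64)
    for f in group[1:]:
        acc += tifffile.imread(f).astype(np.float64)
    group_means.append(acc / len(group))

# Stage 2: average the group averages (assumes equal-size groups; a ragged
# last group would need to be weighted by its size).
result = np.mean(group_means, axis=0)
tifffile.imwrite("stacked_mean.tif", result.astype(np.float32))
```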

siril is VASTLY superior for stacking. When on a tripod, I stack without performing any alignment in siril, and then convert the resulting (still bayer-CFA due to not doing demosaicing or alignment) FITS to a DNG and postprocess in RawTherapee.

Last time I tried to do alignment/correlation in siril it didn't work out so well for me, but I may give it another try at some point, as it would be beneficial for some use cases to be able to do some alignment - although usually, if I am not on a solid tripod, I'm handheld, and then burst-stacking algorithms like Google's HDR+ (I've linked to an open-source implementation of that a few times) usually work better. (Unfortunately, Tim Brooks' implementation does not handle rotation well.)

I have wanted to have an efficient way to do long exposure via multiple stacked short exposures, so it is great to hear that Siril does this!

I have a question about this technique. Instead of taking a long exposure - say 10 minutes - I could take N pictures of M seconds each.

Do you have any rule of thumb on how many photos it takes to achieve a smooth long exposure look? Also, does it matter if the individual shots are also long-ish to get blur?

Here is an example. Instead of the hypothetical 10 min exposure, I could take 600 shots at 1 sec each. That would exactly equal the single 10 min exposure, so this ought to work.

Except for the fact that you must process all of the images, there are several convenience aspects to taking multiple shots. You don't need a super dark 10-stop ND filter, or an intervalometer. You can use the outlier rejection algorithm in Siril to throw away true anomalies.

But I wonder whether you can get by with fewer shots? Or shorter exposures? Or both?

600 shots at 1/100 sec each would only be 6 seconds of total exposure, but if they were taken over 10 minutes they might average to something that is visually similar. Even if each shot is sharp, having 600 of them ought to average out to being smooth for most subjects.
An average of 600 images ought to make differences between individual shots hard to detect.

What about 100 shots, each at 1 sec exposure, but taken over a period of 10 min?

Obviously it is not possible to answer this in a very detailed way for all subjects. Static items (rocks, buildings, etc.) should be the same in all approaches. Very fast moving things (cars on a freeway, fast-moving water, flying birds) would presumably be quite different from slow-moving things (a slow water current, the tide coming in).

But I am still curious as to what you have found.

If your target has a landscape and the sky, you'll quickly have the problem of part of the image being fixed and the other part moving, which prevents you from using average stacking - otherwise the stars will disappear.

If you shoot only something that is not moving, or on which you can realign, then you can split your 10 minute exposure into several shorter exposures. The rule of thumb is that the signal-to-noise ratio of a stack grows with the square root of the number of images, so to match a single exposure that is N times longer you need roughly N² short frames.
Like, 100 images of 1 s will get you roughly the same quality as a 10 s exposure.
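To put rough numbers on that rule of thumb, here is a toy calculation. The electron counts and read noise below are made-up values for illustration, and the approximation holds when per-frame read noise dominates (a faint target):

```python
import math

signal_rate = 1.0   # electrons per second reaching a pixel (assumed, faint target)
read_noise = 5.0    # electrons RMS added by each readout (assumed)

def snr_single(t):
    """SNR of one exposure of t seconds: shot noise plus one readout."""
    s = signal_rate * t
    return s / math.sqrt(s + read_noise ** 2)

def snr_stack(n, t):
    """SNR of the average of n exposures of t seconds each."""
    return math.sqrt(n) * snr_single(t)

print(f"1 x 10 s  -> SNR {snr_single(10):.2f}")
print(f"10 x 1 s  -> SNR {snr_stack(10, 1):.2f}")
print(f"100 x 1 s -> SNR {snr_stack(100, 1):.2f}")  # roughly matches the single 10 s shot
```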

The fixed foreground versus moving stars is a problem I know well. You basically need to process it twice - once for the stars, once for the foreground, then combine.

The question about simulating a long exposure is really a daytime-only issue. If you want to get a certain dreamy look on moving water, you can use a really long exposure. Another scenario is that you want to shoot a street scene but have the people disappear.

One way to do this is a 10-stop or 15-stop neutral density filter. But those are a pain to use (you can't see through them!), so they must be applied after composing, and you must use care (and black gaffer's tape) to make sure there are no light leaks. You need an intervalometer because most cameras won't time a 10 min exposure for you. And you need to have the filters and intervalometer with you at the time.

An alternative to a single long exposure is to take multiple shorter exposures and then average them.

My question was about what combination of exposures gives a similar time-integration effect. That isn't about SNR; it is about how fast things move in the frame and how it winds up looking.

In those cases SNR is not really the issue - you have plenty of light.

There is no good software for doing image averaging of terrestrial shots, so Siril would be a very valuable tool for that. I was excited to see that Entropy512 has tried this and was asking him about the various trade-offs.

Ok, I didn't understand very well. Imagine you want to make a picture of a waterfall. If you take 100 shots of 1/100 sec, each of them will have different water patterns in the main stream of the waterfall and droplets in different positions all around it. If you use outlier rejection, you will likely remove all the droplets outside the main stream that you would normally see in a minutes-long exposure. Using average stacking without rejection should work fine, and there should not be a lot of outliers in daylight images anyway. If you still want to remove dead pixels, Siril's cosmetic correction filter will do just that.

That may indeed work quite well :slight_smile:


Yes, that’s a good scenario.

The reason to use outlier rejection is to remove things that are well outside the normal statistical variation.

So, if you have a waterfall, each pixel in the waterfall will be some shade of blue to white, in a statistical distribution. The final averaged picture will be the average tone.

If a brightly colored bird flies in front of the waterfall briefly, those pixels should be outliers in the same way that a satellite or airplane track is for astro photos. The outlier rejection should find and toss them.
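As a rough illustration of what that rejection does (synthetic numbers, and a generic kappa-sigma clip rather than Siril's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 "exposures" of one waterfall pixel: mostly white-ish water ...
stack = rng.normal(loc=0.85, scale=0.03, size=100)
# ... except a few frames where a bright bird crossed in front of it.
stack[40:43] = 0.2

plain_mean = stack.mean()

# Kappa-sigma rejection: drop samples more than k sigma from the median,
# then average what is left.
k = 3.0
deviation = np.abs(stack - np.median(stack))
kept = stack[deviation < k * stack.std()]
clipped_mean = kept.mean()

print(f"plain mean:    {plain_mean:.3f}")    # pulled down by the bird
print(f"sigma-clipped: {clipped_mean:.3f}")  # close to the water tone
```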

In the main stream it is as you say, but a 1/100 exposure will also freeze a statistically unlikely droplet on a few pixels that, in all the other images, may be the colour of the rocks or moss - quite different. In that case it's the droplet that you wanted to keep that will be removed. Special care must be taken to choose appropriate rejection parameters, but that'd be interesting to see!

I agree - the interactions are complex, and you might not always want rejection.

Pure long exposure has no rejection, of course, but just averages everything. And that might be more desirable from an aesthetic standpoint. I think a lot of experimentation is required.

An even stranger thing happens for HDR of moving water. The highlight exposures are short and tend to freeze motion. The exposures for dark areas are long. So this puts a weird look on the moving water - the ability to freeze motion becomes a function of tonality.

This can look awesome, or really bad - depends on the circumstance. I have an approach for that.

The other thing with stacking many short images compared to one long shot is that you get higher precision for the histogram transformation on the final result. If you have 12-bit RAW images, for example, you'll get 16 bits in the stacking result, or, in the new Siril, even 32-bit floating point.
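A tiny demonstration of that precision gain (made-up numbers): a single 12-bit frame can only record whole DN codes, but the float average of many noisy frames recovers the fractional level in between:

```python
import numpy as np

rng = np.random.default_rng(1)

true_level = 100.4   # "real" scene level in DN, between two 12-bit codes
n_frames = 256

# Each frame: true level plus a little noise, quantized to 12-bit integers.
frames = np.clip(np.round(true_level + rng.normal(0, 2.0, n_frames)), 0, 4095)

single = frames[0]        # one frame only knows an integer code
stacked = frames.mean()   # the float average resolves the fractional level

print(f"single frame    : {single:.0f}")
print(f"mean of {n_frames} frames: {stacked:.3f}")  # close to 100.4; needs more than 12 bits to store
```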

Yes that is true!

Siril is built to do stacking of many frames accurately which is just what is needed.

This is also why HDR is best done in Siril. If you shoot at one exposure, the sensor will clip the very brightest areas (when the sensor well is full of electrons).

Basically, the sensor is only good at the math of adding up photons within the limited range where it is linear - outside that it clips.

If instead you take multiple exposures at different exposure values, multiply them by scaling factors (i.e. a weighted average), and do the combination in Siril with 32-bit floating point math, it will work over a much larger range of values.
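As a sketch of that idea (not Siril's actual implementation - the scaling, the clip threshold, and the assumption of linear data are all simplifications):

```python
import numpy as np

def merge_exposures(images, exposure_times, clip_level=0.98):
    """Combine linear images of the same scene taken at different exposure times.

    images: list of float arrays normalized to [0, 1]; exposure_times: seconds.
    """
    acc = np.zeros_like(images[0], dtype=np.float32)
    weight = np.zeros_like(images[0], dtype=np.float32)
    for img, t in zip(images, exposure_times):
        img = img.astype(np.float32)
        valid = img < clip_level              # ignore clipped (saturated) pixels
        acc += np.where(valid, img / t, 0.0)  # scale each frame to a common exposure
        weight += valid
    # Weighted average in 32-bit float; pixels clipped in every frame fall back to 0.
    return acc / np.maximum(weight, 1.0)
```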

Sorry for not responding earlier, I’m on vacation so this will have to be short.

The sort of workflow you describe is exactly how I took a few of my waterfall shots on vacation last year - so far I have only gotten one this year for various reasons (bad weather, too crowded, family too impatient, etc.)

For a long-exposure waterfall shot, I did as follows:
An ND filter is still needed most of the time, but the requirements are significantly relaxed - more later
Set up the camera to preserve highlights in the shot - lowest ISO, narrow aperture
Ideally, the exposure time is at least as long as the camera's continuous drive frame interval, if not longer, because for this use case you want the shutter duty cycle to be as close to 100% as possible. When the shutter closes between shots, you run the risk of some artifacts
In addition, if you're simulating a REALLY long exposure, you want the exposure time to be long enough that the camera is able to write shots to SD faster than it takes them (i.e. the buffer never fills up)
As a result, a 3-stop (ND8) may not be enough; you may need a 5-stop (ND32) for specular highlights off of water in full sunlight
BUT - a 5-stop that doesn't have magenta casts is a LOT less expensive than a 10-stop, you can still see through them well enough to compose, and you don't have to worry nearly as much about light leaks

You also need a way to keep the camera’s shutter going, so you at least need a remote cable release, preferably one with a “bulb” latch, unless your tripod is MASSIVELY overspecced for your camera - because you’re effectively doing a “bulb” shot, but the camera will be splitting your “one” bulb shot into many (by being in continuous shooting mode instead of bulb mode)

Take the shots, and use siril to stack them in averaging mode. In this use case you should be on a solid tripod so you don’t need to use any of the alignment modules - and in fact can strictly average while retaining a Bayer-mosaiced (CFA) image on the output

Take the FITS output and convert it to TIFF
Take that TIFF and rename it to DNG, then apply appropriate metadata for your camera (ColorMatrix1 and the Bayer CFA pattern metadata at a minimum) - I'll dig up a link to an example script to do this when I get back from vacation; a rough sketch of the idea is shown below
At least last year, for this workflow, I couldn’t find any way to have siril output in a format other than signed int16, which is non-ideal when you’re averaging a lot of frames. I need to revisit that to see if I can figure out a way to get better precision, but for my use case, even the DR recorded by a few frames was more than enough and I was primarily shooting more frames for motion smoothing.
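In the meantime, here is a very rough sketch of that FITS -> TIFF -> DNG step. The rescaling and every exiftool tag value below (the CFA pattern, ColorMatrix1) are placeholders - they must be replaced with the values for your own camera, and exiftool's handling of these tags may vary by version:

```python
import shutil
import subprocess
import numpy as np
import tifffile
from astropy.io import fits  # to read Siril's FITS output

# Read the stacked (still Bayer-mosaiced) FITS and rescale to unsigned 16-bit.
data = fits.getdata("stack_result.fit").astype(np.float64)
data -= data.min()
data = (data / data.max() * 65535.0).astype(np.uint16)
tifffile.imwrite("stack_result.tif", data)

# Rename to .dng and tag it so a raw processor treats it as a Bayer CFA raw.
shutil.copyfile("stack_result.tif", "stack_result.dng")
subprocess.run([
    "exiftool", "-overwrite_original",
    "-DNGVersion=1.4.0.0",
    "-PhotometricInterpretation=Color Filter Array",
    "-CFARepeatPatternDim=2 2",
    "-CFAPattern2=0 1 1 2",             # RGGB as an example; use your sensor's pattern
    "-ColorMatrix1=1 0 0 0 1 0 0 0 1",  # identity placeholder; use your camera's matrix
    "stack_result.dng",
], check=True)
```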

Load that resulting DNG into RawTherapee or other processing software, and tonemap/etc to your heart’s content. The end result will be an image similar to what I posted in another thread that you’ve participated in.

Again - I’ll try to post more when I get back from vacation

Alternative workflows, depending on use case, are:
HDRMerge (excels for merging bracketed shots)
Tim Brooks' implementation of Google's HDR+ (excels for merging shorter handheld bursts, ESPECIALLY with motion within the frame, due to Google's tiled align-and-merge approach) - you might be able to find this just by searching for "Tim Brooks HDR+" on Google; otherwise I will dig up a link when I get home

In general, I've always found RawTherapee's DRC module to meet 95%+ of my tonemapping needs. At some point I may play with LuminanceHDR for the remaining 5%, or resume my work on variations of exposure fusion, but within RT instead of darktable. (I'm not sure exactly what trick they used, likely just having the right saturated-pixel metrics, but darktable's variation on enfuse handles subjects lit with monochromatic LED light better than almost anything else I've tried, and I have yet to reproduce it with any postprocessing flow.)