averaging aligned shots

I have been experimenting with averaging multiple images as a substitute for long exposures. The idea is that I take 20–50 images on a tripod, of a subject with a stationary background and some moving feature that I am trying to smooth out (e.g. water, or people on a busy square).

When I have developed the images in Darktable, I would like to take a simple per-pixel mean over all images. I found that I can do this with the composite module, but it is somewhat tedious, as I can only add one image at a time: I combine two with 50% opacity, then the third with 33% opacity, etc. This does not scale.

Is there a FOSS alternative that would perform this operation, using either scene-referred (preferred) or display-referred images? A command-line tool would be ideal: e.g. I could export linear TIFFs of the same size, and the tool would spit out a linear TIFF, which I could then edit.

(Also, other similar operations would be great, e.g. min, max, or even quantiles.) If there is no such tool, I can write it myself; I just thought I would ask.

(Just to clarify: the images are already aligned, so no alignment is needed. This is not about HDR; all images have the same exposure parameters.)
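
In case it helps make the request concrete, here is a minimal pure-Python sketch of the operation I am after, assuming the images have already been decoded into equal-sized 2-D arrays of floats (a real tool would of course read the TIFFs with an image library; the function and variable names are just illustrative):

```python
from statistics import mean, median

def stack_stat(images, stat=mean):
    """Apply `stat` per pixel across a stack of equal-sized 2-D arrays."""
    rows, cols = len(images[0]), len(images[0][0])
    return [[stat([img[r][c] for img in images]) for c in range(cols)]
            for r in range(rows)]

# Three tiny 1x2 "images" standing in for a stack of decoded frames.
stack = [
    [[1.0, 2.0]],
    [[3.0, 4.0]],
    [[5.0, 6.0]],
]
avg = stack_stat(stack)             # per-pixel mean   -> [[3.0, 4.0]]
low = stack_stat(stack, stat=min)   # per-pixel minimum -> [[1.0, 2.0]]
med = stack_stat(stack, stat=median)
```

Any per-pixel statistic (min, max, median, quantiles) drops in the same way by swapping the `stat` callable.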

1 Like

Hugin can perhaps do this.

ImageMagick will do it: convert img*.jpg -average avg.jpg

4 Likes

The G'MIC suite has filters for both the average and the median of a bunch of layers.

2 Likes

Thanks! I found a blog post about it once I knew what I was looking for. It is quite fast too: it handles 50–100 images (which I typically use to get blur) in a few seconds. convert handles both linear TIFF and JPEG.

2 Likes

There is a Lua script, image_stack, in contrib that will do this. It offers several different modes. ImageMagick is used behind the scenes to do the actual processing.

4 Likes

@Entropy512 has this:

Post: Color Calibration: Remove tint from ND filter - #16 by Entropy512
Package: pyimageconvert/pystack.py at master · Entropy512/pyimageconvert · GitHub

1 Like

there’s an accum node for a simple mean (not median) in vkdt:

here it’s averaging one of andabata’s focus stacks, so it kinda does the opposite of what you’d want for this kind of application :)

1 Like

So, if anyone is interested:

  1. my preferred workflow became: develop a single image in Darktable (lens correction, denoising, basic color correction, but without sigmoid), copy the style over to all images, and export to 16-bit linear TIFF. I find it much nicer to deal with the color information from the camera this way.

  2. convert from ImageMagick works fine, but I had to raise its resource limit. A 20 MP image at 16 bits per channel is 20·10⁶ pixels × 3 channels × 2 bytes = 120 MB in memory; 100 such images are 12 GB, etc.

  3. Unless the individual frames are blurred themselves (which, depending on the subject, takes exposures of roughly 1/5 s–1/10 s or slower), I found that nice smoothing of ripples on water and similar takes around 100–120 images, shot with a 1 s or 2 s gap (more if you want to blur clouds). The processing is fast enough on my desktop, but quite tedious on a laptop (around 20 min total); most of the time is the export from Darktable.

  4. Handheld images work if you pre-align them, so a “long exposure” without a tripod is possible.

So, I am satisfied with the experiment: now I can take “smooth water” photos on sunny days, where the camera would suggest a 1/4000s exposure at f/5.6.

But I wonder if one could take a shortcut. What I have in mind is the following: take around 20 images, then for each pixel calculate the mean and the standard deviation; use the latter to create a raster mask, and blur using that.
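
A rough sketch of the mean-plus-mask idea, again on plain nested lists (the threshold value and names are made up for illustration; a real version would tune the threshold and feather the mask):

```python
from statistics import mean, pstdev

def variance_mask(images, threshold):
    """Per-pixel mean of the stack, plus a binary mask flagging pixels
    whose values vary strongly across the stack (candidates for blur)."""
    rows, cols = len(images[0]), len(images[0][0])
    means, mask = [], []
    for r in range(rows):
        mrow, krow = [], []
        for c in range(cols):
            vals = [img[r][c] for img in images]
            mrow.append(mean(vals))
            krow.append(1 if pstdev(vals) > threshold else 0)
        means.append(mrow)
        mask.append(krow)
    return means, mask

# First pixel is stable (sky), second flickers (rippling water).
stack = [[[0.5, 0.2]], [[0.5, 0.8]], [[0.5, 0.5]]]
m, k = variance_mask(stack, threshold=0.1)
# m -> [[0.5, 0.5]], k -> [[0, 1]]: only the water pixel gets flagged.
```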

Is it possible to import a raster mask in darktable from a file? Or maybe from the same file, if it is saved as an alpha channel?

Not that I’m aware of.

1 Like

ouch. the vkdt video above combines the images in real time, full res, from raw, while you watch. also it averages the data image by image, so never stores them all in memory at the same time, only two.

and then do temporal averaging, or integrate the resulting gaussian distribution analytically to smooth it more? this is different from spatial blur, and it is what we do for noise reduction in real-time rendering (temporal variance-guided filtering). since i have this code, it would be interesting to repurpose it for your application; sounds like a nice use case! one difference is that in denoising you use the variance estimate to preserve sharpness, whereas here you want to introduce blur. estimating the mean of a gaussian is just the simple average of the values you collect, so i’m unsure temporal gaussians would gain you much (maybe you need spatial blur after all).

2 Likes

By coincidence, I learned yesterday that that is how the Sigma ‘fp L’ camera can go down to ISO 6! … ‘extended’ indeed!

Otherwise, nothing to offer beyond what has already been said, sorry.

1 Like

So the result should be different from a “global” or “classic” average?

No, there is a simple online algorithm for calculating averages.
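
The online update is just one line: each new sample nudges the running estimate, so only the current frame and the accumulator ever live in memory. A sketch:

```python
def running_mean(samples):
    """Incremental (online) mean: holds only the running estimate and
    the current sample, never the whole series, in memory."""
    mean = 0.0
    for n, x in enumerate(samples, start=1):
        mean += (x - mean) / n   # classic incremental-mean update
    return mean

result = running_mean([1.0, 2.0, 3.0, 4.0])  # -> 2.5
```

For image stacks, the same update is applied per pixel as each frame arrives.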

No, what I have in mind is much simpler: blur pixels that show high variance across the stack together with their neighbors. This should be sufficient (I hope) for water (lakes, waterfalls), where I am going for smoothing. I have yet to figure out the details.

Some time ago, I wrote a little script for stacking a series of hand-held shots for denoising/upscaling (the idea is to scale images up, align and then average them, so similar scenario).

I think I tested both ImageMagick and Hugin for the averaging part back then and ended up using Hugin’s hugin_stacker command:

hugin_stacker --mode=winsor --output=out.tif input1.tif input2.tif ...

It supports different averaging methods (“mode”), which you may want to experiment with. Depending on the exact application, an ordinary mean might not be the best option. For my application (a fully static scene), I found the Winsorized mean to yield the best results: it filters out extreme outliers, which results in fewer artifacts if the alignment is not perfect.
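
For intuition, here is a toy Winsorized mean in Python. This is only an illustration of the general technique; hugin_stacker's exact parameters and clipping rule may differ:

```python
def winsorized_mean(values, trim=0.2):
    """Clamp the lowest and highest `trim` fraction of samples to the
    nearest kept value, then take the plain mean. Outliers (e.g. from
    slight misalignment) pull the result far less than in a raw mean."""
    s = sorted(values)
    k = int(trim * len(s))
    if k:
        s[:k] = [s[k]] * k          # clamp low outliers up
        s[-k:] = [s[-k - 1]] * k    # clamp high outliers down
    return sum(s) / len(s)

# One misaligned frame produced the outlier 100; the plain mean would
# be 26.8, while the Winsorized mean stays near the bulk of the data.
result = winsorized_mean([1, 10, 11, 12, 100], trim=0.2)  # -> 11.0
```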

2 Likes

Is it possible to do that with Lua, even if it takes very long? Stacking directly from raw files gets much better results in signal-to-noise ratio.

Just to clarify: I am stacking the linear input after lens correction and demosaicing, so no additional noise is introduced.

Even with a tripod, I don’t think images are aligned at the pixel level, so it makes sense to demosaic IMO.

Indeed, in astrophotography, where stacking is commonly done, the stars (apparently) move from one photo to the next, so a registration/alignment step has to be done before stacking.

Using Lua, the files are exported to a non-raw format, then stacked with an external program.

I see… Thank you!