I have been experimenting with averaging multiple images as a substitute for long exposures. The idea is that I take 20–50 images on a tripod, of a subject with a stationary background and some moving feature which I am trying to smooth out (e.g. water, or people on a busy square).
Once I have developed the images in Darktable, I would like to take a simple per-pixel mean of all the images. I found that I can do this with the composite module, but it is somewhat tedious, as I can only add one image at a time: I combine two at 50% opacity, then add the third at 33% opacity, and so on. This does not scale.
Is there a FOSS alternative that would perform this operation, using either scene-referred (preferred) or display-referred images? A command-line tool would be ideal: e.g. I could export linear TIFFs of the same size, and the tool would spit out a linear TIFF, which I could then edit.
(Also, other similar operations would be great, e.g. min, max, or even quantiles.) If there is no such tool I can write it myself; I just thought I would ask.
(Just to clarify: the images are already aligned, so no alignment step is needed. This is not about HDR; all images have the same exposure parameters.)
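Roughly what I have in mind, as a numpy sketch (the function name is mine, and loading the exported TIFFs into float arrays, e.g. with tifffile, is omitted here):

```python
import numpy as np

def stack_reduce(images, op="mean", q=0.5):
    """Per-pixel reduction over a stack of equally sized images.

    images: iterable of float arrays, all the same shape.
    op: "mean", "min", "max", or "quantile" (with quantile level q).
    """
    stack = np.stack(list(images), axis=0)  # shape (N, H, W, C)
    if op == "mean":
        return stack.mean(axis=0)
    if op == "min":
        return stack.min(axis=0)
    if op == "max":
        return stack.max(axis=0)
    if op == "quantile":
        return np.quantile(stack, q, axis=0)
    raise ValueError(f"unknown op: {op}")

# Tiny synthetic example: three 2x2 single-channel "images".
imgs = [np.full((2, 2, 1), v, dtype=np.float64) for v in (0.0, 0.5, 1.0)]
print(stack_reduce(imgs, "mean")[0, 0, 0])  # 0.5
```

The real cost would be in I/O, not in these reductions.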
Thanks! I found a blog post about it once I knew what I was looking for. It is quite fast, too: it handles the 50–100 images I typically use to get the blur in a few seconds. convert handles both linear TIFF and JPEG.
There is a Lua script, image_stack, in contrib that will do this. It offers several different modes. ImageMagick is used behind the scenes to do the actual processing.
My preferred workflow became: develop a single image in Darktable (lens correction, denoising, basic color correction, but without sigmoid), copy the style over to all the images, and export to 16-bit linear TIFF. I find it much nicer to deal with the color information from the camera this way.
convert from ImageMagick works fine, but I had to raise the resource limit. A 20 Mp image at 16 bits per channel is 20 × 3 × 2 = 120 MB in memory, so 100 such images are 12 GB, etc.
Unless the individual images already have some motion blur themselves (this depends on the subject, but roughly exposures slower than 1/5 s–1/10 s), I found that getting nice smoothing of ripples on water and the like takes around 100–120 images, taken with a 1 s or 2 s gap (more if you want to blur clouds). The processing is fast enough on my desktop, but quite tedious on a laptop (around 20 min total). Most of that time is the export from Darktable.
Handheld images work if you pre-align them, so a “long exposure” without a tripod is possible.
So, I am satisfied with the experiment: now I can take “smooth water” photos on sunny days, where the camera would suggest a 1/4000s exposure at f/5.6.
But I wonder if one could take a shortcut. What I am thinking of is the following: take around 20 images, then for each pixel calculate the mean and the standard deviation across the stack. Use the latter to create a raster mask, and blur using that.
Is it possible to import a raster mask into darktable from a file? Or maybe from the same file, if it is saved as an alpha channel?
ouch. the vkdt video above combines the images in real time, at full resolution, from raw, while you watch. it also averages the data image by image, so it never stores them all in memory at the same time, only two.
and then do temporal averaging, or integrate the resulting gaussian distribution analytically to smooth it more? this is different from spatial blur, and it is what we do for noise reduction in real-time rendering (temporal variance-guided filtering). since i have this code, it would be interesting to repurpose it for your application… sounds like a nice use case! one difference is that in denoising you use the variance estimate to keep sharpness, whereas you want to introduce blur. estimating the mean of a gaussian is just the simple average of the values you collect, so i'm unsure temporal gaussians would give you much (maybe you need spatial blur after all).
No, there is a simple online algorithm for calculating averages.
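What I mean, as a sketch: the incremental update never needs more than one accumulator in memory, and it is the same running average that the opacity trick in the composite module computes one image at a time.

```python
import numpy as np

def online_mean(images):
    """Streaming mean: holds one accumulator, not the whole stack.

    After seeing image x_n, the update is
        mean += (x_n - mean) / n
    which is exactly the running average (blend image n over the
    accumulated result at opacity 1/n: 50%, 33%, 25%, ...).
    """
    mean = None
    for n, x in enumerate(images, start=1):
        x = np.asarray(x, dtype=np.float64)
        if mean is None:
            mean = x.copy()
        else:
            mean += (x - mean) / n
    return mean

imgs = [np.full((2, 2), float(v)) for v in range(5)]  # values 0..4
print(online_mean(imgs)[0, 0])  # 2.0
```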
No, what I am thinking of is much simpler: blur the pixels with high variance across the stack together with their neighbors. This should be sufficient (I hope) for water (lakes, waterfalls), where I am aiming for smoothing. I have yet to figure out the details.
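To make the idea concrete, a rough numpy sketch of what I am picturing (grayscale for simplicity; the naive box_blur here just stands in for whatever blur would actually be applied via the mask):

```python
import numpy as np

def box_blur(img, radius=1):
    """Naive box blur: edge-padded neighbourhood mean over a
    (2*radius+1)^2 window."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def variance_guided_blur(images, strength=1.0):
    """The shortcut: mean the stack, then blur only where the
    per-pixel std across the stack is high (i.e. where things moved)."""
    stack = np.stack([np.asarray(x, dtype=np.float64) for x in images])
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    mask = np.clip(strength * std / (std.max() + 1e-12), 0.0, 1.0)
    return (1.0 - mask) * mean + mask * box_blur(mean, radius=2)
```

On a fully static stack the mask is zero everywhere, so the output is just the plain mean.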
Some time ago, I wrote a little script for stacking a series of hand-held shots for denoising/upscaling (the idea is to scale the images up, align them, and then average them, so it is a similar scenario).
I think I tested both ImageMagick and Hugin for the averaging part back then, and ended up using Hugin's hugin_stacker command.
It supports different averaging methods (“mode”), which you may want to experiment with; depending on the exact application, the ordinary mean might not be the best option. For my application (a fully static scene), I found the Winsorized mean to yield the best results. It filters out extreme outliers, which results in fewer artifacts if the alignment is not perfect.
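As a rough illustration of what a Winsorized mean does per pixel (a numpy sketch, not how hugin_stacker actually implements it): each pixel's values across the stack are clamped to that pixel's own quantiles before averaging, so a single outlier frame cannot drag the result far.

```python
import numpy as np

def winsorized_mean(images, limit=0.1):
    """Per-pixel Winsorized mean across a stack: clip each pixel's
    values to its [limit, 1-limit] quantiles, then average. Outliers
    (a misaligned frame, a passer-by) get clamped instead of skewing
    the mean."""
    stack = np.stack([np.asarray(x, dtype=np.float64) for x in images])
    lo = np.quantile(stack, limit, axis=0)
    hi = np.quantile(stack, 1.0 - limit, axis=0)
    return np.clip(stack, lo, hi).mean(axis=0)

# A pixel that is 0 in four frames and 100 in one (an outlier):
imgs = [np.full((2, 2), v) for v in (0.0, 0.0, 0.0, 0.0, 100.0)]
print(winsorized_mean(imgs, limit=0.2)[0, 0])  # far below the plain mean of 20
```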
Indeed, in astrophotography, where stacking is commonly done, stars do move (apparently) between one photo and the next, so a registration/alignment step has to be performed before stacking.