Best workflow for stacking raw images to be developed in darktable

darktable’s philosophy, for now, is not to deal with combinations of images, so I sometimes have to do parts of my workflow outside of darktable. Today my task is to align several handheld shots of a moving subject so that I can stack them and make the subject appear several times in the final image.

My workflow for this involves hugin/align_image_stack for alignment, GIMP for stacking, and darktable for … and here the question starts.

The roundtrip I imagine is darktable (basic development, to have something to feed into Hugin) → Hugin (alignment) → GIMP (masking and stacking) → darktable (final development). I have done this in several ways before, but I wonder what the best way to deal with it is:

  • What file formats would be best for the exchange? Since grading etc. comes at the end, I assume a linear, high-bit-depth/float representation should be kept throughout, but that’s just an assumption …
  • Which steps should be done in the first darktable pass? I imagine everything that corrects the single frame, e.g. demosaicing and lens correction. What about denoising? Anything else?

I guess that, since this is a pretty common task, there must already be material that covers my questions and I was just unable to find it. Any pointer is welcome.


I did something similar recently, except that I was stitching a panorama instead of stacking. I’m not sure if this is the ideal way to do it, but it seemed to work well enough.

In darktable I made a style and an export preset to prepare the individual images. The style includes:

  • raw black / white point
  • white balance set to “camera reference”
  • highlight reconstruction if needed
  • demosaic
  • lens correction if needed
  • input color profile set appropriately for the camera, working profile set to linear Rec2020 RGB
  • output color profile set to linear Rec2020 RGB

I export the images as 16-bit linear TIFFs, stitch those in Hugin, output a 16-bit TIFF, and open that TIFF in darktable to do the rest of the processing. At this point color calibration is added to properly white balance the image.
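To sanity-check what actually came out of the export, something like this works (a minimal sketch, assuming tifffile is installed; the file names are just placeholders):

```python
# Check dtype and value range of the exported intermediates.
import tifffile

for name in ["frame_0001.tif", "frame_0002.tif"]:  # placeholder names
    img = tifffile.imread(name)
    # Expect uint16 for a 16-bit integer export, float32 for a float TIFF.
    print(name, img.dtype, img.shape, img.min(), img.max())
```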


I do the same as @paolod, but I also correct the exposure and set the white balance before exporting. If you do that, copy and paste the same history stack to all images.

You may also find it easier to match the images if you do an export in sRGB first, because the linear file might be a little dark. If you align the sRGB images first, you can then re-export them in linear Rec2020, reuse the same .pto file and (re)run the Hugin batch.
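For illustration, the reason a linear file looks dark is simply that no display transfer curve has been applied yet. A minimal numpy sketch of the standard sRGB encoding shows the size of the effect:

```python
import numpy as np

def linear_to_srgb(x):
    """Standard sRGB OETF: scene-linear [0,1] -> display-encoded [0,1]."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)

# Middle gray: 18% in linear light encodes to about 46% in sRGB,
# which is why an un-encoded linear TIFF looks "a little dark".
print(linear_to_srgb(np.array([0.18])))  # ~[0.4613]
```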

Did you know that you can use masks in Hugin to include/exclude certain parts of the image? That could save you the trip to GIMP.


I am using Hugin 2015.0.0.cdefc6e53a58, by the way, in case this is relevant. Yes, I do need to upgrade my computer, but that’s another topic …

@paolod, I tried your workflow and ran into exactly the issues @Juha_Lintula mentioned, plus some additional ones.

  • The pictures are dark, which makes it difficult to set masks.
  • It is difficult to get the exposure right for the result file from Hugin. I had to use a combination of exposure and brightness in the basic adjustments, plus an additional RGB curve on top of filmic, and the result is still not what I want. Unfortunately I cannot get it to something I like, whereas even the OOC JPEG of a single frame looks great in comparison.
    [screenshot: RGB curve]
    This is the curve I need in addition to filmic to get at least a little closer. All other settings that make better use of the image’s dynamic range without the curve are either way too dark or lead to clipping on the bright neon-yellow bicycle helmet of my son …
  • What about other file formats? Hugin itself suggests EXR instead of TIFF. And if TIFF, which compression would work best for the steps I have here? All the files are extremely heavy, and maybe there is a way to save some space …

Other than that, Hugin did an incredible job with the masking. This was a great hint; it entirely removed GIMP from this task :smile:.

Unfortunately I cannot show the pictures here, because my children are in them and I don’t want to share them publicly before they can decide for themselves, but hopefully I will shoot some extra frames next time to have a basis for tests …

One question: when you import the TIFF from Hugin into dt, have you changed your input color profile? If I recall correctly, you have to change the input profile manually from standard color matrix to Rec2020.


Oh, I could have thought of this myself … Yes, that did the trick :smiley:. Thank you!

Hm, here’s the next issue:

The masking leaves edges in the image. While all images were shot with the same settings, there’s still some difference in the noise structure, as seen e.g. here:

Denoising helps a bit, but the edges are still visible:

Any idea what would be the best way to deal with this? Denoising the input images would probably help, but the edges might remain anyway.

It’s difficult to understand why there is so much difference in noise characteristics between those images; a comparison of the two source images is given below:

I don’t think there is much difference in exposure or noise characteristics between them; any idea what’s going on here?

If you’re certain there’s no exposure difference between the frames, there is the possibility that stacking wasn’t performed in that area for some reason (false detection of a moving object?) and you’re seeing the noise of a single (reference) frame rather than the “denoised” average of the surrounding flat area.
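This is easy to reproduce with synthetic data: averaging N frames reduces the noise standard deviation by roughly a factor of sqrt(N), so a single-frame region stands out next to the cleaner averaged surround. A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 5
# Five synthetic "flat" frames: constant signal plus Gaussian noise.
frames = 0.5 + rng.normal(0.0, 0.02, size=(n_frames, 256, 256))

single = frames[0]              # what a masked ("include") region keeps
averaged = frames.mean(axis=0)  # what the blended surround gets

print(single.std())    # ~0.020
print(averaged.std())  # ~0.020 / sqrt(5) ~ 0.009
```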


This makes perfect sense, as I told Hugin to “include” the relevant areas and said nothing about the rest of the image. This leaves four options:

  1. Avoid averaging in Hugin. Is there a way to do this? Does it make sense?
  2. Do the masking in GIMP.
  3. Denoise before Hugin so that the averaging does not have as much impact.
  4. Denoise the final composition.

While I have my doubts about 4, 3 may also have issues depending on the input images, and 2 would again complicate the workflow tremendously. And 1? Maybe together with smooth masking? Is that even possible? Which option would you use?

Suggestions

1. Denoise the source image(s) that contribute to this particular noisy region. You could do selective denoising on them, but no guarantees that it won’t cause other artifacts. Experiment!

2. Is there median blending instead of averaging? I forget; I haven’t used Hugin or enfuse in ages …

3. Duplicate the non-noise-contributing images as input. There is no rule that says you can’t, and the quality of the output will increase. This is a quick and easy way to change the weight distribution of the algorithm. (A sketch of both ideas follows below.)
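For illustration, both ideas (median instead of average, and duplicates acting as weights) in a minimal numpy sketch, independent of what Hugin/enfuse actually expose:

```python
import numpy as np

rng = np.random.default_rng(1)
# Five aligned frames: flat signal plus noise.
frames = 0.5 + rng.normal(0.0, 0.02, size=(5, 128, 128))

mean_stack = np.mean(frames, axis=0)      # plain average
median_stack = np.median(frames, axis=0)  # robust to outliers such as a moving subject

# Inserting an image d times into the stack is equivalent to giving it
# weight d in a weighted mean; here frame 0 counts four times as much:
weights = np.array([4.0, 1.0, 1.0, 1.0, 1.0])
weighted_stack = np.average(frames, axis=0, weights=weights)
```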


I would also experiment with denoising the source images first. I would suggest trying denoise (profiled) > wavelets, chroma only.

You can also try the two blender options in Hugin’s Stitcher tab (the last option), and you can experiment with the different enblend options, but from a quick look I didn’t get an idea of what to use.


I’ll try denoising the source images. However, if there are options to denoise the final image instead, I would prefer that, because whether and how much noise is disturbing in a particular image depends a lot on the processing. I may even like to keep (some) noise, e.g. in b/w landscapes.

I may be wrong here, but to my understanding median blending would change the statistical characteristics of the noise as well, and I cannot see why it should change things in this particular example. I have a mask that selects a (different) part of each of five images, namely the moving subject (my son doing his very first trick on inline skates). The rest of the image results from blending, which means changed noise characteristics. Probably manual blending is the best option, unless I can tell Hugin to blend only in the transition region and to use a maximum of two images there. Anyway, I would test median, but I cannot find an option for it.
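For illustration, the “blend only in the transition region, maximum two images” idea is essentially a feathered mask; a minimal sketch, assuming numpy and scipy, with synthetic frames:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
base = 0.5 + rng.normal(0.0, 0.02, size=(128, 128))     # background frame
subject = 0.5 + rng.normal(0.0, 0.02, size=(128, 128))  # frame containing the subject

# Hard mask: take the left half from the subject frame ...
mask = np.zeros_like(base)
mask[:, :64] = 1.0
# ... then feather it so only a narrow strip is actually blended.
soft = gaussian_filter(mask, sigma=3.0)

result = soft * subject + (1.0 - soft) * base
```

Outside the feathered strip, each pixel comes from exactly one frame, so the noise characteristics stay those of a single image.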

I am not sure I understand correctly. I see that I can change the weight distribution by duplicating single images, but it would take a lot of duplicates of one image to counter the other four. Thinking about it, there must be an option in Hugin to change weights …

I already tried that, without success.

Yes, something I have to look into as well.

I like EXR. It is 32-bit float instead of 16-bit, and lossless.


I’m just getting started with macro and focus stacking, so I’ve been experimenting quite a bit. For now I select one image and apply white balance, exposure, filmic and local contrast, and possibly make minor changes using color balance. Everything is global and without parametric masks, as my objective is to make each image as uniform as possible. I haven’t applied sharpening or denoising, but that might change.

Then I copy those same settings to the rest of the stack, export as TIFF, perform the stacking in some other software, and export as TIFF once again to finish in dt.

I haven’t had much luck with Hugin/Enfuse or any of the other FOSS offerings, so I’m testing Affinity, Zerene, and Helicon.


Only denoise (globally or locally) the images that give you issues in the output.

The algorithm will normalize the weights you set, which is harder to reason about than inserting more than one copy and letting that act as a weighting. Both methods work, but I find the latter easier.

If my suggestions don’t work, then you can at least say you have tried. It depends on what settings or tools you use; Hugin’s GUI changes according to what you select. For certain options, I think there is a way to enter the command-line options directly.


My issue with Hugin is colour management. I haven’t used it in a long time, but something was amiss last time: that, and the bit depth or value range.

(Edited for clarity.)


Why so much fiddling before stacking? I convert my raws to TIFF (or, when it’s not important, like product pix for eBay, I set the camera to JPEG), stack with Affinity, save as TIFF (only if the image has importance, mostly landscapes), and then do further edits in darktable.

Before that I tried Hugin (terrible results) and Picolay (free and quite good). However, with Affinity I can edit individual stack layers to remove artifacts.


My theory is that I have better control over the basic image while in raw, particularly for white balance. It’s really not that much effort … a few minutes on one picture, then copy and paste to the others.

I just couldn’t get Picolay to work consistently. Affinity is OK and I do like the editing features, but it seems like I get a lot of haloes. Have you tried Zerene or Helicon?


I tried the total Hugin/Enfuse failures with Picolay and they were perfect with default settings. But my understanding is that you always have to start from zero if you want to change a setting. I have not tried the last two; I think they are rather expensive for the occasional stacker.

I use it mainly for product pix for eBay etc., very small parts, so I don’t have extremely high requirements. I haven’t seen haloes in Affinity, but you can always go into the halo-causing layer in the stack and edit that. Once that is good, I TIFF it and go on with darktable if there is a need for image edits beyond stacking.

EXR, I guess? I’ll try it :smiley:.

That’s exactly what I want to avoid, as I try to do as much editing as possible on the final image, which makes cumbersome loops unnecessary. Maybe you could also try the workflows described by @paolod, @Juha_Lintula and others above, to see if there is some benefit for you.

I prefer free/libre/open-source software for many reasons, and one of them is definitely to avoid lock-in. Besides, those tools do not run on Linux, which makes them impossible for me to use.

And how to do this right was (part of) the initial question :smiley:.

Picolay is only free as in beer, which is a double no-go for me. But I wonder what issues you had with Hugin. Maybe you could post a question about your particular issues so that others can benefit from possible resolutions :smiley:.

This is possible with GIMP or Krita as well :smiley:.

That’s true, but it becomes a lot of effort to change something and do the stacking again, as exporting TIFFs from darktable takes a lot of time (depending on the edit and the computing power, of course), and processing the stack takes much time as well.

I guess I need to come up with an example, as I have issues with all of them :wink:.

I cannot even find a way to set weights. The problem is that I want Hugin not to compute a stack but, for each position, to select a pixel from only one of the base images. I tried to follow your advice, but at least with my rather old Hugin version I cannot find a way to do so.

About duplicating source images: I would have to insert the base picture many times, correct? What would be a good number? I have four other pictures that shall be overruled by the base picture. The more copies I insert, the longer it takes on my old hardware, so an educated guess about a good starting point would be extremely helpful.
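The arithmetic itself is simple: with d copies of the base frame plus the 4 others, the base frame’s share of a plain average is d / (d + 4):

```python
# Weight of the base frame when inserted d times alongside 4 other frames.
for d in (1, 4, 16, 36):
    print(d, d / (d + 4))  # 0.20, 0.50, 0.80, 0.90
```

So to make the base image dominate at, say, 80%, you would already need around 16 copies, which indeed gets expensive on old hardware.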

TIFF can be 32-bit float and lossless as well, but I don’t know if Hugin/PanoTools/Enblend support that.

With EXR you have to be a bit more careful with color management. By default I think sRGB/Rec709 primaries and D65 white point are (silently) assumed…

An added benefit of TIFF is the ability to carry color profiles and other metadata that make color management a bit more convenient.

Btw, both EXR and TIFF can also carry 16-bit “half float” (as opposed to 16-bit integer) data, which still provides plenty of dynamic range at half the storage space. Again, I don’t know what’s been implemented in Hugin/PanoTools/Enblend, but dt can read them if need be.
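A quick way to see the storage trade-off is to write the same data as float32 and float16 TIFFs (a sketch assuming tifffile is installed; whether a given stitching tool reads the result back is another matter):

```python
import os
import numpy as np
import tifffile

# Synthetic stand-in for a stitched intermediate.
img = np.random.default_rng(3).random((512, 512, 3), dtype=np.float32)

tifffile.imwrite("full.tif", img)                     # 32-bit float
tifffile.imwrite("half.tif", img.astype(np.float16))  # 16-bit half float

print(os.path.getsize("full.tif"), os.path.getsize("half.tif"))
# half.tif is roughly half the size of full.tif (uncompressed).
```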
