Noise Reduction in HDR workflow for digitising slides

This question likely overlaps with another active thread on here but my purpose is different so rather than divert the other thread…

I’m working on a setup for digitising my collection of mainly Kodachrome 35mm transparencies. My current setup uses a Canon EOS 70D DSLR, bellows and an 80mm enlarger lens.

Previous attempts using two different flatbed scanners and a Plustek film scanner have all produced disappointing results - none of them has been able to capture the detail in the darker areas in spite of my experimenting with exposure settings. From reading others’ experiences, I am not alone.

The DSLR setup has already proved that it is able to do much better by using bracketed exposures and HDR techniques. I’m now trying to optimise an HDR workflow to produce decent results without requiring so many steps that it becomes impractical to do more than a handful of slides.

Noise is still evident when zooming in to the dark areas of the HDR output (but still much better than the scanner results).

Some sites advise that it is best to perform noise reduction on the original images before starting the HDR merge. This reduces the likelihood that the HDR process will treat the noise as ‘valuable detail’.

I’m not clear whether this is possible with Darktable. I’m trying both Darktable’s built in HDR and also an external merge using Luminance HDR.

Using the Darktable merge… if I apply a noise reduction module to the raw files first, will the data passed into the HDR merge have the noise reduction applied?

With Luminance HDR, I’m loading the raw files from the file system so I’m pretty sure that these will be untouched regardless of what I do in Darktable beforehand. Is there a better way? Presumably I could export the noise reduced raws but what would be the best file format to use without losing information? Is there a way to do this that doesn’t require lots of manual steps?

I also do some astrophotography and regularly use multiple exposures at the same settings as a way of reducing noise. I have not found many references to using the same technique for non astro photos. Could it be done in Darktable? For example would the HDR merge be able to make use of multiple exposures at the same settings to reduce noise? … or would another module be able to do it?
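The averaging trick I use for astro can be sketched like this (illustrative numpy on synthetic data, not darktable code): stacking N frames shot at identical settings reduces random noise by roughly √N.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((100, 100), 0.2)          # a flat, dark patch of "scene"
# 16 frames at identical settings, each with independent sensor-style noise
frames = [truth + rng.normal(0.0, 0.05, truth.shape) for _ in range(16)]

single_noise = np.std(frames[0] - truth)  # noise in one frame
stacked = np.mean(frames, axis=0)         # mean stack, as in astro work
stacked_noise = np.std(stacked - truth)   # noise after stacking

print(single_noise / stacked_noise)       # roughly 4, i.e. sqrt(16)
```

With 16 frames the noise drops by about a factor of four, which is why the technique is so effective in dense shadow areas.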

The darktable HDR function is very basic. You’re likely better off with HDRMerge.

Thanks. For some reason I thought HDRMerge was not available for Windows but I checked… and it is.

I will put HDRMerge on the list for evaluation against Luminance HDR and Darktable. I also tried out Picturenaut but seemed to get odd-looking results from it. Plus I’m testing against an in-camera HDR and my best effort at post-processing one of the mid-range exposures.

So… same question for HDRMerge… is there a nice workflow for using it with Darktable that allows noise reduction on the individual images before the merge?

I agree with @mica, the HDR functionality in darktable is very basic.

There is a lua script, image_stack.lua, to do image stacking for noise reduction. It works by providing an additional export target: it exports the selected images, stacks them using mean or median, and imports the result back into the current collection. You could run denoise (profiled) on the images prior to export to further reduce the noise.

Or, you can run denoise (profiled) on the raws, then export them as 16- or 32-bit TIFFs and run them through Luminance HDR.
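To cut down the manual steps in that second route, the export could be scripted. A hypothetical helper sketch — the `--bpp` flag and argument order are my assumptions, so check `darktable-cli --help` before relying on them:

```python
from pathlib import Path

def build_export_cmd(raw_file: str, out_dir: str, bpp: int = 16) -> list[str]:
    """Build a darktable-cli call that exports a raw file (with its
    darktable history applied) to a 16- or 32-bit TIFF in out_dir.
    Flag names are assumptions, not verified against your version."""
    out = Path(out_dir) / (Path(raw_file).stem + ".tif")
    return ["darktable-cli", raw_file, str(out), "--bpp", str(bpp)]

cmd = build_export_cmd("slide_0001.CR2", "/tmp/export")
# Pass cmd to subprocess.run(cmd, check=True) once darktable-cli is on
# your PATH, then feed the resulting TIFFs to Luminance HDR.
```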

Thanks for that. I have not yet tried Lua. Maybe the answer lies in that direction. Could I, for example, use Lua to run an HDR merge using Luminance HDR or HDRMerge?

There is a script to use HDRMerge and a couple to use enfuse.

There isn’t any need to do noise reduction before blending exposures. Doing noise reduction on the merged image gives the NR algorithm more real detail to work with.

I found the page that I was reading regarding NR. I’m certainly convinced enough that I want to try it (i.e. noise reduction as early as possible) for myself.

doh! missed out the link

A tone mapping operator needs noise reduction performed first.

That’s distinct from blending a bracketed series of images, which should be performed prior to noise reduction.

Capture => blending => noise reduction => tone mapping
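A toy numpy sketch of that ordering (blend first, then denoise, then tone map). The triangular weighting, box-filter NR and Reinhard-style curve below are simplified stand-ins of my own, not what Luminance HDR actually implements:

```python
import numpy as np

def blend(frames, evs):
    """Merge bracketed frames into linear radiance (triangular weights)."""
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for f, ev in zip(frames, evs):
        w = 1.0 - np.abs(2.0 * f - 1.0)      # trust mid-tones, not clipped ends
        num += w * f / (2.0 ** ev)           # back out the exposure offset
        den += w
    return num / np.maximum(den, 1e-6)

def denoise(img, k=3):
    """Crude box-filter NR, run on the merged radiance map."""
    pad = np.pad(img, k // 2, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def tonemap(img):
    """Reinhard-style global operator: x / (1 + x)."""
    return img / (1.0 + img)

rng = np.random.default_rng(1)
scene = rng.uniform(0.01, 4.0, (64, 64))     # "true" linear radiance
frames = [np.clip(scene * 2.0 ** ev + rng.normal(0, 0.01, scene.shape), 0, 1)
          for ev in (-2, 0, 2)]
result = tonemap(denoise(blend(frames, [-2, 0, 2])))
```

The point is only the order: the blend sees every frame’s data before any of it is smoothed away, and the tone map runs last on the cleaned radiance.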

That link argues that noise-reducing the individual images is somehow better, but when your photo is of finely detailed film grain it’s important to compile all the data first before applying the NR.

I see what you are saying. I’m going to try several approaches on some test images before I decide on a final workflow… or maybe two workflows - a simple one for the majority of slides and a more complex one for working on the more important photos.

I get the best results by using profiled NR in darktable, then merging with LuminanceHDR.

My workflow starts with darktable, using only TCA and profiled NR and adjusting the white balance (no curve/filmic), export to TIFF. Then I’ll merge exposures with LuminanceHDR (triangular weight, linear curve).

If it’s a single image I’ll tonemap; but if I’m stitching with Hugin I’ll save the EXRs and use those for stitching, then tonemap afterwards.

Thanks Karl. Good to see someone expressing love for Luminance HDR. At the moment it seems to be doing better than the other options for me and it certainly has more options. I was playing with the merge options yesterday. They definitely do make a difference but I need some more time before drawing any conclusions.

I’m still figuring out Lua but hopefully that will allow me to make basic adjustments on the raws before the merge without requiring a lot of extra manual steps.

CA is another question. Many of my originals were taken with a fairly low end 35mm viewfinder camera (hence my username). The lens performance was not great but I’m not sure how practical it is to correct 40 year old lens deficiencies… there is certainly no lens profile for it.

I also have later slides from my Minolta SLR and earlier ones inherited from other family members… so the collection includes slides from at least 5 different 35mm cameras.

No panoramas though… although it is possible that some may show up. I don’t know what’s in some of the slide boxes.

Hi @Halina3000, I digitised a few thousand slides (mostly Fuji Velvia and Kodak Ektachrome) over the last few years, always based on a setup with a digital camera (Sony SLT) and a macro lens. I focused on how to get a high-quality raw file from a single shot. Given the high dynamic range of the sensor, I found no benefit in investing much time in merging bracketed shots. And of course I wouldn’t have had the time to digitise so many slides with that workflow.

Here’s how I handle it - maybe it’s of interest to you:
To find the general exposure setting I take several shots of a test slide containing only a few thin lines for proper focusing. I adjust the exposure until I reach the overexposure warning (watching the histogram and the overexposure warning on the camera) evenly across the full frame. With that high exposure I shoot raw only and afterwards develop all the images in darktable.

The new workflow with filmic, tone equalizer and contrast equalizer, as well as color balance and color calibration, works perfectly for that purpose and brings back all the detail in the shadows.
I didn’t find any advantage in multi-shot digitising when it comes to the darker parts of a slide. In the case of obviously darker slides I sometimes raise the exposure to an even higher level to get all the detail from the shadows.
In general it’s better to “overexpose” a little, as cameras show the overexposure warning and histogram based on the expected out-of-camera JPEG, as far as I know. So my experience is that there’s usually a lot more headroom in the raw file.

Thanks Roland. It’s good to have some info from someone who has done it. I will certainly try that - I don’t want my workflow to have extra steps if they are not adding anything but at the moment I do see significant improvements from stacking bracketed exposures.

I have read that Kodachrome is more difficult than other slide films because the dark areas are particularly dense, and that certainly agrees with my own experience. I also have a lot of slides where the main subject is underexposed - the camera was basic and I didn’t know much about exposure compensation (…or depth of field …or anything much really!) at that time. On the plus side… my skies have plenty of detail 🙂. The subjects of my earliest efforts are, however, often of the most value to me.

For me I’ll make the adjustments to one photo, then copy the history to the rest of the set. The time-consuming part is creating the values I want, which can’t really be automated.

I’m hoping that I can create a style (or perhaps a small number of them) that will work for the majority of slides. We shall see… that is some way off at the moment.

In other news… the choice of HDR merge is turning into something of a project.

Although HDRMerge provides very few things to tweak I did realise that I was missing a trick in not paying better attention to its preview. This proved that HDRMerge was only using three of my six bracketed frames. Running it with just those three frames produced an equally usable result.

I’ve also been trying different options with LuminanceHDR. This makes a considerable difference to the result. I started off without much clue about what the options do because the docs provide no detail.

Today I found a 12-year-old post (written under the tool’s old name, qtpfsgui) which sheds a good deal more light on the options and which ones are worth trying…

Some of my tests were done before I found the above post. My conclusions from 6 stacked frames, with the qtpfsgui profile names noted…

  • Triangular-Linear-Debevec (default) - usable but needs a lot of adjustment, so it’s easy to mess up the result. ‘Profile 1’
  • Triangular-Linear-Robertson - awful. Really odd gradients in several places. I did not take this algorithm further.
  • Gaussian-Linear-Debevec - darker shadow detail. ‘Profile 5’
  • Flat-Linear-Debevec - much brighter shadow detail but looks somewhat over-processed. Possibly fixable by toning it down afterwards.
  • Triangular-Log-Debevec - came out completely black.

The next four use a gamma response curve - preferred by the author of the 2009 post…

  • Triangular-Gamma-Debevec. ‘Profile 2’
  • Gaussian-Gamma-Debevec. ‘Profile 6’
  • Plateau-Gamma-Debevec. ‘Profile 4’
  • Flat-Gamma-Debevec

All of those gave good-looking results with varying brightness in the shadowy areas. Flat was the brightest, followed by Plateau, Triangular and then Gaussian. Not much to choose between Plateau and Triangular to my eye. From these I’d probably go for Plateau-Gamma-Debevec.
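That brightness ordering matches how much weight each function gives to dark pixels. These are my own approximations of the weighting shapes (not Luminance HDR’s actual formulas), evaluated at a shadow value z = 0.1:

```python
import math

# Approximate weighting-function shapes; z is a pixel value in [0, 1].
def triangular(z): return 1.0 - abs(2.0 * z - 1.0)
def gaussian(z):   return math.exp(-((z - 0.5) ** 2) / (2 * 0.125 ** 2))
def plateau(z):    return 1.0 - (2.0 * z - 1.0) ** 12
def flat(z):       return 1.0

for name, w in [("flat", flat), ("plateau", plateau),
                ("triangular", triangular), ("gaussian", gaussian)]:
    print(f"{name:10s} w(0.1) = {w(0.1):.3f}")
# flat       w(0.1) = 1.000
# plateau    w(0.1) = 0.931
# triangular w(0.1) = 0.200
# gaussian   w(0.1) = 0.006
```

Flat trusts shadow pixels fully, plateau nearly so, while triangular and especially gaussian down-weight them heavily - consistent with Flat giving the brightest shadow detail and Gaussian the darkest.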

Finally I reduced the input to the three frames shown as being used in HDRMerge. This produced a much lighter result that needed no adjustment in the exposure module, just stretching in Filmic RGB. I tried just Profile 4 and Profile 6, of which Profile 6 was the better of the two.

In doing this I also figured out that I’d followed the wrong path in my previous 6-frame test. I had not realised that it was OK to use two exposure module instances to push the exposure by more than 3 EV, so instead I was doing the extra adjustment in Filmic RGB. I think this was causing the green cast.

On the question of whether to bracket or not… I reviewed the three raw frames from the final tests individually. All of them have clipping in part of the frame, so my conclusion is that I do need to stack bracketed frames… at least for slides with lots of dark areas.