The darktable magicians have crafted something again - detail mask

Noise is high-frequency, so details and noise will be masked together (there are some mitigation strategies to somewhat reduce the noise sensitivity, but noise is still read as high-frequency nonetheless).

Which is actually the whole conundrum of denoising. You can bet that if it were as easy as masking out the legitimate details, we would already be feasting on DxO Prime's ashes.


Noise is a broadband random signal, just like any ground-truth feature is a broadband signal. The conundrum of denoising is that it is hard for an algorithm to separate random noise structure from actual (sometimes random-looking) image content; to an algorithm, the two can look very similar.

Thanks for the clarification, and of course, what you say makes sense. A lot of times when I denoise, I’ll manually mask around the subject to preserve details where noise isn’t as apparent.

The photo of the mask around the heron was a perfect example. Still, I appreciate this new development, and I’m sure it’s going to help in a lot of ways.

You guys are really outdoing yourselves these days.

I don’t understand what you mean by broadband. It happens at sample scale, it has the dimension of a pixel, it gets removed by a low-pass filter; it’s high frequency to me. I’m talking about spatial frequency here.

The spectral power density of noise is not just high-frequency. In the case of white noise, it’s constant over the whole spatial-frequency spectrum. Pink noise, for example, has a power distribution that goes as \frac{1}{f}. This is what I mean by broadband.

There are many ‘colors of noise’

A low-pass filter (say, a box blur or a Gaussian blur) is a very crude implementation of a denoiser.
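To illustrate the point, here is a toy sketch in Python/NumPy (not how any darktable module works; image and numbers are made up): a Gaussian low-pass filter does reduce noise in flat areas, but it smears a genuine edge just as indiscriminately.

```python
# Toy example: a Gaussian blur as a crude "denoiser".
# It attenuates high spatial frequencies, so it reduces noise in flat
# areas -- but it blurs real detail (an edge) just as readily.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0                               # one vertical edge as "detail"
noisy = clean + rng.normal(0, 0.2, clean.shape)   # add white noise

blurred = gaussian_filter(noisy, sigma=2)

# Noise in the flat region drops noticeably...
print(np.std(noisy[:, :16] - clean[:, :16]))
print(np.std(blurred[:, :16] - clean[:, :16]))
# ...but the edge is smeared too (error right next to the edge):
print(np.abs(blurred[:, 31] - clean[:, 31]).mean())
```

This is exactly the trade-off discussed above: the filter cannot tell broadband noise from broadband structure, so it attacks both.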


I like the results from this one. Have been playing around with it for the last couple of days.

@s7habo Many thanks for this.
I tested the detail mask before the merge and immediately realized it’s a super cool feature.
I was looking for use cases and indeed you provide convincing ones. I really believe you should contribute to the official DT documentation with real use cases before/after, taken from your fantastic edits. One image is worth a thousand words!

Coming to the first case you describe (artificial bokeh), wouldn’t the low pass module be more appropriate than the contrast equalizer? The former effectively blurs areas like an out-of-focus lens; the latter only reduces the contrast.


We try to avoid very detailed “how to” guides with lots of images in the user manual, but I absolutely want to include some brief use cases in the documentation. Just a few sentences though. As @s7habo has illustrated, there are a number of useful ways to use this module but including them all with before/after would be too much for the manual.

We have other places for step-by-step guides. For example, the darktable blog.


I am very excited about this mask, particularly if darktable implements capture sharpening, which I believe is on the way, though I’m not sure which release is targeted.

In regards to the contrast equaliser: isn’t it a wavelets module, and thus already split into high/low frequency? I.e., if we move the rightmost point up to sharpen only high-frequency detail, and then slide the detail mask to affect only high-frequency detail, wouldn’t the detail mask be doing very little? Using your bird image as an example: in the first shot you have contrast equaliser 2 turned off, while in the second you have it turned on. A more apt comparison would be: first shot, contrast equaliser turned on with settings to reduce low-frequency detail; second shot, contrast equaliser as per the first shot, but with the details mask also used. The same goes for the flower image and the forest image.
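For illustration, here is a toy version of the frequency split that the contrast equalizer builds on (the real module uses an edge-aware wavelet decomposition; this sketch is just a Gaussian band split, and all values are made up):

```python
# Rough sketch of a two-band frequency decomposition: the image is
# split into a coarse (low-frequency) and a fine (high-frequency)
# band, and only the fine band is boosted -- the idea behind moving
# the rightmost points of the contrast equalizer up.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
img = rng.normal(0.5, 0.1, (64, 64))

low = gaussian_filter(img, sigma=4)   # coarse detail
high = img - low                      # fine detail

boosted = low + 1.5 * high            # boost only the high-frequency band

# The decomposition is exact: low + high reconstructs the image
print(np.allclose(low + high, img))
```

The detail mask then decides *where* this boost is applied, which is orthogonal to *which frequency band* it targets; that is why the two can still be usefully combined.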

This is theoretically more correct, but you will increase halos around the sharp subject:

lowpass with contrast mask

You can do this in GIMP by cutting out the subject, copying it to the layer above the original and replacing the subject with the surrounding background in the original.

By the way, this is also a nice usage - an interesting way to get the Orton effect: :blush:





with harmonic mean blend mode:


Yes, but it does it over the whole photo. In the case of the photo with the heron: if I reduce the coarse part of the local contrast, the heron will also be affected, i.e. it will look sharp but very flat, because it will lose the coarse part of the local contrast just like the background:


The detail mask, especially in combination with a small amount of feathering, ensures that the sharp areas are not affected at all, i.e. the subject keeps its sharpness and coarse local contrast.

The same logic applies to the rest of the photos from the demonstration above.

I am ready to provide the content and material for blog entries, if someone is willing to accompany me technically and to correct the content and language.


Thanks @s7habo for posting / explaining and thanks to the dev-team ! Didn’t notice this nice little feature.


I’m sure @paperdigits can help with the technical stuff and I’m more than happy to help with copy-editing (and technical stuff if I can). As a starter, you could take a look at the blog post that was made for the last darktable release, here.

Also, we’ll be doing a new blog post announcing darktable 3.6, and some brief examples would be useful there as well.


That’s noise over a temporal wave. Photosites bin photons and integrate the light-wave magnitude over time; we don’t have Hz and we certainly are not in Fourier space. Any off-the-shelf image denoising algorithm is a low-pass filter, no matter how clever it is.


Pure genius. Thanks to those who implemented it.

Advancing in leaps and bounds. With plenty of time on my hands due to lockdown, I have gone over my 2020 photo collections twice already. Each pass has dramatically improved many photos, using proper scene-referred workflow, then using color calibration properly, then using the beta version of colorbalancergb.
Now I’ll have to go back and use the parametric mask with the details threshold slider.

Where will it all end???


Hopefully, it will not end :slight_smile:


I don’t know where to start since this is information theory 101.

Not what I meant. Take Matlab or Python or whatever, generate a 2D ‘pixel’ grid and fill it with your favourite noise type. Then do an FFT and look at the spectrum of spatial frequencies. Does that plot contain just high frequencies? Remember: a Fourier transform is ‘just’ switching the basis in which you represent your measurements. So if you want to, yes, you certainly are in Fourier space. That’s exactly why I chimed in, as I saw a… let’s say bold statement… regarding the spatial frequencies of noise (you changed the basis set in order to make that statement anyway).

Well. Noise is a random distortion of whatever you want to measure, however you sample it. Spatial frequencies may not be normalized to the unit of seconds but to whatever image dimension you want (picture height, block size), but the same principles apply. Take a photo of a flat surface in low light and do an FFT of the resulting image; you’ll see that the noise spectrum contains all frequencies at various intensity levels.
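For anyone who wants to try the experiment described above, a minimal NumPy sketch (toy grid size; the band limits are arbitrary choices for illustration):

```python
# Fill a 2-D "pixel" grid with white noise, take its FFT, and compare
# the mean power in a low-frequency band against a high-frequency band.
# For white noise the spectrum is flat, i.e. broadband, not
# concentrated at high frequencies.
import numpy as np

rng = np.random.default_rng(42)
n = 256
noise = rng.normal(0.0, 1.0, (n, n))

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(noise))) ** 2

# Radial distance of each FFT bin from the zero-frequency centre
y, x = np.indices((n, n))
r = np.hypot(x - n // 2, y - n // 2)

low_band = spectrum[(r > 2) & (r < n // 8)].mean()   # low spatial frequencies
high_band = spectrum[r > 3 * n // 8].mean()          # high spatial frequencies

print(low_band / high_band)   # close to 1 for white noise: the spectrum is flat
```

Swapping in pink noise instead would make this ratio climb well above 1, which is the 1/f power distribution mentioned earlier in the thread.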

No. Take, for example, DCT denoising or wavelet denoising. They do coefficient shrinkage either in the discrete cosine frequency domain or in the wavelet domain (which is a more complex change of basis than a Fourier transform, being multiscale by the nature of the transform). nlmeans has problems with low-frequency noise, and that’s one of the reasons why multiscale denoising schemes are investigated: multiscale, in order to attack not just the high frequencies, unlike algorithms that out of the box just chuck away high-frequency detail.
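A minimal sketch of the coefficient-shrinkage idea in the DCT domain, assuming SciPy’s `dctn`/`idctn`; the image and the threshold are made up for illustration, and real DCT denoisers work on overlapping patches rather than the whole image:

```python
# Toy DCT-domain denoising via soft thresholding: transform, shrink
# small coefficients toward zero (mostly noise), transform back.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
clean = np.outer(np.sin(np.linspace(0, np.pi, 64)),
                 np.sin(np.linspace(0, np.pi, 64)))   # smooth toy "image"
noisy = clean + rng.normal(0, 0.1, clean.shape)

coeffs = dctn(noisy, norm="ortho")
thresh = 0.3                                          # hypothetical threshold
shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - thresh, 0.0)
denoised = idctn(shrunk, norm="ortho")

# Reconstruction error should drop after shrinkage
print(np.std(noisy - clean), np.std(denoised - clean))
```

Because the smooth image concentrates its energy in a few large coefficients while white noise spreads evenly over all of them, thresholding removes mostly noise, which is precisely the change-of-basis argument made above.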

The detail mask discussed here is one of many tools in denoising used to restrict effects to ‘flat’ areas. In terms of frequency, those are low-frequency. Leaving edges alone is very important, because the Fourier transform of an edge is…? You guessed right: broadband in spatial-frequency terms. You have broadband noise and broadband features, and you need algorithms to distinguish random noise from real structure.

Of course the detail mask is good for many other things as well like we see in this thread (feature selective sharpening and local contrast adjustments).



Very kind of you to mention me, though I wasn’t involved in the darktable implementation at all. I just implemented it (though a bit differently) in RawTherapee a while ago.

Anyway, I’m glad that @hannoschwalm is porting some RawTherapee stuff (the details mask, RCD demosaic and dual demosaic, maybe also capture sharpening) to darktable :slight_smile:


That’s still your idea and it’s available to everyone. Also, your willingness to share, help and be open to people from other projects is for me the best example of how to treat each other in Free Software community and that alone is worthy of praise.


Ingo’s (@heckflosse) work (and lots of mails) inspired me to take the road in the right direction :slight_smile: Yes, capture sharpening is one of the algorithms on my agenda, although there is so much to do for dt and my time is limited too :slight_smile: