In darktable 2.7, I noticed there have been big changes in the denoise modules, and I also noticed a new one, contrast equalizer.
Both offer great results and I must say I’m impressed.
However, I hadn’t heard anything about them; they just showed up unnoticed, as far as discuss is concerned (or did I miss a post?)
I’d really appreciate hearing the inside story of these changes: a presentation of what drove them, the drawbacks, that kind of thing.
I know devs program and writers write, but it would be great to have some more context on these new/revamped tools.
Anyway, thank you to the devs who put these things into place.
Contrast equalizer is not new, just renamed because there is also tone equalizer now.
Contrast equalizer is just the old equalizer, renamed when the tone equalizer appeared, both because I anticipated questions about their differences and because the new name makes its purpose more obvious.
In the past, many people didn’t get what the equalizer does, and it’s honestly difficult to understand unless you have some grasp of Fourier analysis.
It uses a kind of magic (edge-aware wavelets) to decompose an image into frequency layers (the same stuff as the retouch module, which lets you visualize the actual frequencies), and applies a contrast decrease or increase over these layers, resulting in local contrast modification (thus, perceived sharpness).
In image processing, if you blur a picture, you get a low frequency layer that represents the general structure of a picture (low pass filter in darktable):
Base image / Blurred image:
If you subtract the blurred image from the original, you get the high frequency layer that represents the texture of a picture (high pass filter in darktable):
What the sharpen module in dt does is apply this high-frequency layer on top of the original picture in overlay mode, letting you apply a contrast enhancement over it to increase the perceived sharpness:
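The blur / subtract / re-add steps above can be sketched in a few lines of Python. This is only an illustration of the principle: a naive box blur stands in for the real Gaussian blur, and the overlay blend is simplified to a plain addition.

```python
import numpy as np

def box_blur(img, radius=1):
    """Naive box blur, i.e. a low-pass filter: average over a (2r+1)^2 window."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / size ** 2

def sharpen(img, amount=0.5, radius=1):
    low = box_blur(img, radius)   # low frequencies: general structure
    high = img - low              # high frequencies: texture
    return img + amount * high    # boost the high frequencies

# A hard vertical edge between a dark and a bright area:
img = np.array([[0., 0., 0., 1., 1., 1.]] * 6)
sharp = sharpen(img, amount=1.0)
# The transition now overshoots on both sides of the edge: that overshoot
# is the halo problem you get when the blur is not edge-aware.
```

Running this on the toy edge shows values dipping below 0 on the dark side and exceeding 1 on the bright side, which is exactly what a halo looks like.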
But this causes two problems:
- you quickly get halos around shapes, because the blurring is not edge-aware, so it blurs everything indiscriminately, and consequently sharpens everything later,
- you don’t get separate control over the fine texture (higher frequencies), the coarser texture (lower high frequencies), and the structure (low frequency), so it’s easy to overdo the setting…
The equalizer splits the image into layers of decreasing frequency, each under separate control, using an edge-aware technique (authored by chief-geek @hanatos IIRC). Demo with 4 high-freq + 1 low-freq (in decreasing order):
The equalizer/contrast equalizer allows you to affect the contrast in each frequency range separately, resulting in local contrast enhancements at various scales. (silly settings for the sake of demo):
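A minimal sketch of that per-band control, using a plain box-blur pyramid rather than the edge-aware wavelets darktable actually uses; `decompose` and `equalize` are made-up names for illustration:

```python
import numpy as np

def box_blur(img, radius):
    """Naive box blur used as the low-pass filter of each level."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / size ** 2

def decompose(img, radii=(1, 2, 4)):
    """Split an image into frequency bands by successive blurs."""
    bands, current = [], img.astype(float)
    for r in radii:
        low = box_blur(current, r)
        bands.append(current - low)   # detail lost at this blur scale
        current = low
    bands.append(current)             # low-frequency residual
    return bands

def equalize(img, gains, radii=(1, 2, 4)):
    """Apply one gain per band, then sum the bands back together."""
    return sum(g * b for g, b in zip(gains, decompose(img, radii)))

rng = np.random.default_rng(0)
img = rng.random((16, 16))
identity = equalize(img, (1, 1, 1, 1))  # all gains at 1: exact reconstruction
boosted = equalize(img, (2, 1, 1, 1))   # boost only the finest band
```

With all gains at 1 the bands sum back to the original image exactly, which is why leaving a slider at its neutral position changes nothing.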
About the denoise module, the changes are less obvious to explain because there is no such visual way to demonstrate them. Basically, noise is a by-product of the sensor technology that corrupts pixels in a random way. Everywhere a scientist finds random things (we call these stochastic processes), he tries to model the randomness as an average ± a standard deviation, where both are no longer random but deterministic (assuming you have a very large sample). The randomness is therefore modelled by the standard deviation, aka the square root of the variance, and that variance is nothing but the average squared offset between the random data and their average value.
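That “average ± standard deviation” description is easy to check numerically; the signal and noise values below are invented, just to illustrate the definitions:

```python
import numpy as np

rng = np.random.default_rng(42)
true_signal = 0.5    # the deterministic average (e.g. the clean pixel value)
noise_std = 0.1      # the randomness, summarised as a standard deviation
samples = true_signal + rng.normal(0.0, noise_std, size=100_000)

mean = samples.mean()
variance = np.mean((samples - mean) ** 2)  # average squared offset from the mean
std = np.sqrt(variance)                    # recovers the noise amplitude
```

With a large enough sample, `mean` converges to the clean value and `std` to the noise amplitude, which is what the noise profiles measure per sensor and per ISO.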
The noise profiles in darktable essentially recorded the measured noise variance for each sensor at each ISO sensitivity, thus the variability of the noise, in order to accurately compute the probability that a given pixel is noisy, and denoise it depending on that probability (so we save as much detail as possible).
What @rawfiner did there was change the variance model (make it more general) to account for a possible dependency on light intensity, i.e. give the ability to boost the variance in very low lights and correct the 3 RGB channels differently, because every sensor CFA has twice as many green photosites as blue or red ones, so the green channel usually needs less denoising than the others.
There are other things too, but I think we will dive too far into signal processing and stats.
Some do both
Yeah, contrast equalizing isn’t hard to do in GIMP (or G’MIC). I do it sometimes in the Play Raws but often have cleaner options to gravitate to.
But so are the presets, right? There are some amazing ones.
Much appreciated, thanks!
And thanks @rawfiner for the improvement, it’s really good! (so far, though, denoise profiled is too CPU-demanding)
I am glad you like it.
I will make a post to explain the changes when I find some time
To clarify a bit what Aurélien said: the profiles were made supposing that variance evolves linearly with light intensity. This seems to be untrue at both very low and very high ISOs. I derived a more generic model of variance as a function of light intensity (instead of being var = a * intensity + b, it is now var = a * intensity ^ c + b; when c == 1 we get back the old model). I did not have enough time to make the profiling tool work reliably enough with this new model, but enough to see that it models reality better. Thus, for now, the module tries to guess the c parameter from the a parameter (which works quite well).
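The old and new variance models side by side; the a, b, and c values here are invented for illustration, not real profile data:

```python
import numpy as np

def noise_variance(intensity, a, b, c=1.0):
    """Generalised model: var = a * intensity**c + b.
    With c == 1 this reduces to the old linear model var = a * intensity + b."""
    return a * np.power(intensity, c) + b

intensity = np.linspace(0.01, 1.0, 5)                    # normalised intensity
old = noise_variance(intensity, a=0.02, b=0.001)         # linear model (c = 1)
new = noise_variance(intensity, a=0.02, b=0.001, c=0.7)  # boosted in low lights
```

With c < 1 the curve bends upward at low intensities, so the model predicts more noise variance in the deep shadows than the linear fit does.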
Apart from that, I also introduced changes to make the non-local means algorithm more flexible (scattering and details sliders).
Finally, I agree it is too CPU-demanding; my next work will try to improve this.
Sometimes I have some noise, not because I shot at high ISO, but because I underexposed (like -1EV or more) to avoid clipping the highlights, and then I have to push up the shadows.
Does denoise (profiled) take this exposure offset into account?
If not, what would be your recommendation ?
There is a new slider to take that into account
You can switch to auto mode, then increase the “adjust autoset parameters” slider (I don’t remember the exact slider name, but I think it is something like this). Increasing this slider updates the parameters as if your photo had the noise characteristics of a higher-ISO image. Its main use case is underexposed images.
Ideally, the value of this slider should match the gain applied to the image by later modules like exposure: set the value to 2 if you add 1EV (which is a multiplication by 2), to 4 if you add 2EV (which is a multiplication by 4), etc.
After that, you can switch back to manual mode to adjust each parameter more precisely if needed.
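The EV-to-slider arithmetic boils down to value = 2 ** EV; a trivial helper makes it explicit (`autoset_adjust` is a made-up name for illustration, not a darktable API):

```python
def autoset_adjust(ev_pushed):
    """Slider value matching a later exposure push of `ev_pushed` EV:
    each EV of gain is a multiplication by 2, so value = 2 ** EV."""
    return 2 ** ev_pushed

print(autoset_adjust(1))  # 2: shadows pushed by 1EV
print(autoset_adjust(2))  # 4: shadows pushed by 2EV
```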
That works great.
Regarding NL-means, Vapoursynth and, I think, Avisynth have OpenCL implementations of it. GPL-v3.
Maybe this could be useful? Or maybe this is already implemented… I have no idea, actually.
An immediate follow-up question would be: how about an implementation of BM3D for denoising? Even slower than NL-means… but better, as far as science tells us.
MIT license I think.
The original paper:
Indeed, there are cooler kids than NL-means, but one thing at a time.
Thanks for the links
There is already an opencl implementation in darktable.
In fact, even the CPU implementation is already optimised. As such, I fear it would be very hard to get BM3D to run faster than that, and I would really like to get denoising quality on par with non-local means (or better) but way faster.
Instead of BM3D, I will investigate whether we can use recursive bilateral filters (https://link.springer.com/chapter/10.1007/978-3-642-33718-5_29) for denoising, because they are very fast (complexity is O(ND), where N is the number of pixels and D is the dimension, i.e. 3 in the case of a color image for a simple bilateral filter, and more than 3 if we use a comparison measure closer to what is done in non-local means).
To explain a bit more: non-local means is basically a bilateral filter of higher dimension: instead of comparing 1 value (D=1, in the case of a gray image) or 3 values (D=3, RGB in the case of an RGB image), with non-local means we compare k*3 values (D=k*3, in the case of a patch of k pixels and an RGB image).
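The dimension difference can be made concrete with two toy distance functions (a sketch with made-up names, not darktable’s actual code):

```python
import numpy as np

def bilateral_distance(img, p, q):
    """Bilateral range distance: compares only the D=3 RGB values of p and q."""
    return float(np.sum((img[p] - img[q]) ** 2))

def nlmeans_distance(img, p, q, k=1):
    """Non-local means distance: compares whole patches of radius k,
    i.e. D = (2k+1)**2 * 3 values instead of 3."""
    (py, px), (qy, qx) = p, q
    pa = img[py - k:py + k + 1, px - k:px + k + 1]
    pb = img[qy - k:qy + k + 1, qx - k:qx + k + 1]
    return float(np.sum((pa - pb) ** 2))

img = np.zeros((5, 5, 3))
img[3, 2] = 1.0           # q's neighbourhood differs, but q itself is still 0
p, q = (1, 1), (3, 3)
d_bil = bilateral_distance(img, p, q)  # 0: the two pixels have the same value
d_nlm = nlmeans_distance(img, p, q)    # > 0: their surroundings differ
```

The toy image shows why the patch distance is more discriminating: two pixels with identical values are indistinguishable to the bilateral range term, but not to the patch comparison.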
The recursive bilateral filter is very similar to a bilateral filter, except that instead of considering the distance between 2 pixel values, we consider an accumulated distance along the path that links these 2 pixels. In other words, if you have 2 pixels with the same value, separated by a line of pixels of very different value, the bilateral filter will consider them similar, but the recursive bilateral filter will not.
This is at the same time a good and a bad thing imho, but this is how the recursive bilateral filter is built, and it is what allows us to get a very fast implementation: the complexity of the recursive bilateral filter does not depend on how far apart the compared pixels are.
We should take this into account when designing a comparison measure: using the pixel-by-pixel comparison measure of non-local means directly does not work well with recursive bilateral filters.
Instead, I want to use a measure that is rotation-invariant (to be able to follow the curves in the image) and that can handle enlarging or shrinking of areas (to be able to follow the path from a pointy area up to a larger area of the same object). Also, the measure has to consider enough pixels to be robust in case of high noise.
As such, I am thinking about computing several growing means for each pixel, to consider in the measure: the pixel itself, a mean of radius 1, a mean of radius 2, a mean of radius 4, a mean of radius 8, and a mean of radius 16.
To compare 2 pixels, just compare all the RGB means (up to radius 8). To handle shrinking and enlarging, compare the means using a shift (radius 1 compared with radius 0, radius 2 compared with radius 1, etc.).
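A sketch of that multi-radius descriptor and its shifted comparison. Function names are made up for illustration, and the means use square windows rather than discs for simplicity:

```python
import numpy as np

RADII = (0, 1, 2, 4, 8)   # growing neighbourhood sizes, as described above

def growing_means(img, y, x):
    """Per-pixel descriptor: mean RGB over windows of increasing radius."""
    h, w, _ = img.shape
    means = []
    for r in RADII:
        win = img[max(0, y - r):min(h, y + r + 1),
                  max(0, x - r):min(w, x + r + 1)]
        means.append(win.reshape(-1, 3).mean(axis=0))
    return np.array(means)

def descriptor_distance(img, p, q, shift=0):
    """Compare descriptors; shift=1 matches radius i of p against radius
    i+1 of q, tolerating shrinking/enlarging structures."""
    a, b = growing_means(img, *p), growing_means(img, *q)
    if shift:
        a, b = a[:-shift], b[shift:]
    return float(np.sum((a - b) ** 2))

flat = np.full((32, 32, 3), 0.5)
spotted = flat.copy()
spotted[10, 10] = 1.0
d_flat = descriptor_distance(flat, (10, 10), (20, 20))     # identical: 0
d_spot = descriptor_distance(spotted, (10, 10), (20, 20))  # the spot shows up
```

Because each mean pools many pixels, a single noisy value perturbs the descriptor only slightly, which is the robustness-to-noise property the measure is after.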
Even though such a filter might work less well than non-local means in textured areas, it should perform better in smooth areas and perform well on edges.
The future will tell us how true this is, and how well the filter performs
Very cool, thanks for that response.
I’ll dig through that paper and try to understand how this works.
For now, it seems you aim at improving denoise performance at medium to low compute times, and not at improving absolute best denoise performance at whatever compute time it takes?
(You’re shifting the Pareto front in a different region, if you plot compute time vs. denoise performance?)
Then I will point out one more paper that might be interesting. I do not want to derail you from your goal though!
In this paper:
standard DCT denoising is reimplemented with, I think, all the general improvement strategies for denoising: a multiscale approach to be feature-size invariant, an oracle step to improve iteratively, overlapping-window result aggregation… I think they are still missing the ‘denoising in a perceptually uniform colorspace’… whatever. They reach denoising performance close to the best patch-based methods (at the time of publishing, and for additive white Gaussian noise) with two orders of magnitude less compute time. With nice samples within the paper.
Again: I don’t want to derail either the topic or you from your goal, but as always it is hard to know where the discussion stands within a certain community. Also, if this needs to go into a different topic, that’s cool with me.
Apart from that: such a nice community here!
Indeed, it is a nice paper. I think the artefacts that DCT denoising exhibits are sometimes too unnatural, but we can still take some ideas from the paper.
Typically, once we have a fast denoising algorithm, we can perform an oracle step (i.e. denoise the image a first time, and use the result as a guide to denoise it again).
From what I have read, DCT seems to lower the resolution, but guiding is generally the way to go for lots of different filters.
Yes, guiding (the oracle step) is one of the core principles that seems to improve almost any denoiser. I am trying to find these three or four principles… they’ve been mentioned in a nice review paper… that have been found to work universally to improve denoising algorithms.
I think the others were:
- resolution pyramid (to be effective on more than one scale; NL-means and BM3D have the particularity that they work really well on high-frequency noise but fall short at lower frequencies),
- aggregation (denoising a pixel in various contexts to improve the prediction), and
- denoising in a perceptual colorspace (tailoring to the human visual system).
When I find the paper containing that claim, I’ll edit it in here.
If my memory serves, I think at least the last one is presented in “Secrets of Image Denoising Cuisine”.
That’s it! Spot on. Chapter 4 - Noise reduction, generic tools.
Three of the above-mentioned are in that chapter. And I think, as a side note in the paper about multiscale DCT (or the connected paper about their resolution pyramid), they postulate that the resolution pyramid is the fourth generic tool for good denoising.