What does diffuse or sharpen really do?

I’ve watched this video from Aurélien Pierre and read the manual, but I’m still confused as to what the diffuse or sharpen module is really doing.

In particular, here are some things that don’t make sense to me:

  • The manual says the 1st order speed works on the gradient, the 2nd order speed on the laplacian, the 3rd order on the gradient of the laplacian, and the 4th order on the laplacian of the laplacian. It makes sense that you can diffuse the laplacian of a function, since that is also a function, but the gradient is a vector field, not a simple function with a scalar output. What does it mean to do a gaussian blur of a vector field?

  • In the video, Aurélien implies that the 3rd and 4th order diffusions work on a wavelet decomposition of the higher order wavelet decomposition, as opposed to just higher frequency wavelets in the same decomposition. This is kind of surprising because (at least in the video) when he decomposed his picture into just one wavelet layer and a residual layer, the wavelet layer was already quite high frequency. Did I misunderstand what he was saying, or does the module use something much lower frequency than what he was demonstrating with GIMP?

  • If you set the anisotropy to 0, is there no difference between the 1st and 2nd order diffusions, and the 3rd and 4th order ones? Or does edge sensitivity still do something even in the isotropic case?

  • What is edge threshold really? The manual says it applies a penalty to low variance areas. In the video, he says it’s good to set this negative if you want to denoise the darker areas of your picture (where you are more likely to have noise). Why do darker areas necessarily have less variance than brighter areas (especially if the dark areas are the ones with more noise), and what does this have to do with edges?

  • The manual defines the luminance masking threshold like this:

    This control is useful if you want to in-paint highlights. For values greater than 0%, the diffusion will only occur in regions with a luminance greater than this setting. Note that gaussian noise will be added in these regions to simulate particles and initialize the in-painting.

    Is this saying that gaussian noise is being added to the whole image if the threshold is 0% and is this a normal consequence of sharpening? Or does this mean that 0% is a special case, and 1% or higher enables some additional functionality of adding gaussian noise?

Thanks for any enlightenment…


A gradient is a rate of change, i.e. a derivative, and each of its components can be expressed as a spatial function of the location in the image…
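To make that concrete: a 2-D gradient is just a pair of scalar fields (one per component), so "blurring the gradient" means blurring each component as if it were an ordinary image. A minimal numpy sketch (illustrative only, not darktable's actual code):

```python
import numpy as np

# Toy image: a soft diagonal ramp plus a gaussian bump.
y, x = np.mgrid[0:32, 0:32]
img = 0.1 * (x + y) + np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 20.0)

# The gradient is two scalar fields, each an ordinary 2-D array.
gy, gx = np.gradient(img)

def blur3(f):
    """Tiny 3x3 box blur built from shifts -- a stand-in for a gaussian."""
    acc = np.zeros_like(f)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(f, dy, axis=0), dx, axis=1)
    return acc / 9.0

# "Blurring the gradient" = blurring each component independently.
gx_s, gy_s = blur3(gx), blur3(gy)
```

Each blurred component is again a scalar field of the same shape as the image, so the whole operation stays well defined.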

Perhaps try to find the mathematical expressions involved; a video is probably not the best medium to explain maths… Very likely he simplified things in the video to make it easier to explain the reasoning behind the module and the different controls. Math tends to make a lot of people run away.

Keep in mind we are working in the scene-referred part of the pixel pipe here, so signal is still linear with light energy. Light always has noise, and the noise level is proportional to the square root of the signal. So the absolute value of the noise will be much lower for shadow areas than for brighter areas. Thus any edge gradients will be much “flatter” (and noise spots cause edges, as you have a change in value…)

On the other hand, the relative value of the noise will be much higher in the shadows:
compare √4/4 = 2/4 = 0.5 against √10000/10000 = 100/10000 = 0.01
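That arithmetic can be checked with simulated shot noise (a quick numpy sketch, nothing to do with darktable's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Photon shot noise: the standard deviation grows like sqrt(signal),
# so the *absolute* noise is larger in highlights while the
# *relative* noise is larger in shadows.
results = {}
for signal in (4.0, 10000.0):
    samples = rng.poisson(signal, size=200_000)
    abs_noise = samples.std()                    # ~ sqrt(signal)
    results[signal] = (abs_noise, abs_noise / signal)

# results[4.0]     -> roughly (2, 0.5)
# results[10000.0] -> roughly (100, 0.01)
```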

As I read the text from the manual you quoted, Gaussian noise is added only in the regions selected for inpainting, and it is used/needed to “seed” the inpainting.

EDIT: Or look in the code, there are references to several basic papers in there (which can get a bit math-heavy, that’s the nature of the beast)

good questions there. i remember i wanted to reimplement this some time ago and stopped at a similar point. i don’t remember my conclusions so i can’t really answer anything here… but i think i didn’t really find the derivative part conclusive either. the nice smoothness probably comes from the iterated application of very small diffusion kernels (which, as opposed to the à trous wavelets found in other modules, don’t grow larger with iteration).
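The idea of iterating a tiny, fixed-size kernel can be sketched with an explicit heat-equation step (a toy illustration only, not the module's implementation):

```python
import numpy as np

def diffuse_step(u, dt=0.2):
    """One explicit heat-equation step: u += dt * laplacian(u).
    The 5-point laplacian is a very small (3x3) kernel that never grows."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    return u + dt * lap

# A single bright pixel spreads into a smooth blob only through
# many iterations -- the smoothness comes from repetition,
# not from a large kernel.
u = np.zeros((33, 33))
u[16, 16] = 1.0
for _ in range(50):
    u = diffuse_step(u)
```

After 50 iterations the impulse has relaxed into a broad, smooth gaussian-like blob, while each individual step only ever touched a 3×3 neighbourhood.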


Maybe this video from ‘A dabble in photography’ can help.

I confess, I watched AP’s video before dipping my toes in the diffuse or sharpen module, and I understood just a little, and I mean just a little. However, the available presets have allowed me to successfully use the module for possibly some of the best sharpening on offer. Now I would like to revisit what the black box is really doing, more out of interest than necessity. I think I will try to reverse engineer what he is doing with the presets and why he has set the sliders where he has.


I tried all the presets in this module that are supposed to improve sharpness. Maybe I’m doing something wrong, but to my eyes the results showed an increase in noise (granularity?), especially compared to the excellent sharpness from Capture Sharpening in RT/ART. I used raw files from a Nikon Z6 and a Sony A7IV. I would be very interested to learn how to increase sharpness in DT to get good results similar to RT/ART.

I use the capture sharpening preset and it works well.


@paperdigits, is that a custom preset or is it part of d&s? I don’t see it on the module drop down list.

Boris explains it really well. https://youtu.be/pAbyORw0mng?si=WGMh_UKY_hLfbUiO&t=82


The no AA filter preset is nice generally… I will slowly up the iterations but often leave it at 1 or 2, and usually not more than 4… I used to use the dehaze preset, and for images that were not very sharp it was good, but it can also amplify noise if the image is noisy… If I knew the module better, I’m sure I could offset that…


I assume @paperdigits was thinking of these?


Yes, those. I also use local contrast preset.


Sometimes I use the “details threshold” of a parametric mask to only sharpen what is already in focus and relatively sharp. I also tend to activate the profiled denoiser module.


I haven’t… interestingly @s7habo recently noticed something like this too - it seemed to be related to color balance rgb.
Scroll down from here…


Sorry for the deletion; I incorrectly thought these specks were caused by d&s. Actually, when I switched from Sigmoid to Filmic, the specks disappeared.


I was going wiggy. Switching from sigmoid to filmic helped somewhat. It was exactly what @s7habo found in the link above: it was my tweak to the global offset in the color balance rgb 4 ways tab which caused the problem.


I have the same thing. Sharpening in DT is the one thing that discourages me from using this program. I definitely get faster and better results in RT. The sharpness in RT is staggering compared to DT.


I think this is most people’s experience. Without the presets, I’m not sure how much the module would be used. Like you, I really focused on understanding it a while back, specifically the parallels with the Contrast Equalizer module, because I understand how frequency separation works. I felt I had a decent grasp of it after a while, but it didn’t really help me use the module from a neutral state. In other words, I still mainly use the presets and then just tweak a few of the sliders whose results I can roughly predict.

Its results can be amazing, but it’s perhaps the most “non-intuitive” image processing module I’ve ever used in any software! :slight_smile:


I believe a description of the algorithm behind this module is on his website, and a bit on this forum if you search for it. Like his filmic module, it has several things going on, not just the base algorithm, such as localization and frequency separation.

From what I can recall, the aim of the paper(s) he refers to is to undo lens/optic blur in such a way that an explicit kernel is not necessary. Typical deblurring requires a kernel and iterative filtering. Now, I do not remember whether it is a small-window or global filter, but I do recall it is supposed to emulate the reversal of real-life blur; that is why the module is diffuse or sharpen rather than just sharpen. We can go in either direction.

I also recall him modifying and simplifying the paper’s algorithm by using the guided image filter (which I think I popularized here on this forum by constantly harping on its merits). dt has a nice version of the guided filter, co-developed by several people on the forum. Anyway, the guided filter is a windowed filter, so that probably answers the local vs global question; using it makes the module more efficient in its processing, but possibly at the cost of quality (the resultant image).
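For reference, the grayscale guided filter (He et al.) mentioned above can be sketched in a few lines of numpy. This is an illustration of the filter itself under simplifying assumptions (periodic edges, single channel), not darktable's implementation:

```python
import numpy as np

def box_mean(f, r=2):
    """Mean over a (2r+1)x(2r+1) window (periodic edges, for brevity)."""
    acc = np.zeros_like(f)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(f, dy, 0), dx, 1)
    return acc / (2 * r + 1) ** 2

def guided_filter(guide, src, r=2, eps=1e-3):
    """Grayscale guided filter: locally fit q = a * guide + b per window."""
    mI, mp = box_mean(guide, r), box_mean(src, r)
    corr_Ip = box_mean(guide * src, r)
    var_I = box_mean(guide * guide, r) - mI * mI
    a = (corr_Ip - mI * mp) / (var_I + eps)   # slope: ~1 near edges, ~0 in flat areas
    b = mp - a * mI
    # Average the per-window coefficients, then reconstruct.
    return box_mean(a, r) * guide + box_mean(b, r)
```

Because the fitted slope `a` stays near 1 where the guide has high local variance (edges) and near 0 in flat regions, the filter smooths noise while tracking edges, all with cheap windowed means.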

As @rvietor noted, we are working with gradients and gradients of gradients, etc., so the smoothing should be graduated and continuous without introducing unnatural kinks in the data.
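One way to see that the higher orders remain ordinary scalar images: the laplacian of an image is itself an image, so it can simply be taken again for the 4th order term. A toy numpy sketch (not darktable's code):

```python
import numpy as np

def laplacian(u):
    """Discrete 5-point laplacian with periodic edges."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
            + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)

y, x = np.mgrid[0:32, 0:32]
img = np.sin(x / 5.0) * np.cos(y / 7.0)

lap = laplacian(img)      # 2nd order: still a scalar field
bilap = laplacian(lap)    # 4th order: the laplacian of the laplacian
```

Each order produces another array of the same shape, so diffusing on it is no different from diffusing on the image itself.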


Also @s7habo

Could those ‘hot pixels’ be caused by too high a value of perceptual brilliance grading / highlights? I see that Boris used a value of 34.08%. As explained by Aurélien (the module’s developer), the maths is unstable for values over 20%, and setting the white fulcrum on the advanced tab is mandatory. Sorry about the profanities and general abrasive tone here:

Given that this is a dark area, it may be unrelated, but single pixels can be blown up by this issue (literally: the numbers that come out of the algorithm would corresponding to radiation levels you would probably only be exposed to on the surface of the Sun). Then, if highlight reconstruction is turned on in filmic (something that has been turned off by default, exactly because of this problem), filmic tries to ‘smoothen’ the transition, spreading the solar flare into the surrounding areas, making it even more visible. An easy test is to try and turn it on; if you get large bloomed areas, that’s the root cause.