What does diffuse or sharpen really do?

Boris explains it really well. https://youtu.be/pAbyORw0mng?si=WGMh_UKY_hLfbUiO&t=82

2 Likes

The no AA filter preset is generally nice… I will slowly up the iterations but often leave it at 1 or 2, and usually not more than 4… I used to use the dehaze preset, and for images that were not very sharp it was good, but it can also amplify noise if the image is noisy… If I knew the module better, I’m sure I could offset that…

1 Like

I assume @paperdigits was thinking of these?

2 Likes

Yes, those. I also use local contrast preset.

3 Likes

Sometimes I use the “details threshold” of a parametric mask to only sharpen what is already in focus and relatively sharp. I also tend to activate the profiled denoiser module.

2 Likes

I haven’t… Interestingly, @s7habo recently noticed something like this too; it seemed to be related to color balance rgb.
Scroll down from here…

1 Like

Sorry for the deletion; I incorrectly thought these specks were caused by d&s. Actually, when I switched from sigmoid to filmic, the specks disappeared.


I was going wiggy. Switching from sigmoid to filmic helped somewhat. It was exactly what @s7habo found in the link above: it was my tweak to the global offset in the color balance rgb 4 ways tab that caused the problem.

1 Like

I have the same thing. Sharpening in DT is the only tool that discourages me from using this program. I always get faster and better results in RT. The sharpness in RT is staggering compared to DT.

3 Likes

I think this is most people’s experience. Without the presets, I’m not sure how much the module would be used. Like you, I really focused on understanding it a while back, specifically the parallels with the Contrast Equalizer module, because I understand how frequency separation works. I felt I had a decent grasp of it after a while, but it didn’t really help me use the module from a neutral state. In other words, I still mainly use the presets and then just tweak the few sliders whose results I can roughly predict.

Its results can be amazing, but it’s perhaps the most “non-intuitive” image processing module I’ve ever used in any software! :)

5 Likes

I believe a description of the algorithm behind this module is on his website, and there’s a bit on this forum if you search for it. Like his filmic module, it has several things going on, not just the base algorithm, such as localization and frequency separation.

From what I can recall, the aim of the paper(s) he refers to is to reverse lens/optic blur in such a way that an explicit kernel is not necessary. Typical deblurring requires a kernel and iterative filtering. Now, I do not remember whether it is a small-window or global filter, but I do recall it is supposed to emulate the reversal of real-life blur; that is why the module is diffuse or sharpen rather than just sharpen. We can go in either direction.
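
To make the “either direction” point concrete, here is a toy sketch (mine, not the module’s code) of iterated isotropic diffusion: the same update, u + speed · laplacian(u), blurs when speed is positive and runs the diffusion backwards (sharpens) when it is negative. Function names and values are purely illustrative.

```python
# Toy sketch of iterated diffusion, NOT darktable's implementation.
# Iterating u <- u + speed * laplacian(u) diffuses (blurs) when speed > 0
# and sharpens (inverse diffusion) when speed < 0.
import numpy as np

def laplacian(u):
    """Discrete 5-point Laplacian with edge replication."""
    p = np.pad(u, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u

def diffuse_or_sharpen(img, iterations=4, speed=0.2):
    """speed > 0 diffuses, speed < 0 sharpens. |speed| must stay small
    (roughly < 0.25) or the explicit scheme becomes unstable; backward
    diffusion also amplifies noise quickly, which matches the noise
    amplification mentioned earlier in the thread."""
    u = img.astype(np.float64).copy()
    for _ in range(iterations):
        u = u + speed * laplacian(u)
    return u

# Example: blur a synthetic edge, then (partially) undo the blur.
img = np.zeros((8, 8)); img[:, 4:] = 1.0
blurred = diffuse_or_sharpen(img, iterations=4, speed=0.2)
sharpened = diffuse_or_sharpen(blurred, iterations=4, speed=-0.2)
```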

I also recall him modifying and simplifying the paper’s algorithm by using the guided image filter (which I think I popularized here on this forum by constantly harping on its merits). dt has a nice version of the guided filter, co-developed by several people on the forum. The guided filter is a windowed filter, so that probably answers the local vs global question. Using it makes the module more efficient in its processing, but possibly at some cost in the quality of the algorithm’s output (the resultant image).
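
For reference, this is roughly what the guided filter does in its simplest self-guided, single-channel form (He et al. 2010). It is only a sketch of the textbook algorithm, not darktable’s implementation; the function name and the radius/eps values are made up for illustration. It also shows why it is a windowed (local) operator: each output pixel is an affine function of the guide over its window.

```python
# Sketch of the basic guided filter (He et al. 2010), single channel,
# self-guided use shown at the bottom. Not darktable's tuned version.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    size = 2 * radius + 1
    mean_I  = uniform_filter(guide, size)
    mean_p  = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I  = corr_II - mean_I * mean_I     # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p     # local covariance guide/source
    a = cov_Ip / (var_I + eps)             # eps controls edge preservation
    b = mean_p - a * mean_I
    # Average the per-window coefficients, then apply them to the guide.
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

# Self-guided use: edge-aware smoothing of an image 'img' (float array).
# smoothed = guided_filter(img, img, radius=8, eps=1e-3)
```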

As @rvietor noted, we are working with gradients and gradients of gradients, etc., so the smoothing should be graduated and continuous without introducing unnatural kinks in the data.

1 Like

Also @s7habo

Could those ‘hot pixels’ be caused by too high a value of perceptual brilliance grading / highlights? I see that Boris used a value of 34.08%. As explained by Aurélien (the module’s developer), the maths is unstable for values over 20%, and setting the white fulcrum on the advanced tab is then mandatory. Sorry about the profanities and general abrasive tone here:

Given that this is a dark area, it may be unrelated, but single pixels can be blown up by this issue (literally: the numbers that come out of the algorithm would correspond to radiation levels you would probably only be exposed to on the surface of the Sun). Then, if highlight reconstruction is turned on in filmic (something that has been turned off by default, exactly because of this problem), filmic tries to ‘smooth’ the transition, spreading the solar flare into the surrounding areas and making it even more visible. An easy test is to turn it on; if you get large bloomed areas, that’s the root cause.

It is certainly not intuitive to use or to teach to students. I listened to his video, paused it, took notes, experimented with the sliders and then resumed the video. I spent hours trying to get my head around the module. All I know is that I get great results, but the presets are the starting point.

I use the sharpen demosaicing (AA filter) preset on all my images as an initial sharpening. I also tend to use one of the lens deblur options for additional sharpening as required. With noisy images I apply the sharpening selectively using the details threshold slider, and I may also use one of the denoising presets applied selectively with the details threshold.

It is a great module. I have a decent computer with a graphics card and apply these corrections early in the processing, but my work-supplied laptop has no graphics card, so there I apply these settings at the end of editing, as the diffuse or sharpen module is very resource intensive.


2 Likes

I think that’s correct; you can try it yourself, the results are identical. You can also set speed 1 to 100% and speed 2 to -100% and it does nothing. My understanding is that the four speeds work as follows:

I’m not totally sure, though; I would need to read the code carefully to be certain. In any case, the interface doesn’t do a very good job of communicating it.

Thanks, @jonathanBieler. Wish they would put something like that diagram in the documentation. (Yes, the words say it, but it’s easier to refer back to a picture.)

My hunch is that the interface is the main problem with the module. Everyone seems to love the results, but very few seem to know how to actually use the module from scratch. I have absolutely no idea what kind of interface would be better, so this isn’t a very useful comment. But I do sometimes wonder if an interface overhaul would generate less confusion. Is it just how the sliders are named (with abstract comments like “speed”), or is it that there are simply too many moving parts and it’s too hard to visualize how they all interact?

4 Likes

It’s not too bad if you play around… But I think you’re right, there are a lot of moving parts…

1 Like

All of the above!

2 Likes

As it stands, it is incomprehensible to the average innumerate non-scientist like me. I did watch one video that started to make sense of it all for me.

Darktable Episode 51: diffuse and sharpen module in practice

When I have time I’ll watch it a few times. Who knows! Sometimes today’s incomprehensible becomes tomorrow’s obvious, if only in practice rather than theory.

3 Likes

Well, for starters there’s kind of a “unit error” that might mess with people’s intuition. You see four similarly labeled sliders about speed and anisotropy, which I think leads to an intuition that they all do something similar, just possibly at different frequencies. But as far as I understand, two of them operate on a scalar function (a Laplacian) and two of them work on a vector field (a gradient). So even if you wrap your head around what sliders 2 and 4 do, it won’t really help your intuition about sliders 1 and 3.

In some sense, I can kind of get sliders 2 and 4. The Laplacian, which is the divergence of the gradient, represents how much the gradient field flows out of each point, so it will be high for dark spots surrounded by light spots. Since it’s a scalar, you can do some kind of blur on it, then subtract the blur from the original and amplify or attenuate that difference. The greater the blur, the less steep you make the gradient, so the less local contrast there will be.
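
For what it’s worth, here is a literal, toy reading of that description (my paraphrase, certainly not the module’s actual code): blur the Laplacian, take the residual, and amplify or attenuate it. The function name and parameter values are made up for illustration.

```python
# Toy illustration of "blur the Laplacian, boost the residual" -- a reading
# of the post above, NOT the diffuse or sharpen module's actual algorithm.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def boost_laplacian_detail(img, sigma=2.0, amount=0.5):
    lap = laplace(img)                            # scalar field: div(grad(img))
    residual = lap - gaussian_filter(lap, sigma)  # its high-frequency part
    return img + amount * residual                # amplify (>0) or attenuate (<0)
```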

On the other hand, I have zero intuition for sliders 1 and 3. What on earth are these sliders doing to the gradient of your image, and why is that useful??? I mean, I know it looks good in certain cases; I just don’t understand at any level what those things are doing. Are they somehow blurring the gradient field? What would that even do to your image, and why?

1 Like