The diffuse module is now in master…

I simply never expected such a brilliant result (figuratively and literally) and was quite amazed :slight_smile:

I am really glad you are more into physics than stamps :slight_smile:


I know, but the way I see it, something can be accurate and physical, and still mind-blowing :smile: Don’t get jaded! It’s almost as bad as believing it’s magic.

Blending dehaze in lightness or some other luminance-based mode will usually avoid the cast, though I think some people enjoy the boost in saturation. The dehaze in RT is a very strong effect, but I think it also offers a luminance mode. Funny enough, I like the dehaze preset in the diffuse module, but to me it does not give the traditional appearance of dehaze: it adds a lot of detail and actually seems to brighten the image a bit as opposed to darkening it, at least in my experience.

It does not explicitly brighten or darken; it just increases the magnitude of gradients. So the result depends on how shadows and highlights are distributed in your pic.
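
For the curious, here is a minimal sketch of that point (Python/NumPy, my own illustration, not darktable's actual code): a Laplacian-based sharpen pushes each pixel away from its local mean, so highlights near edges get brighter and shadows near edges get darker, while the global mean stays put.

```python
import numpy as np

def boost_gradients(img, strength=0.5):
    # Discrete Laplacian: each pixel compared against its 4 neighbours,
    # computed with wrap-around borders for simplicity.
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    # Subtracting the Laplacian steepens every gradient: pixels are
    # pushed away from their local mean, with no global offset added.
    return img - strength * lap

# A ramp with a brighter patch: local contrast changes, the mean does not.
img = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))
img[24:40, 24:40] += 0.15
out = boost_gradients(img)
print(f"mean before: {img.mean():.4f}  after: {out.mean():.4f}")
```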

It's very effective, and I like the result better. I guess I could say that, in my opinion, it's not the traditional look of a dehaze operation, but for me that's a good thing.

I don't think I would have ever arrived at the first instance that you used… it might be useful on flat images. The final result is really nice. I thought for sure at some point you might have used a multiply blend mode, but the deep darkening comes from that first instance. Thanks for sharing…


I must admit that in the beginning, before watching @s7habo 's video, I had a lot of difficulties with this module. I am using it, but I think I don't understand everything about the way it works.

But I guess you have surpassed yourself with it, again.


I have a question concerning the iterations and speed sliders. I get that more iterations may slow down the processing, but when sharpening or increasing local contrast, is it better to have more iterations or more speed in terms of the end result (image quality)? What exactly is the difference? Or is it, quality-wise, more or less the same, with more iterations just meaning more time to calculate? I mean, both sliders do increase the effect, don't they?

Without getting into what the module does (we have experts for that), I will speak more generally about filters and processing. A lot of it comes down to two issues:

  1. The degree (how strong the effect is)
  2. The accuracy (how faithful the method is)

Often, iteration increases an effect. For instance, I can drink multiple cups of water a day, and with each cup I receive more hydration. I could switch from cups to pitchers, but if I had to drink the entire volume at once, that would adversely affect the usability!

Other times, in the name of speed, we use faster but less accurate methods. One simple example is the box blur. It is fast, and we usually use it for the guided image filter, but it is an approximation of the Gaussian. Repetition allows the box blur to converge to a Gaussian. This may not always be true or pat (as in perfect, not Pat David; sorry, bad joke :speak_no_evil:).
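
To make the box-to-Gaussian point concrete, here is a tiny sketch (mine, in Python/NumPy, nothing to do with darktable's actual code): convolving a box kernel with itself a few times already lands very close to a Gaussian of matching variance.

```python
import numpy as np

box = np.ones(5) / 5.0               # one 1-D box kernel of width 5
kernel = box.copy()
for _ in range(2):                   # two more passes: three boxes total
    kernel = np.convolve(kernel, box)

# Gaussian with the same variance: one box of width w contributes
# (w**2 - 1) / 12, and variances add up over repeated passes.
var = 3 * (5**2 - 1) / 12.0
x = np.arange(len(kernel)) - len(kernel) // 2
gauss = np.exp(-x**2 / (2.0 * var))
gauss /= gauss.sum()

print(f"max deviation from the Gaussian: {np.abs(kernel - gauss).max():.5f}")
```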


(Also completely without knowing what you folks are talking about) I'm going to point you to Wikipedia,

which describes in general how step size and number of steps move your simulation along a differential equation. Just look at the images on the right side: a larger step size walks faster, but potentially in the wrong direction; more iterations with a small step size will be more accurate.
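
If you want to feel that trade-off without reading the article, here is a toy sketch (my own, nothing from darktable): forward Euler on dy/dt = -y, whose exact solution is exp(-t). A large step size finishes in a few iterations but misses the curve; many small steps take longer and land closer.

```python
import math

def euler(f, y0, t_end, steps):
    h = t_end / steps                # step size: bigger = faster, riskier
    y = y0
    for _ in range(steps):
        y += h * f(y)                # walk one step along the local slope
    return y

# dy/dt = -y has the exact solution y(t) = exp(-t).
exact = math.exp(-5.0)
for steps in (5, 50, 500):
    y = euler(lambda y: -y, 1.0, 5.0, steps)
    print(f"{steps:4d} steps: y = {y:.6f}  (error {abs(y - exact):.6f})")
```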


Ha, I am divergent too, but yeah, this stuff is interesting. Hope these help, Anna.

Just a side note: the "official documentation" states that the zoomed-out preview is wrong, but I just noticed that the zoomed-out preview is correct if OpenCL is off (however, the module is terribly slow without OpenCL). So in theory, this could be fixable, couldn't it?

Either we use different versions of the manual, or we read it differently…

This is what I get from the official documentation for 3.8:

While this module is designed to be scale-invariant, its output can only be guaranteed at 100% zoom and high quality or full-size export.

I read that as "zoomed-out output might be wrong", not "is wrong". It may also depend on how your OpenCL is set up (i.e. which pipes use OpenCL) and how you use the module.

There have been a number of threads here, one I think related to noise in the preview vs. the output. It has been my experience that your final output will match the 100% view, but not what you see on screen at other zoom levels, due to scaling. For me a typical zoom would be about 25% to give me a full-screen preview. To get an exported image that matches what that looks like, I have found I need to set scaling to 0.25 when exporting. This may be due to settings I have, but I don't have any that are set to degrade things for performance. I am not sure what the best scaling algorithm to use is; I think I went with bicubic over Lanczos.

Many filters in darktable apply on regions of 5×5 pixels around each pixel. A 5×5 pixel filter, downscaled by 2, gives a 3×3 filter. If you downscale by more than that, then it becomes a 1×1 filter, aka not a filter anymore, but the original image, so nothing happens.
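
A back-of-the-envelope sketch of that footprint argument (my own illustration; the function name is made up, this is not darktable code):

```python
def effective_kernel_width(width_full_res, scale):
    # A kernel spanning `width_full_res` pixels at 1:1 only covers
    # about width * scale pixels in a downscaled preview; a filter
    # needs an odd width of at least 3 to still be a filter.
    w = round(width_full_res * scale)
    return max(1, w | 1)             # force odd, floor at 1 (= identity)

for scale in (1.0, 0.5, 0.25):
    print(f"zoom {scale:4.0%}: 5 px kernel -> {effective_kernel_width(5, scale)} px")
```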

There is a trick in diffuse or sharpen to keep applying a 5×5 filter even when heavily downscaled, by adjusting the coefficients in the filter, but it's still an approximation and it only works for large effects (presets like dehaze or local contrast, where the radius span is roughly at least 256 px).

But the blunt truth is that downscaled previews will always be wrong compared to the full-size rendition, and that has to do with the fact that we can't split pixels into fractions. Only in some situations can we use tricks to make the downscaled output perceptually close enough to the full resolution.

Now, if you see a perceptual difference between CPU and GPU output, that’s a bug.
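
If you want to check that on your own files, here is a quick sketch (mine, with placeholder file names): export the same edit twice, once with OpenCL enabled and once with it disabled, then diff the two exports pixel by pixel.

```python
import numpy as np
from PIL import Image

# Placeholder file names: the same edit exported twice,
# once with OpenCL on and once with it off.
a = np.asarray(Image.open("export_opencl.png"), dtype=np.int32)
b = np.asarray(Image.open("export_cpu.png"), dtype=np.int32)

diff = np.abs(a - b)
print(f"max per-channel difference: {diff.max()}")
print(f"share of differing pixels:  {(diff.sum(axis=-1) > 0).mean():.2%}")
```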


OK, I found a workaround that kind of works for me: I simply deleted diffuse.cl, which disables OpenCL for this module. It more or less works because I have a fast CPU. I think I can live well with this workaround.

I don't think there is a difference between the final exported image and the 100% zoomed-in preview, with or without OpenCL, if the settings are the same. But there is definitely a difference when zoomed out.

Would this not likely be true for many modules…

Some discussion occurred here and in the linked topic that I mentioned…

So that means that there are some combinations of modules where it is impossible to get an exact view of the result:

  • for most modules the display isn’t exact except when zooming in to 100%; but
  • “raw chromatic aberrations” is bypassed (i.e. inactive) at 100% zoom…

That said, I have no idea how important that is in practice…

You are right, the preview is not correct either when OpenCL is off; it just seems to be somewhat more correct than with OpenCL, depending on the settings. But there is definitely a difference here between OpenCL on and off.
I was going to suggest that it would be more correct if there were a demosaicing method for the zoomed-out mode that sits between bilinear and RCD in terms of sharpness/contrast, but now I am not sure. On my screen, the preview usually looks clearer and sharper than the exported image.