Sharpening Experiments in Gimp Unsharp and "Smart Sharpening"

Thanks, I must have missed those threads; I will for sure read up on them.

Very nice, hopefully I will have a chance to try it out when RT 5.8 releases. RT does not do so well on my Mac, but I will for sure check it out anyway.

Or vice versa, the latest Macs don’t do so well for open source :frowning:

No, I did not upgrade, what a mess. My problem is the 4k screen making RT slow. I get the same issue with dt if I turn off OpenCL :frowning:. It is just too much resolution for the CPU alone. Changing the resolution mode fixes it, but when it smushes pixels together like that it looks bad. :smile: Not your guys’ fault :+1:

Well, I’m curious to know why you think that’s the right time to sharpen an image.

I’ve always heard that the best time to sharpen an image is at the very last processing step, just after resizing it. Under certain circumstances some people say it’s good to pre-sharpen the image before processing it as well, but they still sharpen at the end, too.

In my limited experience, there are certain scenarios where it’s good to pre-sharpen the image slightly:

  • where you have to go beyond the diffraction limit (because of a sensor-lens combination)
  • where you need to combine images, that is, stack them (focus stacking, super-resolution, astrophotography noise reduction)
  • on images with low contrast and a good amount of details

And that’s where Capture Sharpening comes into play for me. It’s an invaluable tool to enhance details in such a way that the next tools will get a better chance to discriminate noise from real detail. But just that. It usually is not the definitive sharpening tool.
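(For the curious: Capture Sharpening works by Richardson–Lucy deconvolution against an estimated capture blur. Here is a minimal 1D sketch of the idea — illustrative only, not RT’s actual implementation; the Gaussian PSF and iteration count are my assumptions:)

```python
import numpy as np

def rl_deconvolve(observed, psf, iterations=10):
    """Richardson-Lucy: iteratively refine an estimate of the un-blurred
    signal, given the observed (blurred) signal and a point-spread function."""
    estimate = observed.copy()
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-9)
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Simulate a capture blur and recover detail from it.
x = np.arange(100.0)
sharp = np.exp(-((x - 50.0) ** 2) / 8.0)           # "true" scene detail
psf = np.exp(-0.5 * (np.arange(-6, 7) / 1.5) ** 2)
psf /= psf.sum()                                    # assumed Gaussian lens blur
blurred = np.convolve(sharp, psf, mode="same")      # what the sensor records
restored = rl_deconvolve(blurred, psf)
```

Because it deconvolves a model of the optical blur, this kind of sharpening only makes sense early in the pipeline, on near-linear data — which matches the pre-sharpening scenarios above.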

In less complex images, on the other hand, Capture Sharpening is almost all the sharpening needed.

Then there are some tools that will give the impression of a sharpened image, like those that work with local contrast, or microcontrast, or edges, or even wavelets (not only the Wavelet tool in RT). But the real sharpening tools should be used as a last step. At least that’s what I think.

In your workflow, if you don’t like the results you get within Gimp, and would love to use the RawTherapee tools, then go use them:

  • process your raw file (with or without Capture Sharpening, your choice)
  • export to a 16-bit TIFF
  • edit with Gimp, with or without resizing as the very last step
  • sharpen with RawTherapee, and export as a PNG or JPEG

You won’t be able to use Capture Sharpening with a non-raw image, but there are a few more tools to give you an enhanced image.

Hope it helps, somehow.

Exactly :+1:

I agree sharpening should be the last step. All in all, I probably phrased it poorly, but in an ideal world I would never need to go to Gimp or even Photoshop in the first place. I feel sharpening on the raw side tends to give a better result most of the time. Maybe it is just the better methods available, not sure.

Sadly, no raw processor to date allows selection-based dodge and burn with a soft-light mask. The ability to use white, black, or a color instead of exposure or a curve gives, in my opinion, a better result there. Not to mention the ability to use a brush with low force/flow to slowly build up the effect. So I tend to need a raster editor for that part of my process.

Well, at least not yet :wink:

But you can always do your Gimp editing using a tiff file, and perform the final touch (sharpening) with RT (or darktable, or any other raw processor of your choice). Then you just have to export the final image.

Gimp is simply too late in your image workflow to be used to perform signal reconstruction.

I did a quick example with my own smart edge-aware sharpening in darktable, using the exact same algorithm with the same parameters at two different places in the pipeline: before any non-linear transform, and at the end of the pipe (after tone curves, tone mapping and contrast enhancement, but before the gamma/OETF encoding that comes with the display RGB conversion).

Non-linear (end of pipe):

See the weird noise + ringing on the hair?

The reason for that is grounded in signal processing and Fourier analysis, but the bottom line is that Gimp, Photoshop or whatever come too late in the pipe to perform denoising or sharpening with satisfying results; that job belongs to the demosaicing software (RawTherapee, darktable, DxO Prime, etc.).
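The experiment above can be reproduced as a toy 1D version (the numbers and the 2.2 display gamma are illustrative assumptions, not darktable’s actual pipeline): blur an edge in linear light, then apply the same unsharp mask either before or after the display-gamma encoding.

```python
import numpy as np

def gauss_kernel(sigma=2.0, radius=8):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def usm(s, kernel, amount=1.0):
    """Plain unsharp mask: add back the high-pass residual."""
    return s + amount * (s - np.convolve(s, kernel, mode="same"))

k = gauss_kernel()
edge = np.where(np.arange(200) < 100, 0.05, 0.7)    # scene-linear edge
captured = np.convolve(edge, k, mode="same")        # optical blur acts in linear light

encode = lambda s: np.clip(s, 0.0, None) ** (1 / 2.2)   # assumed display gamma
early = encode(usm(captured, k))   # sharpen in linear light, then encode
late = usm(encode(captured), k)    # encode first, sharpen at end of pipe
# The two disagree around the edge: the late version's halos are reshaped by a
# non-linear encoding that the original blur never went through.
```

Flat regions come out identical either way; only around the edge do the halos diverge, which is exactly where the “weird noise + ringing” shows up.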


Very true, open source software is really making amazing strides and innovating where other software is rather stagnant or focused on “AI automation” edits. I do my best to stay in dt as long as possible; maybe I will discover something as I learn more.

A bit over my head. I really want to learn some under-the-hood stuff, as at some point in my life I would love to contribute to a project. It does seem to make sense even with my lack of knowledge of the technicalities. By my understanding, the longer you can keep the data in that linear representation, the better. I’m not exactly sure why, but I would assume better uniformity in the data.

Which module can do that in dt?

Let me say first that I completely agree that sharpening must be done in the proper point of the development of an image.

But even though I don’t want to start a discussion about where the proper point in the pipeline is, I guess we must also agree that when an image is heavily post-processed and then resized, it is somehow good to get rid of some of the smoothness generated. A.k.a. sharpen the image.

If I get a final result with my raw processor of choice, without any more external processing, then sharpening must be done at the best possible point of the raw pipeline. On the other hand, if I use some other program after the raw processor, I bet that when all the editing is finished, a bit of sharpening is needed.

And then we can talk about which program is better to perform the final sharpening.

It’s active research and development, I’m currently fine-tuning the maths, so it’s not in the release yet.

Proper resizing does not blur (there are ways to avoid it), so it’s a problem to fix at interpolation time, not after. Always try to fix the origin of the bug, not patch its appearance. Besides, will you re-sharpen every time you re-export for each medium and every size? That’s an insane workflow…
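To illustrate what fixing it at interpolation time can look like: a windowed-sinc (Lanczos-3) resampler that widens its kernel by the scale factor when downscaling keeps edges crisp without a separate re-sharpening pass. This is a generic 1D sketch, not the resampler of any particular program:

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos window: sinc(x) * sinc(x/a) inside |x| < a, zero outside."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def resample(signal, n_out, a=3):
    """Resample a 1D signal to n_out samples with a Lanczos-3 kernel,
    widening the kernel by the scale factor when downscaling."""
    n_in = len(signal)
    scale = n_in / n_out
    f = max(scale, 1.0)                      # kernel width factor
    out = np.empty(n_out)
    for j in range(n_out):
        c = (j + 0.5) * scale - 0.5          # source-space sample centre
        i = np.arange(int(np.floor(c - a * f)), int(np.ceil(c + a * f)) + 1)
        w = lanczos((i - c) / f, a)
        i = np.clip(i, 0, n_in - 1)          # clamp taps at the borders
        out[j] = (w * signal[i]).sum() / w.sum()
    return out

edge = np.where(np.arange(100) < 50, 0.0, 1.0)
small = resample(edge, 25)   # 4x downscale: the edge stays a tight transition
```

The downscaled edge crosses from dark to light within a couple of output samples, so there is little residual softness left to “fix” afterwards.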

Your understanding is correct, but the full explanation would require 8 h for me to prepare a 2 h explanation for you that avoids any equation dropping, because those equations are bad-looking and meaningless anyway if you don’t hold at least a bachelor’s degree in applied sciences.


May you give an example, please? I really wish to learn about this.

Well, I think it’s the proper workflow, as final sharpening will depend heavily on the device it is addressed to: it’s not the same sharpening for a display as for a printer, or for an image that has been downsized to 25% (thus, without interpolation) as for another image that needs interpolation to reach the desired final size.

Or am I wrong? (anyway, I think we are going off-topic)

Me too. I’m familiar with interpolation resizing, and I don’t know of any incarnation of it that doesn’t produce a reduced image that benefits from some added acutance through a slight sharpen…

The paper above is about CFA demosaicing, but the core of it is interpolation and can easily be generalized. Notice it succeeds also for upsampling (which is essentially what we do while demosaicing).

Do you mean the possibility to blend a solid black or white layer in soft light mode, modulated by an opacity mask?

Here is what I could obtain with the “enhanced USM” sharpening I have recently introduced in PhotoFlow (I am posting the high-res image because one needs to see the details…):

20191030-IMG_0073-af.jpg.pfi (30.5 KB)

In short, the filter uses a blur mask that preserves both the very fine textures that are likely due to noise and the hard edges that would otherwise be over-sharpened. So it basically sharpens the “intermediate” textures…
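A 1D sketch of a band-limited USM in the same spirit (the weighting scheme and thresholds are my own simplification, not PhotoFlow’s actual filter): the smoothed local detail strength gates the mask so that near-flat noise and hard edges are left alone while intermediate textures get boosted.

```python
import numpy as np

def blur1d(s, sigma=1.5, radius=6):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return np.convolve(s, k / k.sum(), mode="same")

def banded_usm(s, amount=1.0, lo=0.005, hi=0.06):
    detail = s - blur1d(s)                   # high-pass residual
    strength = blur1d(np.abs(detail))        # smoothed local detail strength
    rise = np.clip((strength - lo) / lo, 0.0, 1.0)        # ignore near-flat noise
    fall = np.clip((hi - strength) / (hi / 2), 0.0, 1.0)  # spare hard edges
    return s + amount * rise * fall * detail

# A hard step: plain USM rings hard, the banded version holds back.
step = np.where(np.arange(120) < 60, 0.1, 0.9)
plain = step + (step - blur1d(step))
banded = banded_usm(step)
```

On the step, the plain USM overshoots well past white while the banded version barely moves — the over-sharpening budget is spent only where the mask says there is “intermediate” texture.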

Basically, a layer filled with 50% gray and set to soft-light blend mode (soft light ignores 50% gray). Then painting with white or black at varying opacity to dodge or burn, respectively. Colors work as well, to tint the dodge and burn. The painting is guided by selections based on luminance, allowing control over specific locations.
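The math behind that technique is simple. A sketch using the Pegtop soft-light formula (one common variant; Gimp’s legacy and the W3C soft-light differ slightly), where 50% gray is the neutral point and brush opacity just pulls the stroke toward gray:

```python
import numpy as np

def soft_light(base, blend):
    """Pegtop soft-light: (1 - 2b)*a^2 + 2b*a.  b = 0.5 leaves a unchanged."""
    return (1.0 - 2.0 * blend) * base ** 2 + 2.0 * blend * base

def dodge_burn(base, paint, opacity):
    """Blend a 50%-gray layer painted with `paint` at `opacity` over `base`."""
    layer = 0.5 + opacity * (paint - 0.5)   # low-opacity strokes stay near gray
    return soft_light(base, layer)

tones = np.array([0.2, 0.5, 0.8])
dodged = dodge_burn(tones, paint=1.0, opacity=0.3)  # gentle white stroke
burned = dodge_burn(tones, paint=0.0, opacity=0.3)  # gentle black stroke
```

Repeated low-opacity strokes push the layer further from 0.5, which is exactly the “slowly build up the effect” behaviour described above.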

Looks very good. No introduced grain. Goes to show that sharpening guided to the right locations is much saner than unguided sharpening. All these techniques, except my blanket USM filter in Gimp, are targeted in some fashion.

I miss the simple “sharpening” tool. For a reason I don’t understand, it is no longer part of the latest version of Gimp. It was easy to use and apply, and helped a lot to prepare shots for printing.

Basic sharpening is a bit primitive and not very robust (halos explode fast). Its merits are mostly historical, tied to its algorithmic simplicity in a context of limited hardware. Now we have better, but more complex, algorithms.
