Sharpening Experiments in GIMP: Unsharp and "Smart Sharpening"

Exactly :+1:

I agree sharpening should be the last step. All in all, I probably phrased it poorly, but in an ideal world I would never need to go to GIMP or even Photoshop in the first place. I feel sharpening in raw tends to give a better result most of the time. Maybe it is just that better methods are available, I’m not sure.

Sadly, no raw processor to date allows for selection-based dodge and burn with a soft light mask. The ability to use white or black (or a color) instead of exposure or a curve gives, in my opinion, a better result there. Not to mention the ability to use a brush with low force/flow to slowly build up the effect. So I tend to need to go to a raster editor for that part of my process.

Well, at least not yet :wink:

But you can always do your GIMP editing on a TIFF file, and perform the final touch (sharpening) with RT (or darktable, or any other raw processor of your choice). Then you just have to export the final image.

GIMP is simply too late in your image workflow to be used for signal reconstruction.

I did a quick example with my own smart edge-aware sharpening in darktable, using the exact same algorithm with the same parameters at two different places in the pipeline: before any non-linear transform, and at the end of the pipe (after tone curves, tone mapping and contrast enhancement, but before the gamma/OETF encoding that comes with the display RGB conversion).

Non-linear (end of pipe):

Original:

Linear:

See the weird noise and ringing on the hair?

The reason for that is grounded in signal processing and Fourier analysis, but the bottom line is that GIMP or Photoshop or whatever come too late in the pipe to perform denoising or sharpening with satisfying results. That job is for the demosaicing software (RawTherapee, darktable, DXO Prime, etc.).
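To make that concrete, here is a small toy sketch (NumPy/SciPy, not the darktable module itself) that applies the same unsharp mask either before or after a gamma-like encoding. Since a convolution does not commute with a per-pixel non-linear curve, the two placements cannot give the same result, and the discrepancy piles up around the edge, which is where the halos and ringing above come from. The synthetic edge, the noise level and the 1/2.2 “OETF” are all made-up stand-ins:

```python
# Toy sketch (NumPy/SciPy), not darktable code: the same unsharp mask applied
# before vs. after a gamma-like encoding gives different results, because a
# convolution does not commute with a per-pixel non-linear transform.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp(signal, sigma=2.0, amount=1.0):
    """Classic unsharp mask: signal + amount * (signal - blur(signal))."""
    return signal + amount * (signal - gaussian_filter(signal, sigma))

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 512)
# Synthetic scene-linear signal: a fairly hard edge plus mild sensor-like noise.
scene = 0.01 + 0.9 / (1.0 + np.exp(-200.0 * (x - 0.5)))
scene = np.clip(scene + rng.normal(0.0, 0.003, scene.shape), 0.0, None)

def encode(v):
    """Stand-in display OETF: clip, then a 1/2.2 gamma."""
    return np.clip(v, 0.0, 1.0) ** (1.0 / 2.2)

sharpen_then_encode = encode(unsharp(scene))   # "linear" placement
encode_then_sharpen = unsharp(encode(scene))   # "end of pipe" placement

diff = np.abs(sharpen_then_encode - encode_then_sharpen)
i = int(diff.argmax())
print(f"max difference between placements: {diff.max():.3f} at sample {i}"
      f" (the edge sits near sample 256)")
```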


Very true, open source software is really making amazing strides and innovating where other software is rather stagnant or focused on “AI automation” edits. I do my best to stay in dt as long as possible; maybe I will discover something as I learn more.

A bit over my head. I really want to learn some under-the-hood stuff, as at some point in my life I would love to contribute to a project. It does seem to make sense, even with my lack of knowledge of the technicalities. From my understanding, the longer you can keep the data in that linear representation, the better. I’m not exactly sure why, but I would assume better uniformity in the data.

Which module can do that in dt?

Let me say first that I completely agree that sharpening must be done at the proper point in the development of an image.

But even though I don’t want to start a discussion about where the proper point in the pipeline is, I guess we must also agree that when an image is heavily post-processed and then resized, it is somehow good to get rid of some of the smoothness generated, a.k.a. sharpen the image.

If I get a final result with my raw processor of choice, without any further external processing, then sharpening must be done at the best possible point of the raw pipeline. On the other hand, if I use some other program after the raw processor, I bet that when all the editing is finished a bit of sharpening is needed.

And then we can talk about which program is better to perform the final sharpening.

It’s active research and development; I’m currently fine-tuning the maths, so it’s not in a release yet.

Proper resizing does not blur (there are ways to avoid it), so it’s a problem to fix at interpolation time, not after. Always try to fix the origin of the bug, not patch its appearance. Besides, will you re-sharpen every time you re-export, for each medium and for every size? That’s an insane workflow…
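To illustrate that point with a rough toy comparison (Pillow/NumPy, my own crude acutance measure, not any particular raw processor’s resampler), downscale the same synthetic test pattern with two interpolation filters and see how much edge contrast each one keeps:

```python
import numpy as np
from PIL import Image

# Synthetic test pattern: diagonal stripes with hard edges.
size = 1024
stripes = ((np.indices((size, size)).sum(axis=0) // 32) % 2 * 255).astype(np.uint8)
img = Image.fromarray(stripes, mode="L")

def mean_gradient(pil_img):
    """Crude acutance proxy: mean absolute horizontal gradient."""
    a = np.asarray(pil_img, dtype=np.float32)
    return float(np.abs(np.diff(a, axis=1)).mean())

small = (size // 4, size // 4)
for name, flt in [("bilinear", Image.Resampling.BILINEAR),
                  ("lanczos ", Image.Resampling.LANCZOS)]:
    print(name, round(mean_gradient(img.resize(small, flt)), 1))
# The windowed-sinc (Lanczos) downscale retains clearly more edge contrast than
# the triangle (bilinear) filter: the resampler choice, not a sharpening pass
# bolted on afterwards, decides how crisp the reduced image comes out.
```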

Your understanding is correct, but the full explanation would require 8 hours for me to prepare a 2-hour explanation that avoids dropping any equations, because those equations look bad and are meaningless anyway if you don’t hold at least a bachelor’s in applied sciences.


Could you give an example, please? I really wish to learn about this.

Well, I think it is the proper workflow, as final sharpening will be heavily dependent on the device it is destined for: sharpening for a display is not the same as sharpening for a printer, and sharpening an image that has been downsized to 25% (thus, without interpolation) is not the same as sharpening an image that needs interpolation to reach the desired final size.

Or am I wrong? (anyway, I think we are going off-topic)

Me too. I’m familiar with interpolation resizing, and I don’t know of any incarnation of it that doesn’t produce a reduced image that benefits from a little added acutance through a slight sharpening…

The paper above is about CFA demosaicing, but the core of it is interpolation and can easily be generalized. Notice it also succeeds at upsampling (which is essentially what we do while demosaicing).

Do you mean the possibility of blending a solid black or white layer in soft light mode, modulated by an opacity mask?

Here is what I could obtain with the “enhanced USM” sharpening I have recently introduced in PhotoFlow (I am posting the high-res image because one needs to see the details…):


20191030-IMG_0073-af.jpg.pfi (30.5 KB)

In short, the filter uses a blur mask that preserves both the very fine textures that are likely due to noise, and the hard edges that would otherwise be over-sharpened. So it basically sharpens the “intermediate” textures…
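This is not PhotoFlow’s actual code, just a small NumPy sketch of the idea as described above: modulate a plain USM with a band-pass weight on local contrast, so that near-noise micro-texture and hard edges are left mostly alone and only the in-between textures get sharpened. The thresholds and ramps are invented for the example:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def masked_usm(img, sigma=1.5, amount=1.5, lo=0.005, hi=0.10):
    """Unsharp mask modulated by a band-pass weight on local contrast.

    `img` is a float image in [0,1]; `lo` and `hi` are invented thresholds:
    contrast below `lo` (likely noise) and well above `hi` (hard edges) get
    weight ~0, while the "intermediate" textures get the full `amount`.
    """
    blurred = gaussian_filter(img, sigma)
    detail = img - blurred
    contrast = gaussian_filter(np.abs(detail), sigma)      # smoothed local contrast

    rise = np.clip((contrast - lo) / lo, 0.0, 1.0)          # fades noise out
    fall = np.clip((2.0 * hi - contrast) / hi, 0.0, 1.0)    # fades hard edges out
    weight = rise * fall                                    # band-pass on contrast

    return img + amount * weight * detail
```

For an RGB image you would typically run something like this on a luminance-like channel only.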

Basically, a 50%-gray-filled layer set to soft light blend mode (soft light ignores 50% gray). Then I paint white or black with varying opacity to dodge or burn respectively. Colors work as well, to tint the dodge and burn. The painting is guided by selections based on luminance, allowing control over specific locations.
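For anyone following along, here is a tiny NumPy sketch of that technique. It uses Pegtop’s soft-light formula rather than GIMP’s own blend math, but the property that matters is the same: 50% gray is neutral, so low-opacity strokes of white or black build the dodge/burn up gradually. The shadow mask at the end is a made-up stand-in for a luminance-based selection:

```python
import numpy as np

def soft_light(base, blend):
    """Pegtop soft light: 0.5 in `blend` is neutral (leaves `base` unchanged)."""
    return (1.0 - 2.0 * blend) * base ** 2 + 2.0 * blend * base

def dodge_burn(base, paint, opacity):
    """Composite a dodge & burn layer over `base`.

    paint   : values in [0,1]; 0 burns, 1 dodges, 0.5 does nothing
    opacity : per-pixel strength in [0,1], e.g. from a luminance selection
    """
    layer = 0.5 + opacity * (paint - 0.5)   # low opacity stays near neutral gray
    return soft_light(base, layer)

# Example: gently lift the shadows only, with a mask built from luminance.
base = np.linspace(0.05, 0.95, 11)             # stand-in "image"
shadow_mask = np.clip(1.0 - base / 0.4, 0, 1)  # strongest where the image is dark
print(np.round(dodge_burn(base, paint=1.0, opacity=0.3 * shadow_mask), 3))
```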

Looks very good. No introduced grain. It goes to show that sharpening guided to the right locations is much saner than unguided sharpening. All these techniques, minus my blanket USM filter in GIMP, are targeted in some fashion.

I miss the simple “Sharpen” tool. For a reason I don’t understand, it is no longer part of the latest version of GIMP. It was easy to use and apply, and it helped a lot to prepare shots for printing.

Basic sharpening is a bit primitive and not very robust (halos explode fast). Its merits are mostly historical and related to its algorithmic simplicity, in a context of limited hardware. Now we have better, but more complex, algorithms.
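For context, the kind of operation such legacy sharpen tools are built on looks roughly like this (illustrative only, not GIMP’s exact implementation): the image plus a scaled Laplacian, cheap enough for very old hardware but with no mechanism to protect edges, hence the fast-exploding halos:

```python
import numpy as np
from scipy.ndimage import convolve

def simple_sharpen(img, strength=1.0):
    """Naive convolution sharpen: img + strength * (negative Laplacian of img).

    Expects a float image in [0,1]. Halos grow directly with `strength`,
    with nothing to distinguish noise, texture, or hard edges.
    """
    laplacian = np.array([[ 0, -1,  0],
                          [-1,  4, -1],
                          [ 0, -1,  0]], dtype=np.float64)
    return img + strength * convolve(img, laplacian, mode="nearest")
```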
