I have experimented a bit with the Unsharp Mask, with the goal of improving two aspects:

• the filter sharpens all edges, even those that are already hard and sharp, which leads to over-sharpening in some cases
• it tends to enhance the noise, because it also sharpens very fine textures

The first point has also been addressed in DT by @aurelienpierre and described here:

However, I realized that the edge-aware properties of the guided filter also make it possible to solve the second problem, that is, to sharpen textures while leaving the noise almost unchanged.

I have implemented an experimental “enhanced unsharp mask” filter in PhotoFlow, currently only available in the enhanced-usm branch:

Before going into the technical explanations, here are a couple of examples (details from the image from this PlayRAW):

Original

Enhanced USM with the same radius, using the settings of the first screenshot:

A detail of a smooth area with noise, from the same image:

Original:

Enhanced USM with the same radius:

What do you think? The noise is not completely preserved, but it is much cleaner than in the standard USM case. Also, my impression is that the transitions between sharpened and non-sharpened regions are sufficiently smooth to look natural…

@heckflosse you did quite some work on preserving smooth areas from sharpening. Would be interesting to compare the two approaches… do you have some test image that we could use for the comparison?

HOW DOES IT WORK?

The standard Unsharp Mask boils down to the following formula:

USM = I + A\cdot(I - G(I))

that is, the image is sharpened by adding to the original image I the difference between I and a Gaussian-blurred version G(I), scaled by an amount parameter A.
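Since the formula is so compact, it can be sketched directly in numpy. This is a minimal illustration of the standard USM, not PhotoFlow's actual code; function names and parameter defaults are mine:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur G(I) with a truncated 1-D kernel."""
    radius = max(1, int(3.0 * sigma + 0.5))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    padded = np.pad(img, radius, mode="edge")
    # Convolve rows, then columns (separability of the Gaussian).
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def unsharp_mask(img, sigma=1.0, amount=0.5):
    """USM = I + A * (I - G(I))."""
    return img + amount * (img - gaussian_blur(img, sigma))
```

On a step edge this produces the familiar overshoot (halo) on both sides, which is exactly the over-sharpening problem mentioned above.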

Now let’s consider the Guided Filter. Simplifying a lot, the Guided Filter behaves like a Gaussian blur where the edge amplitude is below a given threshold, and preserves the original edges where their amplitude is above it. That is:

GF(I) = G(I)

for edges below the threshold, and

GF(I) = I

for edges above the threshold.
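For readers who want to experiment, here is a compact self-guided (grayscale) guided filter in the spirit of He et al.; the regularization `eps` plays the role of the (squared) edge threshold. This is a generic textbook sketch, not PhotoFlow's implementation:

```python
import numpy as np

def box_mean(img, r):
    """Local mean over a (2r+1) x (2r+1) window, via 2-D cumulative sums."""
    n = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    c = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]) / (n * n)

def guided_filter(I, r, eps):
    """Self-guided filter: a ~ 0 (pure blur) where local variance << eps,
    a ~ 1 (edges preserved) where local variance >> eps."""
    mean = box_mean(I, r)
    var = box_mean(I * I, r) - mean * mean
    a = var / (var + eps)
    b = (1.0 - a) * mean
    return box_mean(a, r) * I + box_mean(b, r)
```

With `eps` well above the local noise variance the output approaches a plain local mean; with `eps` far below the edge contrast the output stays close to I. Those are the two regimes described above.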

Now, what if we combine two Guided Filter outputs with the same radius and two different thresholds TL and TH?
Let’s consider three image areas:

• I_{1} is a smooth area, with noise that should not be sharpened
• I_{2} is a textured area, that needs to be sharpened
• I_{3} contains strong, sharp edges that do not need further sharpening

One has

GF_{TL}(I_{1}) = GF_{TH}(I_{1}) = G(I_{1})
GF_{TL}(I_{2}) = I_{2}
GF_{TH}(I_{2}) = G(I_{2})
GF_{TL}(I_{3}) = GF_{TH}(I_{3}) = I_{3}

and therefore

GF_{TL}(I_{1}) - GF_{TH}(I_{1}) = 0
GF_{TL}(I_{2}) - GF_{TH}(I_{2}) = I_{2} - G(I_{2})
GF_{TL}(I_{3}) - GF_{TH}(I_{3}) = 0

Putting everything together, one has

Enhanced USM(I_{1}) = I_{1} + A\cdot(GF_{TL}(I_{1}) - GF_{TH}(I_{1})) = I_{1}
Enhanced USM(I_{2}) = I_{2} + A\cdot(GF_{TL}(I_{2}) - GF_{TH}(I_{2})) = USM(I_{2})
Enhanced USM(I_{3}) = I_{3} + A\cdot(GF_{TL}(I_{3}) - GF_{TH}(I_{3})) = I_{3}

In summary, one can selectively sharpen an “edge amplitude band” by adding to the original image the difference of guided filters at two different thresholds!
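The selectivity argument can be checked numerically with the simplified model used above (blur below the threshold, identity above). Everything here is an illustrative 1-D toy of my own, not PhotoFlow's actual guided filter:

```python
import numpy as np

def gaussian_blur_1d(x, sigma=1.0):
    r = int(3.0 * sigma + 0.5)
    t = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()
    return np.convolve(np.pad(x, r, mode="edge"), k, mode="valid")

def gf_model(x, threshold, sigma=1.0):
    """Idealized guided filter: G(x) where the local edge amplitude is
    below the threshold, x itself where it is above."""
    edge = np.abs(np.gradient(x))
    return np.where(edge < threshold, gaussian_blur_1d(x, sigma), x)

def enhanced_usm(x, amount, tl, th, sigma=1.0):
    """Enhanced USM = I + A * (GF_TL(I) - GF_TH(I)), with TL < TH."""
    return x + amount * (gf_model(x, tl, sigma) - gf_model(x, th, sigma))
```

Noise (edge amplitude below TL) and hard edges (amplitude above TH) pass through unchanged; only detail in the TL..TH band gets sharpened.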

EDIT: @David_Tschumperle might also be interested in this…


those samples look great

Hello @Carmelo_DrRaw

It is indeed amazing how much you have been improving your software lately!

I have just downloaded this new Git branch on Windows 10 (64-bit):
photoflow-w64-20191019_1643-git-enhanced-usm-a18b01179177ef108af584051fa59e6d9eb66a25

I am aware it is probably the stupidest question ever but I am unable to find this new tool…
Where is it located exactly?
I have looked for it especially in the “Basic adjustments” tools, to no avail…

BTW, as a general question (not strictly related to PhotoFlow, of course): what workflow do you suggest for retouching raw files?
In short, is it generally “correct” (advisable on the whole, I mean) to sharpen the image as the very last step?
In essence, with Photoflow:

• you open your Raw image;
• you rotate and crop it;
• you do some basic adjustments (curves, cloning/healing with stamp tools);
• in the very end, you sharpen your image;
• you finally resize it to print it.

THANKS a lot for your efforts

It is one of the options of the “Details/Sharpen” tool

I will answer the rest a bit later…

It seems the noise gets a bit coarser?

I tested your old edge sharpening preset and I think I like it better.

Original

Preset applied

The noise is really preserved in this preset.

EDIT:
DISCLAIMER: I haven’t tested the new tool, but I’m assuming you did your best for the screenshots above, so I’m comparing them with these ones using the old preset.

@Carmelo_DrRaw one thing I want to test too is the USM with a true RGB guided filter. For now, I’m applying it on each channel independently, and I find it increases the noise in an unpleasing way (I would call it “dry” noise, almost like salt and pepper).

But I like the idea of the double threshold very much. Do you have a regular relationship between TL and TH?

@Silvio_Grosso
To access the new enhanced USM you have to:

1. click on the red icon at the left of the layers list
2. go to the “detail” tab and select the “sharpen” tool
3. in the sharpen tool, select the “enhanced unsharp mask” method

I have noticed that the default amount value is far too strong; you have to manually dial the slider down to 200 or 300 for a good result… I will fix this ASAP.

@gadolf my edge sharpening preset is surely better at preserving the noise, but the drawback is that the transition between sharpened and non-sharpened areas is much faster, to the point that it is noticeable and sometimes not very pleasing. That’s why I have been looking for a better and more elegant solution…


Hello @Carmelo_DrRaw

Thanks a lot.
I have just deleted my last post because in the end I did find this new tool

@Silvio_Grosso I suggest keeping the radius small, as you would with a standard unsharp mask. The idea is to obtain a result similar to USM, but better.

@aurelienpierre in fact, I apply the guided filter to a log10 RGB luminance channel, and then I convert back to linear RGB luminance before computing the differences.

I found that a factor of 4 between the two thresholds (applied to log10 values) gives a reasonable starting point, with 0.005 and 0.02 as absolute values.
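This setup (the log10 round-trip on luminance, and the high threshold at 4x the low one) could be sketched as follows. The Rec. 709 luminance weights and the multiplicative way of applying the luminance delta back to RGB are my assumptions for illustration, not necessarily PhotoFlow's choices:

```python
import numpy as np

def enhanced_usm_log(rgb, gf, amount=1.0, tl=0.005, th=0.02):
    """rgb: linear float array of shape (H, W, 3).
    gf(img, threshold) is any edge-aware smoother, e.g. a guided filter.
    th = 4 * tl by default, per the suggested starting values."""
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 weights (assumed)
    log_lum = np.log10(np.maximum(lum, 1e-6))
    # Filter in log10 space, convert back to linear before differencing.
    lo = 10.0 ** gf(log_lum, tl)
    hi = 10.0 ** gf(log_lum, th)
    # Apply the luminance delta back to RGB as a gain (my assumption).
    gain = 1.0 + amount * (lo - hi) / np.maximum(lum, 1e-6)
    return rgb * gain[..., None]
```

Working on log luminance makes the thresholds behave more like relative (contrast) thresholds than absolute ones, which is presumably why fixed values like 0.005 and 0.02 transfer reasonably between images.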

I read the OP and thought to myself, “Wasn’t this posted a while back?” Then I realized that my bad memory had equated @Carmelo_DrRaw’s work with some of the papers I have read. I don’t have time to find them again, but there are several that address thresholds.

Hello @Carmelo_DrRaw

I have been testing this new option, and indeed it does look like it doesn’t increase the overall noise.
With the above image (head of a dragonfly) I have just crashed the application while increasing the radius.

EDIT: just tried to reproduce this crash to no avail.
Now the radius change does NOT produce any crash (Windows 10; Intel i7, 8 GB RAM).
Once in a while there are artifacts (namely gray “squares”, like tiles, stamped on the 1:1 preview of the image), but it suffices to move the sliders a little more to remove them.
Here they are:


The one from this play raw? Sun and Thunder in Geneva


Hello @Carmelo_DrRaw

Just tried once more to crash PhotoFlow, and it is now extremely easy to reproduce on my Windows 10 (64-bit) computer.
I suppose it is some huge memory leak…
BTW, I have an Intel i7 CPU and 8 GB of RAM.

To reproduce the crash it suffices to open a very big image.
I have tried with a 61 MB TIFF.
You open the TIFF, change the radius, and PhotoFlow instantly crashes, every time.

Here is the error shown in the cmd prompt:

(photoflow.exe:55044): GLib-GObject-CRITICAL **: 23:39:15.038: g_signal_handler_is_connected: assertion ‘G_TYPE_CHECK_INSTANCE (instance)’ failed

(photoflow.exe:55044): GLib-GObject-CRITICAL **: 23:39:15.054: g_object_unref: assertion ‘G_IS_OBJECT (object)’ failed

(photoflow.exe:55044): GLib-GObject-WARNING **: 23:39:15.054: instance of invalid non-instantiatable type ‘’

(photoflow.exe:55044): GLib-GObject-CRITICAL **: 23:39:15.054: g_signal_handler_is_connected: assertion ‘G_TYPE_CHECK_INSTANCE (instance)’ failed

(photoflow.exe:55044): GLib-GObject-CRITICAL **: 23:39:15.054: g_object_unref: assertion ‘G_IS_OBJECT (object)’ failed

(photoflow.exe:55044): GLib-GObject-WARNING **: 23:39:15.054: invalid uninstantiatable type ‘’ in cast to ‘GObject’
!!! Pipeline::set_image(): wrong ref_count for node #1, image=0xafa0970

(photoflow.exe:55044): GLib-GObject-WARNING **: 23:39:15.070: invalid uninstantiatable type ‘’ in cast to ‘GObject’
**
ERROR:/sources/src/base/pipeline.cc:287:void PF::Pipeline::set_blended(VipsImage*, unsigned int): assertion failed: (G_OBJECT( nodes[id]->blended )->ref_count > 0)
Bail out! ERROR:/sources/src/base/pipeline.cc:287:void PF::Pipeline::set_blended(VipsImage*, unsigned int): assertion failed: (G_OBJECT( nodes[id]->blended )->ref_count > 0)
Exception code=0x80000003 flags=0x0 at 0x00007FFD304F0192

Thanks, I will try to reproduce this…

Here is a first attempt and the corresponding settings:

Just merged your idea into my code (working in linear RGB), but I find the high-pass cut-off too harsh and a bit unnatural. I want it to degrade more gracefully, so I modified the equation to:

USM(I) = I + \alpha \cdot \left( (I - GF_{TL}(I)) - \frac{I - GF_{TH}(I)}{2} \right).

In my UI, I let the user set the GF threshold in dB for a more even feel (between 0 and -4 dB). The high cut-off is therefore set -1.5 dB from the low cut-off.
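The modified blend above keeps half of the high-threshold residual instead of cutting it off entirely, which is what softens the transition. A sketch of it, where I read “set -1.5 dB from the low cut-off” as th_db = tl_db - 1.5, and assume the 20·log10 amplitude convention for the dB mapping (both are my assumptions, not confirmed in the thread):

```python
import numpy as np

def db_to_linear(db):
    # Amplitude dB convention (assumed): 0 dB -> 1.0, -20 dB -> 0.1.
    return 10.0 ** (db / 20.0)

def soft_enhanced_usm(img, gf, amount, tl_db):
    """out = I + a * ((I - GF_TL(I)) - (I - GF_TH(I)) / 2).
    gf(img, threshold) is any edge-aware smoother (e.g. a guided filter)."""
    tl = db_to_linear(tl_db)
    th = db_to_linear(tl_db - 1.5)   # high cut-off 1.5 dB from the low one (assumed reading)
    low = img - gf(img, tl)          # detail above the low threshold
    high = img - gf(img, th)         # detail above the high threshold, only half removed
    return img + amount * (low - high / 2.0)
```

Compared with the plain difference-of-guided-filters, detail above TH is still sharpened at half strength rather than being excluded entirely, so the band edges degrade more gracefully.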

Here is what I get in an iterative setup, varying the GF window from 3 px to 13 px (all denoising disabled):

EDIT: the full output of image doctor, with an iterative joint deblurring and denoising modified with this method:

@heckflosse has recently done a lot of work on capture sharpening, which is intended to be applied very early in the pipeline, just after demosaicing. However, the sharpening tools currently available in PhotoFlow are mostly intended to be applied at the end of the processing. Sharpening after resizing is another option that I have not yet explored in detail.

Another suggestion I have is to apply crops/rotations after the basic adjustments, because by doing this you apply the layer masks to the original image, and they are thus independent of the geometry manipulations you do later. The same goes for perspective corrections. Apply all geometry corrections just before sharpening.

@Silvio_Grosso When to do sharpening depends on the image contents and what your aim is in postprocessing.

My opinion is to sharpen a little after (and sometimes before) each step that introduces softness: e.g., raw from camera, demosaicing, denoising, smoothing, or resizing. The rule of thumb is to postpone artifact-causing or aggressive sharpening until the end, and to sharpen early (e.g., capture sharpening) only if you know what you are doing. Proper sharpening at the beginning can go a long way, but only if you do it properly and smartly.

If you examine my PlayRaws, they are usually clean, unless I am doing something experimental, which I suppose is quite common. For serious processing, I tend to favour a slightly soft image over an artifact-ridden one, no matter how slight the artifacts.

Here is a screenshot from RT using Capture Sharpening:

Same + USM

I would go so far as saying that there is no sharpening, there is only deblurring. Blur happens:

• when interpolating/upsampling the raw file to reconstruct an RGB signal from a CFA (demosaicing)
• when bending light rays through glass lenses
• when transmitting light rays through smoke, dust, fog, windows, etc.

So where you deblur depends on the type of blur you want to revert. Basically, in signal reconstruction, you need to digitally revert the degradations in the opposite order to that in which they occur in real life.

Over these past 2 years, I have come to believe that deblurring, denoising and defringing should be performed jointly. They are all three aspects of the same problem: spatial decorrelation between channels and within channels themselves.