New tool "Capture Sharpening"

Hello @heckflosse

Where can we download the daily builds of RawTherapee, for Windows 10 (64 bit), to try your feature?

For instance, I have checked here:
https://keybase.pub/gaaned92/RTW64NightlyBuilds/

It looks like the most recent build (21 November) pertaining to your work is:
RawTherapee_dev_5.7-245-gff3e31466_W64_Skylake_191121.zip

Just out of curiosity, I don’t know whether I am supposed to download the SSE4 builds or the more recent Skylake builds.
At home, I run an old Intel i7-6500U CPU.
In all truth, I suppose I should opt for the Skylake builds :slight_smile:

Thanks a lot in advance!

http://rawpedia.rawtherapee.com/Download

1 Like

[quote=“Silvio_Grosso, post:108, topic:14197, full:true”]
Hello @heckflosse

Where can we download the daily builds of RawTherapee, for Windows 10 (64 bit), to try your feature?[/quote]

I always use this link:

1 Like

This is a call for help documenting the Capture Sharpening tool in RT from a user perspective.
I tried to document it, but always ended up with very technical documentation, which does not help users understand how it works and why it’s better than the (old) RL deconvolution in RT.

If someone is willing to help:
https://github.com/Beep6581/RawTherapee/issues/5453

3 Likes

@Silvio_Grosso
As automatic nightly builds are provided on the official site, I now build mainly for the Skylake architecture. If requested, I can build for the generic + SSE4 architecture.

Since the microarchitecture of that processor is Skylake, you can use the Skylake build and perhaps benefit from improved speed.

See “Skylake SSE4 builds.md” in the folder.

1 Like

Today I pushed a speedup for the processing of flat regions when using Capture Sharpening.
With this file and pp3, Capture Sharpening processing time is reduced by ~30%.

7 Likes

For information:

2 Likes

If you found a way to automatically limit the iteration count when halos develop, does that mean you have a reliable halo detection method?
If so, wouldn’t that lead to a relatively simple method to automatically estimate the point spread function?

I implemented an automatic iteration limit which works quite well to stop iterating when halos appear. But that does not mean it is perfect.
The current implementation processes the image in tiles of 32x32 pixels. Each of the tiles can be processed with a different gaussian sigma (currently only gaussian kernels up to sigma 2.0 (13x13) are supported).
If, during an iteration on a tile, one of its 32x32 pixels becomes 50% darker than the original pixel, the algorithm stops iterating for that tile.
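To make that concrete, here is a minimal sketch of such a per-tile stopping criterion. This is not the actual RawTherapee code; the names `Tile`, `haloDetected`, `deconvolveTile` and the `rlStep` callback are purely illustrative:

```cpp
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

// Illustrative sketch only, not RawTherapee code.
constexpr int kTileSize = 32;               // tiles are 32x32 pixels
constexpr float kDarkeningLimit = 0.5f;     // stop when a pixel falls below 50% of its original value

using Tile = std::vector<float>;            // kTileSize * kTileSize luminance values

// True if any pixel of the current estimate has become more than
// 50% darker than the corresponding pixel of the original tile.
bool haloDetected(const Tile &original, const Tile &estimate)
{
    for (std::size_t i = 0; i < original.size(); ++i) {
        if (estimate[i] < kDarkeningLimit * original[i]) {
            return true;
        }
    }
    return false;
}

// Runs up to maxIterations deconvolution steps on one tile; 'rlStep' stands
// in for a single Richardson-Lucy update using the tile's gaussian sigma.
Tile deconvolveTile(const Tile &original, float sigma, int maxIterations,
                    const std::function<Tile(const Tile &, const Tile &, float)> &rlStep)
{
    Tile estimate = original;

    for (int it = 0; it < maxIterations; ++it) {
        Tile next = rlStep(estimate, original, sigma);

        if (haloDetected(original, next)) {
            break;                          // halos start to develop: keep the previous estimate
        }

        estimate = std::move(next);
    }

    return estimate;
}
```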

Simple approach, but works quite well

Though I’m of course open for better approaches :+1:

Ingo

What if the blurring is off-centre? E.g.,

Corner Softness. Wide-open at full wide angle, the lens showed a little decentering, with the upper corners showing more softness than the lower ones. The center is quite sharp, but softening from the upper corners extends a fair bit into the frame. At full telephoto, corner sharpness is better all around, but the pattern changed, with upper and lower left corners quite sharp and the lower right somewhat soft. The center is again quite sharp, as is most of the frame.

Source: https://www.imaging-resource.com/PRODS/sony-rx100-iii/sony-rx100-iiiA4.HTM

@afre From above

I once had the idea that one could manually mark borders between objects and enforce a “no ringing” criterion there, using this to guide an algorithm to find the optimal kernel. But I guess that’s a bit off-topic for this thread and quite a project in its own right. The idea was to use e.g. the boundary between an out-of-focus background and an object in focus for this, because that’s an area in the image where we can make quite solid assumptions about how the transition is supposed to look with a correct kernel.
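As a rough illustration of that idea (purely a sketch, not existing code), one could score a candidate kernel by measuring over- and undershoot along a user-marked boundary and then search for the sigma with the lowest penalty:

```cpp
#include <algorithm>
#include <vector>

// Illustrative only: a crude "no ringing" score for a candidate kernel.
// 'profile' is a 1-D cut across a user-marked boundary taken from the
// deconvolved image, 'reference' the same cut from the original image.
// Ringing shows up as over/undershoot beyond the reference's value range;
// a kernel search could minimise this penalty over candidate sigmas.
float ringingPenalty(const std::vector<float> &profile,
                     const std::vector<float> &reference)
{
    // Value range of the unsharpened transition (assumed non-empty).
    const auto [lowIt, highIt] = std::minmax_element(reference.begin(), reference.end());
    const float low = *lowIt;
    const float high = *highIt;

    float penalty = 0.f;
    for (float v : profile) {
        if (v > high) penalty += v - high;  // overshoot (bright halo)
        if (v < low)  penalty += low - v;   // undershoot (dark halo)
    }
    return penalty;
}
```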

@heckflosse Sorry, I forgot. When you get there maybe auto-detect? :crossed_fingers::stuck_out_tongue:

Hi, new RawTherapee user here. I know this is an old thread. Not sure if my simple question warrants a new thread, so I will just post here.

I have a question: why is Capture Sharpening available for some RAW files, but not others?

On my Samsung S23 Ultra DNG file, this option is not available. Under the RAW tab, only two sections are available: Raw White Points and Dark-Frame.

I checked my Sony a6400 and Canon 6D RAW files. I do see Capture Sharpening plus many other sections under the RAW tab.

How come the DNG file does not have this option? Is this due to a limitation of the camera or the file type (DNG)? Or some other reason?

There are many uses and abuses of DNG… I think some smartphones handle demosaicing differently, and so this might bypass the usual ability to use Capture Sharpening… I am sure someone more qualified about the algorithm of that function will be able to confirm this…

It may be because the DNG is not a raw file, or contains unexpected data, such as an unusual data location within the DNG file container. Capture Sharpening happens after demosaicing, so how the demosaicing was done should not affect the module’s operation, unless something bypasses it (due to what I suggested above or other reasons).

The best way for you to find help is to upload a sample file for us to investigate. Welcome to the forum!

The most qualified would be @heckflosse , but I haven’t seen him around the forum or GitHub for some time.

Samsung uses a modified sensor layout and ML demosaicing, which they may or may not encode into the DNG. I suspect that, since Capture Sharpening comes right after this, the data is just not recognized and so the options are not available… that’s my guess…

Here is the DNG file on Google Drive, in case anyone wants to check it out.

https://drive.google.com/file/d/1q3B9uxMt5HDNmFV3Po_dS12pfU9gkb6R/view?usp=sharing

Darktable doesn’t like that file…