Yes, I can see a bit better in this version. I do agree even your original preserved more micro detail without hurting the smooth water. This method reminds me a bit of Deconvolution Sharpening in Lightroom/Photoshop (even though it is really, really hidden there), only even better. Honestly, this is probably one of the best sharpening methods I have seen for a scene with water plus detailed rocks.
How does the rapidly moving water below the falls look?
Left is No Sharpening, Right is the Edge Detect Smart Sharpen.
This is a spot where I think the smart sharpen breaks down. It would be interesting to see how the Capture Sharpening handles this location.
The unsharp mask sample in the line-up is way overdone. I wanted to keep the settings the same, so that unsharp mask sample uses the same settings as the Edge Detect version. If I were to use the unsharp mask without the edge-detection mask, it would have to be toned way down.
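The idea of gating an unsharp mask behind an edge-detect mask can be sketched roughly like this (a minimal NumPy/SciPy illustration of the general technique, not the actual Photoshop action — the gradient-magnitude mask and the blend are my own stand-ins):

```python
import numpy as np
from scipy import ndimage

def unsharp_mask(img, radius=2.0, amount=1.5):
    """Classic unsharp mask: add back the high-pass residual."""
    blurred = ndimage.gaussian_filter(img, sigma=radius)
    return img + amount * (img - blurred)

def edge_mask(img, radius=2.0):
    """Gradient-magnitude mask, normalised to 0..1, so flat areas
    (like smooth water) receive little or no sharpening."""
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    mag = np.hypot(gx, gy)
    mag = ndimage.gaussian_filter(mag, sigma=radius)  # soften the mask edges
    return mag / (mag.max() + 1e-8)

def edge_aware_sharpen(img, radius=2.0, amount=1.5):
    mask = edge_mask(img, radius)
    sharp = unsharp_mask(img, radius, amount)
    # Blend: full sharpening on strong edges, none in flat regions.
    return mask * sharp + (1.0 - mask) * img
```

With the mask in place you can run the unsharp settings much hotter, which matches the observation above: the same settings that look fine gated by the edge detect are way too strong applied globally.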
What I mean is that you need to treat each part of the image differently for it to be natural. Sharpening and contrast modifications will have a greater effect in areas of greater entropy (more stuff going on).
I agree, Non Pushed is nice. It handles it much better than the edge detect does; the water stays nice and clean.
Cool, so it is deconvolution, just under a different name. I learned something new.
@afre thanks, I agree. I tend to do local-style edits over global edits, but I never tried that with sharpening before; it completely slipped my mind. I will for sure keep that in mind.
It's deconvolution assuming gaussian blur, combined with a microcontrast-based mask (to reduce sharpening of flat areas), auto-calculation of the blur radius, and auto-stop of iterations to reduce halos. Additionally, you can increase the sharpening for the outer regions (corner boost) of an image (which may have more lens blur).
Except for the corner boost (which the user has to set, or not), the auto-calculated values work quite well (at least for low-ISO shots), so the only thing you need to do is enable Capture Sharpening.
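For the curious, the core of a deconvolution of this family (Richardson-Lucy with a Gaussian PSF) looks roughly like this. This is a textbook sketch, not RT's actual implementation; the microcontrast mask, auto radius calculation, and auto-stop are all omitted:

```python
import numpy as np
from scipy import ndimage

def richardson_lucy_gaussian(img, sigma=0.6, iterations=20, eps=1e-7):
    """Richardson-Lucy deconvolution assuming a Gaussian PSF.
    A Gaussian is symmetric, so the mirrored PSF equals the PSF itself
    and both convolutions can use the same gaussian_filter call."""
    estimate = img.copy()
    for _ in range(iterations):
        # Re-blur the current estimate and compare it to the observation.
        reblurred = ndimage.gaussian_filter(estimate, sigma)
        ratio = img / (reblurred + eps)
        # Multiplicative update pulls the estimate toward the sharp scene.
        estimate *= ndimage.gaussian_filter(ratio, sigma)
    return estimate
```

The halo problem mentioned above comes from running too many iterations (or assuming too large a sigma), which is exactly why an automatic stopping criterion matters.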
Thanks, I must have missed those threads; I will for sure read up on them.
Very nice. Hopefully I will have a chance to try it out when RT 5.8 releases. RT does not do so well on my Mac, but I will still check it out for sure.
No, I did not upgrade; what a mess. My problem is the 4K screen making RT slow. I get the same issue with dt if I turn off OpenCL. It is just too much resolution for the CPU alone. Changing the resolution mode fixes it, but when it smushes pixels together like that it looks bad. Not you guys' fault.
Well, I'm curious to know why you think that's the right time to sharpen an image.
I've always heard that the best time to sharpen an image is at the very last processing step, just after resizing it. Some people say that, under certain circumstances, it's good to pre-sharpen the image before processing it, but they still sharpen at the end, too.
In my limited experience, there are certain scenarios where it's good to pre-sharpen the image slightly:
where you have to go beyond the diffraction limit (because of a sensor-lens combination)
where you need to combine images, that is, stack them (focus stacking, super-resolution, astrophotography noise reduction)
on images with low contrast and a good amount of details
And that's where Capture Sharpening comes into play for me. It's an invaluable tool to enhance details in such a way that the next tools will get a better chance to discriminate noise from real detail. But just that: it usually is not the definitive sharpening tool.
In less complex images, on the other hand, Capture Sharpening is almost all the sharpening needed.
Then there are some tools that will give the impression of a sharpened image, like those that work with local contrast, or microcontrast, or edges, or even wavelets (not only the Wavelet tool in RT). But the real sharpening tools should be used as a last step. At least that's what I think.
In your workflow, if you don't like the results you get within Gimp, and would love to use the RawTherapee tools, then go use them:
process your raw file (with or without Capture Sharpening, your choice)
export to tiff, 16bit
edit with gimp, with or without resizing as a very last step
sharpen with RawTherapee, and export as a PNG or JPEG
You won't be able to use Capture Sharpening with a non-raw image, but there are a few more tools to give you an enhanced image.
I agree sharpening should be the last step. All in all, I probably phrased it poorly, but in an ideal world I would never need to go to Gimp or even Photoshop in the first place. I feel sharpening in the raw processor tends to give a better result most of the time; maybe it is just the better methods available, not sure.
Sadly, no raw processor to this date allows for selection-based dodge and burn with a soft-light mask. The ability to use white, black, or color instead of exposure or a curve gives, in my opinion, a better result there. Not to mention the ability to have a brush with low force/flow to slowly build up the effect. So I tend to need a raster editor for that part of my process.
But you can always do your Gimp editing using a tiff file, and perform the final touch (sharpening) with RT (or darktable, or any other raw processor of your choice). Then you just have to export the final image.
Gimp is simply too late in your image workflow to be used to perform signal reconstruction.
I did a quick example with my own smart edge-aware sharpening in darktable, using the exact same algo with the same parameters at two different places in the pipeline: before any non-linear transform, and at the end of the pipe (after tone curves, tone mapping and contrast enhancement, but before the gamma/OETF encoding coming with the display RGB conversion).
The reason for that is grounded in signal processing and Fourier analysis, but the bottom line is that Gimp or Photoshop or whatever come too late in the pipe to perform denoising or sharpening with satisfying results; that job is for the demosaicing software (RawTherapee, darktable, DXO Prime, etc.).
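A toy illustration of why the position in the pipeline matters (my own sketch, not the darktable algorithm): the same unsharp mask applied before and after a simple gamma "tone curve" produces measurably different edges, because the non-linear encoding compresses the overshoot on the bright side of the edge differently from the dark side:

```python
import numpy as np
from scipy import ndimage

def unsharp(img, sigma=1.5, amount=1.0):
    return img + amount * (img - ndimage.gaussian_filter(img, sigma))

def encode_srgb(lin):
    """Simple gamma 1/2.2 stand-in for the display OETF."""
    return np.clip(lin, 0, None) ** (1 / 2.2)

# Linear step edge, e.g. scene-referred data straight off demosaicing.
linear = np.zeros((16, 64))
linear[:, 32:] = 0.5

early = encode_srgb(unsharp(linear))  # sharpen in linear light, then encode
late = unsharp(encode_srgb(linear))   # sharpen after the "tone curve"

# The two orders do not commute; the edge profiles differ noticeably.
print(np.abs(early - late).max())
```

The same non-commutativity argument applies to denoising, which is why both jobs belong in the linear, scene-referred part of the pipe.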
Very true, open-source software is really making amazing strides and innovating where other software is rather stagnant or focused on "AI automation" edits. I do my best to stay in dt as long as possible; maybe I will discover something as I learn more.
A bit over my head, but I really want to learn some under-the-hood stuff, as at some point in my life I would love to contribute to a project. It does seem to make sense even with my lack of knowledge of the technicalities. By my understanding, the longer you can keep the data in that linear representation the better. I am not exactly sure why precisely, but I would assume better uniformity in the data.
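The intuition about uniformity can be shown with a tiny numeric example (my own illustration): every blur or sharpen is built from averaging neighbouring pixels, and averaging gamma-encoded values gives a different, darker result than doing the same average in linear light, where values are proportional to the actual light captured:

```python
# Two pixels: one dark, one bright, in linear light (proportional to photons).
a_lin, b_lin = 0.1, 0.9

# Average in linear light, then gamma-encode for display (gamma 2.2 stand-in):
mixed_linear = ((a_lin + b_lin) / 2) ** (1 / 2.2)

# Gamma-encode first, then average the encoded values — what a late-pipeline
# blur or sharpen effectively does:
a_enc, b_enc = a_lin ** (1 / 2.2), b_lin ** (1 / 2.2)
mixed_encoded = (a_enc + b_enc) / 2

print(round(mixed_linear, 3), round(mixed_encoded, 3))  # → 0.73 0.652
```

The linear-light average matches what a physical blur of the scene would produce; the encoded-space average does not, which is one concrete reason to keep filtering operations early, while the data is still linear.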
Let me say first that I completely agree that sharpening must be done in the proper point of the development of an image.
But even though I don't want to start a discussion about where the proper point in the pipeline is, I guess we must also agree that when an image is heavily post-processed, and then resized, it is somehow good to get rid of some of the smoothness generated. A.k.a. sharpen the image.
If I get a final result with my raw processor of choice, without any more external processing, then sharpening must be done at the best possible point of the raw pipeline. On the other hand, if I use some other program after the raw processor, I bet that when all the editing is finished, a bit of sharpening is needed.
And then we can talk about which program is better to perform the final sharpening.