The unsharp mask sample in the lineup is way overdone. I wanted to keep the settings the same: that unsharp mask sample uses the same settings as the Edge Detect one. If I were to use Unsharp Mask without the edge-detection mask, it would have to be toned way down.
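For reference, unsharp masking is just "add back a scaled high-pass residual", which is why pushing the amount overshoots so easily. A minimal 1D sketch (my own toy code, not RawTherapee's implementation; the 3-tap box blur and the `amount` value are illustrative assumptions):

```python
# Toy 1D unsharp mask: out = signal + amount * (signal - blurred).
# The 3-tap box blur and the `amount` default are illustrative choices.

def box_blur(signal):
    """3-tap box blur with edge replication."""
    padded = [signal[0]] + list(signal) + [signal[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3.0
            for i in range(len(signal))]

def unsharp_mask(signal, amount=1.5):
    """Add back a scaled high-pass residual (signal minus its blur)."""
    blurred = box_blur(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
sharpened = unsharp_mask(edge)
# The step is steepened, at the cost of overshoot (halo) on both sides.
```

Flat regions pass through untouched, but every edge picks up an over- and undershoot, which is exactly the halo an edge mask or a lower amount is meant to tame.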
It is Deconvolution.
With the pushed sharpening:
And with the auto-calculated values (not pushed) which I prefer:
What I mean is that you need to treat each part of the image differently for it to be natural. Sharpening and contrast modifications will have a greater effect in areas of greater entropy (more stuff going on).
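That local weighting can be sketched in a few lines. This toy code (the function names and the use of windowed variance as a cheap stand-in for entropy are my assumptions, not any particular tool's implementation) attenuates a contrast push where the signal is flat:

```python
# Toy sketch: scale a contrast boost by local "activity", so flat
# areas are left alone. Windowed variance stands in for entropy here.

def local_variance(signal, radius=2):
    """Variance over a sliding window, clamped at the edges."""
    n = len(signal)
    out = []
    for i in range(n):
        window = signal[max(0, i - radius):min(n, i + radius + 1)]
        mean = sum(window) / len(window)
        out.append(sum((v - mean) ** 2 for v in window) / len(window))
    return out

def masked_boost(signal, strength=0.5):
    """Push values away from the global mean, weighted by local variance."""
    var = local_variance(signal)
    peak = max(var) or 1.0
    mean = sum(signal) / len(signal)
    return [v + strength * (vi / peak) * (v - mean)
            for v, vi in zip(signal, var)]

flat_then_busy = [0.5] * 6 + [0.2, 0.9, 0.1, 0.8, 0.3, 0.7]
boosted = masked_boost(flat_then_busy)
# The flat half is untouched; the busy half gets the contrast push.
```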
I agree, the non-pushed version is nice. It handles the water much better than the edge detect does; it stays nice and clean.

Cool, so it is deconvolution, just under a different name. I learned something new.

@afre thanks, I agree. I tend to do local-style edits over global edits. I never tried that with sharpening before; it completely slipped my mind. I will for sure keep it in mind.

My personal favorites have been GMIC’s octave sharpen or RL deconvolution.
@blj Capture Sharpening is RL deconvolution at its core, BTW. Long discussion threads in case you missed them: New tool "Capture Sharpening" and its precursor, Quick question on RT Richardson–Lucy implementation.
It’s deconvolution assuming Gaussian blur, combined with a microcontrast-based mask (to reduce sharpening of flat areas), auto-calculation of the blur radius, and auto-stop of iterations to reduce halos. Additionally, you can increase the sharpening for the outer regions (corner boost) of an image (which may have more lens blur).
Except for the corner boost (which the user has to set, or not), the auto-calculated values work quite well (at least for low-ISO shots), so the only thing you need to do is enable Capture Sharpening.
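For anyone curious what the Richardson–Lucy iteration actually does, here is a minimal 1D sketch (a toy illustration of the general algorithm, not RawTherapee's Capture Sharpening code; the tiny symmetric PSF and iteration count are assumptions):

```python
# Toy 1D Richardson-Lucy deconvolution: repeatedly multiply the
# estimate by the back-projected ratio observed / (estimate * psf).

def convolve(signal, kernel):
    """'Same'-size 1D convolution with edge replication."""
    half = len(kernel) // 2
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - half, 0), n - 1)
            acc += w * signal[j]
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=50):
    """Multiplicative RL update; the PSF is symmetric here, so the
    back-projection uses the same kernel (no explicit flip needed)."""
    estimate = [1.0] * len(observed)
    for _ in range(iterations):
        denom = convolve(estimate, psf)
        ratio = [o / max(d, 1e-12) for o, d in zip(observed, denom)]
        correction = convolve(ratio, psf)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

psf = [0.25, 0.5, 0.25]            # a tiny symmetric blur kernel
sharp = [0.0, 0.0, 1.0, 0.0, 0.0]  # ideal single-pixel detail
blurred = convolve(sharp, psf)     # what the "sensor" records
restored = richardson_lucy(blurred, psf)
# The central peak is pulled back up toward the original impulse.
```

The auto-stop of iterations mentioned above matters because each extra RL iteration sharpens further but also amplifies noise and ringing.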
Thanks, I must have missed those threads; I will for sure read up on them.

Very nice. Hopefully I will have a chance to try it out when RT 5.8 releases. RT does not do so well on my Mac, but I will still for sure check it out anyway.

Or vice versa: the latest Macs don’t do so well with open source.

No, I did not upgrade; what a mess. My problem is the 4K screen making RT slow. I get the same issue with dt if I turn off OpenCL. It is just too much resolution for the CPU alone. Changing the resolution mode fixes it, but when it smushes the pixels together like that it looks bad. Not you guys’ fault.
Well, I’m curious to know why you think that’s the right time to sharpen an image.

I’ve always heard that the best time to sharpen an image is at the very last processing step, just after resizing it. If needed, under certain circumstances, there are people who say it’s good to pre-sharpen the image before processing it, but then sharpen at the end, too.
In my limited experience, there are certain scenarios where it’s good to pre-sharpen the image a bit (slightly):
- where you have to go beyond the diffraction limit (because of a sensor-lens combination)
- where you need to combine images, that is, stack them (focus stacking, super-resolution, astrophotography noise reduction)
- on images with low contrast and a good amount of details
And that’s where Capture Sharpening comes into play for me. It’s an invaluable tool to enhance details in such a way that the next tools will get a better chance to discriminate noise from real detail. But just that. It usually is not the definitive sharpening tool.
In less complex images, on the other hand, Capture Sharpening is almost all the sharpening needed.
Then there are some tools that will give the impression of a sharpened image, like those that work with local contrast, or microcontrast, or edges, or even wavelets (not only the Wavelet tool in RT). But the real sharpening tools should be used as a last step. At least that’s what I think.
In your workflow, if you don’t like the results you get within Gimp, and would love to use the RawTherapee tools, then go use them:
- process your raw file (with or without Capture Sharpening, your choice)
- export to 16-bit TIFF
- edit with gimp, with or without resizing as a very last step
- sharpen with RawTherapee, and export as a PNG or JPEG
You won’t be able to use Capture Sharpening with a non-raw image, but there are a few more tools to give you an enhanced image.
Hope it helps, somehow.
I agree sharpening should be the last step. All in all, I probably phrased it poorly, but in an ideal world I would never need to go to Gimp or even Photoshop in the first place. I feel sharpening in raw tends to give a better result most of the time. Maybe it is just the better methods available, not sure.

Sadly, no raw processor to date allows for selection-based dodge and burn with a soft-light mask. The ability to use white, black, or color instead of exposure or a curve in my opinion gives a better result there. Not to mention the ability to have a brush with low force/flow to slowly build up the effect. So I tend to need to go to a raster editor for that part of my process.
Well, at least not yet
But you can always do your Gimp editing using a tiff file, and perform the final touch (sharpening) with RT (or darktable, or any other raw processor of your choice). Then you just have to export the final image.
Gimp is simply too late in your image workflow to be used to perform signal reconstruction.
I did a quick example with my own smart edge-aware sharpening in darktable, using the exact same algo with the same parameters at two different places of the pipeline: before any non-linear transform, and at the end of the pipe (after tone curves, tone mapping, and contrast enhancement, but before the gamma/OETF encoding coming with the display RGB conversion).
Non-linear (end of pipe):

See the weird noise + ringing on the hair?
The reason for that is grounded in signal processing and Fourier analysis, but the bottom line is that Gimp or Photoshop or whatever come too late in the pipe to perform denoising or sharpening with satisfying results; that job is for the demosaicing software (RawTherapee, darktable, DXO Prime, etc.).
Very true, open-source software is really making amazing strides and innovating where other software is rather stagnant or focused on “AI automation” edits. I do my best to stay in dt as long as possible; maybe I will discover something as I learn more.

A bit over my head. I really want to learn some under-the-hood stuff, as at some point in my life I would love to contribute to a project. It does seem to make sense even with my lack of knowledge of the technicalities. From my understanding, the longer you can keep the data in that linear representation, the better. Not exactly sure why precisely, but I would assume better uniformity in the data.
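A toy way to see why the linear representation matters (my own sketch, not darktable code; the gamma value and tiny box blur are illustrative): the same linear filter gives a different answer depending on whether it runs before or after a non-linear transfer curve, because convolution does not commute with a non-linearity.

```python
# Demonstrate that a linear filter and a gamma curve do not commute:
# filtering linear light then encoding != encoding then filtering.

def gamma_encode(x, g=2.2):
    """Simple display transfer curve (illustrative, not a real OETF)."""
    return x ** (1.0 / g)

def blur3(signal):
    """3-tap box blur with edge replication (a stand-in linear filter)."""
    padded = [signal[0]] + list(signal) + [signal[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3.0
            for i in range(len(signal))]

edge = [0.0, 0.0, 1.0, 1.0]  # a step edge in linear scene light

# Filter in linear light, then encode for display:
linear_first = [gamma_encode(v) for v in blur3(edge)]

# Encode first, then filter the display-referred values:
encoded_first = blur3([gamma_encode(v) for v in edge])
# The transition samples disagree noticeably between the two orders.
```

Since sharpening and denoising are built from exactly this kind of linear filtering, running them after the tone curves (as a late Gimp/Photoshop step necessarily does) operates on a distorted version of the scene light, which is where the asymmetric halos come from.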
Which module can do that in dt?
Let me say first that I completely agree that sharpening must be done in the proper point of the development of an image.
But even though I don’t want to start a discussion about where the proper point in the pipeline is, I guess we must also agree that when an image is heavily post-processed, and then resized, it is somehow good to get rid of some of the smoothness generated. A.k.a. sharpen the image.
If I get a final result with my raw processor of choice, without any more external processing, then sharpening must be done in the best possible point of the raw pipeline. On the other hand if I use some other program after the raw processor, I bet when all the editing is finished, a bit of sharpening is needed.
And then we can talk about which program is better to perform the final sharpening.
It’s active research and development, I’m currently fine-tuning the maths, so it’s not in the release yet.
Proper resizing does not blur (there are ways to avoid it), so it’s a problem to fix at interpolation time, not after. Always try to fix the origin of the bug, not patch its appearance. Besides, will you re-sharpen every time you re-export, for each medium and every size? That’s an insane workflow…
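A toy 1D illustration of that point (assumptions mine, not any real resampler's code): the softness people re-sharpen away after a downscale comes from the filter choice, not from resizing itself.

```python
# Downscale a step edge by 2x with two different prefilters. An
# over-wide filter blurs; a filter matched to the decimation does not.

step = [0.0] * 8 + [1.0] * 8  # a clean edge

def box5(signal):
    """5-tap box blur with edge clamping (deliberately too wide)."""
    n = len(signal)
    out = []
    for i in range(n):
        window = [signal[min(max(i + k, 0), n - 1)] for k in range(-2, 3)]
        out.append(sum(window) / 5.0)
    return out

# Over-smoothed prefilter, then take every other sample: a soft ramp.
soft = box5(step)[::2]

# 2-tap average matched to 2x decimation (enough low-pass for this
# signal): the edge survives as a single-sample transition.
crisp = [(step[2 * i] + step[2 * i + 1]) / 2.0 for i in range(len(step) // 2)]
```

With a well-matched resampling filter there is nothing left to "fix" afterwards, which is the point about repairing the origin of the bug rather than patching its appearance.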
Your understanding is correct, but the full explanation would require 8 hours for me to prepare you a 2-hour explanation that avoids any equation dropping, because those equations are bad-looking and meaningless anyway if you don’t hold at least a bachelor’s in applied sciences.
May you give an example, please? I really wish to learn about this.
Well, I think it’s the proper workflow, as the final sharpening will heavily depend on the device it is addressed to: it’s not the same sharpening for display as for a printer, or for an image that has been downsized to 25% (thus, without interpolation) versus another image that needs interpolation to get to the desired final size.
Or am I wrong? (anyway, I think we are going off-topic)