Diffuse or Sharpen module ---- For Wildlife photography

I just got back from a bird/wildlife trip to Colombia, and with 3,000+ photos, I have a lot of work in store.

A quick refresh of the DT 3.7 beta, and time to check what’s new before kick-off.

I hadn’t previously looked too hard at the “diffuse or sharpen” module, but having read some good things about it, I decided to apply the deblur and contrast presets on top of my normal processing.

And WOWEE, looks like the team have done it again! A real boost in image quality (assuming it’s not just my elderly eyes that like sharpness and contrast). It’s particularly good for wildlife, as the subject seldom stays put for more than a few seconds, so it’s hard to get perfect focus and to avoid some camera shake.

Anyway - see the hummingbird shot, and of course, if anyone can do any better with the raw file, let’s see it.
ER6_8856.CR3 (23.1 MB)
ER6_8856.CR3.xmp (9.3 KB)
licensed as Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)

My take:

So what’s next for darktable? Capture sharpening?

10 Likes

If you take a look at the ‘why sharpening’ thread, I think it’s already a miracle the diffuse module is here. I don’t go expecting more anytime soon, I guess :).

(PS, capture sharpening is basically one of the ‘AA filter’ or ‘no AA filter’ presets as far as I know)

edit: Stupid autocorrect

Which sharpening thread are you talking about?

Most of the threads relate to Raw Therapee.

In the context, I think @jorismak is referring to this thread.

1 Like

Got it - in fact I did read that thread when it started, and it contains a lot of really useful discussion that I should have looked up before I started this latest thread.

“Why Sharpness” also contains a hummingbird photo processed with and without the new “Diffuse or Sharpen” module that probably says it all much better than I can.

So to change the subject a little, why did I not find that thread? The simple answer is that the pixls.us search function is not sophisticated enough to match “sharpness” when you search for “sharpen” or “sharpening”.

I started to write up a “Tips and Tricks” document about sharpening and contrast as a candidate entry in a new DT “Howto” Wiki. (see this thread).

As soon as I pulled together the material I know about, I realised it’s a lot harder to get it right. And likely contentious too. But I still think it’s worth trying, as there is so much good information on DT hidden in these pixls.us threads. Information that could be distilled down to usable, easily referenced tips and tricks, without losing the arguments and discussion of alternative approaches.

I will keep writing up my tips and tricks docs, and maybe at some point I’ll be confident enough to share them for general use.

5 Likes

That’s a terrific shot. Here on the US East Coast, we have only one native species, although we were treated to a West Coast straggler who successfully over-wintered with us. That must have been a fabulous trip.

So here is my take. I had trouble bringing out the color without clipping the bright feathers and feeder petals:


ER6_8856_01.CR3.xmp (18.7 KB)

I think this shot is a good illustration of the need for delicate handling of sharpening. Birds have a lot of rich details, but they’re also soft. Push the details too hard and you wind up with a bird that looks like a plastic toy.

I used the Sharpen Demosaicing (AA Filter) preset and made a creative decision (:wink: ) to mask around the hummer, because I felt the sharpening around the petal was too much. I’ve also been experimenting with reducing the sharpness slider under edge management to see if I can counteract over-sharpening of already-sharp details (according to @anon41087856’s video, that slider brings back some detail lost to diffusion, so perhaps it would work in the inverse to dial back oversharpened details).

At any rate, it’s a great photo and I had a lot of fun playing with it!

(PS - the rendering of the full-sized image is really horrid when I click on the image. Even worse than FB. Is there a recommended maximum size for JPEGs uploaded to the PIXLS site?)

3 Likes


ER6_8856.jpg.out.pp3 (14.8 KB)

3 Likes

Dave,

I chose a hummingbird photo (this one is a Great Sapphirewing Pterophanes cyanopterus) as a good test case for sharpening because of the iridescence on the wings. They really are highly coloured with a glossy sheen that changes as the bird flexes its feathers. I think in this case, edge definition and crispness are key to displaying what the bird really looks like.

I do understand and agree that less can be more, and there are lots of bird photos online that are wildly overprocessed. Still, I felt the ‘Diffuse or sharpen’ local contrast preset really improved this photo.

Having said that, using more iterations really guzzles compute power - I guess I will need to wrestle a good graphics card away from the bitcoin miners if I want to use the module extensively!

1 Like

More or less, yes. Both use a similar iterative technique to reconstruct gradients, but while capture sharpening assumes a Gaussian blur in the deconvolution process (from what I recall – which, by the way, is not optically correct), diffuse & sharpen just follows the gradients directly, without that assumption, and does it in wavelet space, which makes it possible to target specific frequencies and leave the rest alone. Although, since the wavelet decomposition uses an approximation of a Gaussian to split high frequencies from low frequencies, I think there is a way to tweak D&S to arithmetically match what capture sharpening does.
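To make the frequency-splitting idea concrete, here’s a minimal sketch (not darktable’s actual code; function names are mine) of an à-trous-style decomposition: successive Gaussian blurs split the image into per-scale detail bands that can be boosted individually while leaving the rest alone.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wavelet_bands(image, levels=3):
    """Split an image into per-scale detail bands plus a low-frequency
    residual, using successive Gaussian blurs (a-trous style)."""
    bands = []
    current = image.astype(np.float64)
    for level in range(levels):
        blurred = gaussian_filter(current, sigma=2.0 ** level)
        bands.append(current - blurred)  # detail at this scale
        current = blurred
    bands.append(current)                # coarse residual
    return bands

def boost_band(bands, level=0, gain=1.5):
    """Recombine the bands, amplifying one chosen detail scale."""
    return sum(bands) + (gain - 1.0) * bands[level]
```

Summing the untouched bands reconstructs the original exactly, so “sharpening” here amounts to multiplying the finest band(s) by a gain above 1.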

1 Like

I don’t use sharpening much past resize-for-export-sharpening, so this was an interesting exercise.

Here’s a screenshot of a rawproc instance overlaid by a rawproc snapshot, note the “sharpen:usm” in the toolchain; for the snapshot, it was enabled, then disabled for the rawproc window.

Now, there’s depth-of-field going on in the capture; the feathers in the blue wing are probably behind the focus point, and the green feathers in the body are right about on it. The USM application definitely puts a bit of definition in the blue feathers, but IMHO the green feathers already in sharp focus start to artifact.

Reinforces my perspective that, if I were to really perseverate on image detail, I’d start looking for (and saving for) a higher-MP camera. The old adage: “to improve your image, start with the capture…”

3 Likes

Thanks Glen, interesting points.

Depth-of-field problems, yes - but for amateurs like me, that is a regular issue. The photo was taken in a “Hummingbird Observatory” outside Bogotá, Colombia. Lots of bird feeders, and a different distance to each feeder. This bird was on a feeder fairly close by, approx 5m, and I should have reset my aperture to f/10 or f/11 to get full depth of field.

In the heat of the moment I left it at f/5.6, so very likely the wings are out of focus, not to mention the motion blur on the wings. This bird was darting back and forth to the petal to get the sweet liquid, so it was also hard to get autofocus to lock onto the bird. I set the focus manually by eye after a few tries, so again it was approximate.

The point of the explanation is that I often get a “decent-but-not-quite-on-the-money” shot, that I dearly want to improve if possible. The extra image detail from sharpening looks good value, given that I won’t get another chance to ‘start with the capture’.

Regarding the feather artifacting - it can be tricky to judge with hummingbirds - the structure of their feathers can easily create moiré patterns that may look like camera artifacts but actually come from the feathers themselves.

Anyway, I have several hundred more hummingbird shots so plenty of scope for experimentation.

3 Likes

That’s typically the kind of situation where techniques like unsharp masking will have very limited success…
Deconvolution methods can work there, in theory, but they have their own problems: you need a good kernel, and they require a lot of CPU power (ask @anon41087856 :wink: ) for a limited return on investment.
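For comparison, the classic unsharp mask is only a few lines: subtract a Gaussian-blurred copy and add the scaled difference back. A minimal sketch (function name mine):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=1.0, amount=0.5):
    """Classic USM: boost high frequencies by adding back the
    difference between the image and a Gaussian-blurred copy."""
    img = image.astype(np.float64)
    return img + amount * (img - gaussian_filter(img, sigma))
```

The overshoot and undershoot it creates on either side of an edge is exactly the halo people complain about when the amount is pushed too far.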

Sometimes I forgo sharpening and use local contrast… using bilateral mode, I just enable a parametric mask with no adjustments, and then use the details slider to dial in the effect… my default is to have it at 45%, then tweak that and/or the opacity to get the desired result… it doesn’t always work, but I like it if the image needs a reasonable bump…

1 Like

For reference, here’s a description of the algorithm: Richardson–Lucy deconvolution - Wikipedia and more about the point spread function: Point spread function - Wikipedia
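For the curious, the core Richardson–Lucy iteration is short enough to sketch (a toy version assuming a known, noise-free PSF; variable names are mine):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Richardson-Lucy: iteratively refine an estimate so that,
    re-blurred with the PSF, it matches the observed image."""
    estimate = np.full_like(observed, 0.5, dtype=np.float64)
    psf_mirror = psf[::-1, ::-1]  # adjoint of the blur operator
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (reblurred + 1e-12)  # avoid divide-by-zero
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

In practice noise amplification and ringing kick in quickly, which is why real implementations add damping and regularisation on top of this bare loop.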

2 Likes

Yeah, I know RL deconvolution, having been tinkering with its various variants for 3 years. The problem remains that you need to use a proper PSF, which you don’t know (unless it’s the Hubble telescope and physicists have measured it in a lab). A Gaussian kernel is neither a lens PSF nor a motion PSF.

4 Likes

Yes, I agree – and I didn’t mean to imply you don’t know RL or PSFs; I just wanted to provide context to the discussion.

ART has implemented this using a PSF, but I have no idea how many people have a PSF to use?

The thing is, calculating the PSF for a given lens is quite difficult. It also depends on aperture, focusing distance, etc. That’s why many implementations just assume a Gaussian, and it works decently in practice (as you can see in RT’s capture sharpening).

It’s highly likely that the PSF of a photo lens also varies from lens to lens due to residual errors in lens-group alignment and grinding errors (onion-ring bokeh). Furthermore, real lens PSFs are field-dependent, and sagittal and tangential ‘sharpness’ (coma) in the image field make this really complex, as those can vary drastically as you move away from the centre axis of the lens. One would need to establish a database of PSFs for each lens at various field positions and for every other parameter you mentioned (aperture, focus, zoom).

A modified top-hat PSF should be a slightly better guess for a real-world optical system, especially a stopped-down one, than a Gaussian, but with the aforementioned complexity it’s reasonable that a Gaussian somewhat decently “works” as a PSF for deconvolution.

IMHO that is the reason why so many programs can get away with USM sharpening: it’s the simplest solution that works most of the time, a poor man’s deconvolution sharpening.

Blind-guessing better-suited lens PSFs could be one of the rare cases where a machine-learning algorithm could really shine.

Not really, so far. The runtimes are insane, the results are unpredictable, and handling the correlation between the RGB channels is really difficult.

1 Like