Got an image problem? Go see the image doctor.

Very cool. So who is the doctor? :slight_smile:

I have always been wrestling with the same issues as you, but you are 1000 years ahead of me. And yes, inpainting is also the subject I struggle with the most. The results are so unreliable. :cry:

The doctor is in the software ^^.

Definitely.

this looks very awesome. essentially you’re saying you can do a better edge-aware wavelet transform with iterated guided filters, using green as guide? sounds risky for very noisy images, but your example seems to prove otherwise.

this reminds me i need to implement the fully generic guided filter in fast.

Exactly, but the approach is a bit different since:

  • I don’t actually decompose the image into frequency “bands”, but apply the filter again and again, in series on top of its own output, while varying the window size,
  • borrowing from the guided upsampling paper, and confirmed in practice, varying the window size from fine to coarse works better than from coarse to fine (the classic pyramidal approach); see the sketch below.
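To make that concrete, here is a minimal numpy sketch of the serialized scheme (not the actual darktable code; the radii and the regularization `eps` below are placeholder values):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius, eps):
    """Gray-guide guided filter (He et al. 2010): q = mean(a) * I + mean(b)."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    var_I = uniform_filter(guide * guide, size) - mean_I ** 2
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)       # per-pixel affine slope
    b = mean_p - a * mean_I          # per-pixel affine intercept
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def serialized_guided_blur(img, radii=(1, 2, 4, 8, 16), eps=1e-3):
    """Re-apply the self-guided filter on its own output,
    window size growing from fine to coarse."""
    out = img.astype(np.float64)
    for r in radii:
        out = guided_filter(out, out, r, eps)
    return out
```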

For noise reduction, the guided filter falls back to a local patch-wise variance estimation, which I believe could be used to auto-profile the noise reduction from a single image. The theory is yet to be written formally.
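For the record, a naive way to probe that patch-wise variance for profiling could look like this; the low-percentile heuristic is only a guess of mine, not the formal theory:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_noise_variance(img, radius=2, percentile=10):
    """Hypothetical single-image noise profiling: flat patches are
    dominated by noise, so a low percentile of the local variances
    should approximate the noise floor."""
    size = 2 * radius + 1
    mean = uniform_filter(img, size)
    var = uniform_filter(img * img, size) - mean ** 2
    return np.percentile(var, percentile)
```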


@Carmelo_DrRaw I might have found a way to limit the enhanced USM to the actual depth of field, and then increase or reduce it.

The principle is to blend the result of the band-pass USM with an alpha layer such that:

out(i, j) = in(i, j) + \alpha(i, j) \times strength \times BP(i, j),

where BP = 2 HF_{low} - HF_{high} (I know you use BP = HF_{low} - HF_{high}, but I find the cut-off too harsh), and HF = image - LF.

The alpha mask is built by guiding \dfrac{HF_{high} + HF_{low}}{||LF_{high} + LF_{low}||_2} with BP. You can tweak the formula to let the user rescale the depth-of-field map:

\dfrac{HF_{high} + HF_{low}}{||LF_{high} + LF_{low}||_2} + (1 - DOF)

With DOF > 1, the depth of field is made shorter (more blur), and the other way around. Notice that the alpha mask is not a standard premultiplied alpha. Also, you can apply this setup iteratively to get an edge-aware sort of blind deconvolution.
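Putting the pieces together in numpy (grayscale only, with Gaussian blurs standing in for the LF passes and the per-pixel norm reduced to an absolute value; all sigmas, radii and eps are placeholders, and which sigma maps to “low” vs “high” is my reading of the notation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def guided_filter(guide, src, radius, eps):
    # Same helper as in the earlier sketch (He et al. 2010, gray guide).
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    var_I = uniform_filter(guide * guide, size) - mean_I ** 2
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def dof_masked_usm(img, sigma_low=1.0, sigma_high=4.0,
                   strength=1.0, dof=1.0, radius=8, eps=1e-4):
    """Band-pass USM blended with the guided alpha mask described above."""
    lf_low = gaussian_filter(img, sigma_low)
    lf_high = gaussian_filter(img, sigma_high)
    hf_low, hf_high = img - lf_low, img - lf_high
    bp = 2.0 * hf_low - hf_high                    # softer band-pass cut-off
    ratio = (hf_high + hf_low) / (np.abs(lf_high + lf_low) + 1e-9)
    alpha = guided_filter(bp, ratio, radius, eps)  # guide the ratio with BP
    alpha = alpha + (1.0 - dof)                    # DOF > 1 shortens the DOF
    return img + alpha * strength * bp
```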

This is all a bit hacky and ad hoc; I need to check the math to make sure what I’m doing makes sense, but what do you think so far?

OK, the above equations didn’t work as well as I would have liked. I finally chose a simple deconvolution scheme with total variation regularization, without the band-pass filter.
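For those curious, the general idea of TV-regularized deconvolution can be sketched like this. Note the sketch assumes a known Gaussian PSF, which is a big simplification compared to what the module does, and all the constants are only illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tv_gradient(x, eps=1e-6):
    """Gradient of smoothed total variation: -div(grad x / |grad x|)."""
    gx = np.roll(x, -1, axis=1) - x          # forward differences
    gy = np.roll(x, -1, axis=0) - x
    norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
    nx, ny = gx / norm, gy / norm
    div = (nx - np.roll(nx, 1, axis=1)) + (ny - np.roll(ny, 1, axis=0))
    return -div

def tv_deconvolve(blurry, sigma=1.5, lam=0.02, step=0.5, iters=50):
    """Gradient descent on 0.5 * ||k*x - y||^2 + lam * TV(x), with a
    Gaussian kernel k of width sigma standing in for the PSF."""
    x = blurry.astype(np.float64).copy()
    for _ in range(iters):
        residual = gaussian_filter(x, sigma) - blurry  # k * x - y
        data_grad = gaussian_filter(residual, sigma)   # k^T (k*x - y), k symmetric
        x -= step * (data_grad + lam * tv_gradient(x))
    return np.clip(x, 0.0, 1.0)
```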

Example with https://discuss.pixls.us/t/playraw-view-on-the-mosel/15099/2:

Before:

After:

Notice how the background has not been deblurred while the foreground is much sharper. Also, the transition is very gradual.


Excellent Aurélien.


Outstanding effort! I landed here after running into an older YouTube video from Aurélien discussing the initial deconvolution effort, which I thought had been discontinued - it’s always really humbling to see amazing contributions coming to DT. This new module is shaping up nicely.
Is it meant to replace sharpen+local contrast or to complement either or both?

It should replace sharpen to restore optical sharpness, but local contrast is more of a perceptual thing, so it’s not the same.


Is denoise profile non-local faster in this version (or is it because I changed my system)?

“2.5 years ago, I saw an advertisement on Instagram about some software (can’t remember the name now) that did blind deconvolution for image sharpening.”

Piccure?

I tried it and struggled with halos in its output, so I gave up before long.

Looking forward to where this project ends up.

cheers

Yep, this one!


Oh, piccure… took forever, did little :frowning:

Anyway, Aurélien: it seems that guided filters are your leitmotif… where can I read something precise but elementary (!!) about what they are and how they work?

Well, they are quite elegant in their formulation, faster than most filters, and don’t have gradient reversal effects as the bilateral filter does, so…

Here is the base paper:
http://kaiminghe.com/eccv10/
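In a nutshell, within each box window \omega_k the filter fits the output q as a local affine transform of the guide I to the input p, by least squares:

a_k = \dfrac{\mathrm{cov}_{\omega_k}(I, p)}{\mathrm{var}_{\omega_k}(I) + \epsilon}, \qquad b_k = \bar{p}_k - a_k \, \bar{I}_k, \qquad q_i = \bar{a}_i \, I_i + \bar{b}_i,

where the bars denote box averages and \epsilon sets how much local variance counts as an edge to preserve.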

So I read one of Knyazev’s arXiv papers on guided operators. Looks cool, but I don’t get where the guiding image comes from… or I can imagine that it’s the original scene-referred linear RGB when applying luminosity changes (forgive the imprecision) via something like the tone equalizer. But in blind deconvolution?

(This is not a million km from my work, which sometimes involves blind source separation to detect “signatures” of carcinogens, but tends to be strongly influenced by non-linear effects of the constraints… and the investigators’ prejudices.)

The guiding image can be the image itself (useful to remove blur), or one RGB channel can be used to guide another (useful to remove noise or chromatic aberrations).

The point of guiding the image with itself is to create a surface blur (as in the tone equalizer). If you subtract the surface blur from the original image, you get a high-frequency isolation with a variance threshold that ignores already-sharp things. So, when you reapply the high frequencies on top of the image (which is the basis of unsharp masking and deconvolution), you don’t increase the already sharp details (which is what causes halos).

But also, if you repeat that process iteratively and make the blur radius and variance threshold vary along the way, you can remove static blur as a blind deconvolution would, but without having to estimate the PSF (it’s kind of an implicit PSF). You can’t do that with usual deconvolution, since deconvolving with a wrong PSF leads to artifacts (and the PSF varies across the frame). But because of the variance threshold (and the depth-of-field mask I add), the guided-filter-based deconvolution is auto-dampened wherever the surface blur doesn’t accurately match the real blur in the picture, so artifact creation is well controlled; plus, the implicit PSF is patch-wise, so we account for the lens distortion.
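In toy numpy form, the loop looks something like this (the radius/threshold schedules are invented for illustration, not the module’s actual tuning):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius, eps):
    # Same helper as in the earlier sketches (He et al. 2010, gray guide).
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    var_I = uniform_filter(guide * guide, size) - mean_I ** 2
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def implicit_deconvolution(img, strength=0.5, iters=4):
    """Subtract an edge-aware surface blur, re-add the thresholded
    high frequencies, and vary radius/threshold per iteration."""
    out = img.astype(np.float64).copy()
    for i in range(iters):
        radius = 1 + i            # window grows fine -> coarse
        eps = 1e-3 / (i + 1)      # variance threshold tightens over iterations
        blur = guided_filter(out, out, radius, eps)  # self-guided surface blur
        hf = out - blur           # HF already damped where the image is sharp
        out += strength * hf      # halo-resistant sharpening step
    return np.clip(out, 0.0, 1.0)
```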


Oh, of course, you’re using (combinations of) RGB components to adjust the other components… which you said, somewhere. Doh.
So as long as there isn’t a peculiar dependence of texture on chroma, it should work… more or less as an expectation-maximisation algorithm.

Otoh, the tone equalizer lives downstream of the channel mixer… which can be used to create monochrome images. So what happens then?


Tone equalizer comes before channel mixer, so everything is fine regarding colour.

Chroma doesn’t mean anything at this stage of the pipe, since we haven’t done chromatic adaptation yet. The CFA broke the spectrum into 3 RGB layers; as far as we are concerned here, we are only dealing with gradient fields and enforcing their patch-wise correlation.

Excellent!
But what prevents your image doctor from being included in darktable-org?

Maybe the difference between a prototype implementation on one platform and a full-blown, performant and stable implementation working on a whole bunch of platforms, with and without OpenCL, etc.
