#StoryTime
Some of you know I started to dive into the image processing dev side a few years ago, with blind deconvolution. I can’t believe this was already 2 years and 25.628 lines of code (in darktable’s core) ago.
2.5 years ago, I saw an advertisement on Instagram about some software (can’t remember the name now) that did blind deconvolution for image sharpening. I was at Polytechnique Montréal, in engineering school, studying boring mechanical engineering (school was boring as Hell, not engineering itself), and shooting portraits of people to forget about it. It was a Friday night, my girlfriend was already sleeping, and I had nothing better to do than scout Google Scholar for it.
Blind deconvolution is the process by which you try to estimate the blurring function of the lens (Point Spread Function, or PSF for geeks) while you actually deblur the picture. The drill is as follows:
- infer a rough PSF,
- while you are not happy with the result and your computer didn’t overheat:
  - sharpen the picture a little bit by reverting the estimated PSF,
  - refine the PSF estimation by comparing the sharpened picture with the original picture.
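If you want to picture that loop in code, here is a toy Richardson-Lucy-style sketch in Python (my own illustration for a single-channel float image, nothing like the optimized C that darktable’s core would need):

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_deconvolve(blurred, psf_size=9, n_iter=20, sub_iter=5):
    # Toy blind Richardson-Lucy: alternate updates of the image and of the PSF.
    # `blurred` is a 2D array (values in [0, 1]); psf_size is the assumed PSF width.
    blurred = blurred.astype(float)
    estimate = blurred.copy()
    psf = np.full((psf_size, psf_size), 1.0 / psf_size ** 2)  # rough initial PSF
    half = psf_size // 2

    for _ in range(n_iter):
        # 1. sharpen the picture a little bit by reverting the current PSF estimate
        for _ in range(sub_iter):
            ratio = blurred / (fftconvolve(estimate, psf, mode="same") + 1e-12)
            estimate = estimate * fftconvolve(ratio, psf[::-1, ::-1], mode="same")

        # 2. refine the PSF by comparing the sharpened picture with the original one
        for _ in range(sub_iter):
            ratio = blurred / (fftconvolve(estimate, psf, mode="same") + 1e-12)
            correction = fftconvolve(ratio, estimate[::-1, ::-1], mode="same")
            cy, cx = correction.shape[0] // 2, correction.shape[1] // 2
            psf = psf * correction[cy - half:cy + half + 1, cx - half:cx + half + 1]
            psf = psf / psf.sum()  # keep the PSF normalized (energy conservation)

    return estimate, psf
```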
That’s basically supervised machine learning. I was not prepared for that. In French, we have a saying: « Bon à rien, prêt à tout » (good for nothing, ready for everything). That’s me.
That led to a painfully slow Python prototype, which eventually became a half-working darktable module, thanks to @Edgardo_Hoszowski. But still, I wasn’t happy with it: the runtime was beyond usable, the quality of the results was unpredictable, and more importantly, the deblurring algo didn’t spare the already-sharp parts of the pictures.
14 months ago, after I had spent 1.5 years repeating to myself “I will have to learn the C programming language to start contributing to darktable at some point”, I decided I couldn’t continue to be a loser with empty dreams, so I stopped sleeping, learned C, SSE2 intrinsics and OpenCL all at once, and 2 months later, Filmic v1 was born. 4 months later, I had 13.207 lines of code merged in darktable’s core (those 13.000 lines were mostly bugs, and code quality isn’t measured in code volume, but just let me brag, okay?).
Anyway, the point is I didn’t give up on deconvolution, and as I developed the tone equalizer and started playing with guided filters, I got another idea: why not replace the Richardson-Lucy-like deconvolution scheme with iterative unsharp masking based on guided filters? That would leave already-sharp edges alone (no over-sharpening) and avoid halos altogether. How nice?
Well, that’s done.
Before (credit: BPN Photo - 50 mm F/1.4):
After:
I’m working on another darktable module that would reconstruct any damaged part of a picture: image doctor. The goal is to perform multi-scale denoising, sharpening, and fringe removal at once, using guided filters and inter-channel correlation assumptions.
Try it (crazy slow, unoptimized, R&D material, backup your database, etc.):
How does it work?
Chroma denoising
On an RGB picture, if you compute the R-G, B-G and G-R-B layers, you will find out that they are pretty much piece-wise smooth, meaning you get smooth surfaces delimited by edges. Luckily, the guided filter provides an edge-aware surface blur, meaning it can stop at edges and blur inside them. Note that we don’t aim here at fully denoising the picture (which is usually ugly and unnatural), but rather at making the noise more pleasing (finer and more even). Also, we won’t care about the PSNR as most academic papers do, because they try to remove synthetic noise added on top of the IMAX or Kodak (scanned film) sRGB image datasets, while we aim at denoising any raw linear image in whatever stupid ill-behaved RGB space your camera records in, in a fashion that lets you print satisfying pictures.
Let \mathscr{G}(image, guide) denote the guided filter. The chroma denoising iteratively performs:
where i is the current iteration and \alpha is a user-set strength parameter for the denoising (which falls back to a simple alpha blending, very suitable given the linear nature of the guided filter output). Note that the guided filter is used here as a simple surface blur.
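Concretely, one plausible form of that update (my notation, assuming the plain alpha blending just described, applied to each chroma layer c) is:

$$c_{i+1} = \alpha \, \mathscr{G}(c_i, c_i) + (1 - \alpha) \, c_i$$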
Before (extended 64508 ISO, under-exposed by 2.86 EV, on Nikon D500 — talk about real-life example - credit mimi85):
After:
Notice that this approach is not enough to fully denoise an image, but it is a very nice complement to @hanatos and @rawfiner’s work on non-local means denoising, and it might allow going softer on that module (and keeping more sharpness) while still getting good-looking photographic results (if anyone here has ever shot Ilford Delta 3200, you know grain is not always evil).
Defringing
I got some inspiration from iterative guided upsampling by residual interpolation for raw demosaicing. Most defringing algos do an edge detection (with a Laplacian filter or similar, which is essentially a second-order derivative approximation) and desaturate those edges in Lab or HSV. However, they don’t make an exception for legitimately sharp and saturated edges, like lipstick or road signs in Europe, so witness the grey-edged lipstick on your models. Sad…
Fringes (chromatic aberrations) happen when the R, G and B channels are decorrelated because of the wavelength-dependent refractive index of glass. Usually, G is centered, while B falls on the inside of the picture and R on the outside. The goal is then to squeeze R and B toward G to get a really sharp edge.
The algorithm for defringing is as follows:
- Guide each channel with G and compute the residual high frequency:
- Guide R and B high frequencies with G high frequency:
- Correct each RGB channel with that:
with \alpha a user-set strength parameter, usually between 0.05 and 0.1.
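Put together, one plausible reading of those three steps (my notation, with HF the high-frequency residual; the exact blending in the module may differ) is:

$$HF_c = c - \mathscr{G}(c, G) \quad \text{for } c \in \{R, G, B\}$$

$$\widetilde{HF}_c = \mathscr{G}(HF_c, HF_G) \quad \text{for } c \in \{R, B\}$$

$$c \leftarrow c + \alpha \, (\widetilde{HF}_c - HF_c) \quad \text{for } c \in \{R, B\}$$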
Before (credit: @rawfiner):
After:
Meanwhile, check that we didn’t mess up the red flowers:
Before:
After:
We are good.
Sharpening
Well, it’s an iterative unsharp mask with a box window increasing by 2 at each iteration. Nothing new here, except that we use a guided filter as the edge-aware blurring operator to avoid gradient reversal. See the example in the first 2 images of this post.
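For illustration, here is a minimal self-guided sketch in Python, with a hypothetical guided_blur helper following the classic He et al. guided filter. It assumes the box window doubles at each pass; it is not the darktable implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_blur(img, guide, radius, eps=1e-3):
    # Plain grey guided filter (He et al.), used here as an edge-aware surface blur.
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size)
    mean_i, mean_g = mean(img), mean(guide)
    a = (mean(img * guide) - mean_i * mean_g) / (mean(guide * guide) - mean_g ** 2 + eps)
    b = mean_i - a * mean_g
    return mean(a) * guide + mean(b)

def iterative_sharpen(img, n_iter=4, radius=2, amount=0.5):
    # Iterative unsharp mask where the blur is the (self-guided) guided filter,
    # so sharp edges are mostly left untouched and halos/gradient reversal are avoided.
    out = img.astype(float)
    for _ in range(n_iter):
        blur = guided_blur(out, out, radius)
        out = out + amount * (out - blur)  # add back the high-frequency residual
        radius *= 2                        # grow the box window each pass (assumed x2)
    return out
```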
Inpainting
I have been investigating the possibility of adding inpainting based on anisotropic heat diffusion to recover blown highlights as well. It’s fully implemented, but I can’t reproduce the results of the paper and it produces bad chromatic aberrations. Maybe I’m just a moron and got it wrong, maybe the examples they show are carefully curated (and none of them are RGB anyway), maybe both. We will see.
FAQ
Will this be included in darktable 3.0?
No.
Will you quit horsing around and fix filmic RGB and UI bugs for the darktable 3.0 release?
Yes, I promise.
Is darktable your full-time job?
Yes, and you can help me help you. Sorry to beg, but you know… rent and health insurance… Basically, R&D like this or like the tone equalizer takes a lot of time, not just programming, but also testing and tweaking the algo, refining user interaction, getting feedback, and remote-fixing weird Windows/Mac bugs that don’t reproduce on Linux.
What beer do you like?
Christmas beers will start to show up soon