Image Processing – Recommended Reading

What is it that you want to understand? A sensible way to push pixels for photographers? The theoretical background of the algorithms used in darktable? All of it?

I learned everything the hard way… My original trade is solving heat-transfer PDEs by finite elements. So, using my maths and some epistemology background, I binge-read Google Scholar and tried to reproduce the results (a lot of papers come with Matlab code) until things started to make sense. After a while, just by looking at the equations (good maths are elegant) and the results (halos mean bad algorithms in bad colour spaces), with years of practical photo-editing experience, you can tell good ideas from geeky BS.

Then, all you have is your judgement. Whenever people squeeze perceptual frameworks into what are 100% signal-processing issues (interpolation, denoising, reconstruction), you know it stinks. There is a lot of BS in academia these days: people pushed to publish whatever, low-ranked universities turning to image processing because it doesn't cost much (you don't need to build a super hadron collider to do research in image processing, that's for sure), so in the end your judgement is key anyway. It really falls back to straight epistemology.

1 Like

This is the key question, isn't it? I'm not sure I have a good answer, but I can give you a brief stream of consciousness to try and give you an idea… My initial motivation is to understand complex darktable modules on a deeper level than 'push the sliders until the image looks good'. So that when, for example, I look at the new (darktable 3, non-local-means) version of profiled denoise, I can understand what's happening when I move each of the seven sliders, and choose which one to adjust, when, and why (the auto mode seems a bit like cheating to me!).

The manual is a good starting point but, being written by people who know the subject inside out, it can often assume knowledge that the average photographer doesn't have, and at some point I usually start to get a bit lost and revert to 'play with the sliders and see what happens'. Transferring the information in the manual to a reasonable workflow is sometimes hard to do. Some of the theoretical background would probably help me do this in a less 'random' way and allow me to adapt my workflow based on the image and my intent.

Ultimately I would love to contribute back to the project (to coding/testing or to the user manual) but to have a workflow that I’m fully in control of (because I understand it) is my initial goal.

However, I’m basically interested in everything, so I’m happy to get pulled down some academic rabbit-holes on the way.

Here is the chance for you to help solve what you've railed against for some time: photographers who don't know the difference between the tip of their lenses and the holes in their ass :wink:

2 Likes

Agreed, and this discussion is partly motivated by the discussion on @anon41087856's pull request regarding the use of the term 'expert' to describe darktable on the website (make the intent of the soft clearer by aurelienpierre · Pull Request #59 · darktable-org/dtorg · GitHub); I almost pasted some of the comments from that discussion into my initial post. There's a danger that the tool ends up being geared towards experts with no agreed way to turn someone into an expert (or even agreement on what constitutes an expert), so it becomes a tool that only the developers can use properly.

Anything I can do to become such an expert (and to make sure there are more of them) can only be a positive for myself and for the project.

@elstoc There are plenty of threads that discuss dt’s denoising in detail (alliteration!). You could start there for that particular topic.

For me personally, the part I aim to change is

For now, learning darktable is a lot of lonely effort, crossing sources between the documentation, courses, tutorials, blogs, forums and lots of personal tests until it all converges toward a usable workflow based on some understanding of what’s going on at a relatively low level.

We are not alone; we are here to better our own, and then others', understanding of application use, image processing, and perhaps some general nerditry.

Indeed!

Couldn’t agree more @paperdigits. It absolutely needs to be made clear that there’s a big learning curve to darktable (and that the learning will be rewarded). But then we need to be clear how to go about it, and to put all of that disparate information in one place.

There are plenty of threads about denoising. That's part of the problem! It's also, it must be said, part of the solution: the information is out there, it's just really hard to get at.

2 Likes

@paperdigits has been writing a book for some time to summarize the wild and wacky world of photo processing and management. I wonder where you are at? :point_right: :wink: :stuck_out_tongue:

As an example, I use Arch Linux at home. It's designed to be a Linux distribution for 'experts' (people who aren't afraid of the command line and don't mind learning how to set up configuration files). It gives you a command line and a package manager and expects you to create your own system basically from scratch.

It can only do this because it also provides a massive wiki site that gives you all the information you need to become an expert.

This is what darktable needs.

2 Likes

Still very much in the thinking stage of it. I’d like to get a skeleton published so people can start writing in it though… that’d be a good project for the new year, perhaps.

I don’t really see this as a problem… but it comes down to how you expect to consume information. This is a forum, not a book, and as such information is scattered about and fairly unorganized. I’ve been looking at alternate firmware for smart plugs lately, and if you think this forum is scattered, boy, do I have something for you to look at :laughing:

2 Likes

I’d love to help.

OK, that's easy. I can come up with a list of background papers for each big chunk in dt (it might take some time though).

4 Likes

Thanks @anon41087856. I think there are a lot of people who would find that very useful.

2 Likes

I am one who will find that very useful indeed. First I need to reinstall my Manjaro system (Arch Linux for dummies!) and start going through the tutorials on dt 3.0.

About white balance and colour adaptation:

About contrast equalizer in darktable:

(By the very @hanatos; you, sir, need to write more of these :slight_smile:)

About the local Laplacian:

About non-local means to denoise (a quick sketch follows at the end of this post):

About display-agnostic workflows:

I will come back if I find more. Some of these resources sometimes disappear, so I suggest you download them somewhere safe.
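Since non-local means keeps coming up in this thread, here is a minimal, unoptimized sketch of the core idea (numpy only, hypothetical helper, image borders ignored for brevity): each pixel is replaced by a weighted average of pixels whose surrounding patches look similar to its own.

```python
import numpy as np

def nlm_pixel(img, y, x, search=10, patch=1, h=0.1):
    """Denoise one pixel of a 2D image by averaging pixels with similar patches."""
    ref = img[y - patch:y + patch + 1, x - patch:x + patch + 1]
    num, den = 0.0, 0.0
    for j in range(y - search, y + search + 1):
        for i in range(x - search, x + search + 1):
            cand = img[j - patch:j + patch + 1, i - patch:i + patch + 1]
            d2 = np.mean((ref - cand) ** 2)   # patch dissimilarity
            w = np.exp(-d2 / (h * h))         # similar patches get large weights
            num += w * img[j, i]
            den += w
    return num / den
```

The `h` parameter plays the role of the filtering-strength slider: larger values treat more patches as "similar", which smooths more but can blur detail.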

11 Likes

Thanks @anon41087856. Much appreciated.

You are welcome.

Some more prospective stuff that could be used to up the darktable game… This is curated, state-of-the-art material.

Bayesian refinement on the non-local means

(cc @rawfiner )

Profiling the noise variance from a single image

Next-gen demosaicing to up darktable’s game

Deblurring

Inpainting and image reconstruction

Spectral colour profiling for cameras

Single sensor imaging (book)

4 Likes

Wow. I’m going to need to set aside some serious reading time.

1 Like

Basic and effective tone-mapping reading, with a great extension of the Reinhard operator:

https://64.github.io/tonemapping/
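For reference, the "extended Reinhard" curve that the article builds up to looks like this (a minimal sketch; `l_white` names the smallest luminance that should map to pure white, and is my choice of parameter name):

```python
import numpy as np

def reinhard_extended(lum, l_white):
    """Extended Reinhard tone curve: lum == l_white maps exactly to 1.0."""
    return lum * (1.0 + lum / (l_white * l_white)) / (1.0 + lum)

# usage: compress scene luminance so that anything >= 10 clips to white
# sdr_lum = reinhard_extended(hdr_lum, l_white=10.0)
```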

ITU Report BT.2446:
http://www.itu.int/pub/R-REP-BT.2446-2019

On page 21, section 5.1.8, "Optional processing of chroma correction above HDR Reference White", explains a simple but really effective way to control highlight desaturation.
This could be used with any tone-mapping method.

Edit:
An example using luminance-based chroma preservation, like darktable does.

We start with an HDR picture (call it `rgb_hdr`) at 1000 nits (0-10 range, so L*max = 10) and we want to tone-map it to SDR at 100 nits (0-1 range, so L*ref = 1):

```python
import numpy as np

# rgb_hdr: (H, W, 3) linear image in the 0-10 range; y_hdr: (H, W) its luminance (L*)
l_ref, l_max = 1.0, 10.0
desat_mult = (y_hdr - l_ref) / (l_max - l_ref)    # (L* - L*ref) / (L*max - L*ref)
desat_mult = 1.0 - np.clip(desat_mult, 0.0, 1.0)  # clamp outside the 0-1 range, then invert
# blend each channel toward its luminance as brightness approaches L*max
rgb_hdr_desat = (rgb_hdr - y_hdr[..., None]) * desat_mult[..., None] + y_hdr[..., None]
```
1 Like

Some links of my own too:
Image denoising cuisine (which gives a good background on denoising): http://mcolom.perso.math.cnrs.fr/download/articles/acta_numerica.pdf

Some more prospective stuff (papers that I find interesting):
On how to white balance when there are 2 different light sources:
Light Mixture Estimation for Spatially Varying White Balance http://people.csail.mit.edu/ehsu/work/sig08lme/lme-sig2008-sm.pdf
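The basic idea, as I understand it: instead of one global set of channel multipliers, you estimate a per-pixel mixture between the two illuminants and blend the corrections. A minimal sketch follows; the mixture map `alpha` is exactly what the paper works hard to estimate, so here it is just an input:

```python
import numpy as np

def spatially_varying_wb(img, gains_a, gains_b, alpha):
    """Blend two white-balance corrections with a per-pixel mixture map.

    img:      (H, W, 3) linear RGB lit by two light sources
    gains_a:  length-3 channel multipliers for illuminant A
    gains_b:  length-3 channel multipliers for illuminant B
    alpha:    (H, W) estimated weight of illuminant A at each pixel
    """
    a = alpha[..., None]
    gains = a * np.asarray(gains_a) + (1.0 - a) * np.asarray(gains_b)
    return img * gains
```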

On denoising:
to estimate an image noise variance:
Noise-Level Estimation from Single Color Image Using Correlations Between Textures in RGB Channels https://arxiv.org/pdf/1904.02566.pdf

(@anon41087856 about non-local Bayes: I have to think about how to get a fast implementation of it. I am not confident about this because of the matrix-processing steps (I fear that inverting a covariance matrix for each pixel, plus doing some matrix multiplications, would be too costly). Still, it would be nice to have if it can be optimized enough. I will take some time to think about how we could do these steps fast, and discuss it with you if I get stuck. A minimal sketch of the step in question is below.)
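For concreteness, here is a minimal sketch of the NL-Bayes filtering of one group of similar patches, following the first-step formula of Lebrun, Buades and Morel; the d×d covariance solve is the per-group cost in question. It assumes the empirical covariance is invertible:

```python
import numpy as np

def nlbayes_group(patches, sigma):
    """Filter a group of similar patches with the NL-Bayes first-step estimate.

    patches: (n, d) array of n flattened noisy patches assumed similar
    sigma:   noise standard deviation
    """
    mean = patches.mean(axis=0)
    centered = patches - mean
    cov = centered.T @ centered / (patches.shape[0] - 1)  # d x d empirical covariance
    # P_hat = mean + (C - sigma^2 I) C^{-1} (P - mean); use a solve, not an explicit inverse
    shrink = np.linalg.solve(cov, centered.T).T * sigma**2
    return mean + centered - shrink
```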

2 Likes