A tone equalizer in darktable?

There was a response regarding the filmic module, but I think the same thing applies here. He has noted that mapping slider values to a parametric curve is not the easiest thing to do. Personally, I don’t mind the sliders. Just focus on the preview as you adjust the sliders rather than on the numbers, and you should be okay.

I agree that adapting filmic to something other than sliders might be difficult. However, in this case the similarity to the equalizer or color zones is obvious. The only significant difference is that you don’t modify the horizontal positions of the nodes.

In all his examples Aurélien moved the sliders to positions forming a smooth curve, which I believe would be a very typical use case:

[screenshot: the equalizer sliders set to a smooth curve]

You can achieve this approximate shape in color zones in about two seconds: just scroll up to increase the radius, then move the center up:

[screenshot: the color zones curve with the center raised]

From there you can do your finer adjustments with a smaller radius.

A lot of users asked for something simple: two sliders, for shadows and highlights. I think 8 separate sliders would discourage many, because if you move just one or two, the results will be ugly; you have to tinker with the neighbouring sliders to keep the transitions between the tone bands smooth.

So this might be a good compromise - it’s quick and easy for novice users, but still gives you the ability to fine-tune every detail.


I like what you propose, but isn’t the result just a horizontally oriented tone curve? Or am I missing something? Maybe the maths are different?

Don’t get me wrong: I like having some sort of visualization like a curve and being able to manipulate it directly is even better. In fact, it has been something that I have requested before. However, even if it is an easy thing to implement, I suspect that the current slider, label and EV interface is what @anon41087856 prefers.

I think the math is quite different from a simple tone curve. @anon41087856 mentions that it works in RGB, but more importantly “The luminance channels are blended with gaussian masks and the modifications are applied on a gaussian pyramid to preserve the local contrast” and “I have yet to squeeze in there either a laplacian pyramid or a wavelet decomposition to affect only the low frequencies”.
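
As a rough illustration of what “blending the luminance with gaussian masks” could mean (this is my own sketch, not the actual darktable code; the band centers, sigma and EV range are assumptions), each slider is a gain attached to an exposure band, and every pixel receives a Gaussian-weighted mix of those gains according to its own log2 luminance:

```c
#include <math.h>

/* Hypothetical sketch: 8 gains (in EV), one per tonal band, with band
 * centers assumed at -7 EV .. 0 EV, blended with Gaussian weights in
 * log2 space. Not the actual darktable implementation. */
#define NUM_BANDS 8

static float band_center(int i) { return -7.0f + (float)i; }  /* EV of band i */

/* Returns the exposure correction (in EV) for a pixel of given luminance. */
float blended_gain(float luminance, const float gain_ev[NUM_BANDS], float sigma)
{
  const float ev = log2f(fmaxf(luminance, 1e-6f));  /* pixel exposure in EV */
  float num = 0.0f, den = 0.0f;
  for(int i = 0; i < NUM_BANDS; i++)
  {
    const float d = ev - band_center(i);
    const float w = expf(-d * d / (2.0f * sigma * sigma));  /* gaussian mask */
    num += w * gain_ev[i];
    den += w;
  }
  return num / den;  /* weighted average of the band gains */
}

/* The corrected luminance would then be luminance * exp2f(blended_gain(...)). */
```

Because adjacent bands overlap through the Gaussian masks, moving one slider automatically feathers into its neighbours, which is exactly what makes the transitions smooth.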

As for the controls, it’s Aurelien’s brainchild, so of course he’s the one to decide. I think the output of both methods is just a set of 8 numbers; it’s only a question of how you manipulate them. An equalizer-like curve has the advantage of being more natural (better visualisation, and we’re all used to the “shadows on the left, highlights on the right” way of thinking) and of allowing quick, smooth manipulation of several adjacent points. On the other hand, sliders with exact numeric values probably give you a bit finer control. So maybe it would even be possible to switch between the two, or show them both at the same time?


Yes, ultimately the view will be changed to an equalizer with an inset histogram in the background, and a color picker to sample one pixel’s luminance. The sliders were the fast way to get a working prototype.

I got the laplacian decomposition to work this week, and the results look really natural. I still have some issues with the image padding, but the algo is pretty much sorted out. I have also added several norms, so you can choose how you select the pixels (by lightness, value, average, euclidean norm, or power-weighted norm). I have found that the euclidean norm gives the most pleasing local contrast preservation, which is consistent with the maths.
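
For the curious, here is a rough sketch of what the listed norms could look like on a linear RGB triple (my own reading of the list, not a quote of the dt code; “lightness” is omitted since it needs a colour-space conversion):

```c
#include <math.h>

/* Sketch of the pixel "norms" mentioned above, computed from a linear RGB
 * triple. Names and formulas are assumptions for illustration only. */
static float norm_max(const float rgb[3])        /* "value": max of RGB        */
{ return fmaxf(rgb[0], fmaxf(rgb[1], rgb[2])); }

static float norm_average(const float rgb[3])    /* arithmetic mean            */
{ return (rgb[0] + rgb[1] + rgb[2]) / 3.0f; }

static float norm_euclidean(const float rgb[3])  /* sqrt(R^2 + G^2 + B^2)      */
{ return sqrtf(rgb[0]*rgb[0] + rgb[1]*rgb[1] + rgb[2]*rgb[2]); }

static float norm_power(const float rgb[3], const float p) /* power-weighted   */
{
  /* higher powers weight the brightest channel more strongly */
  const float n = powf(rgb[0], p) + powf(rgb[1], p) + powf(rgb[2], p);
  const float d = powf(rgb[0], p - 1.0f) + powf(rgb[1], p - 1.0f)
                + powf(rgb[2], p - 1.0f);
  return (d > 0.0f) ? n / d : 0.0f;
}
```

The practical difference is which pixels get pushed into which band: a max-based norm reacts to any single bright channel, while the euclidean norm follows the overall energy of the pixel.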

I have used the module in studio shoots, and it makes a real power trio, along with filmic and color balance.

Right now, I’m working on 5 different projects:

  1. filmic v3 with gamut compression and options to address the oversaturation of blues (also, refactoring of dt’s internals and optimization of the pixel maths, especially the color conversions between spaces)
  2. UI clean-up to remove unnecessary GTK events and drawings, and also to wire every color and size of the GUI to the CSS stylesheet, so we can provide several themes
  3. speed optimizations of the CPU code through vectorization and multi-architecture building, so that even when you install dt as a package there is an optimized code path for your particular CPU architecture (currently, packages are one-size-fits-all builds using the most basic optimization level; see the sketch after this list)
  4. this equalizer
  5. giving a hand on color management to other devs, especially @Edgardo_Hoszowski , who is trying to make the pixelpipe reorderable, and @phweyland , who has improved the tone curve module with Lch and RGB modes and wrote a module that lets dt use Cube LUTs from the movie industry (very good for reproducing an accurate film look).
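
Regarding item 3 above, here is a minimal sketch of one way multi-architecture building can work, using GCC’s function multi-versioning (illustrative only; I am not claiming this is how dt’s build is actually set up): the compiler emits one clone of the hot loop per listed target, and the best one is picked at load time for the running CPU.

```c
#include <stddef.h>

/* Hypothetical example of GCC function multi-versioning: one binary ships
 * several clones of the hot loop, selected at load time per CPU.
 * Not taken from darktable's sources. */
__attribute__((target_clones("avx2", "sse4.2", "default")))
void scale_pixels(float *restrict out, const float *restrict in,
                  const float gain, const size_t count)
{
  for(size_t i = 0; i < count; i++)
    out[i] = in[i] * gain;   /* auto-vectorized differently in each clone */
}
```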

Once this round is done, I have projects to:

  1. create a scene-referred spectral color mixer (red, orange, yellow, green, cyan, blue, purple, magenta) to adjust the lightness and saturation of colors along the visible spectrum, especially when shooting under bad lighting with a scattered spectrum (for example, compact fluorescent energy-saving bulbs destroy colors because they have an uneven emission spectrum). That would be equivalent to “filling in” the lighting spectrum directly in the scene, in a physically accurate way. It would make recovering the local desaturation of only some colors a lot easier than the color zones module does (and remember: the white balance module assumes a daylight spectrum, so you shift the white point, but you don’t fill the spectrum).
  2. improve the usability by enabling mouse + keyboard shortcuts and key + arrow up/down shortcuts, so that the whole edit could be done directly from the keyboard (for example: increase exposure = E + Arrow Up, decrease temperature = T + Arrow Down), and possibly from the lighttable
  3. improve the support of Wacom tablets, because it’s really not good enough for the retouch module
  4. make the local contrast and color balance (contrast adjustment) work in xyY space, linear and log.
  5. I still have the deconvolution stuff to get working.

All of this would hopefully make contrast/lightness/saturation, shadows/highlights, basecurve, relight, zones, and the 2 tone-mapping modules obsolete, replacing them with color-safe variants that are possibly easier to control (or, at least, break fewer things), and make the GUI of darktable much cleaner.

I’ve been working full-time on dt since Christmas, hoping it’s a long-term investment, so if you have some to spare, that would relieve some stress: PayPal.Me

Thanks.


Wonderful.

Do you mean out of your list (lightness, value, average, euclidean norm, power-weighted norm), or out of all of the norms? I agree that the euclidean norm has the nicest properties. I use it all the time in my G’MIC processing.

Glad it is still on your to-do list.

Of all the norms I know, and also of the available options in the list.

That’s several orders of magnitude more difficult than anything I have achieved so far in dt, since the maths are hardcore and need to be adapted (it’s basically supervised machine learning on top of statistical distributions of gradients, with no plug-and-play model), and it also needs to be made usable in terms of runtime (there are lots of optimizations/reasonable approximations to find, on both the maths and the programming side). The time required just to test the changes I make to the algos is the most limiting factor.


I think this would also mean improving the brush; compared to the other tools it isn’t great to handle…


hi,
thanks for sharing, this looks really interesting. I am trying to understand what you are doing, and I have a couple of questions about your laplacian_filter (if you don’t mind naive questions from a total newbie in signal processing):

  • what is the idea behind the function? When applied to the original image, it seems to be an edge-enhancing filter; is this right? How does it relate to the “Laplacian of Gaussian” filters whose descriptions you can find online?

  • how would it differ from applying some local contrast to the output? (apart from efficiency I mean)

  • is it intentional that it still boosts details even when all the sliders are in the neutral position?

thanks in advance

When you pull down the highlights while pushing up the midtones (as happens very often), you reduce both the global contrast (luminance) and the local contrast (perceived sharpness). The laplacian, here, is a way to affect only the global contrast, aka the low frequencies, and retain the local contrast. It is a multi-scale filter designed to avoid edge artifacts.

Retaining the original signal is usually cleaner than trying to restore a damaged good afterward.
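
A minimal single-scale sketch of that idea (my own illustration, assuming a generic blur and a generic tone function; the actual module works on a multi-scale decomposition to avoid edge artifacts):

```c
#include <stdlib.h>

/* Single-scale illustration: split the luminance into low and high
 * frequencies, tone-map only the lows, and add the untouched detail back.
 * blur() stands for any blur, tone() for the equalizer's correction;
 * both are placeholders here, not darktable functions. */
void tone_preserve_detail(float *lum, const size_t n,
                          void (*blur)(const float *in, float *out, size_t n),
                          float (*tone)(float low))
{
  float *low = malloc(n * sizeof(float));
  if(!low) return;

  blur(lum, low, n);                        /* low frequencies (global contrast)  */

  for(size_t i = 0; i < n; i++)
  {
    const float detail = lum[i] - low[i];   /* high frequencies (local contrast)  */
    lum[i] = tone(low[i]) + detail;         /* correct the lows, keep the detail  */
  }
  free(low);
}
```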

Noooope that’s a bug :grin: Could you show me?


Thanks, this matches my intuition. I guess I need to take a look at the paper to better understand the details (I saw that you added a link to the paper; I think it wasn’t there a couple of days ago when I looked at the code… or am I misremembering?)

Well, I don’t have a concrete example, so I’m not really sure. But it seems to me that laplacian_filter can still generate a non-zero weight even when luminance_in and luminance_toned are identical. In that case, wouldn’t that alter the local contrast of the image even in the neutral position? But it might just be that I’m missing something.

The paper is just an inspiration for the method. I have built my own functions from the diffusion equation, and simplified their formulation using the linearity properties of my mapping function.

Someone is watching :wink:

If you use a very small \sigma (< 1px) in the Gaussian, you come close to a Dirac impulse and only the central pixel gets enough weight, but it is also the only one to have a difference L_{in} - L_{toned} = 0. So, in this case, it’s a no-op.

As \sigma >> 1, the result is a diffusion over the neighbourhood, so it’s close to inpainting. I have tested it on specular reflections on glasses; it actually behaves like a highlight-recovery tool.
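
In my own notation (assuming the weight is a normalized Gaussian of the distance to the central pixel x_0; this is how I read the description, not a quote of laplacian_filter), the two limits can be written as:

$$\Delta(x_0) \;=\; \frac{\sum_{x \in \Omega} e^{-\lVert x - x_0 \rVert^2 / 2\sigma^2}\,\bigl(L_{in}(x) - L_{toned}(x)\bigr)}{\sum_{x \in \Omega} e^{-\lVert x - x_0 \rVert^2 / 2\sigma^2}}$$

As \sigma \to 0 the weights collapse onto x_0, where L_{in} - L_{toned} = 0, so \Delta \to 0 (the no-op above); as \sigma \gg 1 the weights flatten and \Delta becomes a neighbourhood average, which spreads the correction like a diffusion/inpainting.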


Yes, with interest…
So, here’s an example of what I meant in my previous comment. Look in particular at the edges around the train and the semaphore:
https://filebin.net/zxkf1fqwdra7hlvv/Peek_2019-03-04_14-26.mp4?t=9s55rapn

I use LoG all of the time. :+1:

@agriggio


right click, copy URL.
I don’t know what I’m doing wrong, every time I link a video from filebin it seems to work at first, but then it gets screwed up…

It sends me to a download page, which is okay.

Could you try with details = 1px? I haven’t had such a strong effect with other pictures.

OK, I have found several errors in the indexing (out-of-bounds stuff and passing negative shifts to size_t; as always, I’m a moron).

sorry, I forgot to add that the raw is available from here, if you want to use it for testing:

https://pixls-discuss.s3.dualstack.us-east-1.amazonaws.com/original/3X/6/7/67a3e8c83b2070acdeeacc78fcc5a0f9f0e3ff77.RW2

taken from this post: Local Lab build - #659 by spidermonkey