[Suggestion] Simplify Tone Equalizer

For me it is usually a two-step process. I first adjust the mask so that it resolves the tones I’m interested in modifying. Then I go to the basic adjustments tab and either use the sliders or use the scroll wheel over the image to modify the tones. I don’t usually go back and forth.

However, I often do this with two instances of the tone equalizer. E.g. the first instance to adjust highlights around the sun or in the sky. It is amazing to have fine control over the contrast within the sky that way. Then maybe a second instance to brighten up some trees or a forest. But the process of adjusting the mask first to resolve the areas I’m interested in and then modifying the tones is the same.

2 Likes

I am not going back and forth between the mask and tone changing. I am just getting the mask “right” before changing any tones.

good approach.

1 Like

Sorry to hijack the thread, but as there are a lot of knowledgeable people on here I wonder if you can give a few tips.
I struggle with the Tone Equalizer, in particular with getting nice transitions between the tones on the mask.
I always seem to end up with quite harsh transitions, so when tones are adjusted I get horrible artifacts.
I keep going back to it and spend time fiddling with the sliders, but to not much avail (probably changing too much out of frustration). I find the new eigf to be particularly harsh.
So much so that it is always quicker for me to create a few exposure modules with masks to effectively do the same job. However, I want to get the hang of the tone equalizer because I think it will eventually be quicker and probably more effective.
I have watched videos, read blogs, re-watched until my eyes bleed and still can’t get the hang of it.
Any tips?
Ta

Hi. Maybe you can share a raw file in a new “play_raw” topic?

I kinda get the gist; I was just looking for tips on how to smooth transitions between tones.
For example, in @kofa’s post, in particular the top picture of the mask of the flower.

There is quite a hard transition between the highlights on the flower and the surrounding petal.
If I have this on one of my masks and the highlights are toned down by scrolling the wheel over them, then I tend to get a noticeable transition in the final image.
I have tried all the sliders but can’t quite nail it, so I am missing something.
My question was how to smooth these a little more effectively so they blend more nicely.

Note that this was an un-tuned mask, turned on simply to demonstrate the issue I had observed with turning colour calibration on/off. I think the hard transition is simply the mask being clipped (being too bright). That image does not actually need tone EQ at all (it’s uniformly lit, a flower in shade).

3 Likes

Okay, that didn’t take long, did it? I think I understand better now after just a quick play and a re-read.

This answered my question in particular: ‘adjust the mask so that it resolves the tones I’m interested in modifying’

I think I was trying to do too much, thinking about the tools rather than looking at the image: trying to get the grey slider in the middle, the bar completely filled, and a histogram across the entire range.
Hence I had too much contrast compensation and therefore harsh transitions.
As soon as I set the mask purely for the highlighted areas (the tones I was interested in adjusting) and forgot about the rest, I got a better result.
It seems so simple now that I can’t understand why I didn’t see it before (slap head)
Thanks

1 Like

You can also play with the upper three sliders:

  • transitions are too hard: try increasing filter diffusion or changing the smoothing diameter
  • you need better separation of edges: increase edges refinement

I’m a software developer, but unfortunately don’t have any C experience, though maybe I’ll dive in at some point.

I understand the math is complicated, but this reminds me of genetic algorithms back in school. Here’s a basic algorithm that might work for this situation, which shouldn’t be too difficult to write (rough sketch below):

  1. Start with a pool of solutions, maybe 5.
  2. Using random values, calculate those solutions.
  3. Store the best solution out of the pool.
  4. For the next iteration, build one solution by taking the best solution and slightly modifying its values randomly. Build the other 4 from the 2 best solutions by swapping some of their values and slightly randomizing them. Calculate those solutions, compare to the stored best solution, and replace it if there is a better one.
  5. Repeat several times

Usually this converges pretty quickly for simple problems like this. I’d be happy to collaborate on the coding, but it would take me a long time to start writing it from scratch without familiarity with C++ and the codebase.
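
To make the idea concrete, here is a rough, self-contained C sketch of that loop. Everything in it is a stand-in: I’m assuming the two values being searched are the mask exposure and contrast compensation, and evaluate_candidate() is a dummy with a known optimum just so the sketch compiles and runs; the real cost of each evaluation would be recomputing the mask itself.

```c
/* Rough, self-contained sketch of the search described above.
 * evaluate_candidate() is a placeholder; in darktable each evaluation
 * would mean recomputing the tone equalizer mask, the expensive part. */
#include <stdio.h>
#include <stdlib.h>

#define POOL 5
#define GENERATIONS 20

typedef struct
{
  float exposure; /* candidate mask exposure compensation (EV) */
  float contrast; /* candidate mask contrast compensation */
  float score;    /* higher = better spread of the mask histogram */
} candidate_t;

/* dummy fitness: a quadratic bowl with its optimum at (-1.5, 0.3) */
static float evaluate_candidate(float exposure, float contrast)
{
  const float de = exposure + 1.5f, dc = contrast - 0.3f;
  return -(de * de + dc * dc);
}

static float frand(float lo, float hi)
{
  return lo + (hi - lo) * (float)rand() / (float)RAND_MAX;
}

static void score_pool(candidate_t *pool, candidate_t *best)
{
  for(int i = 0; i < POOL; i++)
  {
    pool[i].score = evaluate_candidate(pool[i].exposure, pool[i].contrast);
    if(pool[i].score > best->score) *best = pool[i];
  }
}

int main(void)
{
  candidate_t pool[POOL], best = { 0.f, 0.f, -1e30f };

  /* steps 1-3: random initial pool, remember the best */
  for(int i = 0; i < POOL; i++)
  {
    pool[i].exposure = frand(-4.f, 4.f);
    pool[i].contrast = frand(-1.f, 1.f);
  }
  score_pool(pool, &best);

  /* steps 4-5: mutate the best, recombine the two best, keep the winner */
  for(int gen = 0; gen < GENERATIONS; gen++)
  {
    int b0 = 0, b1 = 1; /* indices of the two best in the current pool */
    if(pool[1].score > pool[0].score) { b0 = 1; b1 = 0; }
    for(int i = 2; i < POOL; i++)
    {
      if(pool[i].score > pool[b0].score) { b1 = b0; b0 = i; }
      else if(pool[i].score > pool[b1].score) b1 = i;
    }
    const candidate_t p0 = pool[b0], p1 = pool[b1];

    /* child 0: small random mutation of the best */
    pool[0].exposure = p0.exposure + frand(-0.25f, 0.25f);
    pool[0].contrast = p0.contrast + frand(-0.1f, 0.1f);

    /* children 1..4: crossover of the two best plus a small mutation */
    for(int i = 1; i < POOL; i++)
    {
      const int swap = rand() & 1;
      pool[i].exposure = (swap ? p0 : p1).exposure + frand(-0.25f, 0.25f);
      pool[i].contrast = (swap ? p1 : p0).contrast + frand(-0.1f, 0.1f);
    }
    score_pool(pool, &best);
  }

  printf("best candidate: exposure %.2f EV, contrast %.2f (score %.4f)\n",
         best.exposure, best.contrast, best.score);
  return 0;
}
```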

Unfortunately this does not necessarily lead to a useful mask. Defining a useful mask is not the same as spreading the mask’s histogram over the whole range.
An algorithm can’t be aware of which areas the editor needs to differentiate via masking…

1 Like

Each of these solutions will take 250 to 500 ms to compute, depending on the CPU. And, actually, we have 2 variables to find: mask exposure compensation and mask contrast compensation. So it should be a parametric study of 5×5 solutions, and the first step alone takes 6 to 25 s to compute.

Another 1 to 2 s at the very least.

Then wait for people to complain that opening a new picture induces a 30 s lag before the preview appears.

Meanwhile, setting the thing manually takes like 10 s if it’s before your first coffee. Sometimes, I wonder if people take me for an imbecile or for an idiot. Again, if there was an easy good-enough solution, it would be coded already.

7 Likes

Hardly. They may take you for a wizard who can fix everything with a flick of a wand compiler :smiley:

10 Likes

Hi all,

A few thoughts regarding the Tone EQ.

It is

  • A chroma and local contrast preserving version of the tone curve.
  • Seemingly inspired by the Ansel Adams zone system: the nodes are placed 1 EV apart.
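
To put the first point in code terms, this is a simplification of how I picture the per-pixel operation (my assumptions, not the actual darktable code): the smoothed mask gives a log2 luminance, the node values are interpolated around it, and the resulting gain is applied equally to R, G and B, which is what preserves chroma.

```c
/* Simplified sketch of my mental model, not the actual darktable code. */
#include <math.h>

#define NUM_NODES 9 /* nodes at -8, -7, ..., 0 EV */

/* interpolate the user-set node values around the mask luminance (in EV) */
static float node_gain_ev(const float node_values[NUM_NODES], float mask_ev)
{
  float gain = 0.f, weight = 0.f;
  for(int k = 0; k < NUM_NODES; k++)
  {
    const float d = mask_ev - (float)(k - 8); /* node position in EV */
    const float w = expf(-0.5f * d * d);      /* smooth, ~1 EV wide falloff */
    gain += w * node_values[k];
    weight += w;
  }
  return gain / weight;
}

/* apply the same multiplicative boost to all three channels */
static void apply_tone_eq(float rgb[3], const float node_values[NUM_NODES], float mask_ev)
{
  const float boost = exp2f(node_gain_ev(node_values, mask_ev));
  for(int c = 0; c < 3; c++) rgb[c] *= boost;
}
```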

Pain points

  • Tweaking the mask so that it preserves local contrast diminishes the mask’s contrast, so fewer nodes actually control the final result.
  • To deal with this there are several tools: mask post-processing, mask quantisation, mask exposure compensation, and mask contrast compensation. They help us manually spread the mask’s histogram across the various nodes, but are a bit fiddly.
  • Ideally this should be automatic, but technically it’s very hard to achieve.

Question

Is the 1 EV separation of the nodes necessary for the internal workings of the module? Let me explain: after fine-tuning our mask (luminance estimator, preserve details, filter diffusion, smoothing diameter, and edges refinement/feathering), let’s say we end up with a histogram spread over 1.5 EV. Instead of trying to spread that histogram across the nodes through exposure and contrast adjustments, would it be crazy to divide that 1.5 EV into however many equal portions we need to match the number of nodes? That way the histogram would always fit the whole node spread.

The module would then, in theory, keep working as usual: for a given pixel, it would look it up on the mask, see which zone it falls in, and apply the corresponding exposure adjustment.
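
A hypothetical sketch of that lookup, reusing NUM_NODES from the snippet above (mask_min and mask_max would be the measured darkest and brightest mask values, in EV):

```c
/* Sketch of the proposal (hypothetical, not existing darktable code):
 * divide the measured mask span into as many equal zones as there are nodes,
 * so the histogram always covers the whole node spread. */
static int zone_of_pixel(float mask_ev, float mask_min, float mask_max)
{
  const float t = (mask_ev - mask_min) / (mask_max - mask_min); /* 0..1 */
  int zone = (int)(t * (NUM_NODES - 1) + 0.5f);                 /* nearest node */
  if(zone < 0) zone = 0;
  if(zone > NUM_NODES - 1) zone = NUM_NODES - 1;
  return zone; /* the applied adjustment would be node_values[zone] */
}
```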

Is this feasible, or completely wrong?

Preserving local contrast demands similar changes in a local range - that’s exactly why the mask contrast is reduced

You need to spread the mask histogram so it fits your needs: keep together the areas you want to tweak equally, and separate the areas you want to tweak differently.
Since that’s not trivial, there are a couple of tools to do so.

At least, unless darktable is able to run in your brain, no automagic will be able to know your intention

And by the way: if you want to make very detailed tweaks at the lower and upper ranges of the image tones, you are better off using two instances with different masking instead of fiddling around in a curve with 16 or more nodes. That just simulates precision…

1 Like

Preserving local contrast demands similar changes in a local range - that’s exactly why the mask contrast is reduced

Yes, that’s clear.

You need to spread the mask histogram so it fits your needs: keep together the areas you want to tweak equally, and separate the areas you want to tweak differently.

This is the way it works ATM. It does what it should, and after you get the hang of it, it’s not a big deal. However, and leaving all technical difficulties aside for a moment, I think we can all agree that it would be much better for the user if it “just worked”, without having to manually spread the mask’s histogram.

Since it seems to be very difficult to automatically and reliably expand this histogram across the 9 EV range, my question was whether this dynamic-range expansion is needed at all for the inner workings of the module.

Instead of this:
Screenshot_20211127_104119

This:
toneEQ

Obviously the UI wouldn’t change from what we currently have; I am just trying to explain what I mean. In my screenshots, the mask spans 2.5 EV. The question is: can we divide those 2.5 EV equally among the 9 nodes and have the module work properly? If so, in theory, the mask exposure and contrast compensation would not be necessary. Even if this could not be completely automatic, perhaps we could have some kind of black and white point sliders or eyedroppers, like in the filmic module: not to change the mask’s exposure, but to tell the module what the mask’s brightest and darkest levels are.
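
As a sketch of what such pickers could compute internally (purely hypothetical, and treating exposure compensation as a shift and contrast compensation as a plain gain, which is a simplification of the real sliders):

```c
/* Hypothetical helper: map the measured darkest/brightest mask levels (EV)
 * onto the -8..0 EV node range by deriving the equivalent exposure and
 * contrast compensation, instead of asking the user to find them by hand. */
typedef struct
{
  float exposure_comp; /* shift applied to the mask, in EV */
  float contrast_comp; /* gain applied to the mask */
} mask_fit_t;

static mask_fit_t fit_mask_to_nodes(float mask_min_ev, float mask_max_ev)
{
  const float target_min = -8.f, target_max = 0.f;
  mask_fit_t fit;
  /* stretch the measured span onto the 8 EV node span ... */
  fit.contrast_comp = (target_max - target_min) / (mask_max_ev - mask_min_ev);
  /* ... then shift so that min lands on -8 EV and max on 0 EV */
  fit.exposure_comp = target_min - fit.contrast_comp * mask_min_ev;
  return fit;
}
```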

At least, unless darktable is able to run in your brain, no automagic will be able to know your intention

I mean, you guys are good, but perhaps not that good :wink:

Cheers!

What is the difference between moving the histogram and stretching its contrast to fit the scale, and moving and stretching (compressing) the scale to fit the histogram? Both ways, you want to achieve the same thing: place the control points where you want them to be.
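
Spelled out: call the mask value m (in EV), and the contrast and exposure compensation a and b. Stretching the histogram maps each pixel to m′ = a·m + b while the nodes stay at k = −8 … 0 EV; stretching the scale instead leaves m alone and moves node k to k′ = (k − b) / a. Since m′ = k exactly when m = k′, each node selects the same pixels either way, so every pixel ends up with the same adjustment.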

1 Like

Both ways, you want to achieve the same thing: place the control points where you want them to be.

Exactly, but if my suggestion works, it would imply several advantages:

  • Once you generate the kind of mask you need, there is no need to touch it.
  • Perfectly spread mask histogram.
  • Simplified interface, since all the mask post-processing controls would be unnecessary.
  • Potentially being able to remove the mask tab, and move the mask generation controls into the advanced tab.

I agree with you that it should be the same, but maybe the histogram presentation could be improved.

One point that always makes it harder for me is that when some data is past the white or black point of the mask, it bunches up at the edge and makes it very hard to see the shape of the histogram. A logarithmic scale may solve that. Better yet, show the full histogram and overlay the limits of the mask on it.

Edit: I may file a feature request about that

And why would you have to touch the mask after generation with the current controls?

That’s only useful if you can decide to limit the mask. If I only have to correct part of the image (say the lighter parts) I do not care whether the shadows are all bunched up at the left side. I’m not going to touch them…

See below

That, I doubt: there are a number of other controls in the masking tab, like what type of mask to generate, what estimator to use for pixel lightness, and some to steer mask generation.

Your proposal would replace/remove two out of 8 or 9 controls, some of which cannot be replaced by an automated procedure, like the luminance estimator you want to use. Same for most of the others: the setting to use is not purely technical but also depends on your interpretation of the image…

@guille2306: Perhaps having an orange bar at the edge where the histogram bunches up would be enough? Showing the full histogram with the mask overlaid on it would narrow the mask range, or hide some of the controls. Either has its own problems. Once again, I rarely need the mask to cover the full histogram; usually a region is enough (shadows, midtones or highlights). I’d really dislike any change that would force me to always use a mask that’s a perfect fit to the image histogram (it lowers precision if you only need to work on part of the tonal range).

1 Like