[Suggestion] Simplify Tone Equalizer

Note that this was an un-tuned mask, turned on simply to demonstrate the issue I had observed with turning colour calibration on/off. I think the hard transition is simply the mask being clipped (being too bright). That image does not actually need tone EQ at all (it’s uniformly lit, a flower in shade).


Okay, that didn’t take long, did it? I think I understand better now after just a quick play and a re-read.

This in particular answered my question: ‘adjust the mask so that it resolves the tones I’m interested in modifying’.

I think I was trying to do too much: thinking about the tools rather than looking at the image, trying to get the grey slider in the middle, the bar completely filled, and a histogram across the entire range.
Hence I had too much contrast compensation and therefore harsh transitions.
As soon as I set the mask purely for the highlighted areas (the tones I was interested in adjusting) and forgot about the rest, I got a better result.
It seems so simple now that I can’t see why I didn’t spot it before (slap head).
Thanks


You can also play with the upper three sliders:

  • transitions are too hard: try increasing filter diffusion or changing the smoothing diameter
  • you need better separation of edges: increase edges refinement

I’m a software developer, but unfortunately don’t have any C experience, though maybe I’ll dive in at some point.

I understand the math is complicated, but this reminds me of genetic algorithms from back in school. Here’s a basic algorithm that might work for this situation and shouldn’t be too difficult to write (a rough sketch in code follows the list):

  1. Start with a pool of solutions, maybe 5.
  2. Using random values, calculate those solutions.
  3. Store the best solution out of the pool.
  4. For the next iteration, make one candidate by taking the best solution and slightly modifying its values at random. For the other 4, take the 2 best solutions, swap some of their values, and slightly randomize them. Calculate those solutions, compare them to the stored best solution, and replace it if a better one appears.
  5. Repeat several times.
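
To make those steps concrete, here’s a minimal sketch in C (the language darktable is written in). The cost() function is a made-up stand-in; in the real module it would have to re-run the mask pipeline and score the result, which is the expensive part:

```c
/* Minimal sketch of steps 1-5 above. cost() is hypothetical:
 * pretend the ideal mask needs +1.5 EV exposure compensation
 * and -0.3 contrast compensation. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define POOL 5
#define GENERATIONS 20

typedef struct { double exposure; double contrast; } params_t;

static double cost(params_t p)
{
    return fabs(p.exposure - 1.5) + fabs(p.contrast + 0.3);
}

static double rnd(double lo, double hi)
{
    return lo + (hi - lo) * rand() / (double)RAND_MAX;
}

int main(void)
{
    srand(42);
    params_t pool[POOL], best = { 0 };
    double best_cost = INFINITY;

    /* steps 1-2: start with a pool of random solutions */
    for (int i = 0; i < POOL; i++) {
        pool[i].exposure = rnd(-4.0, 4.0);
        pool[i].contrast = rnd(-1.0, 1.0);
    }

    for (int g = 0; g < GENERATIONS; g++) {
        /* step 3: score the pool; remember the best overall and
         * the two best of this generation for crossover */
        int b0 = 0, b1 = 1;
        for (int i = 0; i < POOL; i++) {
            double c = cost(pool[i]);
            if (c < cost(pool[b0])) { b1 = b0; b0 = i; }
            else if (i != b0 && c < cost(pool[b1])) b1 = i;
            if (c < best_cost) { best_cost = c; best = pool[i]; }
        }
        /* step 4: one slightly mutated copy of the best solution... */
        params_t next[POOL];
        next[0] = best;
        next[0].exposure += rnd(-0.25, 0.25);
        next[0].contrast += rnd(-0.10, 0.10);
        /* ...and four crossovers of the two best, slightly mutated */
        for (int i = 1; i < POOL; i++) {
            params_t a = pool[b0], b = pool[b1];
            next[i].exposure = (rand() & 1 ? a : b).exposure + rnd(-0.25, 0.25);
            next[i].contrast = (rand() & 1 ? a : b).contrast + rnd(-0.10, 0.10);
        }
        /* step 5: repeat with the new pool */
        for (int i = 0; i < POOL; i++) pool[i] = next[i];
    }
    printf("best: exposure %+.2f EV, contrast %+.2f (cost %.3f)\n",
           best.exposure, best.contrast, best_cost);
    return 0;
}
```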

Usually this converges pretty quickly for simple problems like this. I’d be happy to collaborate on the coding, but it would take me a long time to start writing it from scratch without familiarity with C++ and the codebase.

Unfortunately, this does not necessarily lead to a useful mask. Defining a useful mask is not the same as spreading the histogram of the mask over the whole range.
An algorithm can’t know which areas the editor needs to differentiate via masking…


Each of these solutions will take 250 to 500 ms to compute, depending on CPU. And, actually, we have 2 variables to find: mask exposure compensation and mask contrast compensation. So it should be a parametric study of 5×5 = 25 solutions, and the first step alone takes roughly 6 to 13 s to compute.

Each following iteration adds another 1-2 s at the very least. Then wait for people to complain that opening a new picture induces a 30 s lag before the preview appears.

Meanwhile, setting the thing manually takes like 10 s if it’s before your first coffee. Sometimes, I wonder if people take me for an imbecile or for an idiot. Again, if there was an easy good-enough solution, it would be coded already.


Hardly. They may take you for a wizard who can fix everything with a flick of a ~~wand~~ compiler :smiley:


Hi all,

A few thoughts regarding the Tone EQ.

It is

  • A chroma- and local-contrast-preserving version of the tone curve.
  • Inspired, it seems, by the Ansel Adams zone system; the nodes are placed 1 EV apart.

Pain points

  • The mask processing that lets it preserve local contrast also diminishes the mask’s contrast, so fewer nodes actually control the final result.
  • To deal with this there are several tools: mask post-processing, mask quantisation, mask exposure compensation, and mask contrast compensation (sketched in code after this list). They help us manually spread the mask’s histogram across the various nodes, but are a bit fiddly.
  • Ideally this should be automatic, but technically it’s very hard to achieve.
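
For reference, here is a minimal sketch of what the last two of those tools do to a mask value, assuming the mask is expressed in EV (log2 of luminance). The pivot at -4 EV is my assumption; the actual module may use a different anchor:

```c
#include <stdio.h>

/* Exposure compensation recenters the mask histogram; contrast
 * compensation stretches it around a pivot (assumed here to be
 * the middle-grey node at -4 EV). */
static float compensate(float mask_ev, float exposure_ev, float contrast)
{
    const float pivot_ev = -4.0f;             /* assumed pivot        */
    return (mask_ev - pivot_ev) * contrast    /* stretch (contrast)   */
           + pivot_ev + exposure_ev;          /* recenter (exposure)  */
}

int main(void)
{
    /* a mask spanning only 1.5 EV around -3 EV... */
    for (float ev = -3.75f; ev <= -2.25f; ev += 0.75f)
        /* ...stretched 4x and shifted down 1 EV now covers 6 EV */
        printf("%+.2f EV -> %+.2f EV\n", ev, compensate(ev, -1.0f, 4.0f));
    return 0;
}
```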

Question

Is the 1 EV separation of the nodes necessary for the internal workings of the module? Let me explain: after fine-tuning our mask (luminance estimator, preserve details, filter diffusion, smoothing diameter, and edges refinement/feathering), let’s say we end up with a histogram with a 1.5 EV spread. Instead of trying to spread that histogram across the nodes through exposure and contrast adjustments, would it be crazy to divide that 1.5 EV into however many equal portions we need to match the number of nodes? That way the histogram would always fit the whole node spread.

The module would then in theory keep working as usual: for a given pixel, it would look it up on the mask, see which zone it falls in, and apply the corresponding exposure adjustment (a rough sketch follows).
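
Something like this, with hypothetical names and a nearest-node simplification (the real module interpolates smoothly between nodes rather than snapping to one):

```c
/* Hypothetical sketch of the proposal: instead of stretching the
 * mask to fit the fixed 9 EV node grid, divide the mask's own
 * range (here 1.5 EV) into NUM_NODES equal zones and look the
 * pixel up there. Names and structure are illustrative only. */
#include <stdio.h>

#define NUM_NODES 9

int main(void)
{
    const float mask_min_ev = -3.5f, mask_max_ev = -2.0f;  /* 1.5 EV spread */
    float node_gain_ev[NUM_NODES] = { 0 };  /* per-node exposure adjustments */
    node_gain_ev[0] = +1.0f;                /* e.g. lift the darkest zone    */

    float pixel_ev = -3.3f;                 /* mask value for some pixel     */

    /* normalize the pixel into the mask's own range... */
    float t = (pixel_ev - mask_min_ev) / (mask_max_ev - mask_min_ev);
    if (t < 0.f) t = 0.f;
    if (t > 1.f) t = 1.f;
    /* ...and map it onto a node index */
    int node = (int)(t * (NUM_NODES - 1) + 0.5f);

    printf("pixel at %.2f EV falls in zone %d, gain %+.2f EV\n",
           pixel_ev, node, node_gain_ev[node]);
    return 0;
}
```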

Is this feasible, or completely wrong?

Preserving local contrast demands similar changes within a local range; that’s exactly why the mask contrast is reduced.

You need to spread the mask histogram so it fits your needs: keep things you want to tweak equally together, and separate things you want to tweak differently.
Since that’s not trivial, there are a couple of tools to do so.

At least, unless darktable is able to run in your brain, no automagic will be able to know your intention.

And by the way: if you want to make very detailed tweaks at the lower and upper ranges of the image tones, you’d better use two instances with different masking, instead of fiddling around with a curve that has 16 or more nodes. That just simulates precision…


Preserving local contrast demands similar changes within a local range; that’s exactly why the mask contrast is reduced.

Yes, that’s clear.

You need to spread the mask histogram so it fits your needs: keep things you want to tweak equally together, and separate things you want to tweak differently.

This is the way it works ATM. It does what it should, and after you get the hang of it, it’s not a big deal. However, leaving all technical difficulties aside for a moment, I think we can all agree that it would be much better for the user if it “just worked”, without having to manually spread the mask’s histogram.

Since it seems to be very difficult to automatically and reliably expand this histogram across the 9 EV range, my question was whether this dynamic range expansion is needed at all for the inner workings of the module.

Instead of this:
[screenshot: Screenshot_20211127_104119]

This:
[screenshot: toneEQ]

Obviously the UI wouldn’t change from what we currently have; I am just trying to explain what I mean. In my screenshots, the mask spans 2.5 EV. The question is: can we divide those 2.5 EV equally into 9 nodes and have the module work properly? If so, in theory, the mask exposure and contrast compensation would not be necessary. Even if this could not be completely automatic, perhaps we could have some kind of black and white point sliders or an eyedropper, like in the filmic module: not to change the mask’s exposure, but to tell the module what the mask’s brightest and darkest levels are.

At least, unless darktable is able to run in your brain, no automagic will be able to know your intention.

I mean, you guys are good, but perhaps not that good :wink:

Cheers!

What is the difference between moving the histogram and stretching the contrast (to fit the scale) and moving and stretching (compressing) the scale to fit the histogram? Both ways, you want to achieve the same thing: place the control points where you want them to be.
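
To spell that out (my notation, purely for illustration): both are affine remappings of the same axis. Moving and stretching the histogram sends a mask value m to c·(m − p) + p + e so that it lands on the fixed node grid; moving and compressing the scale instead sends node i to m_min + (i/8)·(m_max − m_min). For matching settings one map is essentially the inverse of the other, so the control points end up on the same tones either way.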


Both ways, you want to achieve the same thing: place the control points where you want them to be.

Exactly, but if my suggestion works, it would imply several advantages:

  • Once you generate the kind of mask you need, there is no need to touch it.
  • Perfectly spread mask histogram.
  • Simplified interface, since all the mask post-processing controls would be unnecessary.
  • Potentially being able to remove the mask tab, and move the mask generation controls into the advanced tab.

I agree with you that it should be the same, but maybe the histogram presentation could be improved.

One point that always makes it harder for me is that when some data is past the white or black points of the mask, it bunches up at the edge and makes it very hard to see the shape of the histogram. A logarithmic scale might solve that. Better yet, show the full histogram and overlay the limits of the mask on it.

Edit: I may file a feature request about that.

And why would you have to touch the mask after generation with the current controls?

That’s only useful if you can decide to limit the mask. If I only have to correct part of the image (say the lighter parts) I do not care whether the shadows are all bunched up at the left side. I’m not going to touch them…

See below

That, I doubt: there are a number of other controls in the masking tab, like what type of mask to generate, what estimator to use for pixel lightness, and some to steer mask generation.

Your proposal would replace/remove two out of 8 or 9 controls, some of which cannot be replaced by an automated procedure, like the luminance estimator you want to use. Same for most of the others: the setting to use is not purely technical but also depends on your interpretation of the image…

@guille2306: Perhaps having an orange bar at the edge where the histogram bunches up would be enough? Showing the full histogram with the mask overlaid on it would either narrow the displayed mask range or hide some of the controls, and either has its own problems. Once again, I rarely need the mask to cover the full histogram; usually a region is enough (shadows, midtones or highlights). I’d really dislike any change that would force me to always use a mask that’s a perfect fit to the image histogram (that lowers precision if you only need to work on part of the tonal range).


Yes, that would be enough. The main problem for me is not knowing what’s outside the mask, but the fact that the points outside the mask end up generating a big spike at the edge that messes up the vertical scale of the histogram. Actually, the UI is a bit misleading here, because all points to the right appear bunched on top of the right-most point of the mask, so one would expect that point to affect all of them (which of course is not how it works).

A “quick solution” (says somebody with zero knowledge of the code :wink:):

  • truncate the histogram at the edges of the mask; do not bunch out-of-range data into the first and last bins
  • (optionally) scale the histogram vertically according to only what’s inside the mask range
  • mark the existence of “many points outside the range” with the orange bar
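
A minimal sketch of that binning logic (hypothetical, not the actual darktable histogram code):

```c
/* Hypothetical sketch of the proposed binning: mask values outside
 * the black/white points are counted separately (to drive an
 * "orange bar" indicator) instead of being piled into the first
 * and last bins, so the vertical scale only depends on in-range data. */
#include <stdio.h>

#define BINS 64

int main(void)
{
    const float black_ev = -6.0f, white_ev = -1.0f;   /* mask range */
    int bins[BINS] = { 0 };
    int below = 0, above = 0;                         /* out-of-range counts */

    const float samples[] = { -7.5f, -6.2f, -5.0f, -3.1f, -1.4f, -0.2f };
    const int n = sizeof(samples) / sizeof(samples[0]);

    for (int i = 0; i < n; i++) {
        float ev = samples[i];
        if (ev < black_ev)      below++;              /* orange bar, left  */
        else if (ev > white_ev) above++;              /* orange bar, right */
        else
            bins[(int)((ev - black_ev) / (white_ev - black_ev) * (BINS - 1))]++;
    }
    printf("in range: %d, below: %d, above: %d\n", n - below - above, below, above);
    return 0;
}
```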

But that’s not the darktable way: simplifying stuff at the cost of control.
darktable is driven by those who want the control they need and that other tools don’t provide.
So simplifying stuff is OK, but not at the cost of control. And the masking tab gives a whole bunch of control.
Crucially, the mask in the Tone Equalizer is an arbitrary element. There is no such thing as a general-purpose mask.

I think I didn’t quite explain myself well.

And why would you have to touch the mask after generation with the current controls?

If my solution is viable, and the histogram is always perfectly spread out, these controls are superfluous, since their purpose is to spread the mask across the 9 EV nodes.

That’s only useful if you can decide to limit the mask. If I only have to correct part of the image (say the lighter parts) I do not care whether the shadows are all bunched up at the left side. I’m not going to touch them…

You are right, that’s what the white/black point sliders I was proposing would do: you could choose to exclude certain parts of the mask to work on highlights, shadows or midtones only.

Think of them as a way to zoom into the mask’s histogram, without changing its exposure or its contrast.

That, I doubt: there are a number of other controls in the masking tab, like what type of mask to generate, what estimator to use for pixel lightness, and some to steer mask generation.

Your proposal would replace/remove two out of 8 or 9 controls, some of which cannot be replaced by an automated procedure, like the luminance estimator you want to use. Same for most of the others: the setting to use is not purely technical but also depends on your interpretation of the image…

Once you remove the “unnecessary” sliders (this is all hypothetical), the remaining controls could be moved to the other tabs, no problem, and the masking tab could be removed.

But that’s not the darktable way: simplifying stuff at the cost of control.
darktable is driven by those who want the control they need and that other tools don’t provide.
So simplifying stuff is OK, but not at the cost of control. And the masking tab gives a whole bunch of control.

I’m not advocating for removing any features. If my idea is doable, things would work the same, but once the mask is created to suit your needs, it would “just work”, without the need for compensating exposure or contrast, so fewer tabs and sliders. In fact, it would be easier to use, since you could watch the mask’s histogram while you adjust the controls.

What exactly do you think is the function of the exposure and contrast compensation sliders?
Your “black and white point sliders” would replace those two, and only those two. Just using another terminology…

But exactly this “mask is created to suit my needs” is done with the masking tab. No algorithm in the world can create a proper mask to match my needs in a specific situation for a specific image…
Using a luma representation of the image as a mask is just one of several use cases…


According to the manual, they recenter and expand the mask’s histogram, respectively (by changing its exposure and contrast). If we could zoom into said histogram, so that our 9 nodes affect the area we want to affect (or click on an eyedropper, like in filmic), there would be no need to change the mask’s exposure or contrast, and things would be pretty automatic. If you wanted to zoom in further and work on just a portion of the histogram, you could do it with the black/white point sliders.

The advantage: exposure and contrast compensation seem to be difficult to set automatically so that the histogram spans the whole node range. My solution would in theory circumvent that problem, while retaining the ability to work on a specific range if you so desire.