A tone equalizer in darktable?

Looks really good to me! Very nice work again Aurélien!

So, I managed to get the inset histogram in log space inside the equalizer view:

The sliders are still available under the “simple” tab, for those who prefer a “Lightroom” feel.

Here is the “before”:


Now, working on @MStraeten’s idea of interactive editing directly on the picture.

Picture by Anna Simon.


@anon41087856 Thank you very much for working on this feature, which I only just noticed, as I am new to all this editing and photography in general.

This is actually pretty much exactly what I have been trying to find in darktable. I recently switched to 2.7, mainly for the UI updates and the culling feature, along with the RGB curves.

Can’t wait to try this out.

I probably use this feature differently than most. When I was trying out other software, I used these sliders to pre-flatten images: I pulled the shadows and highlights closer to the midtones, then set the whites to just below clipping and pulled the blacks down to taste. I find it gives a lot of tonal depth to the scene.

I tried using tone curves and shadows and highlights in dt 2.6.2, only for them to muck up the image somewhat.

Until this is available, I am going to try using filmic to achieve my “flattening” and then curves to add the contrast pop.

Again, thank you for the hard work. I can program, but I’m badly out of practice; maybe one day I can contribute.

I do have one question, though (not trying to make more work for you): would it be possible to add some sort of mask toggle to the whites and blacks sliders? In other software with this feature, you can often hold down Alt while using the slider. I’m not trying to turn dt into other software; there is a reason I chose dt over everything else.

For the whites, holding down the shortcut shows a black mask that reveals where the individual RGB channels start to clip.

With the blacks, you get a white mask that likewise shows where the individual RGB channels clip.

It is not strictly necessary per se, as you could technically just use the clipping overlay in dt while adjusting the sliders. But these special masks make it quite easy to fine-tune the whites and blacks without the distraction of the whole image, and they have the advantage of showing which channels are clipping.
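For what it’s worth, the overlay I have in mind can be approximated in a few lines; this is just an illustrative sketch with numpy (the thresholds and the function name are made up, not any actual darktable API):

```python
import numpy as np

# Illustrative sketch only: per-channel clipping masks for a float RGB
# image, with made-up thresholds (real raw data would need the sensor's
# actual white/black levels).
def clipping_overlay(rgb, white=1.0, black=0.0):
    highlights = rgb >= white  # True where a channel blows out
    shadows = rgb <= black     # True where a channel is crushed
    return highlights, shadows

# One pixel whose red channel clips while green and blue are fine:
hi, lo = clipping_overlay(np.array([[[1.0, 0.5, 0.2]]]))
# hi[0, 0] -> [True, False, False]
```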

This change may be way too much of a code requirement, though, so you are more than welcome to ignore it.

Again thanks for all the hard work you have been doing.


Please correct me if I’m wrong. Also, I’m not sure if this is what you want.

You could do that with functionality already available in DT. Just use one tone equalizer instance for the black slider and create a mask for it, then create a second tone equalizer instance just for the white slider and mask it the way you want.

Not exactly; it is not quite that kind of mask. “Mask” is probably the wrong term: it is more of a channel-clipping overlay that shows the individual channels clipping. Still probably not needed, as I believe a similar evaluation can be done with the clipping toggle.


I just want to thank you for your contribution.
It has been almost a year since I suggested the feature (https://discuss.pixls.us/t/do-you-know-if-darktable-has-a-lightness-control-tool/7223), arguing for the change of paradigm it would provide. Most people did not see it; actually, the reaction was so bad that I did not expect anyone from that thread to implement it. So I am really happy that you decided to make it, and in such a short period of time.


Here are a couple of screenshots of a rough prototype of the in-picture editing cursor. The inner circle is the input exposure of the module, converted to an RGB shade; the outer circle is the output exposure of the module. Zebras show (as in some DSLR previews) when the pixel is overexposed. The bar on the left is the amount of exposure compensation. The label on the right is the input exposure, similar to the x axis of the equalizer view.

So, very soon, all you will have to do is hover over the region you want to push/pull and roll your mouse wheel.

There are still a lot of rough edges though. What do you think?

That’s a completely different scope from what I’m doing here, if I understand correctly. Although a mask showing the different exposure channels is planned.


So this tool will replace tone curve’s local adjustments, that is, tone curve + parametric masking. Am I correct?
If so, I’m confused as to how would parametric masks work with this new tool.
What I mean is that the tone equalizer seems to already apply parametric masks under the hood, right? So why is the parametric mask still shown below the tool?
Also, if the tool is already applying a parametric mask, shouldn’t we be able to profit from all mask features, like feathering, blur, opacity and a display mask switch?

Yes, on the lightness only, but not on the other channels. And it does far more than just a plain mask to smooth the transitions.


Wow! Great news! Thank you so much for your work. I am very excited about this functionality!

Why do you need these input/output circles? I mean, why do the extra programming work of drawing information into circles when the results are immediately visible on the image?

In Viveza the point is very simple:

[Viveza control point screenshot]

You can either increase or decrease the given value. What happens with the corresponding action can be seen immediately on the image.

The answer is not simple. There is no masking inside the module; it’s just a bank of band-pass filters isolating exposure bands, exactly as your hi-fi amplifier’s equalizer isolates frequencies and lets you set a separate gain for each of them. It’s a simple exposure compensation, but with a correction that varies depending on the input exposure of the pixel. The actual correction is smooth over the exposure range, constructed by interpolating the gains of the different bands (it’s the curve you see in the equalizer view), so there are no discontinuities or gaps between bands.

TL;DR: a mask is a spatial 2D subset of the picture, triggering lots of boundary problems and transition issues; this equalizer instead treats the picture as a 1D series of exposures and applies a gain on them.

Then, if you want, you can use it in combination with the regular masking options to create 2D masks, isolating objects, foreground, luminance ranges, etc. With the guided filters, it should work way better than the current shadows/highlights module (and display none of its saturation artifacts).
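If it helps, here is a toy numerical sketch of that idea; the band centers, Gaussian widths and the normalized interpolation are assumptions chosen for illustration, not the module’s actual code:

```python
import numpy as np

# Toy sketch of an exposure equalizer: overlapping Gaussian "bands"
# over log2 exposure, each with its own gain, interpolated into one
# smooth correction curve. All constants here are assumed, not darktable's.
centers = np.arange(-8.0, 1.0)      # band centers, -8 EV ... 0 EV
gains = np.zeros_like(centers)      # per-band gains in EV, start neutral
gains[2] = 1.0                      # e.g. push the -6 EV band by +1 EV
sigma = 1.0                         # band width in EV (assumed)

def correction_ev(exposure_ev):
    """Smooth exposure correction (EV) for a pixel at `exposure_ev`."""
    w = np.exp(-0.5 * ((exposure_ev - centers) / sigma) ** 2)
    return np.sum(gains * w) / np.sum(w)  # normalized interpolation

def apply_gain(pixel_rgb, exposure_ev):
    # The same linear multiplier on all 3 channels preserves chromaticity.
    return pixel_rgb * 2.0 ** correction_ev(exposure_ev)
```

A pixel sitting around -6 EV gets most of the boost, a pixel at 0 EV is left nearly untouched, and everything in between transitions smoothly, with no hard band boundaries.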

Actually, no. It works in RGB on the three channels, but since it applies the same linear correction upon them, it retains the original chromaticity.

Because you don’t know what -6 EV looks like, I find it helpful to show both the exposure value and the corresponding shade of grey as a preview. Many other things happen later in dt’s pixelpipe, so the actual image does not represent the raw output of the module.
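As a rough sketch of what that grey shade is (the 2.2 gamma here is a simplifying assumption of mine, not dt’s actual display transform):

```python
# Rough sketch: turn an EV value (relative to white) into a grey shade.
# The 2.2 gamma is a simplifying assumption, not dt's display transform.
def ev_to_shade(ev, gamma=2.2):
    linear = 2.0 ** ev               # -6 EV is 1/64th of white, linearly
    return linear ** (1.0 / gamma)   # crude display encoding

# ev_to_shade(0.0) == 1.0 (white); ev_to_shade(-6.0) ~= 0.15 (dark grey)
```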


That’s what I said, or meant to say: from a user’s point of view, since the chromaticity is not changed, it “works” only on lightness. Or I missed your point.

Ah, fair enough. Lightness usually refers to one channel of HSL space, so that’s what I understood.

I added a guided filter to the module, instead of the planned Laplacian pyramid, in order to preserve local contrast. It’s faster, simpler, and easier to control too.
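For the curious, the core of a guided filter is quite compact. Here is a grey-scale sketch with numpy following the standard formulation; the radius and epsilon loosely correspond to the module’s diameter and feathering controls (my interpretation, not the actual implementation):

```python
import numpy as np

def box(x, r):
    """Mean filter of radius r via summed-area tables (edge-padded)."""
    pad = np.pad(x, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))           # zero row/col for the sums
    n = 2 * r + 1
    H, W = x.shape
    return (c[n:n+H, n:n+W] - c[:H, n:n+W]
            - c[n:n+H, :W] + c[:H, :W]) / n**2

def guided_filter(I, p, r=4, eps=1e-3):
    """Edge-aware smoothing of p, guided by I: flat areas get blurred,
    strong edges of the guide are preserved."""
    mI, mp = box(I, r), box(p, r)
    var = box(I * I, r) - mI * mI             # local variance of guide
    a = (box(I * p, r) - mI * mp) / (var + eps)  # eps ~ "feathering"
    b = mp - a * mI
    return box(a, r) * I + box(b, r)          # locally linear in I
```

Filtering an image with itself as guide gives the piece-wise smooth exposure decomposition: surfaces flatten out while the edges between them survive.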

So let’s see a well-exposed image (by Anna Simon, used for her workshop at LGM2019) with filmic set up to get no blown highlights and bright enough midtones:
[screenshot]

The picture arguably lacks some crunch because the subject doesn’t pop out enough from the background. So, let’s enable the tone equalizer in its naïve mode to perform some dodging and burning:
[screenshot]

The contrast has improved on the subject, but also on the background, which is quite disturbing. We would like to treat the subject as one single exposure blob. Showing the luminance mask used to compute the exposure compensation (a new feature, by the way), here is what we get:
[screenshot]

To isolate the subject from the background, we need an edge-aware surface blur that decomposes the image into piece-wise smooth exposure areas. That’s where the guided filter comes in handy:
[screenshot]

And here is the result:
[screenshot]

Working from that starting point, we can fine-tune the masking:
[screenshots]

To check the robustness against halos (which are the bane of the shadows/highlights module), let’s use a much more contrasted picture:
[screenshot]

Let’s enable the naïve tone equalizer:
[screenshot]

A lot of local contrast is lost in the sky. Using the guided filter, we can recover it:
[screenshot]

Comparison with the current shadows/highlights output (with bilateral filter):
[screenshot]

A bit of color-grading with color balance, and you are good to go:
[screenshot]

Notice the relatively short stack of modules:
[screenshot]
Life is too short to lose it on computers. Shoot longer, post-process faster, enjoy more.


27 posts were split to a new topic: Exposure Fusion and Intel Neo drivers

Interactivity is coded, folks! Hover over a region of the picture with the cursor, scroll up to increase the exposure in that area or down to decrease it, and enjoy :slight_smile:
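Conceptually, the interaction boils down to something like this little sketch (the node positions, step size and names are all made up for illustration, not the actual implementation):

```python
import numpy as np

# Made-up sketch of the scroll interaction: read the log2 exposure
# under the cursor, find the nearest equalizer node, nudge its gain.
nodes_ev = np.arange(-8.0, 1.0)       # node positions, -8 EV ... 0 EV
node_gains = np.zeros_like(nodes_ev)  # gains in EV, start neutral

def on_scroll(pixel_luminance, direction, step=0.25):
    """direction: +1 (scroll up, brighten) or -1 (scroll down, darken)."""
    ev = np.log2(max(pixel_luminance, 2.0 ** -16))  # clamp, avoid log2(0)
    nearest = int(np.argmin(np.abs(nodes_ev - ev)))
    node_gains[nearest] += direction * step
    return nearest

# Hovering a shadow pixel (~ -6 EV) and scrolling up twice
# pushes the -6 EV node by +0.5 EV:
idx = on_scroll(2.0 ** -6, +1)
on_scroll(2.0 ** -6, +1)
```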


Thanks for this cool feature, works fine. Now I have to invest some time in trying to get the best out of the masking/guided filter feature …

Don’t hesitate to display the mask (last icon in the masking tab) in order to understand what it is doing. The general rule is: the sharper the mask, the more you might destroy local contrast. So the trade-off is to find the parameters that make the mask piece-wise smooth while still following edges.

For example, I find that good smoothing diameters lie between 6 and 25 % of the largest image dimension, depending on the size of the features to mask. Small feathering factors (1-2) make the filter behave like a simple box blur. Increase it, and the filter follows edges more closely, but you might lose the smoothness inside surfaces and make details appear in the mask (and thus disappear after the exposure equalization). To alleviate that effect, you can increase the mask quantization to 0.5 or 1 EV, but that’s not always suitable if you have blur/bokeh in your pictures. The mask iterations help refine the mask contours, but they cost some performance and remove contrast in the mask (so you need to boost it in the mask pre-processing).
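The quantization trick in particular is easy to picture; a tiny sketch (the function name and values are just for illustration):

```python
import numpy as np

# Made-up sketch of mask quantization: snapping the log2-exposure mask
# to discrete EV steps flattens residual detail inside surfaces.
def quantize_mask(mask_ev, step=0.5):
    """Round an exposure mask to multiples of `step` EV (step <= 0: off)."""
    if step <= 0:
        return mask_ev
    return np.round(mask_ev / step) * step

m = np.array([-6.2, -5.9, -3.1, -0.2])
# quantize_mask(m, 0.5) -> [-6.0, -6.0, -3.0, -0.0]
```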

I would like to test the tone equalizer, but it doesn’t compile on Windows with MinGW64. I get a lot of errors:
error: the call requires ‘ifunc’, which is not supported by this target
Any clue?

I think it’s the multi-arch build that fails. Basically, the vectorized functions are compiled once for each SSE/AVX generation, then the program is supposed to select the variant that best suits the CPU in use. Not sure whether that’s supported on Windows.