Improving clipping threshold selection UI

I’ve been toying with the idea of contributing to darktable lately, and wanted to toss some ideas out there for comments. One thing that is particularly annoying for me is the poor usability when I need to tweak highlight reconstruction parameters.

I think the problem is well discussed in Should/could darktable warn when invalid pixels pass the pipeline?. There’s a lot of discussion there about reconstruction methods, but for me, selecting a proper threshold is what takes the most time.

Let’s look at the relevant parts of the pipeline.

“raw black/white point” module


This module knows the actual pixel value at which each sensor channel saturates. That value comes from the raw metadata (via libraw) and I never touch it, but I guess some people might need to if the metadata were inaccurate.

Proposal 1: If inaccurate metadata is a common case, we could add a small image button that lets you select an overexposed region of the image and pick the max(R, G, B) as the white point.
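To make Proposal 1 concrete, here’s a rough sketch of what the picker could compute (Python purely for illustration; darktable itself is C, and the function name is made up):

```python
def estimate_white_point(region):
    """Estimate the raw white point from a user-selected region.

    `region` is an iterable of (R, G, B) raw channel values taken from
    an area known to be overexposed; the saturation level is simply the
    largest value any channel reaches there, i.e. max(R, G, B) over the
    whole region.
    """
    return max(max(px) for px in region)
```

In other words, the picker wouldn’t need any metadata at all, just one region that the user knows is blown out.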

“highlight reconstruction” module


A lot of other modules have sliders in EV (exposure, filmic highlight reconstruction), and the fact that this one module requires me to do calculations is somewhat annoying. Yes, I can do base-2 logs in my head, but the inconsistency of units is not great. It’s also lacking a way to automatically determine the threshold from a region of the image that you know has at least one overexposed channel.

Proposal 2: Make it possible to toggle between linear threshold (current) and a log unit (EV, like other modules). Maybe make EV the default.
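The conversion itself is trivial, which is why the toggle should be cheap to implement. A sketch (Python for illustration; assuming the linear threshold is normalised so that 1.0 is sensor saturation, i.e. 0 EV):

```python
import math

def linear_to_ev(threshold):
    # With 1.0 = sensor saturation, the EV value is just the
    # base-2 log of the linear clipping threshold.
    return math.log2(threshold)

def ev_to_linear(ev):
    # Inverse mapping, for displaying/storing the slider value.
    return 2.0 ** ev
```

So the current default of 1.0 would show as 0 EV, and e.g. a linear threshold of 0.5 as −1 EV.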

Proposal 3: Add a small image button that lets you select a region of the image where one channel is overexposed and not the others, and pick the maximum non-saturated channel as the clipping threshold.
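Sketched out, the Proposal 3 picker could look something like this (again Python for illustration only, names made up; `white` is the normalised white point and `eps` a tolerance for deciding that a channel is saturated):

```python
def estimate_clip_threshold(region, white=1.0, eps=1e-3):
    """From a region where at least one channel clips and at least one
    does not, take the largest non-saturated channel value as the
    clipping threshold (relative to the white point).

    `region` is an iterable of (R, G, B) values normalised so that
    `white` is sensor saturation.
    """
    unsaturated = [v for px in region for v in px if v < white - eps]
    if not unsaturated:
        raise ValueError("all channels in the region are saturated")
    return max(unsaturated) / white
```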

“filmic” module


Proposal 3bis: Add the same kind of picker here: select a region of the image where one channel is overexposed and the others aren’t, and use the maximum non-saturated channel to set the threshold in this module too.

some closing thoughts

The proposals above should be fairly simple to implement, so I’m mainly looking for comments from other users/developers on whether they’d be useful to add to darktable. What do you think? Are there other proposals being discussed elsewhere that I may not be aware of?

One last thing: RGB highlight reconstruction is useful outside of filmic, and it’s not even enabled by default. I’m wondering how feasible it would be to move it to a separate module (“rgb highlight reconstruction”?) that can be used alone in linear space. This probably depends on how tightly it is integrated into the filmic tonemapping, which I haven’t investigated yet.

I don’t pretend to understand the technical validity of your proposals, but the idea of an eyedropper to set the required thresholds for highlight reconstruction sounds appealing to me as a humble user. I only recently got my first good result using laplacians for highlight reconstruction, because I lowered the threshold. On other occasions I have had to adjust the threshold in the highlight reconstruction module when using inpaint opposed, and I have also occasionally had to adjust the white point setting when DT has given the wrong value by default.

Would you still remember which shot you had to adjust the white point setting for? (and can you share it?)

That’s an interesting data point; I have a fairly old and well-supported Canon DSLR which doesn’t have any issues, but I suspect images from less reputable manufacturers or with buggy firmware exist.

It was actually a PlayRaw sample and not one of my own images.

I don’t think you’ve understood filmic’s rgb highlight reconstruction (and the name is not good, but I digress): it isn’t reconstruction at all, but rather tries to smooth the transition. It uses filmic’s white point rather than the one from raw black/white point.

Right. We might want to change that naming. What would you propose? Or just open a PR with an RFC?

I am really surprised about this. We had some problems, especially with Sony cameras, with wrong white-point EXIF data, but those are mostly resolved by now. So if you have a camera with such problems you can either

  1. let devs know about it via github so we can try to fix it in rawspeed or libraw
  2. do some specific presets

What we don’t take into account here is where in the signal curve the non-linearity starts. Those data might help, but except for the DNG spec they are mostly hidden in manufacturer-specific EXIF data. Plus, that doesn’t tell us what the maths should be.

You are aware of the “clip visualizing button” in HLR? If you suspect that your white point might be wrong, it’s very easy to see it there. And changing the clip value there is fine in the vast majority of cases; only very rarely do you have to set the white point.

If you are interested in this field and would like to start developing in dt, it might be nice to compute statistics and visualize the power spectrum in the rawprepare module. But I suspect it won’t help much for developing images, and it might be very CPU-hungry to do in real time.

Highlight roll off.


Sounds good to me.


Selecting the raw black and white points is pretty simple (if you have to touch that module; and once you have proper values, they are put in a preset).

Otoh, picking the best threshold value in filmic can take some time, also due to interactions with other values. But as the “best” result is also a matter of taste, that will be very hard to reduce to a simple button…

EDIT: as you can see, I’m not at all convinced adding a picker to either “raw black and white point” or “filmic threshold” will solve any problem. But as some seem to think that any button in the interface has to be used, such buttons could actually cause more problems…

Is it really about roll-off? Roll-off suggests a curve-like, pixelwise operation, doesn’t it?
How about simply highlights? (highlight <anything> is probably too long for the GUI.)

An additional problem with the current name, reconstruct, is that it’s a verb, while all other tabs have noun labels.
While we are at it, is the section header highlights clipping really appropriate? It does not clip anything (abruptly).


it isn’t reconstruction at all

It’s possibly just a matter of naming things. I reused the terminology from the manual about the “reconstruct” tab:

“This tab provides controls that blend transitions between unclipped and clipped areas within an image and can also help to reconstruct colors from adjacent pixels.”

So my understanding is that it does both a global “rolloff” (which runs independently on each pixel) and reconstruction (which uses adjacent pixels to try and estimate what the scene radiance looked like on the clipped pixels). Admittedly, I haven’t yet dived into the code to understand exactly how it’s implemented.
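To illustrate the distinction I’m drawing (this is a toy curve I made up, not darktable’s actual maths): a rolloff treats each pixel independently, mapping values above a threshold smoothly towards 1.0, whereas reconstruction would have to look at neighbouring pixels.

```python
import math

def soft_rolloff(x, threshold):
    """Toy pixelwise highlight rolloff (NOT darktable's formula):
    identity below the threshold, smooth asymptotic compression
    towards 1.0 above it. Assumes threshold < 1.0. No neighbouring
    pixels are consulted, which is what distinguishes a rolloff
    from reconstruction."""
    if x <= threshold:
        return x
    span = 1.0 - threshold
    return threshold + span * (1.0 - math.exp(-(x - threshold) / span))
```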

I, myself, don’t. So if that’s a fixed thing for most that’s great.

That’s my main issue. It’s difficult to know that beforehand for a given sensor because that depends entirely on the relationship between the sensor’s spectral sensitivity and the spectrum of the blown highlight I’m trying to correct.

I am. And yes, it’s useful as a debug tool, but ideally it shouldn’t be needed for 90% of the cases so that I can focus on making the image look as I want…

Yes! Another aspect is that it’s very difficult to integrate highlight correction in a preset right now, because of all the interactions. I’m not sure exactly how I’d solve that right now, so my idea of adding a button is perhaps just a band aid…

As long as the name doesn’t suggest that anything is reconstructed where that isn’t the case, I’m fine. At the end of the day, those names are labels to indicate a function. Of course they should have some relation to what the function does, but other than that the exact name isn’t too important.

So something like “highlights” for the tab, and “highlight blending” and “enable highlight blending” for the tab title and tick box, respectively, would work for me.


I don’t think it does. Highlight Rolloff is just the transition from the surround scene into blown out highlights. Mostly talked about with film photography and videography, but I don’t think there is an implied curve there. I view the term as qualitative, much the same as bokeh.


That is not really a problem. HLR inpaint opposed and the rest falls into place.
I am sorry if that sounds like a brutal answer, but here it goes anyway:

By exposing correctly.

I have just gone through and found settings and readings for my main cameras in relation to my preferred settings in darktable. If I stick to my derived rules, there is nothing to do. Highlight rolloff and dynamic range are now better than anything I have ever used before, and that includes tons of film back in the day. Spoiler alert: film is not better, it’s just way more foolproof, because the whole technology was designed as an integrated workflow.

Old digital stuff looks awesome or meh, depending on how well it matches the current ideal workflow. If you have been a very strict user of ETTR you should be fine.

I often encounter shots which are ETTR except for some light spots, and exposing for those highlights would just drown the rest of the shot in noise. What you say makes me think that perhaps I’ve overlooked something. I’ll try to collect examples; maybe that will show me that in fact there’s no need to modify darktable to find a correct preset for these kinds of shots.

Well, photography will be a compromise until we get those 50 stops dynamic range 10-4000mm f1.0 lenses with 500 megapixels in a pocketable camera. And even then, someone will find a use-case that is outside of that system. :upside_down_face:

But one thing is for sure … if you absolutely need those lights, then you gotta capture them. Because there is one thing in digital that is absolute: overexposure has no data to work on. If it’s gone, it’s gone. You can kind of re-invent some of it with software, but that’s it.

On the other hand, if the rolloff is nice, who cares.

And if the shadows block because you need the lights, well, so be it.
Or you need to photograph brackets.

In the end it is a management of expectations and possibilities.


Couldn’t say it better :slight_smile: All reconstruction in HLR is a sort of “best guess”. And we are really good in dt atm: I just go with the default for 99% of the images with any problems at all. A few are more tricky; in my experience, those with difficult (non-natural) lighting or large areas with smooth gradients into blown-out skies. That’s what segmentation is for :slight_smile:


But then we also need screens with those 50 stops of dynamic range… (and what about prints? I don’t see paper getting much beyond its current dynamic range)
