Should/could darktable warn when invalid pixels pass the pipeline?

Hi,

By now I believe I have a good understanding of both the origin of “magenta highlights” and the different ways to avoid and treat them. Thanks to the great filmic rgb module I am able to edit sunset shots so that they look much better than out-of-camera JPEGs. This is great.

Still, I continue to wonder about one thing: Darktable knows which channels of which RAW pixels are overexposed. The current behavior is that this information (=which pixels are invalid) is apparently not used. Overexposed pixels can be indicated, but that’s it.

Instead, the wrong color values of the overexposed pixels are fed into the color pipeline just as if they were correct, and result in false colors. Then it’s up to the user to operate different sliders (e.g. the various methods of highlight reconstruction in the module of said name and in filmic) until the wrong colors are no longer apparent.

Wouldn’t a more robust approach be to clearly mark the wrong pixels as INVALID in the pipeline, and warn (or even consider it a failure) if any of them pass through the pipeline without being repainted by some method?

Would this approach pose conceptual problems or is it just that Darktable’s pipeline is not working in this way for historical reasons?

You may have noticed that darktable does very little to prevent you from shooting yourself in the foot (metaphorically speaking…). It means you have to do more yourself. It also means that you don’t have to fight against “protections” when they do the wrong thing.

Treatment of overexposed pixels follows the same theme: there is an indicator in place to show them and there are tools to deal with them. Then it’s up to you to decide which method to use. And for some highlights that method might very well be “push them into oblivion” by just lowering the white reference in filmic. That means they can be treated without any corrective action on the colour values. A bit nasty if that made the pixelpipe fail…

Keep in mind that while such overexposed pixels may not have meaningful colour information, they are not invalid in the mathematical sense. So there is no need to block processing anywhere because of such pixels. It would be different if you had mathematically invalid input values to a module… (as can theoretically happen with filmic).
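
A tiny illustration of that distinction (just a toy snippet, nothing to do with darktable’s actual code):

```python
import numpy as np

clipped = np.float32(1.0)      # saturated on the sensor, but a perfectly finite float
invalid = np.float32(np.nan)   # a genuinely invalid value, e.g. from an ill-defined operation

print(clipped * 2)             # 2.0 -> processing continues happily
print(invalid * 2)             # nan -> poisons every later computation
```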

But with clipped pixels it’s not so easy… Of course you can tell which one is saturated from the beginning, but then you don’t have colour information, just one single channel per pixel, which is pretty much meaningless before the colour transformations. You can of course clip at your earliest convenience… but that voids your possibility of recovering stuff later on (you can fill in one clipped channel if the other two are still good, etc.). Also, this simple clipping here refers to clipping the other two (good) channels after white balancing at the same minimum level, throwing away information (and no, you can’t just remove a pixel if you don’t like it, you need to fill the hole…).
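
As a rough toy sketch of that trade-off (illustration only, not darktable’s code; it assumes white-balanced values, hypothetical per-channel clip thresholds and a hypothetical local colour-ratio estimate):

```python
import numpy as np

def clip_highlights(rgb_wb, clip_wb):
    # "Clip early" strategy: pull every channel down to the lowest threshold.
    # Blown areas come out neutral, but the data still present in the
    # unclipped channels is thrown away.
    return np.minimum(rgb_wb, clip_wb.min())

def fill_clipped_channel(rgb_wb, clip_wb, local_ratio):
    # Reconstruction idea: if exactly one channel is clipped, estimate it from
    # the two good ones using colour ratios taken from nearby unclipped pixels
    # (local_ratio is a hypothetical per-channel estimate of those ratios).
    clipped = rgb_wb >= clip_wb
    if clipped.sum() == 1:
        out = rgb_wb.copy()
        estimate = np.mean(rgb_wb[~clipped] * local_ratio[clipped] / local_ratio[~clipped])
        out[clipped] = max(estimate, rgb_wb[clipped][0])  # never darker than the clip point
        return out
    return rgb_wb

pixel   = np.array([1.4, 1.0, 0.8])   # green hit its threshold, red and blue did not
clip_wb = np.array([2.0, 1.0, 1.7])   # per-channel clip points after white balance
print(clip_highlights(pixel, clip_wb))                                  # neutral-ish, info gone
print(fill_clipped_channel(pixel, clip_wb, np.array([1.0, 0.9, 0.7])))  # green estimated from R and B
```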

With overexposed (say >> 1) or out-of-gamut (negative) colours it’s a bit simpler: of course you want to keep them around until the very end; maybe some module will make good use of the information? Throwing them away early is something that was done in the past (due to 8-bit restrictions of temporary buffers).

I think the new methods introduced for HLR are helping a lot here… have you been using them? Inpaint and segmentation?

Thank you @rvietor and @hanatos for your rapid and thoughtful replies.

I would argue that there’s a difference between providing powerful tools that are difficult to use and mixing invalid data in with valid data.

As far as I can tell darktable will happily let me mix data from valid and invalid channels (for example blur valid image data together with magenta pixels), which seems fundamentally problematic, as it lets measurement errors propagate in an uncontrolled way.

I imagine that the treatment of invalid values could be made to handle such cases gracefully. Note that I do not propose to simply stop processing for pixels with blown channels, or to exclude such pixels from processing. Instead, I imagine that it might be possible to treat the data properly.

For example, one simplistic idea would be to mark overexposed channels as having an infinite value (technically this could perhaps be achieved using a floating-point inf, but that is not relevant here).

As you describe, the user may very well decide to lower the white reference in filmic such that the two other color components of the overexposed pixel also turn infinite. At the end of the pipeline, when colors are mapped to display RGB, it would seem natural to map such an (inf, inf, inf) pixel to (1, 1, 1).
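
To make that concrete, here is a minimal sketch of the kind of thing I have in mind (plain Python, assuming a float pipeline normalised so the sensor clips at 1.0; certainly not a claim about how darktable is actually implemented):

```python
import numpy as np

def mark_clipped(raw_rgb, clip_level=1.0):
    # Replace saturated channel values with +inf, so every later stage can
    # tell "unknown, at least this bright" apart from an ordinary value.
    out = np.asarray(raw_rgb, dtype=np.float32).copy()
    out[out >= clip_level] = np.inf
    return out

def to_display(rgb):
    # At the very end of the pipe, anything still infinite maps to display
    # white instead of leaking a false hue.
    return np.clip(np.nan_to_num(rgb, posinf=1.0), 0.0, 1.0)

def untreated(rgb):
    # The warning this thread is about: any surviving inf means clipped data
    # reached the output without being repainted by some module.
    return np.isinf(rgb).any()

pixel = mark_clipped([1.3, 1.0, 1.1])   # all channels blown
print(to_display(pixel))                # [1. 1. 1.]: the (inf, inf, inf) -> white case
print(untreated(pixel))                 # True: nothing ever reconstructed it
```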

I do not have any understanding of the actual implementation of darktable, and the above idea is likely not workable, but I hope that it conveys the idea that it may be possible to make the pixel pipeline treat overexposed channels as what they are instead of pretending that they are normal values.

I think that silent errors are actually worse than explicit failure.

Sure, I agree that clipping all the channels at the earliest convenience is not at all a solution; this amounts to throwing away even more information. Instead, the solution may be to throw away less information, i.e. to preserve the information about which channels are blown until they are replaced in one way or another.

A lot of these errors were exaggerated by the previous defaults, which are soon to change officially… v6 filmic set to max RGB, with clip highlights as the HLR method, introduced many of these things. I doubt you are going to see much magenta unless you really blow things out. Filmic has been tweaked: the new norm will be power norm, and the new HLR algorithms seem to all but remove the instances of cyan and magenta that would pop up. That is why I asked my question above; it may be that you are on 4.0 or 4.0.1 or earlier and using the defaults of that generation?

Yes, I’m on 4.0.0 (actually the backport to debian stable), but I could build it myself just as well. Is the development version safe for “production” use?

Also, even if the effects of invalid data become less noticeable with the new defaults, I’m curious to learn whether my above critique is valid in principle, or whether I am overlooking something.

For example, I recently treated an indoor image that looked all but perfect, until I noticed that a very small highlight was magenta. In a way this is not a problem, since it is barely visible without pixel-peeping. But from a data-treatment point of view it just feels wrong to handle the data in this way.

Well, the old default of clip highlights will set all the values to the remaining channel, making them all 1, and then they are boosted by the WB coefficients, essentially producing invalid data… So I would say yes, it is fine to use; it’s very close to the 4.2 code. There might be some small changes and bugs flushed out before December, but I don’t think anything major, and the previous issues that you would see (at least for me) are no longer a problem. AP explains the issues well here, with key points around 19 and 25 min. He could only offer the guided Laplacian as an option to work with color calibration and the modern WB, since the other methods are not technically compatible with it; the new methods were not available at that time, but now they are… and of the older ones, at least clip highlights has, I think, been removed: [EN] Highlights reconstruction : the guided Laplacian - YouTube
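
A quick back-of-the-envelope illustration of that effect, with made-up numbers (not from any real camera):

```python
import numpy as np

raw      = np.array([1.0, 1.0, 1.0])   # all three channels stuck at the raw clip point
wb_coeff = np.array([2.1, 1.0, 1.6])   # typical daylight multipliers: R and B get boosted

print(raw * wb_coeff)   # [2.1 1.  1.6] -> values above 1.0, and R/B now exceed G: the magenta cast
```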

For that, you must first be able to define what “treating the data properly” actually means in the different cases (there are more ways of getting invalid pixels than sensor clipping). Already in the case of sensor clipping in one or two channels, there are several choices possible (as shown in “highlight recovery”).

That’s not what happens as I understand filmic: anything coming out of filmic is already clipped to the range 0…1. One of the functions of filmic is the tonemapping from scene-referred (0…∞) to display-referred (0…1). So any value >1 after filmic would be invalid data…

If it were purely a matter of programming/mathematics, I would agree.

But here we are visually editing images. Apart from some specific use cases (technical/scientific), as long as the result looks good, it is good. So in that sense the errors aren’t “silent”.
And there’s the display option.

As a side note, depending on your camera, you can still have magenta highlights even with “highlight recovery” on and set to “clip to white”. For some cameras, dt can’t retrieve the correct raw white point, so it picks a default (which is usually too high). If that happens, clipped areas aren’t recognised as such.

An automatic warning “you have clipped pixels” would fail in that case, and the invalid data would happily (and silently) propagate through the whole pixelpipe. It’s a situation that cannot always be avoided (unless dt can always read the makernote metadata…). But if you are used to the automatic warning, you’d easily overlook that…

Given all that, I think there is no reason to have an explicit warning for the presence of clipped pixels, and even less for any kind of automatic treatment.

Not nice if true… although I couldn’t find any sign of deprecation of those modes.
Those modes are still valid if you don’t use the color calibration module for white balancing, and you can use some tricks even when using color calibration (e.g. a second instance for the highlights, set to “daylight 5000K”).

The problem is that you’ve started from an invalid assumption: clipping is bad. This is not correct. Clipping is bad only if you don’t want clipping. If you did want clipping and exposed that way on purpose, like many portrait photographers do, then the rest of your idea is not applicable in the workflow.

While the type of photography you do may generally not want clipped channels, not all photographers think the same way.

I’m with @paperdigits on this. Cameras have limited dynamic range, and sometimes it’s necessary to push the sensor to measure past the limits of its resolvability, the old ‘trying to fit two gallons of milk into a one-gallon jug’ metaphor.

That you have to explicitly deal with it in darktable is a testament to the level of control offered. Most mainstream software automatically inserts a highlight reconstruction step in its pipeline, and you have to live with its a priori decisions. Here, you can tailor the response to the nature of the clipping and get better results.

I do agree with the need for indicators. In my hack software I don’t include the highlight-reconstruct tool by default and rely on the magenta manifestation to tell me I need it. For most images it’s evident, but sometimes it’s subtle and easy to miss.

I do understand the dynamic range limits of photographic sensors, and I do appreciate the use of “clipping” as a form of artistic expression, for example when the background of a portrait is overexposed and turns to white. (I assume that this is what you mean.) As far as I understand, in digital photography this should be a deliberate artistic decision (i.e. lowering the white point in filmic rgb), and as far as possible not the result of physically overexposing large swaths of the sensor.

However, this is not what I had in mind when I started this thread. When individual channels of photo sites on the sensor become overexposed (outdoors it’s often the green channel, resulting in the famous magenta skies) and this is not treated specially, the result is arbitrary and uncontrolled hue shifts that are hardly of any artistic value. At least I’m not aware of anyone who appreciates a blue sky that turns cyan, or a setting sun with unnatural banded halos.

I also appreciate that darktable gives the user a lot of control. I am not mandating any particular way of treating overexposed highlights. My intention with this thread was to inquire whether it would be feasible to somehow mark known saturated values when they enter the pipeline and use this information to detect whether the lost highlights have been treated in some way. Then the user could be warned if the highlights were not treated at all - something that can be difficult to see directly. It might also make it possible to have a default way of treating blown highlights that is more satisfactory than treating the wrong values as if they were correct.

If this is a way of saying “do it in post” then I’d argue that the effect is not the same.

This is also problematic. While it is generally true, gatekeeping “artistic value” is a no-go.

Turn on the raw overexposure indication and activate the gamut checking.

It might be possible to automate this further with some of the upcoming lua stuff from @dterrahe and @wpferguson, but I am not 100% sure of that.

The new inpainting highlight reconstruction coming with 4.2, which will be the default method, should help. The module is turned on by default.

Just a new default, and reconstruct color has been dropped…

For now, I suggest we limit the discussion to overexposed pixels. Other types of invalid data require different treatment. If bad pixels, say, were to be marked as invalid, they should be marked differently.

What I mean by a proper way is one that explicitly marks the overexposed values (individually, per channel and per pixel) and allows modules to remove the markings from values that they consider repaired.

I do not have a concrete proposal. I am not even sure whether I understand the problem sufficiently. That’s why I started the discussion here.

OK, these are implementation details that I do not know enough about. In that case perhaps filmic could do the mapping that I mention. Or perhaps some other way of marking invalid values would be more appropriate.

The display option does not tell you whether the problem has been treated, and visual assessment of the result can be misleading if the overexposed area is small or if the artificial hue shift is subtle.

On the other hand, darktable modules might (in principle) be able to know when they overpaint invalid pixel channels with data taken from valid ones - this is what highlight reconstruction does, right? In this way, it should be possible to automatically verify that all blown highlights have indeed been treated. Note that this does not restrict the user’s artistic freedom in any way. For the rare case where the user wishes to use the magenta sky for artistic effect, there could be a very simple special module that declares all values to be valid.
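
Very roughly, I imagine something like the following (a hypothetical design sketched in Python; all the function names are mine and this is not a proposal that matches darktable’s actual architecture):

```python
import numpy as np

def make_clipped_mask(raw, clip_level):
    # True wherever a channel is saturated on the sensor; the mask travels
    # through the pipe alongside the image data.
    return raw >= clip_level

def reconstruct_highlights(img, mask, fill_value=1.0):
    # Stand-in for any real method (inpainting, segmentation, ...): repaint
    # the flagged channels somehow, then clear their flags.
    repaired = img.copy()
    repaired[mask] = fill_value
    return repaired, np.zeros_like(mask)

def declare_valid(mask):
    # The "I want the clipped look on purpose" module mentioned above:
    # it changes no pixel values, it only clears the flags.
    return np.zeros_like(mask)

def end_of_pipe_check(mask):
    # The warning itself: any flag still set means clipped data reached the
    # output without being treated in any way.
    if mask.any():
        print(f"warning: {int(mask.sum())} clipped channel values were never treated")
```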

Interesting. Naively, I would expect that the correct raw white point should be detectable from the raw histogram: if a channel is saturated, its raw histogram should have an abrupt cutoff. Even if this cutoff is not perfectly sharp, it should still be possible to detect it, shouldn’t it?

It’s not as straightforward a problem as one might think. In the camera, the blown values pile up at the highest valid value. The problem becomes, how high does a pile need to be before it is ‘invalid’…
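
For what it’s worth, the naive heuristic would look something like this (a rough sketch; the spike_factor threshold is made up, and choosing it is exactly the hard part):

```python
import numpy as np

def guess_white_point(raw_channel, bins=1024, spike_factor=50.0):
    # Look for a suspiciously overpopulated bin near the top of the raw
    # histogram: clipped values pile up at (or just below) the true white
    # point. A bright but unclipped scene can produce a similar pile, which
    # is why the spike_factor threshold is hard to choose.
    hist, edges = np.histogram(raw_channel, bins=bins)
    typical = np.median(hist[hist > 0]) + 1           # "normal" bin population
    spikes = np.nonzero(hist > spike_factor * typical)[0]
    if spikes.size:
        return edges[spikes[-1]]                      # left edge of the highest spike
    return None                                       # no obvious clipping pile found
```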

Is the clipping indicator not good enough? It shows any pixels that are clipping after applying the full pixelpipe. You even get to choose the criteria.

What’s the reason for dropping reconstruct color?

Possibly, but it could concern only a small number of pixels (cf. the example you cited where only a small area was blown). It’s hard to decide whether that means some pixels are clipped, or just very bright. Add a few very bright but unclipped zones, and you have a real problem detecting clipping…

And keep in mind that:

  • the indicators to show clipped areas are there
  • there are tools to deal with them
  • dt doesn’t want to impose anything

Also, allowing part of the image to clip can be an artistic decision taken at the moment of exposure (either by choosing the lesser of two evils, or because the photographer wanted that area overexposed).
While a lot can be changed in post-production, it’s better to take the important decisions at exposure time (when and if possible, of course).

It very often gives serious artifacts in the reconstructed region.

That’s not the same thing. That indicator shows values “clipped” by the scene->display conversion;
the raw clipping indicator shows the areas where you have saturated pixels on the sensor.

The main practical difference is: if you only have overexposure after the scene->display conversion, you can recover the area and you won’t have lost any information. If you have clipped pixels, you have lost information that cannot be recovered. Any treatment of such sensor-clipped areas is an approximation of what the area would have looked like without the clipping.
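
In numbers (toy values, assuming an unbounded scene-referred pipeline):

```python
scene_value = 3.5                 # "overexposed" only for the display transform
print(scene_value * 0.25)         # 0.875 -> lowering exposure recovers it, nothing was lost

sensor_clipped = 1.0              # the sensor never recorded anything above this
print(sensor_clipped * 0.25)      # 0.25, but the true value might have been 1.0, 1.3 or 4.0;
                                  # no scaling can bring the missing information back
```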

The idea about a warning isn’t so bad, but the invalid-pixels idea is just not sensible. There are so many implications for any software (not only DT) that I can’t list them all.

Just one example: many image-processing algorithms are optimised to work on a rectangular region with no “holes” to avoid. If you want that in a pipeline, you need to rewrite all of them and take the performance impact, assuming it can even be done for that algorithm.

It might be good for you to implement a trivial example in a high-level language, such as a box filter, and see how you deal with the concept…

Edit: in case that last point looks like sarcasm, I’m totally serious - if you’re thinking in this sort of detail about inner workings, you could learn lots from such an exercise!
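
For instance, even a 3x3 box filter that has to respect a per-pixel validity mask needs an extra mask plane, per-window renormalisation and a policy for windows with no valid sample at all (a rough Python sketch, just to show the shape of the problem):

```python
import numpy as np

def masked_box3(img, valid):
    # 3x3 box filter over a 2D image, skipping invalid samples. The plain box
    # filter is separable and trivially vectorised; this version needs the
    # extra mask plane, per-window renormalisation and a special case for
    # windows containing no valid pixel at all.
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    pad_i = np.pad(img * valid, 1)                  # invalid samples contribute 0
    pad_v = np.pad(valid.astype(np.float64), 1)
    for y in range(h):
        for x in range(w):
            s = pad_i[y:y + 3, x:x + 3].sum()
            n = pad_v[y:y + 3, x:x + 3].sum()
            out[y, x] = s / n if n > 0 else 0.0     # what should an all-invalid window return?
    return out
```

And that is the trivial case: darktable’s real modules are far more complex than a box filter, and many rely on the image being a plain rectangular buffer.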
