module proposal: gamut compression

I’ve resurrected the idea of adding @jedsmith’s gamut-compress to darktable, this time not as part of AgX, but as a separate module.

In AgX, out-of-gamut colours are “pushed inside” the gamut: the components are adjusted so no negative value remains. However, as the rest of the colours are not compressed, these “mitigated” colours end up colliding with colours that were already inside the gamut. Jed’s algorithm instead gently desaturates highly saturated colours, bringing out-of-gamut colours inside the gamut while leaving room for them.
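For context, the core of Jed’s method (simplified here; see his repo for the real thing) works per pixel: it measures how far each component lies from an ‘achromatic’ value (the largest component), leaves distances below a threshold untouched, and smoothly rolls off larger distances. Below is a rough C sketch of that idea; the Reinhard-style rolloff and the parameter names are my own stand-ins, not Jed’s actual code.

    #include <math.h>

    // A minimal sketch (not Jed's implementation) of distance-based gamut
    // compression. 'threshold' is where compression starts; the compressed
    // distance approaches 'limit' asymptotically, so with limit = 1.0 no
    // component can end up negative. Assumes limit > threshold.
    static inline float compress_distance(const float d, const float threshold, const float limit)
    {
      if(d <= threshold) return d;                  // in-gamut, not very saturated: untouched
      const float s = limit - threshold;            // room available for compression
      return threshold + s * (d - threshold) / (s + d - threshold);
    }

    static inline void gamut_compress_pixel(float rgb[3], const float threshold[3], const float limit[3])
    {
      const float ach = fmaxf(rgb[0], fmaxf(rgb[1], rgb[2]));  // 'achromatic' reference
      if(ach <= 0.0f) return;                       // nothing sensible to do here
      for(int c = 0; c < 3; c++)
      {
        const float dist = (ach - rgb[c]) / ach;    // distance from the achromatic axis; > 1 if rgb[c] < 0
        rgb[c] = ach - compress_distance(dist, threshold[c], limit[c]) * ach;
      }
    }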

These are agx (with blender-like|base, only contrast etc. adjusted), and then the same, but with the gamut compressor before agx, compressing into Rec 2020.

While sigmoid and agx allow manipulation of primaries, filmic, for example, does not. Also, most tools operate on all colours, while this one lets you, on a component-by-component basis, compress only out-of-gamut and highly saturated colours.

What do you think?

9 Likes

Where does this module come in the order of processing? So right before the tone mapper? Is it a calculation on its own, or does it need sliders to feed the math? Is it primarily for AGX, or is this a general tool to be added for gamut compression? Some of the tonemappers in the ART ctl scripts have a color space dropdown and a gamut slider. I know AGX is slider heavy, but could it be integrated, or does it have wider use? DT has a touch of gamut compression added, I think in CC, and filmic has some gamut math, so would they all complement each other?

Just some random thoughts

1 Like

Did you read Kofa’s post?

but gamut compressor before agx, compressing into Rec 2020.

Yes I did, and I know my comment can seem redundant, but he also talks about other modules using all the colors as input, and also mentions the other tone mappers. If it’s just for AGX or the tone mappers, then I would integrate it with them; if it has wider application, then maybe the module is justified.

1 Like

I can think of at least two possible places:

  • early in the pipeline; this could complement / replace the gamut compression in color calibration (it is my observation that color calibration, with the default compression of 1, never emits colours where (max(RGB) - min(RGB)) / max(RGB) > 1.25; see the sketch after this list)
  • before output color profile, e.g. to make sure all colours fit into sRGB.
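(To make the metric in the first bullet concrete, it boils down to something like this hypothetical helper; the value can only exceed 1 when the smallest component is negative, i.e. when the pixel is outside the working gamut:)

    #include <math.h>

    // Hypothetical helper, only to illustrate the metric quoted above.
    static inline float rgb_saturation_metric(const float rgb[3])
    {
      const float mx = fmaxf(rgb[0], fmaxf(rgb[1], rgb[2]));
      const float mn = fminf(rgb[0], fminf(rgb[1], rgb[2]));
      // (max - min) / max > 1 implies min < 0, i.e. an out-of-gamut pixel
      return mx > 0.0f ? (mx - mn) / mx : 0.0f;
    }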

I don’t want to merge it into agx; it’s useful with other tone mappers, or even without a tone mapper. It does not manage components > 1, only components < 0.

The algorithm has 6 parameters; whether we need them all, or should (for example) combine the ‘start compression’ sliders into one, I don’t yet know. You can read about the method on Jed’s GitHub repo; he has provided an explanation. The maths is much, much simpler than AgX’s: no matrices or the like, it’s all basic arithmetic.
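To make the parameter count concrete: my reading is that these are a ‘start of compression’ threshold and a distance limit per component, roughly along these lines (a hypothetical layout; the actual module may name and group them differently):

    // Hypothetical parameter layout, 6 values in total. Merging the three
    // thresholds into a single slider would reduce the count to 4.
    typedef struct gamut_compress_params_t
    {
      float threshold[3]; // per component: distance at which compression starts
      float limit[3];     // per component: how far outside the gamut colours are still pulled in from
    } gamut_compress_params_t;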

2 Likes

I think it should be a separate module too at or after color calibration. I like how it helps with the blue lights in the police car.

2 Likes

Thanks, I was just thinking that the two usual locations would be at input (or soon after) to clean things up, or during output…

I missed this nuance too…

1 Like

Well, yes: it’s not a tone mapper. It’s a gamut compressor. Components > 1 (in display-referred data) mean ‘too bright’ (for the display). Components < 0, whether scene- or display-referred, mean ‘out of gamut’ (for the colour space).

1 Like

I have been thinking about this as well at one point, and it’s definitely needed for some pictures!

I have one observation to share about the topic; not sure how it impacts where the module should go.

My thinking goes like this:
1: Denoised raw images in “camera space” are never out of their gamut.
2: Out of gamut colors happen* in the input color profile step, and can be more or less severe depending on white balance (both legacy and color calibration).
3: Observe that both the input color profile and white balance are linear operations, so it should be possible to calculate where the camera profile boundary lies in your selected working profile (see the sketch after this list). Could that be used as input to the gamut algorithm? Maybe together with a smoothness parameter?

  • Adding saturation can also push colors out of gamut, but I feel like that is a bit more self-inflicted :grimacing:
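(Regarding point 3, here is a rough sketch of the idea, assuming white balance and the input profile collapse into a single 3×3 matrix M from camera RGB to working RGB; darktable’s actual data structures are not used here. The camera gamut in the working space is then the cone of M applied to non-negative camera values, and its edges are simply the columns of M:)

    // Rough sketch (hypothetical, not darktable code): if white balance and the
    // input profile are one linear map M (camera RGB -> working RGB), the images
    // of the camera-space basis vectors, i.e. the columns of M, are the edges of
    // the camera gamut cone in the working space. Pairs of those edges span its
    // boundary planes, which a gamut algorithm could test against.
    static void camera_gamut_edges(const float M[3][3], float edges[3][3])
    {
      for(int i = 0; i < 3; i++)          // i-th camera basis vector
        for(int row = 0; row < 3; row++)
          edges[i][row] = M[row][i];      // column i of M
    }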
6 Likes

Maybe doing that (getting inside the camera profile boundary) could be useful, but I don’t think this algorithm can do that.

1 Like
  • before output color profile, e.g. to make sure all colours fit into sRGB.

I’d actually really like the idea of having more control over the gamut mapping to the output color space, and that’s something I also wanted to experiment with.
Currently, one can only choose the rendering intent (provided LCMS is used, and even then it seems kind of broken).
Typically, I edit my images on a monitor with a wider gamut than sRGB, and if I want to export an image to sRGB, I basically have two alternatives. Either I alter the whole image pipeline while using soft-proofing to handle out-of-gamut colors in a way that looks good, which will then, however, not look as previously intended in wider gamuts. Or I use some modules at the end of the pipeline, often with masking, to compress the gamut for sRGB; those modules will be deactivated if exporting to / viewing on wider color spaces.
Both are quite fiddly, and amount to abusing modules for a cumbersome gamut mapping they are not designed for…

But I’d say the more general term “gamut mapping” might be more appropriate than “gamut compression” in this regard?

However, I have no idea how such a module should be conceived, what it should do in detail, and how the UI should be designed to make it actually useful. There are many different gamut mapping approaches in the scientific literature. Usually, they optimize for the preservation of different quantities or for other, more perceptual design goals, but this is often of a more technical nature and may or may not work for different scenarios.
Finding and implementing algorithms that work for their design goals is not a problem, but linking this to artistic control, to tackle the real-world issues originating from the color space conversions w.r.t. the look users want to achieve / preserve for a large variety of photos, seems quite challenging - even if one limits oneself to local algorithms :wink:

2 Likes

YES!
@kofa, I don’t think simply implementing someone’s algorithm is the best way forward. Rather, what does the user want to do with gamut when processing an image? I think this should be considered before launching into new modules.
Just off the top of my head…
  • See where out-of-gamut pixels are being created in the pipeline, so as to better understand how to avoid or deal with them.
  • Choose whether to shrink the whole gamut into the intended space or shrink just the OOG colours, and easily see the difference in the preview window.
  • Easily create a mask of OOG pixels for massaging in other modules.

Other thoughts…
If I’m not mistaken, the OOG button is confusing and has a complex relationship to the working, output and monitor profiles. It might be a good idea to sort this out before building new layers of gamut processing? (Options might include ditching the current button and starting with a clean sheet.)
The shrinking of the whole gamut: isn’t this perceptual rendering? I think, but am not totally sure, that the position is that this doesn’t work at all without Little CMS, and that with it, it’s uncertain, or we don’t have much in the way of LUT output profiles. It might be best to plan an overall way forward on perceptual output before making a new module.

It’s great you want to develop DT, @kofa, but I think there’s also great benefit in attending to some fundamentals first.

2 Likes

As just a user, I wonder if an added module that, at least on the surface, appears to do the same thing as the output color space transform and/or the input raw mapping to the working color profile wouldn’t just cause confusion. Is this ultimately just an alternative, perhaps better, to the long-standing perceptual and colorimetric gamut transforms? If so, could it just be added as an option there? If not, how does this complement the modules we already have?

3 Likes

… if you have an output profile that supports them; even if you do, ‘control’ is basically selecting between relative colorimetric and perceptual.

Also, this is not only about the final output: the processing modules themselves often cannot deal with negative components. AgX has something built in to eliminate them (and so does sigmoid), but, as I mentioned in the initial post, it can cause ‘colour collisions’.

It starts at the camera, I believe. Our input can easily be out-of-gamut for the default Rec 2020 working space, and is usually out-of-gamut for the typical final output space, sRGB.

This algorithm can do that; you can set the starting point of compression to 0% (‘shrink the whole gamut into the intended space’), or to any other value. ‘Shrink just the OOG’ will cause colour clashes; you need to make space for them.

Again, this is not just about the final output gamut mapping/conversion/compression. The examples I provided apply the module before AgX.

It also works afterwards. For example, for this image, the output from agx is outside the sRGB gamut:

From sigmoid, too, even if using sRGB primaries:

This is with the module’s internal gamut check (a debug feature, currently always on), which false-colours pixels with negative components, before compression:
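(For reference, that debug check amounts to something like the following; a hypothetical sketch, not the module’s actual code:)

    // Hypothetical sketch of the debug visualisation: any pixel with a negative
    // component gets a false colour, so out-of-gamut areas stand out.
    static inline void debug_mark_out_of_gamut(float rgb[3])
    {
      if(rgb[0] < 0.0f || rgb[1] < 0.0f || rgb[2] < 0.0f)
      {
        // which components are negative determines the marker colour
        rgb[0] = rgb[0] < 0.0f ? 1.0f : 0.0f;
        rgb[1] = rgb[1] < 0.0f ? 1.0f : 0.0f;
        rgb[2] = rgb[2] < 0.0f ? 1.0f : 0.0f;
      }
    }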

And after:

Darktable’s own gamut check still shows out-of-gamut areas; I don’t know whether that’s because the screen profile is part of the pipeline, or whether it uses different criteria. Setting the display profile to sRGB just changes what it considers out-of-gamut:

It is, of course, only a proposal. The method itself has been adopted by ACES, and the results I have seen so far look convincing to me.

6 Likes

I imagine Jed’s processing does what it says on the tin, especially if ACES is using it, but I still think it would be good to “expand the project” into a wider OOG effort and, for example, try to generate some impetus for sorting out the gamut check function. DT has a lot of great functionality, but the gamut check is a fly in the ointment for me.

You refer to clashes when the overall gamut is not compressed. When I’ve delved into this, I haven’t had clashes in the sense of bad artefacts; rather, the OOG colours, say a red flower, just merge with the in-gamut ones, and you have an area with no detail in it. So here is a potential requirement for new gamut functionality: do some custom local contrast processing in lightness or chroma (or both) just in the area where the OOG colours are. This could reinstate or emphasise detail and be part of the gamut reduction processing, meaning you could retain detail without having to squash the whole image.

It seems to me there are at least two types of OOG. There are perfectly valid colours in, say, the Rec 2020 default working space, which become an OOG issue when outputting to sRGB. Secondly, there are pixels in RGB space with one or more negative components. To me, the latter are simply bad data. We’re not doing quantum physics here, and we can’t have negative light energy! I think all modules should be responsible for ensuring they never output negative components into the working space. Fixing this might sometimes just be a matter of clamping up to zero, but it might also be more complicated. Enforcing proper pixel values seems like a useful enhancement to DT. It’s easy for me to say all this, I know. There’s a sense in which a new gamut module could be adding more layers and complexity to fix an issue like negatives, which could perhaps be prevented at source.

I think having negative values is the definition of being out of gamut. (1, 2, 3) is not out of gamut (in the scene-referred part of the pipeline); it’s simply bright. It does not fit into the display gamut, but that’s not because of its saturation; it’s because of its luminance.

Adobe RGB green primary (0, 1, 0) is (-5.14, 1, -0.55) in sRGB (I’m in the office, on lunchbreak, so it’s just some random web calculator I have access to here; could be wrong). Jed’s pages show how valid camera data can produce negative components in commonly used colour spaces.

I think that, for practical reasons, gamut compression is closely tied to mapping into display referred spaces (filmic, sigmoid, AgX) in the workflow.

There is no theoretical reason for this (potentially it could happen before or after), but if it is before, then the module that maps into display space would have to promise not to expand gamut (and ideally not shrink it either), which is a difficult design constraint.

If it is after, then the compression would have to happen in display-referred space, which is doable but not ideal either.

I am all for modular design, but in this instance it is tricky to factor it out.

It might be… the dedicated gamut checker uses the soft-proofing profile as its gamut reference, and the overexposure indicator (set to ‘full gamut’ or another mode) uses the histogram profile. The selection of display profile could be interacting with one of those two profiles, depending on how you check gamut and which profiles are used in each of those settings.

1 Like

Sure, we need to ensure the final image fits inside the gamut the display/output file (e.g. sRGB JPG) can handle. But we also need to ensure it’s in a gamut our processing modules can handle.

agx and filmic rgb use logarithms; I haven’t checked exactly what sigmoid does, but it also calls:

    // Force negative values to zero
    _desaturate_negative_values(pix_in_base, pix_in_strict_positive);

Therefore, having a tool that can gently compress into the working gamut would also be nice.

If we check this agx output, we can see a weird shape inside the light cone:

Perhaps it’s no coincidence that that area is out of gamut; namely, the green values are negative, as shown in the gamut compressor’s debug visualisation:

Increasing green compression cleans that up, and (after also cleaning up the negative red components) the agx output looks like this:

BTW, since AgX has no idea the final output is sRGB (it takes data from Rec 2020 and outputs in Rec 2020), it’s perhaps not surprising that a gamut compressor instance after agx shows out-of-gamut (for sRGB) values, this time with negative red:

Compressing:

Comparison of the last step:

With compression to sRGB: above; without: below.

5 Likes

I still don’t understand why the final tonemapper modules don’t just take the output profile into account. I don’t know if it is technically feasible, but conceptually those two modules are very closely tied.