Introducing a new color balance mode in darktable

This is something I wanted to ask “anonymous” about. Sometimes it seemed he meant that one should never, ever edit non-linearly-encoded RGB, and never perform any edits other than those that preserve scene-referred ratios, or at least allow one to recover them (such as a linear-to-log transform, which is reversible back to the original intensities). But this doesn’t seem like a reasonable way to edit when the goal is a final “pretty” image, rather than a “scene-referred” one, to be displayed as a print or on a monitor.

But sometimes it seemed that all that was meant was that the original scene-referred image file stays untouched, and transforms are done on copies of the original image, linked by nodes and entirely reversible at will. But this interpretation also doesn’t seem consistent with the emphasis given to keeping everything scene-referred.

Or maybe what was meant was that the original footage/images are untouched apart from scene-referred edits to bring them in line with other footage/images that will be combined in the final production. Then all the “make it pretty” edits are done on the now-consistent-across-sources footage/images, again using nodes, and also LUTs and whatever else one might do to produce a final “pretty” version suitable for display, which might then be further modified on a per-output-device basis.

Color transformations, and most image processing, are linear algebra. Doing linear algebra in non-linear spaces is dangerous. That doesn’t mean it can’t work, but it is not fool-proof.
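As a toy illustration of that danger (a generic example, not any specific darktable code): averaging two gamma-encoded pixels gives a different result than averaging the actual light intensities. The 2.2 power below is a stand-in for a typical display encoding.

```python
# Averaging two pixels in a non-linear (gamma-encoded) space vs. in
# linear light. The 2.2 exponent approximates a typical display
# encoding; real sRGB uses a slightly different curve.
GAMMA = 2.2

def to_linear(v):
    return v ** GAMMA

def to_gamma(v):
    return v ** (1.0 / GAMMA)

a, b = 0.0, 1.0  # pure black and pure white, gamma-encoded

naive = (a + b) / 2  # linear algebra applied in the non-linear space
physical = to_gamma((to_linear(a) + to_linear(b)) / 2)  # average real light

print(naive)     # 0.5
print(physical)  # ~0.73: a visibly lighter mid-tone
```

The naive average lands on a different grey than the physically correct one, which is exactly the kind of silent error the quote warns about.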

Is it possible to move unbreak module after the LUT one then ?

No. That would break the compatibility with old edits. It has to be another module.

I’m working with Troy Sobotka on something great with log and filmic curves packed together. Sit tight and give me some time :slight_smile:


Anonymous is not here, so you’ll have to take the word of one of his evil minions.
All the processing can be done on linear scene-referred data. We have established that scene-referred data is not ready to be displayed, so the next step is to look at the data through a “view” transform.
Think of the view transform as a virtual camera taking a JPEG of a real-world scene.
The view will apply a curve to accommodate a portion of the dynamic range within the limits of the display, bending it to produce a nice-looking display-referred image that fits your desired output (which can be your screen, or a display-ready format like a JPEG).
It’s really not that complicated.
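A minimal sketch of the idea in Python (a made-up curve, not darktable’s actual view/filmic transform): the scene-referred data stays untouched, and the view simply maps a chosen exposure window onto the display range.

```python
import math

def view(scene_linear, black_ev=-8.0, white_ev=0.0):
    """Toy view transform: compress the stops between black_ev and
    white_ev (relative to scene white = 1.0) into display [0, 1]."""
    clipped = min(max(scene_linear, 2.0 ** black_ev), 2.0 ** white_ev)
    ev = math.log2(clipped)
    return (ev - black_ev) / (white_ev - black_ev)

scene = [1.0, 0.18, 0.01]           # scene-referred data, never modified
display = [view(v) for v in scene]  # what the "magic goggles" show
print(display)
```

Changing `black_ev`/`white_ev` changes which slice of the scene’s dynamic range the goggles show, without ever touching the underlying data.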

In other words: You don’t need to make your image display-ready to edit it. You can leave it scene-referred and watch it through some magic goggles while you edit.
The benefits of staying linear and keeping the scene ratios have been discussed before.


Old edits? … this is not released yet … I would find it better to make it right before the release…

The unbreak profile module still has the gamma mode.

Also, I found the problem with the 50 reading. There is actually an issue in the colorout module, which applies the display gamma/tone curve for you without telling you. That gamma curve is disabled only in softproof mode, and the color picker reads after the gamma curve is applied. When you remove that gamma, you get an actual 50-ish grey out of the log.

Right!

Log has several advantages:

  • It lets you recover the scene ratios easily.
  • It allows cramming a wider dynamic range into a low bit depth.
  • It provides an even distribution of data (an equal number of bits per stop).

It’s not so hard to imagine why video/cinema cameras use it. It’s a great format to store scene-referred data in a convenient way, without needing huge floating-point files.
You can pull scene-referred linear data out for manipulation (compositing/VFX, etc.), or map it to different displays through LUTs for previewing or delivery.
You can grade on log too, with benefits compared to grading display-ready transfers, where the tone distribution is intended for viewing rather than processing.
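For the first point, a generic log curve (a hypothetical example, not any vendor’s actual curve) shows how inverting the encoding recovers the scene ratios exactly:

```python
import math

STOPS = 16.0  # dynamic range covered by this hypothetical log curve

def lin_to_log(x):
    """Map scene-linear values in (0, 1] to a normalized log signal."""
    x = min(max(x, 2.0 ** -STOPS), 1.0)
    return 1.0 + math.log2(x) / STOPS

def log_to_lin(y):
    """Invert the curve to recover the scene-linear value."""
    return 2.0 ** (STOPS * (y - 1.0))

scene = [1.0, 0.5, 0.25, 0.18]
round_trip = [log_to_lin(lin_to_log(v)) for v in scene]
print(round_trip)  # the original scene-linear values, recovered
```

Each stop occupies the same slice of the encoded range (1/16 of it here), which is also why the bit distribution is even.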

This is the one that gets me. I’m trying to figure out a use case for “recover”…

Your eyes see in log2. That means every time you divide the lightness by 2, you lose 1 stop. Dividing by 2 forever never reaches 0. 0 is null energy, no radiation; it means absolute zero temperature (0 K). And that’s a good thing, because log2(0) does not exist; it’s undefined (the limit is -INF). We, and film, see in stops, with stops evenly spaced (perceptually speaking). These stops are what @gez calls the scene ratios: they are evenly spaced.

Camera readings are 16-bit integers. A 0 in camera readings doesn’t mean no energy; it means the sensor can’t record data below that level. It’s the sensor’s ground level. It’s not how we see: we don’t see pure black. It’s not how reality is either, except at the center of black holes.

Problem: in 16-bit linear encoding, the top first stop goes from 65535 to 32768 (range: 32767 values), the second from 32768 to 16384 (range: 16383 values), the third from 16384 to 8192 (range: 8191 values), and so on until you reach the 16th stop, which has only a 1-value range (from 0 to 1).

So the stops are not evenly spaced in linear encoding: they get squeezed more and more as you go into the dark shades. And the stops are all that matters to human vision.
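The squeeze is easy to verify by counting how many integer codes each stop gets:

```python
# Count how many of the 65536 integer codes of a 16-bit linear
# encoding fall into each stop, halving the range each time.
counts = []
hi = 65536
for stop in range(1, 17):
    lo = hi // 2
    counts.append(hi - lo)
    print(f"stop {stop:2d}: {hi - lo:5d} codes  ({lo} .. {hi - 1})")
    hi = lo
# Half of all codes go to the single brightest stop;
# the darkest stop gets just one code.
```

Half of the entire code space is spent on the one stop just below clipping, while the shadows share a handful of codes between them.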

Human perceptions are all approximated by logarithmic functions (the Weber-Fechner law). Sound intensity is expressed in decibels, which is a log10 scale, and nobody ever questions it.
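The two scales express the same idea: decibels are log10 of a ratio, photographic stops are log2 of a ratio.

```python
import math

def decibels(intensity, reference=1.0):
    """Sound intensity level: a log10 scale."""
    return 10.0 * math.log10(intensity / reference)

def stops(intensity, reference=1.0):
    """Photographic exposure: a log2 scale."""
    return math.log2(intensity / reference)

print(decibels(0.5))  # ~-3.01 dB for halving the intensity
print(stops(0.5))     # -1.0 stop for the same halving
```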

Now, CRT screens have durably messed up the color industry with their hardware response curve (which wasn’t log but a power function, thanks electrons!), which engraved in marble the use of gamma corrections to display linear data properly. But that era is long gone: LCD screens are linear, and yet we still continue to apply the same tacky workflow, recording linear and displaying gamma-corrected garbage when there is no need to (because the first thing your LCD screen does is revert the gamma).

I realise now that the trick I gave with the colorbalance gamma factor, which @phweyland mentioned too, was not reverting the log profile (as I thought), but reverting the gamma curve that darktable applies before displaying the image without telling you, and that messes up the color picker readings.

So, to sum up: work in linear, remap dynamic ranges through log, display at gamma = 1 (that is, linear), and let the display DAC or ICC profile adjust the linear data according to the response curve of the display medium.


Not sure what you mean in the last paragraph.
You won’t be feeding log directly to the screen (that paragraph seems to imply it) unless you’re displaying HDR content.
For typical SDR content you have to use a specific transfer to cancel the display non-linearity.

EDIT: I wasn’t clear: I meant that you’re not going to display log-encoded intensities, because the screen’s DAC non-linearity will sort of cancel them. Log-encoded is not what your eyes are going to receive; your eyes want linear intensities akin to the original scene’s ratios.
If you send linear directly to the screen, the DAC will bend it with a 2.2 power and it will turn out too dark. That’s why “gamma correction” is needed in a display-referred image like a regular JPEG: to “cancel” the screen’s non-linearity (the CRT’s non-linearity in the past, and modern screens’ DACs, which mimic CRT non-linearity because it’s still convenient for low-bit-depth delivery).
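A sketch of that cancellation, with the display modelled as a plain 2.2 power (an approximation; real transfer functions differ, especially in the shadows):

```python
GAMMA = 2.2  # approximate display response; real curves differ slightly

def display_response(signal):
    """What the screen does to the signal it receives."""
    return signal ** GAMMA

def gamma_correct(linear):
    """Pre-apply the inverse power, as a display-referred JPEG does."""
    return linear ** (1.0 / GAMMA)

grey = 0.18  # scene-linear middle grey
print(display_response(grey))                 # ~0.023: raw linear is far too dark
print(display_response(gamma_correct(grey)))  # ~0.18: the correction cancels out
```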

https://github.com/darktable-org/darktable/pull/1795

This is the color chart from @phweyland with my log profile centered at 50% and the gamma correction disabled by hacky means in darktable’s code, with no other correction. It seems that log looks good when you don’t reapply a gamma on top.

This is the same chart with no log, but the default gamma correction in darktable:

From my standpoint you don’t need to do that.

Moving from linear mode (exposure output) to log mode (unbreak output), the colorchecker histogram changes as follows:
[histogram screenshots: linear output vs. log output]

The distribution of points is compressed to the right, giving the image the expected dull aspect. Middle grey values follow the same path, and it seems completely normal to see the middle grey patch much lighter.

On the other hand, if you disable the gamma correction as you did just above, the output of unbreak appears normal (as you show), but the output of all the other dt modules will appear darker, making dt unusable…

No. These are 2 different workflows. Gamma and log fulfill the same needs in 2 different ways and shouldn’t be used at the same time. The fact that darktable applies the display gamma is wrong: it breaks the softproofing (technically it works, but the values are wrong), it’s not consistent, it messes up the color picker and prevents you from seeing the actual data.

Tonight I made a change in the output color profile module to allow users to choose between the gamma way and the linear way. I still need to fix some problems with LittleCMS, but the results are better.

@anon41087856 - thanks for clarifying the code!

I tried the saturation option and thought the results were visually pleasing, though there was quite a bit of clipping in the shadows of the particular image I picked as a sample image.

My sample image was a red apple, shot raw, no other processing in darktable beyond the bare minimum to interpolate the raw file, and specifically no “make it pretty” options such as the base curve or sharpening, and using the default dcraw camera input profile.

Would it be possible to add some sort of saturation mask to the algorithm, to help rein in any otherwise resulting out-of-gamut/clipped channel values when using the saturation slider?

I think darktable allows adding a mask as a separate step, but I was thinking of something like a checkbox to apply or not apply an automatically generated saturation mask. There are ways to construct a pretty good saturation mask using just RGB, but I don’t remember the procedure; an internet search would probably turn up the calculations.

If I understand properly, to see the output of unbreak I choose the linear output way, while to see the output of color balance (or any other classical dt module) I choose the gamma way. Is that correct?

What clipping ? Gamut or dynamic range ?

darktable has that already within the parametric blending options.

I think what you are looking for is the vibrance module, it does exactly that.

As a temporary hack, yes, but my fix to get the true readings has been merged into master today, so you don’t need to do that anymore. As long as you don’t use LittleCMS, the readings will be right in both cases. The LittleCMS case is harder to fix.