Introducing a new color balance mode in darktable

@anon41087856 - I’m looking at your “color balance” module and also at the colorbalance.c code in darktable from git. I can read and write C code, but only if it’s relatively straightforward and doesn’t have too much of the sort of code that’s “more efficient but less transparent”. So if you don’t mind, I’d like to ask some totally dumb questions:

  1. Is this the code loop that’s used to calculate saturation?

       // transform the pixel to sRGB:
       // Lab -> XYZ
       float XYZ[3] = { 0.0f };
       dt_Lab_to_XYZ(in, XYZ);
    
       // XYZ -> sRGB
       float rgb[3] = { 0.0f };
       dt_XYZ_to_prophotorgb(XYZ, rgb);
    
       const float luma = XYZ[1]; // the Y channel is the relative luminance
    
       // do the calculation in RGB space
       for(int c = 0; c < 3; c++)
       {
         // main saturation
         if (run_saturation) rgb[c] = luma + d->saturation * (rgb[c] - luma);
         // ... the loop continues with the other adjustments
       }
    
  2. Despite the comments, the code actually uses linear ProPhotoRGB in the loop? This code (or similar code) is repeated several times, but sometimes using linear sRGB instead of ProPhotoRGB?

  3. In the user interface there are several options, including “lift, gamma, gain (ProPhotoRGB)” and “lift, gamma, gain (sRGB)”. What’s the actual difference between these two UI options? The provided sliders are not exactly the same for these two options, even though it sounds like the only difference should be whether the image is converted to ProPhotoRGB or sRGB before calculating “luma + d->saturation * (rgb[c] - luma)”.

  4. Does the user’s chosen output space affect the color balance calculations? I’m guessing not? In case it’s not clear, by “output space” I mean the ICC profile that the user selects to convert the image to before exporting the image to disk - I don’t mean the user’s monitor profile.

  5. Do you (or anyone reading this thread) know what color space darktable uses for the color picker in the left-hand column? It doesn’t seem to update very easily, so I’m having trouble even checking whether changing the output color space has an effect, but it doesn’t seem to.

What prompted these questions is I set up a spreadsheet to calculate some RGB values after using the saturation slider, and immediately ran into things like “which color space for Y”, are the RGB values linear or not in the calculation of saturation? What about in the color picker? And what color space is the color picker even using?

I think Lab uses a 0-100 range; it’s not really 8-bit, and in darktable it’s probably stored as 0.0-100.0 float.

When converting 8-bit to float we have to divide by 255 :thinking:

yes

yes. The sRGB gamma-corrected version is the historic lift/gamma/gain one, which I have to keep untouched for compatibility with old edits.

the sRGB LGG is the historic one, the ProPhotoRGB LGG is mine. The sliders that disappear with the old sRGB variant are the contrast and global saturation ones, because it’s a bad idea to mess with contrast in sRGB.

Yes, but not as you may think. Since the gamma conversion in ProPhoto RGB is a straightforward x^gamma_RGB, I have been able to merge the user input gamma and the RGB gamma (because (x^gamma_user)^gamma_RGB = x^(gamma_user × gamma_RGB)), so I spare one power evaluation and speed up the computation by ×2. In sRGB, the gamma_RGB is a silly piecewise operation where you add 0.055, divide by 1.055, apply gamma 2.4 above some threshold, a linear slope below, say one Ave Maria and two Pater Nosters…

Notice that RGB is not an output space here, just a working space. The module outputs Lab, like everything in the pixelpipe between the input and output color profiles. There is no ICC profile involved here; it’s just a matrix transformation.

That’s the million dollar question. I have not found the part where this is coded in darktable, and I very much would like to check, and add XYZ readings as well.

32-bit IEEE floats are usually used between 0 and 1 because they lose precision as you move away from 0, and working in normalized spaces triggers interesting linear-algebra properties. What this 1 means is up to you… If your input was 8-bit integer, that 1 means 255, so you divide the input by 255 to convert to float. If the input was 16-bit integer, that 1 means 65535. It’s just a rescaling, really.
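As a sketch of that rescaling (hypothetical helper names, assuming plain unsigned integer input):

```c
#include <stdint.h>

/* Converting integer pixel data to normalized float: the white point of
 * an n-bit unsigned integer is 2^n - 1, so 8-bit divides by 255 and
 * 16-bit divides by 65535; "1.0" then always means "white". */
static float u8_to_float(uint8_t v)   { return (float)v / 255.0f; }
static float u16_to_float(uint16_t v) { return (float)v / 65535.0f; }
```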

don’t look at these RGB values, you don’t know which RGB space they belong to (I have read that it’s the display space, but given the amount of *** I have found in dt’s code, I won’t swear to it until I have checked myself).

I have had the confirmation today that no arbitrary gamma is performed in dt before displaying. So the 75 Lab value seems more or less accurate.

You will never be able to map the grey, white and black points to their theoretical values at once with the log. That’s mathematically not possible. If you want that, you have to work in a linear space and apply the log at the very end of the pipe (I’m working on a new module to do so). This log correction enables a full logarithmic pipe, which is similar to the basecurve but with fewer artifacts. That’s good when you are more concerned with the dynamic range than with the accuracy of your profile/LUT.

Agreed.

That’s the obvious one. I bet that the color picker readings of L=21 for black and L=96 for white are not accurate either for unbreak. That probably explains why the black setting in unbreak is not stable.

And the question is still there:
how do we introduce a LUT profile in that logarithmic pipe?
But I don’t see that at the very end of the process; rather in your step 1 …

BTW: Sadly troy_s has withdrawn his posts … :frowning:

Hi all!

troy_s has asked to have his account removed. We have anonymized his account to scrub any reference to him and removed his information from the system. He has withdrawn his posts here himself, and we are currently discussing how best to proceed (if we should revert his posts to what they were originally to not disrupt the flow of conversation and information here).

Having introduced the log profile that early in the pipe was not the most brilliant idea I have had, I see that now. My bad. It’s still better than the basecurve, which sits just before it.

The LUT should be used before the log encoding, in the linear part, for linear algebra reasons. So that would be possible soon, in another module.

The problem with the black setting is unrelated. It’s, again, pure maths. If you want to map the black to zero, then the black setting is independent from the dynamic range and you can set it at once. If you want to have it at a certain value (here, 18), the black value appears in both the numerator and the denominator, so you can’t solve for it analytically; you have to numerically (or manually) optimize it until convergence.

Yes, one of the conundra about the recent scene-referred discourse I’m trying to wrap my head around. If the linear energy relationship of the image data is to be preserved until it’s time for the display transform, I don’t understand the log outputs of video cameras. If that’s where video post-processing starts, that runs at odds with manipulating the original energy relationships, I would think.

I think the reason that log is in vogue is because it is a good compromise in terms of hardware, encoding, storage and visualization. Editing in scene-referred linear and a robust colour management paradigm is obviously the best way to go but for practical, cost and time related reasons log is a good in-between. That said, hardware with log profiles is still in the realm of the pros; e.g., I would not be able to afford the gear or workflow.

This is something I wanted to ask “anonymous” about. Sometimes it seems he meant that one should never, ever edit non-linearly-encoded RGB, and never perform any edits other than those that preserve scene-referred ratios, or at least allow them to be recovered (such as a reversible linear-to-log transform). But this doesn’t seem like a reasonable way to edit when the goal is a final “pretty” image, rather than a “scene-referred” one, that will be displayed as a print or on a monitor.

But sometimes it seemed that all that was meant was that the original scene-referred image file is untouched, and transforms are done on copies of the original image, linked by nodes, entirely reversible at will. But this interpretation also doesn’t seem consistent with the emphasis given to keeping everything scene-referred.

Or maybe what was meant was that the original footage/images are untouched apart from scene-referred edits to bring them in line with other footage/images that will be combined in the final production. All the “make it pretty” edits are then done on the “now consistent across sources” footage/images, again using nodes, and also LUTs and whatever else one might do to produce a final “pretty” version suitable for display. That version might then be further modified on a per-output-device basis.

Color transformations, and most image processing, are linear algebra. Doing linear algebra in non-linear spaces is dangerous. That doesn’t mean it doesn’t work, but it is not dummy-proof.

Is it possible to move unbreak module after the LUT one then ?

No. That would break the compatibility with old edits. It has to be another module.

I’m working with Troy Sobotka on something great with log and filmic curves packed together, sit tight and give me some time :slight_smile:


Anonymous is not here, so you’ll have to take the word of one of his evil minions.
All the processing can be done on linear scene-referred data. We have established that scene-referred data is not ready to be displayed, so the next step is to look at the data through a “view” transform.
Think of the view transform as a virtual camera taking a jpeg of a real-world scene.
The view will apply a curve to accommodate a portion of the dynamic range within the limits of the display, bending it to produce a nice-looking display-referred image that fits your desired output (that can be your screen, or a display-ready format, like a jpeg).
It’s really not that complicated.

In other words: You don’t need to make your image display-ready to edit it. You can leave it scene-referred and watch it through some magic goggles while you edit.
The benefits of staying linear and keeping the scene ratios have been discussed before.


Old edits? … this is not yet released … I would find it better to make it right before the release …

the unbreak profile module still has the gamma mode.

Also, I found the cause of the 50 reading. There is actually an issue in the colorout module, which applies the display gamma/tone curve for you without telling you. That gamma curve is disabled only in softproof mode, and the color picker reads after the gamma curve is applied. When you remove that gamma, you get an actual 50-ish grey out of the log.

Right !

Log has several advantages

  • It lets you recover the scene ratios easily.
  • It lets you cram a wider dynamic range into a low bit depth.
  • It provides an even distribution of data (an equal number of bits per stop).

It’s not so hard to imagine why video/cinema cameras use it. It’s a great format to store scene-referred data in a convenient way, without needing huge floating-point files.
You can pull scene-referred linear data for manipulation (compositing/VFX, etc.), or map to different displays through LUTs for previewing or delivery.
You can grade on log too, with benefits compared to grading display-ready transfers, where the tone distribution is intended for viewing, not for processing.

This is the one that gets me. I’m trying to figure out a use case for “recover”…