Introducing a new color balance mode in darktable

I know, but as the GUI is currently being modified, I will do the tutorial once all the labels/settings/modules are definitive.


Here is my try.

[screenshot]

[screenshot]

The middle grey color picker value is far from 50 and there is no way to get it.
What is wrong there ?..

You need to use the gamma factor in color balance (with slope offset power) and set it to roughly -0.62. Could you send me the picture?

The LUT obtained with this method seems to work well (maybe by chance, to be double-checked).

Not a manual, but here is a straightforward way to use Aurélien's tools that I'm happy with:

  • color balance (when possible)
  • exposure. Center the histogram (set as linear) visually, keeping room on both sides to avoid clipping (the automatic clipping sometimes fails).
  • unbreak. Same idea. Keeping grey at 18%, play with black point and dynamic range against the histogram, still keeping room on both sides to avoid clipping. Relying only on the color picker may be misleading: you can have a max Lab L of 96 and still have saturated zones in the image. The histogram is more reliable (and faster).
  • lookup table with camera profile preset (optional).
  • color balance (slope, offset, power). Center the histogram with slope, widen it with contrast until just before clipping, and balance the image with power. Then tweak with contrast and SOP if necessary.

No unexpected or weird effects as far as I can see. A beautiful (well, … it depends! :wink:) and natural-looking image.
Bravo!!!


Unbreak works on the log curve for data sanity and not for visual accuracy.
The grey is a real one and the internal data corresponds to a real grey, so the color picker should have a mode (Lab Log?) showing the actual data (50) instead of the displayed data (75).
Does that make sense ?

No. Displayed data is the actual data, provided darktable does not apply a silly gamma somewhere (which I suspect it does, but there is still no answer on the mailing list).

Log is a way to salvage the dynamic range and squeeze it into what the display can show. What we need is a way, later in the pipe, to remap the grey and manage the contrast at the extreme values. I'm currently working on that.

:thinking: … On the colorchecker view, the middle grey patch really is displayed lighter than a real middle grey (just compare with the dt background middle grey). The color picker is not too far off saying 75 as the displayed color.
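
For what it's worth, a rough sanity check (assuming the picker reports CIE L* of the display-referred value, and that the log mapping pushes scene middle grey up to about 0.5 display-linear) lands close to that reading:

    L^*(Y) = 116\,Y^{1/3} - 16 \qquad (Y > (6/29)^3)
    L^*(0.18) \approx 49.5, \qquad L^*(0.50) \approx 76.1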

It's not always clear what the float space should be. As a former 8-bitter, it took me a bit of digging to find out that the imaging convention is 0.0-1.0, because I think the practitioners just assumed we all would know it. G'MIC confused things, as it does floats, but in the 0.0-255.0 range.

8-bit / 255 = normalized float, for this discussion, so 75 / 255 = 0.29, or thereabouts…


I think that's only for the benefit of inputs/outputs that require it - G'MIC enforces no particular assumptions about data in general, although some specific commands/filters might do.


Right now, rawproc internal data is all 0.0-1.0, but my curve tool is still 0-255, so I convert a curve value to 0.0-1.0 before applying it.

To the main discussion, it really isn't always clear what data is being worked on vs. what is being displayed. In that regard, I'm probably going to make 0.0-1.0 the configurable default for tools using the curve core, in a subsequent version of rawproc.
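
Not rawproc's actual code, just a sketch of that conversion (an 8-bit-domain curve applied to 0.0-1.0 data):

    /* Evaluate a 256-entry, 0-255 domain curve LUT on normalized data:
     * rescale the input to an index, look up, rescale the output back. */
    static float apply_curve_normalized(float x, const unsigned char curve256[256])
    {
      int i = (int)(x * 255.0f + 0.5f); /* 0.0-1.0 -> 0-255 index */
      if(i < 0) i = 0;
      if(i > 255) i = 255;
      return curve256[i] / 255.0f;      /* 0-255 value -> 0.0-1.0 */
    }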

Sorry Troy, on the UI I don't see any normalized value.
With only white balance and exposure, tweaking exposure, the color picker Lab values match the colorchecker Lab.


RGB values:
[screenshot]

Adding the unbreak module (log), I can tweak it to get the same black and white values, but the middle grey is at 75 instead of 50. The displayed patch is really lighter than the previous one.


RGB values;
[screenshot]

Then Aurélien suggested applying a gamma factor of -0.62. Having done that, I can tweak the unbreak module again to get this:


RGB values:
[screenshot]

@anon41087856 - I'm looking at your "color balance" module and also at the colorbalance.c code in darktable from git. I can read and write C code, but only if it's relatively straightforward and doesn't have too much of the sort of code that's "more efficient but less transparent". So if you don't mind, I'd like to ask some totally dumb questions:

  1. Is this the code loop that's used to calculate saturation?

       // transform the pixel to sRGB:
       // Lab -> XYZ
       float XYZ[3] = { 0.0f };
       dt_Lab_to_XYZ(in, XYZ);
    
       // XYZ -> sRGB
       float rgb[3] = { 0.0f };
       dt_XYZ_to_prophotorgb(XYZ, rgb);
    
       const float luma = XYZ[1]; // the Y channel is the relative luminance
    
       // do the calculation in RGB space
       for(int c = 0; c < 3; c++)
       {
         // main saturation
         if (run_saturation) rgb[c] = luma + d->saturation * (rgb[c] - luma);
    
  2. Despite the comments, the code does use linear gamma ProPhotoRGB in the loop? This code/similar code is repeated several times, but sometimes using linear gamma sRGB instead of ProPhotoRGB?

  3. In the user interface there are several options, including "lift, gamma, gain (ProPhotoRGB)" and "lift, gamma, gain (sRGB)". What's the actual difference between these two UI options? The provided sliders are not exactly the same for these two options, even though they sort of sound like the only difference should be whether the image is converted to ProPhotoRGB or sRGB before calculating "luma + d->saturation * (rgb[c] - luma)".

  4. Does the user's chosen output space affect the color balance calculations? I'm guessing not? In case it's not clear, by "output space" I mean the ICC profile that the user selects to convert the image to before exporting the image to disk - I don't mean the user's monitor profile.

  5. Do you (or anyone reading this thread) know what color space darktable uses for the color picker in the left-hand column? It doesn't seem to update very easily, so I'm having trouble even checking whether changing the output color space has an effect, but it doesn't seem to.

What prompted these questions is that I set up a spreadsheet to calculate some RGB values after using the saturation slider, and immediately ran into things like: which color space for Y? Are the RGB values linear or not in the calculation of saturation? What about in the color picker? And what color space is the color picker even using?
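
For reference, here is a standalone version of just that saturation step from the loop quoted above - the sample numbers are made up, and the luma here is an illustrative value rather than one computed through the actual Lab -> XYZ -> ProPhoto RGB path:

    #include <stdio.h>
    
    int main(void)
    {
      float rgb[3] = { 0.40f, 0.25f, 0.10f }; /* sample linear RGB pixel */
      const float luma = 0.27f;               /* stand-in for XYZ[1], the relative luminance */
      const float saturation = 1.5f;          /* slider value, 1.0 = no change */
    
      for(int c = 0; c < 3; c++)
        rgb[c] = luma + saturation * (rgb[c] - luma); /* push each channel away from luma */
    
      printf("%f %f %f\n", rgb[0], rgb[1], rgb[2]);
      return 0;
    }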

I think Lab uses a 0-100 range; it's not really 8 bit, but probably 0.0-100.0 float in darktable.

When going from 8 bit to float we have to divide by 255 :thinking:

yes

yes. The sRGB gamma-corrected version is the historic lift/gamma/gain one, which I have to keep untouched for compatibility with old edits.

the sRGB LGG is the historic one, the ProPhotoRGB LGG is mine. The sliders that disappear with the old sRGB variant are contrast and global saturation, because it's a bad idea to mess with contrast in sRGB.

Yes, but not as you may think. Since the gamma conversion in ProPhoto RGB is a straightforward x^gamma_RGB, I have been able to merge the user input gamma and the RGB gamma (because (x^gamma_user)^gamma_RGB = x^(gamma_user × gamma_RGB)), so I spare one power evaluation and speed up the computation by a factor of 2. In sRGB, the gamma_RGB is a silly operation where you add 0.055, divide by 1.055, apply a gamma of 2.4 above some threshold, 2.2 below, say one Ave Maria and two Pater Nosters…
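
A rough sketch of the difference, assuming standard transfer functions (the parameter names are illustrative; this is not copied from darktable's code):

    #include <math.h>
    
    /* ProPhoto-style path: the user gamma and the RGB companding gamma are both
     * plain powers, so they collapse into a single powf() call:
     * (x^g_user)^g_rgb == x^(g_user * g_rgb). */
    static float gamma_merged(float x, float g_user, float g_rgb)
    {
      return powf(x, g_user * g_rgb); /* one power instead of two */
    }
    
    /* sRGB decode (gamma-corrected -> linear) is piecewise, so nothing merges. */
    static float srgb_to_linear(float x)
    {
      return (x <= 0.04045f) ? x / 12.92f
                             : powf((x + 0.055f) / 1.055f, 2.4f);
    }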

Notice that RGB is not an output space here, just a working space. The module outputs Lab, like everything else in the pixelpipe between the input and output color profiles. There is no ICC profile involved here, it's just a matrix transformation.

That's the million-dollar question. I have not found where this is coded in darktable, and I would very much like to check, and add XYZ readings as well.

IEEE 32-bit floats are usually used between 0 and 1 because they get imprecise as you move away from 0, and working in normalized spaces gives you useful linear algebra properties. What this 1 means is up to you… If your input was an 8-bit integer, that 1 means 255, so you divide the input by 255 to convert to float. If the input was a 16-bit integer, that 1 means 65535. It's just a rescaling, really.
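
A minimal sketch of that rescaling (the names are illustrative):

    /* The "1.0" is just whatever the integer white point was. */
    static float from_uint8(unsigned char v)   { return v / 255.0f;   } /* 8-bit white  -> 1.0 */
    static float from_uint16(unsigned short v) { return v / 65535.0f; } /* 16-bit white -> 1.0 */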

don't look at these RGB values, you don't know which RGB space they belong to (I have read that it's the display space, but given the amount of *** I have found in dt's code, I won't swear to it until I have checked myself).

I got confirmation today that no arbitrary gamma is applied in dt before displaying. So the 75 Lab value seems more or less accurate.

You will never be able to map the grey, white and black to their theoretical values at once with the log. That's mathematically not possible. If you want that, you have to work in a linear space and apply the log at the very end of the pipe (I'm working on a new module to do so). This log correction enables a full logarithmic pipe, which is similar to the basecurve but with fewer artifacts. That's good when you are more concerned with the dynamic range than with the accuracy of your profile/LUT.
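
To make the constraint counting explicit, here is a sketch with a generic log mapping (not necessarily unbreak's exact formula):

    f(x) = \frac{\log_2(x/g) - B}{D}

The two free parameters B (black offset in EV) and D (dynamic range in EV) are entirely consumed by the conditions f(x_black) = 0 and f(x_white) = 1; the grey then lands at f(g) = -B/D, whatever that turns out to be, so it cannot be pinned to a third target at the same time.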

Agreed.

That's the obvious one. I bet the color picker L=21 black and L=96 white are not accurate either for unbreak. That probably explains why the black setting in unbreak is not stable.

And the question is still there.
How to introduce a LUT profile in that logarithmic pipe?
I don't see that at the very end of the process, though, but yes in your step 1 …

BTW: Sadly troy_s has withdrawn his posts … :frowning:

Hi all!

troy_s has asked to have his account removed. We have anonymized his account to scrub any reference to him and removed his information from the system. He has withdrawn his posts here himself, and we are currently discussing how best to proceed (whether we should restore his posts to what they were originally, so as not to disrupt the flow of conversation and information here).

Having introduced the log profile that early in the pipe is not the most brilliant idea I have had, I see that now. My bad. It's still better than the basecurve, which sits just before it.

The LUT should be used before the log encoding, in the linear part, for linear algebra reasons. So that would be possible soon, in another module.

The problem with the black setting is unrelated. It's, again, pure maths. If you want to map the black to zero, then the black setting is independent of the dynamic range and you can set it at once. If you want to have it at a certain value (here, 18), the black value appears in both the numerator and the denominator, so you can't solve for it in closed form; you have to numerically (or manually) optimize it until convergence.
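
A sketch of what "optimize until convergence" can look like: mapped_black() below is a hypothetical stand-in for whatever value the pipeline actually produces for a given black setting, only there so the example compiles.

    /* Hypothetical stand-in for the pipeline: what the black patch maps to
     * for a given black setting b (assumed monotonically increasing in b). */
    static float mapped_black(float b) { return 0.10f + 0.80f * b; }
    
    /* Bisect the black setting until the mapped value hits the target. */
    static float solve_black(float lo, float hi, float target)
    {
      for(int i = 0; i < 50; i++)
      {
        const float mid = 0.5f * (lo + hi);
        if(mapped_black(mid) < target) lo = mid;
        else                           hi = mid;
      }
      return 0.5f * (lo + hi);
    }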

Yes, one of the conundrums about the recent scene-referred discourse I'm trying to wrap my head around. If the linear energy relationships of the image data are to be preserved until the display transform, I don't understand the log outputs of video cameras. If that's where video post-processing starts, that runs counter to manipulating the original energy relationships, I would think.

I think the reason that log is in vogue is that it is a good compromise in terms of hardware, encoding, storage and visualization. Editing in scene-referred linear with a robust colour management paradigm is obviously the best way to go, but for practical, cost- and time-related reasons, log is a good in-between. That said, hardware with log profiles is still in the realm of the pros; e.g., I would not be able to afford the gear or workflow.