Processing skies with clipped channels in Darktable filmic/sigmoid

Keep in mind that current tools check for gamut validity at each step, and remap to the closest color in working space at constant hue.

If you edit in a way that makes blue clip here, the old tools will just happily clip blue and you may get a dramatic new color that has no relationship with the original intent (in hue and chroma, and perhaps even lightness).
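To illustrate the two behaviours, here is a minimal numpy sketch (my own illustration, not dt code: it uses Rec 709 luminance weights, and blends toward equal-luminance gray as a stand-in for the constant-hue remap):

```python
import numpy as np

W = np.array([0.2126, 0.7152, 0.0722])   # Rec 709 luminance weights

def naive_clip(rgb):
    """Old behaviour: clip each channel independently (shifts the hue)."""
    return np.clip(rgb, 0.0, 1.0)

def constant_hue_remap(rgb):
    """Blend toward the gray of equal luminance until all channels fit
    in [0, 1] (assumes the luminance itself is <= 1)."""
    y = W @ rgb
    gray = np.full(3, y)
    lo, hi = 0.0, 1.0
    for _ in range(40):  # bisect the largest in-gamut blend factor
        t = 0.5 * (lo + hi)
        c = gray + t * (rgb - gray)
        lo, hi = (t, hi) if ((c >= 0).all() and (c <= 1).all()) else (lo, t)
    return gray + lo * (rgb - gray)

sky = np.array([0.10, 0.40, 1.30])       # blue pushed past the gamut ceiling
print(naive_clip(sky))                   # [0.1 0.4 1.0] -> channel balance changed
print(constant_hue_remap(sky))           # ~[0.20 0.40 1.00] -> chroma reduced instead
```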

Since the current tools prevent you from doing that, they also prevent a lot of user exaggeration.

Besides, we have research on color memory dating as far back as 1951 (MacAdam) showing that the preferred color rendition is never the accurate color. For skin tones, the preferred rendition is less saturated and less red than the original color. So much for "natural".

So I really don't know what to say, and I'm tempted to blame expectations over results here. Maybe just use a polarizing filter.

I took the edit from the OP in the first post. I didn't pull exposure in dt over the JPEG output; I just measured the color of the JPEG to color-match dt's edit.

If a color-match based on exposure, WB and input profile (all linear tools) cannot replicate the color rendition, there is nothing more needed to conclude that what's going on is non-linear. Aka not exposure-invariant. Aka prone to blow up in your face at some point.
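To illustrate the invariance argument, a toy numpy sketch (the WB gains and camera matrix below are made up for illustration, not real dt values): a purely linear chain commutes with exposure; add a tone curve and it no longer does.

```python
import numpy as np

raw = np.array([0.12, 0.30, 0.55])        # one linear camera triplet

wb   = np.diag([2.0, 1.0, 1.5])           # white balance: per-channel gains
prof = np.array([[0.80, 0.15, 0.05],      # made-up camera -> working matrix
                 [0.10, 0.85, 0.05],
                 [0.03, 0.10, 0.87]])

def linear_pipe(x, ev=0.0):
    """Exposure, WB and matrix profile only: all linear operations."""
    return (prof @ (wb @ x)) * 2.0**ev

def tone_curved_pipe(x, ev=0.0):
    """Same chain followed by a toy Reinhard-like tone curve."""
    y = linear_pipe(x, ev)
    return y / (1.0 + y)

# +2 EV then dividing by 4 is a no-op for the linear chain: exposure-invariant
print(np.allclose(linear_pipe(raw, ev=2.0) / 4.0, linear_pipe(raw)))              # True

# ...but not once a non-linear curve is in the chain
print(np.allclose(tone_curved_pipe(raw, ev=2.0) / 4.0, tone_curved_pipe(raw)))    # False
```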

Those are darktable edits. All 3 of them. How does this relate to RawTherapee being fishy?

Again: what did you do to compare darktable and RawTherapee?

My bad, on first read I understood that the 3rd edit was RT.

So my conclusion is even more concerning: if it's dt + sigmoid, aka same WB and input profile, what it does to color is frightening.

No problem. Please do edit your post to reflect this oversight.

I think I can (partially) follow your explanation (my knowledge of how DT works is not so in-depth).

I had thought that, it being an unbounded workflow, it would let you do whatever you like, and only apply tone mapping and color preservation in the final conversion step from scene to screen.

But as you point out, there might be problems with colors generated out of gamut.

Anyway, here we are not generating lighter colors that could be difficult to map later.

We only need a tool for expanding the tones in the last 1 or 2 EV of the highlights down to the midtones while keeping the white point.

Changing the luminosity won't ever change the color or turn it into an imaginary color.

That is the kind of edit I usually applied to skies (blending only luminosity, or using a curve only on the grays).
If you want more color later, you have tools to do that and make it more saturated.
Then you use a selection and a luminosity mask to do the blending, and that is it.

I can use the exposure module to lower the exposure a couple of EV, but that changes the white point, so it is not an expansion of the highlights, it is a darkening of the light.

The color calibration trick of darkening blues and playing with the red and blue channels works, but not always so well, and it changes colors in a way that can create out-of-gamut colors.

Please don't be rude.
I am sure nobody here is trying to cheat anybody; just different opinions, maybe even misinterpretations, but no cheating.

Aurelien, and the work he has developed, does not deserve unpleasantness.

Not trying to be rude or unpleasant here, and I certainly did not accuse anybody of cheating. I do, however, like to know how a certain conclusion is reached.

Maybe it's my poor English. Here "supposed to be a scientist" reads as unfriendly irony.

There is more to it. Saturated blue doesn't exist in gamut at high lightness. See Engineering | The sRGB book of color

Lightness is on the y axis, and this is the sRGB gamut at the hue slice corresponding to the primary Rec 709 blue, split into uniform steps in JzAzBz. The lack of highly saturated blue inside the valid sRGB gamut is what tells me your usual sky handling might well be only surfing the gamut clipping artifacts in a way you like. Which is now prevented.
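For anyone who wants to reproduce the shape of that slice numerically, here is a small sketch that scans the maximum in-gamut chroma per lightness level. It uses Oklab instead of JzAzBz purely for compactness (the matrices are Björn Ottosson's published ones); the trend is the same: the available chroma for blue collapses as lightness rises.

```python
import numpy as np

def oklab_to_linear_srgb(L, a, b):
    """Bjorn Ottosson's published Oklab -> linear sRGB transform."""
    l_ = L + 0.3963377774*a + 0.2158037573*b
    m_ = L - 0.1055613458*a - 0.0638541728*b
    s_ = L - 0.0894841775*a - 1.2914855480*b
    l, m, s = l_**3, m_**3, s_**3
    return np.array([
         4.0767416621*l - 3.3077115913*m + 0.2309699292*s,
        -1.2684380046*l + 2.6097574011*m - 0.3413193965*s,
        -0.0041960863*l - 0.7034186147*m + 1.7076147010*s,
    ])

def max_chroma(L, hue, steps=400):
    """Largest chroma at (L, hue) whose linear sRGB stays inside [0, 1]."""
    for i in range(steps, 0, -1):
        C = 0.4 * i / steps
        rgb = oklab_to_linear_srgb(L, C*np.cos(hue), C*np.sin(hue))
        if (rgb >= 0.0).all() and (rgb <= 1.0).all():
            return C
    return 0.0

blue_hue = np.arctan2(-0.3115, -0.0322)   # Oklab hue angle of the sRGB blue primary
for L in (0.30, 0.45, 0.60, 0.75, 0.85, 0.90):
    print(f"L = {L:.2f}  max in-gamut chroma = {max_chroma(L, blue_hue):.3f}")
```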

What does it look like if you set your display profile to Rec 2020 too… remember the histogram data goes through the display profile… there were several issues around this on git… the image will look wrong in the preview, but unless this has been fixed, the only way to see true clipping in DT is to set the display profile also to Rec 2020… I can dig up the explanations or maybe you will recall them…

Edit: I don't think this will affect the raw clipping indicator, but it might affect the histogram display that a user might try to evaluate in conjunction?

Okay, I went back and read your post; you tried this… sorry, I need to read better. I was going to withdraw the post, and I can if you think I should.

It may be the way DT is undoing things, as explained here, using the raw black and white point, so when you change it you are changing that output.

This one

Directed me to this one… in which you were actually participating…
I need to go back and read some of these…

The dynamic range on this image doesn't look challenging at all. Here is Photoflow with minimal processing (just a -0.5 stop exposure adjustment). No curves, no tone mapping. I changed the highlight recovery mode to 'blend'. The only part of the sky that appears to have any clipping is just below the wires - the rest looks the same in 'blend' or 'none' mode.


DSCF0175.jpg.pfi (18.4 KB)

Well, yes you are right, of course.

But as I tried to explain, to dramatize the clouds or displace the highlights to the midtones, you don't need to change color.

You can use a curve that only changes luminosity.
If you only change luminosity, hue should be preserved.
If you darken them so much that they go almost to black, saturation won't be preserved (indeed, if it is so bright, it won't have much saturation, will it?).

Then, to introduce more blue if you want, you can easily use other tools to boost vibrance.

But another question arises in my mind reading your comments.
Should DT prevent saturated blues with high luminosity in the early steps?

I can understand it as a way to prevent strange colors, but it can alter the real physical distribution of light intensities, which is what DT is supposed to preserve in the scene-referred phase of editing.

I mean, the ratio of blue/red/green in the incoming light is the same regardless of the exposure you set in your camera.

If there is a high component of blue and a bit of red and green, and you expose that color at middle gray, it will be a perfectly valid saturated blue.
If you expose it 2 EV more, you multiply all channels by 4, with the same proportions and less noise.
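A trivial sketch of that point (values made up):

```python
import numpy as np

patch   = np.array([0.05, 0.10, 0.80])   # mostly blue, a bit of red and green
plus2ev = patch * 2.0**2                 # expose 2 EV more: x4 on every channel

# the channel proportions (the "color") are unchanged by exposure
print(patch   / patch.sum())             # [0.0526 0.1053 0.8421]
print(plus2ev / plus2ev.sum())           # identical proportions
```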

But that can produce a saturated blue that is not visible to the human eye, and the color model would correct it to a less saturated one.
So you are altering the color distribution, and a user who then reduces exposure by 2 EV will get a desaturated blue.

That kind of manipulation, adapting to the color space and to the human interpretation of colors, should not happen in the scene-referred phase.
It should be left to the end of the chain.

If you do (as I understood from your comment), you are altering colors and breaking reciprocity.

You should get the same color and luminosity from a given exposure combination in camera as from using 2 EV more exposure (with no clipping) and then correcting by -2 EV in software.

If you are correcting colors at each step of the scene-referred process and forcing them into gamut, you are breaking reciprocity.

If we had a perfect display capable of reproducing any light and intensity, it would emit the same light as the original (a very saturated blue with high luminosity), and a human would see the same thing as without correcting that color: a bright light close to white with some blue hue. There would be no need for corrections, compensations, or adaptation to the colors humans can see.

So shouldn't color adaptation and gamut correction be kept for the final steps of the conversion?

Your result is too dark, which falls back to what I said about tone-mapping a couple of days ago: it's easy to avoid having to compress highlights if you lower the middle grey.

Yes, you need to. Lightness is as much a part of color as chroma and hue. If you change lightness, it has consequences for the max chroma you can fit within the gamut boundary. See: in sRGB, max chroma in blue is reached around 30% lightness. Your sky here registers at 85%. See on the graph: you have 3 units of chroma available at 90% and 5 at 80%. It means you can't have more blue in this sky while staying in gamut; you need to darken it to at least 60%.

Yes. It makes no sense to keep invalid colors; they will only get worse in the later steps of editing. Also, we don't have models to fit imaginary colors back inside the valid range; all we have are nasty hacks extrapolating actual color models, because color models don't aim at modelling non-colors. So the earlier you fix colors, the easier it is to remap them.

Also, non-colors will produce negative RGB and negative LMS values that will make a lot of color algos fail in dt, so they clip negative values as a safety measure, but that does not preserve color at all. So negatives should be handled where they are created, because that's where we have a chance to guess what happened and get a reading of the original hue before things go crazy.
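A small sketch of why negatives poison later math (numbers made up; the fractional power stands in for any gamma or perceptual transform):

```python
import numpy as np

c = np.array([-0.05, 0.02, 0.40])    # "non-color": negative red, e.g. after a CAT

# non-integer powers (gammas, perceptual transforms) produce NaN on negatives
with np.errstate(invalid="ignore"):
    print(c ** (1.0 / 2.4))          # [nan 0.196 0.683] -> downstream math dies

# the safety clip keeps the math alive but silently alters the color:
print(np.clip(c, 0.0, None))         # [0. 0.02 0.4] -> original hue reading is gone
```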

So, any sane color pipeline checks gamut validity at every color-massaging step (gamut here being that of the working space).

Remember we are talking here about mapping to display; whatever happens in the pipe is only theoretical until it is mapped to display. You never see the pipeline, you see its projection to display, which is the goal and purpose of filmic.

What you are calling "non-colors" are colors that humans cannot see, not distributions of energy over red/green/blue that do not exist.

If you expose your image x EV more (without clipping anything) and then compensate the exposure by -x EV, you should get the same result (noise aside).

But if DT is changing that extremely saturated blue (or other color) of high luminosity to produce a color that corresponds to human perception, you won't get the same result.

That would break the relation between the values in scene-referred mode and the energy distribution in the physical world.

I understand that there are technical problems if you don't do that adaptation.

But then scene-referred mode is no longer related to the physical spectrum.

It may be that we have a non-clipped and unbounded pipeline, but colors are clipped and bounded by the color space chosen for processing (which is, by the way, quite a bit narrower than human perception).

Might that be the cause of some of the difficulties we experience?

Mapping to display is a final step that should happen at the end.
No conversion would be needed if the display were capable of reproducing the original brightness and hues, even if humans cannot see them.

As that is not the case, you need to convert, as a best effort, to what the display can show.

I understand that limitations in the processing color space, the color models, and the color conversions we use right now force us to do such "color clipping" checks in order to produce the best result that can be achieved.

But if those color changes are done in the early steps, that might be behind the difficulty of manipulating and darkening highlights to get the natural colors we saw when we looked at the sky.

The scene-referred thing is a workflow working on an unbounded signal. It's still not a spectral pipeline.

Unfortunately, basic blocks like input profiling match the camera tri-stimulus against the normalized observer cone response (CIE XYZ 1931). Then comes the CAT. Non-colors are numerically unstable as soon as XYZ or LMS spaces are used, and XYZ is a connection space that is used basically every time a color profile is applied.

So, no, scene-referred is not entirely physical. It can't be if we want to manipulate concepts like chroma, hue and saturation that are not defined physically. But it's the best we have.

Also, notice that gamut-mapping is done against the working space (Rec2020 if you don't change it), which is about as large as the space of visible colors and several times larger than the average display.

OK, now I understand the problem much better, thank you.
You are right: tristimulus loses most of the spectral info, and it might be impossible to get a representation of physical light that stays closely related to the numerical values.

But now.

Let me ask something more.
Would it be possible to create a module that keeps the white point and lets you take the top one or two EV of highlight values and expand them down to the grays, over the number of EV you want, with a shape control (not linear, of course), but in a way that the original RGB relations are kept as accurately as possible?

Something like r/val, g/val, b/val, where val is a factor from 0 to x (x being 2^EV, with EV the number of darkening steps) and a function of the luminosity of the RGB combination?

Would that create OOG colors? If so, they would of course need to be mapped back into gamut.

What is the best way of doing that?
Something like the exposure module, but with no white point translation.
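Something like this toy sketch, perhaps (an illustration only, not an existing dt module; the Rec 709 luminance weights and the curve shape are assumptions):

```python
import numpy as np

W = np.array([0.2126, 0.7152, 0.0722])     # Rec 709 luminance weights

def expand_highlights(rgb, ev=2.0, gamma=2.0, white=1.0):
    """Toy version of the proposed module: remap only the last `ev` stops
    below `white`, keep the white point and everything below the knee,
    and rescale R, G, B by a single factor so channel ratios survive.
    Assumes the input luminance is <= `white`."""
    knee = white / 2.0**ev                 # start of the affected range
    y = float(W @ rgb)
    if y <= knee:
        return rgb                         # midtones and shadows untouched
    t = (y - knee) / (white - knee)        # 0 at the knee, 1 at white
    y_out = knee + (white - knee) * t**gamma   # convex: pulls highlights down
    return rgb * (y_out / y)               # uniform scale = same proportions

sky = np.array([0.55, 0.65, 0.90])         # bright, slightly blue sky
out = expand_highlights(sky)
print(out)                                 # darker sky, white still maps to white
print(out / out.sum(), sky / sky.sum())    # identical channel proportions
```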

There is another image for testing, a backlit scene with almost no clipping (just a bit of green, in a zone with almost neutral color), that will be more interesting for seeing results than mine.

@anon41087856 @ariznaf Just want to pause and thank you both. I am enjoying the questions and the replies and the interaction, and, for the most part, an exchange of information with no venom.

Thank you both for your time, curiosity and information exchange… those of us on the sidelines, or at least me, are grateful for both the content and the exchange…

Thanks to you, Todd.

I am learning a lot from all of you.
I have no formal studies in image processing or color science.
But I have a lot of curiosity about it.

Sometimes I might seem a bit obstinate, I know, but I am just moved by curiosity and trying to understand.

I often don't fully understand the deep and technical answers I receive, but from each of them I learn a bit and get a bit closer to understanding how all this works.

I don't understand what that module would do.