Filmic changes bright yellow to pink


(Source: Wikipedia)

This is a schematic graph of a typical gamut volume. Everything outside the envelope of this volume is not representable. In the Lab color model, the chroma is expressed as \sqrt{a^2 + b^2} and peaks at a lightness L around 70%. You can see that, as the lightness tends toward 100% (its maximum), the boundary chroma decreases, because the two are linked (in complex ways).

The chroma represents the colorfulness, hence our perception of color richness. So it is simply not possible, even for a theoretical pixelpipe, to output colorful highlights. If you want rich colors, you need to lower the lightness. If you want highlights, you need to desaturate them.
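To make this concrete, here is a minimal numerical sketch (my own illustration, not from the post) that samples the sRGB cube, converts to Lab, and prints the maximum chroma available per lightness band. The conversion constants are standard sRGB/D65 values; the gamut here is sRGB rather than the full visible gamut in the graph above, but the same envelope behavior appears: boundary chroma falls toward zero as L approaches 0% and 100%.

```python
import numpy as np

def srgb_to_linear(c):
    """Undo the sRGB transfer curve (standard piecewise definition)."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

# Standard linear sRGB (D65) -> XYZ matrix
M = np.array([[0.4124564, 0.3575761, 0.1804375],
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])

def xyz_to_lab(xyz, white=(0.95047, 1.0, 1.08883)):  # D65 white point
    t = xyz / np.array(white)
    d = 6.0 / 29.0
    f = np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return L, a, b

# Sample the whole sRGB cube and measure chroma = sqrt(a^2 + b^2)
g = np.linspace(0, 1, 48)
rgb = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)
L, a, b = xyz_to_lab(srgb_to_linear(rgb) @ M.T)
C = np.hypot(a, b)

for lo in range(0, 100, 10):
    band = (L >= lo) & (L < lo + 10)
    print(f"L in [{lo:2d}, {lo + 10:3d})  max chroma ~ {C[band].max():5.1f}")
```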

Whatever saturated highlight you see in your software is just a clipped channel handled by a CMS which performs damage control on the crap you feed it.

3 Likes

No, it isn’t a no-op: filmic uses an S-shaped curve, so the result differs from the original HDR image even in the 0–0.18 range.

Sorry, there wasn’t a filmic output for the Samsung image. I used a tone-mapping operator that is a straight line from 0 to 0.18 and then rolls off the highlights, applied using the luminance ratio.
This may sound bad, but it has two great advantages.
First, it’s the only way to have PERFECT color accuracy where the curve is a straight line, so the image will look identical on an HDR and an SDR monitor.

The second property is that it is possible to obtain an accurate map of where the highlights are compressed and by how much.

However, this should really be called display mapping; it is used in DisplayCAL, madVR and some video editors that can handle HDR video…

https://www.slideshare.net/DICEStudio/high-dynamic-range-color-grading-and-display-in-frostbite

https://www.itu.int/pub/R-REP-BT.2390
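For illustration, here is a rough sketch of that kind of operator (my reconstruction under stated assumptions, not the exact code used above): identity up to mid-grey 0.18, a Reinhard-style shoulder above it (BT.2390 uses a Hermite knee instead), applied as a luminance ratio so RGB proportions, and thus hue, are untouched in the linear segment.

```python
import numpy as np

KNEE = 0.18   # scene-referred mid-grey: identity below this point
PEAK = 1.0    # display peak that the shoulder converges to

def rolloff(y):
    """Straight line up to KNEE, then an asymptotic highlight roll-off."""
    y = np.asarray(y, dtype=float)
    over = y - KNEE
    span = PEAK - KNEE
    # Reinhard-style shoulder: KNEE + over / (1 + over/span) never exceeds PEAK
    return np.where(y <= KNEE, y, KNEE + over / (1.0 + over / span))

def tonemap(rgb):
    """Apply the curve through the luminance ratio to preserve chromaticity."""
    # Rec.709 luminance weights (an assumption; use your working space's own)
    Y = rgb @ np.array([0.2126, 0.7152, 0.0722])
    ratio = rolloff(Y) / np.maximum(Y, 1e-8)
    return rgb * ratio[..., None]

# 0.18 passes through unchanged; a 4x over-bright highlight is compressed
print(rolloff(np.array([0.05, 0.18, 0.72, 4.0])))
```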

Here’s the Samsung video for that sample frame:
https://www.youtube.com/watch?v=qaVjnWc-DQY&loop=0

Using this image as a test for color accuracy:
before


after

2 Likes

You can achieve that by setting the grey value of filmic to 45% (= 0.18^{1/2.2}), so grey will be mapped from 45% to 45%. The linear part is the latitude, so you can manage how wide you want it, then slide it to the right/left using the shadows/highlights balance. The contrast is the slope of the linear part. You will still have a small toe near black (though you can tweak it until it is unnoticeable), but it’s pretty much what you describe (except that the exposure should be raised beforehand).
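For reference, that 45% figure is just mid-grey with a 2.2 display gamma undone (a quick derivation, not from the original post):

0.18^{1/2.2} = \exp(\ln(0.18)/2.2) \approx 0.459 \approx 45\%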

Base picture:

Before filmic (+6 EV in exposure module):

After filmic:

3 Likes

The L*C*h°ab (D65) visible gamut. Source: Wikipedia. Also see the animation.

Initially we didn’t talk about out-of-gamut issues, which are a different story and definitely demand some sophisticated math to deal with. Originally the thread was about filmic’s chroma preservation feature, which introduces a hue shift within the gamut. Let’s try to sort this out first.

So what’s the alternative, practically? Besides claiming “I’ll never ever get accurate colors in dt if the dynamic range exceeds 8 EV, period”?

To me this doesn’t sound reasonable, because:

1. I doubt it’s really possible to restore the original color with any subsequent coloring: it’ll never be accurate;
2. it means that luma adjustments would change hues instead of keeping them in place with reasonable accuracy;
3. it brings in a significant amount of extra work: change the WB for processing, then try to restore it with additional means like the color balance module.

All that given that the simple triple-exposure 1EV “native” base curve gives me better results with more accurate hues (attached). This would only be reasonable if we had some “technical” color picker somewhere in the UI that would temporarily fine-tune the WB to make the color transformations more accurate, and then automatically undo the correction later in the pipe. However, I doubt that’s easily implementable. And frankly, it all sounds weird to me: I shoot the scene, then I have to arbitrarily change the WB because “were the light 6500K, the scene would look this way, so I have to pretend the scene’s original colors were different from what they really were and change the WB accordingly, process the image, then try hard to restore the original look”. Am I the only one who doesn’t see any sense in this approach?
I see it a totally different way: my camera has recorded the scene; if its processor mistook the WB, I correct it to match the original scene’s light and colors; then I want to squeeze the dynamic range while preserving the colors, whatever they were. That’s it. There is no “right” or “wrong” color, there is only the real scene’s color under certain circumstances. So if the software is unable to do such transformations, it doesn’t work as expected. Period.

Again, there is no “right” or “wrong” WB. If you shoot a white shirt, you’re saying it must go to a=0, b=0, but in real life that never comes true. Under an overcast sky the shirt becomes blue, orange at sunset, etc. If the software destroys colors when the shirt is slightly bluish or orange, that’s erroneous software. (Of course a=0, b=0 shall be output on the monitor, or printed, with some color temperature that a human eye would perceive as “white” under some circumstances, but that’s another story. Leave it alone.) a=0, b=0 on a white shirt is about stock photography, not real-life photography. Let alone artistic reasons. Please correct me if I’m wrong.
So the question still open for me is whether filmic can produce better results than the “regular” bracketed base curve, and if so, in what cases. So far I see only the trial-and-error way: process a photo one way, then another (a third?), and choose the best. No “magic pill”, again…

Out-of-gamut issues are handled inside filmic’s machinery, with a selective desaturation at extreme luminances.

darktable offers the standard matrices from the Adobe DNG converter. These are basic, fairly inaccurate yet robust ways to perform low-level color adjustments. Then what you have are white balance and color balance to adjust color to taste because, in the end, purely technical adjustments don’t always work as expected (and usually not out of the box), and photography is still an art, so your eyeballs (should) get the last word.

Suit yourself. You want to use a profile? Put your pixelpipe in the same state as the one used to produce that profile. Otherwise, what you are doing is as idiotic as applying a 50mm f/1.8 lens profile to a picture taken with a 9mm f/4. You don’t have to apply the right profile the right way, but you can’t expect accurate results if you goof around ignoring signal-processing rules.

If the light spectrum of the scene matches the standard illuminant D65 or D50 (depending on what your profile expects), it’s the right white balance. If not, we have numerical ways to correct it, to simulate (more or less) the correct white balance and shift the spectrum. Otherwise, you can make a very custom profile with whatever lighting you have, but it won’t be portable and reproducible under other lightings.

Your input profile is a matrix defining the RGB primaries of the camera in the XYZ connection space. Every camera sensor has a particular set of values \{a_{11}, a_{12}, a_{13}\} such that Y = a_{11} R + a_{12} G + a_{13} B. These values are accurate only for a particular spectral distribution of “white” light, called an illuminant (the daylight illuminants D, for example).

The Y channel is linked to human vision as the linear luminance perception. The RGB channels are linked to the sensor’s electronics and hold no information readable by a human. The coefficients \{a_{11}, a_{12}, a_{13}\} are supposed to link (in a basic, imperfect way) sensor vision to human vision.

Imagine you tweak the lighting such that you get much less red, a tad less green, and a lot more blue (i.e. a compact fluorescent bulb). Your perception of luminance will change (because human luminance perception is messy and depends on the spectral distribution, namely colors, surface albedo, etc.), but the sensor doesn’t care and still outputs the same unbiased readings. So, unless you tweak the \{a_{11}, a_{12}, a_{13}\} coefficients, your Y value (as well as X and Z) becomes wrong and Y no longer means human-based luminance, so any conversion through XYZ (from which Lab derives, by the way) becomes wrong and pointless, because this XYZ is no longer the human retina’s response.

All in all, you either need to adjust your RGB → XYZ matrix (with a Bradford or von Kries transform) to account for the change in spectral distribution, or the RGB input values (with a white-balance adjustment) to simulate a standard illuminant as input.
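As a sketch of what such an adaptation looks like in practice (my own illustration; the Bradford matrix and white points are standard published values, while `M_cam` is a hypothetical placeholder):

```python
import numpy as np

# Bradford chromatic adaptation: build an XYZ -> XYZ matrix that maps colors
# seen under one illuminant to the corresponding colors under another.
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

D50 = np.array([0.96422, 1.0, 0.82521])   # XYZ white points, 2-deg observer
D65 = np.array([0.95047, 1.0, 1.08883])

def bradford_adaptation(src_white, dst_white):
    """Von Kries-style scaling in the Bradford cone-like basis."""
    src_lms = BRADFORD @ src_white
    dst_lms = BRADFORD @ dst_white
    scale = np.diag(dst_lms / src_lms)
    return np.linalg.inv(BRADFORD) @ scale @ BRADFORD

# M_cam: hypothetical camera RGB -> XYZ matrix profiled for D65 (placeholder).
M_cam = np.eye(3)
# Chain the adaptation so the matrix becomes valid for a ~D50-lit scene:
M_cam_d50 = bradford_adaptation(D65, D50) @ M_cam
print(M_cam_d50.round(4))
```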

But a generic profile (color profile, lens profile, you name it) is a computed correction that expects the pixelpipe to be in a certain state to be valid. Otherwise, you create an ad-hoc profile with the no-name illuminant you get on stage (that’s actually the selling point of color checkers), but it won’t be useful for another shoot. Or you just ignore the pre-digested correction and do everything manually with the color balance until your eyes are pleased.

If you don’t want to bother with such technicalities, don’t aim at color accuracy at all, because you will make your input matrix invalid in its conditions of use, get wrong colors as a result, then amplify the color inconsistencies with luminance tone mapping (where the luminance is not actually luminance but garbage), and finally complain that the software is not a magic pill, because the geeky camera profile at the beginning of the pixelpipe was supposed to take care of color somehow, and yet the colors are ugly at the end.

Sorry to have opened Pandora’s box, but there is nothing magical in a pixelpipe. Only trade-offs, attempts and approximations. You need to understand them. They work as long as you use them the way they are designed. If you don’t understand them, don’t rely on them for anything and tweak colors manually.

TL;DR: the sensor doesn’t see colors. It sees light spectra and splits them into 3 intensities of “primary” values, which we remap to colors (color doesn’t exist outside the human brain) using a matrix stored in a profile, and that matrix’s coefficients are valid only for a certain light spectrum.

2 Likes

Cameras don’t really have a human-friendly notion of “white”; they just record light values through the different filters of the color filter array. Prove this to yourself by developing a raw file with absolutely no white balance, neither as-shot nor some patch or color-temp pet trick. Look for a tone anywhere in the image that corresponds to your notion of white; you won’t find one.

IMHO white balance is really about dragging the camera values to someplace where a thing we want to be white has R = G = B at the maximum display value. That works for light that has some semblance of a full representation of the spectrum (I’m probably misusing terms here), but your sunset light was thrown upon the scene after a rather challenging trip through the atmosphere, where a lot of the shorter wavelengths were attenuated. This throws the whole notion of “white balance” out the window, and requires specific attention from you in setting up the appropriate color cast for what you remember. Me, I’d still start with a white-balanced image, and then color it to my recollection.
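A minimal sketch of that “drag a chosen patch to neutral” idea (the names and numbers are illustrative, not any particular software’s API):

```python
import numpy as np

def wb_multipliers(patch):
    """Per-channel gains that make a chosen patch neutral (green kept at 1)."""
    means = np.asarray(patch, dtype=float).reshape(-1, 3).mean(axis=0)
    return means[1] / means

def apply_wb(linear_image, mults):
    """Scale each channel of a linear, demosaiced image by its gain."""
    return linear_image * mults

patch = np.array([[0.41, 0.52, 0.30]])  # a grey card sampled under warm light
mults = wb_multipliers(patch)           # -> R and B pulled toward G
print(mults, apply_wb(patch, mults))    # the patch becomes R = G = B
```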

Notice I never used the word “accurate” regarding color. The only notion of what color should be comes from the observer experiments of yore, which gave us CIE XYZ and such. Light isn’t colored; we humans just interpret it in those terms. Indeed, a particular color can be constructed from a variety of light combinations (metamerism), so who’s to say which is “correct”?

I did a bit of googling on sunset white balance, and found the predominant advice to be to set the camera WB to “cloudy daylight”, which produces a warmer render than high-noon daylight. In color-temperature terms that makes sense to me. But to get the particular cast you remember from that atmosphere-filtered light, you’ll probably also have to do tint and hue pet tricks…

1 Like

Indeed, sunset is really hard to reproduce in post, at least for me. See my Play Raw: [PlayRaw] Changing Light. No one came even close to the beautiful colour that I saw outside my window.

1 Like

I shoot a lot of sunsets/blue hour. You have to shoot raw! Then adjust your white balance afterwards. Here in Southern California, the color temperature changes so quickly during golden hour/blue hour that if you try to set the white balance in camera, it’ll be wrong by the time you’ve set it :stuck_out_tongue:

2 Likes

My favorite time of day for photography is sunrise, same thing.

On my new Nikon Z6 I found a feature I didn’t know I had on my D7000: measured white balance. I’ve set it up so I just hold down Fn2, aim at a white thing, and shoot; instead of taking a picture, it records white-balance multipliers that end up in the metadata of the succeeding shots. A decent batch workflow begs for this data.
Cool Beans!!!

Z6?? Too fancy :stuck_out_tongue_winking_eye: That is a cool feature though!

I don’t like to buy the first generation of anything, but this camera is the exception. It brings new meaning to “pulling up the shadows”…

Okay, back to topic.

In RawTherapee, what would be the best way to add a color cast, assuming I start with a WB that makes the image look “neutral”?

As we are in a Software/darktable topic here, I would appreciate discussing this in a Software/RawTherapee topic :wink:

Oops, sorry… I forgot where I was :blush:

1 Like

After some tests, I think filmic gives the best result for both standard and chroma preservation with the shadows/highlights balance set to +50%, middle-grey luminance at 18%, and the target gamma set to 1.00.

standard


20170626_0028.NEF - rgb.xmp (8.0 KB)

Chroma preservation


20170626_0028.NEF.xmp (8.5 KB)

It looks better even with this test image:
https://filebin.net/iww4b534j4zp5fck

Chroma preservation with gamma 2.2

New settings


hdr_chroma2.xmp (4.6 KB)

standard

1 Like

I did a quick test (darktable 2.7.0~git1477.7740b7a8c) on two images, and indeed the shadows/highlights balance set to +50% makes the colors more natural. The other settings you mentioned didn’t work at all, at least with those images.
Thanks for the tip.

The cubic spline interpolation will most likely become unstable and hit zero too early (undershooting in the toe) for grey = 18% with gamma = 1.0, so what you see is probably just numerical artifacts that shouldn’t happen.
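A toy illustration of that failure mode (my own demo with made-up control points, not filmic’s actual spline): fit a single cubic through filmic-like nodes with a long, flat toe, then check the toe region.

```python
import numpy as np

# Four made-up nodes: black, end of toe, start of shoulder, white.
# A strongly off-center latitude forces the single cubic to undershoot.
x = np.array([0.00, 0.15, 0.80, 1.00])
y = np.array([0.00, 0.02, 0.85, 1.00])

coeffs = np.polyfit(x, y, 3)            # exact cubic through the 4 points
toe = np.linspace(0.0, 0.15, 200)
print("min value in the toe:", np.polyval(coeffs, toe).min())  # < 0 !
```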

BTW, I have developed a new custom spline from scratch here: https://eng.aurelienpierre.com/2018/11/30/filmic-darktable-and-the-quest-of-the-hdr-tone-mapping/#filmic_tone_curve. It should behave better (that is: properly) in the dark area with gamma = 1.0. @hanatos is testing it; we will see how it goes. (If this child gets his brains and my gorgeous looks, he shouldn’t need a fairy godmother.)

Also, please trust your eyeballs and stop looking at numbers as if they held some kind of truth. These numbers are only there for those who use light meters and can translate their scene readings directly into parameters; for everyone else, the parameters will depend on what you did previously in your pixelpipe.

For example, there is another way to use filmic: raise your exposure until the midtones look bright enough, and don’t pay attention to the highlights you blow out. Then adjust the white exposure in filmic to recover them. Since your white exposure will then be around minus the black exposure (that is, white between +4 and +6 EV, black between −6 and −4 EV), the curve will be more centered and the spline should behave well even for gamma = 1.0. But you might then want to add some saturation in chroma preservation mode, because it tends to wash out colours (which is supposed to respect the Abney effect, so maybe that’s OK?).

Good thing I didn’t hardcode that gamma, isn’t it?

4 Likes

Looking forward to testing it out!

Sure it is! That was a clever move))) Thanks!

I find this method a very good starting point for filmic adjustments. :+1:

2 Likes