I installed dt 3.4 yesterday and started to play with the new modern chromatic adaptation defaults.
To compare, I duplicated the image, applied the legacy defaults, and used the same modules except color calibration.
What I found is that it was really fast to get a (super) realistic color rendition:
Regarding out-of-gamut colors, there are lots in this image. My first attempt (with the modern defaults) to get rid of the gamut-check warnings by playing with the gamut compression slider rendered a very desaturated image (only at the maximum allowed compression value, 12, could I get rid of almost all gamut warnings).
So I only added a small amount of compression (2), enough to get the blue bulbs closer to what my eyes see, and that’s it.
The gamut check still shows lots of out-of-gamut areas, but I'd leave it like this, since the result is very close to what I see (almost identical):
I loaded your XMP (it downloaded as .xml for some reason; never mind, I could still load it).
I see what you did: instead of enabling the CAT, you reduced the blue channel on the colorfulness tab. I had played with that too, but ended up just tweaking gamut compression.
My main point is: the blues in your render might be scientifically correct and bounded in the color space, but they just don't correspond to reality.
While in the pipeline, you need to ensure you don't escape the working RGB gamut, which should ideally be as large as the visible spectrum. That means either Rec2020 or ProPhoto RGB.
Output gamut is something to care about at output color profile application, aka none of your business when tuning color calibration. If pipeline colors are all visible, the output profiles have ways to gamut-map them using the intents.
Always make your master edit independent from all output media.
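To make that concrete, here is a toy Python sketch (my own illustration, not darktable code, with made-up pixel values) of what "escaping the working gamut" means numerically, and why compressing back into it costs saturation:

```python
# Toy illustration (not darktable code) of how an out-of-gamut pixel shows
# up in a linear RGB working space, and why gamut compression desaturates.

def out_of_gamut(rgb, lo=0.0, hi=1.0):
    """A linear-RGB pixel is outside the space's gamut when any channel
    falls outside the valid range (negative, or above the maximum)."""
    return any(c < lo or c > hi for c in rgb)

def compress_toward_neutral(rgb, amount):
    """Naive compression: pull each channel toward the pixel's own average
    by `amount` in [0, 1]. Real gamut compression is hue-aware; this only
    shows that bringing a pixel back in gamut costs saturation."""
    avg = sum(rgb) / 3.0
    return tuple(avg + (c - avg) * (1.0 - amount) for c in rgb)

saturated_blue = (-0.05, 0.10, 0.95)  # negative red channel: out of gamut
compressed = compress_toward_neutral(saturated_blue, 0.2)
# `compressed` is back inside [0, 1], but closer to grey than the original.
```

With an amount of 0.2 the example pixel lands back in range; pushing the amount to 1.0 would turn every pixel into flat grey, which is the extreme of the desaturation described in the thread.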
Although I'm not conscious of all the underlying theory, I think that's what I had in mind when I opened this thread.
On the other hand, I think I understand the split @aurelienpierre proposes between scene reconstruction (technical modules) and creative/output processing, so you get a medium-independent master from the first phase.
I just don't see how well that fits an amateur, occasional photographer like me, who doesn't need to feed images into a production line that outputs them for different media.
Conceptually, though, I find that approach elegant.
Just wait until consumer HDR screens become ubiquitous, and you will thank me. Not having to re-edit all your images for HDR is going to save you a lot of time. But right now it feels like unnecessary complication, I get it.
This is where I kind of halt and get confused: if the first goal is to have a scene-referred edit, not tied to any output device, what determines how many colors I should bring back into gamut? In other words, how could a "wrong" trade-off made in the scene-reconstruction phase (the one aiming to produce the master) negatively affect later edits, whether later in the pipe or later in time, as in the example you mentioned? By this reasoning, I could be tempted to push gamut compression all the way up, to save a master as free as possible from gamut issues, so that further creative/output modules, now or in the future, won't suffer from a bad decision made earlier in the pipe.
It’s still difficult for me to think of editing an image without the immediate feedback from a medium, or, in other words, without introducing some kind of output bias in the scene referred settings, as it seems to have happened in my edit above.
That’s because cameras tend to record a bit of UV and make it pose as blue, plus color profiles and chromatic adaptation may push colors out of the visible spectrum. So we clean up after the CAT. And, as a metric of the visible spectrum, the closest we have are Rec2020 and ProPhoto RGB.
If every pixel lies in the visible spectrum, then we can work. If not, slider-pushing on things like saturation will only make it worse, and the gamut-mapping at output becomes a fiesta of randomness.
Yes, it’s expected. You apply a channel-wise non-linear brightening; it is known to desaturate. Change the concavity of the curve, and now you darken and saturate. Hence the color-preservation modes in the non-linear modules.
Problem is the saturation changes don’t happen at constant hue.
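A quick toy sketch in Python (illustrative values, nothing from darktable's internals) of both effects: a per-channel power curve desaturates when it brightens and saturates when it darkens, and because each channel moves independently, the channel ratios, i.e. the hue, drift as well:

```python
# Toy numbers showing why a channel-wise tone curve changes saturation:
# a brightening power curve (exponent < 1 on linear values) pushes channels
# toward 1.0 and shrinks their relative spread; a darkening curve
# (exponent > 1) stretches it.

def saturation(rgb):
    """HSV-style saturation: (max - min) / max."""
    mx, mn = max(rgb), min(rgb)
    return 0.0 if mx == 0 else (mx - mn) / mx

def apply_gamma(rgb, g):
    """Power curve applied to each channel independently."""
    return tuple(c ** g for c in rgb)

pixel = (0.6, 0.3, 0.1)                              # a warm, saturated tone
s_original = saturation(pixel)
s_brightened = saturation(apply_gamma(pixel, 0.5))   # brighter, duller
s_darkened = saturation(apply_gamma(pixel, 2.0))     # darker, punchier
```

And because the curve is applied per channel, the red:green:blue ratios of the result differ from the original's, which is the "not at constant hue" problem mentioned above.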
@aurelienpierre, I think an article or video about using gamut compression, soft-proofing, and the gamut check would be really helpful.
E.g. your video seems to suggest that one should push gamut compression to bring all colours back into the gamut (https://youtu.be/U4CEN0JPcoM?t=2250). Also, at 35:45 one can see that the soft-proof profile is linear Rec2020 while the histogram profile is AdobeRGB (which, if I remember correctly, you mention is a close match for your display). Above, in one of the comments, you advise to
Change softproof profile and histogram profile to Rec2020 PQ or HLG.
I’m sure each has its uses, but when to use which is a real mystery to me.
It is my understanding that, at output, colours are pretty much ‘truncated to the gamut’, meaning that two different areas of the photo that have the same hue but different saturation may well be mapped to the same sRGB values (on the boundary of the sRGB gamut), leading to a loss of detail. So shouldn’t we care about that?
I’m just taking family snapshots, so pinpoint colour is not important for me, really, but I still find the question itself interesting.
Me too, e.g. this idea of editing freely to begin with, to future-proof your work, then tailoring to the actual output medium at the end. Suppose in a few years I have an HDR screen and I pull up an old edit that needed a fair bit of work to get the sky OK. I can’t see it being anywhere near optimal for the new screen. When I originally edited it, I could only see what my sRGB (or AdobeRGB) screen allowed, so surely it could never have been subtle enough for HDR?
This doesn’t necessarily happen: there are “perceptual” and “relative colorimetric” rendering intents, which give you choices over how the output is fitted to the gamut. The Cambridge in Colour site and others explain this (and a third process).
The pipeline color space is the human visible spectrum. Always. The closest RGB spaces to that are Rec2020 (which is a bit smaller) and ProPhoto RGB (which is a bit wider, aka it has some imaginary colors).
That’s all we care about when I talk about pipeline gamut. The visible gamut is the goal/principle; Rec2020 is the closest technological tool to that goal. Forget about Adobe RGB, sRGB and the like. These are output media: that’s for export, not for retouching.
Then, the histogram space doesn’t really matter. Just choose something that puts middle grey reasonably in the middle for legibility (that is, a non-linear color space). Anyway, it’s only a scope.
No, we don’t care. The only concern is a color gradient losing its gradient to become a flat blob; that’s ugly. But this happens when the gamut is clipped, aka the whole surface gets “rounded” to the same color. With gamut mapping, sure, we will have to make some sacrifices and lose saturation, but gradients should remain gradients, although less saturated, so the image will still look believable. And two pixels at the same hue but different saturation will still have different saturation in sRGB if they are mapped.
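To illustrate the difference with a toy sketch of my own (not what the output intents actually do):

```python
# Toy comparison of gamut clipping vs. a simple gamut mapping. Two pixels
# share a hue but differ in saturation, and both overshoot the [0, 1]
# output range on the red channel.

def clip(rgb):
    """Hard clip: each channel is independently truncated to [0, 1]."""
    return tuple(min(max(c, 0.0), 1.0) for c in rgb)

def map_toward_neutral(rgb):
    """Shrink the pixel's offset from its own average just enough to fit
    every channel into [0, 1]. Assumes the average itself is in range."""
    avg = sum(rgb) / 3.0
    scale = 1.0
    for c in rgb:
        if c > 1.0:
            scale = min(scale, (1.0 - avg) / (c - avg))
        elif c < 0.0:
            scale = min(scale, (0.0 - avg) / (c - avg))
    return tuple(avg + (c - avg) * scale for c in rgb)

very_saturated = (1.4, 0.2, 0.2)
less_saturated = (1.2, 0.2, 0.2)
# Clipping collapses both pixels to (1.0, 0.2, 0.2): a flat blob.
# Mapping keeps them distinct, just less saturated overall.
```

The two clipped pixels become identical, so any gradient between them flattens out; the two mapped pixels stay distinguishable, which is why a mapped image still looks believable.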
Just cool down. For the past 5 years or so that I have been around open-source photography forums, people have been way too concerned about gamut on a theoretical level while still producing the infamous rat-piss-yellow sunsets (aka unable to see the actual gamut escapes right in their face while fantasizing about manual corrections they should apply for gamut issues that don’t happen). Truth is, gamut is handled through intents in a semi-clever way at output, and anyway your screen most likely displays sRGB, so what you see on screen is already gamut-clipped and/or gamut-mapped. If it looks good, then don’t worry.
Again, we handle gamut mapping in color calibration only to clean up after the chromatic adaptation, because we know it will push colors out of the visible gamut. It’s only intended not to make things worse, and to start retouching with legitimate colors in the pipe.
I’m a hobby photographer annoyed by prints that look different from what I see on the screen.
That’s the reason I started recently to dig into color management.
By now I understand what input and output profiles are and what a working profile does, and I have an idea of what soft-proofing is used for. I have even calibrated my display.
What’s still a mystery to me is the thing about all the different clipping indications. We have the gamut check and soft proof, and I can choose different profiles for softproof, display, preview display and histogram.
When I switch on the clipping indication (preview mode: full gamut) and then toggle between different profiles for the histogram, the indication differs, as it also does for different softproof and display profiles.
To be honest I’m a bit lost.
If it doesn’t matter which settings lead to the proper clipping indication, why does changing them change the indication itself?
Don’t get me wrong, I think it’s great to have all these options.
I would like to understand what I’m doing, and darktable, as difficult as it is, helps with this. But until you reach the tipping point of understanding, it’s a pain…