That's what I thought originally, but the expanses of blues and violets produced by Adobe Standard processing fit pretty well within Adobe RGB, as shown earlier, so not an extreme case after all. sRGB has more trouble, for instance near the purple spotlight reflections.
Differences are mainly due to subjective application of saturation and contrast.
Yes, but you can reproduce this only on monitors that cover the full AdobeRGB color space. If you don't have such a device, or want to publish/share on the web, it gets difficult. And printing is the real challenge.
Thatās an interesting result.
When I inspected my pic in darktable I had negative values all over.
But digging more into it (Darktable 3.4):
The RGB values the color picker shows depend on the settings of the histogram and softproof profiles. If both profiles point to sRGB I get negative values…
If I change the histogram profile to, let's say, linear ProPhoto, the lowest RGB value I can find is zero…
Does that make sense?
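If it helps to see why the reading flips with the profile: a color that fits comfortably inside a wide space like linear ProPhoto can land outside sRGB, so the very same pixel reads negative once the picker converts through an sRGB profile. A tiny sketch in Python; the matrix is the standard D65 XYZ-to-linear-sRGB one, but the XYZ triplet is a made-up saturated violet, not a value taken from your image:

```python
# Standard XYZ (D65) -> linear sRGB matrix.
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def xyz_to_linear_srgb(xyz):
    # Plain 3x3 matrix multiply, no clamping: out-of-gamut values stay negative.
    return [sum(m * c for m, c in zip(row, xyz)) for row in XYZ_TO_SRGB]

violet_xyz = [0.15, 0.05, 0.85]   # hypothetical saturated violet, for illustration
rgb = xyz_to_linear_srgb(violet_xyz)
print(rgb)            # R and G come out slightly negative
print(min(rgb) < 0)   # True: outside the sRGB gamut
```

A picker that converts to a wider profile instead would report all-positive values for the same pixel, which matches what you saw with linear ProPhoto.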
Well, black and white is not an option. What makes the building special is, among other things, the lighting after sunset.
The colors change with time. I made a series of pictures with certain details at different colors. One in black and white does not really fit into the series…
Your color result is a nice one!
Unfortunately my monitor does not cover the full AdobeRGB color space…
So if I want to edit and print the image properly I need a better monitor and a print provider that can handle AdobeRGB, right?
Interesting to learn that someone experimented with it. I think my own days of doing so are over, except perhaps for images dominated by cyan and aqua. It comes at the expense of colour accuracy. Still, the colour gradations are pleasing in your edit.
If one follows the DNG spec for raw conversion there are several checkpoints during the process where one needs to block negative values to zero. At each of those points I add the relevant pixels to an out-of-gamut RGB image, with values set to either 255 or zero in the appropriate color plane. I usually carry clipped pixels over; then, just after projection to the destination color space but before any further processing, I add those in as well. It's usually obvious by comparison to the rendered image which are clipped vs blocked.
The resulting image above doesn't show you by how much values are blocked/clipped, just that they are. I find it useful to quickly see how much clipping/blocking is due to somewhat objective "gamut" issues vs subjective ones introduced by further processing, as in this discussion. Not many options for the operator with the former; many more with the latter.
Assuming the DNG spec process, the checkpoints I use are basically in linear space just after matrix multiplication:
conversion to XYZ
conversion to ProPhoto (if LUTs are in the dcp)
conversion to destination color space.
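A rough sketch of what one such checkpoint might look like, going by the description above: negatives get blocked to zero, and the corresponding plane of a separate marker image is lit up so you can later see where it happened. All names here are made up for illustration:

```python
# Sketch of an out-of-gamut checkpoint: block negatives to zero and record
# which pixel/plane was blocked in a parallel 0/255 marker image.

def checkpoint(image, oog_marker):
    """image: list of [r, g, b] floats; oog_marker: same-shape list of ints."""
    for px, mark in zip(image, oog_marker):
        for c in range(3):
            if px[c] < 0.0:
                px[c] = 0.0     # block the negative value
                mark[c] = 255   # remember which color plane was blocked
    return image, oog_marker

image = [[0.2, -0.1, 0.9], [0.5, 0.4, 1.0]]
marker = [[0, 0, 0], [0, 0, 0]]
image, marker = checkpoint(image, marker)
# image[0] -> [0.2, 0.0, 0.9]; marker[0] -> [0, 255, 0]
```

Calling this after each matrix multiplication in the chain, and OR-ing the markers together, would give a single image of everything that was ever blocked along the way.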
These are the stats I get using the DNG tags for the capture in this thread with the "basic" table:
And these are for the same capture with the "look" table applied:
I don't know what part of this functionality is available in raw converters discussed around here.
Filmulator renders the colors extremely saturated (especially the blue) but I find it doesn't look bad in this particular situation; it doesn't clip colors harshly and doesn't artifact at the bright spots on the magenta wall. That said, it's unlikely it looked that blue in real life.
That sounds smart.
I'm sure I haven't fully understood everything, as I'm only just getting into colour management, but I understand it as something like the "Raw overexposed indication", only for the conversion into other color spaces, right?
And all additional blocking/clipping is then introduced by the user's processing steps…
And that shows that I could do a better job even in sRGB space, and with a monitor that is limited to sRGB…
Simplifying somewhat, the intuition is that once we are in a colorimetric color space like XYZ (or even sRGB) we can look at the span of possible tones in the space as being contained in a cube with the origin at the minimum possible value (zero, blocked black) and the opposite apex at the maximum possible value (clipped white, normalized to 1). Negative and clipped values cannot be rendered, so they are swept under the carpet.
To get to the final color space you need to project from cube to cube via matrix multiplication (WB, camera->XYZ->ProPhoto->XYZ->xRGB). Every projection can and often does result in some negative and clipped values that need to be discarded. There is very little one can do about this if one wants to keep things as linear as possible, and we often do (right Glenn?). What little one can do is often quite a subjective fudge. So I like to know what tones those are.
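The cube picture above boils down to a trivial classifier: anything below the origin is blocked, anything past the far apex is clipped, the rest renders fine. The sample values below are made up:

```python
# Classify each channel of a tone against the [0, 1] cube:
# below 0 -> blocked black, above 1 -> clipped white, otherwise renderable.

def classify(rgb):
    return ["blocked" if v < 0.0 else "clipped" if v > 1.0 else "ok"
            for v in rgb]

print(classify([-0.02, 0.4, 1.3]))   # ['blocked', 'ok', 'clipped']
print(classify([0.1, 0.5, 0.9]))     # ['ok', 'ok', 'ok']
```

Running this after every matrix projection in the chain is essentially what the checkpoints earlier in the thread do, just without recording the result into a marker image.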
… which is, however, under the direct control of the user, so easier to subjectively guide into the final "gamut". For instance some form of contrast, which is pretty much needed in every single image in order to better squeeze its tones into the smaller contrast ratio of the typical display medium today. If not done properly (no way is perfect, nor necessarily more or less desirable) it will substantially shift chromaticity and increase saturation, with consequences for the final tone range.
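A small illustration of that last point, assuming contrast is applied per RGB channel with a simple S-curve; the smoothstep curve and the sample tone are made up for illustration:

```python
# Per-channel contrast increases saturation: an S-curve pushes the large
# channel up and the small channels down, stretching the spread between them.

def smoothstep(x):
    # A simple S-shaped contrast curve on [0, 1].
    return x * x * (3.0 - 2.0 * x)

def saturation(rgb):
    # HSV-style saturation proxy: (max - min) / max.
    return (max(rgb) - min(rgb)) / max(rgb)

rgb = [0.6, 0.3, 0.2]                  # a muted warm tone
contrasty = [smoothstep(v) for v in rgb]

print(round(saturation(rgb), 3))        # 0.667
print(round(saturation(contrasty), 3))  # 0.84 -- noticeably more saturated
```

Applying the same curve to a luminance channel instead, and scaling RGB by the ratio, would preserve chromaticity, which is one of the "subjective fudges" available here; neither approach is inherently right.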