Camera gamut outside horseshoe?

Many thanks, everyone. The 87 photos used in this experiment were taken in sunshine in an English garden in September 2018, and include flowers and berries. 0.29% of the pixels were out-of-horseshoe, roughly one pixel in every 19x19-pixel square, so it’s not a massive problem, but not trivial either.

Thanks to the discussion and LUDD - Homepage closed, I think I understand:

  • The camera may be sensitive to a wider range of wavelengths (into the ultraviolet and infrared) than humans are.
  • Spectral sensitivities of cameras are not the same as the standard observer, so some colours look the same to humans but different to cameras, and vice versa (aka metamerism). Hence an accurate mapping from all camera sensor values to XYZ is not possible. Any mapping is an approximation, usually attempting accuracy in low-saturation colours (which are common) at the expense of inaccuracy in high-saturation colours (which are rare in ordinary photos).
  • The transformation can be optimised for specific colours (eg those on a chart) but this can create problems elsewhere. Ideally, we use colour charts with patches that typify the colours we want to be accurate.
  • The 3x3 transformation matrix used by dcraw and other raw converters is optimised for a narrow range of chromaticities. Ideally we would use a different matrix for different circumstances, after calibrating the camera by photographing colour charts. (A sketch of what such a matrix does follows this list.)
  • LUT transformations may be better than 3x3 matrices, but perfection is not possible.
  • Problems are often found in the blue-violet region, because humans are markedly less sensitive than cameras to blue light.
  • Noise in dark regions can cause isolated sensor pixel values to be too high. This creates a significant value in only one of the three RGB channels, so the debayered result may not represent a real colour and falls outside the range the transformation was optimised for.
  • Transformations work best for colours close to the white point. A transformation that works well for some saturated colours will harm the accuracy for less saturated colours.
  • Cameras are not substitutes for colorimeters, which are designed, built and maintained for the specific task of measuring XYZ, and have a price-tag to match.
  • We can aim for colorimetric accuracy in our photos, but we won’t achieve it.
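
To make the 3x3 point concrete, here is a small Python sketch of what such a conversion does. The matrix values are placeholders, not any real camera's matrix; the point is only that each xy chromaticity is a ratio of the transformed XYZ values.

import numpy as np

# Hypothetical camera-RGB -> XYZ matrix: the numbers are placeholders, not
# any real camera's matrix (real ones come from calibration or from the
# per-camera tables shipped with dcraw and other raw converters).
CAM_TO_XYZ = np.array([[0.65, 0.25, 0.10],
                       [0.30, 0.65, 0.05],
                       [0.05, 0.10, 0.85]])

def camera_rgb_to_xy(rgb):
    """Map linear camera RGB (shape (..., 3)) to CIE xy chromaticities."""
    xyz = rgb @ CAM_TO_XYZ.T                 # one 3x3 multiply per pixel
    s = xyz.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                          # avoid dividing by zero for black pixels
    return (xyz / s)[..., :2]                # x = X/(X+Y+Z), y = Y/(X+Y+Z)

print(camera_rgb_to_xy(np.array([0.8, 0.1, 0.05])))   # one saturated reddish pixel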

It seems to me that there is scope for more automation: if a raw converter finds pixels are out-of-horseshoe, it can report that, and suggest alternative transformations that cause fewer OOH pixels.
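
A raw converter could do the check with something like the following sketch. It uses a coarse, approximate sampling of the CIE 1931 spectral locus standing in for the full tabulated data, so it slightly over-reports out-of-horseshoe pixels; a real implementation would use the complete locus.

import numpy as np

# Very coarse, approximate sampling of the CIE 1931 spectral locus in xy,
# from roughly 380 nm round to 700 nm. Closing the polygon supplies the
# purple line, and because the locus is convex, a same-side test against
# every edge tells us whether a point is inside.
LOCUS = np.array([
    [0.1741, 0.0050],   # ~380 nm
    [0.1440, 0.0297],   # ~460 nm
    [0.0913, 0.1327],   # ~480 nm
    [0.0082, 0.5384],   # ~500 nm
    [0.0139, 0.7502],   # ~510 nm
    [0.0743, 0.8338],   # ~520 nm
    [0.2296, 0.7543],   # ~540 nm
    [0.3731, 0.6245],   # ~560 nm
    [0.5125, 0.4866],   # ~580 nm
    [0.6270, 0.3725],   # ~600 nm
    [0.7347, 0.2653],   # ~700 nm
])

def inside_horseshoe(xy):
    """True where xy (shape (..., 2)) lies inside the locus polygon."""
    a = LOCUS
    b = np.roll(LOCUS, -1, axis=0)           # next vertex, wrapping round to close the polygon
    edge = b - a                             # (11, 2) edge vectors
    rel = xy[..., None, :] - a               # (..., 11, 2) point relative to each vertex
    cross = edge[:, 0] * rel[..., 1] - edge[:, 1] * rel[..., 0]
    return np.all(cross <= 0, axis=-1) | np.all(cross >= 0, axis=-1)

# D65 white (inside) and a point below the purple line (outside).
print(inside_horseshoe(np.array([[0.3127, 0.3290], [0.10, -0.05]])))   # [ True False]

# Counting is then one line: ooh_percent = 100.0 * (1 - inside_horseshoe(xy).mean())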

Yes, that is what I gathered from being on this forum a while.

@Elle also mentioned yellows and yellow-greens.

That would be sweet. Sometimes I change things, without rhyme or reason, just to get fewer OOG (or OOH) pixels. If a raw converter or someone could give guidance on this problem, I wouldn’t be entirely relying on guesswork. :flashlight:

That’s what kicked off this query. I had an idea that software would calculate x and y for every pixel, and find which triangle of primaries (sRGB, AdobeRGB, Rec.2020, ACES, whatever) most tightly enclosed all the values. But then I found I had images with xy values that are outside all the triangles, and even outside the horseshoe, so that blew that theory.
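
The enclosure test itself is simple enough, though; something like this Python sketch, using the published xy values of the primaries (more gamuts could be added to the list):

import numpy as np

# Red, green and blue primary chromaticities of some common working spaces,
# ordered from smallest gamut to largest.
GAMUTS = {
    "sRGB":     np.array([[0.640, 0.330], [0.300, 0.600], [0.150, 0.060]]),
    "AdobeRGB": np.array([[0.640, 0.330], [0.210, 0.710], [0.150, 0.060]]),
    "Rec.2020": np.array([[0.708, 0.292], [0.170, 0.797], [0.131, 0.046]]),
}

def barycentric(xy, tri):
    """Barycentric coordinates of xy (shape (..., 2)) w.r.t. triangle tri (3x2)."""
    t = np.column_stack((tri[0] - tri[2], tri[1] - tri[2]))
    lam12 = (xy - tri[2]) @ np.linalg.inv(t).T
    return np.concatenate([lam12, 1.0 - lam12.sum(-1, keepdims=True)], axis=-1)

def smallest_enclosing_gamut(xy):
    """Name of the smallest listed gamut whose triangle contains every xy, else None."""
    for name, tri in GAMUTS.items():
        if np.all(barycentric(xy, tri) >= 0):        # inside iff all three weights are >= 0
            return name
    return None

# D65 plus a green that is outside sRGB but inside AdobeRGB -> "AdobeRGB".
print(smallest_enclosing_gamut(np.array([[0.3127, 0.3290], [0.25, 0.65]])))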

Okay, I’ll take a step backwards and consider the camera-to-XYZ transformation, an issue I had avoided so far.

As an aside, I suspect that Elle has already covered all this in her excellent pages. But I need more bashes with a hammer to get this stuff into my skull.

You probably need to include the four rendering intents in your research: perceptual, saturation, relative colorimetric and absolute colorimetric. That’s where the decisions are made.

True, @ggbutcher. Are those particular intents specific to ICC? Either way, the principles apply to any transformation. For example, for an image that is OOH only in the blue area, we could clamp those values, or shift all colours except the red and green primaries towards those primaries, or additionally leave the white point unshifted, or whatever.

@elle thanks @snibgo for the compliment and metaphorically throws rolled up bits of paper at @ggbutcher :slight_smile: for not listening to @elle when @elle previously explained (more than once or twice :slight_smile: ) that when converting between RGB matrix color spaces, such as from a linear gamma matrix camera input profile to sRGB or Rec.2020 or ACES etc.:

If you pick perceptual, you get relative.

If you pick saturation, you get relative.

In a V4 workflow, if you pick absolute, you get relative.

And fortunately, if you pick relative, you get relative :slight_smile: .

Relative colorimetric clips to the gamut of the destination color space except when using unbounded ICC profile conversions, and clips anyway upon exporting to disk in a file format such as png or integer tiff that can’t hold channel values below 0.0 or above 1.0.
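
A tiny numpy illustration of that last point, with made-up values:

import numpy as np

# An unbounded conversion can leave channel values outside [0, 1]; a 32-bit
# float export keeps them, but encoding to an integer format clamps them,
# and that is where the out-of-gamut information is finally lost.
linear_srgb = np.array([[-0.12, 0.45, 1.30],    # out-of-gamut pixel after an unbounded conversion
                        [ 0.20, 0.40, 0.60]])   # in-gamut pixel, unaffected

as_float = linear_srgb                                         # float TIFF: values kept as-is
as_16bit = np.round(np.clip(linear_srgb, 0.0, 1.0) * 65535).astype(np.uint16)

print(as_16bit / 65535.0)    # first pixel comes back clipped to roughly [0, 0.45, 1]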

Yes, but doesn’t moving to a properly informed LUT TRC open you to the other intents?

I do listen to you, I do, I do… :sunny: I don’t remember a lot, lately. Birthday next week, 61… Squirrel!!!


If you are talking about a LUT camera input profile, then hopefully that profile already doesn’t have any imaginary colors.

In my experience, which is limited to just two cameras, a Canon and a Sony, a LAB LUT profile made using a target chart shot of my IT8 chart does eliminate problems with blues and yellows and yellow-greens. But I don’t have a target chart that I think is suitable for creating LAB LUT profiles, so I only use said profiles when dealing with problem colors, and then only to the extent required to mask out the otherwise imaginary colors.

From years following the ArgyllCMS mailing list archives, the consensus seems to be that LUT profiles are best made using controlled lighting and custom-made target charts, one chart and profile per lighting condition and colors to be photographed (ie if you are photographing watercolor paints, make your target chart using watercolor paint samples). Though perhaps there is a bit of elitism on that forum. I’m sure people do make and use LUT profiles using 24-patch color checker and such. And if they like the resulting colors that’s all that counts in my opinion.

Now if you do use LUT profiles, then yes, the intent you choose makes a difference. But exactly what difference it makes is something I haven’t spent much time exploring:

  • It depends on whether the LUT profile(s) is/are the source profile or the destination profile or both.

  • It depends on what source color gamut was used to make the LUT profile(s). If the source color gamut for the destination color space doesn’t include these imaginary blues and yellows and greens, the LUT won’t remap these colors! LUT profiles only map colors within the source color gamut that was used to make the profile in the first place.

  • It also depends on what software you use to do the conversion: ArgyllCMS utilities allow you to specify the intent for the source-color-space-to-XYZ conversion separately from the intent for the XYZ-to-destination conversion. I’m pretty sure LCMS just uses the same intent for both source and destination, but I haven’t made sufficient tests to be sure. In any case, the user interfaces of software that uses LCMS don’t provide a way to specify more than one intent (see the sketch after this list).

  • And then there’s black point compensation, which ArgyllCMS utilities don’t support and LCMS does support - be very careful of what BPC does to your colors as it will vary depending on the direction of the profile conversion and the respective black points of each profile.
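
As an illustration of the single-intent point, here is a small sketch using Pillow's ImageCms wrapper around LittleCMS; the profile file names are just placeholders.

from PIL import Image, ImageCms

im = Image.open("photo.tif")

# With LittleCMS (here via Pillow's ImageCms wrapper) a transform takes a
# single rendering intent for the whole source -> destination conversion;
# there is no separate source-to-PCS and PCS-to-destination intent.
# The profile paths below are placeholders.
src = ImageCms.getOpenProfile("camera_lut_input_profile.icc")
dst = ImageCms.createProfile("sRGB")

transform = ImageCms.buildTransform(
    src, dst, "RGB", "RGB",
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
    flags=ImageCms.FLAGS["BLACKPOINTCOMPENSATION"],   # optional; results vary per profile pair
)
out = ImageCms.applyTransform(im, transform)
out.save("photo_srgb.tif")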

Hmm, hope some of the above helps! And keep in mind I have not spent a lot of time experimenting or using LUT profiles. When I do use a LUT profile, generally I don’t use black point compensation. And almost always I just use relative colorimetric intent, which clips - Why would I want to let the person who made the profile decide what particular perceptual intent color mapping gives the nicest results? That person hasn’t ever seen my images and doesn’t know what sort of final image I really want.

As an aside, ArgyllCMS has some amazing utilities for creating custom source color gamuts for making LUT profiles. I wonder what would happen if @snibgo used his composite images as the source color gamut for a destination ICC profile?


I’ve done some work on this. For a particular photo of a tree from a Nikon D800, using the raw .NEF file, here is the xy diagram, with white wherever any pixel has that chromaticity:

The circles show the sRGB primaries and white point. Some chromaticities are outside the triangle of primaries, and a few are outside the horseshoe.

Telling dcraw to make an sRGB image, then converting that to xyY gives the following diagram:

dcraw has heavily reduced chroma (roughly speaking, “saturation”), and has changed some hues.

Instead of using dcraw’s sRGB conversion, we tweak the chromaticities ourselves, and we get this…

… which is closer to the original xyY chart, and contained within the sRGB primaries, but has some clipping.

The tweaking is done by these Windows BAT commands:

set INPRIM=0.56,0.44,0.35,0.65,0.08,0.06
set OUTPRIM=0.56,0.44,0.35,0.65,0.12,0.07

%IMDEV%convert ^
  cbc_xyy.miff ^
  -set colorspace xyY ^
  -process "barymap channels xyY inPrim %INPRIM% gain 0.85 outPrim %OUTPRIM% v" ^
  -process "barymap channels xyY ign inPrim sRGB clampBarycentric outPrim sRGB v" ^
  -depth 32 ^
  -define quantum:format=floating-point ^
  cbc_tx9.miff

I chose those numbers manually. Further automation is possible.

More detail is at Colours as barycentric coordinates. I haven’t yet published the source code for “barymap”.
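
Until then, here is a rough outline of the idea in Python. This is a simplified sketch, not the barymap source: here "gain" simply blends the barycentric coordinates towards the triangle centre, and the INPRIM/OUTPRIM values are read as (R, G, B) pairs of (x, y).

import numpy as np

def barycentric(xy, tri):
    """Barycentric coordinates of xy (shape (..., 2)) w.r.t. triangle tri (3x2)."""
    t = np.column_stack((tri[0] - tri[2], tri[1] - tri[2]))
    lam12 = (xy - tri[2]) @ np.linalg.inv(t).T
    return np.concatenate([lam12, 1.0 - lam12.sum(-1, keepdims=True)], axis=-1)

def remap(xy, in_tri, out_tri, gain=1.0, clamp=False):
    """Re-express xy relative to out_tri instead of in_tri.

    In this sketch, gain < 1 blends the coordinates towards the triangle
    centre, and clamp=True clips the result to the inside of out_tri.
    """
    lam = barycentric(xy, in_tri)
    lam = gain * lam + (1.0 - gain) / 3.0        # blend towards the centroid (1/3, 1/3, 1/3)
    if clamp:
        lam = np.clip(lam, 0.0, None)
        lam = lam / lam.sum(-1, keepdims=True)   # renormalise so the weights sum to 1
    return lam @ out_tri                         # weighted sum of the output primaries

# The INPRIM / OUTPRIM values above, read as (R, G, B) rows of (x, y).
in_tri  = np.array([[0.56, 0.44], [0.35, 0.65], [0.08, 0.06]])
out_tri = np.array([[0.56, 0.44], [0.35, 0.65], [0.12, 0.07]])
print(remap(np.array([0.20, 0.15]), in_tri, out_tri, gain=0.85))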
