Many thanks, everyone. The 87 photos used in this experiment were taken in sunshine in an English garden in September 2018, and include flowers and berries. 0.29% of the pixels were out-of-horseshoe, which works out to roughly one pixel in every 19x19 pixel square, so it’s not a massive problem, but not trivial either.
Thanks to the discussion and the linked LUDD page, I think I understand:
- The camera may be sensitive to a wider range of wavelengths than humans, extending into the ultraviolet and infrared.
- Spectral sensitivities of cameras are not the same as those of the standard observer, so some colours look the same to humans but different to cameras, and vice versa (a form of metamerism). Hence an accurate mapping from all camera sensor values to XYZ is not possible. Any mapping is an approximation, usually aiming for accuracy in low-saturation colours (which are common) at the expense of high-saturation colours (which are rare in ordinary photos).
- The transformation can be optimised for specific colours (eg those on a chart), but this can create problems elsewhere. Ideally, we use colour charts whose patches typify the colours we want to be accurate.
- The 3x3 transformation matrix used by dcraw and other raw converters is optimised for a narrow range of chromaticities. Ideally we would use a different matrix for different circumstances, after calibrating the camera by photographing colour charts (a least-squares sketch of such a fit follows this list).
- LUT transformations may be better than 3x3 matrices, but perfection is not possible.
- Problems are often found in the blue-violet region, because humans are markedly less sensitive than cameras to blue light.
- Noise in dark regions can cause isolated sensor pixel values to be too high. Because each sensel records only one of the three RGB channels, the spike appears in a single channel, so the debayered result may not represent a real colour and falls outside the behaviour the transformation was optimised for.
- Transformations work best for colours close to the white point. A transformation that works well for some saturated colours tends to harm accuracy for less saturated colours.
- Cameras are not substitutes for colorimeters, which are designed, built and maintained for the specific task of measuring XYZ, and have a price-tag to match.
- We can aim for colorimetric accuracy in our photos, but we won’t achieve it.
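To make the chart-based calibration above concrete, here is a minimal sketch (not any converter's actual implementation) of fitting a 3x3 camera-RGB-to-XYZ matrix by least squares over chart patches. The arrays `rgb_patches` and `xyz_patches` are hypothetical: averaged linear camera values for each photographed patch, and the chart's published XYZ values. The optional weights echo the trade-off above, favouring the common low-saturation patches over the rare saturated ones.

```python
import numpy as np

def fit_camera_matrix(rgb_patches, xyz_patches, weights=None):
    """Least-squares 3x3 matrix M such that xyz ~= rgb @ M.T.

    rgb_patches:  (N, 3) linear camera RGB, averaged per chart patch.
    xyz_patches:  (N, 3) published XYZ values for the same patches.
    weights:      optional (N,) weights, e.g. larger for the
                  low-saturation patches we most want to get right.
    """
    A, B = rgb_patches, xyz_patches
    if weights is not None:
        w = np.sqrt(np.asarray(weights))[:, None]  # weighted least squares
        A, B = A * w, B * w
    M_T, *_ = np.linalg.lstsq(A, B, rcond=None)
    return M_T.T
```

Different shooting conditions would yield different patch data and hence different matrices, which is the point about per-circumstance calibration above.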
It seems to me that there is scope for more automation: if a raw converter finds out-of-horseshoe pixels, it could report that and suggest alternative transformations that produce fewer OOH pixels.
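As a sketch of how such a check might work (assuming the colour-science and matplotlib packages; `rgb_linear` and the candidate matrix `M` are hypothetical inputs): the horseshoe is the CIE 1931 spectral locus closed by the purple line, and a simple point-in-polygon test counts the pixels that land outside it.

```python
import numpy as np
import colour                      # colour-science package
from matplotlib.path import Path

def ooh_fraction(rgb_linear, M):
    """Fraction of pixels mapped outside the CIE 1931 horseshoe.

    rgb_linear: (N, 3) linear camera RGB.
    M:          candidate 3x3 camera-RGB -> XYZ matrix.
    """
    cmfs = colour.MSDS_CMFS["CIE 1931 2 Degree Standard Observer"]
    locus_xy = colour.XYZ_to_xy(cmfs.values)  # spectral locus chromaticities
    horseshoe = Path(locus_xy)  # contains_points treats the path as closed,
                                # which supplies the purple line for free
    XYZ = rgb_linear.reshape(-1, 3) @ M.T
    s = XYZ.sum(axis=1)
    xy = XYZ[s > 0, :2] / s[s > 0, None]      # x = X/(X+Y+Z), y = Y/(X+Y+Z)
    return float(np.mean(~horseshoe.contains_points(xy)))
```

A converter could evaluate `ooh_fraction` for several candidate matrices and report the one with the fewest OOH pixels, as suggested above.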