Camera gamut outside horseshoe?

(Alan Gibson) #1

My Nikon D800 camera apparently records colours that are outside the horseshoe of a CIE xy chromaticity diagram. For points that are beneath the “line of purples”, this can be explained if the camera can see ultraviolet and infrared frequencies. But points outside the curve of monochromatic colours? I thought those points didn’t represent colours at all.

I took a bunch of *.NEF files from a Nikon D800 with various lenses. dcraw de-bayered and converted to XYZ, ImageMagick converted each to xyY, and made a scatterplot from (0,0) to (1,1) that is white where any of the pixels translates to that xy, otherwise black. I superimposed the CIE horseshoe in red on top. The script is shown below.
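For reference, the projection the xyY conversion performs is just x = X/(X+Y+Z), y = Y/(X+Y+Z). A minimal Python sketch (not the actual ImageMagick code); note that nothing in this arithmetic confines (x, y) to the horseshoe, so if the upstream conversion hands it an out-of-range XYZ, the point simply lands outside:

```python
def xyz_to_xy(xyz):
    """Project an XYZ tristimulus triple onto the xy chromaticity plane."""
    X, Y, Z = xyz
    s = X + Y + Z
    return X / s, Y / s

# Equal-energy white lands at (1/3, 1/3), comfortably inside the horseshoe.
x, y = xyz_to_xy((1.0, 1.0, 1.0))
```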

Why do some pixels translate to xy values that are outside the horseshoe curve? Possible hypotheses are:

  1. I’ve made an error somewhere.

  2. My understanding is wrong, and we should expect values outside the horseshoe curve because …

  3. The dcraw conversion to XYZ is wrong.

  4. The dcraw conversion to XYZ is correct only for some photos. I think it’s a simple 3x3 matrix. Perhaps for accuracy it should be a more sophisticated transformation, or a different 3x3 matrix made (e.g. with a colour chart) for each lighting condition. (If it needs different matrices, why?)

If the answer is (3) or (4), can I expect a more accurate transformation from other raw converters, e.g. RawTherapee? Or is something more sophisticated on the horizon? Perhaps something could be added: if any pixels fall outside the horseshoe, adjust the 3x3 matrix automatically.
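To illustrate hypothesis 4: a fixed 3x3 camera-to-XYZ matrix can send a perfectly valid sensor response outside the horseshoe. The matrix below is invented for illustration; it is NOT the real D800 coefficients that dcraw uses:

```python
import numpy as np

# Hypothetical camera-to-XYZ matrix; values invented for illustration,
# not the real D800 coefficients.
M = np.array([
    [0.7, 0.2,  0.1],
    [0.3, 0.9, -0.2],
    [0.0, 0.1,  1.2],
])

def cam_to_xy(rgb):
    """Apply the matrix, then project to xy chromaticity."""
    X, Y, Z = M @ np.asarray(rgb, dtype=float)
    s = X + Y + Z
    return X / s, Y / s

# A pixel where only the blue channel responds (deep violet, or noise):
x, y = cam_to_xy([0.0, 0.0, 1.0])
# With these invented coefficients y comes out negative, i.e. the pixel
# plots below the horseshoe, outside the diagram entirely.
```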

The Windows BAT script, which uses dcraw and ImageMagick built with my process modules, is:

rem %1 is directory containing *.NEF files.
rem Subdirectories will be searched.

set FILELIST=\temp\nefg.lis
set TMPXYZ=\temp\ng.tiff
set TMPOUT=ng.miff
set OUTFILE=ng.png

rem Initialise a black accumulator image.
%IM7DEV%magick ^
  -size 512x512 xc:Black ^
  -define quantum:format=floating-point -depth 32 ^
  %TMPOUT%

@for /R %1 %%F in (*.NEF) do (
  @echo %%F

  %DCRAW% -v -4 -w -W -o 5 -T -O %TMPXYZ% %%F

  rem Convert XYZ to xyY, plot the chromaticities, and accumulate with Plus.
  %IM7DEV%magick ^
    %TMPXYZ% ^
    -strip ^
    -set colorspace XYZ ^
    -colorspace xyY ^
    -set colorspace sRGB ^
    -process 'plotrg dim 512 verbose' ^
    -flip ^
    %TMPOUT% ^
    -compose Plus -composite ^
    -define quantum:format=floating-point -depth 32 ^
    %TMPOUT%
)

rem Threshold: any plotted pixel becomes white.
%IM7DEV%magick ^
  %TMPOUT% ^
  -fill White +opaque Black ^
  %OUTFILE%

rem Superimpose the CIE horseshoe in red.
call %PICTBAT%cieHorseshoe %OUTFILE% nefGamuts.png

(Elle Stone) #2

If you search around the forum you’ll find several posts that talk about this issue. It’s the camera input profile. There are various ways to deal with these colors. I don’t feel like searching through the forum to pull up the posts, so you are on your own (says @Elle most unhelpfully :slight_smile: ). The problem colors seem to be saturated violet blue (neon signs, backlit dark blue glass) and bright saturated yellows and yellow-greens (dandelions and backlit green leaves).

(Alan Gibson) #3

Thanks, Elle. Yes, I’ve gathered from discussions and your web pages that the “camera input profile” should be calculated for each lighting condition. Perhaps we conclude that, unless we do this, we can expect points to be outside the curve. Conversely, if we get points outside the curve then we know the camera input profile was wrong.

If I understand correctly, the problem is that a simple 3x3 matrix is insufficient to characterize the camera, so a 3x3 matrix is merely an approximation, a best-fit for certain colours.

I’m unclear whether a 3x3 matrix is the best possible solution to OOH (out-of-horseshoe) colours, in which case the answer is to lug colour charts around with us, or whether a more sophisticated transformation is possible that would ensure we don’t get OOH colours at all.

(Elle Stone) #4

Well, “should” and “practically speaking let’s all do this” are two different things. I just use one profile shot under D50-ish direct morning sunlight. Getting a new target chart (my old chart is just too old to use) and using it to make multiple lighting conditions profiles is on my “to do” list, but it’s not moving up the list very fast :slight_smile: .

But the color of the light for the profile vs the actual image isn’t the problem that creates the “problem colors”. The problem is the camera sensor with its little color caps vs linear gamma matrix input profiles. Beyond that you’ll have to ask someone who knows a lot more about sensors and profiles than I do.

“That” it’s a problem is easy to show, as you just did. The ultimate problem lies in sensors not meeting the Luther-Ives condition, but again that’s just words to me, a concept waiting for hugely more information, none of which will solve the practical problem of dealing with cameras that sometimes produce problem colors when assigning our usual matrix input profiles.

I deal with these problem colors by also making a LAB LUT profile, which I use only to help “move” the imaginary problem colors (in images that have these colors) back into the realm of real colors, using masks and layers.

Another solution, discussed in the documentation for a camera profiling software whose name I can never remember (@ggbutcher linked to it recently), is to modify the target chart reference file to artificially move the problem colors before making the camera input profile.

Or just treat those colors like a standard soft proofing problem and use your editing tools to move them back into the realm of real colors.

(Glenn Butcher) #5

Here y’go:
I’ve read parts carefully and skimmed others; it’s a great treatise on the topic. His dcamprof software has options that introduce adjustments for things like extreme colors. I messed with that to get a better rendition of an image I shot in a theater with extreme blue accent lights. Funny thing: the best profile was my original Sunlight camera profile, where I used dcamprof to parse it to JSON, manually modified the blue Y value, and used dcamprof to reconstruct the ICC from the JSON. Oh, and the Adobe Standard profile (snarfed from a download of the Adobe DNG Converter) did the second-best job, but I haven’t torn it apart to see what the differences are…

I’ve been making chromaticity plots of my camera profiles, and they all have bounds that exceed the visible space in some way. Far be it from me to explain it, but I think that’s one of the reasons you should be converting from that to an, as @Elle would put it, ‘well-behaved’ working profile. Note that not all working profiles confine themselves to the visible space, ProPhoto being the usual example.


Besides the things @elle and @ggbutcher rightfully bring up, also remember that a 3x3 matrix profile will always be a triangle in xy space, and it is quite possible for all the colors the camera can capture to lie inside the locus, but arranged such that it is impossible to fit a triangle around them without putting the points of the triangle outside the spectral locus. Note that this does mean that the R/G/B elements of the sensor are not quite independent of each other.
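A quick way to see the triangle: the xy gamut of a matrix profile is spanned by the chromaticities of the matrix’s three columns (its effective primaries). A sketch with the same kind of invented matrix as above (not any real camera’s coefficients):

```python
import numpy as np

# Invented 3x3 camera-to-XYZ matrix, purely for illustration.
M = np.array([
    [0.7, 0.2,  0.1],
    [0.3, 0.9, -0.2],
    [0.0, 0.1,  1.2],
])

def xy(v):
    """Chromaticity of an XYZ triple."""
    X, Y, Z = v
    s = X + Y + Z
    return (X / s, Y / s)

# The triangle's vertices are the chromaticities of the matrix columns.
primaries = [xy(M[:, i]) for i in range(3)]

def in_triangle(p, a, b, c):
    """Same-side test for point p against triangle abc."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 >= 0) == (s2 >= 0) == (s3 >= 0)

# Any nonnegative camera pixel projects inside that triangle...
p = xy(M @ np.array([0.2, 0.5, 0.3]))
inside = in_triangle(p, *primaries)
# ...but here the "blue" vertex itself sits below y = 0, outside the
# spectral locus, so the triangle pokes out of the horseshoe.
```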


Camera gamuts go outside the horseshoe because no real color can ever excite only one channel on the sensor. But noise in dark regions can, and that’s why you get spurious, impossible colors.
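The noise mechanism is easy to simulate: model a black patch with small independent read noise per channel, clipped at zero, and count how often a pixel has signal in exactly one channel, a stimulus no real spectrum can produce. (Synthetic numbers, purely illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A black patch with independent per-channel read noise, clipped at zero.
pixels = rng.normal(0.0, 0.002, size=(10000, 3)).clip(0.0, None)

# Pixels where only the blue channel fired: no real spectrum can do this,
# so a matrix profile must map them to imaginary chromaticities.
blue_only = pixels[(pixels[:, 0] == 0) & (pixels[:, 1] == 0) & (pixels[:, 2] > 0)]
frac = len(blue_only) / len(pixels)
# Roughly 1/8 of the pixels, since each clipped channel is zero about
# half the time.
```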

(Glenn Butcher) #8

Oh and I’d be remiss if I didn’t point out that cameras don’t have a gamut, they have a spectral sensitivity. I got schooled on that at dpreview. ‘Gamut’, I guess, is for output devices…

(Elle Stone) #9

Noise in dark regions is definitely a source of spurious and imaginary colors. But saturated violet-blue and bright saturated yellow and green-yellow colors that fall outside the horseshoe are actually not from noise at all, but rather simply from the matrix profile mapping what the sensor recorded to imaginary colors. This doesn’t happen with Lab Lut profiles, but Lut profiles bring their own issues.

Try photographing a blue neon sign or backlit colorful blue glass or a really yellow flower or yellow-green leaf back-lit by sunlight. This article shows examples from my old Canon camera:

My Sony A7 camera acts similarly, and Torger’s documentation talks about the problem, and I recall darktable documentation talks about violet-blue as a problem color. I’m pretty sure it’s not a problem with all cameras, but it seems fairly common.


What sorts of issues? I forget if you discussed that somewhere. I guess they would mess up the distances between colours.

(Graeme W. Gill) #11

Because the spectral sensitivities of the camera are not the same as the standard observer.

The implications of this are that there are some spectra that the camera sees as different but that we see as the same, and vice versa.
This means that it is impossible to map all the camera responses to the colors we would have perceived in the original scene. So any conversion (profile) is a trade-off. Simple conversions such as 3x3 matrices may have severe trade-offs. They could map all the camera values to real colors, but the accuracy is likely to be very poor. If they map common and critical colors more accurately, they may then push other colors outside the spectrum locus.

Of course when these colors get converted to other devices’ colorspaces, they will probably be gamut-clipped or compressed.

More fine-grained conversions (profiles) such as 3D or 2.5D cLUTs may be able to make better trade-offs.
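The trade-off can be made concrete: fitting a matrix profile amounts to a least-squares fit of camera responses to measured XYZ values, and any behaviour the matrix cannot express ends up as per-patch error. A toy fit on entirely synthetic data (nothing here is real profiling code, and the numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic camera responses for 24 "patches" (think MacBeth chart).
C = rng.uniform(0.05, 1.0, size=(24, 3))

# Synthetic "true" XYZ: a linear part plus a nonlinear term that no
# 3x3 matrix can reproduce (all values invented for illustration).
A = np.array([[0.6, 0.3, 0.0],
              [0.2, 0.7, 0.1],
              [0.2, 0.0, 0.9]])
T = C @ A + 0.05 * C**2

# Best compromise matrix in the least-squares sense.
M, *_ = np.linalg.lstsq(C, T, rcond=None)
resid = np.abs(C @ M - T)
# The average error is modest, but the worst patch is noticeably worse:
# the "accurate for common colors, wrong for extremes" trade-off.
```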


Phase One has their Trichromatic back, which has less-overlapping spectral sensitivities for the three colors. That promises better color reproduction for extremely saturated colors, but it was measured to have lower color accuracy for normal, everyday subjects.

(Elle Stone) #13

Oh, that’s interesting - do you have a link? I’ve always wished camera manufacturers would lower the overlap, which seems to have actually increased over time, sacrificing color accuracy to allow greater dynamic range, or so was my assumption. But what you say about the Trichromatic back seems to put a different spin on things.


I can’t seem to find the review that tested the color accuracy on a test chart anymore. But it makes sense: if there’s little overlap between the channels, then, for example, two different monochromatic red wavelengths cannot be distinguished from each other.

It seems that the reduced overlap has benefits for chromatic aberration correction; when the channels overlap, CA turns into stronger blur and purple fringing.

They say the WB is “exactly the same” between both cameras but that probably just means in Capture One; with whatever light source they’re using the cameras clearly respond differently. It makes the color reproduction kinda hard to judge there.

(Elle Stone) #15

The first issue is: does the target chart even have enough patches to make a good LUT profile, and are the color patches of sufficient, um, spectral diversity (is that the proper term?). Anyway, made with more than just three or four inks.

The most commonly used target charts are the MacBeth 24 patch and various IT8 target charts (around 150 patches depending on the particular version), none of which has enough patches to make a good LUT profile. Or at least this is my conclusion based on the recommended number of patches for making LUT monitor and printer profiles.

Then there’s the temptation to ask colprof to make a LUT profile that’s way more “accurate” than the target chart itself would support, resulting in “curve fitting” that makes the target chart shot look really good, but produces odd colors in actual image files.
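A one-dimensional analogy of this “curve fitting” problem: a high-order polynomial through a handful of training points plays the role of a LUT fit tightly to a small chart. It matches the chart exactly but misbehaves at colors the chart never sampled. (Entirely synthetic; sqrt() just stands in for the true camera response.)

```python
import numpy as np

# 10 "patches" along one axis, with sqrt() as the stand-in true response.
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sqrt(x_train)

# A degree-9 polynomial passes through all 10 points exactly,
# like an over-fitted LUT.
coef = np.polyfit(x_train, y_train, 9)

# On the chart itself the fit looks essentially perfect...
err_train = np.abs(np.polyval(coef, x_train) - y_train).max()

# ...but at in-between "image colors" the fitted curve wiggles.
x_test = np.linspace(0.02, 0.98, 200)
err_test = np.abs(np.polyval(coef, x_test) - np.sqrt(x_test)).max()
```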

I put up some examples of “curve fitting” profiles here:

When profiling your camera with ArgyllCMS, what type of profile should you make?

Over in the ArgyllCMS archives, if you search around, you’ll see that the people making LUT camera profiles are making these profiles from custom made targets under controlled studio lighting.

I don’t know enough about cameras and input profiles to know whether LUT profiles are even any good at all for general purpose “all lighting” camera input profiles - @gwgill?

The DNG camera input profiles have a LUT, but that LUT isn’t in “camera space” - there’s a matrix input profile that’s used to get from camera space to RGB (I think linear ProPhoto?) before the LUT is applied.
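As I understand that description, the structure is two-stage. A sketch in Python (the names are mine and this is a simplification, not the DNG specification):

```python
import numpy as np

def dng_like_transform(cam_rgb, forward_matrix, lut):
    """Two-stage shape described above: matrix first, LUT second.

    forward_matrix: 3x3, camera space -> wide-gamut linear RGB
    lut: a correction applied only AFTER leaving camera space
    (names and structure are illustrative, not the DNG spec).
    """
    wide_rgb = forward_matrix @ np.asarray(cam_rgb, dtype=float)
    return lut(wide_rgb)

# With an identity matrix and identity LUT, pixels pass through unchanged.
out = dng_like_transform([0.2, 0.4, 0.6], np.eye(3), lambda rgb: rgb)
```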

(Elle Stone) #16

That’s a fascinating read. My first reaction wasn’t “Oh look how great the trichromatic back colors look” (they do look nice) but rather “Why are the frames from the other back so awful looking?”.

Cadbury purple - that’s a nice excuse to go buy a Cadbury chocolate bar and take some photographs using my Sony A7-I (hardly the world’s greatest camera) and my old manual focus Nikkor 55mm f2.8 macro lens, to see how messed up the colors might be.

Maybe larger format cameras exaggerate chromatic aberration compared to smaller format cameras, even when frames are resized to show the same amount of subject? Or is it that the lenses, being so much larger, are harder to make?

Is it possible that some of the dramatic reduction in fringing/chromatic aberration is from internal processing post-capture but before saving the raw file? or somehow from “lens cap-color-specific” changes in focal points?

(Alberto) #17


Yes, see:


It’s that when you have purple/green first-order CA like


on a camera where the green channel overlaps red a lot (fairly common) you’d get:

[image: green fringes on both edges]

Even if you try to correct the green offset, it never lines up fully correctly, because it’s been excited by the red channel too.

But when each channel captures relatively non-overlapping spectra, you can realign them much more cleanly.

(Elle Stone) #19

If the color of the Standard back really is more accurate as the “strollwithmydog” articles say, then presumably the two raw files in the f-stoppers article were given rather different white balances, because the colors in the Standard version are not good.

I did set up a test shot with a brand of pasta that comes with a purple wrapper that’s pretty close to Cadbury purple, and also a package of Lindt milk chocolate that has a nice blue gradient, and a McCormick’s box with vanilla flavoring inside (for red and yellow), plus Andes thin mints (for green), with parts of an IT8 test chart showing.

I took the photograph in the same room as the monitor, keeping ambient lighting the same (mix of overhead halogen lights plus snowy, cloudy outdoor light from two large windows) for taking the photograph and viewing it on-screen, setting a custom in-camera white balance, processed with PhotoFlow, using my custom camera input profile, with the lens f-stop set to f2.8.

The real colors in the product packaging and the screen colors are visually very, very close. Certainly blue didn’t turn cyan and purple didn’t turn blue. And chromatic aberration is minimal, almost non-existent, showing only in the extreme corners along a straight white line on the target chart.

(Glenn Butcher) #20

Where I’ve found my challenge is with theatrical lighting. I think I already posted my “blue hell” in another thread; if interested, I’ll repost. I think we’ve already seen similar in another thread, a cityscape image where a bridge was illuminated in blue. I’m not sure of this, but both may be due to the use of LED lighting specifically tuned to a wavelength in the blue part of the spectrum, with intense illuminance.

The offending theater is within walking distance of my house; going down there in the day and asking to take a couple of pictures with specific lighting and a color target isn’t out of the realm of my ambition. If you think there’s value in considering such, I’ll up it on my dance card…