Displays and color spaces, surface and emissive colors, Pointer's gamut

Hi @afre - I was looking into the question of colors that appear on surfaces (subtractive, reflective) vs colors that can only be made by light sources (additive, emissive) and ran across an interesting article that mentions CIELUV, and remembered your post. So here’s the link:

The Pointer’s Gamut: The coverage of real surface colors by RGB color spaces and wide gamut displays

Scroll down to the section “CIE 1976 u’v’ chromaticity diagram” where the author mentions several different studies that more or less contradicted each other.

Rather than posting in the existing thread on GIMP and LCh, I decided to open a new topic in case anyone has comments or insights (in which case please share!) on topics mentioned in the TFT Central article, or perhaps a working link to Pointer’s original paper.

My specific interest in the TFT Central article was sparked by the following question:

How do we know what colors out there in the real world can actually be surface colors, vs colors that are only seen when looking at light sources?

I touched on this topic of surface vs emissive colors in an article about making usable LCh color palettes, specifically omitting colors that can’t be printed even on today’s really high-end fine art printers. sRGB blue is such a color:

Apparently sRGB’s bluest blue is an emissive color - well, it’s the color of one of the phosphors used in old CRTs, so that sort of makes sense.

But other than actually measuring colors reflected by as many real surfaces as one can get one’s hands on - which seems to be what Pointer did - how do we know whether any given color that we can produce in the digital darkroom actually can be a surface color?

By “surface color” I mean apart from specular reflections off a surface, which seems to be a special case - if the specular reflection is reflecting sRGB blue displayed on a monitor, well, that doesn’t make sRGB blue a surface color.

Thanks, I agree that this discussion is worth continuing. I don’t remember if I read this one. I hope some capable people will come and rescue us from these burning questions - in layman’s terms, that is. :sunny:

Hi,

This is indeed an interesting question, I find.

There’s a nice article about how surface reflection cannot be both bright and saturated in colour at the same time (0). Later, this gamut of possible surface reflectances was explored further and even plotted (1). That last work can be used to determine whether an RGB triplet corresponds to a valid reflectance or not, and you can perform gamut mapping if it’s outside.

(0) Schrödinger, E. Theorie der Pigmente von größter Leuchtkraft. Annalen der Physik 367, 15 (1919)

(1) MacAdam, D.L. Maximum visual efficiency of colored materials. Journal of the OSA 25, 11 (1935)
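Schrödinger’s result can be sketched numerically: the optimal (“brightest possible for a given chromaticity”) reflectances are 0/1 band-pass curves, and narrowing the pass band trades luminance for saturation. The sketch below is only an illustration - it uses the analytic Gaussian fit of the CIE 1931 colour-matching functions from Wyman, Sloan & Shirley (2013) instead of tabulated CMF data:

```python
# Band-pass reflectances (1 inside the band, 0 outside) are the shape
# Schrödinger showed to be optimal. Watch luminance Y drop as the band
# narrows while the chromaticity moves toward the spectral locus.
import numpy as np

def _lobe(lam, mu, s1, s2):
    # piecewise Gaussian: different widths left/right of the peak
    s = np.where(lam < mu, s1, s2)
    return np.exp(-0.5 * ((lam - mu) / s) ** 2)

def cmfs(lam):
    """Multi-lobe Gaussian fit of the CIE 1931 2-degree CMFs."""
    x = (1.056 * _lobe(lam, 599.8, 37.9, 31.0)
         + 0.362 * _lobe(lam, 442.0, 16.0, 26.7)
         - 0.065 * _lobe(lam, 501.1, 20.4, 26.2))
    y = 0.821 * _lobe(lam, 568.8, 46.9, 40.5) + 0.286 * _lobe(lam, 530.9, 16.3, 31.1)
    z = 1.217 * _lobe(lam, 437.0, 11.8, 36.0) + 0.681 * _lobe(lam, 459.0, 26.0, 13.8)
    return np.stack([x, y, z])

lam = np.arange(380.0, 781.0, 1.0)
XYZ_CMF = cmfs(lam)

def bandpass_XYZ(center, width):
    """XYZ of a 0/1 band-pass reflectance under an equal-energy illuminant,
    normalised so a perfect reflector (1 everywhere) has Y = 1."""
    r = ((lam >= center - width / 2) & (lam <= center + width / 2)).astype(float)
    k = 1.0 / XYZ_CMF[1].sum()
    return k * (XYZ_CMF * r).sum(axis=1)

def xy(XYZ):
    return XYZ[:2] / XYZ.sum()

wide, narrow = bandpass_XYZ(550, 200), bandpass_XYZ(550, 20)
white = np.array([1 / 3, 1 / 3])  # equal-energy white point
```

The narrow band ends up much closer to the spectral locus than the wide one, but at a fraction of its luminance - exactly the bright-vs-saturated trade-off.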

PS: I have code for the gamut mapping somewhere in here:

https://jo.dreggn.org/path-tracing-in-production/2017/index.html


I can’t find Pointer’s 1980 paper online, although the data is available from https://www.rit.edu/cos/colorscience/rc_useful_data.php. But Pointer and others have done more work since then. For example, http://www.color.org/events/colorimetry/Li-ColourGamutICC-CIED1-WorkshopLeeds.pdf shows that some real-world reflected colours fall outside the 1980 gamut.

So the statement in the TFT link…

… isn’t true.

We might say it is nearly true. Or that it is actually true, for a revised definition of “Pointer’s gamut”.

I can’t see a theoretical answer to that. And I suspect it doesn’t exist, because a chemist or engineer might devise a material that absorbs all light except a very narrow band of frequencies, so it reflects only that narrow band, placing it virtually on the rim of the CIE horseshoe.

But a practical answer? Well, just test whether the colour falls inside a (revised) Pointer’s gamut.
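That practical test can be as simple as a 2D point-in-polygon check against the gamut boundary in a chromaticity diagram. A minimal sketch, with the caveat that `toy_boundary` below is a made-up placeholder; a real check would load the boundary vertices from Pointer’s published data (e.g. the RIT page above):

```python
def in_gamut(pt, boundary):
    """Ray-casting point-in-polygon test: does chromaticity pt = (x, y)
    fall inside the closed polygon given by the boundary vertices?"""
    x, y = pt
    inside = False
    n = len(boundary)
    for i in range(n):
        (x1, y1), (x2, y2) = boundary[i], boundary[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:       # crossing lies to the right of the point
                inside = not inside
    return inside

# Toy pentagon standing in for a (revised) Pointer boundary in xy:
toy_boundary = [(0.15, 0.06), (0.30, 0.60), (0.45, 0.52),
                (0.55, 0.32), (0.35, 0.10)]
```

Near-neutral chromaticities land inside the toy boundary; a highly saturated red like (0.70, 0.25) lands outside.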

@hanatos @snibgo - thanks for the links!

@snibgo Regarding whether there’s a theoretical way to separate “surface” from “emissive only” colors (putting all self-luminous surfaces to one side as irrelevant to the question), I think @hanatos may have provided a link to an article that suggests a way to calculate whether a given color can be a surface color. But I haven’t yet downloaded everyone’s links and done the reading, so maybe @hanatos can confirm or clarify.

I did read through the Leeds workshop article, and it’s interesting to note that product packaging has led to more and more saturated surface colors in our visual environment. I find myself hankering for an environment with no such colors anywhere to be seen, except in flowers and such.

@hanatos - if the gamut mapping code you mention can separate surface from “only emissive” colors, is it efficient enough to use with color pickers for programs like GIMP, Krita, MyPaint, etc to allow users to easily see which colors are surface colors? I have several practical reasons for asking this question:

First, I think artists and photographers would enjoy having a readily available check on whether a color is a surface color or not. It’s one thing to paint or saturate a color to deliberately make it highly or slightly unrealistic, for whatever artistic reasons. But it’s quite another to do this totally by accident.

Second, the very small sRGB color space already “pushes” users towards highly saturated colors simply because the primaries are very saturated compared to colors that aren’t near the primaries. Because of how easily and quickly our eyes adapt to the saturation level of the colors already on the screen, there is a constant temptation to paint or add saturation to photographs to produce ever more saturated colors, which is to say, colors that creep closer to the sRGB color space primaries.

Third, with the advent of Rec.2020 monitors, there will be a whole lot more surface colors that artists and photographers can use, especially in the reds, greens, and cyans that sRGB lacks. This will be a really nice thing, allowing digital artists and photographers access to more of the fairly large range of paint pigments and surface colors in nature that fall outside the very small sRGB color gamut. But of course the Rec.2020 primaries also are a whole lot more saturated, lending hugely more scope to unintentionally producing oversaturated and unrealistic images just from one’s eyes adjusting to the existing saturation of the colors on the screen.

As an aside, I feel really uneasy about what might happen to colors in images on the web, television, movies, etc. after Rec.2020 displays become commonly available: there are a lot of articles on the internet about how to expand the color gamut of existing material to take advantage of the larger Rec.2020 color gamut, for example:

Perceptually-based Gamut Extension Algorithm for Emerging Wide Color Gamut Display and Projection Technologies

Imagine taking the typical already highly oversaturated “fall foliage” images and expanding their gamuts to fill Rec.2020:

https://www.google.com/search?q=fall+foliage&source=lnms&tbm=isch&sa=X&ved=0ahUKEwjijJmkhIHdAhUwtlkKHddVANAQ_AUICigB&biw=1326&bih=1050

I’m really hoping any such gamut extension is done with an eye to not producing oversaturated visual garbage, though of course one person’s idea of visual garbage might be another person’s idea of really nice colors. Personally I think it would be better to just show existing material the way it was produced rather than artificially expanding its color gamut.


Hi,

The gamut mapping code has a few variants, as you would expect: optimised for minimum ΔE or for preserving chromaticity, for instance. It’s all based on a precomputed map of the maximum X+Y+Z for every xy chromaticity coordinate. As such it’s not free, but running it on a 2 MP image should take milliseconds.
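The chromaticity-preserving variant described above can be sketched in a few lines. Note that `max_sum_at` here is a hypothetical stand-in: the real code would interpolate the precomputed table of maximum X+Y+Z (the MacAdam limits) at the pixel’s chromaticity, whereas this sketch uses a flat limit just to stay self-contained:

```python
import numpy as np

def max_sum_at(x, y):
    # HYPOTHETICAL placeholder for a 2D lookup into the precomputed
    # MacAdam-limit map; a flat limit keeps the sketch runnable.
    return 2.0

def clamp_to_surface_gamut(XYZ):
    """Scale XYZ down (chromaticity preserved) if it exceeds the
    maximum X+Y+Z a surface reflectance can reach at that chromaticity."""
    XYZ = np.asarray(XYZ, dtype=float)
    s = XYZ.sum()
    limit = max_sum_at(*(XYZ[:2] / s))   # look up limit at (x, y)
    return XYZ if s <= limit else XYZ * (limit / s)
```

Because the pixel is only scaled along its chromaticity ray, hue and saturation are untouched; only the (impossible) brightness is pulled back inside the limit.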

This distinction is, however, based on the assumption of surface reflectance without fluorescence. Also note that image pixels encode illumination times surface reflectance, so perfectly energy-conserving processes can still produce pixels outside this gamut. And if a material fluoresces, it may bring energy from UV into the visible spectrum and produce a bright and saturated colour; I think this is what happens with fabric brighteners and some specially printed product packaging.

That said I still think such a sanity checking tool as you propose makes a lot of sense. Let me know if/how I can help with the code (it’s really not much).


I think the “key” to understanding Pointer gamut is within the equation for converting spectral reflectance to XYZ. That is, you multiply the illuminant by the reflectance curve (to echo a few posts above). The illuminant is therefore the exchangeable ingredient that makes the surface color. The reflectance curve is the true identity of the material and static for a given object regardless of the illuminant.
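That equation is short enough to write out: multiply the illuminant by the reflectance, weight by the colour-matching functions, and sum, with a normalisation so a perfect reflector gets Y = 1. The 5-band spectra below are toy data (real use would take 1 nm CIE tables); only the structure of the computation matters here:

```python
import numpy as np

# toy CMF samples at 5 wavelengths (roughly blue -> red); rows: x̄, ȳ, z̄
cmf = np.array([
    [0.3, 0.0, 0.2, 0.6, 0.4],
    [0.0, 0.2, 0.8, 0.6, 0.1],
    [1.5, 0.3, 0.0, 0.0, 0.0],
])

def reflectance_to_XYZ(reflectance, illuminant):
    """XYZ of a surface: illuminant times reflectance, weighted by the
    CMFs, normalised so a perfect reflector has Y = 1 under this light."""
    reflectance = np.asarray(reflectance, float)
    illuminant = np.asarray(illuminant, float)
    k = 1.0 / (cmf[1] * illuminant).sum()
    return k * (cmf * (illuminant * reflectance)).sum(axis=1)

red_paint = [0.05, 0.05, 0.1, 0.7, 0.9]  # fixed identity of the material
flat = [1, 1, 1, 1, 1]                   # equal-energy illuminant
bluish = [2, 1.5, 1, 0.6, 0.4]           # bluish illuminant
```

Swapping the illuminant changes the resulting XYZ even though the reflectance curve - the “true identity” of the material - never changes, which is exactly the point above.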

We can experience objects under an infinite variety of illuminants. If you did a union of ALL the color perceptions for real world diffuse objects under every potential illuminant (including lasers)... you’d have a gamut very close to the entire CIE chromaticity diagram.

So, the Pointer gamut is most meaningful in the context of Illuminant C, which is why the Li paper above, and Meng (and others) are trying to create a new gamut of real surface colors based on spectral reflectances. Maybe Li will have something to report to the CIE soon? It’s interesting to see how the CIE organizes their projects.

Still, I don’t know how useful a single gamut of “real” colors (spectral reflectances, rather) will actually turn out to be. It might be far more useful to have a database of “real” spectral reflectance curves for particular materials. This paper describes such a system. It would be a lot more work, obviously... but it would allow you to have believable surface colors for wood, concrete, common cosmetics, particular types of paints, etc.


I guess this is where AI comes in. In the near future, when data are bountiful and detection and segmentation are good, it would be able to correct the impossible colours and match them with likely ones. :slight_smile:

Yes, ideally we know the reflectance curves of our materials, so we can apply any illumination within our path-tracers.

But that shifts the desired data from “what are all the real-world reflected chromaticities?” to “what are all the real-world reflectance curves?”

But when I look at a physical plant or tree or house or whatever, I don’t know the reflectance curve. And I can’t carry around a device that would measure it. I can measure the reflected colour under whatever illuminant the sun is providing that day, and simple tweaking provides a crude estimate of the colour that would be reflected under some other illuminant.

When I’m editing, I’d like the software to be capable of warning me that I’ve shifted some pixel out of the (revised) Pointer’s gamut. I can see benefit in that, and it’s easier than implementing a full-spectrum system.

That tree at midday might be in the Pointer gamut, but come sunset it might get pushed out of that gamut.

You could chromatically adapt your color to Illuminant C and then check whether it’s in the Pointer gamut. There’s even a helper function for that in the `colour` Python library. But I gathered from the Li paper that this approach was not that accurate, which is one of the primary motivations for the “real” colour gamut project.
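The adaptation step is a plain von Kries scaling in a cone-response space; a minimal sketch using the Bradford matrix, assuming the source white point is D65 (the `colour` library’s adaptation functions do the same with more options):

```python
import numpy as np

# Bradford cone-response matrix (as used by the ICC)
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

# CIE 1931 2-degree white points
D65 = np.array([0.95047, 1.00000, 1.08883])
C   = np.array([0.98074, 1.00000, 1.18232])

def adapt(XYZ, src_white=D65, dst_white=C):
    """Von Kries adaptation in the Bradford cone space: scale the cone
    responses by the ratio of destination to source white point."""
    lms_src = BRADFORD @ np.asarray(src_white, float)
    lms_dst = BRADFORD @ np.asarray(dst_white, float)
    lms = BRADFORD @ np.asarray(XYZ, float)
    return np.linalg.inv(BRADFORD) @ (lms * (lms_dst / lms_src))
```

By construction the source white maps exactly onto the destination white, and neutrals stay neutral; the adapted XYZ can then be fed into a Pointer-gamut test defined for Illuminant C.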

The above link leads to this very interesting slide presentation:

which has slides on metamerism and cameras (you can’t just assume your camera is showing you scene-accurate colors even if you shoot raw). The slides then move on to discuss computing indirect lighting using spectral vs RGB representations. I’m guessing that discussion might not be all that relevant to photographers, but maybe I’m completely wrong, so I’ll just ask: is it relevant to processing photographs?

I also wanted to ask about the comparisons of different RGB color spaces with the spectral row, which is the bottom row in each comparison (slides 18, 19, 20, 24), and which I’m guessing is the “right” row?

None of the slides seem to indicate that any of the tested RGB color spaces produced particularly good results, not even ACEScg, which was ranked close to Rec.2020 in Mansencal’s paper; in the same paper Rec.2020 itself ranked considerably better than other RGB color spaces such as sRGB:

https://www.colour-science.org/posts/about-rendering-engines-colourspaces-agnosticism/

Your slides would seem to indicate that picking the best RGB color space for rendering is just picking the best of a bad lot, yes? And that which is better and which is worse depends on the actual thing being tested? At least visually, “which RGB space is closest to the bottom row” seems to vary from slide to slide.

But I really don’t know exactly what’s being tested from slide to slide, so I was hoping you could elaborate here :slight_smile: .

Indirect lighting. That is, multiplication. The number of bounces would be the number of interactions. Essentially doing what Thomas did over a smaller sample, extended by a number of indirect interactions.

And I suspect it doesn’t exist, because a chemist or engineer might devise a material that absorbs all light except a very narrow band of frequencies, so it reflects only that narrow band, placing it virtually on the rim of the CIE horseshoe.

Under any broadband light source, such a material’s reflected colour will be highly saturated but its luminance will be very low. That is what sets a gamut limit for real-world reflective surfaces.

On the other hand, it seems feasible to exceed this gamut with a pathological selection of light sources and materials, i.e. UV and fluorescence, laser light sources, etc.

heya,

yes these slides are about 3D light transport sim, so this is indirect lighting very simplified: you just multiply the colours (pretend to be multiple diffuse bounces inside a box with the same material or so). in a sense that’s relevant for photography if you do layer operations i guess. for rendering, it’s very simple: you just do whatever is in the physics book and do your computations in spectral domain, it’ll be correct. the other way around is very simple to argue, too, but i thought it’d be great to show it on the slides: choosing any RGB colour space and doing your compute there is just very wrong and you will see the difference.
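The point above can be shown in a toy example: n diffuse bounces off the same material means multiplying the reflectance spectrum by itself n times, and doing that multiplication on RGB triplets instead gives a different answer. The 4-band spectra and “camera” sensitivities below are toy data; the divergence is structural, not an artefact of them:

```python
import numpy as np

# toy RGB sensitivities over 4 spectral bands (blue -> red)
sens = np.array([
    [0.0, 0.1, 0.4, 0.9],
    [0.1, 0.8, 0.5, 0.1],
    [0.9, 0.4, 0.1, 0.0],
])
refl = np.array([0.9, 0.1, 0.1, 0.9])  # spiky, metamer-prone reflectance

def to_rgb(spectrum):
    return sens @ spectrum

white = to_rgb(np.ones(4))  # response to a perfect reflector
n = 3                       # number of diffuse bounces

# correct: multiply the spectrum per bounce, then project to RGB
rgb_spectral = to_rgb(refl ** n)
# approximation: multiply the per-channel RGB "albedo" instead
rgb_approx = white * (to_rgb(refl) / white) ** n
```

For this spiky spectrum the RGB-domain bounces darken every channel far more than the spectral computation does, which is the kind of error the slides visualise.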

for photography i’m not sure we care much since people do harsh things to their pictures and mostly dial all sliders to 11 and wouldn’t care about physical accuracy. for things like super accurate white balancing or curves with a particular effect on both brightness and colour saturation i think there may be some interesting aspects here.