What are the LCh and JCh values for the sRGB blue primary?

It was a statement that the whole of the discussion around blues-to-violets and reds-to-yellows starts with non-digital representations. In fact, the vast majority of hue linearity research returns to the physical Munsell swatches, for example.

The phrasing that it “isn’t just in the digital darkroom” implies that it started there. It didn’t. It started in the physical world of psychophysical responses.

My apologies to you and also to anyone else who might think that my comment that “violet-blue is an odd color no matter whether in the digital darkroom or out there in the real world mixing real paints” is so dumb a thing to say as to deserve comments like “what are you on about” and “egads”.

It isn’t.

That’s sort of like setting up a straw man so you can knock it down, yes?

My comment wasn’t meant as anything other than “well, that’s another odd thing about violet-blue”. I wasn’t implying any sort of directionality and I somewhat doubt whether anyone reading this long thread was misled to think otherwise.

The long conversation I’ve been having with @briend about mixing yellow and blue has led to a lot of “color exploration”, a nice opportunity to think about colors and how they interact both in the digital darkroom and when using real pigments. I even bought a somewhat better set of oil pastels just to do some more color exploration. And yes, this stuff does seem odd, unexpected, with violet-blue playing a major role in the “well, isn’t that odd”.

Who gave you the high ground to decide when it’s OK and when it’s not OK to think something is odd?

We have whole books and major sections of the handprint website that talk about the unexpected behavior of paint pigments when mixing green from blue and yellow. This sort of behavior of colors surprises people.

We have other color oddities, such as the fact that darkening and desaturating a bright yellow makes the perceived color go to brown or olive green, depending on the exact hue of yellow we start with. And what about the fact that sometimes lightening a black pigment will actually make green? This sort of behavior of colors surprises people, at least the first time they encounter it.

We have @briend’s amazing painting of a man wearing blue pants, when there isn’t a speck of blue in the painting, and in fact the entire palette is confined to a range of oranges. This surprises people. It surprised me so much that I used GIMP’s LCh functions to rotate the hues in his painting all the way around the hue ring, and those pants always looked not just “sort of” the complement of the dominant hues, but decidedly the complement of the dominant hue.

Color perception is full of things that surprise people when they first encounter this or that odd thing. Odd. Stuff nobody would think of as “actually the case” until they had a reason to pay attention to whatever “it” might be.

If color perception in general and in specific situations is such a “not odd” thing, how is it that we are still working on models for describing color perception? We don’t even have a really good model for perceptually uniform colors. Oh, and violet-blue is a significant part of the problem.

At least to me, so many things about color and color perception seem odd, unexpected on first encounter. I hope this never changes - color and color perception are wonderful, amazing, awesome parts of how we perceive the world.

No.

The same person that gets to declare it odd.

It isn’t complex. If it surprises people, they need to look at the spectral nature of light, or perhaps they are weighed down by the baggage of years of broken models or myopic frameworks. How come a three-light opponent model can’t deliver green from blue and yellow? It’s a bit of a no-brainer, albeit an answer that isn’t going to reveal itself while trapped under a particular mental model.
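As a toy illustration of that last point (the function name and values here are my own, purely for demonstration): averaging blue and yellow channel-by-channel in a three-channel RGB model cancels to neutral gray, because the three numbers carry no spectral information about the band between them.

```python
# Hypothetical minimal demo: a 50/50 additive mean of sRGB blue and yellow
# in a three-channel model yields gray, not the green a painter expects.
def mix_rgb_average(c1, c2):
    """Additive 50/50 mean of two RGB triples."""
    return tuple((a + b) / 2.0 for a, b in zip(c1, c2))

blue = (0.0, 0.0, 1.0)
yellow = (1.0, 1.0, 0.0)
print(mix_rgb_average(blue, yellow))  # (0.5, 0.5, 0.5) -- gray, not green
```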

See the entirety of the hue linearity issue, in addition to the above-mentioned spectral breakdown of pigments. Again, these are pretty essential things to consider before getting mired in CIE models. If one is interested in colour, one should start with spectral and distill down to three-light models.

There’s a nonlinear system at work, and that makes it a give and take between opposing poles. One could make a pretty strong case that the various CAMs do a darn good job at making predictions. In terms of hue linearity, though, that is somewhat at odds with the basic tenets underlying CAMs, hence the struggle to develop a unifying model. There’s been some good research on the nature of hue linearity that, wait for it, potentially links it back to spectral compositions.

It isn’t really about violet-blue any more than it is about red-yellow, green-yellow, or nearly any other axis.

Additional reads for those interested:

Thanks @Elle for the compliment on my painting. I really do need to fit more of that into my schedule :slight_smile:
I do think the Abney effect is a significant part of the “problem”. I tried to recreate the color chart on the Wikipedia page, but with subtractive spectrally mixed colors. The Abney effect is almost entirely avoided, for some reason:

[screenshot: subtractive spectrally mixed recreation of the Wikipedia Abney-effect chart]

Here’s a side-by-side example of white on sRGB Blue. Same brush, same exact settings, weighted geometric mean mixing, linear light, etc. The only difference is the one on the left is 36 spectral and the one on the right is 3 RGB:

If you’re thinking, “OK, maybe just using wide-band spectral is really all there is to this”, that’s not entirely true either. Here is the exact same brush again as the first (above, left), but instead of subtractive, this is the normal additive mean. Still spectral, though:

So we get the full Abney effect regardless of full spectrum versus 3 narrow bands, but really only with linear mixing models, and of course the wiki article only talks about “adding” white light. …
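The hue drift that the linear mixing models produce can be put in numbers (a sketch, not anyone’s actual brush code; function names are mine, the matrices are the standard sRGB/D65 and CIE Lab definitions): mix sRGB blue toward white in linear light and watch the LCh(ab) hue angle move, even though only neutral white was added.

```python
import math

# Mix sRGB blue toward white in *linear* light and report the CIE LCh hue
# angle at each step.  If the mixing path were hue-constant, h would not move.
M_SRGB_TO_XYZ = ((0.4124, 0.3576, 0.1805),
                 (0.2126, 0.7152, 0.0722),
                 (0.0193, 0.1192, 0.9505))
D65 = (0.95047, 1.0, 1.08883)

def linear_rgb_to_xyz(rgb):
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in M_SRGB_TO_XYZ)

def xyz_to_lch_hue(xyz):
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / n) for v, n in zip(xyz, D65))
    a, b = 500 * (fx - fy), 200 * (fy - fz)
    return math.degrees(math.atan2(b, a)) % 360

blue, white = (0.0, 0.0, 1.0), (1.0, 1.0, 1.0)
for t in (0.0, 0.25, 0.5):
    mix = tuple((1 - t) * c1 + t * c2 for c1, c2 in zip(blue, white))
    print(f"{t:.2f} white: hue = {xyz_to_lch_hue(linear_rgb_to_xyz(mix)):.1f} deg")
```

Pure sRGB blue lands near the familiar hue angle of about 306°, and the hue angle drifts by more than ten degrees by the time the mix is half white.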


It is, but only over a limited, real-world gamut. It wasn’t purely optimized to be perceptually uniform, although it is to a fair degree. Optimization (fitting) also pulled it in the direction of modelling viewing-condition effects.

So it’s reasonable to use it for many tasks, but it shows its limits when you get near the edge of real world colors, or try to go beyond (i.e. ProPhoto primaries etc.)

No, your assumption was correct! Perceptual uniformity is a required quality for a CAM, or any colour model for that matter, if you intend to use that model to make predictions about colours, measure colour differences, etc.

As @gwgill put it well (and described somewhere above), CIECAM02 suffers from having been fitted to a dataset that was probably not large enough. You end up with CAT02 generating negative values for some spectrally sharp or imaginary colours, and thus related failures with wide-gamut colourspaces, etc.
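That negative-values failure is easy to see numerically (a sketch, not a CIECAM02 implementation; the matrix and the 450 nm colour-matching values are the published ones, everything else here is my own naming): push a spectrally sharp blue through the CAT02 matrix and the sharpened cone-like responses go below zero.

```python
# Standard CAT02 chromatic-adaptation matrix from CIECAM02.
CAT02 = ((0.7328, 0.4296, -0.1624),
         (-0.7036, 1.6975, 0.0061),
         (0.0030, 0.0136, 0.9834))

# CIE 1931 2-degree colour-matching values for monochromatic light at 450 nm.
XYZ_450 = (0.3362, 0.0380, 1.7721)

R, G, B = (sum(m * c for m, c in zip(row, XYZ_450)) for row in CAT02)
print(R, G, B)  # R and G both come out negative for this stimulus
```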

It is a very good CAM, maybe the best available to date (along with its CAM16 flavour), so depending on what you are doing it will make great predictions, but when it fails, it does so greatly :slight_smile:

CAM02-UCS (along with its CAM16 flavour) is a good uniform colourspace based on CIECAM02.

For anybody wanting to get up to speed without reading Color Appearance Models by Fairchild (which I highly recommend you do anyway),

Luo, M. R., & Li, C. (2013). CIECAM02 and Its Recent Developments. In C. Fernandez-Maloigne (Ed.), Advanced Color Image Processing and Analysis (pp. 19–58). New York, NY: Springer New York. doi:10.1007/978-1-4419-6190-7

is a great compact read.

Cheers,

Thomas


Even if you are not specifically interested in Color Appearance Models, it has some great introductory chapters on color and color perception.

@KelSolaar and @gwgill - thanks! for the references and for letting me know that CIECAM02 really is more perceptually uniform than LAB.

Regarding the problem of having been fitted to a data set that wasn’t large enough, even in the digital darkroom my own interest is strictly in colors that can be printed or painted (using the same pigments artists use) without resorting to optical brighteners or other exotic ways to make colors brighter or more saturated. So a practical question is this: When working in a large RGB color space like Rec.2020 (sRGB excludes far too many paintable and printable colors), and keeping as far away from the primaries as required to avoid unprintable/not-paintable colors, how badly would the “fitted to a too-small dataset” affect using JCh? In case this question is just plain confused, please enlighten!

A couple of years ago I got a copy of Fairchild’s Color Appearance Models from our local library and read it. At the time my reaction was “That book needs actual pictures to illustrate the disadvantages of each model, that in turn led to the creation of the next model”. But I’ll check it out again and try rereading it, and also see if I can find a copy of the Luo paper - fortunately we have a university library nearby that allows non-students to use their resources.

Regarding the possibility (that would surely make @afre happy I’m guessing!) of perhaps adding code to GIMP that would allow picking colors using JCh (some version of), just how complicated are the equations, assuming one sets the “appearance model” parts to be as simple as possible? I’ve been looking for something for JCh that’s along the lines of Lindbloom’s explanation of the XYZ to LAB equations, such that I could focus more on “turn the equations into code” and less on “by the way what do these equations actually mean” - is there such a reference? I’ve looked at the ArgyllCMS code, but haven’t really been able to figure out where to start in terms of “What does this code actually do to modify the input XYZ values”, such that maybe I could write similar code for GIMP.
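For what it’s worth, the Lindbloom-style XYZ → Lab → LCh chain mentioned above translates into code quite directly (a minimal sketch; function names and the D50 reference white are my choices). This is Lab-based LCh, not CIECAM02 JCh: the full CIECAM02 adds a CAT02 chromatic-adaptation step, a luminance-dependent cone compression, and the viewing-condition parameters on top of a broadly similar polar (atan2) step at the end.

```python
import math

D50 = (0.96422, 1.0, 0.82521)  # ICC/Lindbloom reference white; D65 also common

def xyz_to_lch(X, Y, Z, white=D50):
    """XYZ -> Lab -> LCh(ab), following Lindbloom's published equations."""
    eps, kappa = 216 / 24389, 24389 / 27  # Lindbloom's exact rational constants
    def f(t):
        return t ** (1 / 3) if t > eps else (kappa * t + 16) / 116
    fx, fy, fz = f(X / white[0]), f(Y / white[1]), f(Z / white[2])
    L = 116 * fy - 16
    a, b = 500 * (fx - fy), 200 * (fy - fz)
    C = math.hypot(a, b)                       # chroma: radial distance in (a, b)
    h = math.degrees(math.atan2(b, a)) % 360   # hue angle in degrees
    return L, C, h

# The reference white itself should land at L = 100, C = 0.
print(xyz_to_lch(*D50))
```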

I think it would be nice to find a balance between making images and improving image-making software. Maybe schedule in a painting between rounds of coding?

Regarding subtractive rather than additive, I got the impression that the Wikipedia article was only talking about adding white light to monochromatic light, and not about mixing actual paint pigments on a surface and looking at the resulting color, so only applies to additively mixed colors. Anyway, I pulled out my oil pastels and mixed blue with white to make a fairly long more or less even gradient, and the apparent hue didn’t change much if at all, stayed blue all the way. But maybe the blue wasn’t violet enough.

On the other hand, mixing bright yellow oil pastel with black (or rather a mixture of black and white) does make olive green, precisely as discussed on MacEvoy’s website. So is the Abney effect only about adding white light to monochromatic light? Or is it also about adding black pigment to make darker colors from bright pigments?

I had never heard of the Abney effect so thanks! for the link.

Anyway, @briend - you’ve convinced me of the value of your spectral subtractive mixing - anything that avoids that awful transition from blue to purple when mixing in white has got to be a step in the right direction :slight_smile: .

Would you be willing to show some samples mixing magenta with cyan? I spent some time mixing oil pastels, using the nearest Crayola Portfolio yellow, cyan, and magenta working from MacEvoy’s page on triad color palettes handprint : "primary" triad palette, and was surprised by how much darker the purples, blues, and green-blues that you get from mixing the cyan and magenta oil pastels are, compared to the unmixed colors. Yes, I know, subtractive color mixing and all, of course the mixed color is darker. But “knowing” and “seeing” are two different things. So I’m curious as to how your spectral mixing algorithms handle colors like cyan and magenta (I think I already sent you approximate LCh values for the Portfolio pastels).

I googled for the “corrected” chromaticity diagram that the wiki page mentioned, and found this diagram from Fairchild’s book:
[screenshot: the “corrected” chromaticity diagram from Fairchild’s book]

So I don’t know if we really have an answer as to why white and blue pigments will seem to maintain a constant hue. I wonder whether this situation would change if the white paint were a perfect reflector and the illuminant were E.

Yes! I win! I win! ;-). Kidding aside, I learned a lot just arguing. I was so focused on blue+yellow and how much better the spectral mixing “feels”, that I never even noticed how crazy the Abney effect was. I think the Abney effect accounts for the majority of my dissatisfaction with digital painting all these years. It’s one of those “once you see it, you can’t unsee it” things.

Sure! So, remember that with oil pastels you’re probably making “layers” as well, which is more of a multiply rather than the weighted geometric mean a true mixing would yield. So here are three gradients: the top is just a mix (weighted geometric mean) and the next two introduce that multiplication/layering aspect:

[screenshot: three magenta-to-cyan gradients, plain mix vs. layered variants]

Here’s the same idea but on the canvas with brush settings, and a bit more intense multiplicative effect for the middle and bottom. The multiplicative setting is much trickier to handle, because it is a compounding issue when dabs are drawn on top of each other. I really need to implement a temporary stroke layer to fix this, I think. Ideally most of these ideas could be implemented.
[screenshot: the same gradients painted on canvas with brush settings]

Cheers,
Brien

That may be true to some degree, but creating such data sets is hard work, and it may not be practical to extend them as wide as one would like, and is certainly impossible to extend them to imaginary colors!

One of the nice things about the Kunkel and Reinhard approach is that they start with a neurophysiology-inspired model and fit it to the data. CIECAM02 appears to be less constrained by human feasibility, and more in the direction of arbitrary mathematical modeling. Within the gamut of the training data set, this difference isn’t of great significance, but I think the former approach is key to getting more rational outputs near the edges of, or beyond, the real-world color boundary.


That is a really cool diagram - thanks! I tried experimenting with using GIMP’s LCh Hue-Chroma tool to see how many degrees I needed to rotate the blue layer with varying opacities of an overlaying white layer set to normal blend, to be able to say “yes, that looks like about the same hue” and sure enough the more white, the greater the required hue rotation.

Can you explain how “layers” is more of a multiply, compared to blending the two colors?

Actually for my “triad blending” experiment I sliced off chunks from the yellow, cyan, and magenta sticks and used a wooden ice cream stick to thoroughly blend the various color combinations.

Your rows go from sRGB magenta to sRGB cyan, and my magenta and cyan oil pastels surely aren’t even close to the resulting colors :slight_smile: . But in terms of saturation, your top row is the closest.

Consider two identical grey paints. If you mix them in a bucket in any ratio, the resulting color is going to be identical. That’s the weighted geometric mean at work. For example, considering just one channel/wavelength: a grey paint might be 0.5, i.e. it reflects 50% of the light. So if you mix 1 part of this with 3 parts of an identical paint, the formula resolves this way:

0.5^0.25 * 0.5^0.75 = 0.5 (the same)

However, if these paints were translucent (transmissive) and stacked as 2 layers, the formula would be more like
0.5 * 0.5 = 0.25. (much darker)

But changing modes like this flips the assumption of the illuminant being above the plane to behind the plane. There are a lot of problems to sort out. . . I really don’t want to make a ray tracer. . . but. . . :smiley:
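The two mixing rules above can be sketched per channel/wavelength like this (function names are mine; this is an illustration of the arithmetic described, not MyPaint’s actual code):

```python
# Weighted geometric mean (pigments stirred together) versus straight
# multiplication (translucent layers stacked, i.e. glazing), per channel.
def wgm_mix(r1, r2, w1):
    """Weighted geometric mean of two reflectances; w1 is the share of paint 1."""
    return r1 ** w1 * r2 ** (1.0 - w1)

def layer(r1, r2):
    """Two translucent layers stacked: the transmittances multiply."""
    return r1 * r2

# 1 part grey mixed into 3 parts of the identical grey: unchanged, ~0.5
print(wgm_mix(0.5, 0.5, 0.25))
# the same two paints stacked as layers: 0.25, much darker
print(layer(0.5, 0.5))
```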

@Elle here are a couple more samples from the palette you sent me. Plain spectral WGM for all 4 rows (no multiply stuff):
[screenshot: four rows of plain spectral WGM gradients from the sample palette]

Below is a sample of two manual blends. The top is RGB, and the bottom is spectral. It’s a subtle difference in appearance, but what is unseen is how much easier it is to blend with the spectral model. For whatever reason, blending in RGB is more difficult; the swings between colors are much more sudden and abrupt. With the spectral model, blending feels more dampened and smoothed out. It’s weird.
[screenshot: manual blend comparison, RGB on top, spectral below]


OK, I’m having trouble visualizing why multiply would be the correct formula for a translucent layer of paint of color 2 on top of an existing layer of paint of color 1. How translucent does color 2 need to be? Aren’t you describing “glazing”? When glazing for example translucent yellow paint (color 2) over dark green paint (color 1), is the resulting color darker or lighter than the original dark green paint?

What about mixing what MacEvoy calls “synthetic black”:
handprint : colormaking attributes and click on “black” at the top of the page, look in the notes for lamp black (PBk6 and PBk7).

When mixing the red, green, and blue oil pastels from my set of 12 Mungyo water-soluble oil pastels, the result really is “black” meaning a very dark color that anyone would look at on a white background and say “that’s black”, though after drying, or maybe just when looking under better light, it’s a dark gray.

@briend - would it be worth continuing an exploration of your spectral color mixing compared to actual mixing of real media in a new thread over in the Digital Painting section of the forum? I’m very intrigued by how real paints actually mix to make colors, though my sample “real paints” are limited to various lines of oil pastels. For example, MacEvoy mentions somewhere that mixing yellow with black can make lovely shades of green, and by gosh he’s right, though surely “how green” and even “if green” depends on the particular black and yellow colors.

One aspect of mixing real paint is the lovely areas of “color texture” created by the “not completely blended” colors that result from applying and blending the colors on the substrate. These “textural colors” are something I’ve been experimenting with in GIMP, using things like more or less complicated gradients spanning several different colors, plus painting through a mask, and so on. But it seems to me that your spectral mixing might also allow “partial random mixing”, yes? no?

The topic came up once, a long time ago, in a different context, of using the xyY color space to create a color wheel.

Yesterday I wrote some code for babl to add conversions between XYZ and xyY, with the thought that it might be nice to add “xyY” to GIMP’s color picking tools, which would make visual exploration of things like “blue turns purple” a little easier to do, not just for people who use command line tools like @KelSolaar 's wonderful colour-science, but also for ordinary users who are interested in color in the context of actual painting or image editing.

But then the question was “how to make a color wheel” from xyY. Obviously one can do a polar transform of the “xy” part of xyY, just as one can do a polar transform of the “ab” part of LAB or JAB to get C and h. But what would one call - how would one label - the xyY polar transform’s hue angle and square root of sum of x squared plus y squared?

Of course a prior question might be “how useful would such a wheel be in a painting/editing application”, other than for purely “what really happens” purposes, which of itself seems interesting and useful enough.
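To make the question concrete, here is the literal polar transform as described, with the pole at x = y = 0 (function names are mine; for anything hue-like, a white point such as D65’s (0.3127, 0.3290) would be the more usual choice of pole):

```python
import math

def xyz_to_xyy(X, Y, Z):
    """XYZ -> xyY chromaticity coordinates."""
    s = X + Y + Z
    return (X / s, Y / s, Y)

def xyy_polar(x, y):
    """Angle and radial distance of (x, y) about the pole at the origin."""
    angle = math.degrees(math.atan2(y, x)) % 360
    radius = math.hypot(x, y)
    return angle, radius

# Linear sRGB blue primary's XYZ; its chromaticity is the familiar (0.15, 0.06).
x, y, Y = xyz_to_xyy(0.1805, 0.0722, 0.9505)
print(xyy_polar(x, y))
```

Whatever the eventual labels, the numbers themselves are just this angle and radius; the naming question remains open.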

It’s a fair question.

Given the popularity of HSV interfaces, the clear answer is “quite useful”; we have to remember it is at the bottom of every UI whether we want to see it or not; it’s in every colour wheel ever used. The Abney effect is there too.

There is an interesting side point to this, one I’ve toiled with in attempting to deliver a “good” camera rendering transform. The question is “Where is the balance?”

If we take the Abney effect into account, we can use a contemporary JzAzBz to deliver perceptually uniform desaturation in rendering scene referred ratios to various outputs. Sadly though, grading / colorist work on such an image would then end up on the other side of the problem. Imagine trying to tweak a render of CGI / Vfx / photographic work after having run through such transforms, attempting to push or pull saturation around. Unless the JzAzBz[1] interface is built into the software (invertible, to get back to xyY), grading such an image will also result in colours losing their intention.

This is as relevant as anything given how critical desaturations are in a camera rendering transform.

[1] CIC 31 Home

A Colour Appearance Model based on Jzazbz Colour Space, Muhammad Safdar, Jon Hardeberg, Guihua Cui, Youn Kim, and Ming Luo; Norwegian University of Science and Technology (Norway), Zhejiang University (China), COMSATS Institute of Information Technology (Pakistan), Wenzhou University (China), Huawei Technologies Co., Ltd. (China), and University of Leeds (UK)


OK, given that an xyY color wheel is or might be interesting, that brings me back to the practical coding questions:

In case the question I’m trying to ask isn’t clear (so often apparently it isn’t :slight_smile: ), I’m talking about what would go in the user interface, assuming GIMP or Mypaint or some other image editing program actually had an xyY color wheel. What would be the name/label in the user interface for:

  • xy “hue angle”
  • xy distance from x=y=0 “chroma”
  • I suppose “Y” would work for “Y”, and the label would be “Luminance”.

Are there already standard labels and terminology for a polar transform of xyY?

I don’t think so, although given that Lab / LCh are essentially nothing more than xyY bent according to MacAdam, I wouldn’t expect there to be too much of a difference once back in the RGB encoding model in terms of visual appearance. It would amount to encoding efficiency of the circle.

I suppose you could use xyY to more quickly identify dominant wavelength and an idea of spectral purity of such a wavelength?

Are the circles themselves being managed by rolling through ICC transforms, so that folks on properly profiled sRGB displays are seeing D65 renderings?