More on LCH and JCH

Well, just the various blurring and noise-reduction algorithms, in theory. In practice, I’ve never used these in a finished image, except for blurring a layer mask, and I haven’t used the blurring algorithms since GIMP acquired an edge-respecting blur.

In theory I’d like to use the in-painting algorithms as per @patdavid’s wonderful tutorials on this topic. But so far I haven’t been able to get these to work as well as the old 8-bit resynthesizer plug-in. I suspect I’m just not doing all the steps correctly, so next time I want to do some in-painting I’ll try the gmic algorithms again.

I suspect gmic has some great sharpening algorithms, and the upsizing algorithms also sound appealing. But figuring out which algorithm with which parameters might work for a given editing goal seems to be a very time-consuming process, especially given that results seem to be specific to the image contents and image size.

Another reason I don’t use gmic is I’m never sure when a given algorithm expects sRGB input.

That’s probably not a good reason. Most common image processing algorithms do not care about the type of input data. E.g., convolution, sharpening, … are not mathematically defined relative to the colorspace of the input data.
I’d even say that it often has quite a low impact on the result (I know you probably won’t agree, but if you look at people who design image processing algorithms in research labs, for instance, they mostly don’t care whether the inputs are encoded in sRGB, linear RGB, or Lab, because this is of little importance compared to what the algorithm itself computes).

I agree that most image processing algorithms are not defined relative to the colorspace of the input data. The exceptions are things like:

  • Any algorithm that calculates relative luminance - which requires Y from XYZ, and so depends on the RGB color space primaries and also on the RGB color space TRC, because calculating relative luminance requires removing the color space companding curve in order to operate on linear RGB - otherwise you get luma (see the sketch after this list).

  • Any algorithm that uses luminance or luma as input must also take the RGB color space primaries and TRC into account, or else it produces wrong results.

  • Of course, algorithms that convert from RGB to XYZ and then perhaps to LAB or LCH must take the RGB color space primaries and also the color space TRC (the companding curve) into account, or else they produce wrong results.
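
To make the first bullet concrete, here’s a minimal sketch (plain Python; the standard sRGB decoding curve and Rec.709 luminance weights, with a sample color chosen just for illustration) of the difference between relative luminance and luma:

    # Relative luminance requires linear RGB; applying the same weights
    # to the encoded values gives luma instead, and the two differ.

    def srgb_decode(v):
        """Remove the sRGB companding curve (encoded value -> linear light)."""
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

    def relative_luminance(r, g, b):
        """Y from XYZ, computed on linearized channel values."""
        return (0.2126 * srgb_decode(r)
                + 0.7152 * srgb_decode(g)
                + 0.0722 * srgb_decode(b))

    def luma(r, g, b):
        """The same weights applied to the still-encoded values."""
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    print(relative_luminance(0.5, 0.25, 0.75))  # ~0.12
    print(luma(0.5, 0.25, 0.75))                # ~0.34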

How wrong the results are, when using the wrong primaries and TRC to convert to LAB, of course depends on how far the actual primaries and TRC are from the assumed primaries and TRC. Here are a couple of examples (one very wrong, one slightly wrong) from using the wrong TRC when converting from sRGB to LAB:

Well, I’m sure that you’ve read a lot more papers on image processing algorithms than I have. But from the ones that I’ve read, often there really is a conspicuous absence of any description whatsoever of what color space the input image is presumed to be in before the algorithm is applied. However, a failure on the author’s part to mention the RGB primaries and TRC does not in any way logically imply that the user’s choice of RGB primaries or channel encoding won’t make a visible and obvious difference in the result of applying the algorithm.

With respect to the channel encoding (TRC, companding curve), consider “gamma artifacts” from editing using perceptually uniform RGB instead of linear RGB (a numeric sketch follows the list below). Default GIMP is built around the premises that:

  1. Gamma artifacts are important (I agree)
  2. Users should be able to edit their images without worrying too much about gamma artifacts (I agree, but users who know what they are doing should have the option to do other than what might be technically correct; also, there are a few operations for which “technically correct” is not really applicable, and recent default GIMP code does make it possible for users to choose to go against what’s technically correct)
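
Here’s the promised numeric sketch of what a gamma artifact is (plain Python; the 50/50 red-green mix is just an illustrative choice). Averaging the encoded channel values gives a visibly darker mix than averaging linear light:

    def srgb_decode(v):
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

    def srgb_encode(v):
        return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

    red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)

    # 50/50 mix done directly on the encoded values - the "gamma artifact" route:
    encoded_mix = [(a + b) / 2 for a, b in zip(red, green)]
    # -> (0.5, 0.5, 0.0), a dull, too-dark mix

    # 50/50 mix done on linear light, then re-encoded for display:
    linear_mix = [srgb_encode((srgb_decode(a) + srgb_decode(b)) / 2)
                  for a, b in zip(red, green)]
    # -> ~(0.735, 0.735, 0.0), noticeably brighter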

Examples of gamma artifacts from painting in the regular sRGB color space with its more or less perceptually uniform TRC, compared to painting using linear sRGB:

The difference the RGB color space TRC (companding curve, channel encoding) makes when using “posterize” to make a step-wedge:

Adding noise to regular vs linear sRGB:

Auto-stretch-contrast:

For more examples of the difference the image’s RGB TRC (channel encoding, companding curve) makes, see:

Linear Gamma vs Higher Gamma RGB Color Spaces: Gaussian Blur and Normal Blend Mode

and

Is your image editor using an internal linear gamma color space? Should it?

OK, now let’s look at the notion that the RGB color space primaries don’t matter:

White-balancing an sRGB camera-saved jpeg (White balancing camera-saved sRGB jpegs) that was shot using the wrong white balance, using linear sRGB to pick the white point:

Same as above, except using linear Rec.2020:

Color correcting an image using a known neutral spot - for the second image, Tyvek is very close to neutral white:
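
The numbers behind this kind of example are easy to reproduce. Here’s a sketch (the two matrices are the standard D65 RGB-to-XYZ matrices for sRGB and Rec.2020; the pixel and “should-be-neutral” values are invented for illustration):

    import numpy as np

    SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                            [0.2126, 0.7152, 0.0722],
                            [0.0193, 0.1192, 0.9505]])
    REC2020_TO_XYZ = np.array([[0.6370, 0.1446, 0.1689],
                               [0.2627, 0.6780, 0.0593],
                               [0.0000, 0.0281, 1.0610]])

    def white_balance_in(space_to_xyz, pixel_xyz, neutral_xyz):
        """Per-channel divide by a "should-be-neutral" sample, in one RGB space."""
        to_rgb = np.linalg.inv(space_to_xyz)
        balanced = (to_rgb @ pixel_xyz) / (to_rgb @ neutral_xyz)
        return space_to_xyz @ balanced  # back to XYZ so results can be compared

    pixel_xyz = np.array([0.30, 0.25, 0.10])    # some pixel with a color cast
    neutral_xyz = np.array([0.50, 0.48, 0.30])  # the sampled "white" patch

    print(white_balance_in(SRGB_TO_XYZ, pixel_xyz, neutral_xyz))
    print(white_balance_in(REC2020_TO_XYZ, pixel_xyz, neutral_xyz))
    # Different XYZ out: the same per-channel operation lands on
    # different colors depending on the working space primaries.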

Following is a partial list of GIMP editing operations for which results are entirely independent of the RGB color space primaries, assuming the channel encoding (TRC, companding curve) is linear (all bets are off once non-linear channel encodings enter the mix):

Blend modes: Addition
Blend modes: Dissolve
Blend modes: Grain extract
Blend modes: Grain merge
Blend modes: Normal
Blend modes: Subtract
Colors: Brightness/Contrast
Colors: Desaturate luminosity
Colors: HDR Exposure, exposure and offset
Colors: Invert Colors
Colors: Levels Value Channel, upper/lower sliders
Colors: Mono Mixer, straight luminosity
Filters: Apply lens
Filters: Edge Detect difference of gaussians
Filters: Emboss
Filters: Gaussian Blur
Filters: Lens distortion
Filters: Noise Spread
Filters: Pixelize
Filters: Unsharp mask
Filters: Vignette - black, white, gray
Filters: Noise Pick
Filters: Noise Slur
Paint Tools: Normal, etc blend modes
Tools/gegl op: High Pass
Tools/gegl op: Mantiuk06
Tools/gegl op: Mirror
Tools/gegl op: Radial Gradient
Tools/gegl op: Gaussian blur
Transforms: Crop
Transforms: Flip
Transforms: Rotate
Transforms: Scale
Transforms: Other transforms

Here’s a partial list of GIMP editing operations for which results are very dependent on the RGB primaries, even if the channel encoding is linear:

Blend modes: Burn
Blend modes: Color
Blend modes: Darken only
Blend modes: Difference
Blend modes: Divide
Blend modes: Dodge
Blend modes: Hard light
Blend modes: Hue
Blend modes: Lighten only
Blend modes: Multiply
Blend modes: Overlay
Blend modes: Saturation
Blend modes: Screen
Blend modes: Soft light
Blend modes: Value
Channel data: Using channel data as an editing layer
Channel data: Channel-based selections
Colors: Alien Map HSL or RGB
Colors: Auto stretch contrast
Colors: Auto stretch contrast HSV
Colors: Channel Mixer
Colors: Color Balance
Colors: Colorize
Colors: Curves, RGB channels
Colors: Curves, Value channel
Colors: Desaturate average
Colors: Desaturate lightness
Colors: HDR Exposure, gamma
Colors: Hue-Lightness-Saturation
Colors: Levels RGB channels, upper/lower sliders (See Figure 2 below)
Colors: Levels gamma slider adjustments, RGB and Value channels (Also see Figure 1 below)
Colors: Mono Mixer, anything except straight luminosity (See Figure 3 below)
Colors: Threshold
Colors: Posterize
Colors: Value Invert
Filters: Artistic Cartoon
Filters: Artistic Soft glow
Filters: Edge Detect Laplace
Filters: Edge Detect Sobel
Filters: Noise RGB
Filters: Red Eye Removal
Filters: Tile Seamless
Filters: Vignette - color
Paint Tools: Multiply, etc blend modes
Tools/gegl op: Box Max
Tools/gegl op: Box Min
Tools/gegl op: Fattal02

For more information on the difference the RGB working space primaries make, and for links to example images, see the following article, which discusses why sRGB isn’t suitable as a universal color space for editing. Similar problems arise regardless of which RGB color space one might choose as the one and only editing space, be that sRGB or ProPhotoRGB or ACES or whatever:

Limitations of unbounded sRGB as a universal color space for image editing:

Also see the following article, which discusses reasons why ACES isn’t a good universal RGB working space:

About Rendering Engines Colourspaces Agnosticism:

To summarize:

Addition and subtraction are chromaticity-independent operations, but results do depend on the TRC.

Multiply and divide by any color other than gray are chromaticity-dependent operations, and results also depend on the TRC - not just “technically” but visibly and obviously. The same goes for operations that retrieve individual channels for use in further editing steps.

“Gamma” adjustments and Curves are also chromaticity-dependent, except when operating on all three channels by exactly the same amount - again, not just technically, but visibly. And results do depend on the TRC, not just technically but also visibly.
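
The linear algebra behind this summary, for anyone who wants it (M is the RGB-to-XYZ matrix of a linear RGB working space, and a and b are two pixel values):

    M(a + b) = Ma + Mb         holds for any matrix M, so addition lands on
                               the same XYZ color in every linear RGB space

    M(a ⊙ b) ≠ (Ma) ⊙ (Mb)     in general, where ⊙ is channel-by-channel
                               multiplication: a per-channel operation does
                               not commute with a non-diagonal matrix, which
                               is why multiply results depend on the primaries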

I will absolutely agree that sometimes the difference between sharpening on linear vs perceptually uniform RGB is subtle. But sometimes that subtle difference is visually important.

Wow that’s a lot of (very interesting) information! Perhaps this is a case of differing definitions rather than opinions? I think the point about G’MIC is that the core commands are calculations over a data set / signal and don’t “care” about what you’re operating on - it’s entirely up to the user.

Having said that, the community filters are very free-form so obviously it’s difficult to tell exactly what’s going on (I’m probably guilty of any number of misinformed colourspace travesties). I don’t think that means it should be avoided though!

That’s exactly the spirit, yes. That’s why G’MIC also works with images that don’t represent ‘colors’ at all. Feed it, e.g., MRI datasets where each pixel/voxel value represents a response to a magnetic field, and it will work the same. Do it with X-ray images, satellite images, and so on… it will work the same. You will be able to perform blur, convolution, sharpening, and all the usual image processing operators on those images. The user has to know what kind of data he gives to the tool.

In the end, this means the tool is generic; that’s the point.

This also means you cannot say things like: “I don’t use G’MIC because I don’t know what kind of data is expected as input”. This is nonsense, from a G’MIC perspective.
The reality is that color images are a very small planet in the whole image processing universe. I’m aware we are on a photography forum, so we mainly talk about RGB images here, but image processing algorithms are just mathematically defined; they mostly don’t give a shit about colors.

Of course, we should take care to apply the algorithms in the most accurate color representation available, when the algorithms are applied to images. But in general, there is no need to be ‘exact’; close is enough. Linear RGB is known to be better suited for color averaging, but that is only because it is closer to how we humans perceive the averaging of colors. All the examples illustrated by @Elle are well known, but try a slightly different transformation, like using a gamma of 2.2 instead of 2.4. I’m 100% sure you won’t see the difference, as long as the inverse transformation is also well defined.
Anyway, in the end, people do not all perceive colors the same way.
Thus, I think people shouldn’t be obsessed by numbers when representing colors. Worrying about the 2nd digit after the decimal point is definitely useless when talking about a color transformation.

Most of the time, it is more than sufficient to know that:

  • RGB colors in usual file formats (JPEG, PNG, …) are encoded in sRGB.
  • Doing a “rough” sRGB->LinearRGB transform is a good idea to make the usual color arithmetic closer to our visual perception (or sometimes use the Lab color space instead).
  • Do the LinearRGB->sRGB transform at the end, to store the result back in a file.

In the end, the exact conversion formula is of little importance. Just be close enough and you’re good.
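
To put a number on “close enough”, here is a quick sketch (plain Python) comparing the exact piecewise sRGB decoding with a plain gamma 2.2 power law, over all 8-bit values:

    def srgb_decode_exact(v):
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

    def srgb_decode_rough(v):
        return v ** 2.2

    worst = max(abs(srgb_decode_exact(i / 255) - srgb_decode_rough(i / 255))
                for i in range(256))
    print(worst)  # less than 0.01 in linear terms - under a percent of full scale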

I’ve met a lot of people whose job is to design image processing algorithms (that’s also my job), and they all do that, roughly. I have to say it again: it’s more than sufficient for most real cases. I don’t believe in statements like “this color representation is not exact enough to process the image with this algorithm”. Sounds more like the delirium of a maniac to me :slight_smile:

Something close enough to how our perception works is enough.
The effect of the 5th digit after the decimal point is ridiculously negligible compared to the kind of operations a smart image processing algorithm performs.

Good night :slight_smile:

Interesting perspectives

What I would say is that there is always a tension between perception, theory, practice, standards and “vernacular”; also personal emphasis and predisposition.

Just look at the packaging and placement in grocery stores of different philosophies and regions from around the world, for example. The labeling, design and marketing are all very different. In Asia, you have ISO this ISO that. In Europe, you have that ℮ sign everywhere. In America, food images look indulgent on the box but less remarkable on the inside. Then there is the health food store, fitness, etc. I digress but hope you get my point.

In terms of this particular thread, I would say that the attempt to adhere to standards is a good thing, especially when many of us here on discuss prefer to have a closed system that is color managed and color accurate (however that is decided; I don’t think many of ICC’s determinations are ISO yet :stuck_out_tongue_closed_eyes:).

Take my workflow for instance. Recently, I have been experimenting with a mix of photoflow, gmic and gimp processing. As seen in my PlayRaw attempts, I sometimes come up with something nice, but many times it turns out downright terrible (and I silently remove those entries hoping people don’t notice :rofl:). That is because I cannot make them play nice with each other. Oh, but what a joy it is to get them to cooperate! At least it is a fun exercise for me!

The point is that their philosophies are so different, and it would be great if I knew a way to go from one app to the other without too many roadblocks. I don’t think I will be able to reconcile their differences any time soon, so I will have to accept that and educate myself as much as possible to mitigate any outrageous inconsistencies.

S1 (Back to Chroma)

I am still not sure whether my original question has been answered. Maybe I didn’t pose it all that well or am not getting what I expected. I guess the main concern is that I am not sure whether I know how to use the C channel of LCH anymore.

I thought I knew; then I realized that the max C for every H is different within a given space. Well, my left brain knew, but it hit me harder recently. It is different from saturation in that saturation happens at clipping. But in LCH space and in unbounded floating point ranges, it isn’t as simple - or at least I am not at that level of comprehension yet.

I hope other people besides @Elle and @David_Tschumperle would pitch in too. Though I named them specifically, since I have been in discussion with them on similar topics before, I would appreciate more perspectives from more people. Any suggestions @patdavid?

Follow-up questions

Were you planning to link something there? Nothing is after the colon currently :slight_smile:.

Bear with me, I still don’t quite get the addition-and-subtraction vs multiplication-and-division thing. I have read your articles but it isn’t clicking. I don’t know if you could “explain like I’m five”, so to speak. I feel it is important that I grasp this stuff moving forward.

In fact, I think it is simpler in LCH than in HSV/HSL.

Let’s put it this way: HSV/HSL is a color representation, not a colorspace, and an HSV triplet does not correspond to a unique color (exactly like an RGB triplet does not define a unique color, unless you also specify the color space, like sRGB, AdobeRGB, ProPhoto…). Hence, the same saturation value corresponds to different visual saturations depending on the RGB colorspace from which the HSV values have been derived.

Moreover, the HSV representation is not perceptually uniform, and therefore the visual saturation does not stay constant when you scan the H values at fixed S.

On the other hand, the CIELCh colorspace has been designed to be perceptually uniform, and to approximate a constant visual saturation for a given C value across different Hue values.

Coming back to your original question, the fact that the three sRGB primaries have different C values is simply a consequence of the fact that the blue sRGB primary is closer to the spectral locus than the red and green ones (as can be seen here), and therefore a “pure” sRGB blue is “visually more saturated” than a “pure” red or green. This statement is probably not 100% correct from the mathematical point of view, but should give you the idea…
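
If you want to check the numbers yourself, here is a small sketch (the standard D65 sRGB matrix and the CIE Lab formulas, with no chromatic adaptation - a D50-adapted conversion would shift the values slightly) that computes LCh for the three primaries:

    import numpy as np

    SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                            [0.2126, 0.7152, 0.0722],
                            [0.0193, 0.1192, 0.9505]])
    WHITE = SRGB_TO_XYZ @ np.ones(3)  # XYZ of R=G=B=1 (D65)

    def f(t):  # the CIE Lab transfer function
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    def linear_rgb_to_lch(rgb):
        x, y, z = (SRGB_TO_XYZ @ rgb) / WHITE
        L = 116 * f(y) - 16
        a = 500 * (f(x) - f(y))
        b = 200 * (f(y) - f(z))
        return L, np.hypot(a, b), np.degrees(np.arctan2(b, a)) % 360

    for name, rgb in [("red", [1, 0, 0]), ("green", [0, 1, 0]), ("blue", [0, 0, 1])]:
        print(name, linear_rgb_to_lch(np.array(rgb, dtype=float)))
    # C comes out around 104.6 for red, 119.8 for green, 133.8 for blue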

The bottom line: CIELCh is a better representation than HSV/HSL if you want to edit colors in an intuitive and device-independent way. For example, you can fix C and change h in order to make colors warmer or cooler without affecting the resulting “perceived saturation”.
By the way, RawTherapee has LCH curves to play with, and I have a few LCh-based editing tools in the coding pipeline for PhotoFlow…

Hope this helps!

Nothing I’ve said about g’mic should be interpreted as a reason for anyone to avoid using g’mic. If a user likes the results of using g’mic algorithms, that’s the only thing that counts.

I pointed out the discrepancy in LCH values for the sRGB primaries not because this is a reason to avoid g’mic for editing - again, if the user likes the results, that’s what counts. However, the slightly incorrect LCH values that @afre got from g’mic, presented as “these are the values for the sRGB primaries” - that’s a different use from actual image editing. If the goal is finding the LCH values for particular colors, g’mic isn’t the best tool to use. It would be more accurate to use GIMP’s color picker, ArgyllCMS xicclu, a spreadsheet, etc.

There are two reasons why I myself almost never use g’mic:

The first and more important reason has nothing at all to do with whatever it is that g’mic does with data that it receives. g’mic provides a lot of options for various algorithms, and is also a bit like a “black box” - data goes in, results come out. I like to understand how the editing tools I use actually work. And I like to explore all the options that come with any given tool, so that I have an idea what to expect when using a given algorithm with a given set of options on a given type of image.

It would take me a long, long time to build up the expertise with g’mic that @garagecoder and @afre have. I’m guessing long-time g’mic users do have an idea of what will result from using various g’mic algorithms on different types of images. I don’t have any idea, and I haven’t found the time or motivation to make an effort to learn.

There are a lot of editing tools out there. I’ve spent time recently on PhotoFlow filmic and on RawTherapee CIECAM02 as these seem to provide (and do provide, as it turns out) answers to specific editing problems I want to solve. Someday something in g’mic might seem like the answer to an editing problem that I’m trying to solve.

The second reason why I don’t use g’mic does have to do with the fact that g’mic does have hard-coded sRGB values in the code. I mostly edit photographs in the Rec.2020 color space, and I paint in a custom color space. So if I were to use g’mic, any editing algorithm that converted from RGB to XYZ/LAB/LCH would produce technically wrong results. And any g’mic algorithms that use code that removes the sRGB companding curve - well, I use linear gamma color spaces most of the time. So “linearizing” the already linear RGB data by removing the non-existent sRGB companding curve would produce data that is quite far from linear, and so would produce gamma artifacts of the opposite type from the standard ones (too light between blended colors instead of too dark).

Maybe I’m being really silly here, but knowing that “some” of g’mic’s algorithms assume my RGB data uses the sRGB primaries and TRC just plain bothers me. If the day ever comes that g’mic is “color space aware” instead of just assuming sRGB, that might increase my interest in learning the particulars of using g’mic.

Again, these are my reasons for not using g’mic. I’ve always thought very highly of the sophisticated algorithms provided by g’mic and equally highly of the artistic thought that goes into the many algorithms provided through the g’mic GIMP plug-in.

Hmm, yes, really there is! The box after the colon has the link. I didn’t make the box appear - there’s some sort of metadata in the header of the html that makes these little boxes appear. They don’t appear for links to articles on my website because I don’t use that metadata in the headers of my html pages.

I don’t think I can explain like you are five - sorry! Assuming floating point precision without clipping, these two procedures produce the same resulting color, that is, the same final XYZ channel values:

  1. Add two RGB colors together (channel by channel) in any given linear gamma RGB matrix color space, and then convert the result to XYZ.

  2. Convert the two RGB colors to XYZ and add the XYZ channel values.

You can use my GIMP-CCE to experiment with adding two colors together, say in the linear gamma sRGB color space - set the bottom layer to Normal and the top layer to Addition, and make new from visible. And then convert the XCF stack to linear gamma Rec.2020. Hide and unhide the “make new from visible” layer and you’ll see that the result of addition will be the same before and after the conversion.

Please note: The RGB channel values for the sum will be different in different linear gamma RGB working spaces. But the actual color that you see - the actual color in XYZ space - will be the same.

You can also verify results using ArgyllCMS xicclu at the command line.

Here’s a worked example:
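
In code form, a sketch of the same experiment (the standard D65 matrices; the two colors are arbitrary, and specifying them in XYZ guarantees they are the same actual colors in both working spaces):

    import numpy as np

    SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                            [0.2126, 0.7152, 0.0722],
                            [0.0193, 0.1192, 0.9505]])
    REC2020_TO_XYZ = np.array([[0.6370, 0.1446, 0.1689],
                               [0.2627, 0.6780, 0.0593],
                               [0.0000, 0.0281, 1.0610]])

    xyz_a = np.array([0.20, 0.10, 0.05])  # two colors, pinned down in XYZ
    xyz_b = np.array([0.15, 0.30, 0.10])

    for name, M in [("sRGB", SRGB_TO_XYZ), ("Rec.2020", REC2020_TO_XYZ)]:
        rgb_a = np.linalg.inv(M) @ xyz_a  # the same colors, expressed in
        rgb_b = np.linalg.inv(M) @ xyz_b  # this working space
        print(name, "addition:", M @ (rgb_a + rgb_b))
        print(name, "multiply:", M @ (rgb_a * rgb_b))

The addition lines print the same XYZ color for both working spaces; the multiply lines do not.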

Now try the same thing, but this time multiply the two colors (set the bottom layer to Normal, the top layer to Multiply), for which the color space primaries do matter quite a lot. The “Jupyter Notebook Viewer” link that I gave earlier provides a nice worked example.

If you really do want to understand what happens with various algorithms, even something as simple as addition and multiply, experimenting for yourself is the best route to getting a feel for what happens. The first time I stacked solid red, solid blue, and solid green, with red at the bottom set to Normal blend, and blue and green set to Addition, and got white, that seemed very odd - even though I knew what to expect!

Physically, addition is like shining a light on a piece of paper, and then shining another light, that might be the same or a different color, on the same spot on the same piece of paper. Lightwaves add.

Physically, multiply is like putting a filter over a light - the resulting color depends not only on the wavelengths absorbed by the filter, but also on the color of the light before it passes through the filter. The exception is if the filter is neutral gray, which merely attenuates without also changing the color of the light that makes it past the filter.

@Carmelo_DrRaw Sorry for the confusion: throughout the thread my use of the term saturation wasn’t referring to HSV but to where values clip in the upper range. I should stop using it that way…

That explains things! I thought about limiting how much I could increase chroma for any given hue. Perhaps I can do so in a way that protects already colorful colors from being clipped, like the vibrance vs saturation controls you might see in some image editors. Or, I could rely on tools to accomplish that without getting my hands dirty and my mind confused lol. In any case, a GUI is conducive to learning about this stuff.


@Elle Thanks for your explanation and for taking the time to write it. It might not be for a 5-year-old, but I certainly understand it much better now :clinking_glasses:. I still don’t see the link though. I opened the post in raw format and still see nothing after the colon… weird.

Try this link: http://colour-science.org/posts/about-rendering-engines-colourspaces-agnosticism/

Oh, I see. Both

Also see the following article, which discusses reasons why ACES isn’t a good universal RGB working space:

and

About Rendering Engines Colourspaces Agnosticism:

are pointing to the same article.

That is a false assertion. As G’MIC algorithms don’t care about the input color space (like most image processing algorithms, they basically do arithmetic with pixel values), you cannot say it returns “wrong” results. You have to know the meaning of the input you give, and that determines the kind of output you’ll get (except for a very few commands, basically those doing colorspace conversions; but if you don’t like them, you are free not to use them, or even to re-define them).

That’s not correct either. If the default G’MIC sRGB<->RGB conversion does not suit your needs, you are free to define your own conversion formula. I doubt you can even find another tool that allows such flexible and generic behavior, to be honest.

That’s exactly the point: any command that uses XYZ/Lab/LCh values DOES imply a colorspace conversion, and therefore has to care about the input color space. If you assume sRGB input, you will get technically wrong results whenever the input data is in some other RGB colorspace.

Sorry to join the chorus that insists on this point, but that’s the crux of the matter…

You have some good examples in the FOSS world, like the VIPS image processing library that is used by several projects, including photoflow.
The image representation used internally by VIPS makes it possible to attach image data and metadata together, and thus to associate an ICC profile with RGB images. The ICC profile is then used in colorspace conversions to obtain correct results independently of the input RGB colorspace.

If there were a mechanism to associate metadata with CImg objects, then something similar could easily be implemented in G’MIC as well. One could always assume sRGB as a fallback if the ICC metadata is missing…
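
For example, through the Python binding (a sketch - it assumes pyvips is installed, and “photo.tif” is a made-up filename):

    import pyvips

    img = pyvips.Image.new_from_file("photo.tif")

    # The embedded ICC profile travels with the pixels as metadata:
    if img.get_typeof("icc-profile-data") != 0:
        profile = img.get("icc-profile-data")  # the raw profile bytes

    # Colorspace conversions can then honor it, e.g. convert to sRGB
    # using the embedded profile as the source:
    srgb = img.icc_transform("srgb", embedded=True)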

Then just redefine the commands rgb2srgb_ and srgb2rgb_ to fit your input data. I’ll make this easier in the next version of G’MIC (2.1.5), but it is already possible.

No, that’s not the spirit. Both CImg and G’MIC assume the user knows what kind of pixel data he manipulates; I don’t see why adding this info as metadata would be useful in any way.

It’s a little-known fact that one of my primary reasons for setting up pixls was also for me to learn from people far, far smarter than I on exactly these types of topics. :wink:

I’m not even remotely qualified to speak with any authority on the subject, other than to say that I am ridiculously thankful for those who can grok these topics and make them available to us laypersons! Honestly, I’m probably nowhere close to pushing the boundaries of what sRGB provides for me, for the most part (artistically, for results that make me happy). But I’m always lurking to learn!

What @Carmelo_DrRaw says is of course correct, except for the word “saturation”. “Saturation” in image editing has far too many definitions:

  • We all use the word “saturation” loosely to refer to “more or less colorful colors”.

  • We also use it technically to refer to “Saturation” calculated using HSL, HSI, etc. Let’s put HSL/HSI/etc to one side as not relevant to a discussion of LCH - as @Carmelo_DrRaw notes, these HSL/HSI/etc saturation values change every time you change the RGB working space, and this includes changes to the color space TRC as well as changes to the RGB primaries.

  • The word “saturation” has a precise definition in color appearance models such as CIECAM02, and that definition is different from the definition of “chroma”. This page by Mark Fairchild presents some nice definitions of colorfulness, saturation, and chroma as used in color appearance models:

http://www.rit-mcsl.org/fairchild/WhyIsColor/Questions/4-8.html

Here’s the TOC for the whole series of “WhyIsColor”, which strives to hit the “explain it to me as if I were five” sweet spot, but I still find the explanations of chroma, colorfulness, and saturation difficult to visualize:

http://www.rit-mcsl.org/fairchild/WhyIsColor/map.html

LAB/LCH is not a color appearance model (it’s a “color difference” model), so “colorfulness” technically doesn’t apply to LCH. That doesn’t keep people from coming up with equations for colorfulness in LAB/LCH. This Wikipedia page gives a nice equation for calculating “colorfulness” - or maybe it’s really “saturation” - in the LAB/LCH color space. The page also has some very confusing (to me) sections on other topics.
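
From memory, the equation in question is along the lines of

    S_ab = C*_ab / sqrt((C*_ab)² + (L*)²)

that is, chroma normalized against both chroma and lightness - but do check it against the page itself, as I’m quoting from memory.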

In the LCH color space, LCH blend modes and color pickers make it possible to keep Chroma constant while changing Lightness or Hue. But keeping the appearance of colorfulness/saturation constant is a different matter, requiring some workarounds. In GIMP, if you want to change the tonality of an image and also keep colorfulness constant (or is it saturation? the definitions are not easy for me to visualize!), use Luminance blend to blend the new tonality with the original image colors.

OK, putting all this technical stuff to one side, what does any of this mean for actual image editing? Here’s an example using GIMP 2.9’s Lightness and Luminance blends to show the difference between LCH Chroma and colorfulness/saturation (apparently you have to click on the image to see all of it - the “4. Luminance blend” version is at the bottom):

What you can’t tell from the above image is that some of the colors in the umbrellas in images #1 and #4 (but not in #3) are out of gamut with respect to the sRGB color space. One could use a mask to blend in some of the result of the Lightness blend, to bring the Luminance blend colors all back into gamut. The good news is that GIMP 2.9 (default, not yet my patched version) now has a really nice clip warning for floating point images.

If anyone wants to experiment with color appearance model terminology by changing the various parameters, the RawTherapee CIECAM02 module allows you to do just that, which might help quite a lot in acquiring a practical understanding of the difference between saturation, chroma, and colorfulness, and various other color appearance model terms. Plus the sliders and curves in the RT CIECAM02 model are just plain fun to experiment with.

Bookmarked :slight_smile:.

Yes, the wiki entries on color could be confusing due to wording, typos and factual inaccuracies.

I think that answers my question. I do have both GIMPs installed. It is just that it is rather taxing on my system (and my stamina) to switch between applications repeatedly.

Argh! Sorry, I messed up! I forgot that rgb2lab takes an illuminant argument:

gmic h rgb2lab

    rgb2lab (+):
                        illuminant={ 0=D50 | 1=D65 } |
                        (no arg)

so, redoing the conversion with the D50 illuminant:

gmic 50,50,1,3 fill_color 0,0,255 rgb2lab 0 lab2lch s c k.. p

whereas the built-in rgb2lch uses D65:

rgb2lch --> rgb2lab 1 lab2lch

∴ 0,0,255 → C = 133.80 with D65, but 131.208 with D50 - or, keeping more figures, 131.20828247070312.

Cool - I’m really impressed by people who can use gmic from the command line! And the way you redid the calculations, the result matches GIMP/xicclu - yes?