More on LCH and JCH

That’s exactly the spirit, yes. That’s why G’MIC also works with images that don’t represent ‘colors’ at all. Feed it e.g. MRI datasets where each pixel/voxel value represents a response to a magnetic field, and it will work the same. Do it with X-ray images, satellite images, and so on… it will work the same. You will be able to perform blur, convolution, sharpening, and all the usual image processing operators on those images. The user has to know what kind of data he gives to the tool.

In the end, this means the tool is generic; that’s the point.

This also means you cannot say things like: “I don’t use G’MIC because I don’t know what kind of data is expected as input”. That is nonsense, from a G’MIC perspective.
The reality is that color images represent a very small planet in the whole image processing universe. I’m aware we are on a photography forum, so we mainly talk about RGB images here, but image processing algorithms are just mathematically defined; they mostly don’t give a shit about colors.

Of course, when algorithms are applied to images, we should take care to apply them in the most accurate color representation. But in general, no need to be ‘exact’; close is enough. LinearRGB is known to be better suited for color averaging, but that is only because it is closer to how we humans perceive the averaging of colors. All the examples illustrated by @Elle are well known, but try a slightly different transformation, like using a gamma of 2.2 instead of 2.4. I’m 100% sure you won’t see the difference, as long as the inverse transformation is also well defined.
Anyway, in the end people do not perceive colors the same.
Thus, I think people shouldn’t be obsessed by numbers when representing colors. Taking care of the 2nd digit after the decimal point is definitely useless when talking about a color transformation.

Most of the time, it is more than sufficient to know that:

  • RGB colors in usual file formats (JPEG, PNG, …) are encoded in sRGB.
  • Doing a “rough” sRGB->LinearRGB transform is a good idea to make the usual color arithmetic closer to our visual perception (or sometimes use the Lab color space instead).
  • Do the LinearRGB->sRGB transform at the end, to store the result back in a file.
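The round trip in that list can be sketched in a few lines of Python. This is only a sketch: `srgb_to_linear`, `linear_to_srgb`, and `mix_srgb` are made-up helper names, and the constants are the standard sRGB piecewise curve (which, as the post says, a plain 2.2 power law approximates closely enough for most purposes).

```python
# Hypothetical helpers; constants are the standard sRGB piecewise formula.

def srgb_to_linear(u):
    """Decode one sRGB-encoded channel value in [0, 1] to linear light."""
    return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4

def linear_to_srgb(u):
    """Encode one linear channel value in [0, 1] back to sRGB."""
    return 12.92 * u if u <= 0.0031308 else 1.055 * u ** (1 / 2.4) - 0.055

def mix_srgb(a, b):
    """Average two sRGB channel values in linear light, re-encode for storage."""
    return linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
```

Averaging sRGB black and white this way gives roughly 0.735 rather than 0.5, which is the whole point of doing the arithmetic in linear light.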

The conversion formula is, in the end, of little importance. Just be close enough and you’re good.

I’ve met a lot of people whose job is to design image processing algorithms (that’s also my job), and they all do that, roughly. I think I have to say it again: it’s more than sufficient for most real cases. I don’t believe in statements like “this color representation is not exact enough to be able to process the image with this algorithm”. Sounds more like the delirium of a maniac to me :slight_smile:

Something close enough to how our perception works is enough.
The effect of the 5th digit after the decimal point is ridiculously negligible compared to the kind of operations a smart image processing algorithm performs.

Good night :slight_smile:


Interesting perspectives

What I would say is that there is always a tension between perception, theory, practice, standards and “vernacular”; also personal emphasis and predisposition.

Just look at the packaging and placement in grocery stores of different philosophies and regions from around the world, for example. The labeling, design and marketing are all very different. In Asia, you have ISO this, ISO that. In Europe, you have that ℮ sign everywhere. In America, food images look indulgent on the box but less remarkable on the inside. Then there is the health food store, fitness, etc. I digress but hope you get my point.

In terms of this particular thread, I would say that the attempt to adhere to standards is a good thing, especially when many of us on discuss prefer to have a closed system that is color managed and color accurate (however that is decided; I don’t think many of ICC’s determinations are ISO yet :stuck_out_tongue_closed_eyes:).

Take my workflow for instance. Recently, I have been experimenting with a mix of photoflow, gmic and gimp processing. As seen in my PlayRaw attempts, I sometimes come up with something nice but many times it becomes downright terrible (and I silently remove those entries hoping people don’t notice :rofl:). That is because I cannot make them play nice with each other. Oh, but what a joy it is to get them to cooperate! At least, it is a fun exercise for me!

The point is that their philosophies to things are so different and it would be great if I knew a way to go from one app to the other without too many roadblocks. I don’t think that I would be able to reconcile their differences any time soon, so I will have to accept that and educate myself as much as possible to mitigate any outrageous inconsistencies.

S1 (Back to Chroma)

I am still not sure whether my original question has been answered. Maybe I didn’t pose it all that well or am not getting what I expected. I guess the main concern is that I am not sure whether I know how to use the C channel of LCH anymore.

I thought I knew, then I realized that the max C for every H is different given the space. Well, my left brain knew, but it hit me harder recently. It is different from saturation in that saturation happens at clipping. But in LCH space and in unbounded floating point ranges, it isn’t as simple, or at least I am not at that level of comprehension yet.
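The “max C differs per hue” observation can be made concrete with a small sketch: for a fixed L and hue h, binary-search the largest C whose LCh → linear sRGB conversion stays inside [0, 1]. `max_srgb_chroma` and `lch_to_linear_srgb` are made-up names; the matrix and white point are the usual rounded D65 sRGB values.

```python
import math

# Made-up helpers; D65 white point and rounded XYZ -> linear sRGB matrix.
D65 = (0.95047, 1.0, 1.08883)

def lch_to_linear_srgb(L, C, h_deg):
    """LCh -> Lab -> XYZ -> linear sRGB (no clipping)."""
    a = C * math.cos(math.radians(h_deg))
    b = C * math.sin(math.radians(h_deg))
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def f_inv(t):
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    X, Y, Z = (n * f_inv(t) for n, t in zip(D65, (fx, fy, fz)))
    return (3.2406 * X - 1.5372 * Y - 0.4986 * Z,
            -0.9689 * X + 1.8758 * Y + 0.0415 * Z,
            0.0557 * X - 0.2040 * Y + 1.0570 * Z)

def max_srgb_chroma(L, h_deg, steps=40):
    """Binary-search the gamut boundary: largest in-gamut C at this L and hue."""
    lo, hi = 0.0, 200.0
    for _ in range(steps):
        mid = (lo + hi) / 2
        if all(0.0 <= c <= 1.0 for c in lch_to_linear_srgb(L, mid, h_deg)):
            lo = mid
        else:
            hi = mid
    return lo
```

At L = 50, the reachable chroma near the red primary’s hue (~40°) comes out noticeably higher than near the green primary’s hue (~136°), which is exactly why a single “safe” C limit for all hues doesn’t exist.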

I hope other people besides @Elle and @David_Tschumperle would pitch in too. Though I named them specifically, since I have been in discussion with them on similar topics before, I would appreciate more perspectives from more people. Any suggestions @patdavid?

Follow-up questions

Were you planning to link something there? Nothing is after the colon currently :slight_smile:.

Bear with me, I still don’t quite get the addition and subtraction v multiplication and division thing. I have read your articles but it isn’t clicking. I don’t know if you could “explain like I’m five” so to speak. I feel that it is important that I grasp this stuff moving forward.


In fact, I think it is simpler in LCH than in HSV/HSL.

Let’s put it this way: HSV/HSL is a color representation, not a colorspace, and an HSV triplet does not correspond to a unique color (exactly like an RGB triplet does not define a unique color, unless you also specify the color space, like sRGB, AdobeRGB, ProPhoto…). Hence, the same saturation value corresponds to different visual saturations depending on the RGB colorspace from which the HSV values have been derived.

Moreover, the HSV representation is not perceptually uniform, and therefore the visual saturation does not stay constant when you scan the H values at fixed S.

On the other hand, the CIELCh colorspace has been designed to be perceptually uniform, and to approximate a constant visual saturation for a given C value across different Hue values.

Coming back to your original question, the fact that the three sRGB primaries have different C values is simply a consequence of the fact that the blue sRGB primary is closer to the spectral locus than the red and green ones (as can be seen here), and therefore a “pure” sRGB blue is “visually more saturated” than a “pure” red or green. This statement is probably not 100% correct from the mathematical point of view, but should give you the idea…
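That ordering is easy to check numerically. A rough sketch, assuming the usual rounded D65 sRGB → XYZ matrix and CIE Lab formulas (`chroma_of_linear_srgb` is a made-up name):

```python
import math

# Rounded linear sRGB -> XYZ (D65) matrix and D65 reference white.
M = ((0.4124, 0.3576, 0.1805),
     (0.2126, 0.7152, 0.0722),
     (0.0193, 0.1192, 0.9505))
D65 = (0.95047, 1.0, 1.08883)

def chroma_of_linear_srgb(r, g, b):
    """LCh Chroma of a linear sRGB triplet: RGB -> XYZ -> Lab -> C."""
    X, Y, Z = (row[0] * r + row[1] * g + row[2] * b for row in M)
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(X / D65[0]), f(Y / D65[1]), f(Z / D65[2])
    a, bb = 500 * (fx - fy), 200 * (fy - fz)
    return math.hypot(a, bb)

c_red   = chroma_of_linear_srgb(1, 0, 0)   # roughly 105
c_green = chroma_of_linear_srgb(0, 1, 0)   # roughly 120
c_blue  = chroma_of_linear_srgb(0, 0, 1)   # roughly 134
```

The blue primary indeed comes out with the largest Chroma, matching the “closer to the spectral locus” argument above.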

The bottom line: CIELCh is a better representation than HSV/HSL if you want to edit colors in an intuitive and device-independent way. For example, you can fix C and change h in order to make colors warmer or cooler without affecting the resulting “perceived saturation”.
By the way, RawTherapee has LCH curves to play with, and I have a few LCh-based editing tools in the coding pipeline for PhotoFlow…

Hope this helps!


Nothing I’ve said about g’mic should be interpreted as a reason for anyone to avoid using g’mic. If a user likes the results of using g’mic algorithms, that’s the only thing that counts.

I pointed out the discrepancy in LCH values for the sRGB primaries not because this is a reason to avoid g’mic for editing - again, if the user likes the results, that’s what counts. However, the slightly incorrect LCH values that @afre got from g’mic, when presented as “these are the values for the sRGB primaries” - that’s a different use from actual image editing. If the goal is finding the LCH values for particular colors, g’mic isn’t the best tool to use. It would be more accurate to use GIMP’s color picker, ArgyllCMS xicclu, a spreadsheet, etc.

There are two reasons why I myself almost never use g’mic:

The first and more important reason has nothing at all to do with whatever it is that g’mic does with data that it receives. g’mic provides a lot of options for various algorithms, and is also a bit like a “black box” - data goes in, results come out. I like to understand how the editing tools I use actually work. And I like to explore all the options that come with any given tool, so that I have an idea what to expect when using a given algorithm with a given set of options on a given type of image.

It would take me a long, long time to build up the expertise with g’mic that @garagecoder and @afre have. I’m guessing long-time g’mic users do have an idea of what will result from using various g’mic algorithms on different types of images. I don’t have any idea, and I haven’t found the time or motivation to make an effort to learn.

There are a lot of editing tools out there. I’ve spent time recently on PhotoFlow filmic and on RawTherapee CIECAM02 as these seem to provide (and do provide, as it turns out) answers to specific editing problems I want to solve. Someday something in g’mic might seem like the answer to an editing problem that I’m trying to solve.

The second reason why I don’t use g’mic does have to do with the fact that g’mic has hard-coded sRGB values in the code. I mostly edit photographs in the Rec.2020 color space, and I paint in a custom color space. So if I were to use g’mic, any editing algorithm that converted from RGB to XYZ/LAB/LCH would produce technically wrong results. And as for any g’mic algorithms that use code that removes the sRGB companding curve - well, I use linear gamma color spaces most of the time. So “linearizing” the already linear RGB data by removing the non-existent sRGB companding curve would produce data that is quite far from linear, and so would produce gamma artifacts of the opposite type to the standard gamma artifacts (too light between blended colors instead of too dark).
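The “double linearization” problem described here is easy to see numerically. A minimal sketch, assuming the standard sRGB decoding formula (`srgb_decode` is a made-up name): applying the decoding curve to data that is already linear bends it away from linear instead of leaving it alone.

```python
# Made-up helper; constants are the standard sRGB decoding formula.
def srgb_decode(u):
    """Remove the sRGB companding curve from one channel value in [0, 1]."""
    return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4

already_linear = 0.5                          # a linear-light mid-gray
wrongly_decoded = srgb_decode(already_linear) # roughly 0.214: much darker
```

A linear mid-gray of 0.5 becomes roughly 0.214, so the “linearized” data is no longer linear at all, and any subsequent blending happens in a distorted space.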

Maybe I’m being really silly here, but knowing that “some” of g’mic’s algorithms assume my RGB data uses the sRGB primaries and TRC just plain bothers me. If the day ever comes that g’mic is “color space aware” instead of just assuming sRGB, that might increase my interest in learning the particulars of using g’mic.

Again, these are my reasons for not using g’mic. I’ve always thought very highly of the sophisticated algorithms provided by g’mic and equally highly of the artistic thought that goes into the many algorithms provided through the g’mic GIMP plug-in.

Hmm, yes, really there is! The box after the colon has the link. I didn’t make the box appear. There’s some sort of metadata in the header of the html that makes these little boxes appear. They don’t appear for links to articles on my website because I don’t use that metadata in the headers of my html pages.

I don’t think I can explain like you are five - sorry! Assuming floating point precision without clipping, these two procedures produce the same resulting color, that is, the same final XYZ channel values:

  1. If you add two RGB colors together (channel by channel) in any given linear gamma RGB matrix color space, and then convert the result to XYZ.

  2. If you convert two RGB colors to XYZ and add the XYZ channel values.
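The two procedures agree because RGB → XYZ is a linear (matrix) map, so M(a + b) = Ma + Mb. A short sketch, using the usual rounded D65 sRGB → XYZ matrix and made-up sample colors:

```python
# Rounded linear sRGB -> XYZ (D65) matrix.
M = ((0.4124, 0.3576, 0.1805),
     (0.2126, 0.7152, 0.0722),
     (0.0193, 0.1192, 0.9505))

def to_xyz(rgb):
    """Matrix-multiply a linear RGB triplet into XYZ."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in M)

a = (0.2, 0.5, 0.1)   # arbitrary linear RGB colors (made-up values)
b = (0.3, 0.1, 0.4)

added_then_converted = to_xyz(tuple(x + y for x, y in zip(a, b)))
converted_then_added = tuple(x + y for x, y in zip(to_xyz(a), to_xyz(b)))
```

Both routes land on the same XYZ triplet, up to floating point noise, for any linear-gamma RGB matrix space.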

You can use my GIMP-CCE to experiment with adding two colors together, say in the linear gamma sRGB color space - set the bottom layer to Normal and the top layer to Addition, and make new from visible. And then convert the XCF stack to linear gamma Rec.2020. Hide and unhide the “make new from visible” layer and you’ll see that the result of addition will be the same before and after the conversion.

Please note: The RGB channel values for the sum will be different in different linear gamma RGB working spaces. But the actual color that you see - the actual color in XYZ space - will be the same.

You can also verify results using ArgyllCMS xicclu at the command line.

Here’s a worked example:

Now try the same thing, but this time multiply the two colors (set the bottom layer to Normal, the top layer to Multiply), for which the color space primaries do matter quite a lot. The “Jupyter Notebook Viewer” link that I provided earlier provides a nice worked example.

If you really do want to understand what happens with various algorithms, even something as simple as addition and multiply, experimenting for yourself is the best route to getting a feel for what happens. The first time I stacked solid red, solid blue, and solid green - with red at the bottom set to Normal blend, blue and green set to Addition - and got white, it seemed very odd, even though I knew what to expect!

Physically, addition is like shining a light on a piece of paper, and then shining another light, that might be the same or a different color, on the same spot on the same piece of paper. Lightwaves add.

Physically, multiply is like putting a filter over a light - the resulting color depends not only on the wavelengths absorbed by the filter, but also on the color of the light before it passes through the filter. The exception is if the filter is neutral gray, which merely attenuates without also changing the color of the light that makes it past the filter.
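That physical difference shows up in the math: unlike addition, channel-wise multiplication does not survive a change of primaries. For a non-diagonal matrix M, (Ma) ⊙ (Mb) ≠ M(a ⊙ b) in general, which is why Multiply results depend on the working space. A sketch using the rounded D65 sRGB → XYZ matrix as the change of basis:

```python
# Rounded linear sRGB -> XYZ (D65) matrix.
M = ((0.4124, 0.3576, 0.1805),
     (0.2126, 0.7152, 0.0722),
     (0.0193, 0.1192, 0.9505))

def to_xyz(rgb):
    """Matrix-multiply a linear RGB triplet into XYZ."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in M)

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)

# Multiply in RGB, then convert: pure red times pure green is black.
mul_then_convert = to_xyz(tuple(x * y for x, y in zip(red, green)))
# Convert first, then multiply channel-wise in XYZ: clearly not black.
convert_then_mul = tuple(x * y for x, y in zip(to_xyz(red), to_xyz(green)))
```

Red times green is black in sRGB channel space, but the same channel-wise product computed after the basis change is far from black, so the order of operations (and hence the choice of primaries) matters for Multiply.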


@Carmelo_DrRaw Sorry for the confusion: throughout the thread my use of the term saturation wasn’t referring to HSV but to where values clip in the upper range. I should stop using it that way…

That explains things! I thought about limiting how much I could increase chroma for any given hue. Perhaps, I can do so in a way that would protect already colorful colors from being clipped; like vibrance vs saturation controls you might see in some image editors. Or,

I could rely on tools to accomplish that without getting my hands dirty and mind confused lol. In any case, a GUI is conducive to learning about this stuff.


@Elle Thanks for your explanation and taking the time to do it. It might not be for a 5 year old but I certainly understand it much better now :clinking_glasses:. I still don’t see the link though. I opened the post in raw format and still see nothing after the colon… weird.

Try this link: http://colour-science.org/posts/about-rendering-engines-colourspaces-agnosticism/

Oh, I see. Both

Also see the following article, which discusses reasons why ACES isn’t a good universal RGB working space:

and

About Rendering Engines Colourspaces Agnosticism:

are pointing to the same article.

That is a false assertion. As G’MIC algorithms don’t care about the input color space (like most image processing algorithms, they basically do arithmetic with pixel values), you cannot say it returns “wrong” results. You have to know the meaning of the input you give, and that determines the kind of output you’ll get (except for a very few commands, basically those doing colorspace conversions; but if you don’t like them, you are free not to use them, or even to re-define them).

That’s not correct either. If the default G’MIC sRGB<->RGB conversion does not suit your needs, you are free to define your own conversion formula. I doubt you can even find another tool that allows such flexible and generic behavior, to be honest.


That’s exactly the point: any command that uses XYZ/Lab/LCh values DOES imply a colorspace conversion, and therefore has to care about the input color space. If you assume sRGB input, you will get technically wrong results whenever the input data is in some other RGB colorspace.

Sorry to join the chorus that insists on this point, but that’s the crux of the matter…

You have some good examples in the FOSS world, like the VIPS image processing library, which is used by several projects, including photoflow.
The image representation used internally by VIPS allows image data and metadata to be attached together, and thus an ICC profile to be associated with RGB images. The ICC profile is then used in colorspace conversions to obtain correct results independently of the input RGB colorspace.

If there were a mechanism to associate metadata with CImg objects, then something similar could easily be implemented in G’MIC as well. One could always assume sRGB as a fallback if the ICC metadata is missing…


Then just redefine the commands rgb2srgb_ and srgb2rgb_ to fit your input data. I’ll make this easier in the next version of G’MIC (2.1.5), but it is already possible.

No, that’s not the spirit. Both CImg and G’MIC assume the user knows what kind of pixel data he manipulates; I don’t see why adding this info as metadata would be useful in any way.

It’s a little-known fact that one of my primary reasons for setting up pixls was also for me to learn from people far, far smarter than I on exactly these types of topics. :wink:

I’m not even remotely qualified to speak with any authority on the subject, other than to say that I am ridiculously thankful for those who can grok these topics and make them available to us laypersons! Honestly, I’m probably nowhere close to pushing the boundaries of what sRGB provides for me for the most part (artistically, for results that make me happy). But I’m always lurking to learn!


What @Carmelo_DrRaw says is of course correct, except for the word “saturation”. “Saturation” in image editing has far too many definitions:

  • We all use the word “saturation” loosely, to refer to “more or less colorful colors”.

  • We also use it technically to refer to “Saturation” calculated using HSL, HSI, etc. Let’s put HSL/HSI/etc to one side as not relevant to a discussion of LCH - as @Carmelo_DrRaw notes, these HSL/HSI/etc saturation values change every time you change the RGB working space, and this includes changes to the color space TRC as well as changes to the RGB primaries.

  • The word “saturation” has a precise definition in color appearance models such as CIECAM02, and that definition is different from the definition of “chroma”. This page by Mark Fairchild presents some nice definitions of colorfulness, saturation, and chroma as used in color appearance models:

http://www.rit-mcsl.org/fairchild/WhyIsColor/Questions/4-8.html

Here’s the TOC for the whole “WhyIsColor” series, which strives to hit the “explain it to me as if I were five” sweet spot, but I still find the explanations of chroma, colorfulness, and saturation difficult to visualize:

http://www.rit-mcsl.org/fairchild/WhyIsColor/map.html

LAB/LCH is not a color appearance model (it’s a “color difference” model), so “colorfulness” technically doesn’t apply to LCH. That doesn’t keep people from coming up with equations for colorfulness in LAB/LCH. This Wikipedia page gives a nice equation for calculating “colorfulness” - or maybe it’s really “saturation” - in the LAB/LCH color space. The page also has some very confusing (to me) sections on other topics:

In the LCH color space, LCH blend modes and color pickers allow you to keep Chroma constant while changing Lightness or Hue. But keeping the appearance of colorfulness/saturation constant is a different matter, requiring some workarounds. In GIMP, if you want to change the tonality of an image and also keep colorfulness constant (or is it saturation? the definitions are not easy for me to visualize!), use Luminance blend to blend the new tonality with the original image colors.

OK, putting all this technical stuff to one side, what does any of this mean for actual image editing? Here’s an example using GIMP-2.9’s Lightness and Luminance blends to show the difference between LCH Chroma and colorfulness/saturation (apparently you have to click on the image to see all of it - the “4. Luminance blend” version is at the bottom):

What you can’t tell from the above image is that some of the colors in the umbrellas in images #1 and #4 (but not in #3) are out of gamut with respect to the sRGB color space. One could use a mask to blend in some of the result of the Lightness blend, to bring the Luminance blend colors all back into gamut. The good news is that GIMP 2.9 (default, not yet my patched version) now has a really nice clip warning for floating point images.

If anyone wants to experiment with color appearance model terminology by changing the various parameters, the RawTherapee CIECAM02 module allows you to do just that, which might help quite a lot in acquiring a practical understanding of the difference between saturation, chroma, and colorfulness, and various other color appearance model terms. Plus the sliders and curves in the RT CIECAM02 model are just plain fun to experiment with.


Bookmarked :slight_smile:.

Yes, the wiki entries on color could be confusing due to wording, typos and factual inaccuracies.

I think that answers my question. I do have both GIMPs installed. It is just that it is rather taxing on my system (and my stamina) to switch between applications repeatedly.

Argh! Sorry, I messed up! I forgot that

gmic h rgb2lab

    rgb2lab (+):
                        illuminant={ 0=D50 | 1=D65 } |
                        (no arg)

gmic 50,50,1,3 fill_color 0,0,255 rgb2lab 0 lab2lch s c k.. p

whereas

rgb2lch --> rgb2lab 1 lab2lch

∓ 0,0,255 → 133.80 131.208, or if you keep more figures, 131.20828247070312.
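For what it’s worth, the D50 figure can be cross-checked in a few lines of Python. This is only a sketch: it assumes the commonly published Bradford-adapted linear sRGB → XYZ (D50) matrix, and `chroma_blue_d50` is a made-up name.

```python
import math

# Bradford-adapted linear sRGB -> XYZ (D50) matrix and D50 reference white
# (commonly published values, e.g. on Bruce Lindbloom's site).
M_D50 = ((0.4360747, 0.3850649, 0.1430804),
         (0.2225045, 0.7168786, 0.0606169),
         (0.0139322, 0.0971045, 0.7141733))
WHITE_D50 = (0.96422, 1.0, 0.82521)

def chroma_blue_d50():
    """LCh Chroma of the sRGB blue primary, relative to D50."""
    X, Y, Z = (row[2] for row in M_D50)   # linear sRGB (0, 0, 1)
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((X, Y, Z), WHITE_D50))
    return math.hypot(500 * (fx - fy), 200 * (fy - fz))
```

With these rounded constants the result lands at roughly 131.2, matching the 131.208… figure from the D50 conversion above.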


Cool - I’m really impressed by people who can use gmic from the command line! And the way you redid the calculations, the result matches GIMP/xicclu - yes?

Several years ago, when I spent some quality time exploring the GIMP gmic filter options, the filter that I used to make a more or less finished image (with some subsequent tonal adjustments in GIMP) was an anisotropic noise reduction filter. For some reason I find anisotropic noise reduction very pleasing in the way it “brushes away” details. FWIW, here’s a copy of the image - the tonality looks darker than I intended when viewed against the white pixls.us background, but here it is anyway:

Essentially the same, except for the sixth decimal figure, which we don’t need to worry about.

The problem with CLI is that it is susceptible to typos and human oversight. In this case, I neglected to consider the fact that rgb2lch uses the D65 illuminant.

The advantage is that I have more control over what G’MIC does. I.e., in GIMP, many G’MIC filters do indeed assume sRGB (and some assume 256 levels), which may be the primary reason for your misgivings re G’MIC.

(However, it goes without saying that it is difficult for anyone to make something that anticipates all possible conditions, like GIMP evolving to support higher bit depths. Commercial software like Matlab does a better job because its users pay good money to depend on it.)

That is not to say that G’MIC does not have its quirks in the CLI as well, but this is true for all apps. No two apps output exactly the same images.

I can’t say I like the result there, but then I don’t have the before image with which to compare. I have seen good uses of anisotropic filtering; it is just that they often come with too many parameters for me to adjust. I tend to use the guided filter for its low parameter count, modernness and cool concept. I still haven’t used it to its full potential. E.g., I still haven’t figured out how it can mask fine details like hair.

Well, to my mind, how good or bad an image looked “before” is irrelevant when assessing the image. FWIW, I agree with your assessment - there are aspects of the image that I like, but not enough that I’d post it in one of my website galleries.

I’ve tried using anisotropic filters in a couple of other images, but never managed to make a finished image that I liked. I blame this on my skill level, not on the algorithm.

Subject #2 (S2)

Now that I am satisfied with S1, on to my next question. I have been wondering about CIECAM02. It appears to be much more complex than CIELAB and has many more parameters. I will start with a general question:

Do you apply CIECAM02 and derivatives like JCH to your workflow? If so, how do you make sense of all of the complexities that go with it? I have only briefly read about it but would like to hear the thoughts of more experienced devs and adventurers :slight_smile:.
