More on LCH and JCH

This thread is meant to collect (my) thoughts and questions on these two color spaces.


Subject #1 (S1)

At the moment, I am pondering how C_LCH is distributed. Observe in the last figure of this post that

Linear RGB → C_LCH
255,0,0 → 104.55
0,255,0 → 119.77
0,0,255 → 133.80 (later corrected to 131.208)

NB The figure has HSV (as derived from sRGB) mapped onto the LCH coordinate map. My numbers come from linear RGB via gmic: 50,50,1,3 fill_color 0,0,255 rgb2lch s c k.. p. Correct me if I am wrong: does 240° correspond to 0,0,255? I get confused when gamma is involved. @Elle @David_Tschumperle

How should I scale, increase, decrease or otherwise manipulate C_LCH if saturation appears to depend on hue H_LCH?

Answer: The posts that best answered my question were Post 12, Post 22 and Post 24.

Hi @afre - As an aside to the question you are actually asking, gmic numbers are slightly different from what you will get from using GIMP’s “change foreground color” tool. ArgyllCMS xicclu utility, default GIMP and my GIMP-CCE all agree that the correct LCH “C” value for sRGB 0,0,255 is 131.2, not 133.8. You will probably find these slight differences throughout any sRGB to LCH conversion done by gmic vs GIMP. I noticed a similar discrepancy a while back, comparing gmic results to GIMP/xicclu results.
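As an illustrative aside (a sketch of my own, not part of the original exchange): one plausible source of the gap is the reference white. Computing LCh directly against D65 reproduces the ~133.8 figure for sRGB blue, while Bradford-adapting to D50 first, as ICC-profile-based tools like xicclu do, reproduces ~131.2. The matrices below are the commonly cited linear sRGB→XYZ(D65) matrix and Lindbloom’s Bradford D65→D50 matrix; whether gmic actually skips the adaptation is my assumption, not something confirmed in this thread.

import math

# Commonly cited linear sRGB -> XYZ (D65) matrix
SRGB_TO_XYZ_D65 = [[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]]

# Lindbloom's Bradford chromatic adaptation matrix, D65 -> D50
BRADFORD_D65_TO_D50 = [[ 1.0478112, 0.0228866, -0.0501270],
                       [ 0.0295424, 0.9904844, -0.0170491],
                       [-0.0092345, 0.0150436,  0.7521316]]

D65 = (0.95047, 1.00000, 1.08883)
D50 = (0.96422, 1.00000, 0.82521)

def mat_vec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def xyz_to_lch(xyz, white):
    def f(t):  # CIE LAB forward function
        return t ** (1/3) if t > (6/29) ** 3 else t / (3 * (6/29) ** 2) + 4/29
    fx, fy, fz = (f(c / w) for c, w in zip(xyz, white))
    L = 116 * fy - 16
    a, b = 500 * (fx - fy), 200 * (fy - fz)
    return L, math.hypot(a, b), math.degrees(math.atan2(b, a)) % 360

blue = mat_vec(SRGB_TO_XYZ_D65, (0.0, 0.0, 1.0))
print("against D65:     L=%.2f C=%.2f h=%.2f" % xyz_to_lch(blue, D65))
print("adapted to D50:  L=%.2f C=%.2f h=%.2f"
      % xyz_to_lch(mat_vec(BRADFORD_D65_TO_D50, blue), D50))
# -> C comes out near 133.8 against D65 and near 131.2 against D50,
#    matching the two values discussed above.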

Yes, 240 in HSV does correspond to 0,0,255. As far as “gamma” is concerned, in any RGB working space, as long as every channel value is either 0 or 255 (1.0 floating point, etc.), the TRC encoding doesn’t matter - the HSV values point to the same XYZ color. Anywhere in between, the TRC makes a difference. This is also true when converting from RGB to LCH, as per the following example using xicclu:

$ xicclu -pL -ir sRGB-elle-V2-g10.icc
0.000000 0.000000 1.000000 [RGB] → 29.565320 131.208289 301.364760 [LCh]
0.000000 0.000000 0.500000 [RGB] → 20.165219 104.140088 301.364760 [LCh]

$ xicclu -pL -ir sRGB-elle-V2-srgbtrc.icc
0.000000 0.000000 1.000000 [RGB] → 29.565320 131.208289 301.364760 [LCh]
0.000000 0.000000 0.500000 [RGB] → 11.256500 78.486856 301.364760 [LCh]

Notice the LCH numbers match for 0,0,1 floating point, but not for 0,0,0.5 floating point.
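To make the “anywhere in between” point concrete, here is a tiny sketch (mine, added for illustration) of the sRGB decoding step on its own: the endpoints 0.0 and 1.0 are fixed by any TRC, while 0.5 lands somewhere quite different once the sRGB curve is removed.

def srgb_decode(v):
    # Standard sRGB transfer function (encoded -> linear)
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

for v in (0.0, 0.5, 1.0):
    print("encoded %.2f -> linear %.4f" % (v, srgb_decode(v)))
# encoded 0.00 -> linear 0.0000
# encoded 0.50 -> linear 0.2140
# encoded 1.00 -> linear 1.0000
# 0,0,1.0 therefore yields the same XYZ/LCh under either TRC, while 0,0,0.5
# decodes to 0,0,0.214 under the sRGB TRC - hence the different L and C in
# the two xicclu runs above (the hue stays on the same blue ray).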

Hmm, my apologies, I don’t understand the question. Are you asking how to manipulate LCH Chroma if HSV Saturation depends on HSV Hue? If so, I still don’t understand the question :confused: - the two color spaces just don’t have a lot in common mathematically - it might help if you could give a concrete example of the editing task you have in mind.

I haven’t used xicclu before. Would you say that it is more accurate than gmic? I don’t have the insight into the mathematics, color science or programming to tell.

Oops, I meant

How should I scale, increase, decrease or otherwise manipulate C_LCH if the max C_LCH appears to depend on hue H_LCH?

Since red ≈ 104.55, green ≈ 119.77 and blue ≈ 133.80, it means the reddest red, greenest green and bluest blue have different max chroma. (You mentioned these won’t change with different gamma: how about color gamuts?)

I wouldn’t be able to adjust the chroma of an image without knowing its color composition. Otherwise, I would risk exceeding the max chroma for certain colors, while under-delivering others…

If I sound like I don’t know what I am talking about, that is just it :blush:.

Yes, xicclu is part of ArgyllCMS. ArgyllCMS code is very reliable and very accurate. ArgyllCMS is actually an entire ICC profile Color Management System, specializing in delivering accurate results for making device ICC profiles, among other things.

You ask an important question: How do you decide which software to trust for which particular application?

As an example, back in 2014 I wrote a patch for the babl code (extensions/CIE.c) that does LAB/LCH conversions for GIMP. To write this patch, I started with Lindbloom’s equations for converting from RGB to XYZ to LAB and LCH and back again. I set up spreadsheets and tracked sample colors forwards and backwards, and compared GIMP output using my code modifications with what my spreadsheet said should result, and also with ArgyllCMS xicclu output.

I also compared results with output from Lindbloom’s own online calculators (I’m not sure if those still are available and functioning). Lindbloom’s values do differ slightly from ArgyllCMS values and from my spreadsheet values even though the spreadsheet used Lindbloom’s equations. Why? Because Lindbloom uses ASTM values for D50 and D65, and ArgyllCMS uses the D50 values given in the ICC profile specifications and the D65 values used in various color space specifications.

So I trust Lindbloom’s equations for LAB/LCH and for Bradford chromatic adaptations. But I don’t trust his output, or rather I trust that his output is accurate given the input values he uses for D50 and D65. But he doesn’t use the values that are actually used in ICC profile color space conversions.

So obviously my own “chain of trust” includes relying on the Lindbloom equations to be accurate, and relying on output from ArgyllCMS utilities to be accurate - a trust reinforced by how closely the ArgyllCMS results matched the results from my spreadsheet implementation of the calculations given on Lindbloom’s website.

I haven’t relied only on these sources: I’ve read fairly widely, though not particularly deeply, on the efforts of color scientists to mathematically capture how we see colors, and I’ve never seen anything that makes me think “Oh, gee, maybe all these people whose work I rely on are getting it wrong”.

I will leave it up to you and the gmic devs to determine how accurate gmic results are. I seldom use gmic, partly because it’s hard-coded to use the sRGB color space, and I don’t usually edit in the sRGB color space. On the other hand, a while back I picked an image and experimented with all the then-available gmic algorithms, and did “bookmark” one particular algorithm to come back to someday, at least for use with that particular image. Also I really like some of the gmic blurring algorithms.

If you like results of using gmic algorithms for image editing, then the accuracy of underlying calculations doesn’t really matter, does it? But if you are using gmic to determine accurate LCH values, it’s the wrong tool for the task.

Yes, the larger the RGB working space color gamut, the greater the chroma will be for the primaries. See Figure 3 on this page, comparing chromas of a selected swath of sRGB and Rec.2020 colors:

“Using LCH to pick complementary colors and for making hue-based color harmonies”: https://ninedegreesbelow.com/photography/lch-complements-and-color-harmonies.html

And compare the available ranges of chroma for given hues in sRGB and Rec.2020, with the chromas of pigments used in painting as given on the handprint.com website color wheels:

LAB: handprint : CIELAB ab plane
CIECAM: handprint : artist's color wheel (CIECAM version)

Yes, that is a drawback of using LCH - it’s very easy to produce out-of-gamut colors at floating point precision, and clipped colors at integer precision.
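One practical way to cope with this (a sketch of my own, assuming D65 white, no chromatic adaptation, and the commonly cited sRGB matrices) is to compute, for a given L and hue, the largest chroma that still converts to in-gamut linear sRGB, and clamp or compress any chroma boost against that boundary:

import math

# Commonly cited XYZ -> linear sRGB (D65) matrix
XYZ_TO_SRGB = [[ 3.2406, -1.5372, -0.4986],
               [-0.9689,  1.8758,  0.0415],
               [ 0.0557, -0.2040,  1.0570]]
D65 = (0.95047, 1.00000, 1.08883)

def lch_to_linear_srgb(L, C, h_deg):
    a = C * math.cos(math.radians(h_deg))
    b = C * math.sin(math.radians(h_deg))
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def f_inv(t):  # inverse of the CIE LAB forward function
        return t ** 3 if t > 6/29 else 3 * (6/29) ** 2 * (t - 4/29)
    x, y, z = (w * f_inv(t) for w, t in zip(D65, (fx, fy, fz)))
    return tuple(sum(r[j] * v for j, v in enumerate((x, y, z)))
                 for r in XYZ_TO_SRGB)

def max_in_gamut_chroma(L, h_deg, hi=200.0, eps=0.01):
    lo = 0.0                      # C = 0 (the neutral axis) is always in gamut
    while hi - lo > eps:
        mid = (lo + hi) / 2
        rgb = lch_to_linear_srgb(L, mid, h_deg)
        if all(0.0 <= c <= 1.0 for c in rgb):
            lo = mid              # still in gamut: push the boundary outward
        else:
            hi = mid
    return lo

for h in (30, 140, 306):          # red-ish, green-ish, blue-ish hue angles
    print("h=%3d  max C at L=50: %6.2f" % (h, max_in_gamut_chroma(50, h)))

In practice, for fixed L and h the in-gamut chromas form a single interval starting at C = 0, which is what makes the bisection valid; a vibrance-style control could compress chroma toward this boundary rather than clipping at it.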

I’m not sure what you mean by under-delivering.

I’m looking forward to the day when Rec.2020 monitors are affordable and people are able to use larger color gamuts even for images posted to the web. But I’m not looking forward to the avalanche of oversaturated images that I sort of expect people will start producing and posting.

With respect to using LCH, the “win” of editing in larger RGB color spaces isn’t that hues located near the primaries will have higher available maximum chromas. The “win” is that the hues in between the primaries will have higher available maximum chromas, allowing for example for a wider range of green-blues and blue-greens. From a “chroma” point of view, sRGB is very weak in these colors, compared not just to Rec.2020 but also to colors that can be printed using good quality photographic printers, to the pigments used in painting, and to the surface colors that are out there in the real world.

Thanks for the info and doing the homework so to speak. I wasn’t expecting a long answer :slight_smile:.

I agree that gmic does have its weaknesses, or at least things that I don’t fully understand, but I mainly use it because I can try things out fairly quickly compared to GUI-oriented apps, at least on my low-end computer. It also allows me to experiment and learn new things about image processing, like I am doing here.

BTW, which algorithms and things have been of interest to you?

I totally forgot about this; bad memory. And I gave you feedback on this article too :blush:.

As shown in your figure, max chroma values depend on which hue and space we are talking about. So, when I raise the chroma of the in-between hues, there is a possibility of overdoing others. Since I have some difficulty with color (and lack the skill and color management), I worry that I cannot tell what is too much or too little for any given hue. In your experience, how have you dealt with this problem?


Well, just the various blurring and noise-reduction algorithms, in theory. In practice, I’ve never used these in a finished image, except for blurring a layer mask, and I haven’t used the blurring algorithms since GIMP acquired an edge-respecting blur.

In theory I’d like to use the in-painting algorithms as per @patdavid’s wonderful tutorials on this topic. But so far I haven’t been able to get these to work as well as the old 8-bit resynthesizer plug-in. I suspect I’m just not doing all the steps correctly, so next time I want to do some in-painting I’ll try the gmic algorithms again.

I suspect gmic has some great sharpening algorithms, and the upsizing algorithms also sound appealing. But figuring out which algorithm with which parameters might work for a given editing goal seems to be a very time-consuming process, especially given that results seem to be specific to the image contents and image size.

Another reason I don’t use gmic is I’m never sure when a given algorithm expects sRGB input.

That’s probably a wrong reason. Most common image processing algorithms do not care about the type of input data. E.g., convolution, sharpening, … are not mathematically defined relative to the colorspace of the input data.
I’d even say that it often has quite a low impact on the result (I know you probably won’t agree, but if you look at people who are designing image processing algorithms in research labs, for instance, they mostly don’t care if the inputs are encoded in sRGB, linear RGB, or Lab, because this is of little importance compared to what the algorithm itself computes).

I agree that most image processing algorithms are not defined relative to the colorspace of the input data. The exceptions are things like:

  • Any algorithm that calculates relative luminance - which requires Y from XYZ, and so depends on the RGB color space primaries and also on the RGB color space TRC, because calculating relative luminance requires removing the color space companding curve in order to operate on linear RGB - otherwise you get luma (see the sketch after this list).

  • Any algorithm that uses luminance or luma as input also must take the RGB color space primaries and TRC into account, or else produces wrong results.

  • Of course algorithms that convert from RGB to XYZ and then perhaps to LAB or LCH must take the RGB color space primaries and also the color space TRC (the companding curve) into account, or else produce wrong results.
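To illustrate the luminance-vs-luma distinction from the first bullet, here is a minimal sketch of mine, assuming the sRGB TRC and the Rec.709/sRGB luminance weights:

def srgb_decode(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# Rec.709/sRGB luminance weights (the Y row of the sRGB -> XYZ matrix)
W = (0.2126, 0.7152, 0.0722)

def luminance(rgb_encoded):
    # Relative luminance: remove the companding curve first, then weight
    return sum(w * srgb_decode(c) for w, c in zip(W, rgb_encoded))

def luma(rgb_encoded):
    # Luma: weighted sum of the still-encoded values
    return sum(w * c for w, c in zip(W, rgb_encoded))

print(luminance((0.5, 0.5, 0.5)))   # ~0.214
print(luma((0.5, 0.5, 0.5)))        # 0.5 - same weights, different answer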

How wrong the results are, when using the wrong primaries and TRC to convert to LAB, of course depends on how far off the actual primaries and TRC are from the assumed primaries and TRC. Here are a couple of examples (one very wrong, one slightly wrong) from using the wrong TRC when converting from sRGB to LAB:

Well, I’m sure that you’ve read a lot more papers on image processing algorithms than I have. But from the ones that I’ve read, often there really is a conspicuous absence of any description whatsoever of what color space the input image is presumed to be in before the algorithm is applied. However, a failure on the author’s part to mention the RGB primaries and TRC does not in any way logically imply that the user’s choice of RGB primaries or channel encoding won’t make a visible and obvious difference in the result of applying the algorithm.

With respect to the channel encoding (TRC, companding curve), consider “gamma artifacts” from editing using perceptually uniform RGB instead of linear RGB. Default GIMP is built around the premises that:

  1. Gamma artifacts are important (I agree)
  2. Users should be able to edit their images without worrying too much about gamma artifacts (I agree, but users who know what they are doing should have the option to do other than what might be technically correct, and also there are a few operations for which “technically correct” is not really applicable, and recent default GIMP code does make it possible for users to choose to go against what’s technically correct)

Examples of gamma artifacts from painting in the regular sRGB color space with its more or less perceptually uniform TRC, compared to painting using linear sRGB:

The difference the RGB color space TRC (companding curve, channel encoding) makes when using “posterize” to make a step-wedge:

Adding noise to regular vs linear sRGB:

Auto-stretch-contrast:

For more examples of the difference the image’s RGB TRC (channel encoding, companding curve) makes, see:

Linear Gamma vs Higher Gamma RGB Color Spaces: Gaussian Blur and Normal Blend Mode

and

Is your image editor using an internal linear gamma color space? Should it?

OK, now let’s look at the notion that the RGB color space primaries don’t matter:

White-balancing an sRGB camera-saved jpeg (White balancing camera-saved sRGB jpegs) that was shot using the wrong white balance, using linear sRGB to pick the white point:

Same as above, except using linear Rec.2020:

Color correcting an image using a known neutral spot - for the second image, tyvek is very close to neutral white:

Following is a partial list of GIMP editing operations for which results are entirely independent of the color space RGB primaries, assuming the channel encoding (TRC, companding curve) is linear (all bets are off once non-linear channel encodings enter the mix):

Blend modes: Addition
Blend modes: Dissolve
Blend modes: Grain extract
Blend modes: Grain merge
Blend modes: Normal
Blend modes: Subtract
Colors: Brightness/Contrast
Colors: Desaturate luminosity
Colors: HDR Exposure, exposure and offset
Colors: Invert Colors
Colors: Levels Value Channel, upper/lower sliders
Colors: Mono Mixer, straight luminosity
Filters: Apply lens
Filters: Edge Detect difference of gaussians
Filters: Emboss
Filters: Gaussian Blur
Filters: Lens distortion
Filters: Noise Spread
Filters: Pixelize
Filters: Unsharp mask
Filters: Vignette - black, white, gray
Filters: Noise Pick
Filters: Noise Slur
Paint Tools: Normal, etc blend modes
Tools/gegl op: High Pass
Tools/gegl op: Mantiuk06
Tools/gegl op: Mirror
Tools/gegl op: Radial Gradient
Tools/gegl op: Gaussian blur
Transforms: Crop
Transforms: Flip
Transforms: Rotate
Transforms: Scale
Transforms: Other transforms

Here’s a partial list of GIMP editing operations for which results are very dependent on the RGB primaries, even if the channel encoding is linear:

Blend modes: Burn
Blend modes: Color
Blend modes: Darken only
Blend modes: Difference
Blend modes: Divide
Blend modes: Dodge
Blend modes: Hard light
Blend modes: Hue
Blend modes: Lighten only
Blend modes: Multiply
Blend modes: Overlay
Blend modes: Saturation
Blend modes: Screen
Blend modes: Soft light
Blend modes: Value
Channel data: Using channel data as an editing layer
Channel data: Channel-based selections
Colors: Alien Map HSL or RGB
Colors: Auto stretch contrast
Colors: Auto stretch contrast HSV
Colors: Channel Mixer
Colors: Color Balance
Colors: Colorize
Colors: Curves, RGB channels
Colors: Curves, Value channel
Colors: Desaturate average
Colors: Desaturate lightness
Colors: HDR Exposure, gamma
Colors: Hue-Lightness-Saturation
Colors: Levels RGB channels, upper/lower sliders (See Figure 2 below)
Colors: Levels gamma slider adjustments, RGB and Value channels (Also see Figure 1 below)
Colors: Mono Mixer, anything except straight luminosity (See Figure 3 below)
Colors: Threshold
Colors: Posterize
Colors: Value Invert
Filters: Artistic Cartoon
Filters: Artistic Soft glow
Filters: Edge Detect Laplace
Filters: Edge Detect Sobel
Filters: Noise RGB
Filters: Red Eye Removal
Filters: Tile Seamless
Filters: Vignette - color
Paint Tools: Multiply, etc blend modes
Tools/gegl op: Box Max
Tools/gegl op: Box Min
Tools/gegl op: Fattal2

For more information on the difference the RGB working space primaries make, and for links to example images, see the following article, which discusses why sRGB isn’t suitable as a universal color space for editing. Similar problems obtain regardless of what RGB color space one might choose as the one and only color space for editing, be that color space sRGB or ProPhotoRGB or ACES or whatever:

Limitations of unbounded sRGB as a universal color space for image editing:

Also see the following article, which discusses reasons why ACES isn’t a good universal RGB working space:

About Rendering Engines Colourspaces Agnosticism:

To summarize:

Addition and subtraction are chromaticity-independent operations, but results do depend on the TRC.

Multiply and divide by any color other than gray are chromaticity-dependent operations, and results also depend on the TRC, not just “technically” but also visibly and obviously. So are operations that depend on retrieving individual channels for use in further editing steps.

“Gamma” adjustments and Curves also are chromaticity-dependent except when operating on all three channels by exactly the same amount, again, not just technically, but visibly. And results do depend on the TRC, not just technically but also visibly.

I will absolutely agree that sometimes the difference between sharpening on linear vs perceptually uniform RGB is subtle. But sometimes that subtle difference is visually important.


Wow that’s a lot of (very interesting) information! Perhaps this is a case of differing definitions rather than opinions? I think the point about G’MIC is that the core commands are calculations over a data set / signal and don’t “care” about what you’re operating on - it’s entirely up to the user.

Having said that, the community filters are very free-form so obviously it’s difficult to tell exactly what’s going on (I’m probably guilty of any number of misinformed colourspace travesties). I don’t think that means it should be avoided though!

That’s exactly the spirit, yes. That’s why G’MIC also works with images that don’t represent ‘colors’ at all. Feed it with e.g. MRI datasets where each pixel/voxel value represents a response to a magnetic field, and it will work the same. Do it with X-ray images, satellite images, and so on… it will work the same. You will be able to perform blur, convolution, sharpening, and all the usual image processing operators on those images. The user has to know what kind of data he gives to the tool.

In the end, this means the tool is generic; that’s the point.

This also means you cannot say things like: “I don’t use G’MIC because I don’t know what kind of data is expected as input”. This is nonsense, from a G’MIC perspective.
The reality is that color images are a very small planet in the whole image processing universe. I’m aware we are on a photography forum, so we mainly talk about RGB images here, but image processing algorithms are just mathematically defined; they mostly don’t give a shit about colors.

Of course, we should take care to apply the algorithms in the most accurate color representation available, when algorithms are applied to images. But in general, there is no need to be ‘exact’; close is enough. Linear RGB is known to be better adapted for color averaging, but that is only because it is closer to how we humans perceive the averaging of colors. All the examples illustrated by @Elle are well known, but try a slightly different transformation, like using a gamma of 2.2 instead of 2.4. I’m 100% sure you won’t see the difference, as long as the inverse transformation is also well defined.
Anyway, in the end, people do not perceive colors the same way.
Thus, I think people shouldn’t be obsessed by numbers when representing colors. Worrying about the 2nd digit after the decimal point is definitely useless when talking about a color transformation.

Most of the time, it is more than sufficient to know that:

  • RGB colors in usual file formats (JPEG, PNG, …) are encoded in sRGB.
  • Doing a “rough” sRGB->LinearRGB transform is a good idea to bring the usual color arithmetic closer to our visual perception (or sometimes use the Lab color space instead).
  • Do the LinearRGB->sRGB transform at the end, to store the result back in a file.

In the end, the conversion formula is of little importance. Just be close enough and you’re good.
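For what it’s worth, here is a small sketch (mine, not from the post) of that rough pipeline, comparing the exact sRGB curve against a plain gamma-2.2 stand-in when averaging black and white in linear light:

def srgb_decode(v):      # exact sRGB curve (encoded -> linear)
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_encode(v):      # exact sRGB curve (linear -> encoded)
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def rough_decode(v):     # "close enough" gamma 2.2 stand-in
    return v ** 2.2

def rough_encode(v):
    return v ** (1 / 2.2)

# Average pure black and pure white in linear light, then re-encode
exact = srgb_encode((srgb_decode(0.0) + srgb_decode(1.0)) / 2)
rough = rough_encode((rough_decode(0.0) + rough_decode(1.0)) / 2)
print("exact sRGB: %.4f   gamma 2.2: %.4f" % (exact, rough))
# -> ~0.7354 vs ~0.7297, roughly 1.5/255 apart: visually negligible, while
#    naively averaging the encoded values would give 0.5 - a big, visible error.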

I’ve met a lot of people whose job is to design image processing algorithms (that’s also my job), and they all do that, roughly. I think I have to say it again: it’s more than sufficient for most real cases. I don’t believe in statements like “this color representation is not exact enough to process the image with this algorithm”. Sounds more like the delirium of a maniac to me :slight_smile:

Something close enough to how our perception works is enough.
The effect of the 5th digit after the decimal point is ridiculously negligible compared to the kind of operations a smart image processing algorithm performs.

Good night :slight_smile:


Interesting perspectives

What I would say is that there is always a tension between perception, theory, practice, standards and “vernacular”; also personal emphasis and predisposition.

Just look at the packaging and placement in grocery stores across different philosophies and regions of the world, for example. The labeling, design and marketing are all very different. In Asia, you have ISO this, ISO that. In Europe, you have that ℮ sign everywhere. In America, food images look indulgent on the box but less remarkable on the inside. Then there is the health food store, fitness, etc. I digress but hope you get my point.

In terms of this particular thread, I would say that the attempt to adhere to standards is a good thing, especially when many of us on discuss prefer to have a closed system that is color managed and color accurate (however that is decided; I don’t think many of ICC’s determinations are ISO yet :stuck_out_tongue_closed_eyes:).

Take my workflow for instance. Recently, I have been experimenting with a mix of photoflow, gmic and gimp processing. As seen in my PlayRaw attempts, I sometimes come up with something nice, but many times it becomes downright terrible (and I silently remove those entries hoping people don’t notice :rofl:). That is because I cannot make them play nice with each other. Oh, but what a joy it is to get them to cooperate! At least, it is a fun exercise for me!

The point is that their philosophies are so different, and it would be great if I knew a way to go from one app to the other without too many roadblocks. I don’t think that I will be able to reconcile their differences any time soon, so I will have to accept that and educate myself as much as possible to mitigate any outrageous inconsistencies.

S1 (Back to Chroma)

I am still not sure whether my original question has been answered. Maybe I didn’t pose it all that well or am not getting what I expected. I guess the main concern is that I am not sure whether I know how to use the C channel of LCH anymore.

I thought I knew, then I realized that the max C for every H is different depending on the space. Well, my left brain knew, but it hit me harder recently. It is different from saturation in that saturation happens at clipping. But in LCH space and in unbounded floating point ranges, it isn’t as simple, or at least I am not at that level of comprehension yet.

I hope other people besides @Elle and @David_Tschumperle will pitch in too. Though I named them specifically, since I have been in discussion with them on similar topics before, I would appreciate more perspectives from more people. Any suggestions @patdavid?

Follow-up questions

Were you planning to link something there? Nothing is after the colon currently :slight_smile:.

Bear with me, I still don’t quite get the addition and subtraction vs multiplication and division thing. I have read your articles but it isn’t clicking. I don’t know if you could “explain like I’m five” so to speak. I feel that it is important that I grasp this stuff moving forward.


In fact, I think it is simpler in LCH than in HSV/HSL.

Let’s put it this way: HSV/HSL is a color representation, not a colorspace, and an HSV triplet does not correspond to a unique color (exactly like an RGB triplet does not define a unique color, unless you also specify the color space, like sRGB, AdobeRGB, ProPhoto…). Hence, the same saturation value corresponds to different visual saturations depending on the RGB colorspace from which the HSV values have been derived.

Moreover, the HSV representation is not perceptually uniform, and therefore the visual saturation does not stay constant when you scan the H values at fixed S.

On the other hand, the CIELCh colorspace has been designed to be perceptually uniform, and to approximate a constant visual saturation for a given C value across different Hue values.

Coming back to your original question, the fact that the three sRGB primaries have different C values is simply a consequence of the fact that the blue sRGB primary is closer to the spectral locus than the red and green ones (as can be seen here), and therefore a “pure” sRGB blue is “visually more saturated” than a “pure” red or green. This statement is probably not 100% correct from the mathematical point of view, but should give you the idea…

The bottom line: CIELCh is a better representation than HSV/HSL if you want to edit colors in an intuitive and device-independent way. For example, you can fix C and change h in order to make colors warmer or cooler without affecting the resulting “perceived saturation”.
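A quick numeric check of this (my sketch; colorsys is Python’s built-in HSV helper, and the LCh conversion assumes D65 sRGB with no chromatic adaptation, which is why the numbers land on the gmic-style values from the top of the thread):

import colorsys, math

SRGB_TO_XYZ = [[0.4124, 0.3576, 0.1805],
               [0.2126, 0.7152, 0.0722],
               [0.0193, 0.1192, 0.9505]]
D65 = (0.95047, 1.00000, 1.08883)

def srgb_primaries_to_lch(rgb):
    # Valid here because all channel values are 0 or 1, so the TRC is moot
    xyz = [sum(SRGB_TO_XYZ[i][j] * rgb[j] for j in range(3)) for i in range(3)]
    def f(t):
        return t ** (1/3) if t > (6/29) ** 3 else t / (3 * (6/29) ** 2) + 4/29
    fx, fy, fz = (f(c / w) for c, w in zip(xyz, D65))
    return 116 * fy - 16, math.hypot(500 * (fx - fy), 200 * (fy - fz))

for hue_deg in (0, 120, 240):       # pure red, green, blue; S = V = 1
    rgb = colorsys.hsv_to_rgb(hue_deg / 360, 1.0, 1.0)
    L, C = srgb_primaries_to_lch(rgb)
    print("HSV hue %3d -> L = %5.1f  C = %5.1f" % (hue_deg, L, C))
# Identical HSV saturation, yet C comes out near 105, 120 and 134.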
By the way, RawTherapee has LCH curves to play with, and I have a few LCh-based editing tools in the coding pipeline for PhotoFlow…

Hope this helps!


Nothing I’ve said about g’mic should be interpreted as a reason for anyone to avoid using g’mic. If a user likes the results of using g’mic algorithms, that’s the only thing that counts.

I pointed out the discrepancy in LCH values for the sRGB primaries not because this is a reason to avoid g’mic for editing - again, if the user likes the results, that’s what counts. However, the slightly incorrect LCH values that @afre got from g’mic, when provided as “these are the values for the sRGB primaries” - that’s a different use from actual image editing. If the goal is finding the LCH values for particular colors, g’mic isn’t the best tool to use. It would be more accurate to use GIMP’s color picker or ArgyllCMS xicclu or a spreadsheet or etc.

There are two reasons why I myself almost never use g’mic:

The first and more important reason has nothing at all to do with whatever it is that g’mic does with data that it receives. g’mic provides a lot of options for various algorithms, and is also a bit like a “black box” - data goes in, results come out. I like to understand how the editing tools I use actually work. And I like to explore all the options that come with any given tool, so that I have an idea what to expect when using a given algorithm with a given set of options on a given type of image.

It would take me a long, long time to build up the expertise with g’mic that @garagecoder and @afre have. I’m guessing long-time g’mic users do have an idea of what will result from using various g’mic algorithms on different types of images. I don’t have any idea, and I haven’t found the time or motivation to make an effort to learn.

There are a lot of editing tools out there. I’ve spent time recently on PhotoFlow filmic and on RawTherapee CIECAM02 as these seem to provide (and do provide, as it turns out) answers to specific editing problems I want to solve. Someday something in g’mic might seem like the answer to an editing problem that I’m trying to solve.

The second reason why I don’t use g’mic does have to do with the fact that g’mic does have hard-coded sRGB values in the code. I mostly edit photographs in the Rec.2020 color space, and I paint in a custom color space. So if I were to use g’mic, any editing algorithm that converted from RGB to XYZ/LAB/LCH would produce technically wrong results. And any g’mic algorithms that use code that removes the sRGB companding curve, well, I use linear gamma color spaces most of the time. So “linearizing” the already linear RGB data by removing the non-existing sRGB companding curve would produce data that is quite far away from being linear, and so would produce gamma artifacts of the opposite type than the standard gamma artifacts (too light between blended colors instead of too dark).

Maybe I’m being really silly here, but knowing that “some” of g’mic’s algorithms assume my RGB data uses the sRGB primaries and TRC just plain bothers me. If the day ever comes that g’mic is “color space aware” instead of just assuming sRGB, that might increase my interest in learning the particulars of using g’mic.

Again, these are my reasons for not using g’mic. I’ve always thought very highly of the sophisticated algorithms provided by g’mic and equally highly of the artistic thought that goes into the many algorithms provided through the g’mic GIMP plug-in.

Hmm, yes, really there is! The box after the colon has the link. I didn’t make the box appear. There’s some sort of metadata in the header of the html that makes these little boxes appear. They don’t appear for links to articles on my website because I don’t use that metadata in the headers for my html pages.

I don’t think I can explain like you are five - sorry! Assuming floating point precision without clipping, these two procedures produce the same resulting color, that is, the same final XYZ channel values:

  1. Add two RGB colors together (channel by channel) in any given linear gamma RGB matrix color space, and then convert the result to XYZ.

  2. Convert the two RGB colors to XYZ and add the XYZ channel values.

You can use my GIMP-CCE to experiment with adding two colors together, say in the linear gamma sRGB color space - set the bottom layer to Normal and the top layer to Addition, and make new from visible. And then convert the XCF stack to linear gamma Rec.2020. Hide and unhide the “make new from visible” layer and you’ll see that the result of addition will be the same before and after the conversion.

Please note: The RGB channel values for the sum will be different in different linear gamma RGB working spaces. But the actual color that you see - the actual color in XYZ space - will be the same.

You can also verify results using ArgyllCMS xicclu at the command line.

Here’s a worked example:

Now try the same thing, but this time multiply the two colors (set the bottom layer to Normal, the top layer to Multiply), for which the color space primaries do matter quite a lot. The “Jupyter Notebook Viewer” link that I provided earlier provides a nice worked example.
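For a self-contained numeric version of both experiments (my sketch, not the worked example referenced above; the Rec.2020 matrix is the commonly published D65 one):

import numpy as np

SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
REC2020_TO_XYZ = np.array([[0.636958, 0.144617, 0.168881],
                           [0.262700, 0.677998, 0.059302],
                           [0.000000, 0.028073, 1.060985]])

# Two arbitrary colors, defined in XYZ so both spaces see the same colors
c1 = np.array([0.20, 0.30, 0.10])
c2 = np.array([0.15, 0.10, 0.40])

for name, M in (("sRGB", SRGB_TO_XYZ), ("Rec.2020", REC2020_TO_XYZ)):
    to_rgb = np.linalg.inv(M)            # XYZ -> linear RGB of this space
    r1, r2 = to_rgb @ c1, to_rgb @ c2
    print(name, "add:     ", M @ (r1 + r2))   # identical in both spaces
    print(name, "multiply:", M @ (r1 * r2))   # different in each space

Addition commutes with the RGB→XYZ matrix because matrix multiplication is linear; channel-wise multiplication is not linear, so it does not survive the change of basis between RGB spaces.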

If you really do want to understand what happens with various algorithms, even something as simple as addition and multiply, experimenting for yourself is the best route to getting a feel for what happens. The first time I stacked solid red, solid blue, and solid green, with red at the bottom set to Normal blend and blue and green set to Addition, and got white, that seemed very odd - even though I knew what to expect!

Physically, addition is like shining a light on a piece of paper, and then shining another light, that might be the same or a different color, on the same spot on the same piece of paper. Light waves add.

Physically, multiply is like putting a filter over a light - the resulting color depends not only on the wavelengths absorbed by the filter, but also on the color of the light before it passes through the filter. The exception is if the filter is neutral gray, which merely attenuates without also changing the color of the light that makes it past the filter.


@Carmelo_DrRaw Sorry for the confusion: throughout the thread my use of the term saturation wasn’t referring to HSV but to where values clip in the upper range. I should stop using it that way…

That explains things! I thought about limiting how much I could increase chroma for any given hue. Perhaps I can do so in a way that would protect already colorful colors from being clipped, like the vibrance vs saturation controls you might see in some image editors. Or,

I could rely on tools to accomplish that without getting my hands dirty and my mind confused lol. In any case, a GUI is conducive to learning about this stuff.


@Elle Thanks for your explanation and taking the time to do it. It might not be for a 5 year old but I certainly understand it much better now :clinking_glasses:. I still don’t see the link though. I opened the post in raw format and still see nothing after the colon… weird.

Try this link: http://colour-science.org/posts/about-rendering-engines-colourspaces-agnosticism/

Oh, I see. Both

Also see the following article, which discusses reasons why ACES isn’t a good universal RGB working space:

and

About Rendering Engines Colourspaces Agnosticism:

are pointing to the same article.

That is a false assertion. As G’MIC algorithms don’t care about the input color space (like most image processing algorithms, they basically do arithmetic with pixel values), you cannot say it returns “wrong” results. You have to know what meaning you give to the input, and that most often determines the kind of output you’ll get (except for a very few commands, basically those doing colorspace conversions, but if you don’t like them, you are free not to use them, or even to re-define them).

That’s not correct either. If the default G’MIC sRGB<->RGB conversion does not suit your needs, you are free to define your own conversion formula. I even doubt you can find a tool that allows such flexible and generic behavior, to be honest.


That’s exactly the point: any command that uses XYZ/Lab/LCh values DOES imply a colorspace conversion, and therefore has to care about the input color space. If you assume sRGB input, you will get technically wrong results whenever the input data is in some other RGB colorspace.

Sorry to join the chorus that insists on this point, but that’s the crux of the matter…

You have some good examples in the FOSS world, like the VIPS image processing library that is used by several projects, including photoflow.
The image representation used internally by VIPS allows image data and meta-data to be attached together, and thus an ICC profile can be associated with RGB images. The ICC profile is then used in colorspace conversions to obtain correct results independently of the input RGB colorspace.

If there could be a mechanism to associate meta-data to CImg objects, then something similar could be easily implemented in G’MIC as well. One could always assume sRGB as a fallback if the ICC meta-data is missing…


Then just redefine the commands rgb2srgb_ and srgb2rgb_ to fit your input data. I’ll make this easier in the next version of G’MIC (2.1.5), but it is already possible.

No, that’s not the spirit. Both CImg and G’MIC assume the user knows what kind of pixel data he manipulates; I don’t see why adding this info as metadata would be useful in any way.