Is there a PR for that? Maybe I’m duplicating efforts…
I don’t think there is a pull request but I think I heard @aurelienpierre talk about working on it
Imho, raw processing is mainly in RGB space. The input (raw files) is mostly RGB data (with some very rare exceptions), and demosaicing is also done in RGB (with some exceptions, such as X-Trans Markesteijn and Bayer AHD demosaic, which use an intermediate conversion to Lab or YPbPr).
After demosaicing the image is not raw anymore, which means the conversions that happen after demosaic don’t really fit the topic of this thread.
It may help to remember that RGB profiles are essentially three components:
(1) Chromaticity of the white point, expressed either as x and y, or as X, Y and Z. (In theory, chromaticity might be expressed as the a* and b* of L*a*b*, but it usually isn’t.)
(2) Likewise, the chromaticities of the three primaries.
(3) The transfer function, aka transfer curve or tone response curve (TRC). This might be a simple power (aka gamma) curve (e.g. AdobeRGB1998), or a linear portion plus a power curve (sRGB, Rec709, Rec2020, ProPhoto). This doesn’t affect chromaticity.
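For concreteness, the "linear portion plus a power curve" case can be sketched like this (a hypothetical Python sketch using the published sRGB constants; the function names are mine, not from any library):

```python
# The sRGB transfer curve: a linear segment below a small threshold,
# plus an offset power curve above it.

def srgb_encode(v):
    """Linear light (0..1) -> sRGB-encoded value."""
    if v <= 0.0031308:
        return 12.92 * v
    return 1.055 * v ** (1 / 2.4) - 0.055

def srgb_decode(v):
    """sRGB-encoded value -> linear light (the inverse of srgb_encode)."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4
```

Note that despite the 2.4 exponent in the formula, the overall curve approximates a simple gamma of about 2.2 because of the linear segment and offset.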
(1) and (2) can be combined with a 3x3 chromatic adaptation transform (CAT) matrix to make either a 3x3 conversion matrix or a look-up table (LUT).
Common CATs include Bradford, CMCCAT2000 and CAT02.
Or all three components could be combined with a CAT to make a single LUT. I don’t think this is normally done.
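As a hedged sketch of what one of those CATs does (the helper name is mine; the Bradford matrix coefficients are the published ones), adapting XYZ values from one white point to another looks roughly like this:

```python
import numpy as np

# The Bradford chromatic adaptation transform matrix (XYZ -> cone-like responses).
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def adaptation_matrix(src_white, dst_white):
    """3x3 matrix taking XYZ relative to src_white to XYZ relative to dst_white."""
    s = BRADFORD @ np.asarray(src_white)   # cone-like responses of the source white
    d = BRADFORD @ np.asarray(dst_white)   # ... and of the destination white
    # Scale in the cone-like space, then transform back to XYZ.
    return np.linalg.inv(BRADFORD) @ np.diag(d / s) @ BRADFORD

# e.g. adapt from D65 to D50 (approximate white-point XYZs):
D65 = [0.9505, 1.0, 1.0890]
D50 = [0.9642, 1.0, 0.8249]
M = adaptation_matrix(D65, D50)
```

By construction, M maps the D65 white exactly onto the D50 white; everything else is scaled in the cone-like space in between.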
From any profile that has been correctly specified, we can easily reverse-calculate to find the chromaticities of the WP and primaries.
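Both directions can be sketched in a few lines (a sketch, not from any particular library; names are mine): building the RGB -> XYZ matrix of a well behaved RGB space from its primary and white-point chromaticities, and reverse-calculating the chromaticities from the matrix.

```python
import numpy as np

def xy_to_XYZ(x, y, Y=1.0):
    """CIE xyY chromaticity -> XYZ tristimulus."""
    return np.array([x * Y / y, Y, (1 - x - y) * Y / y])

def rgb_to_xyz_matrix(primaries_xy, white_xy):
    """Matrix whose columns are the primaries, scaled so R=G=B=1 maps to the white."""
    P = np.column_stack([xy_to_XYZ(x, y) for x, y in primaries_xy])
    scale = np.linalg.solve(P, xy_to_XYZ(*white_xy))
    return P * scale  # scales each column by its solved weight

def chromaticities_from_matrix(M):
    """Reverse-calculate (x, y) of each primary by normalizing the columns."""
    s = M.sum(axis=0)
    return list(zip(M[0] / s, M[1] / s))

# sRGB/Rec.709 primaries with a D65 white:
M = rgb_to_xyz_matrix([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)],
                      (0.3127, 0.3290))
```

Normalizing the columns recovers the primaries exactly, and M applied to (1, 1, 1) recovers the white point, which is the reverse calculation mentioned above.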
Assigning a profile merely attaches this metadata to the image (aka an embedded profile); the pixel values are left untouched.
Converting to a profile also recalculates the image pixels. Converting is always from one profile to another; if the image has no attached profile, it can’t be converted. However, when there is no attached profile, software may assume some version of sRGB.
We also need to know that there are many versions of the sRGB, Rec2020 etc. profiles, partly because there are many versions of the “standard” illuminants D50, D65 etc., partly because the best guesses for physical “constants” change over the years, and partly because different levels of precision are used.
Analogies may be helpful, but I suspect they often add confusion. A raster image in a computer is essentially a large quantity of numbers for the pixels, and associated metadata that specify what colours should be produced from the numbers. Data and metadata.
At work, after having to undo misunderstanding based on a couple of bad analogies, I’m much less a fan of them.
Yes, I have read some of those papers. I recall there being proposals for an HDR-CIELAB. I wonder though at what point do we move on to other models such as CIECAM02 or CIECAM16, etc.
I’m still cogitating a post of some sort that incorporates the cartoon all of you have helped with, but I’m sidetracked right now doing work with the nascent librtprocess library. I’m thinking of a few more cartoons, ones that describe things like the transform operation (input profile -> PCS -> output profile), and the like, surrounded by a post of prose…
I always thought of CIECAM less as a color model and more as a perception model, useful for adjusting for certain differences in, for example, the surround, but not really intended as a space to edit pictures in.
More cartoons please! Very useful - really! - and they collect great feedback almost immediately.
Apologies, my attention diverted to … SQUIRREL!!!
Sorry, back again. Okay, here’s R3:
I made a profile icon from a ppmcie graphic. Note that the Maxwell gamut triangle gets smaller as the workflow progresses, in the manner a camera -> working -> output color migration would happen in practice. Not sure what to say about that on the chart; in my mind, that’d be described in the accompanying prose.
I do intend to make something of this, but my attention span will be a bit fragmented in the next couple of months
Pretty pictures always help with the experience.
Note that the Maxwell gamut triangle gets smaller as the workflow progresses
I always thought that the working profile should have a colour space as large as possible in order not to lose any colour information during processing? RT, for example, uses ProPhoto as its colour space.
Where do the rendering intents enter in your graph?
Most camera spaces I’ve considered are bigger than the “popular” working spaces, which are in turn mostly larger than the rendition spaces (although wide-gamut displays are challenging that), so my iconography attempts to illustrate the progression.
With regard to rendering intent, I was beginning to consider a separate cartoon about the transform act, maybe try to provide context around both PCS transforms and the apparent “VFR-direct” ACES/OCIO approach.
Oh, VFR-direct is an aviation term for “go directly there”…
This is getting into “print it and hang it on the wall” territory. Very understandable.
How are you going to handle the clipping of the visible gamut horseshoes?
I can’t see what you’re talking about…
Well, does your png have a profile?
Beware that this applies only to well behaved RGB devices. In the real world, devices aren’t all well behaved, and the “whitest point” may not be at R=G=B=1.
Note also that there is nothing preventing an image from having its white at some device values other than R=G=B=1. Such situations occur in proofing, or because the image’s creator intends something specific. I.e. it is easy to confuse the encoding space characteristics with the image characteristics, since traditional output-rendered images make them both the same, but conceptually they are distinct, and in practice they become distinct when large-gamut encoding spaces are in use.