Color space management when extracting or compositing channels

I have a color-managed workflow using lcms2. Assigning profiles to images and converting between color spaces works fine. However, I’m not sure what the correct approach is when extracting a channel from an RGB image, or when combining separate channels into an image.

When extracting a channel, it has been suggested to me that I should generate a gray profile for the extracted channel: create a transform from the original image profile to XYZ, use it to transform a 256-point ramp in order to build a tone curve mapping the original channel component to Y, and then create a gray profile from this tone curve and a D50 white point.
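In lcms2 terms I understand the suggestion to be roughly the following (a sketch rather than my actual code; the float buffer formats, the 256-point ramp and the normalization of Y to the channel’s own maximum are my assumptions):

```c
#include <lcms2.h>

/* Sketch: build a gray profile for one extracted channel (0=R, 1=G, 2=B)
 * by sampling the image profile's RGB->XYZ transform and using Y as a tone curve. */
static cmsHPROFILE gray_profile_for_channel(cmsHPROFILE image_profile, int channel)
{
    enum { N = 256 };
    cmsFloat32Number rgb[N * 3] = { 0 };
    cmsFloat32Number xyz[N * 3];
    cmsUInt16Number table[N];

    /* Ramp 0..1 on the chosen channel, other channels held at zero */
    for (int i = 0; i < N; i++)
        rgb[i * 3 + channel] = (cmsFloat32Number) i / (N - 1);

    cmsHPROFILE xyz_profile = cmsCreateXYZProfile();
    cmsHTRANSFORM to_xyz = cmsCreateTransform(image_profile, TYPE_RGB_FLT,
                                              xyz_profile, TYPE_XYZ_FLT,
                                              INTENT_RELATIVE_COLORIMETRIC, 0);
    cmsDoTransform(to_xyz, rgb, xyz, N);

    /* Use the Y component as the tone curve; normalizing to the channel's
     * maximum Y (so full channel value maps to white) is my own choice */
    cmsFloat32Number y_max = xyz[(N - 1) * 3 + 1];
    for (int i = 0; i < N; i++)
        table[i] = (cmsUInt16Number) (65535.0f * xyz[i * 3 + 1] / y_max + 0.5f);

    cmsToneCurve *curve = cmsBuildTabulatedToneCurve16(NULL, N, table);
    cmsHPROFILE gray = cmsCreateGrayProfile(cmsD50_xyY(), curve);

    cmsFreeToneCurve(curve);
    cmsDeleteTransform(to_xyz);
    cmsCloseProfile(xyz_profile);
    return gray;
}
```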

What isn’t clear to me is how I should construct an RGB profile when compositing several previously-extracted mono channels, each of which has a profile like the one generated above, back into an RGB image.

I have a nagging feeling I’m massively overcomplicating things - splitting and recombining channels is a fairly basic thing to want to do, after all. So what approach do other programs take?

What are you doing with the channels between the extraction and subsequent re-combination? In any event, I’d say just make sure the data is in the tone and colorspace you want before splitting, and then just assume they’re in the same space when you recombine. I think trying to tease out a per-channel transform is going to provide no end of headache…

The extracted channel might end up being the final image. I’m dealing with astronomical data, so for example if someone has taken an image using a dual narrowband filter (this has passbands for Hydrogen alpha which gets detected by the red CFA elements and Oxygen III which gets detected by the green and blue CFA elements) they might just decide they want to make a monochrome hydrogen-alpha image. Split out the red channel and there you have it, and I have a method of assigning a profile to the result. On the other hand, they might wish to save the split-out channel and combine it with data taken of the same object through a different filter at a later date.

On reflection, perhaps what I’ve done for splitting out a channel is fine, but for combining mono channels into an RGB image I should simply ignore any ICC profiles in the input mono images and just ask the user what profile they want to assign to the result, leaving it to their common sense not to expect good results if they try to composite a linear R channel with an sRGB G or B channel.

Ah, I get it.

sRGB should be a rendition transform, applied when you’re ready to make a final image. Until then, I think data should be treated as linear. Establishing that convention lets folk have a confident expectation of what they’re getting from others. In that regard, it’s probably prudent to transform the RGB data to a wide-gamut colorspace like ProPhoto prior to splitting, keeping it linear, and to use that color and tone convention for data exchange (rough sketch below).

Thinking aloud here, FWIW…
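For concreteness, a linear wide-gamut working profile could be built with lcms2 along these lines (sketch only; the ProPhoto/ROMM primaries and D50 coordinates are quoted from memory and the helper name is made up):

```c
#include <lcms2.h>

/* Sketch: a linear-TRC wide-gamut RGB profile (ProPhoto/ROMM primaries,
 * D50 white) to convert into before splitting channels. The numbers are
 * from the ROMM RGB spec as I remember it - check before relying on them. */
static cmsHPROFILE make_linear_prophoto(void)
{
    cmsCIExyY d50 = { 0.3457, 0.3585, 1.0 };
    cmsCIExyYTRIPLE primaries = {
        { 0.7347, 0.2653, 1.0 },   /* red   */
        { 0.1596, 0.8404, 1.0 },   /* green */
        { 0.0366, 0.0001, 1.0 }    /* blue  */
    };

    cmsToneCurve *linear = cmsBuildGamma(NULL, 1.0);
    cmsToneCurve *curves[3] = { linear, linear, linear };

    cmsHPROFILE profile = cmsCreateRGBProfile(&d50, &primaries, curves);

    cmsFreeToneCurve(linear);
    return profile;
}
```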

I do treat the data as linear initially. In particular, some processing relies on the data still being linear (like fitting Gaussian profiles to star shapes, some noise reduction routines and deconvolution). But lcms’s linear-to-sRGB transforms are slooow. Since linear astronomical data looks pretty much like white specks until it’s heavily stretched, I just don’t bother doing a display transform for the linear data. There is an auto-stretch screen transfer function that is computationally cheap and works well for getting a decent preview of what linear data will look like when stretched.

But at the point when the user starts applying their own stretches, the program prompts them to convert to a working profile, usually sRGB, or Rec2020 if they want to work in a wider gamut. That makes the display transform much cheaper for lcms2 (essentially no transform is needed with sRGB if you’re also using an sRGB display, and transforms between two nonlinear color spaces are much better accelerated by lcms2). That’s the point where they will typically want to turn off the auto-stretch display mode and start to see what their image really looks like.

I like your suggestion (assuming I’ve understood it correctly): during the extraction process, I can transform the RGB data from whatever color profile it’s currently in back to a linear RGB profile, and then assign a linear Gray profile to the split-out channels. If the user only ever gets to compose RGB images from linear channel data then the result will always be linear RGB and they can convert it to their preferred working profile as desired.
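That is, something like this (a small sketch just to pin down what I mean by a linear Gray profile, using lcms2’s gamma helper):

```c
#include <lcms2.h>

/* Sketch: a linear (gamma 1.0) Gray profile to assign to each split-out
 * channel once the RGB data has been converted to a linear RGB profile. */
static cmsHPROFILE make_linear_gray(void)
{
    cmsToneCurve *linear = cmsBuildGamma(NULL, 1.0);
    cmsHPROFILE gray = cmsCreateGrayProfile(cmsD50_xyY(), linear);
    cmsFreeToneCurve(linear);
    return gray;
}
```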


Hmmm, I’d think assigning the original RGB profile to each channel dataset would be preferable; then, when/if the channels are recombined, they’re treated as wide-gamut. Assigning a gray profile loses that information; if it’s decided to treat a single such channel as ‘gray’ for a rendition, then all you’re really concerned with at that point is the TRC, the tone curve.

Interesting. How would that work, though? The extracted channel is saved as a single-channel image file, so I can’t make sense of saving it with a 3-channel profile. The profile wouldn’t tell you which of its channels the data represents, surely?

With regard to recombining: if you take 3 single-channel images with a linear Gray profile and composite them as RGB, assign the result a profile with (say) the Rec2020 primaries and white point and a linear TRC, and then transform that to the Rec2020 profile with its standard TRC, don’t you still end up with a wide-gamut image? That’s essentially what happens in the first place when constructing the original image from 3 narrowband-filter mono images.
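Roughly what I have in mind, as an lcms2 sketch (the Rec2020 primaries and the Rec.709-style transfer-curve parameters are written from memory, and the buffer handling is simplified):

```c
#include <lcms2.h>

/* Sketch: composite three linear mono channels as interleaved RGB floats,
 * treat the result as "Rec2020 primaries + linear TRC", then transform it
 * to Rec2020 with its standard transfer curve. Verify constants before use. */
static void linear_composite_to_rec2020(const cmsFloat32Number *in,
                                        cmsFloat32Number *out,
                                        cmsUInt32Number npixels)
{
    cmsCIExyY d65 = { 0.3127, 0.3290, 1.0 };
    cmsCIExyYTRIPLE rec2020 = {
        { 0.708, 0.292, 1.0 },   /* red   */
        { 0.170, 0.797, 1.0 },   /* green */
        { 0.131, 0.046, 1.0 }    /* blue  */
    };

    /* Linear TRC describing the composited data */
    cmsToneCurve *linear = cmsBuildGamma(NULL, 1.0);
    cmsToneCurve *lin3[3] = { linear, linear, linear };
    cmsHPROFILE linear_2020 = cmsCreateRGBProfile(&d65, &rec2020, lin3);

    /* Standard Rec.2020 transfer curve (same form as Rec.709), as an lcms2
     * type-4 parametric curve: Y = (aX + b)^g for X >= d, Y = cX otherwise */
    cmsFloat64Number params[5] = { 1.0 / 0.45, 1.0 / 1.099, 0.099 / 1.099,
                                   1.0 / 4.5, 0.081 };
    cmsToneCurve *trc = cmsBuildParametricToneCurve(NULL, 4, params);
    cmsToneCurve *trc3[3] = { trc, trc, trc };
    cmsHPROFILE std_2020 = cmsCreateRGBProfile(&d65, &rec2020, trc3);

    cmsHTRANSFORM t = cmsCreateTransform(linear_2020, TYPE_RGB_FLT,
                                         std_2020, TYPE_RGB_FLT,
                                         INTENT_RELATIVE_COLORIMETRIC, 0);
    cmsDoTransform(t, in, out, npixels);

    cmsDeleteTransform(t);
    cmsCloseProfile(std_2020);
    cmsCloseProfile(linear_2020);
    cmsFreeToneCurve(trc);
    cmsFreeToneCurve(linear);
}
```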

Assigning the single-channel files the profile of the RGB amalgam they came from retains that information with the data. What you describe will work if the data was originally Rec2020; if you’re doing all the work yourself you can keep track of that, but if others are involved it may not be so clear.

The essential point is to retain knowledge of what the single channels were originally, so that same colorspace can be used after they’re recombined.

I don’t do astrophotography, but:

(this has passbands for Hydrogen alpha which gets detected by the red CFA elements and Oxygen III which gets detected by the green and blue CFA elements)

Does this mean that the red, green and blue channels have defined primaries? Or are they invented simply to make the image look good?

If they have defined primaries, then is this defined by an ICC profile (or a named colorspace, etc.)? If so, then the same ICC profile can apply to each of the separated images. Each separated image would still have three channels, but two of the channels would contain entirely zeros. This enables each separated image to be shared with other people, and the metadata explains what the pixel data represent.

If the separated data isn’t to be shared, and is just temporary within some workflow, then you could save the ICC profile somewhere and make each separated image a single-channel grayscale. When you eventually combine the images into three-channel RGB, you need to remember which image is which and assign the original profile.
