Yes. Your example colors are given as integer values, so just to be on the safe side I’ll mention that converting an arbitrary color between arbitrary well-behaved RGB working spaces does require using unbounded floating point conversions to avoid clipping.
There are some restrictions on what the destination color space’s TRC (“tone response curve”) can be. For RGB working spaces with “pure gamma” TRCs other than gamma=1.0, LCMS conversions to such working spaces actually clip any RGB channel values that would otherwise be less than zero.
So for V2 profiles, the destination color space needs to be a linear gamma color space (gamma=1.0) to avoid clipping in the shadows, if any of the source colors are out of gamut with respect to the destination color space. For V4 profiles, if the TRC is the right sort of parametric curve - one with a linear portion in the shadows, such as the sRGB or LAB companding curve - then out-of-gamut colors are not clipped in the shadows.
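To make the clipping issue concrete, here’s a small sketch (my own illustration, not LCMS code): a pure-gamma TRC has no defined result for negative channel values, so they must be clipped, while an sRGB-style parametric curve with a linear shadow segment can be extended through zero to carry out-of-gamut values:

```python
# Sketch: why a pure-gamma TRC must clip negative channel values,
# while an sRGB-style curve with a linear shadow segment can encode them.

def pure_gamma_encode(v, gamma=2.2):
    # v ** (1/gamma) is undefined for v < 0, so a pure-gamma
    # TRC has no choice but to clip negatives to zero.
    return max(v, 0.0) ** (1.0 / gamma)

def srgb_encode(v):
    # The sRGB curve is linear near zero; that linear segment can be
    # extended through zero to encode out-of-gamut negative values.
    if abs(v) <= 0.0031308:
        return 12.92 * v
    sign = 1.0 if v >= 0 else -1.0
    return sign * (1.055 * abs(v) ** (1 / 2.4) - 0.055)

out_of_gamut = -0.002                 # a shadow value pushed below zero
print(pure_gamma_encode(out_of_gamut))  # 0.0 -- clipped
print(srgb_encode(out_of_gamut))        # -0.02584 -- preserved
```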
Well, quantization from ICC profile color space conversions is not the only thing to worry about. At 32-bit floating point precision, this type of quantization is probably not a practical concern. I’d be curious as to which is a bigger source of quantization: working in a color space that’s hugely larger than the actual image color gamut (linear gamma matrix camera input profiles tend to have a lot of wasted space - space occupied by entirely imaginary colors), or doing a conversion to a less wasteful color space and then performing a bunch of subsequent edits.
Which is not at all to say that “smaller color spaces are better than bigger color spaces”, because the best color space to be working in depends on your editing goals and the specific editing steps you make along the way.
Putting quantization issues to one side, there are two basic mathematical operations that are used to modify pixel values: Addition and Multiply. Subtract is just the inverse of Addition. Divide is just the inverse of Multiply. And “raising to a power” is basically also a form of Multiply (it’s multiplication in log space).
As long as every single RGB operation you perform on your image pixels only uses Addition/Subtract (which also includes Multiplying and Dividing by gray - think about it and you’ll see this is true), it doesn’t matter what linear gamma well-behaved RGB color space the image is in. Final colors will be exactly the same. But if the image is in a non-linear RGB working space, then results will be different even for Addition/Subtract operations, depending on the color space primaries and the TRC.
As soon as any of your RGB operations involve Multiplying or Dividing the RGB channel values by a non-gray color, then the RGB working space primaries matter a whole lot even when the RGB working space has a linear gamma TRC, because Multiply/Divide give different editing results depending on the RGB color space primaries, even if the TRC is linear.
Hmm, two separate issues here: The first issue I already mentioned: Multiply/Divide by non-gray colors produce different results in different RGB working spaces, even if all the RGB color spaces are using a linear gamma TRC.
The second issue is that working in a more or less perceptually uniform RGB working space is certainly sometimes appropriate (depending on your editing goal). But it also introduces “gamma” artifacts. Avoiding gamma artifacts requires working in a linear gamma RGB working space.
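The classic illustration of a gamma artifact is averaging (as in blending or blurring) black and white pixels. Averaging in the perceptually uniform encoding gives a different, darker result than averaging in linear light. Using a pure power curve with gamma=2.2 as a stand-in for a perceptually uniform TRC:

```python
# Sketch: averaging in a perceptually uniform encoding vs in linear light.
GAMMA = 2.2   # pure power curve, for illustration only

def to_linear(v):  return v ** GAMMA
def to_encoded(v): return v ** (1.0 / GAMMA)

black, white = 0.0, 1.0

# Average in the perceptually uniform (encoded) space:
avg_encoded = (black + white) / 2   # 0.5 encoded

# Average in linear light, then re-encode for display:
avg_linear = to_encoded((to_linear(black) + to_linear(white)) / 2)

print(avg_encoded)            # 0.5
print(round(avg_linear, 4))   # ~0.7297 -- noticeably lighter
```

The linear-light average is the physically correct mix of the two lights, which is why blurs and blends done on perceptually uniform RGB tend to look too dark at the transitions.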
OK, you are using RawTherapee, and RawTherapee does a lot of operations in the LAB color space. Assuming an unbounded ICC profile conversion from whatever source color space to LAB, then once in LAB, the source color space doesn’t matter.
Regarding RawTherapee, there’s a lot I don’t know. For example, I don’t know:

- The exact sequence of color space conversions in RawTherapee, and whether any or all of these conversions are “unbounded”.
- Which operations are done in LAB and which in the user-chosen RGB working space.
- Which operations are performed using linear gamma vs perceptually uniform RGB.
Hopefully one of the RawTherapee experts can look at your list of RT operations. It sounds to me like your procedure is sound, though I’m not sure about the contrast and sharpening operations, or even the defringing - are these appropriately done before focus stacking?
As RT only exports at 16-bit integer precision, the next question is what Zerene Stacker does.
One possibility is that Zerene Stacker converts everything to some internal linear gamma RGB color space, in which case it might be best to export from RT in a perceptually uniform RGB color space to avoid possible 16-bit quantization in the shadows. That said, this probably isn’t a practical issue - you’d likely never notice the quantization - but I don’t know for sure.
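A back-of-the-envelope sketch of why the shadows are where 16-bit linear encoding gets coarse (using a pure gamma=2.2 power curve as a rough stand-in for a perceptually uniform TRC):

```python
# Rough sketch: count how many 16-bit code values fall below a given
# deep-shadow intensity under linear vs gamma encoding. More codes
# below that level means finer quantization steps in the shadows.

GAMMA = 2.2       # pure power curve, for illustration only
MAX16 = 65535

shadow = 0.001    # a deep-shadow linear intensity, roughly 10 stops down

codes_linear = int(shadow * MAX16)                 # linear encoding
codes_gamma  = int(shadow ** (1 / GAMMA) * MAX16)  # gamma encoding

print(codes_linear)  # 65 code values below this shadow level
print(codes_gamma)   # ~2836 code values -- far finer shadow steps
```

So if Zerene Stacker really does convert a perceptually uniform export back to linear internally, the round trip through 16-bit integers is where any shadow quantization would sneak in.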
I’ve never done focus stacking - is this something that’s best done on perceptually uniform RGB? Or should it be done on linear RGB?
I’ll save an outline of what camera input profiles are good for until a later post.