I was searching for a way to use my custom camera profile as the working and output color profile in RawTherapee, when I found this thread: Restriction on user chosen output profile in RawTherapee
And in one of your answers, you said:
“Generally speaking, camera input ICC profile color spaces are horrible for image editing”
“Even if you make and use a well-behaved custom camera simple linear gamma input ICC profile … these color spaces (are) not suitable for general editing.”
Ok. Now I’m officially confused…
I should start by explaining what I do with my pictures, and why (I think) I need well behaved camera profiles used as working color spaces throughout my image processing and editing workflow: my main concern is focus stacking of macro images, because it involves a huge amount of mathematical processing of pixel color values.
So I proceed this way:
1. take, say, 200 pictures of a subject, with constant light intensity, aperture and speed (so the only change is the focus depth)
2. load them in my raw processor of choice (RawTherapee)
3. make sure the input profile is the one I custom made for my camera, by following your instructions in Well Behaved Camera Profiles
4. check the working profile of RawTherapee (and here is why I was reading that thread in this forum), and select what I think is the “best” available profile (although it is not linear)
5. select the output profile as what I think will be the “best” available profile
6. perform some tweaks on my images (all of them receive the same processing): demosaicing algorithm (amaze), chromatic aberration correction, dead/hot pixel removal, custom white balance (targeting a picture of the gray patch of a ColorChecker card; see the sketch after this list), slight wavelet denoising, slight wavelet contrast enhancement, slight sharpening (usually by unsharp mask) and defringing. If I have to correct exposure, I usually don’t change anything but the “exposure compensation” slider. Not too much processing, but either way not purely “scene-referred”
7. export to tiff, 16 bit
8. process them in Zerene Stacker. This software allows you to process images with their own profile (the profile embedded in the images). This processing involves a huge amount of mathematical calculations based on pixel color values (it resizes images, detects which parts of each image are in focus, then blends the focused parts together)
9. retouch the resulting stacked image within Zerene (correcting mistakes made by the algorithms, as in Introduction to Retouching)
10. save the final tiff, and finally process it in an image editor for final touch-ups
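To make it concrete, this is roughly what I understand the gray-patch white balance in step 6 to be doing (a minimal numpy sketch, not RawTherapee’s actual code; the image and the patch coordinates are made up):

```python
import numpy as np

# Hypothetical linear RGB image, float64, shape (height, width, 3).
rng = np.random.default_rng(0)
img = rng.random((600, 900, 3))

# Made-up coordinates of the ColorChecker gray patch in the frame.
y0, y1, x0, x1 = 300, 340, 450, 500
patch = img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)  # average R, G, B

# Scale R and B so the patch becomes neutral (R == G == B),
# using green as the reference channel.
gains = patch[1] / patch          # (G/R, 1, G/B)
balanced = img * gains            # apply per-channel multipliers

print("patch before:", patch)
print("patch after: ", balanced[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0))
```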
As I understand it, working color spaces are just well behaved color spaces, and those are just subsets of reference color spaces. So given a color in a reference space, different working spaces will point to it with different sets of coordinates, but the color itself will always remain the same. E.g.: magenta is (255,0,255) in sRGB, while it is (219,0,250) in AdobeRGB. And the same should happen with a well behaved camera profile used as a working color space: different coordinates, but the same color.
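Those magenta numbers can be reproduced with the published sRGB and AdobeRGB (1998) matrices; here is a quick numpy check (both spaces share the D65 white point, so no chromatic adaptation is needed):

```python
import numpy as np

# Published RGB -> XYZ (D65) matrices.
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
ADOBE_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                         [0.2974, 0.6273, 0.0753],
                         [0.0270, 0.0707, 0.9911]])

def srgb_decode(v):
    """sRGB gamma -> linear."""
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

magenta = np.array([255, 0, 255]) / 255.0
xyz = SRGB_TO_XYZ @ srgb_decode(magenta)
adobe_linear = np.linalg.solve(ADOBE_TO_XYZ, xyz)        # XYZ -> linear AdobeRGB
adobe = np.clip(adobe_linear, 0, 1) ** (1 / 2.19921875)  # AdobeRGB gamma

print(np.round(adobe * 255).astype(int))  # -> [219   0 250]
```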
If this is right (although maybe not technically correct), I also understand that translations between working spaces won’t be perfect, resulting in colors that are not exactly the same (a problem caused by “quantization errors”). So, wouldn’t it be ideal if I could use the same coordinates from start to finish throughout the whole process?
My thinking is that pixel RGB values (coordinates) are just data that mathematical formulas will use to produce a result, whether it is a real color or not. If we work in a perceptually uniform color space and there are no translations, there will be no quantization errors, and thus more precision in the resulting color. Then the display profile will work its wonders to show me a real image on screen, without any color outside the display’s gamut. And I will be able to judge whether the result is good enough or not.
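This is the sort of precision loss I mean. A toy numpy illustration (made-up values): eleven distinct linear values survive a 16-bit encoding intact, but collapse to just two codes at 8 bits, and every extra conversion adds rounding like this:

```python
import numpy as np

# Eleven distinct linear values between two very close levels.
ramp = np.linspace(0.500, 0.504, 11)

print(np.unique(np.round(ramp * 255)).size)    # 2  distinct 8-bit codes
print(np.unique(np.round(ramp * 65535)).size)  # 11 distinct 16-bit codes
print(np.abs(np.round(ramp * 255) / 255 - ramp).max())  # worst 8-bit error
```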
I don’t mind if there are non-real colors somewhere while processing images, but I do care about color values near the boundaries of “standard” color spaces. That is, if all that mathematical processing while stacking keeps moving color values away from their original locations, they may end up inside or outside the limits of the final color space, but there will still be mathematical detail that the program can use to detect image details.
I have post-processing tools to recover that information to some extent (forcing some of the nearest outside values to fall back inside the working space), and given that the stacking algorithms tend to be quite picky when deciding what is in focus and what is not, losing detail just because colors get clipped while translating between working color spaces feels less than ideal.
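A toy example of the detail loss I worry about (made-up values; assume a conversion has pushed two neighboring pixels past the destination gamut, and they get clipped):

```python
import numpy as np

# Two neighboring pixels that a wide-gamut -> narrow-gamut conversion
# has pushed past the destination gamut boundary (made-up values).
a = np.array([1.03, 0.20, -0.02])
b = np.array([1.08, 0.20, -0.06])

clipped_a = np.clip(a, 0.0, 1.0)
clipped_b = np.clip(b, 0.0, 1.0)

print(np.array_equal(clipped_a, clipped_b))  # True: the difference is gone
print(np.abs(a - b).max())                   # 0.05 of real detail, lost
```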
Zerene works with tiff images, so unbounded profiles are currently not an option in RawTherapee, as we can’t save floating-point images.
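In one line: a 16-bit integer TIFF can only hold codes 0–65535, so out-of-range values must be clamped at save time, whereas a floating-point format could carry them through (a sketch, not RawTherapee’s actual export code):

```python
import numpy as np

x = np.array([-0.02, 0.5, 1.37])           # unbounded linear values

as_uint16 = np.round(np.clip(x, 0, 1) * 65535).astype(np.uint16)
as_float  = x.astype(np.float32)           # what a float TIFF could store

print(as_uint16)  # [    0 32768 65535] -- out-of-range values destroyed
print(as_float)   # [-0.02  0.5   1.37] -- preserved
```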
Thus, as detail is essential in focus stacking, I thought it would be good to use my well behaved custom camera profile as the working profile through the entire process, and after all post-processing, convert the final image to sRGB.
What am I missing?
Another concern I have now is how the “CIE Color Appearance Model 2002” settings will degrade the color fidelity I get from my custom color profile if applied before stacking the images (step 6), as it may “beautify” my images and deviate them from “scene-referred”. Or doesn’t it work like that?
Any “for dummies” help will be highly appreciated.
As a side note: does Bruce Lindbloom’s Uniform Perceptual Lab, and the “blue turns purple” issue described at http://www.brucelindbloom.com/UPLab.html, have something to do with your problems with that Canon camera profile?