@ggbutcher - Hi Glenn. I read through your GitHub rawproc page, and rawproc sounds like a very nice raw processor. My name seems to have been invoked several times in this thread, so I thought I would chime in. The points below might be relevant to the general questions I think you’re asking:
I saw a reference to “dark” in the GitHub rawproc write-up, and I think also above. If the image looks darker than it should, this almost certainly indicates a color management issue, which should be dealt with by assigning the correct ICC profile (if that’s the problem), or else by finding and fixing whatever “calibrate/profile/color-manage” issues there might be in the workflow and display chain.
If the monitor isn’t calibrated and profiled (both require a color measuring instrument), then the monitor isn’t showing correct colors (unless the monitor has a decent sRGB preset and hasn’t drifted over time since it left the factory, in which case you can get by with using sRGB as the monitor profile).
I’m assuming rawproc uses LCMS color management to send the image to the screen. That is, LCMS converts the image RGB values from the image ICC profile to the screen ICC profile, and then the converted RGB values are sent to the screen. So there is absolutely no reason to worry about matching ICC profile “gamma” values; LCMS takes care of this. Some relevant notes:
- If the monitor’s color gamut is significantly smaller than sRGB, the monitor can’t show correct colors even for sRGB images.
- For images in other color spaces, the monitor can’t show correct colors for any image colors that fall outside the monitor’s color gamut (as defined by the monitor profile, which underscores the need for a good monitor profile). “Perceptual intent” doesn’t mitigate this problem, even if your monitor profile actually does have a perceptual intent table. Matrix monitor profiles (the most common case) don’t have perceptual intent tables, in which case asking for “perceptual intent” actually gets you relative colorimetric intent, which simply clips all out-of-gamut channel values.
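To make the clipping behavior concrete, here’s a minimal pure-Python sketch (not LCMS itself — just the matrix math a matrix-profile relative colorimetric conversion amounts to). The matrices are the standard linear-light RGB-to-XYZ matrices for Rec.2020 and sRGB (both D65); an sRGB monitor stands in for a display whose gamut is smaller than the image gamut. Real display paths would also apply each profile’s TRC; this sketch works entirely in linear light.

```python
# Illustration of why relative colorimetric intent with a matrix monitor
# profile simply clips out-of-gamut channel values.  All values are
# linear-light; matrices are the standard D65 Rec.2020 and sRGB matrices.

REC2020_TO_XYZ = [
    [0.636958, 0.144617, 0.168881],
    [0.262700, 0.677998, 0.059302],
    [0.000000, 0.028073, 1.060985],
]
XYZ_TO_SRGB = [
    [ 3.240970, -1.537383, -0.498611],
    [-0.969244,  1.875968,  0.041555],
    [ 0.055630, -0.203977,  1.056972],
]

def mat_vec(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def rec2020_to_srgb_relcol(rgb):
    """Matrix conversion followed by the per-channel clip that relative
    colorimetric rendering through matrix profiles effectively applies."""
    srgb = mat_vec(XYZ_TO_SRGB, mat_vec(REC2020_TO_XYZ, rgb))
    clipped = [min(1.0, max(0.0, c)) for c in srgb]
    return srgb, clipped

# Pure Rec.2020 red is far outside the sRGB (monitor) gamut:
unclipped, clipped = rec2020_to_srgb_relcol([1.0, 0.0, 0.0])
print(unclipped)  # R > 1, G and B < 0: out of the monitor's gamut
print(clipped)    # clips to pure sRGB red; the color's actual hue/chroma is lost
```

In-gamut colors (grays, for instance) pass through unchanged; only the out-of-gamut channel values get chopped, which is why heavily saturated colors flatten out on a small-gamut display.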
For the chain from camera raw file to interpolated image file, I would recommend the following (but take my recommendations with a grain of salt, or perhaps the whole salt shaker):
- Use the equivalent of “dcraw -4 -T -o 0” to output a linear gamma, “raw” color file (-4 for 16-bit linear output, -T for TIFF, -o 0 for the camera’s “raw” color, i.e. no conversion to an output color space).
- Use LCMS to assign an appropriate camera input profile. Where to get camera input matrix profiles:
- Make your own camera input profile.
- Or use PhotoFlow (requires the linear gamma branch?) plus ArgyllCMS extracticc to extract the camera matrix profile for your specific camera and save it to disk.
- Or use the dcraw adobe_coeff table to construct the actual ICC profiles.
- Or use the augmented matrix profiles available from various free/libre raw processors.
- Use LCMS to convert from the assigned camera input profile to the chosen RGB working space ICC profile (I recommend LCMS here because over the years I’ve encountered several issues with dcraw’s own color management). My recommendation is to use Rec.2020 or ACEScg as the large-gamut RGB working space:
- Both have fewer imaginary colors than ProPhotoRGB.
- Both seem to produce “nicer than ProPhotoRGB” results for chromaticity-dependent editing operations (anything involving multiplication or division by a non-gray color; anything that involves using individual channel data, including non-luminance-based conversions to black and white).
- At this point you have the digital equivalent of a “flat print”. Except that it’s actually flatter than a “wet” darkroom black and white “straight print”, because film and paper have shoulder and toe rolloff and digital does not (unless your camera settings involved shadow lifting, highlight compression, or similar algorithms).
- For actually editing the “flat print” to suit one’s artistic intent for the image, use two versions of the chosen RGB working space profile, with the same chromaticities but different TRCs: one version with a linear TRC and one with a perceptually uniform TRC. Depending on the editing algorithm and the artistic intent, use LCMS to convert the layer or the entire layer stack between the two profiles as desired, or else code the two TRCs directly with some way to flip back and forth between them. In my opinion, these are the only two TRCs that are technically relevant to high bit depth photographic editing:
- linear gamma (gamma=1.0), for proper color mixing (most layer blend modes including Normal, blurring, downscaling, Levels, Curves, etc.).
- the LAB companding curve (perceptually uniform), for things like most “find edges” algorithms, the Soft Light blend mode, adding uniform RGB noise, and the like.
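To make the second TRC concrete, here’s a minimal pure-Python sketch of the CIE L* (“LAB companding”) curve, using the constants from the CIE LAB definition. The linear TRC is just the identity, so only the L* encode/decode pair is shown; a processor flipping a layer between the two profile variants would apply these per channel.

```python
# The CIE L* companding curve: encode linear-light channel values into a
# perceptually uniform 0..1 range and back.  Constants are from the CIE LAB
# definition; the two branches meet continuously at the crossover point.

EPSILON = 216.0 / 24389.0   # (6/29)^3: linear/cubic crossover in linear light
KAPPA = 24389.0 / 27.0      # slope of the linear toe segment (times 100)

def linear_to_lstar(v):
    """Encode a linear-light channel value (0..1) with the L* curve (0..1)."""
    if v <= EPSILON:
        lstar = KAPPA * v                       # linear toe near black
    else:
        lstar = 116.0 * v ** (1.0 / 3.0) - 16.0  # cube-root segment
    return lstar / 100.0

def lstar_to_linear(v):
    """Decode an L*-encoded channel value (0..1) back to linear light."""
    lstar = v * 100.0
    if lstar <= KAPPA * EPSILON:                 # i.e. lstar <= 8.0
        return lstar / KAPPA
    return ((lstar + 16.0) / 116.0) ** 3

# 18% linear-light gray lands near the perceptual midpoint:
print(linear_to_lstar(0.18))  # ~0.495
```

This is why the L* encoding is the natural companion TRC: operations that should treat “visually equal steps” equally (edge detection, uniform noise, Soft Light) see mid-gray near 0.5 instead of down at 0.18.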
Regarding ProPhotoRGB:
- The gamma=1.8 TRC was chosen by Kodak because it’s closer to linear than gamma=2.2, which somewhat mitigates the gamma artifacts from processing non-linearly encoded RGB images. ProPhotoRGB is also a very large color space, which further helps hide gamma artifacts. But it’s better to just use a linear gamma RGB working space and avoid gamma artifacts altogether.
- The ProPhotoRGB primaries are better primaries for general editing than the sRGB primaries, for various reasons covered in the Kodak documentation (http://photo-lovers.org/pdf/color/romm.pdf). But they aren’t as good as some other editing spaces that have been promulgated since ProPhotoRGB was released. http://colour-science.org/posts/about-rendering-engines-colourspaces-agnosticism/ “ranks” various color spaces. But I’m also speaking from my own testing, and I encourage other people to make their own tests.
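The gamma-artifact point above can be shown with a tiny experiment: average two pixel values in gamma-encoded space and compare against the physically correct linear-light average. This sketch uses simple power-law TRCs (the real ProPhotoRGB TRC also has a small linear segment near black), so take it as an illustration, not an exact ProPhotoRGB computation.

```python
# Averaging (blending, blurring, scaling) in gamma-encoded space darkens the
# result relative to the correct linear-light average.  The closer the TRC is
# to linear, the smaller the error -- hence gamma=1.8 artifacts are milder
# than gamma=2.2, and linear (gamma=1.0) has none.

def blend_error(gamma, a_lin=0.0, b_lin=1.0):
    """Error from averaging two linear-light values in gamma-encoded space."""
    correct = (a_lin + b_lin) / 2.0                       # average linear light
    enc = (a_lin ** (1 / gamma) + b_lin ** (1 / gamma)) / 2  # average encoded
    wrong = enc ** gamma                                  # decode the average
    return correct - wrong

for g in (1.0, 1.8, 2.2):
    print(f"gamma={g}: blend error {blend_error(g):.3f}")
```

Blending pure black and pure white should land at 50% linear light; in gamma=2.2 encoding the blend decodes to roughly 22% instead, and in gamma=1.8 to roughly 29% — smaller, but still wrong, which is the argument for editing on linear data.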