Camera calibration: unexpected result when applying resulting ICC to reference TIFF

I made my own camera profile using a measured IT8 target, scanin (from ArgyllCMS) and DCamProf.

Besides the DCP, I also made an ICC and embedded it into the reference TIFF (the one used by scanin) in order to evaluate the ICC. It should have the same effect on the linear reference TIFF as the DCP has on the original DNG file.

While the DCP works perfectly in Lightroom or RT as a DNG input color profile, I would expect the TIFF with the embedded ICC to render very similarly to the DNG converted to the working space (as shown in Lightroom or RT). Instead it is merely brighter, yet still as greenish as the original reference file. Shouldn’t it render correctly once the resulting ICC profile is embedded?

Commands:

# Match the target shot against the chart layout and reference values -> targetcapture.ti3
scanin -v -p -dipn targetcapture.tiff it8.cht values.txt
# Build the camera profile from the matched patches
dcamprof make-profile -i D65 -I D50 -n 'Nexus 5X-LGE-google' targetcapture.ti3 'profile.json'
# Generate a DCP for use as a raw converter input profile
dcamprof make-dcp -t acr -n 'Nexus 5X-LGE-google' -d "profile" 'profile.json' 'profile.dcp'
# Generate an ICC from the same profile data
dcamprof make-icc -t acr -n 'Nexus 5X-LGE-google profile' 'profile.json' 'profile.icc'
# Embed the ICC into the target capture TIFF (exiftool keeps a _original backup)
exiftool targetcapture.tiff "-icc_profile<=profile.icc"
# Now compare targetcapture.tiff to the target shown with input profile "profile.dcp"

In order to make an ICC, you need a white-balanced reference, IIRC. That’s probably the problem.

You are right: an ICC operates on white-balanced data (as do most parts of a DCP). The make-icc call must be corrected:

dcamprof make-icc -t acr -W -n 'Nexus 5X-LGE-google profile' 'profile.json' 'profile.icc'
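
To illustrate why -W matters: an ICC matrix profile bakes in a single fixed transform, so it can only map a neutral patch to neutral if the data has already been white balanced. A minimal numpy sketch; the matrix and the raw white values below are made-up placeholders, not from any real profile:

import numpy as np

# Hypothetical fixed profile matrix (camera RGB -> XYZ); rows sum to the
# D50 white point, so equal-RGB input maps to neutral. Illustration only.
M = np.array([[0.70, 0.20, 0.0642],
              [0.25, 0.65, 0.10],
              [0.00, 0.05, 0.7749]])
raw_white = np.array([0.5, 1.0, 0.6])  # raw RGB of a white patch (greenish)

print(M @ raw_white)                # cast survives: (0.59, 0.84, 0.51), not neutral
print(M @ (raw_white / raw_white))  # WB first: (0.9642, 1.0, 0.8249) = D50 white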

The resulting colors match now, but the tones are a bit darker than with DNG+DCP.

If you intend the camera profile to represent the white-balanced raw, yes. Not what Mother Nature intended, but I’ve made .icc camera profiles from uncorrected raw target shots, eliminated the white-balance correction in raw processing, and gotten acceptable RGBs, with what seemed to be increased saturation (or maybe less-decreased saturation, from not having to do the WB multiplier slewing). A comparative example is here:

I now understand one wouldn’t want to feed non-WB-corrected data to demosaic, but I need to study that further…

> I now understand one wouldn’t want to feed non-WB-corrected data to demosaic, but I need to study that further…

For DNG, you may be wrong: see the DNG workflow here: demosaic, calculate the color temperature (if AWB), derive the forward matrix, then convert from the camera’s native RGB to CIE XYZ(D50), … Or you may be right (Android camera2). It may depend on the software you use. I think it makes more sense to demosaic after WB correction.
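
To make that order concrete, here is a minimal numpy sketch of the DNG-style conversion; the ForwardMatrix and AsShotNeutral numbers below are placeholders, not from a real camera:

import numpy as np

forward_matrix = np.array([[0.70, 0.20, 0.0642],  # WB'd camera RGB -> XYZ(D50)
                           [0.25, 0.65, 0.10],
                           [0.00, 0.05, 0.7749]])
as_shot_neutral = np.array([0.5, 1.0, 0.6])       # raw RGB of the scene white

cam_rgb = np.array([0.2, 0.3, 0.25])              # one demosaiced raw pixel

wb_rgb = cam_rgb / as_shot_neutral                # 1. white balance in camera space
xyz_d50 = forward_matrix @ wb_rgb                 # 2. forward matrix into XYZ(D50)
print(xyz_d50)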

I have just accepted the fact that it works this way. At first I thought the profile should not care about WB and simply transform colors independently of the light source with the D50 profile, and then apply white balancing in XYZ or Lab space. But since a camera’s and our eyes’ trichromaticities never perfectly match, the existing workflow is beneficial.
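
The non-equivalence is easy to check: per-channel WB gains applied in camera space correspond to M d M^-1 on the XYZ side, which is not diagonal unless M is, so no per-channel scaling in XYZ can reproduce them. A quick numpy check with placeholder numbers:

import numpy as np

M = np.array([[0.70, 0.20, 0.0642],   # hypothetical camera -> XYZ matrix
              [0.25, 0.65, 0.10],
              [0.00, 0.05, 0.7749]])
d = np.diag([2.0, 1.0, 1.6])          # WB gains in camera space

# WB before the matrix is equivalent to M d M^-1 after it; the non-zero
# off-diagonal terms show it is not a per-channel scaling in XYZ:
print(np.round(M @ d @ np.linalg.inv(M), 3))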

One thing that’s often missed with regard to the camera color primaries is that they are anchored to a particular white point. David Coffin, the fellow who wrote dcraw, established the D65 white point convention for the camera primaries he collected in dcraw.c, and a lot of software continues to honor the tradition in their collections. Even Adobe dual-illuminant DCPs tend to use a D65 white point for their upper-bound colormatrix (thanks @Morgan_Hardwood for pointing that out to me…) In my experimentation, I just chose to let the uncorrected media white point anchor my primaries, danged be if it didn’t work…

Me, bear-of-little-brain here, just surmised that if one has to pull the camera colors around to conform to some working or export/display profile anyway, why not start with what the camera thought was white and combine the two operations? I’m a fan of ‘the fewer operations, the better’… :smile:

Cheers!

> I’m a fan of ‘the fewer operations, the better’… :smile:

My raw files tell me the opposite :star_struck:


Well okay, you do need a few essential ones… smarty-pants… :smile:

The one that drives me nuts is the movie folks’ log transform. So, you lift the raw data with a log curve, the image now looks washed out, then you turn right around and apply an S-curve to restore a decent amount of contrast? If I were a shadow pixel I’d be rummaging for a barf bag…

This is primarily just to allow the encoders to work with fewer bits yet still encode a decent amount of dynamic range.

Recording raw sensor data for video can be difficult (too much data), but recording logarithmic data that has been demosaiced is a compromise that gives some of the benefits of a raw workflow without the difficulties of having raw sensor data. It has the side benefit of frequently playing nice with existing video codecs with little to no modification.
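
As a toy illustration of the bit-savings argument (this is not any real camera’s log curve; BITS and STOPS are arbitrary): log-encode linear scene values before quantizing to 10 bits, and the relative quantization error stays roughly constant per stop instead of exploding in the shadows:

import numpy as np

BITS, STOPS = 10, 14                  # arbitrary toy parameters

def log_encode(x):
    # map linear [2**-STOPS, 1] logarithmically onto [0, 1]
    return (np.log2(np.clip(x, 2.0**-STOPS, 1.0)) + STOPS) / STOPS

def log_decode(y):
    return 2.0 ** (y * STOPS - STOPS)

x = 2.0 ** -np.arange(0.0, 13.0, 2.0)           # one sample every two stops
code = np.round(log_encode(x) * (2**BITS - 1))  # 10-bit code values
x2 = log_decode(code / (2**BITS - 1))
print(np.round((x2 - x) / x, 4))                # relative error per stop: ~constant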
