How do I use dcamprof to color match to the color target values?

I use a Canon EOS R100.

Sadly, no joy. A rather new camera, not much opportunity yet for some crazy spectral data nut like me to get hold of one to measure.

A bit dodgy, but the data from another related Canon camera (24MP sensor… ??) could be used. The manufacturers tend to specify their CFA filters consistently; I’ve used my Nikon D7000 data to make a profile for my Z 6 and it worked okay. Really though, it’s best to measure the specific camera model.

If you’re interested in a close deltaE, spectral data is the way to go. There are other benefits: no fiddly target shooting, you can make profiles for different illuminants with the same data, and you can use various training datasets to make profiles tailored to specific shooting situations, e.g., profiles trained with the Lippman 2000 skin tone dataset for portraits. An optical lab setup with a monochromator is the gold standard for measurement, but costly to assemble if you can’t borrow one (local university?). I measured my cameras with a simple spectroscope setup (a wooden box with a diffraction grating on one end and a slit on the other, about $130US investment) and I was able to make profiles within about 0.1 max dE of monochromator-based profiles.

Cogitating after the last post, I decided to retrieve and post the dE TIFFs for my D7000. First, the monochromator profile based on the rawtoaces project data:

patch-errors-DE_rawtoaces.tif (2.5 MB)

And, the spectroscope profile I measured with my wooden box:

patch-errors-DE_spectroscope.tif (2.5 MB)

I don’t know much about monochromator profiles. How are they different from the color target + color matching method?

I don’t know if it’s relevant, but I have tried the ColorMunki Photo and Spectro 1 spectrophotometers, and they both have problems with brighter/saturated colors like pink and orange in my color chips.

I didn’t know jack about it about 4 years ago, then started working on an image where blue LED lighting was horribly posterized…

Long story short, simple matrix profiles based on a color target shot don’t usually handle extreme hues well; there’s no information to inform pulling them into gamuts like sRGB with some gradation intact. So I found out that you can make camera profiles with a LUT color transform replacing the matrix transform, and that LUT can be structured to do a better job with the extreme hues. But informing that LUT with a simple 24-patch target shot doesn’t do much better than the 3x3 matrix of primaries.
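For reference, the matrix transform being discussed here is just one 3x3 multiply per pixel from camera RGB into a connection space. A minimal sketch (the matrix values below are made up for illustration; a real profile derives them from target or spectral data):

```python
import numpy as np

# Hypothetical camera-to-XYZ matrix (illustrative values only).
cam_to_xyz = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.0, 0.1, 0.9],
])

# A tiny "pixel buffer" of linear camera RGB values, shape (N, 3).
cam_rgb = np.array([
    [0.8, 0.4, 0.1],   # an orange-ish patch
    [0.1, 0.2, 0.9],   # a deep blue, the kind a matrix handles poorly
])

# The entire color transform is this one matrix multiply per pixel.
xyz = cam_rgb @ cam_to_xyz.T
print(xyz)
```

A LUT-based profile replaces (or supplements) that single multiply with a table lookup, which is what lets it treat extreme hues differently from the well-behaved ones.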

Enter camera spectral data. Essentially, you measure the response of the camera’s color filter array across the range of wavelengths comprising visible light. Since that filter array has three “channels”, so-called red, green and blue corresponding to the long, medium, and short wavelength ranges of the visible spectrum, you’re measuring three values at, say, 5nm wavelength intervals between 380nm and 780nm. For my Nikon D7000, a lab measurement campaign by some smart people produced this data:

380,0.016100,0.032400,0.032200
385,0.012500,0.024700,0.027200
390,0.009000,0.017100,0.022100
395,0.007100,0.010000,0.016700
400,0.005200,0.002900,0.011200
405,0.004500,0.004500,0.019400
410,0.003800,0.006100,0.027600
415,0.024600,0.043100,0.237900
420,0.045400,0.080100,0.448300
425,0.052100,0.109800,0.598200
430,0.058700,0.139600,0.748000
435,0.055000,0.152200,0.791000
440,0.051200,0.164800,0.834000
445,0.044300,0.181000,0.873800
450,0.037400,0.197200,0.913600
455,0.035300,0.227500,0.933700
460,0.033300,0.257800,0.953700
465,0.036600,0.324000,0.942400
470,0.039900,0.390200,0.931000
475,0.041900,0.423600,0.897700
480,0.043900,0.457000,0.864400
485,0.042100,0.465400,0.801700
490,0.040300,0.473800,0.738900
495,0.041800,0.555100,0.619400
500,0.043400,0.636400,0.499900
505,0.049600,0.717700,0.417500
510,0.055700,0.798900,0.335100
515,0.070200,0.859500,0.278000
520,0.084700,0.920200,0.220900
525,0.096400,0.960100,0.188700
530,0.108100,1.000000,0.156500
535,0.084100,0.971300,0.127200
540,0.060100,0.942700,0.097900
545,0.047400,0.906800,0.079800
550,0.034600,0.871000,0.061700
555,0.036600,0.812000,0.045100
560,0.038600,0.753000,0.028400
565,0.071700,0.687100,0.022900
570,0.104800,0.621200,0.017300
575,0.254800,0.554300,0.014700
580,0.404900,0.487400,0.012000
585,0.570400,0.415500,0.010200
590,0.735900,0.343500,0.008300
595,0.720900,0.273000,0.006600
600,0.705800,0.202400,0.004900
605,0.648600,0.153100,0.004100
610,0.591400,0.103700,0.003200
615,0.538900,0.082300,0.003100
620,0.486400,0.060800,0.003000
625,0.439600,0.051600,0.003100
630,0.392900,0.042400,0.003200
635,0.358200,0.037800,0.003400
640,0.323600,0.033300,0.003600
645,0.281900,0.028100,0.004200
650,0.240200,0.022900,0.004700
655,0.209400,0.020500,0.004700
660,0.178600,0.018100,0.004800
665,0.138300,0.015300,0.004100
670,0.098100,0.012400,0.003400
675,0.064000,0.008800,0.002400
680,0.030000,0.005100,0.001400
685,0.018400,0.003300,0.001000
690,0.006800,0.001500,0.000700
695,0.004400,0.001300,0.000700
700,0.002000,0.001000,0.000700
705,0.001800,0.000800,0.000700
710,0.001600,0.000600,0.000600
715,0.001400,0.000600,0.000600
720,0.001200,0.000500,0.000600
725,0.001000,0.000500,0.000500
730,0.000900,0.000400,0.000500
735,0.000700,0.000300,0.000400
740,0.000600,0.000300,0.000300
745,0.000400,0.000200,0.000200
750,0.000200,0.000100,0.000100
755,0.000200,0.000100,0.000200
760,0.000200,0.000100,0.000200
765,0.000200,0.000100,0.000200
770,0.000200,0.000100,0.000200
775,0.000200,0.000200,0.000200
780,0.000200,0.000200,0.000200
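For what it’s worth, dcamprof wants this kind of data as a JSON SSF file rather than raw CSV. A rough conversion sketch, assuming the `camera_name`/`ssf_bands`/`red_ssf`/`green_ssf`/`blue_ssf` key layout from dcamprof’s data-examples (check those examples for the authoritative format):

```python
import json

# First few rows of the CSV above, as "wavelength,R,G,B" lines;
# in practice you'd read the full file from disk.
csv_text = """380,0.016100,0.032400,0.032200
385,0.012500,0.024700,0.027200
390,0.009000,0.017100,0.022100"""

bands, red, green, blue = [], [], [], []
for line in csv_text.splitlines():
    wl, r, g, b = (float(x) for x in line.split(","))
    bands.append(wl)
    red.append(r)
    green.append(g)
    blue.append(b)

# Key names assumed from dcamprof's data-examples SSF files.
ssf = {
    "camera_name": "Nikon D7000",
    "ssf_bands": bands,
    "red_ssf": red,
    "green_ssf": green,
    "blue_ssf": blue,
}
print(json.dumps(ssf, indent=2)[:60])
```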

Looks more interesting if you graph it:

[Graph: Nikon D7000 spectral sensitivity data plotted per channel]

The beauty of this data is that dcamprof can use it instead of a target shot, particularly to inform a better LUT for the color transform.

But doesn’t this still run into the limitation that dcamprof can only do the color transformation, with no RGB tone curve?

With the -t parameter in either make-icc or make-dcp, you can supply a JSON-formatted tone curve that’ll take the image data out of linear. Format for the JSON is illustrated in data-examples/tone-curve.json.

I’m looking for the program to do it automatically, like Silverfast or 3D LUT Creator. They color matched the color target image to the reference values of the target in both brightness and color.

Ah, got it. Out of my wheelhouse…

There is a commercial version of dcamprof… Lumariver Profile Designer. You can see much of the capability visually here… dcamprof should be able to do pretty much all of this, but you just need to master the command line…

You can see in Martin’s video example with Capture One that lightness and color are matched… you just have to know how the raw software you use applies any tone curve…

If you don’t pay for this version, well then of course you need to work out the command line workflow…

The second video was not matching the brightness to the color target; he did not create the curve using the color target. He was using the curve from one of Capture One’s profiles, and he was matching the brightness to what was shown in Capture One.

Now, manipulating the tone (“brightness”?) and the color are two distinct things. Which are you after?

If you’re using the current reference file workflow, these settings are ignored for the reference file.

I’ve got an open pull request for an alternate workflow that doesn’t use that code path and instead sets up an appropriate processing profile. Useful if you want to open the reference image in other software (GIMP to pull fiducial coordinates, Hugin to defish a fixed-lens fisheye camera).

Note that if you want to match exposure settings that’s getting into land I don’t mess with, at least not with dcamprof. All of the standard RawTherapee profiles are designed to not depend on exposure.

If you want to match the JPEG tone curve, you’re in a completely different land of response function reverse engineering.

It’s not really matching exposure, but a reconstruction of a linear curve using the color target. 3D LUT Creator did this after the profiling stage; I just want to see if I can do it during the profiling stage.

What camera do you have that has a nonlinear sensor response needing reconstruction? I remember seeing an article saying a couple of models (I can’t remember which ones) have their ADC saturation set to a point where the photosites saturate first, and those had a noticeable nonlinear shoulder before clipping. But the vast majority of cameras are highly linear up to the point of hard clipping, and any nonlinearities are going to be very minor and require a lot more than just a single ColorChecker shot to identify and characterize.
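If you want to sanity-check your own camera’s linearity, it takes a bracketed series rather than one chart shot: average a raw gray patch at several shutter speeds and fit against exposure time. A sketch with made-up patch means (substitute your own black-level-subtracted raw measurements):

```python
import numpy as np

# Hypothetical mean raw values of one gray patch from a bracketed
# series (same aperture/ISO, shutter time roughly doubling each shot).
shutter_s = np.array([1/500, 1/250, 1/125, 1/60, 1/30])
raw_mean = np.array([210.0, 421.0, 838.0, 1745.0, 3490.0])

# For a linear sensor, raw_mean is proportional to shutter_s.
slope, intercept = np.polyfit(shutter_s, raw_mean, 1)
predicted = slope * shutter_s + intercept
r2 = 1 - np.sum((raw_mean - predicted) ** 2) / np.sum((raw_mean - raw_mean.mean()) ** 2)
print(f"R^2 = {r2:.6f}")  # very close to 1.0 for a linear sensor
```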

I think I see where this is coming from…

See this video…

That only applies to video (and in practice, JPEGs too) since those routinely have nonstandard transfer functions applied, and is of no relevance to raw profiling via the procedure OP linked in their first post.

Using a ColorChecker will only roughly approximate the transfer function. If you want a more exact transfer function, you need to do an analysis of both the raw and JPEG shots from a camera with no distortion correction applied (RawTherapee’s AMTC is one such approach), with a decent gradient in the image so you get data at all luminance points. Or use a transfer function recovery algorithm such as Robertson’s algorithm (see OpenCV: High Dynamic Range (HDR) for some info on Robertson’s algorithm if you don’t want to read his paper; OpenCV, LuminanceHDR, and my own GitHub - Entropy512/camResponseTools: Miscellaneous tools for reverse engineering camera response curves are implementations of it). Note that both Robertson’s and Debevec’s algorithms tend to be inaccurate at estimating the response function near the clip point, because they’re intended to support HDR reconstruction and the clip point has zero weight in that phase.

Or just shoot RAW which is linear to begin with for the vast majority of cameras on the market.

I just pulled up my raw calibration shot that was used to profile my A7M4 in daylight. If I:

  1. Choose the Neutral profile
  2. Set white balance on the brightest white patch
  3. Adjust exposure compensation so that the brightest white patch has an L value of 96.3 when I hover over it, per https://xritephoto.com/documents/literature/en/ColorData-1p_EN.pdf

Edit: BTW, this is after generating the profile, so the DCP generated using the “standard RawTherapee method” is used. This will potentially adjust luminance based on hue/saturation, but is a purely linear function of input luminance, i.e. a “2.5D” LUT. The shot is, by definition, captured at the exact illuminant conditions that were used to generate the DCP, since it IS the shot I generated the Sunlight part of the DCP from.

Then I get the following L values for the patches, with the target values from that datasheet listed for comparison:

| Patch | Measured | Reference |
|-------|----------|-----------|
| 19    | 96.5     | 96.539    |
| 20    | 81.7     | 81.257    |
| 21    | 67.7     | 66.766    |
| 22    | 50.7     | 50.867    |
| 23    | 36.5     | 35.656    |
| 24    | 20.5     | 20.461    |
Basically within measurement error


I agree. I just think (and I could be wrong) that the OP drifted a couple of times into talking about 3D LUT Creator, and in that second video the presenter uses those words, “reconstruct the linear response” or something like that, so I figured that was the likely source of those comments.

How does the color compare to the target after adjusting the exposure?

Obviously if you adjust exposure away from what lands patch 19 at an L value of 96.5, none of the L values will match any more.