Question about DCP forward and color matrices

I converted a DCP camera profile of my Nikon D5200 to a readable format with dcpTool to learn about forward and color matrices (terms I first came across in Android’s camera2 API here and here).
As the Android documentation explains, the forward matrix is for transforming from sensor to XYZ space, and the color matrix for transforming from XYZ to sensor space.

What is the use of a re-transformation to sensor colorspace? I see that this color matrix even comes first in the XML that dcpTool produces from DCP files:

<dcpData>
  <ProfileName>Adobe Standard</ProfileName>
  <CalibrationIlluminant1>17</CalibrationIlluminant1>
  <CalibrationIlluminant2>21</CalibrationIlluminant2>
  <ColorMatrix1 Rows="3" Cols="3">
    <Element Row="2" Col="2">0.704100</Element>
    <Element Row="2" Col="1">0.075200</Element>
    <Element Row="2" Col="0">-0.036700</Element>
    <Element Row="1" Col="2">0.309800</Element>
    <Element Row="1" Col="1">1.260100</Element>
    <Element Row="1" Col="0">-0.534900</Element>
    <Element Row="0" Col="2">-0.046700</Element>
    <Element Row="0" Col="1">-0.374500</Element>
    <Element Row="0" Col="0">0.869300</Element> 111x... = 0.2977	0.9613	0.9672
  </ColorMatrix1>
  <ColorMatrix2 Rows="3" Cols="3">
    <Element Row="2" Col="2">0.639400</Element>
    <Element Row="2" Col="1">0.163800</Element>
    <Element Row="2" Col="0">-0.098800</Element>
    <Element Row="1" Col="2">0.217900</Element>
    <Element Row="1" Col="1">1.434200</Element>
    <Element Row="1" Col="0">-0.636700</Element>
    <Element Row="0" Col="2">-0.104700</Element>
    <Element Row="0" Col="1">-0.311200</Element>
    <Element Row="0" Col="0">0.832200</Element>
  </ColorMatrix2>
  <ForwardMatrix1 Rows="3" Cols="3">
    <Element Row="2" Col="2">1.015300</Element>
    <Element Row="2" Col="1">-0.216100</Element>
    <Element Row="2" Col="0">0.025900</Element>
    <Element Row="1" Col="2">-0.081200</Element>
    <Element Row="1" Col="1">0.804300</Element>
    <Element Row="1" Col="0">0.276800</Element>
    <Element Row="0" Col="2">0.096800</Element>
    <Element Row="0" Col="1">0.203800</Element>
    <Element Row="0" Col="0">0.663700</Element>
  </ForwardMatrix1>
  <ForwardMatrix2 Rows="3" Cols="3">
    <Element Row="2" Col="2">0.936800</Element>
    <Element Row="2" Col="1">-0.126600</Element>
    <Element Row="2" Col="0">0.014900</Element>
    <Element Row="1" Col="2">-0.163900</Element>
    <Element Row="1" Col="1">0.921500</Element>
    <Element Row="1" Col="0">0.242400</Element>
    <Element Row="0" Col="2">0.039400</Element>
    <Element Row="0" Col="1">0.371300</Element>
    <Element Row="0" Col="0">0.553600</Element>
  </ForwardMatrix2>
...

I wrote a test using mathjs to at least check whether the color matrix recovers the original sensor RGB:

const math = require('mathjs')

// ForwardMatrix1 and ColorMatrix1 from the profile above
// (both belong to CalibrationIlluminant1 = 17, Standard Light A)
const fwd = math.matrix([
  [0.6637, 0.2038, 0.0968],
  [0.2768, 0.8043, -0.0812],
  [0.0259, -0.2161, 1.0153]])
const invFwd = math.inv(fwd)
const color = math.matrix([
  [0.8693, -0.3745, -0.0467],
  [-0.5349, 1.2601, 0.3098],
  [-0.0367, 0.0752, 0.7041]])

console.log("Inverted fwd", invFwd.valueOf())

// start from a white-balanced sensor white and transform to XYZ
let sens = math.matrix([1, 1, 1])
const xyz = math.multiply(fwd, sens)

console.log("xyz", xyz.valueOf())

// back to sensor space with the inverted forward matrix
sens = math.multiply(invFwd, xyz)
console.log("sens2 with inverted fwd", sens.valueOf())

// back to sensor space with the color matrix
sens = math.multiply(color, xyz)
console.log("sens2 with color", sens.valueOf())

Result:

Inverted fwd [
  [ 1.7190528372458245, -0.4901558556027806, -0.20309757718934465 ],
  [ -0.6091286267429826, 1.444302076495557, 0.17358512723348757 ],
  [ -0.17350159039084548, 0.319914030720774, 1.0270579860576783 ]
]
xyz [ 0.9642999999999999, 0.9999, 0.8251000000000002 ]
sens2 with inverted fwd [ 0.9999999999999998, 0.9999999999999999, 1.0000000000000002 ]
sens2 with color [ 0.42527127, 0.9997859, 0.6207555800000001 ]

According to those definitions, I’d expect the inverted forward matrix to be the same as the color matrix (for the same illuminant), but this is not the case; the back-transformation to sensor space yields a different value.

Either Android’s definition or the DCP profile is wrong, or I have simply misunderstood something important.

This might help:

https://www.ludd.ltu.se/~torger/dcamprof.html#cm_and_fm

Or it might confuse you; it seems to be at odds with your interpretation of the Android doc. The page I pointed you to is about dcamprof, a command-line DCP tool that is quite a bit more flexible than dcpTool. The JSON it produces is a whole lot more readable than the XML, for instance.

I’m very confused now. Should I even care any more and just use the DCP profiles as given? No: I want to get the perfect TIFF from my RAW files for post-processing, and for this I need to understand how things work.
(The Android doc is confusing, and I should not intermingle terms from different worlds, although the semantics seem analogous. I’ll set it aside for now.)

I’d like to have a clear workflow diagram. I was able to find this document about the internal data workflow of RT 2.4, but I can find nothing about the internal workflow in the wiki, nor any version of this document for a more recent RT release.

I don’t get why the camera transform matrices provided by dcraw or RT are XYZ->CamRGB matrices (that is what dcraw’s raw-identify tells me, and it would fit the Android definition). So how does dcraw or RT compute the forward “Camera2RGB matrix” for the input transformation to XYZ/Lab? It’s neither the inverted XYZ->CamRGB matrix nor hardcoded in the source code.

Ah, you’re teetering on the edge of the ‘rabbit hole’. You want what I wanted three years ago, and it took a lot of study and experimentation to even understand what that means, and I’m still not where I want to be.

For what it’s worth, I’d recommend you put aside DCPs for a bit; they offer significant capability that is hard to comprehend straight-away.

I wrote a missive about this a few months ago, necking down the essentials of managing color in raw processing:

It’s ICC-oriented, but it’s there that the essential dynamic of managing color is best understood to start.

The camera primaries in dcraw’s adobe_coeff table and RT’s camconst.json are actually cam->XYZ matrices, anchored to a D65 white point. In dcraw, inverses are calculated and used for various purposes, but the essential transform that gets you to a pleasing rendition of your image is cam → XYZ → output, where output=sRGB|Adobe|ProPhoto|etc. You could use dcraw to deliver 16-bit TIFFs in linear ProPhoto, and they’d be quite nice for your purposes.
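
To make that chain concrete, here is a minimal mathjs sketch in the spirit of the test above. It reuses ForwardMatrix1 from the D5200 profile as a stand-in for a cam->XYZ primaries matrix and the standard D65 XYZ->sRGB matrix, and it deliberately skips the white-balance and D50->D65 adaptation steps a real pipeline would insert:

const math = require('mathjs')

// cam -> XYZ: ForwardMatrix1 from the D5200 profile above, standing in
// for the primaries matrix a raw converter would use
const camToXyz = math.matrix([
  [0.6637, 0.2038, 0.0968],
  [0.2768, 0.8043, -0.0812],
  [0.0259, -0.2161, 1.0153]])

// XYZ -> linear sRGB for a D65 white (the standard IEC 61966-2-1 matrix)
const xyzToSrgb = math.matrix([
  [3.2406, -1.5372, -0.4986],
  [-0.9689, 1.8758, 0.0415],
  [0.0557, -0.2040, 1.0570]])

// compose cam -> XYZ -> sRGB and push a white-balanced camera white through it
const camToSrgb = math.multiply(xyzToSrgb, camToXyz)
console.log(math.multiply(camToSrgb, [1, 1, 1]).valueOf())
// ~[1.18, 0.98, 0.72]: the overshoot shows the skipped D50->D65 adaptation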

Well, until you start to deal with extreme colors. Cameras capture light in gradations that produce colors well beyond our ability to comprehend, and that range eventually has to be crushed to something that can be accommodated by limited displays or printers. To keep some gradation in extreme colors, these transforms need to be augmented with behaviors that make more nuanced decisions about what lesser color each rich color gets transformed to. Anders Torger’s dcamprof page describes a lot of this quite well.

There’s one more reading I commend, one that’ll help both with understanding the essentials of color management as well as the more general endeavor of raw processing:

@Elle Stone used to post here, but her expansive writings live on to shape our heads in righteous ways…


Ah, you’re teetering on the edge of the ‘rabbit hole’. …

This is why I will keep DCPs aside and focus again on the results. Thanks for this article; it reflects the state of my knowledge pretty well.

The camera primaries in dcraw’s adobe_coeff table and RT’s camconst.json are actually cam->XYZ matrices, …

Then it’s a bug that raw-identify prints “XYZ->CamRGB matrix”. Probably it’s named for the order in which the matrix is multiplied.
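
That naming-by-multiplication-order guess is easy to illustrate: a matrix written for row vectors (out = v · M) holds the same numbers as its transpose written for column vectors (out = M^T · v). A mathjs sketch, using the D5200 forward matrix from above as example numbers:

const math = require('mathjs')

const M = math.matrix([
  [0.6637, 0.2038, 0.0968],
  [0.2768, 0.8043, -0.0812],
  [0.0259, -0.2161, 1.0153]])
const v = [1, 1, 1]

// column-vector convention: out = M * v
console.log(math.multiply(M, v).valueOf())
// row-vector convention with the transpose gives the same numbers: out = v * M^T
console.log(math.multiply(v, math.transpose(M)).valueOf())

(Notice, in the raw-identify output further down in this thread, that the “Camera2RGB matrix” and the “camRGB -> sRGB Matrix” are the same numbers, transposed.)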

You could use dcraw to deliver 16-bit TIFFs in linear ProPhoto, and they’d be quite nice for your purposes.

I have already tested this: the results are acceptable, but dcraw’s color transform is incomplete; I get results similar to deactivating the tone curves/LUTs for my profile in RT. So I piped the output to convert, and the results were really good, but still not as good as with RT. And I like to have lens correction included, which dcraw does not provide. I will use RT with a default profile for my camera, and the result will be slightly better than Lightroom’s. This is what I have found out so far …

I’m not a prisoner of a rabbit hole here :wink:

And also this:
https://www.ludd.ltu.se/~torger/dcamprof.html#dcp_white_balance

You can also use dcamprof:

./dcamprof dcp2json ~/dcp/"SONY ILCE-7M2 yellow filter.dcp" | less

Color management :joy: :sob:

You are indeed standing at the threshold of the event horizon, from which there is no return…

The only up-to-date info we have on the internal flow is:
http://rawpedia.rawtherapee.com/Toolchain_Pipeline

Maybe you’ll find something of interest here too:
http://rawpedia.rawtherapee.com/Color_Management/fr
http://rawpedia.rawtherapee.com/Color_Management_addon/fr

Good luck


There is also a detailed description of how the color matrices are used in the DNG spec; the maths is a bit more complicated than you initially assumed, due to white-point adaptation, I believe.
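
That white-point handling turns out to be the answer to the original inv(forward) ≠ color puzzle. In the DNG model (assuming AnalogBalance and CameraCalibration are identity matrices), ColorMatrix maps XYZ under the calibration illuminant to un-white-balanced camera space, while ForwardMatrix maps white-balanced camera values to XYZ scaled so that white lands on D50; a diagonal white-balance matrix sits between the two, so they are not inverses of each other. A mathjs sketch along the lines of the spec, using the D5200 matrices from above (the illuminant-A XYZ values are textbook numbers, not from the profile):

const math = require('mathjs')

// CM1/FM1 from the D5200 profile above (CalibrationIlluminant1 = 17, Standard Light A)
const CM = math.matrix([
  [0.8693, -0.3745, -0.0467],
  [-0.5349, 1.2601, 0.3098],
  [-0.0367, 0.0752, 0.7041]])
const FM = math.matrix([
  [0.6637, 0.2038, 0.0968],
  [0.2768, 0.8043, -0.0812],
  [0.0259, -0.2161, 1.0153]])

// XYZ of Standard Illuminant A (x = 0.4476, y = 0.4074), normalized to Y = 1
const xyzA = [1.0985, 1.0, 0.3558]

// ColorMatrix applied to the illuminant's XYZ gives the camera neutral,
// i.e. the un-white-balanced raw response to scene white
const camNeutral = math.multiply(CM, xyzA)

// white balance: divide each channel by the camera neutral
const D = math.inv(math.diag(camNeutral))

// ForwardMatrix maps *white-balanced* camera values to XYZ with a D50 white:
// FM * D * camNeutral = FM * [1,1,1] = the column sums of FM
console.log(math.multiply(FM, math.multiply(D, camNeutral)).valueOf())
// ~[0.9643, 0.9999, 0.8251], the D50 white point, and exactly the "xyz"
// value the test above printed for sens = [1, 1, 1]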


Yup, I was going to say - Android documentation is not what I would consider authoritative on this matter. The Adobe DNG spec is.


Yes, that’s because dcraw handles color and tone separately. -o in dcraw lets you choose from up to six alternatives for the output color transform, including doing nothing. -g in dcraw is the tone part, and you are limited to shaping a power curve.
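
For reference, -g takes two numbers, a power and a toe slope; the default “-g 2.222 4.5” corresponds to the BT.709 curve, a linear toe of slope 4.5 joined to a 1/2.222 power segment. A sketch of that shape, using the published BT.709 constants rather than dcraw’s internal knee computation:

// BT.709-style transfer curve, the shape behind dcraw's default "-g 2.222 4.5":
// a linear toe with slope 4.5, then a 1/2.222 power segment.
// (0.018, 1.099 and 0.099 are the published BT.709 constants; dcraw derives
// its knee from the two -g arguments, so treat this as the shape, not the code)
function bt709Encode(lin) {
  return lin < 0.018
    ? 4.5 * lin
    : 1.099 * Math.pow(lin, 1 / 2.222) - 0.099
}

// "-g 1 1" would be the identity: tones stay linear
for (const v of [0.0, 0.01, 0.018, 0.18, 1.0]) {
  console.log(v, '->', bt709Encode(v).toFixed(4))
}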

You really need to tease apart the behaviors of the color and tone transforms in order to understand what’s going on. Profiles can vex this when they contain the information for both. That’s why I really like @Elle Stone’s collection of profiles; she has all the usual colorspace suspects with different power-based tone curves, including “g10” versions where the TRC doesn’t change the tone. And consider that the ICC TRC is really there to manage non-linear response in devices. DCP TRCs are used for all sorts of nefariousness, including “looks”; geesh…

@Elle’s ICC profiles:
https://github.com/ellelstone/elles_icc_profiles

Thanks, I will have a look.

As far as I understand, sensor raw data values are always linear with respect to brightness/light energy. Minor deviations must be compensated by tone curves before transforming to another space with the matrix; those are input curves.
Output curves are included in profiles like sRGB and are applied to compensate for human vision. (Which leaves some room for interpretation.)

How does dcraw do the transformation?
It loads the raw camera RGB and, using my input profile, corrects the device’s curve deviations so that RGB becomes linear. But the color filters do not perfectly match the XYZ curves, so the data is transformed to XYZ with the profile’s matrix (I have also seen a profile with a table transform to Lab). Finally it transforms XYZ to the output space. Usually no additional tone curve is needed because the output profile already includes one, so dcraw will only transform to the output space with a matrix. But it does two things I don’t understand:

  1. It does not append the output profile. Exiftool won’t extract a profile from a TIFF generated with -o 4 (ProPhoto). It will only append the profile if I specify one as a file. Why?
  2. It applies its own gamma although the profile already has its own tone curve (for ProPhoto: gamma 1.801). To keep the profile as-is, it would have to apply the gamma as compensation relative to that profile. So with the default output gamma of 2.2, it must change the gamma of the values to reach that target. This means it mutates the values, which are then not linear any more.

Consequences:

  1. Use a file as output profile, so it is included, and
  2. Use an output profile that matches the -g value I set for the output, so that the RGB data is only minimally mutated. I have tested this with Elle’s LargeRGB-elle-V2-g22.icc, and the result is the same, so my guess about the changed values must be right.
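
Whatever dcraw actually does here (see the replies below), the hazard behind consequence 2 is easy to demonstrate: if the pixel data is encoded with one gamma but the embedded profile declares another, a color-managed reader decodes the values to the wrong linear light. A sketch with two made-up gammas:

// encode linear 18% grey with gamma 2.2, as if that's what the writer used
const encoded = Math.pow(0.18, 1 / 2.2)      // ~0.4587

// a reader honoring a profile that declares gamma 1.8 decodes it to:
const decodedWrong = Math.pow(encoded, 1.8)  // ~0.246, not 0.18
// only a declaration matching the encoding recovers the original:
const decodedRight = Math.pow(encoded, 2.2)  // 0.18

console.log(encoded, decodedWrong, decodedRight)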

Maybe the photosites respond linearly, but the values in a raw file aren’t always stored linearly. For example some Sony cameras have a non-linear raw compression curve. See “compression curve” in RawDigger:

If you’re referring to the sRGB gamma curve, it’s there to approximate CRT response and to improve compression quality based on tones we’re more likely to discern.

I haven’t tested whether this is the case, but what you describe makes sense. The only color space that programs are required to “know” is sRGB as defined by the Exif ColorSpace tag 0xA001: https://www.awaresystems.be/imaging/tiff/tifftags/privateifd/exif/colorspace.html
Any other colorspace requires embedding an ICC file. See also Embedded color space information

I’ve been through the dcraw code a few times, and I don’t recall seeing any such manipulation.

I just did this:

$ dcraw -o 4 -T -4 DSG_3111.NEF
$ exiftool -G DSG_3111.tiff

and found this in the output:

[ICC_Profile]   Profile CMM Type                : 
[ICC_Profile]   Profile Version                 : 2.1.0
[ICC_Profile]   Profile Class                   : Display Device Profile
[ICC_Profile]   Color Space Data                : RGB
[ICC_Profile]   Profile Connection Space        : XYZ
[ICC_Profile]   Profile Date Time               : 0000:00:00 00:00:00
[ICC_Profile]   Profile File Signature          : acsp
[ICC_Profile]   Primary Platform                : Unknown ()
[ICC_Profile]   CMM Flags                       : Not Embedded, Independent
[ICC_Profile]   Device Manufacturer             : none
[ICC_Profile]   Device Model                    : 
[ICC_Profile]   Device Attributes               : Reflective, Glossy, Positive, Color
[ICC_Profile]   Rendering Intent                : Perceptual
[ICC_Profile]   Connection Space Illuminant     : 0.9642 1 0.82491
[ICC_Profile]   Profile Creator                 : 
[ICC_Profile]   Profile ID                      : 0
[ICC_Profile]   Profile Copyright               : auto-generated by dcraw
[ICC_Profile]   Profile Description             : ProPhoto D65
[ICC_Profile]   Media White Point               : 0.95045 1 1.08905
[ICC_Profile]   Media Black Point               : 0 0 0
[ICC_Profile]   Red Tone Reproduction Curve     : (Binary data 14 bytes, use -b option to extract)
[ICC_Profile]   Green Tone Reproduction Curve   : (Binary data 14 bytes, use -b option to extract)
[ICC_Profile]   Blue Tone Reproduction Curve    : (Binary data 14 bytes, use -b option to extract)
[ICC_Profile]   Red Matrix Column               : 0.79774 0.28807 0
[ICC_Profile]   Green Matrix Column             : 0.13518 0.71187 3e-05
[ICC_Profile]   Blue Matrix Column              : 0.03131 6e-05 0.82503

This is the dcraw-constructed D65 ProPhoto ICC, embedded in the TIFF. The three Tone Reproduction Curves contained therein are based on the default for -g, 2.222 4.5.

That’s the thing: the embedded profile’s TRC is defined by whatever you do with -g. And dcraw will do the corresponding color and tone transform to make the output TIFF conform to that embedded profile.

To sum it up: for a TIFF, dcraw will construct and embed an ICC profile for whatever values are provided with -o and -g. The exception is -o 0, which leaves the color in the camera space and doesn’t embed a profile. -g 1 1 will preserve the original camera tone, except for that danged autobright, one of the reasons I abandoned dcraw processing in LibRaw. Back to dcraw: it will also apply the matrix and tone curve to the raw image, to make the output conform in both color and tone to the embedded profile.


BTW: I installed LibRaw (isn’t this the current dcraw?), and I use the dcraw_emu command.
I tested it again, and now it embeds the profile; I don’t know what caused this.

@Morgan_Hardwood RawDigger is really useful, especially when you fine-tune the histograms, which for me includes a logarithmic count scale. I captured many photos (slide digitizations) with slight overexposure, because the input profile made Lightroom’s overexposure warning too liberal. And I checked the white level of my DSLR, but I probably won’t publish the results, since lowering the limit does not lead to better highlights in RT. When compressing highlights it’s necessary to reconstruct them too, else I get the pink tone – no matter how low I set the white point.

@ggbutcher Thanks for these clarifications. I always use -W, which gives better results for (nearly) overexposed images.

I’ve been through the dcraw code a few times, and I don’t recall seeing any such manipulation.

I had the idea from Android – there is “a per-device calibration transform matrix that maps from the reference sensor colorspace to the actual device sensor colorspace”

On my Nexus 5X, this is a near identity matrix, as raw-identify -v tells me:

Camera2RGB matrix:
1.2852	-0.4611	0.1759
-0.1532	1.1341	0.0191
-0.0088	-0.3563	1.3650

XYZ->CamRGB matrix:
0.0000	0.0000	0.0000
0.0000	0.0000	0.0000
0.0000	0.0000	0.0000

DNG Illuminant 1: D65
DNG Illuminant 2: Illuminant A
camRGB -> sRGB Matrix:
1.2852	-0.1532	-0.0088
-0.4611	1.1341	-0.3563
0.1759	0.0191	1.3650

DNG color matrix 1:
0.8047	-0.2188	-0.1172
-0.3203	1.2656	0.0391
-0.0469	0.2266	0.4531

DNG color matrix 2:
1.0078	-0.2891	-0.2188
-0.5625	1.6328	-0.0469
-0.0703	0.2109	0.6328

DNG calibration matrix 1:
0.9922	0.0000	0.0000
0.0000	1.0000	0.0000
0.0000	0.0000	0.9922

DNG calibration matrix 2:
0.9922	0.0000	0.0000
0.0000	1.0000	0.0000
0.0000	0.0000	0.9844

DNG forward matrix 1:
0.5781	0.1562	-0.0156
0.2188	0.8438	-0.2891
0.1641	0.0000	1.1328

DNG forward matrix 2:
0.6875	0.2109	0.0000
0.0156	0.6797	-0.5391
0.2656	0.1016	1.3672

Strange that the XYZ->CamRGB is missing.

Interesting; it appears that, in order to accommodate the plethora of cell phone cameras, Android folk thought it prudent to map their colors back to a “reference camera space”, which I’d assume is what they’d use going forward as the “Camera Colorspace” in raw processing. @Entropy512, does that sound right? @Andi, note that it’s a color transform, not a tone transform.
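
For reference, the DNG spec composes these pieces as XYZtoCamera = AnalogBalance · CameraCalibration · ColorMatrix, so a near-identity calibration matrix barely perturbs the color matrix. A quick mathjs check with the Nexus 5X numbers above, taking AnalogBalance as identity:

const math = require('mathjs')

// DNG color matrix 1 and calibration matrix 1 from the Nexus 5X dump above
const CM1 = math.matrix([
  [0.8047, -0.2188, -0.1172],
  [-0.3203, 1.2656, 0.0391],
  [-0.0469, 0.2266, 0.4531]])
const CC1 = math.diag([0.9922, 1.0, 0.9922])

// DNG ordering: XYZtoCamera = AB * CC * CM, with AB taken as identity here
const xyzToCam = math.multiply(CC1, CM1)
console.log(xyzToCam.valueOf())
// rows 0 and 2 are scaled by 0.9922, i.e. the result is nearly CM1 itself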

I like to keep a copy of dcraw.c around; it appears you can get it from here:

https://www.dechifro.org/dcraw/

This is a recent move; David Coffin used to keep it on his website at cybercom.net, but that’s gone now.

The libraw folk test their releases to ensure they properly emulate dcraw, hence dcraw_emu. I use libraw in my software, and until recently I used its dcraw processing to get my starting RGB image, but dcraw has some hard-coded behavior, and libraw’s access to it can be unwieldy. So now, I just take the libraw RawProcessor.imgdata.rawdata.raw_image, which is a pointer to unsigned shorts of the raw data right out of the file, and I apply my own operations in any order I so desire. Learned a lot doing that, especially about the things you’re asking about now.

Actually that’s in the general DNG spec for ANY camera.

In theory it’s supposed to allow calibrating out serial-number-specific deltas in color performance from a “reference” camera of that model.

In practice, I’ve almost never seen it. And in fact I am wondering if you actually would see deltas between the data for two phones of the same model… It would surprise me that Google is being more careful about calibrating every single device than any major ILC camera manufacturer!

They could possibly be doing it on a batch-by-batch basis?

I know the color profile in Google phones does not appear to be stored in the phone itself, as merely updating the Google camera app from the Play Store without installing any firmware updates resulted in a significantly updated color profile in the DNGs saved by a Pixel 4 XL. Google Camera 7.1 saved DNGs with color profiles so broken that completely disabling all color matrix processing gave better results. Google Camera 7.2 and above, however, save a really nice dual-illuminant profile that actually works.

Only people who got Pixel 4 devices through preorders ever saw files saved by Camera 7.1 - Devices were supposed to arrive on October 24, but people (including myself) had the camera in hand on the 23rd. 7.2 was released as a Play Store app update on the 24th.


In the DNG spec it makes a bit more sense. A use case, maybe, if someone built a special camera that was an array of mosaic sensors…?

@Andi, note that it’s a color transform, not a tone transform.

This is clear.

… but dcraw has some hard-coded behavior, and libraw’s access to it can be unwieldy

Please clarify a bit: Are there serious flaws?

@Entropy512:

In theory it’s supposed to allow calibrating out serial-number-specific deltas in color performance from a “reference” camera of that model.

It’s indeed a per-device (not per-model) transformation, as the doc says clearly enough. It could be placed somewhere in the camera chip and set after manufacturing. It seems to be constant across all captures from this camera.

In practice, I’ve almost never seen it. And in fact I am wondering if you actually would see deltas between the data for two phones of the same model… It would surprise me that Google is being more careful about calibrating every single device than any major ILC camera manufacturer!

This is one of Google’s own devices, so they could test camera2 thoroughly. camera2 has a pretty rich feature set and is very complex; they really wanted to correct the idiotic faults of the first API. I suspect they use this device for testing because the sensor has an unusual rotation and some output ratios are intentionally different, to make the developers sweat :wink:
The color response of this device is astounding, but if the preview stays active for too long, the sensor beats you in the face with green and pink noise; goodbye, 12-bit range… And this is why RAW is not the most important feature of a smartphone camera: the sensor is. Even a Samsung Galaxy A3 (2014) (with optimized params) can make better JPEGs than the Nexus 5X. And a Sony Xperia X Compact (camera2 LIMITED), with some color noise removed (a must for any smartphone photo, JPEG or RAW), comes pretty close to an entry-level DSLR.

A camera profile that depends on the camera app? Don’t do this, Google!

No flaws, just couldn’t bend the program to my will. :smile:

I went digging to recall specifically what it was (dementia is a horrible thing…); I think it was control of the application of white balance through the libraw parameters. It just pushed me harder to do the thing I set out to do in the first place: when a raw is opened in rawproc, what you’re first looking at is the program’s attempt to display the image as read from the raw file: single-channel, 16 bits of 14-bit measurements. Then one applies tools one-by-one and watches the “nice” image gradually take shape…

Funny, early on this was my wonder and lament: why didn’t camera manufacturers deliver color primaries in the metadata? This poorly-named “smartphone” crap gives me pause to reconsider… :smile: