Deliberately difficult gamut colors in macro shot of hex nut

Regarding colorimetric accuracy of cameras, @JackH has a writeup well worth reading, linked to in the PS of this post: View/Modify color matrix in DCP files - #23 by JackH

His blog is well worth exploring…

I’m still a little lost on this point. Does the standard 3x3 matrix input profile (as opposed to a specifically made camera profile) yield the same space for all cameras, the way Rec. 2020 or sRGB is always the same, or does it somehow vary based on sensor data? If it is always the same, then there must be a difference between what the camera can capture and what the input profile puts out, since each camera has its own unique sensitivities and colour filters. But if it varies, then I understand.

I am further lost, because it is changing the input profile that worsens or fixes the highly saturated gradient issue (the fringing around the nut), not changing the working space.

For example, I tried a few experiments on the hex nut. All modules were turned off except WB, which used the same setting throughout. With the standard 3x3 matrix as input, comparing the working spaces Linear Rec. 709 and ACES AP0 (downloaded from Elle’s site), they produced exactly the same result (noticeable fringing), despite one being small gamut and the other wide gamut. The ACES AP0 gamut encompasses the whole visible spectrum, so I fail to see how the camera could have captured anything outside that gamut; therefore there should be no need to push colour back IN gamut with relative intent, and therefore no high-saturation gradient fringing. Yet the fringes remained.
For the next experiment I set ACES AP0 as the input profile, with the same two working spaces. Changing the input improved the gradients, but again the two working spaces produced exactly the same results.

From this I can only conclude that the fringing does not occur because the working space is smaller than the input profile, with OOG colours being pushed back in. The working space appears to have no effect; it is the input profile alone that affects the fringing. This is why I assumed there was a camera space that differed from the input profile, and that the fringing occurred when camera-space colours were OOG and therefore pushed back IN, to the input profile. But if this is not the case, then I am lost again.

It truly boggles the mind.

Thanks, I saw that yesterday. Am steadily strolling through it, half understanding. Being far more artist than scientist, not all the technical language seeps in immediately. But if you get the science right, the art improves, so it is worth it.

I seem to learn best by asking stupid questions. Thanks for the reply, it’s helping. I will play around with your xyY tool.

Probably @ggbutcher’s second post in this thread, where he used the alternative toolchain.


As any transform between different color spaces is an approximation, that seems like the way to go to lose the least information from the original data (same in spirit as the linear workflow: only do the non-linear transformation once, at the end). Are there tangible advantages to the additional detour through a working profile, or is this simply legacy behaviour?

I’m still sorting out this one. I think the answer is in the Elle Stone writings, particularly here: Well behaved working space profiles. It just hasn’t sunk in yet…

In the nut image, I’ve observed such a benefit, but when I compared working spaces, sRGB seemed to produce the best result. I’d have thought differently, but I’m now going to work on “thinking better”… :laughing:

There is no standard input profile; they’re all camera specific.

With respect to the color transform, all raw processors work this way:

  1. The ingested raw image is assigned a camera profile unique to that camera’s spectral capabilities. It may be a matrix, it may be a LUT, but its purpose is to describe how that camera’s data should be transformed to XYZ for the first color transform. This assignment doesn’t actually do anything; it just associates the profile with the image data.

  2. Then, somewhere down the pipe, the camera data is transformed into a defined colorspace. This may be done once for output, as in dcraw, or it may be done multiple times for different purposes. For example, in rawproc the way I’ve been doing it is to transform camera > display repeatedly during the editing process, then camera > output when I finally save the file. What most raw processors do is camera > working once, then working > display repeatedly during editing and working > output at file-save time.
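To make that chain concrete, here is a minimal numpy sketch of the two matrix hops (camera > XYZ, then XYZ > display/output). The camera matrix below is invented for illustration; a real one comes from the profile assigned in step 1. The XYZ > linear sRGB matrix is the published one.

import numpy as np

# Hypothetical camera -> XYZ(D65) matrix; a real one comes from the camera profile.
CAM_TO_XYZ = np.array([[0.60, 0.25, 0.10],
                       [0.25, 0.70, 0.05],
                       [0.05, 0.10, 0.90]])

# Published XYZ(D65) -> linear Rec.709/sRGB matrix.
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def camera_to_srgb_linear(cam_rgb):
    # Two hops: camera -> XYZ, then XYZ -> linear sRGB.
    return XYZ_TO_SRGB @ (CAM_TO_XYZ @ cam_rgb)

pixel = np.array([0.9, 0.1, 0.6])        # a saturated, white-balanced camera value
print(camera_to_srgb_linear(pixel))      # components outside 0..1 mean out of gamut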

Meant to post this last night; it might help: Article: Color Management in Raw Processing


If you haven’t already, go get Elle Stone’s Well-Behaved Profiles: GitHub - ellelstone/elles_icc_profiles: Elle Stone's Well-Behaved ICC Profiles and Code. You can download all the profiles from the repo’s profiles/ directory, but really the easiest way to get them is to do a ‘git clone’. Make sure you have git installed, then:

$ git clone https://github.com/ellelstone/elles_icc_profiles.git

She has all the working space favorites, all in different gammas, including g=1.0 which is equivalent to linear in ICC-world. I cloned her repo about two years ago, and that directory is now my profile “zoo”, where I put all my camera profiles as well as others I’ve tried…

Yes, I’m familiar with Elle’s work. I may have accidentally stumbled upon a way to test well-behaved profiles, which seemed to match Elle’s conclusions. I made a colour chart with a variety of hue circles - running from saturated colour to black, white and grey - and converted it to greyscale. Then I converted the greyscale image to each colour space. When you do a conversion, the colours are supposed to stay the same (therefore grey) while the numbers change, but when converting to a couple of spaces (ProPhoto and ACES 2065-1) a few of the shades turned bluish. This didn’t happen with well-behaved spaces like Rec. 2020 or ACEScg. (However, as we know, a shade will only appear grey when R=G=B, so perhaps my test didn’t really confirm the accuracy of a space, but the accuracy of a transform to those spaces?)

This page, which Elle links to, suggests the delta E between spaces is not significantly different (e.g. 2 for ACEScg and 2.5 for ACES 2065-1), yet the former is considered well behaved and not the latter. It has a lot more graphs and detail, which you might understand better than I. https://web.archive.org/web/20160412234235/http://colour-science.org/posts/about-rgb-colourspace-models-performance/
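For what it’s worth, here is a rough sketch of that kind of neutrality test done programmatically with Python’s Pillow (ImageCms) and numpy, assuming a local clone of Elle’s repo. The profile path and filename are just assumptions; substitute whichever space you want to test.

import numpy as np
from PIL import Image, ImageCms

# A grey ramp: every pixel has R = G = B by construction.
ramp = np.tile(np.arange(256, dtype=np.uint8)[None, :, None], (16, 1, 3))
grey = Image.fromarray(ramp, mode="RGB")

src = ImageCms.createProfile("sRGB")
# Assumed path into a clone of elles_icc_profiles; swap in the profile under test.
dst = ImageCms.getOpenProfile("elles_icc_profiles/profiles/LargeRGB-elle-V4-g18.icc")

converted = ImageCms.profileToProfile(
    grey, src, dst, renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC)

out = np.asarray(converted).astype(int)
drift = np.abs(out - out.mean(axis=2, keepdims=True)).max()
print("max channel drift from neutral:", drift)  # near zero if neutrals stay neutral

As noted above, this really probes the behaviour of the transform (and the rendering intent) as much as the space itself.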

darktable’s default input profile is ‘standard color matrix’, and is what will be used on all images edited with that program, unless one puts their own camera profile in the relevant folder, or for some reason decides to use one of the working spaces as an input profile.

I’ll let the dt folks speak to the specifics, but I think when they say ‘standard’, they mean the simple dcraw-style matrix, which is camera-specific, as opposed to separate profile files…

Most FOSS software uses the old dcraw adobe_coeff table as its source of last resort for the ‘camera primaries’, as the nine numbers of the 3x3 matrix are often called. RT, for instance, has a file called camconst.json, where for a lot of cameras you’ll find a dcraw_matrix entry. I think they also keep a copy of the dcraw table for older and weird cameras.

The dcraw convention is to store the numbers as integers, which then need to be converted to float (divided by 10000) and inverted to get the camera-to-XYZ 3x3 matrix. Here’s the extract from the adobe_coeff table for the Nikon D7000:

{ "Nikon D7000", 0, 0,
    { 8198,-2239,-724,-4871,12389,2798,-1043,2050,7181 } },

For other reasons of history, dcraw matrices are anchored to the D65 whitepoint, which vexes the D50-anchored ICC world.
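If it helps, a minimal sketch of that bookkeeping in Python, ignoring the row normalization dcraw itself applies, and assuming the stored numbers follow the DNG ColorMatrix convention of mapping XYZ to camera:

import numpy as np

# Nikon D7000 entry from the adobe_coeff table quoted above.
d7000 = [8198, -2239, -724, -4871, 12389, 2798, -1043, 2050, 7181]

# Integers -> floats: divide by 10000.  As stored, this maps XYZ(D65) -> camera.
xyz_to_cam = np.array(d7000, dtype=float).reshape(3, 3) / 10000.0

# Invert to get camera -> XYZ(D65), the direction an input profile needs.
cam_to_xyz = np.linalg.inv(xyz_to_cam)
print(cam_to_xyz)

# An ICC builder would still have to adapt this D65-referenced matrix to the
# ICC D50 connection whitepoint (e.g. with a Bradford chromatic adaptation).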

I think that’s what’s meant by ‘standard’…


Yes, I am reading your article on color management in raw processing [love the ‘mechanical’ explanation - focusing on the mechanics is always the best way to make complex science understandable to the layman], and I am beginning to understand this point now. I guess when this option is selected, darktable looks up an internal library of camera colour matrices, one for each camera - so it is not really a ‘standard color matrix’ so much as a ‘unique color matrix for this camera’. A confusion of terms. That explains the first part of where I was getting lost. It still doesn’t really explain the second: if ACES 2065-1 encompasses all colours in the visible spectrum, and it is used as a working space, how can we possibly get that high-saturation fringing? Is the answer that cameras are capturing colours outside the visible spectrum, which are then being pushed back in?

Transforms between different color spaces are exact, and in theory you do not lose any information going from one to the other (say in floating point, keeping negatives and values greater than one). On the other hand, the link from those to raw/camera space is not unique and is a Compromise, as mentioned. This is why most people cringe at calling raw/camera space a ‘color’ space. The others instead have a direct linear relationship to tones perceived by a typical human observer (e.g. the CIE 1931 2° Standard Observer), and that’s important because colors are only colors when perceived by humans.

So we get raw data to XYZ via a Compromise Color (3x3) Matrix (1). XYZ is a perceptual space, meaning that if two tones have the same coordinates there, they should be perceived as the same color by said Observer. While there, we can compare what we got to a standard reference (e.g. Lab values of a CC24) and correct gross errors via a LUT (2) or otherwise. When done, we are just one exact (3x3) matrix multiplication away (3) from a standard output color space like sRGB. sRGB’s specification blocks negative values and clips those above full scale, so some tone information may be lost at that point.

If no corrections are required then step (2) can be bypassed and only a single (3x3) matrix can be used to go from raw to output color space = matrix(3)*matrix(1).
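A tiny numpy illustration of that collapse, with matrix (1) invented for the example and matrix (3) being the published XYZ-to-linear-sRGB matrix:

import numpy as np

# (1) Hypothetical compromise camera -> XYZ(D65) matrix from profiling.
M1 = np.array([[0.62, 0.28, 0.08],
               [0.27, 0.68, 0.05],
               [0.03, 0.12, 0.86]])

# (3) Exact XYZ(D65) -> linear sRGB matrix from the sRGB specification.
M3 = np.array([[ 3.2406, -1.5372, -0.4986],
               [-0.9689,  1.8758,  0.0415],
               [ 0.0557, -0.2040,  1.0570]])

# With no LUT correction (step 2), the whole chain is a single matrix.
M_raw_to_srgb = M3 @ M1

cam_rgb = np.array([0.45, 0.30, 0.20])
assert np.allclose(M_raw_to_srgb @ cam_rgb, M3 @ (M1 @ cam_rgb))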

An input device transform (IDT, what Adobe calls dcp profiles) just collects and specifies the matrices and the LUTs. I know nothing about ICC profiles.

Jack

@gwgill at his Argyll website has a nice description of what ICC profiles do with respect to gamut:

https://www.argyllcms.com/doc/iccgamutmapping.html

This is what I was getting at earlier, when I suggested camera space and input profile were two different things (but put a transform in the wrong place). Perhaps it is correct to say the input profile is the best representation of the raw data, but that it will still contain slight inaccuracies?

I don’t know ACES 2065-1, but your output device certainly does not encompass the visible spectrum, so some tones that were happily in their place in 2065-1 will have to be relocated to be squeezed into the output gamut. This relocation is not standardized and is subjective - so it may not be pleasing. That’s one of the reasons why rendering intents with names like ‘perceptual’ are often not implemented (e.g. in PS).


Ok, so perhaps larger working profiles do solve the fringing issue, but I will never know unless I have a monitor that can display them.

The raw->XYZ 3x3 matrix (what is referred to as a Forward Matrix in the DNG spec) produces a linear projection but is a compromise, so some tones will be more ‘correct’ than others. To give you an idea, the average error of my last two cameras compared to a reference CC24 in daylight was about 1.2 CIEDE2000. One CIEDE2000 unit is supposed to be a just-noticeable difference. But that’s only for the 24 tones that were used to determine the raw->XYZ matrix. Many more tones in the sea…
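For anyone who wants to reproduce that kind of number, the colour-science Python package can compute CIEDE2000 between a reference patch and a camera rendering of it. The Lab values below are a nominal CC24 ‘dark skin’ patch and an invented measurement, purely for illustration:

import numpy as np
import colour  # the colour-science package

lab_reference = np.array([37.99, 13.56, 14.06])  # nominal CC24 'dark skin' patch
lab_camera    = np.array([38.80, 12.90, 15.10])  # invented camera-derived value

dE = colour.delta_E(lab_reference, lab_camera, method="CIE 2000")
print(dE)  # around 1 is roughly a just-noticeable difference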

Having said that, there is something comforting in knowing that you have followed a linear workflow, and I find that with my landscapes that is often good enough. You can take a look at a comparison of linear vs non-linear renderings here.


@JackH Great post, I had read that in the past and need to revisit it. I was trying to create an ICC for an older (now) Lumia camera, as I had a bunch of images and wanted to edit them in DT. The Lumia DNG was of a type with no forward matrix, and I was trying to use dcamprof in the DCP-to-ICC conversion process, but it needed the forward matrix. I think @ggbutcher did create a matrix, but it may have been something different, I can’t recall, so I will have to go back and see if, by adding that to the json file, I can create an ICC… Awesome blog… and I love the name…


I just tried this with my D7000:

  1. Get the Adobe Camera Standard profile for your camera from the Adobe DNG Converter.

  2. Do dcamprof dcp2json <camera_standard.dcp> <camera_standard.json>

  3. With that JSON, you can now make both matrix and LUT profiles with dcamprof make-icc -p matrix|xyzlut <camera_standard.json> yourcamera.icc

For the matrix profile, dcamprof used the ForwardMatrix1, which has a StdA whitepoint. I think you can use the .json of the DCP to make profiles with other illuminants, but I’ve run out of time to experiment…

That was exactly what I was trying to do, but I got stumped because there was no Adobe DCP for my phone. You can essentially use the DNG file itself as input instead of the DCP, but then there was no forward matrix in the Lumia DNG… so that is where I left it. I still have the phone, so I took a few pictures of my color chart and ran a very quick darktable-chart style creation. I didn’t like the tone curve that I got, but the LUT portion actually produced a nice color correction, at least to the eye, over the current representation… so I may leave it there, or out of curiosity see if I can derive a forward matrix and insert it into the json file before trying to run make-icc?

Thanks for your reply, as always concise and informative…

Take care

Are these dcraw matrices linear or non-linear?