The more I delve into this subject matter, the more confusing it becomes. There are a number of tutorials and opinions out there which all purport to be definitive. Although their messages often conflict, the common thread is that nobody really seems to know. To cover their butts, many qualify their nebulous advice by saying something along the lines of: “This isn’t an exact science, you’ll have to make your own adjustments and modifications to this procedure until you are satisfied”. This is rather disheartening.
Notwithstanding the incongruent advice given by many of these self-proclaimed “experts”, some advocate saving the output of a camera profiling exercise as darktable “styles” (using darktable-chart). Others suggest saving them as ICC input profiles using ArgyllCMS. Which is the correct method to use?
Bear in mind that I want to profile my camera so that WIGIWIS (what I get is what I saw, i.e. real colours) before I begin any further post-processing steps. I feel that I should clarify: I DON’T want my raw files to look like the camera’s jpegs; I merely want accurate colours, regardless of whether they meet somebody’s definition of “pretty” or not!
The word ‘profile’ is a bit overloaded with respect to cameras. There are two main categories:
Camera characterization (often loosely called “calibration”): You need this information to deal with raw files, no matter what. In this case, the profile provides the red, green, and blue primaries (three numbers for each) that describe the particular camera’s spectral response. A lot of raw software stores a set of these internally for all the cameras it supports; many use the adobe_coeff data from dcraw, David Coffin’s command-line raw processor. Most good raw software lets you substitute that with an ICC or DNG profile you create from a test shot of a color target taken with your camera. In most raw processors, after the raw data is demosaiced and white-balanced, the camera profile is assigned to that RGB data. Before an image is saved from that data, the camera profile has to be used to convert the RGB data to some output colorspace such as sRGB.
“Looks”: These profiles alter the color and/or tones of the image to achieve someone’s notion of a “look”, e.g., “film”, “vivid”, etc. ICC or DNG files can be used for this, but there’s also a lot of work out there doing it with “look-up tables”, or LUTs.
If you tell me the make/model of your camera, I’ll extract its camera color data from dcraw and post it. You won’t be able to do anything directly with it in mainstream software, but it’s interesting to have (well, I find it interesting… ) If you want a real camera profile that you can use, download Adobe’s DNG Converter; when installed, it includes a directory of DCP profiles for a boatload of cameras, covering both 1) characterization, and 2) looks.
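To make that characterization step concrete, here’s a rough numpy sketch. The camera matrix below is invented (a real one would come from adobe_coeff or a profiling shot) and chosen so that camera white (1,1,1) lands on the D65 white point; the only “real” numbers are the standard XYZ-to-sRGB matrix:

```python
import numpy as np

# Hypothetical camera-to-XYZ(D65) matrix -- columns are the XYZ of the
# camera's R, G, B primaries.  Made up so that camera white (1,1,1)
# maps to the D65 white point (0.9505, 1.0, 1.089).
CAM_TO_XYZ = np.array([
    [0.55, 0.20, 0.2005],   # X row (sums to 0.9505)
    [0.30, 0.60, 0.10],     # Y row (sums to 1.0)
    [0.02, 0.07, 0.999],    # Z row (sums to 1.089)
])

# Standard XYZ(D65) -> linear sRGB matrix (IEC 61966-2-1).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def camera_rgb_to_srgb(rgb):
    """Demosaiced, white-balanced camera RGB -> linear sRGB."""
    xyz = CAM_TO_XYZ @ np.asarray(rgb, dtype=float)
    return XYZ_TO_SRGB @ xyz

# A white-balanced neutral grey should come out (near) neutral in sRGB.
grey = camera_rgb_to_srgb([0.5, 0.5, 0.5])
```

That’s the whole “assign the camera profile, convert to output space” dance in two matrix multiplies; real pipelines add gamma encoding, gamut handling, etc., on top.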
That’s a breath of fresh air! Finally something which doesn’t slavishly toe the line.
Thanks for the above and for your other comments, Glenn. I’ll preface the rest of this by admitting that I’m a rather thick-skulled fella whose noggin is often initially impermeable to foreign concepts. With this said, please bear with me…
As mentioned, I don’t care about the “Looks” aspect. I’ll happily create my own Looks. After all, that’s why I shoot raw!
As noted earlier, my desire is rather to achieve what I think you refer to as characterization. Yet somehow, I still can’t wrap my head around this. Perhaps this is due to the plethora of rampant (but probably ‘unintentional’) bad advice floating around the ’net.
Is a camera’s accurate reproduction of colours not due to a combination of (at least) 3 factors? For example:
the camera sensor characteristics
the ambient lighting conditions
the lens characteristics
Are you suggesting that using only a single unique ICC or DNG profile will always result in true colours, regardless of any variations in the above? This would also suggest that it’s not necessary to use tools like X-Rite’s ColorChecker Passport for anything other than an initial calibration, right?
In my opinion it’s best to start from a good input/camera ICC profile in dt rather than from a so-so input profile and then correct the colours with various colour modules. You do the hard work upfront when making such an input profile for your camera, but then the number of modules you need will be significantly smaller.
Personally I prefer making my camera profiles (both ICC for dt/RT and DCP for Lr/RT) with DCamProf + ArgyllCMS (the former has a very important addition: its neutral tone reproduction operator). I prefer them to the default matrix profiles for my cameras.
Disclaimer: I’m a programmer (or, according to my phone’s autospell, ‘prisoner’), not a color expert. What I described above is the pipeline, the what’s-done. The why-done is better left to color-ful folks like @elle and @gwgill.
That said, my understanding regarding the main external variable affecting measurement of a camera’s color response is lighting. Somewhere in her trove of prose, I recall @elle saying that a daylight profile will do for the majority of situations, but there’s probably more to it than what I’m recalling. Adobe DCPs do let you build dual-illuminant profiles where the gamut transforms are interpolated between the two, but I haven’t played with such.
With respect to the influence of the lens, I’d say there is some, but it’s not significant for mainstream products.
For my personal endeavors, I did an Argyll scanin/colprof ICC profile from a ColorChecker Passport shot at noon (hmm, sounds like a western gunfight…), and I use it in the majority of my processing. It worked well until I came home with shots from a grandkid choral concert in a local venue that seems to like blue accent spots; those splashes of light were garishly posterized. After a lot of digging, especially into Anders Torger’s dcamprof documentation, I found discussion of the challenge of handling close-to-gamut-boundary colors, and, for my particular blue problem, a description of decreasing the Y value of the blue primary to tame such garishness. Using his dcamprof software, I was able to apply that change to my numbers and produce a profile that did a much better job with the blues.
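For the curious, that blue-primary trick boils down to simple matrix surgery. Here’s a sketch with invented numbers (the real edit is done on your profile’s actual matrix, via dcamprof); the columns of a camera-to-XYZ matrix are the XYZ of the R, G, B primaries, so the middle entry of the blue column is blue’s luminance:

```python
import numpy as np

# Invented camera-to-XYZ matrix; columns are the XYZ of the camera's
# R, G and B primaries, so m[1, 2] is the Y (luminance) of pure blue.
m = np.array([[0.55, 0.20, 0.2005],
              [0.30, 0.60, 0.10],
              [0.02, 0.07, 0.999]])

m_adj = m.copy()
m_adj[1, 2] *= 0.7   # knock 30% off blue's Y -- the factor is arbitrary

# Deep, saturated blue now maps to a darker, less garish luminance.
pure_blue = np.array([0.0, 0.0, 1.0])
y_before = (m @ pure_blue)[1]
y_after = (m_adj @ pure_blue)[1]
```

Only strongly blue pixels are meaningfully affected; near-neutral colors get contributions from all three columns, so the change barely touches them.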
My messing around to date is with camera profiles that only have the 9-value matrix of RGB primaries. With a few more brain cells than I have, one can switch to LUT profiles to get better characterization of spectral response. I’m saving such for after retirement, when I can devote endless daytime to reading sentences over and over until they sink in, or we pour evening wine…
Yes, and it would help if they were disambiguated. The first is a color device profile, the second is a look profile. A color device profile is not a calibration, since it doesn’t alter the camera response itself, instead it is a characterization - information about how the camera behaves.
Cameras generally don’t produce pictures composed of colors in a device-independent sense - i.e. the numbers they give you are “their” version of color values, and can only be properly converted into other color spaces or interpreted as human-perceived colors if you know what those numbers mean - i.e. if you have that camera’s color device profile. (i.e. “RGB” values mean absolutely nothing without a device profile of some sort, either measured or assumed.)
A camera will only “see” colors exactly the way we see them if its spectral sensitivity characteristics (and that includes the sensor and lens) are similar to the human eye (the “Luther” condition).
If the light sensitivity of the camera is non-linear, then the color device profile will also have to try and characterize this aspect.
If the camera image processing makes changes (to give it a “look”), then the color device profile will have to be able to characterize this as well, so as to be able to undo it. This may well be impossible if it is dynamic, or more complicated than the model the color device profile can encode. This is one reason to shoot RAW.
White point adjustment is another complication - the camera or image processing typically tries to imitate the way the human eye and brain automatically adapt to different lighting. A color device profile generally has no facility to deal with this properly - it either operates in a mode where no white point adjustment has been performed (absolute), or operates in a mode where it assumes a white point adjustment has been performed in a way that doesn’t mess up the characterization (relative).
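As an illustration of what “relative” white point handling involves under the hood: it is typically a von Kries-style chromatic adaptation. A minimal numpy sketch using the Bradford cone matrix (the D50/D65 white points are the standard ones; the rest is an assumption-free textbook construction, not any particular product’s code):

```python
import numpy as np

# Bradford cone-response matrix (the one ICC relative colorimetry uses).
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def adapt(xyz, src_white, dst_white):
    """Von Kries-style chromatic adaptation in Bradford cone space."""
    src = BRADFORD @ np.asarray(src_white, dtype=float)
    dst = BRADFORD @ np.asarray(dst_white, dtype=float)
    scale = np.diag(dst / src)  # per-cone gain, imitating eye adaptation
    return np.linalg.inv(BRADFORD) @ scale @ BRADFORD @ np.asarray(xyz, dtype=float)

D50 = [0.9642, 1.0, 0.8249]
D65 = [0.9505, 1.0, 1.0890]

# By construction, the source white maps exactly onto the destination white.
w = adapt(D50, D50, D65)
```

The per-cone gains are what “a white point adjustment that doesn’t mess up the characterization” looks like in practice; anything fancier the camera does is outside this model.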
Unless you have a perfectly colorimetric camera (most cameras aren’t), this is not a reasonable assumption.
If you can control all the spectral aspects, then you can get quite good colorimetric reproduction using cameras. That means a controlled lighting condition with a fixed white point, and a test chart with similar spectral characteristics to what you want to reproduce, and a fixed camera and lens. As soon as you vary any of these, colorimetric accuracy will get worse.
often inside ICC profiles you’ll find a colour matrix. that linear 3x3 transform is not enough to correct for the spectral responsivity of the colour filter array on the sensor. you need a look-up table. those are somewhat hard to control (they can be embedded in ICC, too), and thus require shooting a target with many patches (the 24-patch ColorChecker will likely not do, but it is often used to create matrices, because the two play in the same league of precision…).
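to see why the matrix falls short, here’s a small numpy sketch with synthetic patch data. a deliberate cross-channel sine term stands in for the cfa’s real non-matrix behaviour, and the best least-squares 3x3 fit still leaves a residual that only a lut could absorb (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "chart": 24 camera-rgb patches and their reference xyz.
# the true response is a matrix plus a small cross-channel sine term,
# standing in for the cfa's real (non-matrix) spectral behaviour.
n = 24
cam = rng.uniform(0.05, 0.95, size=(n, 3))
true_m = np.array([[0.60, 0.20, 0.15],
                   [0.30, 0.60, 0.10],
                   [0.05, 0.10, 0.80]])
xyz = cam @ true_m.T + 0.02 * np.sin(3.0 * cam[:, [1, 2, 0]])

# best 3x3 matrix in the least-squares sense...
m_fit, *_ = np.linalg.lstsq(cam, xyz, rcond=None)

# ...still leaves a residual the matrix model simply cannot express.
rms = float(np.sqrt(np.mean((cam @ m_fit - xyz) ** 2)))
```

with only 9 free parameters, the fit can never chase the nonlinear term; a lut with enough patches behind it can.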
for my part i prefer darktable styles created from darktable-chart (which use colourchecker-lut + tonecurve) for a few reasons:
i wrote it, i know what it does. if it doesn’t work i can fix it
it’s a look-up table, and thanks to thin plate spline interpolation it’s reasonably robust wrt overshooting and interpolation artifacts
i can edit it manually if i don’t like the results or if there’s something obviously broken
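the thin plate spline bit can be sketched with scipy (the patch data here is invented, and darktable-chart uses its own implementation, not scipy): given source and corrected values at a couple of dozen patches, the spline passes exactly through every patch and stays smooth in between:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# invented patch data: 24 camera-rgb values and their "corrected"
# counterparts (here just a mild saturation push standing in for a
# measured chart correction).
src = rng.uniform(0.1, 0.9, size=(24, 3))
dst = 0.5 + 1.1 * (src - 0.5)

# thin plate spline interpolation between the patches -- smooth in
# between, and it reproduces every measured patch exactly.
tps = RBFInterpolator(src, dst, kernel='thin_plate_spline')
out = tps(src)
```

the smoothness is what keeps colours between patches from overshooting the way higher-order polynomial fits tend to.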
you can do these for jpg/raw pairs, or just use the spectral reference values which are known for the charts. if those are printed carefully and lit by reasonably well-known sunlight, results can be close to what the real thing looked like.
for my part i don’t care about calibrating for lenses, i don’t think my lenses differ that much from each other. i care about the sensor.
i also don’t want to correct for ambient lighting (no excessive white balancing, i want the colour of the real light, and adapting for background illumination is a bit too esoteric for my limited understanding).
I don’t quite “get” the technical gist of what you’re saying, Jo. However, since your view seems to be different than others’ posted here, I hope that it spurs further discussion (perhaps even heated). I’ll be standing back along the sidelines, watching and hoping that a nugget or two of elementary (therefore to me: comprehensible) wisdom may fall off the truck which I can recover and quietly try to digest.
From what little I’ve read “accurate real colours” are not possible to reproduce with current technology, even assuming we had a complete description of human vision (which we don’t). It’s an error minimisation at best and even then assumes your eyes fit a “standard observer”. In fact it seems amazing we get as good as we have. So perhaps the question is which is least wrong…
Yes, the original question is a bit “apples vs oranges”. I personally think each image I produce needs to have an ICC profile with at least a matrix characterizing the encoded gamut, so others’ color-managed software knows what to do with it. With regard to styles, DT or others, I’d just as soon do those manipulations individually to taste, now that I know how…