May I ask a fundamental question I am not clear about? If I shoot with my digital camera, the output JPG is supposedly in sRGB space. Some cameras also allow choosing AdobeRGB as the output space. But what about the raw images? The data are supposedly the unmodified values from the detector (though matrixed by the firmware, according to Hunt (2004), “The Reproduction of Colour”). How do I then convert these data into a well-defined colour space in raw processing? With my scanner I know what to do: I scan an IT8 target, create an ICC profile and use it to convert the scan data to my favourite colour space.
I had calibrated the still images of my movie camera with a printed IT8 target and got a vast improvement in colours for the “sRGB” JPGs it delivered. Doing the same for my Sony camera, the colours were not changed at all. This does not refer to the raw images but to the JPGs produced by the camera.
Raw data has no color space (also, raw data isn’t in color; remember, it’s a Bayer/X-Trans CFA pattern). You assign it some sort of color space after demosaicing.
If you want the ICC profile, you make an input color profile.
Here I do not quite agree. It is the (device-dependent) colour space of the camera, just like a scanner has its own colour space.
You have the R, G and B photosites. They define an RGB triplet of values via demosaicing for each pixel. I would call this the colour space of the device.
I.e. one shoots a printed IT8 target just as I described above and assigns the resulting profile to each image? So e.g. in RT I go to the colour management tab and enter this ICC profile. This takes effect after demosaicing, I assume?
Would such a profile depend on the algorithm chosen for demosaicing?
To be specific, the raw data are measurements of light taken through bandpass filters corresponding to the low, mid, and upper regions of the visible spectrum. They don’t define a ‘colorspace’; rather they define a ‘spectral response’. That the filters happen to align with what most of us consider to look like ‘red’, ‘green’, and ‘blue’ is just a coincidence to our perception.
In order for us to start regarding the data as ‘color’, the individual measurements need to be coalesced into encodings that correspond to our interpretation of color. That comes in the demosaic operation. Now, that operation still doesn’t give us renderable data, as the camera’s ability to resolve light far surpasses the display tech’s ability to present it. So, an operation that “crushes” the spectral response of the camera to the color gamut of the rendition media is needed. To do that, one needs a description of the input data, and a description of the gamut capability of the medium. Those are the input and output profiles of the first transform done in the color management chain. The input profile contains data that allows mapping of the camera data to XYZ, then the output profile contains data that allows mapping of the XYZ data to the rendition media colorspace. Now, that first transform may be to a working space, but there’s always a first translation from camera response to some colorspace.
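To make that chain concrete, here is a minimal Python/numpy sketch of the two-step transform described above. The camera matrix is a placeholder I made up purely for illustration (the real numbers come from the input profile); the XYZ-to-sRGB matrix and tone curve are the standard published sRGB ones:

```python
import numpy as np

# Hypothetical forward matrix: demosaiced, white-balanced camera RGB -> CIE XYZ (D65).
# The real values live in the input profile (ICC/DCP); these are placeholders.
CAM_TO_XYZ = np.array([[0.60, 0.28, 0.09],
                       [0.30, 0.63, 0.07],
                       [0.02, 0.12, 0.84]])

# Standard XYZ (D65) -> linear sRGB matrix, i.e. the "output" side of the transform.
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def camera_to_srgb(rgb_cam):
    """Map a camera triplet to display-ready sRGB via the two profile transforms."""
    xyz = CAM_TO_XYZ @ rgb_cam               # input profile: camera space -> XYZ
    srgb_lin = XYZ_TO_SRGB @ xyz             # output profile: XYZ -> linear sRGB
    srgb_lin = np.clip(srgb_lin, 0.0, 1.0)   # crude "crush" into the display gamut
    # standard sRGB tone curve (gamma encoding) for display
    return np.where(srgb_lin <= 0.0031308,
                    12.92 * srgb_lin,
                    1.055 * srgb_lin ** (1 / 2.4) - 0.055)

print(camera_to_srgb(np.array([0.4, 0.5, 0.2])))
```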
I know you know most or all of this; I thought stitching it together might bring the insight needed to answer your question…
I think they do in the same way as the monitor’s or printer’s colour pigments do.
The “response” of the detector’s photosites is described by a triple of values ranging from 0 to 1 (normalized). That is true for a camera, a monitor or a printer. As such, the “colour space” of the device is a cube with axes R, G and B ranging from 0 to 1. The cube is completely filled and encompasses the colours available to the device. Now if I calibrate my device with an ICC profile, this cube is mapped into, e.g., the Lab colour space. The volume filled there is the gamut of the device, its “colour space”.
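A rough sketch of what I mean, with a made-up device matrix standing in for what the ICC profile would supply: map the corners of the normalized device RGB cube into Lab, and the volume those points span is the device’s gamut.

```python
import numpy as np

# Hypothetical device matrix (placeholder for the profile data):
# normalized device RGB in [0,1]^3 -> CIE XYZ.
DEV_TO_XYZ = np.array([[0.49, 0.31, 0.20],
                       [0.18, 0.81, 0.01],
                       [0.00, 0.01, 0.99]])

D65_WHITE = np.array([0.9505, 1.0000, 1.0890])  # reference white for Lab

def xyz_to_lab(xyz, white=D65_WHITE):
    """Standard CIE XYZ -> L*a*b* conversion."""
    t = xyz / white
    f = np.where(t > (6/29)**3, np.cbrt(t), t / (3 * (6/29)**2) + 4/29)
    return np.array([116 * f[1] - 16,          # L*
                     500 * (f[0] - f[1]),      # a*
                     200 * (f[1] - f[2])])     # b*

# Sample the corners of the device cube and see where they land in Lab.
for corner in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]:
    lab = xyz_to_lab(DEV_TO_XYZ @ np.array(corner, float))
    print(corner, "->", np.round(lab, 1))
```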
The exact choice of the filters / detector response is not important, as long as they reasonably cover the R, G, B space.
This is my understanding, and please correct me if I am wrong. Please ignore demosaicing for a moment and look at a camera with beam splitters and three separate detectors, covered with R, G and B filters respectively, for simplicity.
I was right there with you until a couple of years ago; the first admonishment I received was in some forum thread elsewhere, where I called it “camera gamut”. I was politely but firmly corrected, told that cameras don’t have a color gamut, they have a spectral response. Later, you may recall the rather contentious thread on Unbounded data, where troy_s kept exhorting “It Isn’t Green!” in response to the cast over un-whitebalanced images; that was all about light vs. color. My recent work on spectral sensitivity camera profiles has just put a bow on the whole realization: cameras don’t measure color, they measure light, and the encoding mechanism is what gives us quantities that our heads can interpret as color…
That’s why the demosaic operation is important, that is where the light measurements through the bandpass filters (note I’m not referring to them as red, green, or blue) get congealed into an encoding that supports our interpretation of it as color. I know it’s hard to not think of those filters as colored, but the SSF plots I’ve recently been producing drive home the bandpass function:
I color the lines to help discern the parts of the spectrum: 1) low-UV (blue), 2) mid-VIS (green), 3) high-IR (red). ‘mid-VIS’ could almost as easily be yellow, or orange, depending on which part of the segment one would pick. Each line represents the sensitivity of respectively filtered sensels to light. Each filter allows only a segment of the visible spectrum to pass.
Here’s a rawproc screenshot of my camera’s capture of light diffracted into its discrete wavelengths:
That you and I see blue, green, and red hues is not because the monitor is showing us colors, it’s because our heads interpret certain wavelengths as certain colors. The colors you see here are the “spectral colors”, those that result from interpretation of a single wavelength rather than a combination of wavelengths.
Once I was able to separate the physical phenomenon of light from the perceptual notion of color, a lot of what we do in capturing and processing images makes more sense…
Jossie, from your original question: sRGB or AdobeRGB will be embedded into your camera-created JPG files, so that has nothing to do with DT unless DT needs to read it to render your JPG file. Your raw data will be assigned the colorspace you choose when processing; the input color profile controls this. Usually a matrix profile specific to your camera is used to create the color values. Then the working profile is used for the manipulations, and then you choose an output profile. Usually in DT your camera is recognized and a matrix profile is applied, but if you make an ICC file you can apply it as the input profile instead, so then you are applying a different matrix and/or LUT values based on the lighting for that ICC. You select a working profile for the colorspace the modules work in, and finally you specify your output; you can also specify your display profile, which is usually “system” or a profile if you have calibrated your monitor. So basically you are looking at four profile settings that impact what you see on the screen and in your exported files. Way oversimplified… I do think this is much clearer and better organized in RT, and they also support DCP files as well as ICC in addition to the ones built into RT, so you have a lot of options.
I just cannot agree! Demosaicing is just a side issue of the problem, as I see it.
What is the difference? We interpret the measurements of light in different wavebands as “colour” (see e.g. chapter 3 in Wyszecki & Stiles: Color Science (2000)).
But this is the whole issue of colour! It is something we make up in our mind. But this has a physical origin, i.e. the measurements of light fluxes in three different wavebands by our eyes, conveyed by the monitor signal.
Let me fall back to a domain where I am most familiar with, astronomy. We measure the light from an object in different bandpasses. Combining these, we talk about the colour of an object: if the signal in R is strongest, it is a red object etc.
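That analogy can be written down directly. Here is a toy Python example with made-up fluxes and zero points, using the standard Pogson magnitude formula, where a larger B − V index means a “redder” object:

```python
import math

def magnitude(flux, zero_point_flux):
    """Pogson magnitude: m = -2.5 * log10(F / F0)."""
    return -2.5 * math.log10(flux / zero_point_flux)

# Made-up fluxes (arbitrary units) measured through B and V bandpasses,
# with equally made-up zero points, just to illustrate the idea.
f_B, f_V = 2.0e-14, 5.0e-14
f0_B, f0_V = 6.3e-12, 3.6e-12

B = magnitude(f_B, f0_B)
V = magnitude(f_V, f0_V)
print("B - V colour index:", round(B - V, 2))  # positive -> a "red" object
```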
I recently did a test of a Nikon scanner; the same would hold for a Nikon camera. I scanned an IT8 target and compared the resulting “colour space” with the setting of the scanner, which was sRGB. Comparing the colour space of the scanner with sRGB using Argyll showed very good agreement. So Nikon did a fine job. I did the same with the still images of my Panasonic movie camera, and the result was a disaster: it is not sRGB at all, but can be corrected with an appropriate ICC profile.
https://www.youtube.com/watch?v=GkBgFaSv1kE If you do this process for your Sony, copy the ICC to the color/in folder in the darktable config folder, and then change your input profile to use that ICC and the colors don’t change, then I guess your Sony must be pretty accurate… but I suspect you should see a change, especially if you see one on the monitor… Maybe also try correcting your video in DaVinci Resolve, as it has a built-in colorchecker correction tool to use with common targets…
Thing is, you can’t equate each and every color to a single measurement of light energy. Magenta, for instance, requires light energy from both ends of the spectrum, and none from the middle, to create the perception in our heads.
But it’s the essential mechanism to associate wavelength energy measurements into the triplets that encode the perceptual notion. And, most important to our consideration, the mechanism for describing the camera space as ‘color’ doesn’t work until after the demosaic operation gives us an RGB-encoded image array.
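If it helps to see how that “congealing” into triplets works mechanically, here is a deliberately naive bilinear demosaic sketch in Python/numpy. Real demosaicers (AMaZE, RCD, etc.) are far cleverer; this only shows that every pixel ends up with an R, G and B value interpolated from its neighbours:

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic_rggb(raw):
    """Naive bilinear demosaic of an RGGB Bayer mosaic (illustration only)."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask

    k_rb = np.array([[0.25, 0.5, 0.25],    # weights for filling in R/B values
                     [0.50, 1.0, 0.50],
                     [0.25, 0.5, 0.25]])
    k_g  = np.array([[0.00, 0.25, 0.00],   # weights for filling in G values
                     [0.25, 1.00, 0.25],
                     [0.00, 0.25, 0.00]])

    def interp(mask, kernel):
        # normalized convolution: average of the known neighbours at each site
        return convolve(raw * mask, kernel) / convolve(mask, kernel)

    return np.dstack([interp(r_mask, k_rb), interp(g_mask, k_g), interp(b_mask, k_rb)])

# Tiny fake 4x4 mosaic, values in [0,1], just to show the shape of the result.
mosaic = np.random.rand(4, 4)
print(bilinear_demosaic_rggb(mosaic).shape)  # (4, 4, 3): one RGB triplet per pixel
```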
That some colors are indeed the result of a single spectral measurement is where our confusion of the two is rooted. And, if those were the only colors we could concoct, using the terms ‘light’ and ‘color’ interchangeably would be valid. Alas, metamerism confounds that…
The point of input profiles is to bring the image to a common point, where after minimal processing you get an image that matches the ground truth (the original scene) as closely as possible.
Since we know the device is an imaging instrument for general consumer photography (that includes professional and cinema cameras and scanners), we have a good idea what to expect from the so-called raw files. These imaging devices aren’t scientific or medical instruments where the specs and tolerances are exacting and transparent. There is much more obfuscation and abstraction in photography.
Although things have improved substantially (users are more capable and manufacturers are providing more to tinker with), there is much to be desired. Profiling, or more precisely characterization, of the device bridges the gap left by the black (or grey) box that is the camera. We would like to think it isn’t a black box, but it is, unless we can fully reverse engineer it to the point of building a replica.
What I am saying is that the internals don’t matter so much. Nor can we calibrate the device. What matters is interpreting what a set of reference raw files is giving you for that particular camera (and settings), such that once the input colour, noise, white point, etc., profiles are gathered, the raw processor has enough context to make an image that is close to the ground truth with minimal (neutral; natural) processing.
I digress. Let’s get back to the OP question. What is the problem exactly? Were you unable to reproduce the target colours with the Sony? If so, there is a problem with your method. I know you are well versed in making profiles, so this is puzzling.
There is no problem at all. I was just curious how you ideally progress from the raw data to a well-defined colour space like sRGB or AdobeRGB.
My question started with a Nikon scanner, which is said to provide images in sRGB or AdobeRGB depending on your settings. The same is true for a Nikon camera. Since no ICC-profile is used, one relies on the calibration provided by Nikon. My question simply was, how close is the “Nikon” or the “Panasonic” colour space to a standard sRGB or AdobeRGB. You can see a comparison of the Nikon and my Panasonic colour spaces in a thread in the German scanning forum.
Coming back to original question, it is my understanding now that after demosaicing (perhaps always using the same algorithm) one should assign an ICC profile to the image to be safe in terms of colour accuracy. If not, one relies on the calibration of the manufacturer.
A side remark: since we are talking about an input device, there remains the problem of metamerism: physically different spectra (e.g. the same scene under different illuminations) can produce the same colour response from a given instrument.
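To make the metamerism point concrete, here is a small numpy sketch with made-up Gaussian sensitivities: two physically different spectra that produce exactly the same triplet from the “instrument”, because their difference lies in the sensor’s null space (a so-called metameric black).

```python
import numpy as np

wl = np.linspace(400, 700, 301)  # visible wavelengths, nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Made-up camera spectral sensitivities: three bandpass curves (not "colours").
S = np.vstack([gaussian(450, 25), gaussian(540, 30), gaussian(610, 30)])

# Some arbitrary smooth spectral power distribution.
spd1 = 0.5 + 0.4 * np.sin(wl / 40.0)

# Build a "metameric black": a spectrum the sensor cannot see at all,
# i.e. a vector in the null space of S (S @ black == 0).
v = gaussian(500, 10) - gaussian(650, 10)           # arbitrary perturbation
black = v - S.T @ np.linalg.solve(S @ S.T, S @ v)   # project out what S sees

spd2 = spd1 + 0.3 * black   # physically different light (scaled small)...
print(np.allclose(S @ spd1, S @ spd2))   # ...identical camera response: True
```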
Yes. Although, pedantically, the act of assignment doesn’t really change the image. I do it as the first tool in my rawproc chain, even before white balance. It’s the transform to the next colorspace that makes the image change…
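A toy way to picture that distinction (hypothetical names and matrix, Python):

```python
import numpy as np

# Hypothetical profile matrix, just for illustration: tagged space -> XYZ.
ASSIGNED_TO_XYZ = np.array([[0.49, 0.31, 0.20],
                            [0.18, 0.81, 0.01],
                            [0.00, 0.01, 0.99]])

image = np.random.rand(4, 4, 3)          # some demosaiced pixel data

# "Assigning" a profile: the pixel values are untouched; we only record
# how they are to be interpreted later.
tagged = {"pixels": image, "profile_to_xyz": ASSIGNED_TO_XYZ}

# "Converting" to another space: now the numbers actually change.
converted = tagged["pixels"] @ tagged["profile_to_xyz"].T   # e.g. to XYZ

print(np.array_equal(image, tagged["pixels"]))  # True: assignment changed nothing
print(np.array_equal(image, converted))         # False: the transform changed data
```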
I bring this up because I think it’s important to consider where the actual act occurs relative to the other processing, especially the tone conversion. Right now, my processing leaves the raw in the original camera space (tone and color) until either display or file export; that is when the tone and color transforms take place. So, when I adjust my filmic curve, that happens before the display transform, so what I’m looking at is the amalgam of those two tone operations. And yes, I am not using a working profile right now, but I don’t see a downside in my final renditions. But I haven’t done a well-considered comparison…
Order of operations is an important thing to manage. Now that darktable has exposed that to users, it’ll deserve more discussion so folk can understand the implications of their ordering choices…