What does linear RGB mean?

I am sorry to point this out, but your claim was that BabelColor does something wrong. They don’t. You have been given arguments for why they use a reflectance spectrum, which is an absolute reference, when you claimed otherwise. And yes, you can know everything about the reflection and still know nothing about the light source. That is why it is a good idea to express reflectance as values between zero and one: you can shoot anything at it and know how the reflectance will transform it. It behaves like a filter (I’d even say it is one). Of course you want to know what the filter does to each wavelength. In that dimensionless form, this is the most versatile thing you can have for a set of color patches. And yes, of course you want the same information about the CFA in the camera.
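To make that “filter” behaviour concrete, here is a minimal sketch, assuming spectra sampled on a common wavelength grid (the 5 nm grid and the names are mine, purely for illustration):

```python
import numpy as np

# Illustrative 5 nm sampling grid from 400 to 700 nm (an assumption).
wavelengths = np.arange(400, 705, 5)

def reflected_spectrum(reflectance, illuminant):
    """A reflectance spectrum acts as a wavelength-wise filter:
    dimensionless values in [0, 1] that scale whatever spectral
    power you shine at the patch, sample by sample."""
    return reflectance * illuminant
```

Because the reflectance is dimensionless, the same array tells you what happens under any light source you care to simulate.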

This!

That’s nice, but it is about spectral references for test targets. I am, at this point in the discussion, more interested in the camera SSF workflow. Your thoughts on how dcamprof handles that would be interesting to me.

  • The code is undocumented,
  • there is no paper or reference on the manual page,
  • dcamprof allows embedding a look curve (which really has no place in an input medium profile),
  • it only produces an RGB → XYZ LUT (or two, if using dual illuminants),

so the whole spectral part looks anecdotal to me: it appears to be only a storage format for IT8 chart reflectances, later converted to XYZ once you input some illuminant (hoping that your light bulb is close to some reference illuminant). The only place where camera spectral sensitivities are mentioned says that you can use them to generate profiles, but you need to get them from somewhere else, and it will convert them into RGB → XYZ LUTs.

The whole software is just a more complicated Argyll; there is no real spectral profiling happening in there.
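For what it’s worth, the reflectance-to-XYZ step described above is easy to sketch: pick an illuminant, multiply, and integrate against the CIE observer. This is only an illustration of the general method (not dcamprof’s actual code), assuming everything is sampled on one wavelength grid:

```python
import numpy as np

def reflectance_to_xyz(reflectance, illuminant, cmfs):
    """XYZ of a patch under a chosen illuminant.
    reflectance: (N,) values in [0, 1]
    illuminant:  (N,) spectral power distribution
    cmfs:        (N, 3) CIE x-bar, y-bar, z-bar on the same grid."""
    stimulus = reflectance * illuminant        # light reaching the observer
    k = 100.0 / (illuminant @ cmfs[:, 1])      # normalise white to Y = 100
    return k * (stimulus @ cmfs)
```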

If I had to code it myself, I would probably use this method: https://www.osapublishing.org/oe/abstract.cfm?uri=oe-27-14-19075

:thinking: PDF not loading for me. Will try again later…

Loaded fine here

Must be my add-ons and/or internet connection.

Doesn’t download for me either, but I’m on the crap computer in the basement. Will try on the better machine later…

Thanks for unwittingly calling my computer crap. :poop:

Try from here instead, since the direct link doesn’t work for me anymore: https://www.osapublishing.org/oe/abstract.cfm?uri=oe-27-14-19075

Yup, link to abstract works great!

To take the experiment to the next step, I used the camera SSF data to make a matrix profile. This rendition turned out as bad as the one from the target shot. So, to use the data, one needs more than a matrix…
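For anyone wanting to reproduce that step: building a matrix profile boils down to a least-squares fit over corresponding patch values. A sketch, assuming you already have camera RGB and reference XYZ for each patch (the names are mine):

```python
import numpy as np

def fit_matrix(camera_rgb, xyz):
    """Least-squares 3x3 matrix M such that xyz ≈ camera_rgb @ M.
    camera_rgb, xyz: (P, 3) arrays of corresponding patch values."""
    M, *_ = np.linalg.lstsq(camera_rgb, xyz, rcond=None)
    return M
```

A 3×3 matrix can only ever approximate the mapping, which is consistent with the poor rendition reported above.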

If I get ambitious, I’ll try to make a LUT profile from the target shot. Then all the combinations would be assessed…

Told you soooo :musical_note:

Thanks!
For the equipment part, I see that they basically build their own monochromator. Nothing too complicated, I think, but they calibrate it with only three emission lines of a mercury-discharge lamp (not enough for my taste). They then check that calibration against a real spectrometer. The latter is going to be a problem for anyone trying to replicate this.
So ‘how else could motivated individuals measure the spectral response of the CFA?’, I thought. And I found a diploma thesis from 2009 where a guy used a set of interference filters to get around the ‘calibrated monochromator’ problem.
https://www.image-engineering.de/content/library/diploma_thesis/christian_mauer_spectral_response.pdf
On page 53 he cross-checks his filter-based results against monochromator results (for a Leica M8 CFA), and they actually look good.

Unfortunately, he uses 39 of those filters, and they usually aren’t cheap. So it’s replacing a calibrated spectrometer/monochromator with an expensive filter set. Hmm :frowning_face:
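For what it’s worth, if you did have such a filter set, recovering the SSF from the per-filter responses is a regularised linear inversion: 39 measurements under-determine a finely sampled spectrum. A sketch of that idea (the Tikhonov weight and the array shapes are my assumptions, not taken from the thesis):

```python
import numpy as np

def recover_ssf(A, r, lam=1e-3):
    """Estimate one channel's spectral sensitivity from filter measurements.
    A:   (F, N) rows = filter transmission x light-source SPD, per wavelength
    r:   (F,)   camera responses recorded through each of the F filters
    lam: regularisation weight (assumed), needed because F << N."""
    N = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ r)
```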

This community may be populous enough to invest in and collaborate on this… :wink: Just a thought. :slight_smile:

I think we’re just backing into the same trade studies, explicit or implicit, that got the ‘world’ to daylight dcraw primaries in a humongous table.

After all this, I think I’m fine with those numbers, until I encounter the “LED-blue” situation. I now want to experiment with target-shot fidelity versus matrix and LUT profiles, probably another trade study already done somewhere. For that, I need to procure an IT8 target, definitely a much cheaper endeavor than a monochromator/spectrometer bench.

Still, if anyone knows of a Nikon Z6 SSF data set… :grin:

From what I’ve read elsewhere (forum posts by Eric Chan, 2008 or so), a 3×3 matrix is usually good enough to get reasonable scene colourimetry estimates - provided that you use a calibration target with spectra similar to the things you usually photograph. Using a ColorChecker 24 is only good for photographing CC24 targets. This is apparently why the Adobe matrices don’t match the ones calculated from CC24 images - Adobe use their own target design.
This makes sense, because cameras are used to take pictures that people look at - if the colour response were too far from human vision, the pictures would look strange. In effect, cameras are mildly colourblind. The differences show up when you have something like narrow-band LED lighting.

You could also take a look at the RAW to ACES project - they appear to be using a similar approach to dcamprof, generating a ‘virtual’ target image from the SSFs, a set of reflectance curves and an illuminant spectrum. They have their own set of 190 patches. There is a paper linked from the GitHub project that describes the method they use to calculate the IDT. They are using a 3×3 matrix.
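The ‘virtual target’ idea is easy to sketch: with SSFs in hand, you can simulate the raw RGB the camera would record for each patch without ever shooting a chart. A sketch of the general technique (not rawtoaces’ actual code), on the usual common wavelength grid:

```python
import numpy as np

def virtual_target(ssfs, illuminant, reflectances):
    """Simulated camera raw RGB for a patch set, no chart shot required.
    ssfs:         (N, 3) camera spectral sensitivities
    illuminant:   (N,)   light-source spectral power distribution
    reflectances: (P, N) one reflectance spectrum per patch."""
    stimuli = reflectances * illuminant    # (P, N) light reaching the sensor
    return stimuli @ ssfs                  # (P, 3) simulated raw RGB
```

Pair these with XYZ reference values computed under the same illuminant and you can fit the 3×3 IDT by least squares.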

edit: fixed link to RAW to ACES project, I had a link to the Colour Science fork - thanks @afre

Hello,

now I am totally confused. What does all this have to do with the linearity of a digital camera? What the last posts are talking about is the ICC calibration of a camera, the same as the ICC calibration of, e.g., a scanner. Or am I missing something fundamental?

Hermann-Josef

A fork of GitHub - AcademySoftwareFoundation/rawtoaces: RAW to ACES Utility (posted by me a couple of times elsewhere). SSF sets are obtained from various sources; they are rare.

I just went back through the thread, and I think it started in post 18 (sorry, @Matt_Maguire :grin:) when Matt suggested that one could possibly use a colorchecker shot at different exposures to assess the effect of the color filters on sensor response (pipe up if I got that wrong).

I think there is a relationship, in that preserving linear tonality across the channels is essential to the camera’s ability to capture representative color, especially the non-spectral colors…

Sorry for that post @ggbutcher, it was a bit naïve. When Aurélien mentioned that the camera sensor was “non-linear”, I thought he was talking about the response of the individual photosites with respect to different intensity levels. If you read the subsequent comments, they are saying that the response is pretty close to affine (proportional, plus or minus an offset), and we can basically ignore any error there.

The real issue is the shape of the colour filters on the sensor. If I shift the colour hue by a certain distance, it will affect the RGB output values by certain amounts. If I take another equal-sized step in hue, the impact on the RGB values will be different, because of the overlap and particular shape of the colour filters on the sensor. Basically, it is not possible to come up with a single set of linear coefficients (aka a matrix) that will accurately transform the RGB values into a standardised colour space. What was discussed was using a lookup table (LUT) to make this mapping. To avoid exhaustively measuring every value, we could measure a sample of the LUT’s entries and apply some interpolation technique, but the measurements need to be done under well-controlled conditions.
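To make the LUT-plus-interpolation idea concrete, here is a minimal trilinear lookup into a 3D table. This is a sketch of the general technique, not any particular profiling tool’s implementation; in practice you would populate the grid from measured (or simulated) patch values:

```python
import numpy as np

def lut_lookup(lut, rgb):
    """Trilinear interpolation into a (S, S, S, 3) lookup table,
    indexed by camera RGB values in [0, 1]."""
    S = lut.shape[0]
    pos = np.clip(rgb, 0.0, 1.0) * (S - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, S - 1)
    f = pos - lo                          # fractional position inside the cell
    out = np.zeros(3)
    for dx in (0, 1):                     # blend the 8 surrounding grid points
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                i = hi[0] if dx else lo[0]
                j = hi[1] if dy else lo[1]
                k = hi[2] if dz else lo[2]
                out += w * lut[i, j, k]
    return out
```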

At least, that was my takeaway from the remarks that followed my post :slight_smile:
