I understand that absolute units are not needed to generate a color profile in DcamProf, but there is still a big difference between the two for this application.
Quantum efficiency (QE) just considers the conversion efficiency of a photon into a photoelectron. Radiometric units, on the other hand, consider the actual energy carried by a photon at a given wavelength, with short-wavelength photons carrying more energy than long-wavelength ones.
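The two representations differ only by the per-photon energy factor, so one can be converted into the other. A minimal sketch of that conversion (values and function name are mine, just for illustration):

```python
# Converting a quantum-efficiency (QE) curve to radiometric responsivity.
# The only wavelength-dependent factor is lambda/(h*c): long-wavelength
# photons carry less energy, so at equal power there are more of them.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
E = 1.602176634e-19  # elementary charge, C

def qe_to_responsivity(wavelength_nm, qe):
    """Responsivity in A/W from QE (electrons per photon)."""
    lam = wavelength_nm * 1e-9
    return qe * E * lam / (H * C)

# Same QE at 450 nm and 650 nm -> the red end responds more strongly per watt:
r_blue = qe_to_responsivity(450, 0.5)
r_red = qe_to_responsivity(650, 0.5)
print(r_red / r_blue)  # = 650/450, about 1.44
```

This is why the normalized curve shapes shift toward red when going from QE to radiometric units, and toward blue in the other direction.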
Welcome to the forum, and I need to start with a disclaimer: I know enough about this stuff to be truly dangerous.
I think you could use either data set, normalized, to make a camera profile, and the essential difference would be the white-balance multipliers required to make white = white. My intuition says the important parts of SSF data are 1) where the peaks occur in the spectrum, and 2) how the bandpasses mix to affect color separation. The radiometric sensitivity of the sensor mainly matters for where the noise floor sits…
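To make the white-balance point concrete, here's a toy sketch (made-up Gaussian SSFs and a flat illuminant, nothing from the actual cameras): the overall channel scaling that normalization throws away gets absorbed by the WB multipliers.

```python
# Toy example: white-balance multipliers from hypothetical SSFs.
import math

def gaussian(lam, mu, sigma):
    return math.exp(-0.5 * ((lam - mu) / sigma) ** 2)

wavelengths = range(400, 701, 5)  # nm
ssf = {  # made-up channel sensitivities
    "r": [gaussian(l, 600, 40) for l in wavelengths],
    "g": [gaussian(l, 540, 40) for l in wavelengths],
    "b": [gaussian(l, 460, 40) for l in wavelengths],
}
illuminant = [1.0 for _ in wavelengths]  # flat ("equal energy") SPD

# Raw channel response to the illuminant = sum of SSF * SPD over wavelength.
raw = {ch: sum(s * e for s, e in zip(curve, illuminant))
       for ch, curve in ssf.items()}

# WB multipliers: scale each channel so white gives R = G = B (green = 1).
wb = {ch: raw["g"] / v for ch, v in raw.items()}
print(wb)
```

Swapping QE curves for radiometric ones rescales each channel's raw integral, and the multipliers absorb exactly that, which is why the remaining difference is down to curve shape.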
Now that I’ve said all that, I welcome any and all clarification, correction, and even “what-the-hells”…
If one compared the QE-plot blue channel with the radiometric red channel, normalization would still result in different response curves. The question is how much of a problem this is, given that one can be calculated from the other. So it really just comes down to: which of the two does DcamProf expect?
Personally I’ve only encountered the QE plots in scientific environments (people counting photons or trying to assess photon yields in chemical reactions and such). I suspect it’s all about radiometric units for the engineers?
I’d also be very happy for someone who knows to chime in!
p.s.: oh and welcome to the forum, @sschmaus !!
p.p.s.: that sensor data sheet should be taken with a grain of salt IMHO as it shows horrible etaloning.
it was my understanding that QE is relevant here because of the photoelectric effect, which has a certain band gap and trades one photon (regardless of its actual energy, as long as it clears the gap) for one electron. fwiw that is also how you do spectral light transport simulation in rendering systems (consider wavelength for colour, but count the number of photons, not their energy as it varies over wavelength).
This is exactly my problem. It depends on how DCamProf handles spectral information internally. My first idea was to check in which units the illuminants are specified, as the camera response units should probably match that (provided they don’t get converted at some point)
Of course there is no unit attached, but plotting the data results in a graph very similar to the one from Wikipedia, which is apparently given in units of power (as opposed to photon counts, which would be the QE equivalent)
I don’t know how big a difference in color appearance using the wrong one would actually make. It could be negligible for most applications…
Maybe it can be settled by comparing a profile generated from shooting a color target to two profiles created with the different SSF functions and seeing which one matches better.
Unfortunately, the cameras I’m trying to calibrate are on the Perseverance Mars rover, so it’s impossible for me to shoot reference images with them
That’s the thing: camera profiles are a compromise to get the camera data into well-defined rendition spaces. There’s really no single "correct matrix". Evaluating how well profiles transform measured reference colors is a valid measure of performance.
Of course. It’s always a compromise to transform the colors from your imaging system to match those the CIE standard observer would see.
So an imaging system whose response functions match those of the CIE observer better should also perform better in the color correction.
But in reality, camera response functions don’t match that observer very well, and my point was that, of the two SSF options I have, the one that produces the better calibration isn’t necessarily the one that represents the light sensitivity the way DCamProf expects for generating an accurate profile.
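The "no single correct matrix" point can be sketched in code. With exactly three training patches, a 3x3 camera-to-XYZ matrix is uniquely determined; with a full target the system is overdetermined and any matrix is a least-squares compromise. This toy (made-up patch values, Cramer's rule for the 3x3 solve) shows the exactly-determined case:

```python
# Deriving a 3x3 camera->XYZ matrix from three training patches.

def det3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def solve3(a, rhs):
    """Solve the 3x3 system a @ x = rhs by Cramer's rule."""
    d = det3(a)
    x = []
    for j in range(3):
        m = [row[:] for row in a]
        for i in range(3):
            m[i][j] = rhs[i]
        x.append(det3(m) / d)
    return x

def fit_matrix(cam_rgb, xyz):
    """Rows of M map camera RGB to XYZ: xyz = M @ rgb, per patch."""
    # One row of M per XYZ component, solved from the three patches.
    return [solve3(cam_rgb, [xyz[k][comp] for k in range(3)])
            for comp in range(3)]

# Sanity check: if camera responses already equal XYZ, M is the identity.
patches = [[1.0, 0.2, 0.1], [0.3, 1.0, 0.2], [0.1, 0.4, 1.0]]
M = fit_matrix(patches, patches)
```

With 24 ColorChecker patches instead of 3, no exact solution exists, and the residual error is exactly what the dE report below quantifies per patch.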
Sorry for that word salad
Anyways, I tried comparing the two options (for Mastcam-Z, the paper linked above), and the results only differ by 0.3 dE, with the QE result being the slightly better match.
Interesting. I did dcamprof profiles with the data, got the following max dE:
C03 DE 4.13 DE LCh +3.62 +0.15 -2.00 (strong red)
Not sure, that might have to do with the lack of IR cutoff. Here’s the complete dE report:
D02 DE 0.00 DE LCh +0.00 +0.00 +0.00 (gray 80%)
D06 DE 0.20 DE LCh +0.01 -0.11 +0.16 (gray 20%)
D03 DE 0.58 DE LCh -0.03 +0.17 -0.55 (gray 70%)
D05 DE 0.62 DE LCh -0.03 -0.09 -0.61 (gray 40%)
D04 DE 0.66 DE LCh -0.04 +0.41 -0.51 (gray 50%)
A05 DE 0.94 DE LCh +0.58 -0.25 +0.59 (purple-blue)
B02 DE 0.98 DE LCh -0.11 -1.62 -1.22 (purple-blue)
C01 DE 1.17 DE LCh -0.19 -0.49 -1.47 (dark purple-blue)
A06 DE 1.18 DE LCh +0.27 -0.78 +0.84 (light cyan)
A03 DE 1.27 DE LCh +0.12 -0.88 +0.62 (purple-blue)
C06 DE 1.48 DE LCh +0.51 -0.30 +1.35 (blue)
B04 DE 1.51 DE LCh +1.33 -0.17 +0.68 (dark purple)
D01 DE 1.59 DE LCh -0.21 -1.23 -0.98 (white)
B03 DE 1.67 DE LCh +0.89 +0.45 -1.34 (red)
B01 DE 1.93 DE LCh +0.04 -1.65 +0.99 (strong orange)
B06 DE 1.94 DE LCh +0.35 -0.65 +1.79 (light strong orange)
A02 DE 2.10 DE LCh +0.48 -0.13 -2.04 (red)
A04 DE 2.37 DE LCh +0.38 +0.12 -2.33 (yellow-green)
A01 DE 2.53 DE LCh +0.27 +0.60 +2.45 (dark brown)
C02 DE 2.57 DE LCh +0.48 -2.50 +0.32 (yellow-green)
C05 DE 2.98 DE LCh +2.54 -0.07 +1.56 (purple-red)
B05 DE 3.13 DE LCh +0.20 -3.12 +0.04 (light strong yellow-green)
C04 DE 3.83 DE LCh +0.44 -3.80 +0.09 (light vivid yellow)
C03 DE 4.13 DE LCh +3.62 +0.15 -2.00 (strong red)
I used the graphs in A/W, i.e. sensor output current per input light power, as that unit matches the commonly published and defined standard illuminants, which use relative spectral power distributions; example: Standard illuminant - Wikipedia
I’m surprised to see how well the amplitudes of the individual RGB channels in the radiometric-units graph match each other. That’s a bit suspicious, as if someone adjusted it to fit. But it could be that much engineering time was spent producing the sensor so that the amplitudes match. Even for the simple 3-diode Viking Lander cameras, though, that wasn’t done.
Can you translate etaloning? Do you mean the bias in red visible all over the spectrum? I wondered about that as well, but I’ve also seen it in the spectral sensitivity functions of other commercial camera sensors. Maybe that bias has some advantage for correcting other errors?
Imagine you have a transparent slab of glass and transmit a ray of light through it. At the front and back surface of the slab this will create reflections (in most cases at least). Those reflections will interfere with themselves when the conditions are right. Now imagine you tune the frequency of the light wave either up or down. Depending on the wavelength you’re momentarily at, you’ll get either constructive or destructive interference, which effectively modulates the amount of transmitted light vs. wavelength (simplified for clarity). On a measured spectrum of a thin film (which the color filter array effectively is) this shows up as waviness in the measured response.
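A simplified sketch of that mechanism: an ideal lossless thin film acts as a Fabry-Perot etalon, whose Airy transmission oscillates with wavelength. The numbers (n = 1.5, d = 2 µm, 4% surface reflectance) are made up for illustration:

```python
# Airy transmission of an ideal lossless slab at normal incidence.
import math

def etalon_transmission(wavelength_nm, n=1.5, d_nm=2000.0, reflectance=0.04):
    delta = 4.0 * math.pi * n * d_nm / wavelength_nm  # round-trip phase
    f = 4.0 * reflectance / (1.0 - reflectance) ** 2  # coefficient of finesse
    return 1.0 / (1.0 + f * math.sin(delta / 2.0) ** 2)

# Scanning wavelength shows the "waviness" riding on top of a filter curve:
ripple = [etalon_transmission(l) for l in range(400, 701, 2)]
print(min(ripple), max(ripple))
```

Even at only 4% reflectance per surface, the transmission swings by roughly 15% peak to trough, which is the kind of modulation visible in the data sheet curves.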
At least that’s what I think I am seeing on that spectrum.
Ah I see, the interference ripple from the specific dichroic filters they used, especially visible in the panchromatic and red responses in Spectral Characteristics, Figs. 8 and 9. With the IR/UV cutoff filter in place, it appears to be weaker.
In one of the Perseverance camera calibration PDFs there is a note about a surprisingly high, narrow-bandwidth peak at one specific frequency. That could be related, but it was much higher than those in this graph. Eventually it was considered negligible, as the total sum didn’t change much. I think it was the PDF about the calibration target reflectance analysis; I’ll have to check…