The Quest for Good Color - 2. Spectral Profiles "On The Cheap"

I’m really thinking that it might at least partially have something to do with Nikon’s pre A/D processing.

This is a good possibility and very simple to check: take a look at periodic gaps in the raw data and calculate from that what is normally referred to as Nikon’s white balance pre-conditioning. It is a camera-specific digital gain applied post-ADC to the R and B channels. The range is typically something of the order of 1.05-1.15, and could easily account for the differences shown.
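For anyone curious, the check described above can be sketched in a few lines of numpy. The idea: a digital gain g > 1 applied to integer DN leaves periodic empty bins (missing codes) in the channel's histogram, spaced roughly g/(g-1) codes apart, so the gain can be recovered from the gap spacing. Everything here (function name, synthetic data) is illustrative, not anyone's actual tooling:

```python
import numpy as np

def estimate_wb_pregain(channel_dn, max_dn=4096):
    """Estimate a digital pre-gain from periodic gaps (missing codes)
    in a raw channel's histogram. A gain g > 1 applied to integer DN
    skips output codes at a spacing of about g/(g-1)."""
    hist = np.bincount(channel_dn.ravel(), minlength=max_dn)
    # look only at the populated part of the code range
    lo, hi = np.nonzero(hist)[0][[0, -1]]
    gaps = np.where(hist[lo:hi] == 0)[0]
    if len(gaps) < 2:
        return 1.0  # no gaps: no pre-gain detected
    period = np.diff(gaps).mean()
    return period / (period - 1.0)

# synthetic check: integer DN scaled by a hypothetical 1.1 gain
rng = np.random.default_rng(0)
raw = rng.integers(100, 1000, size=100_000)
gained = np.round(raw * 1.1).astype(np.int64)
print(round(estimate_wb_pregain(gained), 2))  # recovers ~1.1
```

This only works on a channel with enough samples to populate every legitimate code in the range, which is why a flat-field or a busy real-world exposure is the usual input for this kind of check.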

Then again, the fact that gg's curves show more energy in B and less in R could easily be accounted for by incorrect normalization due to a wrong estimate of the source color temperature.

Jack

I’d think the rawtoaces measurements would have had the same bias.

It depends on what they intend the vertical axis to represent: what one sees in spec sheets refers to energy (or absolute quantum efficiency), so it would be before any digital gains.

rawtoaces data is normalized 0.0 - 1.0, so any notion of anything other than relative sensitivity is trashed. dcamprof uses similarly-normalized data, so that’s what I’m producing in my processing of the spectral data.

I now am the proud owner of a lab-grade transmissive diffraction grating, as well as a neutral diffuser. A bit of carpentry, and I’ll have a spectrometer that should work at least as well as the Open Film Tools contraption. After I build a spectrometer box around the grating’s 36.9-degree blaze angle, I’ll collect some data and write the next post…
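For anyone following along with the optics: the box geometry falls out of the transmission grating equation, sin(θ) = mλ/d. Here's a quick sketch of where the first-order spectrum lands at normal incidence; the 1000 lines/mm groove density is an assumption for illustration, not necessarily the grating described above:

```python
import math

def diffraction_angle_deg(wavelength_nm, lines_per_mm=1000, order=1):
    """Transmissive grating at normal incidence:
    sin(theta) = m * lambda / d, with d the groove spacing."""
    d_nm = 1e6 / lines_per_mm  # groove spacing in nm
    return math.degrees(math.asin(order * wavelength_nm / d_nm))

for wl in (400, 550, 700):
    print(wl, round(diffraction_angle_deg(wl), 1))
# the visible first order spans roughly 23.6 to 44.4 degrees
```

The roughly 20-degree spread between blue and red is what sets the camera's position and field of view inside the box.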


Do you know if there’s any way to set a Nikon camera to “uni-gain” white balance (in camera) so that there is no white balance preconditioning happening? I don’t even have a Nikon, so I’m mostly just asking out of curiosity and for the benefit of those who do.

I am not familiar with rawtoaces, Glenn. Relative energy, relative e-, or relative DN?

Excellent work btw, looking forward to the next installment.


I am not aware of any way to defeat it, Nate. In the end it only causes minor inconvenience - for instance, oscillations seen near bit-depth with PTCs - and it performs a similar function to DNG’s CameraCalibration tags (‘per-individual camera calibration performed by the camera manufacturer’). Except that in Nikon’s case the relative R and B multipliers are baked into the raw data.

Whatever was measured, relative to the largest value of the largest channel. This link goes straight to the data directory:

I’m using the Nikon D7000 data, as that’s the camera I have.

rawtoaces is an ACES endeavor to make tools and conventions for ACES IDTs.

Edit: Oh, I’d say relative energy, based on a supposition they’re taking the raw values from an image of the monochromator-supplied circle, no lens…

Just a reminder that the data they have on cameras are from other sources. Hopefully, the files are well-annotated.

Here’s a link to a post at acescentral that summarizes what they did. Particularly, read ‘IDT Report v4.pdf’

sounds great, can’t wait!


Interesting documents in this ACES stuff. The key passage in the Evaluation, as far as the energy vs. e- vs. DN question goes, is this:

… RGB values from each wavelength snapshot were normalized by dividing each wavelength triplet by the radiant flux at that wavelength.

Not very clear, because radiant flux can have slightly different units, potentially hiding a crucial step in the procedure. For a given illuminant, RGB intensities recorded in the raw data (DN) are proportional to the number of collected photons and converted photoelectrons (e-); on the other hand it is mentioned that the power spectrum out of the monochromator used to normalize the results was read by a separate power meter.

If the output of the power meter was in Watts, it would be incorrect to normalize intensity in DN (proportional to e-) as-is by such an energy measure because for a given energy the number of photons varies with wavelength, as we all know (e = hc/λ). If on the other hand the ‘radiant flux’ out of the power meter was already weighted by wavelength he’d be good to go as described.
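To make the wavelength weighting concrete, here's a minimal numpy sketch of the conversion described above: watts to photons per second via e = hc/λ. The function name and sample values are illustrative:

```python
import numpy as np

H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8    # speed of light, m/s

def photon_flux(power_w, wavelength_nm):
    """Convert radiant power (W) at each wavelength to photons/s:
    each photon carries e = hc/lambda joules, so N = P * lambda / (h*c)."""
    lam_m = np.asarray(wavelength_nm) * 1e-9
    return np.asarray(power_w) * lam_m / (H * C)

# Equal radiant power at 400 nm and 700 nm does NOT mean equal photons:
wl = np.array([400.0, 700.0])
p = np.array([1e-3, 1e-3])  # 1 mW at each wavelength
n = photon_flux(p, wl)
print(n[1] / n[0])  # ~1.75: the red line delivers 1.75x the photons
```

Since raw DN are proportional to photoelectrons (hence photons), dividing DN by watts without this correction would tilt the red end of the SSF up by exactly that wavelength ratio.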

I tend to think that Dyer probably performed the lambda weighting either explicitly or implicitly. Did you, Glenn? It makes quite a difference, since lambda is nearly twice as long in the reds as in the blues (roughly 700 nm vs. 400 nm).

Jack

PS I checked the D7000 in DPReview’s Studio Scene, and the WB pre-conditioning gains for their unit at base ISO were similar in both the R and B channels, at 1.16-1.17. Since the gains are camera-specific, yours are probably different.

I’m still parsing what you wrote, but here’s what I did to pay homage to spectral power distribution: I used a dataset provided by the Open Film Tools group for a tungsten-halogen spot brand-named ‘Dedolight’, with values between 380nm and 725nm at 4nm intervals, and divided the RGB values at each measurement wavelength by the associated (interpolated) Dedolight power value.
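In code, that normalization step might look like the sketch below. All the arrays here are stand-ins; the real Dedolight SPD and measured RGB values come from the files described above:

```python
import numpy as np

# Stand-in data: an SPD sampled 380-725 nm at 4 nm steps, and camera
# RGB responses measured at a few arbitrary wavelengths.
spd_wl = np.arange(380, 726, 4, dtype=float)
spd_power = np.linspace(0.2, 1.0, spd_wl.size)  # rising toward red, like tungsten
meas_wl = np.array([400.0, 550.0, 700.0])
meas_rgb = np.array([[0.05, 0.02, 0.60],   # blue-dominant at 400 nm
                     [0.10, 0.90, 0.08],   # green-dominant at 550 nm
                     [0.70, 0.15, 0.01]])  # red-dominant at 700 nm

# Interpolate the SPD at each measurement wavelength, then divide out the
# illuminant so only the camera's relative sensitivity remains.
power_at_meas = np.interp(meas_wl, spd_wl, spd_power)
ssf = meas_rgb / power_at_meas[:, None]
ssf /= ssf.max()  # dcamprof-style 0.0-1.0, normalized to the largest channel
```

The last line is what maps the result into the highest-channel 0.0-1.0 convention that dcamprof and the rawtoaces data both use.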

I’m building a spectrometer to measure SPD, using a Raspberry Pi and a monochrome camera. The camera has a 650nm IR cutoff filter, which I need to remove to cover the upper end of the visible spectrum. The output of such a device is simply the intensity in units relevant to the camera sensor, so all I really know to do with it is to normalize the values against the highest value to 1.0. I’m still hoping a single generic tungsten-halogen dataset can be used by others, so they don’t have to come up with a power spectrometer in addition to the spectroscope.

Ok, keep in mind that the Correlated Color Temperature (hence Spectral Power Distribution) of any Tungsten/Halogen lamp varies substantially over time and depending on how it is driven. Such differences between the one you actually used and the standard Dedo could explain the variation in SSFs shown, as mentioned earlier. As I recall, one of the key design elements of this <$500 competing project was getting the lamp driven properly and stably.

This is possibly what Dyer did, in which case your results would be comparable to his. At first glance it seems incorrect to me, as mentioned earlier. I am still unclear as to the relative units of the vertical axis: DN/W? Numerator and denominator need to be both either in spectral or quantal units, a mixture of the two doesn’t make sense to me.

Interesting idea. Keep in mind that the sensor, microlenses and relative filter stack have spectral responses of their own, which are anything but flat and nothing approaching a Luminosity Function, see for instance here.

Unless you know the Absolute Quantum Efficiency (or equivalent) curve of the monochrome sensor, I am not sure I understand how doing this would result in better results than those you have shown already.

Jack


I’m making some assumptions with regard to the required input to dcamprof, see the bolded sentence below…

Looking at the complete SPD plots for various tungsten-halogen lamps, I came to two conclusions: 1) they were all similar in peak position and slope, and 2) through the visible part of the spectrum, the “curve” was pretty straight, sloping down from 700nm to 400nm. Since dcamprof accepts highest-channel 0.0 - 1.0 normalized data, I concluded that to correct the power difference I could divide my SSF values by similarly normalized power data. Using the Dedolight data in this manner, I got the SSF comparison you see in the post, where the red and blue channels differ from the rawtoaces data in a manner that suggests the slope of my power data is slightly different from that of the real light. That’s what got me going on the spectrometer.

I’ve actually had some fun with that, a lot of good information at publiclab.org, some of which has influenced my ‘design’ of the SSF spectroscope. Right now, I can only measure up to 650nm, as the Arducam monochrome camera I procured has a pesky IR cutoff filter, but the slopes in that range are pretty close to the Dedolight data.

So now, I have optical-grade components for the optical chain and the lumber needed to make a better box, and some preliminary measurements that look a bit better, still using the Dedolight power compensation. In the churn to make the camera profile, dcamprof produces delta-e data that I think will allow useful quantitative comparisons, so I should be able to determine if this approach is “close enough for government work”, so to speak… :smiley:

I’m a shade tree mechanic in this endeavor, so I really appreciate the feedback. I’ve lurked that thread on DPReview where you all have gone back and forth with Bernard Delly about QE, and I have only a caveman-understanding of the implications. So far, my eyeball assessments of processed images tell me I could be close enough, but I really need to look at more color situations than I have to date. A quantitative assessment actually could be both good and bad, better understanding of the differences but also realization of the coarseness of the method sufficient to compel me to abandon it.

Thing is the alternative, target shot profiles, are fraught with their own challenges. Finally a (mostly) cloudless day, I’m getting ready right now to go outside and shoot the IT8 target with my old Nikon D50 and re-do the Z6 and D7000, but my first try really drove home the glare vexation. The coarseness of spectral shots seems more controllable…

@ggbutcher Thanks for sharing the results of your hard work, as well as tools to ease ICC profile creation!
I have a D7000, so I used the data and tools you’ve provided on github to create the ICC file.
I also grabbed data from nae dot lab dot org for my 5D.
I still haven’t gone through many images, but the results are really promising, especially the D7000 images. Regarding the 5D, the changes in color are a bit more radical than with the D7000, so I’m still applying the profile to more images to see whether it’s worth using.
Now a dumb, layman question: should these profiles be applied to any image, regardless of the scene light conditions and/or camera settings?

I need to poke at this a bit more, but what I’m initially observing is that in one of these LUT transforms, the colors that are already in-gamut with respect to the output colorspace are pretty well left alone; it’s the out-of-gamut colors that get messed with. So my inclination would be yes, use these profiles by default. I’m still reading Anders’ dcamprof doc to better understand the internal workflow, but right now it looks like they’ll work accordingly.


Camera ICC profiles are specific to the light source the profile was created for. A profile optimized for D50 might give OK results in 3200K and 6500K light, but you would get more accurate results by making separate profiles for different light sources. The beauty of using camera SSFs is that if you also have the spectral data for a target, you can easily create virtual target-shot data (.ti3 files) for different light sources just by changing the illuminant in dcamprof.

D50 is the default if no illuminant is specified. I always create a D50 .ti3 first and then also create D65 and STDA (2850k tungsten I think) .ti3s.

dcamprof make-target -c cameraSSF.json -p targetspectraldata.ti3 outputfilename.ti3

dcamprof make-target -i D65 -c cameraSSF.json -p targetspectraldata.ti3 outputfilename_D65.ti3

dcamprof make-target -i STDA -c cameraSSF.json -p targetspectraldata.ti3 outputfilename_STDA.ti3

Thanks Nate; makes sense, in light of how Adobe has organized DCPs.

Have you experimented much with different target patch sets?

It seems like the patch set has a pretty big influence on the final colors. Actually, as I play with dcamprof (and Lumariver, which, while not free, definitely makes the profile-making process a lot faster and more intuitive) I’m finding that there are a lot of variables that have a huge impact on the final profile, like the tone curve and tone reproduction operator, gamut mapping, LUT corrections, weighting, etc.

My favorite profiles so far have been dual-illuminant (D65/STDA, color appearance modeling OFF) DCP profiles using two targets for each illuminant (the Lippmann2000 all-races skin/hair/eye spectra patch set and a simple 4 patch target set including pure 100% reflective white and single R, G, and B patches). I’ve found that using these two patch sets gives more accurate skin-tones, purer reds, and smoother tonality with only a Simple matrix than a full 3D LUT profile using a target that has a full spectrum of patches. Seems like using a full spectrum of patch colors forces the matrix to compromise all colors while using a patch set focused on what you’re shooting gives better results. What do you think?

Yes, that’s been one of my idle noodlings as I’ve messed with dcamprof. After all, all of these patch shenanigans end up in one 3d RGB LUT, so I can easily imagine some push-pull between patch spectral references of a full patchset. dcamprof has a munsell-bright subset of the full 1600 collection, and I’m at some point going to generate a profile set from it to consider this very thing.

I may yet invest in Lumariver, just to get some clarity on the dynamics surrounding such choices. Movement can be a very insightful thing…