The Quest for Good Color - 4. The Diffraction Grating Shootout

I’m using whatever DE dcamprof computes; I haven’t looked in the docs or code to pick it apart.

Edit: I’m really just a shade-tree mechanic here, and dcamprof is my screwdriver… :stuck_out_tongue_closed_eyes:

To answer my own question: the DE in DCamProf is CIEDE2000, which works on L*C*h° values derived from L*a*b*, and depends on the illuminant, which in the case of making profiles is D50.

Moreover, the first DE in the report output is called the total DE (which you called the max DE), but adding the individual numbers does not yield it; rather, it looks like the Euclidean norm of the L*C*h° component differences.
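For the curious, here is a minimal Python sketch of the idea, assuming the colour-science package (an illustration, not dcamprof's actual code); the "total" figure is computed as the Euclidean norm of the L*C*h° component differences, per the guess above:

```python
# Illustrative sketch, not dcamprof's code: CIEDE2000 between a reference and
# a measured patch, plus a "total" figure taken as the Euclidean norm of the
# L*C*h component differences. The patch values are hypothetical.
import numpy as np
import colour

lab_ref  = np.array([50.0, 10.0, -20.0])   # hypothetical reference L*a*b* (D50)
lab_meas = np.array([51.5, 12.0, -18.5])   # hypothetical measured L*a*b* (D50)

de2000 = colour.delta_E(lab_ref, lab_meas, method='CIE 2000')

d_lch = colour.Lab_to_LCHab(lab_meas) - colour.Lab_to_LCHab(lab_ref)
total = np.linalg.norm(d_lch)              # norm of the component differences

print(de2000, total)
```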


@ggbutcher Glenn, you don't by any chance have an analog Nikon body lying around? Something you could stick behind the lens you used for taking the spectra?

Because, in principle, you could now do the same thing for an analog ‘sensor’… I mean, the idea is not new; people have made profiles of analog film from IT8 and CC24 targets for quite some time now… but this would come with improved dE numbers compared to the target-based approaches…

I wonder if something like an SSF from real analog film could help advance the filmic module and how it handles gamut compression. @anon41087856 could be interested in actual spectral measurements of the gamut compression within film(s).

Wow, this looks like something ripped from the pages of Scientific American’s “The Amateur Scientist”. Well done!

Filmic gamut compression is parametric, not profiled/LUT-based, and works in RGB, not in spectral. It wouldn’t be usable directly.


I do, a Nikon F2, but I don’t understand what would go at the focal plane…


Oh, sorry. Your favourite film: Portra 160, Ektar, Superia, Vision3… or even positive film: Velvia, Provia…

Then, after developing and scanning and probably some steps in between, you would be able to characterize the film response spectrally, derive its SSF, and from there give Aurelien hard data for the parametric gamut compression to model after real film stock.
I have my doubts that dcamprof can derive profiles from several over- and underexposed exposures of film, but then I have not read enough about dcamprof. But tweaking the filmic parameters toward a nice and accurate profile… that would be the idea.

How much work would it entail to adjust the parameters to fit ‘real film’ spectral data? Is it on the order of months of rewrite, or is it three numbers defining a polynomial for gamut compression vs. luminance (of course I exaggerated a bit here)?

The problem is more fundamental than this. The whole point of filmic is to scale to whatever dynamic range. A LUT can't be scaled in general, because 3D scaling relies solely on linearity between channels.

Plus, film has a much smaller dynamic range than digital, especially for color. So we wouldn't have data at all for the very dark pixels to build the LUT. While we can agree about reproducing the look and feel of film, I'm not sure we want to reproduce its limitations as well.

Then, filmic works in pipeline RGB, that is, Rec2020 linear by default. Real film works in its own very non-linear space, so we would need to linearize it first, taking into account the channel crosstalk (!?!), then profile the RGB density against spectral data. However, gamut mapping and tone mapping are very much linked for film, so good luck splitting them apart, but filmic aims at doing them step by step for better control.
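To make the linearization step concrete, here's a minimal sketch under a strong assumption: that the crosstalk can be approximated by a fixed 3×3 mixing matrix (the matrix values and the simple density-to-transmittance curve are made-up placeholders; real film crosstalk is wavelength-dependent and far less tidy):

```python
# Minimal sketch under an assumption: channel crosstalk approximated by a
# fixed 3x3 mixing matrix M, undone after a per-channel density-to-linear
# conversion. M is a made-up placeholder, not measured film data.
import numpy as np

M = np.array([[1.00, 0.08, 0.02],
              [0.06, 1.00, 0.07],
              [0.01, 0.09, 1.00]])
M_inv = np.linalg.inv(M)

def linearize(rgb_density):
    transmittance = 10.0 ** (-np.asarray(rgb_density))  # density -> linear, per channel
    return transmittance @ M_inv.T                      # undo the assumed crosstalk mix
```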

As it is, you either take the full film look, with color shift, tone mapping, and gamut compression, and apply a spectrum → film LUT built from data, or none at all and model a 100% mathematically derived film emulation based on the chemical kinetics of the reactions.

It’s kind of a Yamaha vs. Roland digital piano thing. Yamaha samples real grand pianos at 64 speed × force samples per keystroke; Roland has a fully modeled synthesizer that creates a virtual piano from scratch by math. Yamaha later added some synthesized features too (like the sympathetic vibrations of other strings and harmonics that arise when you play a chord), but those are a bit of black magic and probably down to some engineer's taste. So Yamaha sounds more organic but feels less responsive, because the discrete nature of the samples (and good luck interpolating a sound spectrum between samples…) can produce the same sound twice even when you didn't strike exactly the same way, whereas Roland sounds more synthetic but feels like it responds more accurately to the stroke.


First, thank you for this long and elaborate answer!

Indeed, this is functionality that analog film does not have :slightly_smiling_face:, so filmic has a different feature set, and I understand that this doesn't harmonize well with true film-look emulation.

I think this is the part where I saw the possibility of higher colorimetric precision with Glenn's SSF generation setup in comparison to IT8 or CC24 approaches.

I assume the step-by-step fashion is better suited to the dynamic-range-scaling part?

So this is a bit of speculation now on my part, but are the interlinked gamut mapping and tone mapping functions of real film actually understood (and known) on the developers' side? One would need to go through heaps of CC24 data for a single type of film and still would not have a clear picture of what happens, especially if illumination changes, etc. So I guess that a full spectral approach would be necessary to understand those tone and gamut mapping properties.

To come to your really helpful analogy of Yamaha vs. Roland: if both makers don't know that sympathetic vibrations happen, then both of their modeling efforts are tainted. So my question would be: could we actually model film faithfully, because we know all the chemical reaction kinetics and sensitivities to light?
(I am not asking whether we know what we don't know, that would be silly, but whether we know, e.g., the gamut compression in enough detail.)

Again, thanks for your elaborate answer! :+1:

Exactly.

One thing to clarify here is that we don't aim at reproducing the film density alone, but rather its printed result on paper. That's an important distinction because most of the gamut mapping comes from the translucent nature of film: the saturated colors tend to make the dyes more opaque to light, which results in a darker tone once projected onto paper. As a result, saturated colors get darkened, which forces them back into the paper gamut. So there is occlusion to take into account too, after this spectral mix.

Plus, the color filters are stacked upon each other, so the last dye layer gets the input light spectrum minus what was already filtered out at the earlier stages, while a sensor's photosites all get the same input spectrum.

I have heard about a woman who did her PhD thesis in the 2010s on the film chemicals and unrolled a complete math model for that (you know, right when Kodak started to discontinue film stocks… feel the irony). I need to find that. But it should only give the spectrum → film density part of the model. Then there will be optical matters to stack on top, because we are dealing with dye densities, and occlusion happens before going to paper. But it might only be a matter of squeezing the Beer-Lambert law on top, or something in that spirit.
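For what the Beer-Lambert idea would look like, here's a tiny sketch (the wavelength grid and dye densities are placeholders): under Beer-Lambert, stacked dye densities simply add, so the transmittances multiply.

```python
# Tiny sketch of the Beer-Lambert idea: stacked dye densities add, so the
# overall transmittance is 10^-D(lambda). Densities here are dummy values.
import numpy as np

wl = np.arange(380, 731, 10)                               # wavelength grid, nm
layers = [np.full(wl.shape, d) for d in (0.3, 0.5, 0.2)]   # dummy dye densities

total_density = np.sum(layers, axis=0)     # D(lambda) = sum of layer densities
transmittance = 10.0 ** (-total_density)   # T(lambda) = 10^-D(lambda)
```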

Then, there is the silver grain size thingy to assess too, because film ISO is not uniform over the surface of the emulsion. So maybe blurred film should be used for the SSF profiling.

Many things need to be assessed. I’m merely dipping my toes in there.


I think I’ve dragged this topic in a bit of a weird direction; sorry for that, Glenn and dear reader. I am about to stop. :smiley:

Ooof, that’s two consecutive analog processes. That probably makes matters more complicated.

That reinforces my suspicion. The number of analog processes, which themselves have thirty to fifty years of development and fine-tuning in them, is not to be underestimated. The digital sensor is rather simple in its workings by comparison.

I’ll have a look for this.

I hope I am not coming off as pushy. I just thought that with Glenn's results, there could be further insights which might be helpful to other developers dealing with this. Which reminds me that I maybe should have mentioned @CarVac throughout this discussion as well. His Filmulator takes a different approach than filmic, I think (more akin to the tone equalizer, but I might be mistaken), but delivers very pleasing results. Underappreciated, I think.

I’ll stop the derailing now! :smile:

It is a different problem. Analog film has a randomness (non-uniformity) to it, not just smoothness and continuity. This is much more pronounced than with musical instruments. Even if we revive a physical film stock, it won't be the same as before.

I recently committed to ssftool a new program, tiff2specdata. If a spectrum is the only thing in a 16-bit TIFF, this program will find it, crop out a 100-pixel band from the spectrum's horizontal center, and puke the channel averages to stdout, ready for piping straight into ssftool. No ssftool extract or ssftool transpose required, and a by-product of printing the data right-to-left is that you don't have to horizontally flip the image in the raw processor.

With tiff2specdata, ssftool can now be used with any raw processor that’ll produce 16-bit linear unwhitebalanced TIFFs of the spectrum images.
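For anyone who wants the gist without reading the source, here's a rough Python approximation of the described behavior (band placement, file name, and output formatting are my assumptions, not the tool's actual code):

```python
# Rough approximation of tiff2specdata's described behavior; band placement
# and output formatting here are assumptions, not the tool's actual code.
import numpy as np
import tifffile

img = tifffile.imread("spectrum.tif").astype(np.float64)  # 16-bit linear TIFF
center = img.shape[0] // 2
band = img[center - 50 : center + 50]      # 100-pixel band through the spectrum
means = band.mean(axis=0)                  # per-column R,G,B averages
for r, g, b in means[::-1]:                # print right-to-left for ssftool
    print(f"{r:.1f},{g:.1f},{b:.1f}")
```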

As for your experiments with a monochrome Pi camera: why not use one of the mono sensors that is explicitly marketed as not having an IR filter, such as the 1MP OV9281 Mono Global Shutter MIPI Camera Module with M12 Mount lens for Raspberry Pi - Arducam?

Interesting, I just assumed their mono cameras had the same filtration, so I bought the cheapest one. Thanks for pointing it out; if they peddle it on Amazon, I'll probably order it this week.

I am following your exploits in fascination Glenn. With regards to the monochrome sensor, can you explain how you are planning to use it to normalize the spectrum?

My spectrum light source is currently a 3200K tungsten-halogen bulb mounted in a LowellPro spot. For the profiles I've made to date, I've adjusted the SSFs for the tungsten-halogen SPD using a tungsten-halogen SPD dataset from the OpenFilmTools folk. Using that dataset presupposes a "typicalness" of tungsten-halogen SPDs, but that flies in the face of the other measurement endeavors I've reviewed, where they take a spectrometer reading along with the spectrum/wavelength capture and bias the measurement with it.

COTS spectrometers cost a pretty penny, so I've been investigating alternatives. Right now (as of this morning, in fact), I'm zeroing in on using a Raspberry Pi monochrome NoIR camera mounted to the camera port of the SSF lightbox and taking two images, one of the spectrum and the other of the same CFL calibration used for the camera, then generating the compensation data from that. I want to know what meaningful contribution doing the SPD measurement along with the spectrum capture has over just using a generic tungsten-halogen SPD profile.
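A sketch of how that could work, assuming the CFL's two strongest emission peaks (e.g. the well-known 546 nm and 611 nm mercury/europium lines) are used to map pixel columns to wavelengths; the file names and naive peak-finding are placeholders:

```python
# Hedged sketch of the plan: wavelength-calibrate pixel columns from two known
# CFL emission peaks (546 nm and 611 nm, the usual mercury/europium lines),
# then read the mono capture along those columns as a source-SPD estimate in
# counts. File names and the naive peak-finding are placeholders.
import numpy as np
import tifffile

def band_means(path):
    img = tifffile.imread(path).astype(np.float64)
    c = img.shape[0] // 2
    return img[c - 50 : c + 50].mean(axis=0)      # one mean per pixel column

cfl  = band_means("mono_cfl.tif")                  # CFL calibration capture
spec = band_means("mono_spectrum.tif")             # tungsten-halogen spectrum

col1, col2 = sorted(np.argsort(cfl)[-2:])          # naive: two brightest columns
wl1, wl2 = 546.0, 611.0                            # assumed peak wavelengths, nm
wavelength = wl1 + (np.arange(cfl.size) - col1) * (wl2 - wl1) / (col2 - col1)
# (wavelength, spec) estimates the source SPD, up to the mono sensor's own QE
```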

I’m wondering if a middle solution would be finding an easily characterizable AND consistent/reproducible light source that could be measured with a midrange spectrometer (such as an X-Rite product?).

As to variation of light sources: you're going to get some variation in spectrum from a halogen light unless you feed it with a regulated constant-current supply. If you run it off unregulated line voltage, there's going to be enough difference to affect your data (although probably not extremely so; in most cases using existing data for a similar source will be "good enough").

I think your NoIR Pi camera approach is good IF you can calibrate the spectral response of the camera (silicon isn't flat!), but you still need a known reference light source to start with. I used to have access to one of these (we used it at my last job for verifying calibration of a spectroradiometer used for military NVIS compatibility testing), but not anymore.

It’s kind of a chicken-and-egg problem: someone somewhere needs access to calibrated equipment in order to develop a more affordable reference chain. For example, a calibrated light source used to calibrate a NoIR camera, which is then used to measure a more readily available product (e.g. off-the-shelf tungsten lamp brand A, model Y, fed at exactly N amperes).


Simon,

I haven’t forgotten the discussion about compensating for diffraction grating transmission; today I finally executed what I think is an insightful test.

First, I stopped trying to compare with the rawtoaces profile; they don't have to do a similar compensation, and I don't really know how accurate their measurement was. Instead, I compared the cc24 max DE from profiles constructed with the baseline uncompensated data and with the same data compensated using your grating dataset. For the Nikon D7000:

uncompensated: 2.83
compensated: 3.25

So, about 0.4 DE "worse," so to speak, but my real takeaway is that the compensation isn't significant enough to spend time on. Based on something I read a few weeks ago (non-attribution, it was an informal comment), better than a max of 4 DE is goodness. Oh, and that comment was based on a ColorCheckerSG reference…

This conclusion is also based on my recent considerations regarding the "training data", or reference spectra; a given patch set doesn't necessarily make all things better, it just takes you in a particular direction. It goes to reinforce my current thinking: cameras don't have canonical colorspaces; we make up contrivances to either work toward some semblance of colorimetric consistency, or not…

OK, if I understand correctly, you are planning to divide the SSF counts by the monochrome counts. On the one hand, I like this approach because at least you would know that the ratio is definitely counts/counts, as opposed to counts/energy (though you would still have to compensate for Nikon's white balance pre-conditioning).

On the other hand, I am afraid you would need to know the absolute QE of the monochrome sensor, otherwise you won't really know what you are normalizing to. For instance, here is the response of a monochrome sensor that was quite in vogue about 10 years ago and was used in both Bayer and mono applications:

Those are not minor variations, and who knows what the response of your mono purchase will look like. IMHO you have a couple of options. One is to borrow or buy an i1Studio (ex ColorMunki) for a few hundred bucks. Alternatively, you assume that your tungsten light is a blackbody radiator

and normalize accordingly (these curves are spectral energy; they would need to be converted to quantal). You still have to guess the Correlated Color Temperature, but I think it would be less of a guess than using a monochrome sensor of unknown spectral response.
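As a sketch of that alternative (the CCT, sampling grid, and counts array are guesses and placeholders):

```python
# Sketch of the blackbody alternative: Planck's law at a guessed CCT, converted
# from spectral energy to photon (quantal) units by dividing by the photon
# energy hc/lambda, since photosites count photons rather than energy.
import numpy as np

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23   # SI constants

def planck_photons(wl_nm, T=3200.0):
    lam = wl_nm * 1e-9
    energy = (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))
    return energy * lam / (h * c)                      # photons ~ energy/(hc/lambda)

wl = np.arange(380.0, 731.0, 5.0)                      # nm grid
ssf_counts = np.ones((wl.size, 3))                     # placeholder measured counts
ssf_relative = ssf_counts / planck_photons(wl)[:, None]  # SPD-normalized SSF
```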

Jack
