The Quest for Good Color - 4. The Diffraction Grating Shootout

That would be ssf_powercalibrate? And calibrationfile is something like a normalized version of source power spectrum * grating spectral efficiency * diffuser power efficiency? That looks quite straightforwardly correct to me (though not too much stock should be given to such a statement from me :slight_smile: ). Or what is there concretely that makes you doubt its correctness?

I don't know yet; just looking at some normalized data out of it, it didn't look like the relationship was preserved. It's something I'm going to pick at next week…

Now, how you express it, as a product of the three, I haven't done; I've attempted to apply each individually to the SSF measurements. I don't see how that'd yield a different result, but I'm not really a math guy and a lot of those implications escape me until I just go and try them… :laughing:
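
For what it's worth, here is a minimal sketch (with made-up stand-in curves, not your actual data or the actual ssf_powercalibrate code) of why the two orderings should agree:

```python
import numpy as np

# Hypothetical stand-in curves on a common wavelength grid.
wl = np.arange(380, 731, 5)                       # wavelength grid, nm
source = np.interp(wl, [380, 730], [0.4, 1.0])    # stand-in source power spectrum
grating = np.linspace(0.6, 0.8, wl.size)          # stand-in grating efficiency
diffuser = np.full(wl.size, 0.9)                  # stand-in diffuser efficiency
raw_ssf = np.exp(-((wl - 550) / 60.0) ** 2)       # stand-in raw channel measurement

calib = source * grating * diffuser
calib /= calib.max()                              # the normalized calibration file

one_shot = raw_ssf / calib                        # divide once by the product
stepwise = raw_ssf / source / grating / diffuser  # divide by each curve in turn

# Same shape; the two differ only by the constant used to normalize calib.
print(np.allclose(one_shot / one_shot.max(), stepwise / stepwise.max()))
```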

This just hit Hackaday:

There's some good discussion of calibration sources in the comments.

Most notably, https://eprints.lancs.ac.uk/id/eprint/6736/1/inproc_326.pdf is linked from the comments.

There's a bit of discussion regarding line CCD vs. monochrome 2D camera. TBH, for our purposes a monochrome 2D camera is fine for the "reference" calibration, and obviously the camera to be characterized itself has to be a non-monochrome camera.


Interesting line CCD monochrome sensor. I wonder if they compensate the CCD for angular dependence and QE in their project:

Should be relatively easy to drive directly from an ESP32 or similar via I2C? No need for a lens in our application, I think.

Or from a Pi, although in this application:
Since the camera we're measuring has a 2D sensor, most of the benefits of line sensors are lost.

The main thing there that was useful for this application is the discussion of calibration sources, specifically the paper on using "consumer" tungsten bulbs as a spectral reference.

In this case, we're interested in the camera's SSF, so we need to know the following (a rough sketch follows the list):

  1. The SSF of our reference sensor (this could be hard without existing data, but perhaps for a monochrome sensor, the general SSF of silicon CMOS is close enough?)
  2. The spectral power distribution of our light source
  3. This lets us figure out the SSF of our optical system, which can be used to compensate it out when measuring
  4. The SSF of our actual camera
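
To make that chain concrete, here is a minimal noiseless toy sketch (all curves below are made-up stand-ins, not measured data) of how the four items combine:

```python
import numpy as np

# All curves are illustrative stand-ins sampled on a common wavelength grid.
wl = np.arange(400, 701, 10)

spd = np.interp(wl, [400, 700], [0.5, 1.0])            # 2. light source SPD (assumed known)
ref_ssf = np.interp(wl, [400, 550, 700],
                    [0.3, 0.9, 0.5])                    # 1. reference (mono) sensor SSF

# What the reference sensor records through the box: source x optics x its own SSF.
optics_true = np.interp(wl, [400, 700], [0.4, 0.8])     # unknown grating/diffuser/etc. response
ref_signal = spd * optics_true * ref_ssf

# 3. Recover the optical system response from the reference capture.
optics_est = ref_signal / (spd * ref_ssf)

# 4. With the optics known, a capture from the camera under test yields its SSF.
cam_ssf_true = np.exp(-((wl - 530) / 50.0) ** 2)        # e.g. a green channel
cam_signal = spd * optics_true * cam_ssf_true
cam_ssf_est = cam_signal / (spd * optics_est)

print(np.allclose(cam_ssf_est, cam_ssf_true))           # True in this noiseless toy example
```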

The fact that it is a single line sensor is not a problem if we use it as a calibrator, assuming we trust the QE curve in the spec sheet (a rough sketch follows the steps):

  1. Place the line sensor in the box, turn on a well-behaved source (knowledge of its SPD is not necessary) and take a picture of the spectrum from the grating with it
  2. Compensate for the line sensor QE = spectral photon distribution of the source signal after the grating
  3. Place the camera to be tested in the box, turn on the same source and take a picture of the spectrum from the grating
  4. Normalize with the spectral photon distribution from 2) = CFA SSFs
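
A minimal sketch of those four steps, with made-up stand-in arrays in place of the actual captures (assumed already collapsed to one averaged value per wavelength bin) and variable names of my own:

```python
import numpy as np

wl = np.arange(400, 701, 5)

line_capture = np.interp(wl, [400, 600, 700], [0.2, 1.0, 0.6])  # step 1: line sensor view of the grating output
line_qe = np.interp(wl, [400, 700], [0.5, 0.9])                 # datasheet QE, resampled to wl

photon_dist = line_capture / line_qe                            # step 2: relative photons per bin
photon_dist /= photon_dist.max()

cam_capture = np.random.rand(wl.size, 3)                        # step 3: one column per CFA channel
cfa_ssf = cam_capture / photon_dist[:, None]                    # step 4: normalize each channel
cfa_ssf /= cfa_ssf.max()                                        # relative SSFs, peak = 1
```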

There are much cheaper such line sensors out there, such as this <$5 one from Toshiba. I think they are used in scanners and the like. None of them include an onboard ADC (nor does the Hackaday one), so an ADC-less Pi is probably less than optimal.

The key would be finding one whose 'typical QE' plot is repeatable and trustworthy. Do they exist?

This one appears to be spectral energy, so it would need to be converted to quanta. And it obviously includes a cover glass. I wonder why they don't show the response below 400nm.

[Screenshot: Toshiba line sensor relative spectral response plot]
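
The energy-to-quanta conversion itself is just a per-wavelength rescaling by photon energy; a minimal sketch, assuming the datasheet curve is relative response per unit incident energy:

```python
import numpy as np

# Signal per photon = (signal per joule) * (joules per photon) = S_E * h*c/lambda,
# so up to a constant the quantal (QE-shaped) curve is just S_E(lambda) / lambda.
wl_nm = np.arange(400, 1001, 10)                        # wavelength grid, nm
resp_energy = np.exp(-((wl_nm - 650) / 200.0) ** 2)     # stand-in for the datasheet curve

resp_quanta = resp_energy / wl_nm                       # divide by lambda (constants cancel)
resp_quanta /= resp_quanta.max()                        # renormalize to peak = 1
```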

I think you have replaced 'estimate spectral photon flux from a similar-enough light source' with 'estimate spectral photon flux from a similar-enough sensor'.
Nothing wrong with that; it takes the optical system between the actual sensor and the light source out of the equation. But it also relies on the published QE of the line sensor, with some assumptions. Specifically, any angular dependence of the line sensor's QE or of the actual sensor's QE will lead to minor problems, as will the variance of the line sensor's QE. Spatial ADC gain variances of both sensors aren't even touched on.

At that point, a calibrated photodiode scanned through the image plane, connected to a good-enough voltmeter, might make more sense.

The question is, which way forward is practical and might improve on the existing results?

Ha, buy an i1Studio! I know when I'm out of my league… :laughing:

I had the money set aside, then decided to upgrade my computer instead. Now I don't have to take vacation days to run nlmeans denoise (another technical bit where I'm out of my league), but I have to start socking away my allowance again for the X-Rite spectrometer. Actually though, the 1-line arrays present a decent electronics project, something where I am in my league…

Seriously, I do need to pick away at these smaller influences. Still, when I went to compare results for my D7000 with a larger training set (the Munsell spectra set in dcamprof), the max DE difference was even smaller: 4.61 for a LUT profile based on my measurement vs. 4.33 for the rawtoaces monochromator-measured equivalent. I'm using my Z 6 profile from my Rube Goldberg spectroscope data to good effect now. What's that old saw about perfect being the enemy of good enough?


Yes indeed, hence the skepticism at the end. My feeling is that variance in the latter is substantially less than in the former, especially because of the difficulty in controlling DIY temperatures and drive currents inexpensively.

A chunky slice of the Spectron II's budget went to stabilizing the source and the integrator. On the other hand, I suspect the QE curve of the line sensor should be repeatable with decent accuracy, at least within similar batches.


4.33 for a monochromator-measured profile… I wonder if other monochromator-based setups are better? That max DE of 4.61 is very, very respectable in that regard. Kudos.

I agree insofar as there 'should' be something gained from taking the optical system and the drive voltages for uncalibrated filaments out of the equation. But when a monochromator setup does not really improve on the results, it's somehow hard to justify a line sensor to calibrate against.

My best guess is that the mentioned monochromator setup still has leeway to be better but was not optimized to deliver better max DE values? A lot of people can see a max DE of over 4! I would have expected a better result from a proper monochromator setup, and yet I don't have any reference.

I know; this is what keeps me picking at it, even though the results I get in my raw processing are just fine.

Here are all the monochromator-based data sources I've assembled in this endeavor:

rawtoaces:

Here is a link to a post that I believe describes their measurement methodology: Results from an IDT evaluation - Tech/Engineering - Community - ACESCentral
See the link in the post to the actual report: https://community.acescentral.com/uploads/short-url/2kdAkrmO79OIr1bU1S9J4eDEctC.pdf

RIT camspec and Univ of Tokyo:

These links actually have pictures of their measurement setup:

https://nae-lab.org/~rei/research/cs/zhao/database.html

Now that we're talking about it, I'll probably take a few of these datasets and generate some Munsell-based LUT profiles with them to see what DE they produce. Of note is that dcamprof produces a boatload of reports and images if you tell it to, and some of those are linear TIFFs that visually depict the differences for the various DE assertions.

I did a bit more digging behind the IDT_report_v4.pdf and found out it was a separate endeavor from the rawtoaces collection. Scott Dyer did four cameras: Canon 5D Mk II and Mk III, Nikon D810, and Sony A7. The thread I posted contains a post further down where he links a Dropbox with 12GB of files supporting this measurement; I downloaded the tab-separated data files for each camera and added them to my collection.

A consideration in all of these endeavors is that the measurements were taken through a lens. Dyer's data includes transmission data for the lenses; I plotted the Nikon D810's 24-70 f/2.8 lens and it looks like this:

[Plot: spectral transmission of the 24-70 f/2.8 lens used for the Nikon D810 measurement]

Rather significant, on the order of the diffraction grating, which I'm still not including…

The "gold standard" measurement technique appears to be using the output of a calibrated monochromator to shine on the bare sensor, through an integrating sphere. Far past my budget, and I don't even have example data from such a source for comparison…


This one is a nice complement to that one. For our purposes, we only need the working temperature to estimate the SPD. They use the same resistance method as in the other paper to estimate temperature, and even suggest that the CCT could be used, if known.
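
As a sketch of what "working temperature is enough" amounts to (the T^1.2 tungsten resistivity exponent is an approximation I'm assuming, and a plain blackbody stands in for a proper tungsten emissivity model):

```python
import numpy as np

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def filament_temperature(r_hot, r_cold, t_cold=295.0, alpha=1.2):
    """Estimate filament temperature from the hot/cold resistance ratio,
    assuming tungsten resistivity scales roughly as T**alpha (approximation)."""
    return t_cold * (r_hot / r_cold) ** (1.0 / alpha)

def blackbody_spd(wl_nm, temp_k):
    """Relative spectral power distribution of a blackbody at temp_k,
    ignoring tungsten's actual emissivity (a simplification)."""
    wl = wl_nm * 1e-9
    spd = (2 * H * C**2 / wl**5) / np.expm1(H * C / (wl * KB * temp_k))
    return spd / spd.max()

wl = np.arange(380, 731, 5)
t_est = filament_temperature(r_hot=18.0, r_cold=1.2)   # made-up resistance readings
spd = blackbody_spd(wl, t_est)                         # estimated source SPD
```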


I like the Tokyo dataset more for its 4nm bandwidth resolution.

Well, with such significant contributions from a lens… I'd say choose a lens which best approximates all lenses from one manufacturer (haha!) and measure with the lens. Otherwise you have a profile for a use case that no one uses; you always shoot with a lens. We had this discussion before, right? :thinking:

Calibrating for every lens would be something… With these lens transmission curves one could do this in software, and it would require a no-lens profile. Then for every lens attached you multiply (divide? well, you compensate for what the lens does…) the attenuation on top of the SSF…
The spectral-lens-transmission data would be a nice addition to the lensfun-db!! :smile:
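
A minimal sketch of that idea, with stand-in curves and variable names of my own (the lens transmission gets resampled to the SSF grid and multiplied onto a hypothetical no-lens SSF):

```python
import numpy as np

ssf_wl = np.arange(400, 701, 5)                              # SSF wavelength grid, nm
ssf_nolens = np.exp(-((ssf_wl - 530) / 50.0) ** 2)           # stand-in no-lens green channel

lens_wl = np.arange(380, 781, 10)                            # lens data on its own grid
lens_t = np.clip(0.95 - 3e-6 * (lens_wl - 560) ** 2, 0, 1)   # stand-in lens transmission

# Resample the lens curve to the SSF grid, then multiply it onto the SSF.
lens_on_ssf_grid = np.interp(ssf_wl, lens_wl, lens_t)
ssf_with_lens = ssf_nolens * lens_on_ssf_grid

# Going the other way (a through-the-lens measurement plus known lens data)
# would instead divide the measured SSF by the lens transmission.
ssf_nolens_recovered = ssf_with_lens / lens_on_ssf_grid
```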


It makes using SSF data a bit cumbersome, I think. You'd have to start with the sensor-only data, then apply the lens compensation before building the profile to use for a given camera-lens combination. And then there's the white point…

Cripes, now I'm wondering how to test this. What it would take is 1) a monochromator measurement of a camera sans lens, and 2) at least one measurement with ideally the same monochromator setup and camera, but with a lens in the optical chain. I'm not about to buy a monochromator at >$1000, and I looked at the used eBay offerings with some skepticism. Occasionally I've considered making one; here's a nice gold standard of such: THE PULSAR Engineering

The university on the other side of the hill has an optical lab; guess I'll have to take a walk over there…

Couldn't you just use your contraption twice, first body only and then with the lens? After a little massaging, the lens's spectral attenuation would then just be the ratio of the results. You could assemble an approximate single curve by using each channel in its active portion.
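
A minimal sketch of that ratio-and-stitch idea, with hypothetical variable names and the two captures assumed already reduced to per-wavelength channel values on a common grid:

```python
import numpy as np

def lens_transmission(ssf_body, ssf_with_lens, min_signal=0.05):
    """Estimate a single lens transmission curve from two (n_wavelengths, 3)
    SSF measurements: body-only and body+lens. Each channel contributes the
    per-wavelength ratio only where the body-only signal is strong enough,
    and the contributions are averaged, weighted by that signal."""
    ratio = np.where(ssf_body > min_signal,
                     ssf_with_lens / np.maximum(ssf_body, 1e-9), 0.0)
    weight = np.where(ssf_body > min_signal, ssf_body, 0.0)
    return (ratio * weight).sum(axis=1) / np.maximum(weight.sum(axis=1), 1e-9)

# Usage with made-up data: 61 wavelength samples, 3 CFA channels per capture.
wl = np.arange(400, 701, 5)
ssf_body = np.random.rand(len(wl), 3)
ssf_with_lens = ssf_body * 0.9          # pretend the lens attenuates 10% everywhere
print(lens_transmission(ssf_body, ssf_with_lens)[:5])   # ~0.9 across the board
```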


I don't think that'll work for the lens-off capture; my workflow relies on capturing a focused spectrum where each 1nm of wavelength spans only a couple of pixels in width.

The folks at Max-max, the camera conversion company, did a series of captures using a monochromator output transmitted to the camera via fiber, with the end of the fiber directly illuminating the sensor. They got a series of images of a blurry circle, each image capturing light from a close-to-single wavelength. That's the sort of thing I'd like to do; 'course, now there's fiber to consider…


Oh, I 100% agree! I thought that laughing smiley at the end of the paragraph was enough to signify my sarcasm/amusement about this idea! :wink:

Well, the rainbow that the grating produces just has to be as wide as your sensor… and making the slit width smaller should reduce spectral overlap… am I missing something? The wavelength dependence, and thus the angle, of the diffraction doesn't need an optical system to be imaged… does it? No, I think it doesn't. It may help, but I think it's not necessary (if I am not overlooking something super simple here).
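
For a rough feel of the geometry, here is a sketch using the standard grating equation d·sin(θ) = m·λ (the 1000 lines/mm pitch and the 50mm distance are made-up examples, not anyone's actual setup):

```python
import numpy as np

# First-order diffraction angles and the width of the resulting "rainbow" on a
# sensor plane placed some distance behind the grating, with no lens in between.
lines_per_mm = 1000.0
d_nm = 1e6 / lines_per_mm            # groove spacing in nm
wl = np.array([400.0, 700.0])        # ends of the visible band, nm

theta = np.degrees(np.arcsin(wl / d_nm))          # first-order angles (m = 1)
distance_mm = 50.0                                 # grating-to-sensor distance
spread_mm = distance_mm * (np.tan(np.radians(theta[1])) - np.tan(np.radians(theta[0])))

print(theta)      # roughly 23.6 and 44.4 degrees
print(spread_mm)  # spectrum width at the sensor plane, ~27 mm here
```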

Multimode fibers to the rescue!
https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=3255
Not too expensive. But of course, coupling into the fiber is the big hassle because of the narrow acceptance angle. The couplers cost a lot, 180 EUR here…

I would at least try to just park the camera at a distance from the grating and see if you can make do without a lens. :man_shrugging: