If you've read my article Color Management in Raw Processing, you'll know that the whole color management chain starts with a profile describing the camera's tonality and color representation. This profile is essential to the chain, as the chain is a set of transforms that take the image from one colorspace and tone to another, and the first transform in the chain needs to know where the camera-produced data sits in terms of color and tone. Tone from most modern cameras is "linear", that is, it preserves the original energy relationship of the light comprising the scene. Color varies more from camera to camera, although not as widely as one might suspect.
Without going into a lot of detail just yet, the camera's range of representable colors is usually described as a set of 12 numbers: 9 to describe the extents of its red, green, and blue responses, and 3 to identify a white point where all the color hues converge on "desaturation". Here are the 9 matrix numbers for my Nikon D7000, from David Coffin's dcraw.c:
8198,-2239,-724,-4871,12389,2798,-1043,2050,7181
This information is used by color management software as the input to a color transform of the image from the tone and color described by these numbers to another tone and colorspace. So, it's important for the camera profile to accurately describe the color and tone of the raw image in order for those subsequent transforms to yield acceptable color and tone.
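If you'd like to see the arithmetic, here's a minimal sketch (Python) of how those 9 numbers get used as a 3x3 matrix. The 10000 scaling and the XYZ-to-camera orientation follow dcraw's conventions as I understand them, and the sample pixel value is made up:

```python
import numpy as np

# The 9 dcraw numbers for the Nikon D7000 arranged as a 3x3 matrix.  In
# dcraw's convention (as I understand it) these map CIE XYZ to camera RGB,
# scaled by 10000.
xyz_to_cam = np.array([[ 8198, -2239,  -724],
                       [-4871, 12389,  2798],
                       [-1043,  2050,  7181]]) / 10000.0

# Inverting gives camera RGB -> XYZ, the "first transform in the chain".
# (Real raw converters also white-balance and row-normalize; omitted here.)
cam_to_xyz = np.linalg.inv(xyz_to_cam)

# A made-up camera RGB value from a demosaiced raw file...
cam_rgb = np.array([0.42, 0.55, 0.38])

# ...lands in XYZ, from which subsequent transforms carry it onward.
print(cam_to_xyz @ cam_rgb)
```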
Thing is, 9 numbers to describe a camera's colorspace is not a lot of information. Usually, though, they're enough, because the colors of a scene typically aren't out at the extremities. In fact, encoded colors of the input image that already reside within the colorspace of the intended destination are not usually changed. But when an input color is determined to be "out-of-gamut", that is, outside the colorspace of the destination profile, some sort of movement is required to make that color fit.
There are different ways to do that movement, codified in the definitions of "rendering intents" that the color transform software uses to define its logic. The intent most often used in photographic applications is "relative colorimetric", which essentially says, "move the color along a line extending from the original value to the white point, and place it just inside the bounds of the destination colorspace." This results in a color of pretty much the same hue, but less saturated. Well, that works okay until one has to deal with a scene that has areas of extreme color; if that area had some gradation of the color, that gradation is usually lost in such a transform. To handle such scenes, more than 9 numbers are needed...
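As a toy illustration of that "travel along the line" logic, here's a sketch in Python. A real CMM works in a profile connection space against the actual destination gamut boundary, not the naive unit RGB cube used here, so treat this only as the geometry of the idea:

```python
import numpy as np

def clip_toward_white(rgb, white=(1.0, 1.0, 1.0)):
    """Slide an out-of-gamut color along the line from the color to the
    white point until it just fits inside the gamut (crudely modeled here
    as the unit RGB cube)."""
    rgb, white = np.asarray(rgb, float), np.asarray(white, float)
    if np.all((rgb >= 0.0) & (rgb <= 1.0)):
        return rgb                                # already in gamut: unchanged
    for t in np.linspace(0.0, 1.0, 1001):         # walk toward the white point
        candidate = rgb + t * (white - rgb)
        if np.all((candidate >= 0.0) & (candidate <= 1.0)):
            return candidate                      # same hue-ish, less saturated
    return white

# A saturated color whose green component falls outside the toy gamut.
print(clip_toward_white([0.9, -0.2, 0.3]))
```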
To anchor a bit of terminology here, a color profile based on 9 numbers is usually called a "matrix profile", owing to the 3x3 row-column arrangement of the 9 numbers used to do the transform math. But there's an alternate way to represent the input to the color math, that being the "lookup table", or LUT. You can read all about the form and function of LUTs elsewhere; suffice it to say that they provide a simple "look up the original value, use the corresponding output value" mechanism to move numbers, which provides more information for the out-of-gamut movement than the "travel along the line until you're in" that the 9 numbers accommodate. This is the reason profiles made from a ColorChecker target shot can really only be matrix profiles; well, you can make a LUT profile from such a target shot, but it will only describe the same "just inside the destination gamut" decision the original 9 numbers supported.
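To make the matrix-versus-LUT distinction concrete, here's a small sketch (Python with scipy). The 5x5x5 grid and the "compress the blues" tweak are invented for illustration; real ICC cLUTs are bigger and express their output in a profile connection space such as Lab or XYZ:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# A tiny illustrative 5x5x5 LUT.  Each grid node stores an output triple.
grid = np.linspace(0.0, 1.0, 5)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
out_r, out_g, out_b = r, g, 0.9 * b   # invented tweak: compress extreme blues

# One interpolator per output channel: "look up the original value, use the
# corresponding (interpolated) output value".
lut = [RegularGridInterpolator((grid, grid, grid), chan)
       for chan in (out_r, out_g, out_b)]

def apply_lut(rgb):
    return np.array([chan(rgb)[0] for chan in lut])

# Unlike a matrix (one linear rule applied everywhere), each region of the
# LUT's grid can be shaped independently.
print(apply_lut([0.2, 0.4, 0.95]))
```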
Soooo..... how does one obtain sufficient information to make such a profile? Well, using a target with more colors is the easy way. One common such target is the IT8, which has 256 color patches. One can use a shot of such a target to make a well-informed LUT camera profile. But now what one has to consider is whether the person who chose the color patches did so in a way to accommodate one's particular color needs. And, such a target really needs to be shot in the light of the scene in order to properly represent the scene's colors. Indeed, as long as the camera's ability to represent colors is described in terms of color, the characterization process is fraught with imprecision and discontinuities. If I just hurt your head with that statement, let's go there and tease it apart...
There's no such physical property as color. You can read up on that elsewhere also; suffice to say for this journey that light is the physical property recorded by cameras. Light, if you remember some of your pre-university physics, is a curious thing, both wave and particle, depending on how you regard it. When you read the technical literature on imaging sensors, they'll describe the dynamic of sensing as "photon counting", paying homage to the particle description of light. As energy, light's description is of a wave, in much the same manner as radio. Indeed, light has its own place in the wave energy spectrum; light which we as humans can sense is wave energy in the wavelength range from about 380 to 730 nanometers. Just below that range is "ultraviolet" light, just above is "infrared". So the physically-oriented way to characterize a camera would be with regard to its ability to sense light, in terms of its wavelength range.
Back to our color fixation, you might say, but isn't the light captured by a camera filtered through the red, green, and blue spots of dye in the Bayer or X-Trans Color Filter Array? Yes, we'll get to that in a bit, but the essential measurement made at each of the sensor's photosites (corresponding to the image pixels) is of light energy. That the light passes through the colored filters essentially turns that photosite into a "band-sensitive" sensor, where it can only resolve a certain subset of the visible spectrum. The mosaic mechanism of measuring light, first described by Bayer, is an accommodation of how humans turn the sensing of light into the mental phenomenon of color. Yes, color is a figment of your imagination. And a surprisingly consistent figment. In the period 1929 to 1931, two researchers (Wright and Guild) characterized the wavelength-to-color matching behavior of 17 individuals, forming the definition of color we use today in anchoring all the devices we use that produce renditions of color. And they did this referenced to wavelengths of light. See CIE RGB Color Space
All that to bring us to the following assertion: "Why not characterize our cameras' color performance in terms of light?" Well, yes indeedy, why not? Now, this is the part I'm not so familiar with, but what I do know is that one of the tools available to make camera profiles, dcamprof, will take a set of numbers that describe a camera's spectral sensitivity and use them to make a LUT camera profile. Essentially, what dcamprof does with this data is make a "virtual target" that feeds the rest of the profile-making code just like the data from a camera-shot target. By now you're asking, "What does this data look like?", and I'm happy to answer. I was fortunate to find (thanks, @afre) a spectral sensitivity function (SSF) dataset for my Nikon D7000 in the ACES rawtoaces project on GitHub. Here it is:
wavelength (nm) | red | green | blue |
--- | --- | --- | --- |
380 | 0.0161 | 0.0324 | 0.0322 |
385 | 0.0125 | 0.0247 | 0.0272 |
390 | 0.0090 | 0.0171 | 0.0221 |
395 | 0.0071 | 0.0100 | 0.0167 |
400 | 0.0052 | 0.0029 | 0.0112 |
405 | 0.0045 | 0.0045 | 0.0194 |
410 | 0.0038 | 0.0061 | 0.0276 |
415 | 0.0246 | 0.0431 | 0.2379 |
420 | 0.0454 | 0.0801 | 0.4483 |
425 | 0.0521 | 0.1098 | 0.5982 |
430 | 0.0587 | 0.1396 | 0.7480 |
435 | 0.0550 | 0.1522 | 0.7910 |
440 | 0.0512 | 0.1648 | 0.8340 |
445 | 0.0443 | 0.1810 | 0.8738 |
450 | 0.0374 | 0.1972 | 0.9136 |
455 | 0.0353 | 0.2275 | 0.9337 |
460 | 0.0333 | 0.2578 | 0.9537 |
465 | 0.0366 | 0.3240 | 0.9424 |
470 | 0.0399 | 0.3902 | 0.9310 |
475 | 0.0419 | 0.4236 | 0.8977 |
480 | 0.0439 | 0.4570 | 0.8644 |
485 | 0.0421 | 0.4654 | 0.8017 |
490 | 0.0403 | 0.4738 | 0.7389 |
495 | 0.0418 | 0.5551 | 0.6194 |
500 | 0.0434 | 0.6364 | 0.4999 |
505 | 0.0496 | 0.7177 | 0.4175 |
510 | 0.0557 | 0.7989 | 0.3351 |
515 | 0.0702 | 0.8595 | 0.2780 |
520 | 0.0847 | 0.9202 | 0.2209 |
525 | 0.0964 | 0.9601 | 0.1887 |
530 | 0.1081 | 1.0000 | 0.1565 |
535 | 0.0841 | 0.9713 | 0.1272 |
540 | 0.0601 | 0.9427 | 0.0979 |
545 | 0.0474 | 0.9068 | 0.0798 |
550 | 0.0346 | 0.8710 | 0.0617 |
555 | 0.0366 | 0.8120 | 0.0451 |
560 | 0.0386 | 0.7530 | 0.0284 |
565 | 0.0717 | 0.6871 | 0.0229 |
570 | 0.1048 | 0.6212 | 0.0173 |
575 | 0.2548 | 0.5543 | 0.0147 |
580 | 0.4049 | 0.4874 | 0.0120 |
585 | 0.5704 | 0.4155 | 0.0102 |
590 | 0.7359 | 0.3435 | 0.0083 |
595 | 0.7209 | 0.2730 | 0.0066 |
600 | 0.7058 | 0.2024 | 0.0049 |
605 | 0.6486 | 0.1531 | 0.0041 |
610 | 0.5914 | 0.1037 | 0.0032 |
615 | 0.5389 | 0.0823 | 0.0031 |
620 | 0.4864 | 0.0608 | 0.0030 |
625 | 0.4396 | 0.0516 | 0.0031 |
630 | 0.3929 | 0.0424 | 0.0032 |
635 | 0.3582 | 0.0378 | 0.0034 |
640 | 0.3236 | 0.0333 | 0.0036 |
645 | 0.2819 | 0.0281 | 0.0042 |
650 | 0.2402 | 0.0229 | 0.0047 |
655 | 0.2094 | 0.0205 | 0.0047 |
660 | 0.1786 | 0.0181 | 0.0048 |
665 | 0.1383 | 0.0153 | 0.0041 |
670 | 0.0981 | 0.0124 | 0.0034 |
675 | 0.0640 | 0.0088 | 0.0024 |
680 | 0.0300 | 0.0051 | 0.0014 |
685 | 0.0184 | 0.0033 | 0.0010 |
690 | 0.0068 | 0.0015 | 0.0007 |
695 | 0.0044 | 0.0013 | 0.0007 |
700 | 0.0020 | 0.0010 | 0.0007 |
705 | 0.0018 | 0.0008 | 0.0007 |
710 | 0.0016 | 0.0006 | 0.0006 |
715 | 0.0014 | 0.0006 | 0.0006 |
720 | 0.0012 | 0.0005 | 0.0006 |
725 | 0.0010 | 0.0005 | 0.0005 |
730 | 0.0009 | 0.0004 | 0.0005 |
735 | 0.0007 | 0.0003 | 0.0004 |
740 | 0.0006 | 0.0003 | 0.0003 |
745 | 0.0004 | 0.0002 | 0.0002 |
750 | 0.0002 | 0.0001 | 0.0001 |
755 | 0.0002 | 0.0001 | 0.0002 |
760 | 0.0002 | 0.0001 | 0.0002 |
765 | 0.0002 | 0.0001 | 0.0002 |
770 | 0.0002 | 0.0001 | 0.0002 |
775 | 0.0002 | 0.0002 | 0.0002 |
780 | 0.0002 | 0.0002 | 0.0002 |
Very simple: each row represents the relative measured sensitivity of the camera's red-filtered, green-filtered, and blue-filtered pixels when presented with light at the wavelength specified in the first column. Of note is that the values are "normalized" to the range 0 to 1; they're not absolute measurements of a particular physical quantity. What this data really means is probably more evident when it is plotted:
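If you want to generate the plot yourself, a quick sketch along these lines will do it (Python/matplotlib; the filename and the pipe-separated layout are just my assumptions about how you'd save the table above to a text file):

```python
import matplotlib.pyplot as plt

# Parse the pipe-separated table (wavelength | red | green | blue) into lists.
wavelengths, red, green, blue = [], [], [], []
with open("nikon_d7000_ssf.txt") as f:           # hypothetical dump of the table
    for line in f:
        cols = [c.strip() for c in line.split("|") if c.strip()]
        if not cols or not cols[0][0].isdigit():
            continue                              # skip header/separator rows
        wavelengths.append(float(cols[0]))
        red.append(float(cols[1]))
        green.append(float(cols[2]))
        blue.append(float(cols[3]))

plt.plot(wavelengths, red, "r", label="red channel")
plt.plot(wavelengths, green, "g", label="green channel")
plt.plot(wavelengths, blue, "b", label="blue channel")
plt.xlabel("wavelength (nm)")
plt.ylabel("relative sensitivity (normalized)")
plt.legend()
plt.show()
```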
The plot readily depicts each channel's sensitivity relative to the others at a given wavelength, but it also shows each channel's "bandpass" characteristic, in other words, how far up and down the spectrum each channel can measure. This data gets to the actual mechanism of the sensor and its purpose, which is to translate wavelengths of light into encoded values that can be used to construct something that humans interpret as "color". That each of these bandpass filters can be called "red", "green", or "blue" is attributable to how light of a single wavelength in that part of the spectrum is interpreted by humans. This is the distinction between "spectral colors" and "non-spectral colors", the latter being colors we interpret from a mix of wavelengths. So really, calling these bandpasses "red", "green", or "blue" is only a coarse approximation of the band: "red" is really "upper", "green" is "mid", and "blue" is "lower". And "red" starts to impinge on the infrared part of the spectrum, > 700nm, while "blue" correspondingly impinges on the ultraviolet part, < 380nm.
Interesting stuff, but let's get back to camera profiles. Really, after considering all this, the ideal camera profile is one that can be mapped to this spectral sensitivity, as that data specifically presents camera performance across the range of light combinations from which people make colors. And indeed, dcamprof has the math required to take the table of numbers presented above and make a LUT color profile from it. I did that with the above data with the following dcamprof commands:
$ dcamprof make-target -c nikon_d7000_ssf.json -p cc24 nikon_d7000_ssf.ti3
$ dcamprof make-profile -c nikon_d7000_ssf.json nikon_d7000_ssf.ti3 nikon_d7000_ssf_dcamprof.json
$ dcamprof make-icc -p xyzlut nikon_d7000_ssf_dcamprof.json nikon_d7000_ssf.icc
So, to do all this, you need to:
- Download and compile dcamprof from https://github.com/Beep6581/dcamprof;
- Take the table of spectral sensitivity numbers and format it in a JSON file. Here's the D7000 data, from the rawtoaces table converted to the dcamprof JSON format (a conversion sketch follows below this list):
{
  // camera name, should preferably match established manufacturer and model
  // name used by raw converters
  "camera_name": "Nikon D7000",

  // bands in nanometers, described the same way as for spectrum format
  "ssf_bands": [ 380, 780, 5 ], // 380nm to 780nm in an interval of 5nm

  // Response functions for red, green and blue. Scaling for the responses
  // must be the same for all three, but it does not matter what it is, as
  // the response will be normalized before use. Setting the maximum to 1.0
  // is typical.
  "red_ssf": [
    0.016100, 0.012500, 0.009000, 0.007100, 0.005200, 0.004500, 0.003800, 0.024600, 0.045400, 0.052100,
    0.058700, 0.055000, 0.051200, 0.044300, 0.037400, 0.035300, 0.033300, 0.036600, 0.039900, 0.041900,
    0.043900, 0.042100, 0.040300, 0.041800, 0.043400, 0.049600, 0.055700, 0.070200, 0.084700, 0.096400,
    0.108100, 0.084100, 0.060100, 0.047400, 0.034600, 0.036600, 0.038600, 0.071700, 0.104800, 0.254800,
    0.404900, 0.570400, 0.735900, 0.720900, 0.705800, 0.648600, 0.591400, 0.538900, 0.486400, 0.439600,
    0.392900, 0.358200, 0.323600, 0.281900, 0.240200, 0.209400, 0.178600, 0.138300, 0.098100, 0.064000,
    0.030000, 0.018400, 0.006800, 0.004400, 0.002000, 0.001800, 0.001600, 0.001400, 0.001200, 0.001000,
    0.000900, 0.000700, 0.000600, 0.000400, 0.000200, 0.000200, 0.000200, 0.000200, 0.000200, 0.000200,
    0.000200
  ],
  "green_ssf": [
    0.032400, 0.024700, 0.017100, 0.010000, 0.002900, 0.004500, 0.006100, 0.043100, 0.080100, 0.109800,
    0.139600, 0.152200, 0.164800, 0.181000, 0.197200, 0.227500, 0.257800, 0.324000, 0.390200, 0.423600,
    0.457000, 0.465400, 0.473800, 0.555100, 0.636400, 0.717700, 0.798900, 0.859500, 0.920200, 0.960100,
    1.000000, 0.971300, 0.942700, 0.906800, 0.871000, 0.812000, 0.753000, 0.687100, 0.621200, 0.554300,
    0.487400, 0.415500, 0.343500, 0.273000, 0.202400, 0.153100, 0.103700, 0.082300, 0.060800, 0.051600,
    0.042400, 0.037800, 0.033300, 0.028100, 0.022900, 0.020500, 0.018100, 0.015300, 0.012400, 0.008800,
    0.005100, 0.003300, 0.001500, 0.001300, 0.001000, 0.000800, 0.000600, 0.000600, 0.000500, 0.000500,
    0.000400, 0.000300, 0.000300, 0.000200, 0.000100, 0.000100, 0.000100, 0.000100, 0.000100, 0.000200,
    0.000200
  ],
  "blue_ssf": [
    0.032200, 0.027200, 0.022100, 0.016700, 0.011200, 0.019400, 0.027600, 0.237900, 0.448300, 0.598200,
    0.748000, 0.791000, 0.834000, 0.873800, 0.913600, 0.933700, 0.953700, 0.942400, 0.931000, 0.897700,
    0.864400, 0.801700, 0.738900, 0.619400, 0.499900, 0.417500, 0.335100, 0.278000, 0.220900, 0.188700,
    0.156500, 0.127200, 0.097900, 0.079800, 0.061700, 0.045100, 0.028400, 0.022900, 0.017300, 0.014700,
    0.012000, 0.010200, 0.008300, 0.006600, 0.004900, 0.004100, 0.003200, 0.003100, 0.003000, 0.003100,
    0.003200, 0.003400, 0.003600, 0.004200, 0.004700, 0.004700, 0.004800, 0.004100, 0.003400, 0.002400,
    0.001400, 0.001000, 0.000700, 0.000700, 0.000700, 0.000700, 0.000600, 0.000600, 0.000600, 0.000500,
    0.000500, 0.000400, 0.000300, 0.000200, 0.000100, 0.000200, 0.000200, 0.000200, 0.000200, 0.000200,
    0.000200
  ]
}
- Run the commands.
The result will be a .icc profile suitable for use as a camera profile.
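For those who'd rather not hand-edit 243 numbers, here's a rough sketch of the table-to-JSON conversion (Python; the input filename and the pipe-separated layout are my assumptions, and dcamprof's explanatory // comments are simply omitted since plain JSON has no comment syntax):

```python
import json

# Read the pipe-separated SSF table (wavelength | red | green | blue) and
# emit it in the JSON layout dcamprof expects.
rows = []
with open("nikon_d7000_ssf.txt") as f:            # hypothetical dump of the table
    for line in f:
        cols = [c.strip() for c in line.split("|") if c.strip()]
        if cols and cols[0][0].isdigit():
            rows.append([float(c) for c in cols])

profile = {
    "camera_name": "Nikon D7000",
    # first wavelength, last wavelength, and step, derived from the table
    "ssf_bands": [int(rows[0][0]), int(rows[-1][0]), int(rows[1][0] - rows[0][0])],
    "red_ssf":   [r[1] for r in rows],
    "green_ssf": [r[2] for r in rows],
    "blue_ssf":  [r[3] for r in rows],
}

with open("nikon_d7000_ssf.json", "w") as f:
    json.dump(profile, f, indent=2)
```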
To show you the difference to be had by using such a profile, here are two screenshots of a crop from an image taken in a theater where the stage walls were illuminated by blue LED spotlights. The first one is developed from the raw file with a matrix camera profile:
And this one is developed from the raw with the same tool chain, the only difference is the use of the SSF profile as the camera profile:
Evident in the second image is better tone gradation in the extreme blues of the spotlight illumination; not so evident is the lack of change in tone in the reddish-brown drums at the bottom-center of the image. I was not able to mitigate the blues with any other tool without changing the hues of the drums, or the rest of the image.
Here's another difference to consider, this time in green hues. First, matrix profile:
And, SSF profile:
It may be hard to see, but the green leaves are more yellowish in the matrix camera profile image. I haven't picked this apart yet, but my hypothesis is that the red channel overlaps more with the green channel in the matrix profile due to the imprecision of the matrix transform.
So, one might ask, "Where do I get such data for my camera?" One might think that the camera manufacturers ought to provide it, and some do. Predominantly though, the only manufacturer data I've seen has been for motion picture cameras. The still camera makers seem to put this in the same category as raw histograms... If one is lucky, someone has measured and published data for their camera; I've found a few sources, mostly from research endeavors. Seems the Nikon D700 is especially popular in that domain; I've found three datasets for that camera. But considering the variety of available cameras, the already-measured and published set is rather small... making one quite sad. :(
If you've read this far, I'll assume you're more than superficially interested in obtaining such data for your camera in order to enjoy the fruits of a LUT camera profile. At this point, however, we delve into subjects that may require you to build things and buy things, some of them rather pricey. And, at this point, I still haven't determined how far one needs to go in building/buying contraptions in order to make acceptable profiles. So, read on, and ponder your near future at the workbench...
The essential task is stated thusly: measure the camera's response to light through the red, green, and blue CFA filters at each of a range of wavelengths. Particularly, measurements across the visible spectrum, from 380nm to 700nm, in 10nm, or better yet, 5nm increments. So, we need 33 R|G|B triplets at 10nm spacing, 65 at 5nm. And those numbers only need to be relative to each other, not absolute quantities of something like 'power'. "Relative sensitivity..."
With regard to data collection, I've scanned the internets for the lore surrounding this endeavor, and I've found essentially two distinct methods. Here they are, in descending order of "quality":
- Take a picture of visible light presented at each wavelength, and extract the raw values from each picture. We'll call this "Monochromatic Light".
- Take a single picture of a diffracted visible light spectrum, and tease the values for each wavelength from their position in the image. This one we'll call "One-Shot Spectrum".
Let's discuss each in some detail...
Monochromatic Light
Actually, it's not that easy to get light of a single wavelength. Well, until recently, that is, with the introduction of light-emitting diodes (LEDs). But using LEDs has challenges all its own; you'd need a quantity of devices equal to the resolution within 380-700nm you're after, and it turns out those devices are not cheap, at least through the entire spectrum. Anyway, the predominant device used to present narrow-band light is called a monochromator. They're rather simple devices: a broadband light is shined into a port, that light gets directed to either a prism or a diffraction grating to split it out into its individual wavelengths, and that diffracted splay of the rainbow is shined onto a very narrow slit that only lets through a narrow part of the spectrum. The prism or grating is mounted on a rotatable platform that allows the spectrum beam to be slewed left or right to present the desired wavelength to the slit. Wikipedia has a good illustrated treatise on the devices: Wikipedia. Monochromators are lab instruments, and identifying them thusly apparently qualifies them for exorbitant prices. The cheapest one I could find new was the Dynasil Mini-Chrom, at about $1900US. The devices can be had used rather frequently, as they are harvested from manufacturing equipment and sold as surplus. Still, a lot of money to put out just to measure your couple of cameras...
It's also not trivial to present such light to a sensor. To measure just the light falling on the sensor, the lens is removed from the camera, and the light is fed to the camera through a thing called an integrating sphere. It is just as its name implies: a sphere where the light is presented to the interior, bounces around, and exits into some kind of conduit to the camera's exposed sensor. The sphere's role is to uniformly diffuse the light without materially disturbing its character, so there's no specularity or other non-uniformity. You might think, easy-peasy, I'll just get a ping-pong ball, but no, the interior needs a treatment that is apparently costly to produce. The cheapest new ones I could find were north of $1000US, but there were bargains on eBay for as little as $150US. Optical equipment is expensive, if you haven't already gathered that. I did find one endeavor where they just pointed the fiber at the camera sensor, which sounds to me and my wallet like a fine idea...
And, a spectrometer. This device is needed to concurrently measure both the wavelength and power of the presented light: wavelength to confirm the monochromator calibration, power to calibrate the light intensity in the post-capture analysis. Surprisingly, I found this to be the most inexpensive article, with usable alternatives under $100US. Now, you do gets what you pays for; the more inexpensive options usually measure at, say, 6 or 8 discrete channels and leave it to you to interpolate. A good spectrometer will have a monochrome CCD array of something like 1x1000 pixels across which the diffracted light is splayed for nm-precision measurement.
To our endeavor, the monochromator output is directed, usually through fiber optic cables, to two destinations: 1) the spectrometer, and 2) the camera. The process is to set a wavelength, then take a picture and record the measured power from the spectrometer. Rinse and repeat, until you have the requisite number of pictures/power readings. The camera measurements are adjusted based on the power measurements (broadband light isn't uniform in power across the illuminating spectrum), the entire dataset is normalized to the range 0.0-1.0, and there you have it, your Holy Grail dataset. It just cost you somewhere just south of $3000US to obtain...
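Here's a rough sketch of that post-capture arithmetic (Python; the arrays are placeholders for the per-wavelength mean raw channel values you extracted and the power readings from the spectrometer):

```python
import numpy as np

wavelengths = np.arange(380, 705, 10)                  # nm, 33 samples at 10nm spacing

# Placeholder data: substitute your own measurements here.
raw_means = np.random.rand(len(wavelengths), 3)        # mean raw R,G,B at each wavelength
power = 0.5 + 0.5 * np.random.rand(len(wavelengths))   # spectrometer power readings

# Divide out the (non-uniform) source power, then scale the whole dataset
# so the largest channel response is 1.0.
ssf = raw_means / power[:, None]
ssf /= ssf.max()

for wl, (r, g, b) in zip(wavelengths, ssf):
    print(f"{wl} | {r:.4f} | {g:.4f} | {b:.4f} |")
```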
I found variations on the monochromator theme in the literature, with the most promising being a scheme to present the wavelengths by filtering the broadband light through very narrow band-pass filters. The device used was an old slide projector, and the filters were mounted on 35mm slide blanks along with a neutral density filter for calibration. The camera was just pointed at the projector lens. Turns out the filters needed cost around $50-$70 apiece, which doesn't scale favorably to our endeavors.
Here are links to a few of these projects:
- The folks who shine the light directly on the sensor. Note that the cameras they're measuring have the IR filter removed.
- The narrow-band filter thesis. Note that the host domain, image-engineering.de, offers a number of relevant products, including a commercialization of the thesis (camSPECS).
- The project that yielded the camspec_database.txt, a collection of the spectral measurements for 28 cameras. This dataset has been used for reference and comparison in a number of other projects. The data range is 400nm to 720nm with an interval of 10nm. The database can be found here.
- SPECTACLE, a project to accomplish spectral and radiometric calibrations of consumer cameras. The scope of this project exceeds spectral response (flat field, gain, ISO, etc.), and it also encompasses two methods, one with a double monochromator, and another using DIY spectroscope components. A database is also proposed, but is still in development.
- Spectron, a github repo containing resources for integrating a monochromator-based measurement setup.
The Spectrum, One-Shot
The alternative to the monochromator setup is to simply remove the slit and take a picture of the entire spectrum produced by a broadband light source, take a second picture of another light source with power spikes at known wavelengths, then spend quality time with a spreadsheet of the data to figure out where the individual wavelengths sit in the first picture. A pretty good implementation of this is described at the Open Film Tools initiative. A bit of do-it-yourself is typically involved here, as the mechanism is just a rather coarse slit that presents the light to the diffractor, and the camera just points at the diffracted light. The Open Film Tools folk have a set of 3D printing files to make the enclosure. The major purchases would be a transmission diffraction grating, a broadband light source, some kind of spiky-spectrum light source for calibration, and the spectrometer (can't seem to shake that one). A few hundred dollars US, at most.
What you'd gain in more money for food and shelter you'd sacrifice in resolution and processing time. Alignment of the parts is critical to putting the spectrum on the sensor so it is lined up with the imaging array; errors here will confound your ability to use multiple rows of pixels to drive out measurement noise. You also need to figure out where the wavelengths lie on the sensor; this depends on how well you can resolve the spikes of the calibration light source. Oh, and on knowing at what wavelength each spike resides. But the Open Film Tools folks seemed to get good agreement with monochromator data, so it's not an impossible task. Of note regarding the Open Film Tools endeavor is that they've posted the data artifacts used to produce their camera characterizations: raw image files, spreadsheets, plots; with these, one can play with making the data product without actually having to build and assemble hardware.
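A sketch of the "figure out where the wavelengths sit" step might look like this (Python; the pixel columns and the two spike wavelengths are made-up numbers, stand-ins for whatever your calibration source actually provides):

```python
import numpy as np

# Two emission spikes from the calibration source, located (by eye or by
# peak-finding) in the calibration shot.  All four values here are made up.
spike_px = np.array([412.0, 1604.0])      # pixel columns of the spikes
spike_nm = np.array([436.0, 611.0])       # known wavelengths of those spikes

# Fit a straight line mapping pixel column -> wavelength (a fair first
# approximation for a transmission grating over this span).
slope, intercept = np.polyfit(spike_px, spike_nm, 1)

def column_to_wavelength(col):
    return slope * col + intercept

# With that mapping, the mean R,G,B of each column of the spectrum shot
# becomes one row of the SSF table at column_to_wavelength(col).
print(column_to_wavelength(1000))
```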
I hope this wasn't too tedious to follow. In the next thread, I'll describe my effort to do "one-shot spectrum" on the cheap. If that doesn't produce usable SSF data, I'll up the game with more expensive components, until I get good data or run out of money...