The Quest for Good Color - 1. Spectral Sensitivity Functions (SSFs) and Camera Profiles

If you've read my article Color Management in Raw Processing, you'll know that the whole color management chain starts with a profile describing the camera's tonality and color representation. This profile is essential: the chain is a set of transforms that take the image from one colorspace and tone to another, and the first transform in the chain needs to know where the camera-produced data sits in terms of color and tone. Tone from most modern cameras is "linear", that is, it preserves the original energy relationships of the light comprising the scene. Color varies more widely from camera to camera, although not as widely as one might suspect.

Without going into a lot of detail just yet, the camera's range of representable colors is usually described as a set of 12 numbers: 9 to describe the red, green, and blue primaries that bound the color extent, and 3 to identify a white point where all the color hues converge on "desaturation". Here are those numbers for my Nikon D7000, from David Coffin's dcraw.c:


This information is used by color management software as the input to a color transform of the image from the tone and color described by these numbers to another tone and colorspace. So, it’s important for the camera profile to accurately describe the color and tone of the raw image in order for those subsequent transforms to yield acceptable color and tone.

Thing is, 9 numbers to describe a camera's colorspace is not a lot of information. Usually though, they're enough in that the colors of the scene typically aren't out at the extremities. In fact, encoded colors of the input image that already reside within the colorspace of the intended destination are not usually changed. But when an input color is determined to be "out-of-gamut", that is, out of the colorspace of the destination profile, some sort of movement is required to make that color fit.

There are different ways to do that movement, codified in the definitions of "rendering intents" that the color transform software uses to define its logic. The intent most often used in photographic applications is "relative colorimetric", which essentially says, "move the color along a line extending from the original value to the white point, and place it just inside the bounds of the destination colorspace." This results in a color of pretty much the same hue, but less saturated. Well, that works okay until one has to deal with a scene that has areas of extreme color; if that area had some gradation of the color, that gradation is usually lost in such a transform. To handle such scenes, more than 9 numbers are needed...
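To make the "move along the line toward white" idea concrete, here's a toy sketch in Python. It assumes a linear RGB destination whose gamut is the unit cube and whose white point is (1, 1, 1); real color engines do this in a connection space like Lab or XYZ against a proper gamut boundary description, so treat this as illustration only.

```python
# Toy sketch of a relative-colorimetric-style "move toward white" mapping.
# Assumes a linear RGB destination whose gamut is the unit cube and whose
# white point is (1, 1, 1) -- an illustration, not a real CMM.

def clamp_toward_white(rgb, white=(1.0, 1.0, 1.0)):
    """Slide rgb along the line toward white until all channels are in [0, 1]."""
    if all(0.0 <= c <= 1.0 for c in rgb):
        return tuple(rgb)          # in-gamut colors pass through unchanged
    # smallest t in [0, 1] such that rgb + t*(white - rgb) lands inside the cube
    t_needed = 0.0
    for c, w in zip(rgb, white):
        if c < 0.0:
            t_needed = max(t_needed, (0.0 - c) / (w - c))
        elif c > 1.0:
            t_needed = max(t_needed, (c - 1.0) / (c - w))
    return tuple(c + t_needed * (w - c) for c, w in zip(rgb, white))

# an over-saturated blue whose red channel went negative after a matrix
# transform gets pulled back to the gamut boundary, same approximate hue
print(clamp_toward_white((-0.2, 0.1, 0.9)))
```

Note how the whole gradation of any out-of-gamut region collapses onto the gamut surface along those lines, which is exactly the loss of gradation described above.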

To anchor a bit of terminology here, a color profile based on 9 numbers is usually called a "matrix profile", owing to the 3x3 row-column arrangement of the 9 numbers in order to do the transform math. But there's an alternate method to represent the input to the color math, that being the "lookup table", or LUT. You can read all about the form and function of LUTs elsewhere; suffice it to say that they provide a simple "look up the original value, use the corresponding output value" mechanism to move numbers, which provides more information to do the out-of-gamut movement than just the "travel along the line until you're in" that the 9 numbers accommodate. This is the reason profiles made from a ColorChecker target shot can really only be matrix profiles; well, you can make a LUT profile from such a target shot, but it will only describe the same "just inside the destination gamut" decision the original 9 numbers supported.
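The distinction is easy to sketch in code. Everything below uses made-up numbers (the matrix is not real D7000 data, and real profile LUTs are dense 3D tables indexed by all three channels at once, not the toy 1D ramp shown here); the point is only the shape of the two mechanisms.

```python
# A matrix profile: one 3x3 linear transform applied to every pixel.
# Hypothetical camera-RGB -> working-space matrix (NOT real D7000 data).
M = [[0.6, 0.3, 0.1],
     [0.2, 0.7, 0.1],
     [0.1, 0.2, 0.7]]

def matrix_transform(rgb):
    """Row-by-column multiply: the same linear move for every color."""
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in M)

def lut_1d(value, samples=(0.0, 0.6, 1.0)):
    """A toy 1D LUT with linear interpolation between sample points.
    Real profile LUTs are 3D, which is what lets them treat different
    regions of the colorspace differently instead of one global transform."""
    pos = value * (len(samples) - 1)
    i = min(int(pos), len(samples) - 2)
    frac = pos - i
    return samples[i] * (1 - frac) + samples[i + 1] * frac

print(matrix_transform((1.0, 0.0, 0.0)))  # pure "red" input -> first column of M
print(lut_1d(0.25))
```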

Soooo..... how does one obtain sufficient information to make such a profile? Well, using a target with more colors is the easy way. One common such target is the IT8, which has 256 color patches. One can use a shot of such a target to make a well-informed LUT camera profile. But now what one has to consider is whether the person who chose the color patches did so in a way to accommodate one's particular color needs. And, such a target really needs to be shot in the light of the scene in order to properly represent the scene's colors. Indeed, as long as the camera's ability to represent colors is described in terms of color, the characterization process is fraught with imprecision and discontinuities. If I just hurt your head with that statement, let's go there and tease it apart...

There's no such physical property as color. You can read up on that elsewhere also; suffice to say for this journey that light is the physical property recorded by cameras. Light, if you remember some of your pre-university physics, is a curious thing, both wave and particle, depending on how you regard it. When you read the technical literature on imaging sensors, they'll describe the dynamic of sensing as "photon counting", paying homage to the particle description of light. As energy, light's description is of a wave, in much the same manner as radio. Indeed, light has its own place in the wave energy spectrum; light which we as humans can sense is wave energy in the wavelength range from about 380 to 730 nanometers. Just below that range is "ultraviolet" light, just above it is "infrared". So the physically-oriented way to characterize a camera would be with regard to its ability to sense light, in terms of its wavelength range.

Back to our color fixation — but, you might say, isn't the light captured by a camera filtered through the red, green and blue spots of dye in the Bayer or X-Trans Color Filter Array? Yes, we'll get to that in a bit, but the essential measurement made at each of the sensor's photosites (corresponding to the image pixels) is of light energy. That the light passes through the colored filters essentially turns that photosite into a "band-sensitive" sensor, where it can only resolve a certain subset of the visible spectrum. The mosaic mechanism of measuring light, first described by Bayer, is an accommodation of how humans turn the sensing of light into the mental phenomenon of color. Yes, color is a figment of your imagination. And a surprisingly consistent figment. In the period 1929 to 1931, two researchers characterized the wavelength-to-color matching behavior of 17 individuals, forming the definition of color we use today in anchoring all the devices we use that produce renditions of color. And they did this referenced to wavelengths of light. See CIE RGB Color Space.

All that to bring us to the following assertion: "Why not characterize our cameras' color performance in terms of light?" Well, yes indeedy, why not? Now, this is the part I'm not so familiar with, but what I do know is that one of the tools available to make camera profiles, dcamprof, will take a set of numbers that describe a camera's spectral sensitivity and use them to make a LUT camera profile. Essentially, what dcamprof does to use this data is to make a "virtual target" that feeds the rest of the profile making code just like the data from a camera-shot target. By now you're asking, "What does this data look like?", and I'm happy to answer. I was fortunate to find (thanks, @afre) a spectral sensitivity function (SSF) dataset for my Nikon D7000 in the ACES rawtoaces project at Github. Here it is:

wavelength red green blue
380 0.0161 0.0324 0.0322
385 0.0125 0.0247 0.0272
390 0.0090 0.0171 0.0221
395 0.0071 0.0100 0.0167
400 0.0052 0.0029 0.0112
405 0.0045 0.0045 0.0194
410 0.0038 0.0061 0.0276
415 0.0246 0.0431 0.2379
420 0.0454 0.0801 0.4483
425 0.0521 0.1098 0.5982
430 0.0587 0.1396 0.7480
435 0.0550 0.1522 0.7910
440 0.0512 0.1648 0.8340
445 0.0443 0.1810 0.8738
450 0.0374 0.1972 0.9136
455 0.0353 0.2275 0.9337
460 0.0333 0.2578 0.9537
465 0.0366 0.3240 0.9424
470 0.0399 0.3902 0.9310
475 0.0419 0.4236 0.8977
480 0.0439 0.4570 0.8644
485 0.0421 0.4654 0.8017
490 0.0403 0.4738 0.7389
495 0.0418 0.5551 0.6194
500 0.0434 0.6364 0.4999
505 0.0496 0.7177 0.4175
510 0.0557 0.7989 0.3351
515 0.0702 0.8595 0.2780
520 0.0847 0.9202 0.2209
525 0.0964 0.9601 0.1887
530 0.1081 1.0000 0.1565
535 0.0841 0.9713 0.1272
540 0.0601 0.9427 0.0979
545 0.0474 0.9068 0.0798
550 0.0346 0.8710 0.0617
555 0.0366 0.8120 0.0451
560 0.0386 0.7530 0.0284
565 0.0717 0.6871 0.0229
570 0.1048 0.6212 0.0173
575 0.2548 0.5543 0.0147
580 0.4049 0.4874 0.0120
585 0.5704 0.4155 0.0102
590 0.7359 0.3435 0.0083
595 0.7209 0.2730 0.0066
600 0.7058 0.2024 0.0049
605 0.6486 0.1531 0.0041
610 0.5914 0.1037 0.0032
615 0.5389 0.0823 0.0031
620 0.4864 0.0608 0.0030
625 0.4396 0.0516 0.0031
630 0.3929 0.0424 0.0032
635 0.3582 0.0378 0.0034
640 0.3236 0.0333 0.0036
645 0.2819 0.0281 0.0042
650 0.2402 0.0229 0.0047
655 0.2094 0.0205 0.0047
660 0.1786 0.0181 0.0048
665 0.1383 0.0153 0.0041
670 0.0981 0.0124 0.0034
675 0.0640 0.0088 0.0024
680 0.0300 0.0051 0.0014
685 0.0184 0.0033 0.0010
690 0.0068 0.0015 0.0007
695 0.0044 0.0013 0.0007
700 0.0020 0.0010 0.0007
705 0.0018 0.0008 0.0007
710 0.0016 0.0006 0.0006
715 0.0014 0.0006 0.0006
720 0.0012 0.0005 0.0006
725 0.0010 0.0005 0.0005
730 0.0009 0.0004 0.0005
735 0.0007 0.0003 0.0004
740 0.0006 0.0003 0.0003
745 0.0004 0.0002 0.0002
750 0.0002 0.0001 0.0001
755 0.0002 0.0001 0.0002
760 0.0002 0.0001 0.0002
765 0.0002 0.0001 0.0002
770 0.0002 0.0001 0.0002
775 0.0002 0.0002 0.0002
780 0.0002 0.0002 0.0002

Very simple: each row gives the relative measured sensitivity of the camera's red-filtered, green-filtered, and blue-filtered pixels when presented with light at the wavelength specified in the first column. Of note is that the values are "normalized" to the range 0 to 1; they're not absolute measurements of a particular quantity. What this data really means is probably more evident if it is plotted:

The plot readily depicts each channel’s sensitivity relative to the others at a given wavelength, but it also shows each channel’s “bandpass” characteristic, in other words, how far up and down the spectrum each channel can measure. This data gets to the actual mechanism of the sensor and its purpose, which is to translate wavelengths of light into encoded values that can be used to construct something that can be interpreted by humans as “color”. That each of these bandpass filters can be called “red”, “green”, or “blue” is attributable to how single-wavelength light in that part of the spectrum is interpreted by humans. This is the distinction between “spectral colors”, produced by a single wavelength, and “non-spectral colors”, which we interpret from a mix of wavelengths. So really, calling these bandpasses “red”, “green”, or “blue” is only a coarse approximation of the band: “red” is really “upper”, “green” is “mid”, and “blue” is “lower”. And “red” starts to impinge on the infrared part of the band, > 700nm, while “blue” correspondingly impinges on the ultraviolet part, < 380nm.
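Parsing and sanity-checking a table like the one above is straightforward; here's a short Python sketch using an abbreviated three-row excerpt of the data (the real table has 81 rows):

```python
# Parse the whitespace-separated SSF table into per-channel lists and
# confirm the normalization: values are relative, scaled so that the
# largest response (the green peak at 530nm in this dataset) is 1.0.

ssf_text = """380 0.0161 0.0324 0.0322
530 0.1081 1.0000 0.1565
780 0.0002 0.0002 0.0002"""   # abbreviated excerpt of the 81-row table

wavelengths, red, green, blue = [], [], [], []
for line in ssf_text.splitlines():
    wl, r, g, b = line.split()
    wavelengths.append(int(wl))
    red.append(float(r))
    green.append(float(g))
    blue.append(float(b))

peak = max(max(red), max(green), max(blue))
peak_wl = wavelengths[green.index(max(green))]
print(f"peak response {peak} at {peak_wl}nm")
```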

Interesting stuff, but let’s get back to camera profiles. Really, after considering all this, the ideal camera profile is one that can be mapped to this spectral sensitivity, as that data specifically presents camera performance across the range of light combinations with which people make colors. And indeed, dcamprof has the math required to take the table of numbers presented above and make a responsive color profile from it. I did that with the above data with the following dcamprof commands:

$ dcamprof make-target -c nikon_d7000_ssf.json -p cc24 nikon_d7000_ssf.ti3
$ dcamprof make-profile -c nikon_d7000_ssf.json nikon_d7000_ssf.ti3 nikon_d7000_ssf_dcamprof.json
$ dcamprof make-icc -p xyzlut nikon_d7000_ssf_dcamprof.json nikon_d7000_ssf.icc

So, to do all this, you need to:

  1. Download and compile dcamprof;
  2. Take the table of spectral sensitivity numbers and format it in a JSON file. Here's the D7000, from the rawtoaces data converted to the dcamprof JSON format:
      {
      // camera name, should preferably match established manufacturer and model
      // name used by raw converters
      "camera_name": "Nikon D7000",
      // bands in nanometers, described the same way as for spectrum format
      "ssf_bands": [ 380, 780, 5 ], // 380nm to 780nm in an interval of 5nm
      // Response functions for red, green and blue. Scaling for the responses
      // must be the same for all three, but it does not matter what it is, as
      // the response will be normalized before use. Setting the maximum to 1.0
      // is typical.
      "red_ssf": [
    	0.016100, 0.012500, 0.009000, 0.007100, 0.005200,
    	0.004500, 0.003800, 0.024600, 0.045400, 0.052100,
    	0.058700, 0.055000, 0.051200, 0.044300, 0.037400,
    	0.035300, 0.033300, 0.036600, 0.039900, 0.041900,
    	0.043900, 0.042100, 0.040300, 0.041800, 0.043400,
    	0.049600, 0.055700, 0.070200, 0.084700, 0.096400,
    	0.108100, 0.084100, 0.060100, 0.047400, 0.034600,
    	0.036600, 0.038600, 0.071700, 0.104800, 0.254800,
    	0.404900, 0.570400, 0.735900, 0.720900, 0.705800,
    	0.648600, 0.591400, 0.538900, 0.486400, 0.439600,
    	0.392900, 0.358200, 0.323600, 0.281900, 0.240200,
    	0.209400, 0.178600, 0.138300, 0.098100, 0.064000,
    	0.030000, 0.018400, 0.006800, 0.004400, 0.002000,
    	0.001800, 0.001600, 0.001400, 0.001200, 0.001000,
    	0.000900, 0.000700, 0.000600, 0.000400, 0.000200,
    	0.000200, 0.000200, 0.000200, 0.000200, 0.000200,
    	0.000200
      ],
      "green_ssf": [
    	0.032400, 0.024700, 0.017100, 0.010000, 0.002900,
    	0.004500, 0.006100, 0.043100, 0.080100, 0.109800,
    	0.139600, 0.152200, 0.164800, 0.181000, 0.197200,
    	0.227500, 0.257800, 0.324000, 0.390200, 0.423600,
    	0.457000, 0.465400, 0.473800, 0.555100, 0.636400,
    	0.717700, 0.798900, 0.859500, 0.920200, 0.960100,
    	1.000000, 0.971300, 0.942700, 0.906800, 0.871000,
    	0.812000, 0.753000, 0.687100, 0.621200, 0.554300,
    	0.487400, 0.415500, 0.343500, 0.273000, 0.202400,
    	0.153100, 0.103700, 0.082300, 0.060800, 0.051600,
    	0.042400, 0.037800, 0.033300, 0.028100, 0.022900,
    	0.020500, 0.018100, 0.015300, 0.012400, 0.008800,
    	0.005100, 0.003300, 0.001500, 0.001300, 0.001000,
    	0.000800, 0.000600, 0.000600, 0.000500, 0.000500,
    	0.000400, 0.000300, 0.000300, 0.000200, 0.000100,
    	0.000100, 0.000100, 0.000100, 0.000100, 0.000200,
    	0.000200
      ],
      "blue_ssf": [
    	0.032200, 0.027200, 0.022100, 0.016700, 0.011200,
    	0.019400, 0.027600, 0.237900, 0.448300, 0.598200,
    	0.748000, 0.791000, 0.834000, 0.873800, 0.913600,
    	0.933700, 0.953700, 0.942400, 0.931000, 0.897700,
    	0.864400, 0.801700, 0.738900, 0.619400, 0.499900,
    	0.417500, 0.335100, 0.278000, 0.220900, 0.188700,
    	0.156500, 0.127200, 0.097900, 0.079800, 0.061700,
    	0.045100, 0.028400, 0.022900, 0.017300, 0.014700,
    	0.012000, 0.010200, 0.008300, 0.006600, 0.004900,
    	0.004100, 0.003200, 0.003100, 0.003000, 0.003100,
    	0.003200, 0.003400, 0.003600, 0.004200, 0.004700,
    	0.004700, 0.004800, 0.004100, 0.003400, 0.002400,
    	0.001400, 0.001000, 0.000700, 0.000700, 0.000700,
    	0.000700, 0.000600, 0.000600, 0.000600, 0.000500,
    	0.000500, 0.000400, 0.000300, 0.000200, 0.000100,
    	0.000200, 0.000200, 0.000200, 0.000200, 0.000200,
    	0.000200
      ]
      }
  3. Run the commands.

The result will be a .icc profile suitable for use as a camera profile.
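If you'd rather not hand-format the JSON, a few lines of Python will do the conversion. This sketch uses an abbreviated row list and emits only the fields shown in the snippet above; check the dcamprof documentation for any additional fields your version expects (the output filename is just the one used in the commands above).

```python
# Convert rawtoaces-style (wavelength, r, g, b) rows into the dcamprof
# JSON layout shown above. "rows" is abbreviated here; in practice you
# would read all 81 rows from the downloaded dataset.
import json

rows = [(380, 0.0161, 0.0324, 0.0322),
        (385, 0.0125, 0.0247, 0.0272),
        (780, 0.0002, 0.0002, 0.0002)]   # abbreviated

profile = {
    "camera_name": "Nikon D7000",
    # [start, end, interval] in nanometers, derived from the rows
    "ssf_bands": [rows[0][0], rows[-1][0], rows[1][0] - rows[0][0]],
    "red_ssf":   [r for _, r, _, _ in rows],
    "green_ssf": [g for _, _, g, _ in rows],
    "blue_ssf":  [b for _, _, _, b in rows],
}

with open("nikon_d7000_ssf.json", "w") as f:
    json.dump(profile, f, indent=2)
```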

To show you the difference to be had by using such a profile, here are two screenshots of a crop from an image taken in a theater where the stage walls were illuminated by blue LED spotlights. The first one is developed from the raw file with a matrix camera profile:

And this one is developed from the raw with the same tool chain, the only difference is the use of the SSF profile as the camera profile:

Evident in the second image is better tone gradation in the extreme blues of the spotlight illumination; not so evident is the lack of change in tone in the reddish-brown drums at the bottom-center of the image. I was not able to mitigate the blues with any other tool without changing the hues of the drums, or the rest of the image.

Here's another difference to consider, this time in green hues. First, matrix profile:


And, SSF profile:


It may be hard to see, but the green leaves are more yellowish in the matrix camera profile image. I haven't picked this apart yet, but my hypothesis is that the red channel overlaps more with the green channel in the matrix profile due to the impreciseness of the matrix transform.

So, one might ask, "Where do I get such data for my camera?" One might think that the camera manufacturers ought to provide it, and some do. Predominantly though, the only manufacturer data I've seen has been for motion picture cameras. The still camera makers seem to put this in the same category as raw histograms... If one is lucky, someone has measured and published data for their camera; I've found a few sources, mostly from research endeavors. Seems the Nikon D700 is especially popular in that domain; I've found three datasets for that camera. But considering the variety of available cameras, the already-measured and published set is rather small... making one quite sad. :(

If you've read this far, I'll assume you're more than superficially interested in obtaining such data for your camera in order to enjoy the fruits of a LUT camera profile. At this point, however, we now delve into subjects that may require you to build things and buy things, some of them rather pricey. And, at this point, I still haven't determined how far one needs to go in building/buying contraptions in order to make acceptable profiles. So, read on, and ponder your near future at the workbench...

The essential task is stated thusly: measure the camera's ability to record light through the red, green, and blue CFA filters at each of a range of wavelengths. Particularly, measurements in the visible spectrum, from 380nm to 700nm, in 10nm, or better yet, 5nm increments. Counting both endpoints, that's 33 R|G|B triplets at 10nm spacing, 65 at 5nm. And, those numbers just need to be relative to each other, not absolute quantities of something like 'power'. "Relative sensitivity..."
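As a quick sanity check on those counts (endpoints included):

```python
# Number of sample wavelengths across 380-700nm, inclusive of both ends.
samples_10nm = list(range(380, 701, 10))
samples_5nm = list(range(380, 701, 5))
print(len(samples_10nm), len(samples_5nm))  # 33 65
```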

With regard to data collection, I've scanned the internets for the lore surrounding this endeavor, and I've found essentially two distinct methods. Here they are, in descending order of "quality":

  1. Take a picture of visible light presented at each wavelength, extract the raw values from each picture. We'll call this "Monochromatic Light"
  2. Take a single picture of a diffracted visible light spectrum, and tease the values for each wavelength from their position in the image. And this one, we'll call it, "One-Shot Spectrum"

Let’s discuss each in some detail…

Monochromatic Light

Actually, it's not that easy to get light of a single wavelength. Well, until recently that is, with the introduction of light-emitting diodes (LEDs). But using LEDs has challenges all its own; you'd need a quantity of devices equal to the resolution within 380-700nm you're after, and it turns out those devices are not cheap, at least through the entire spectrum. Anyway, the predominant device used to present narrow-band light is called a monochromator. They're rather simple devices: a broadband light is shined into a port, that light gets directed to either a prism or a diffraction grating to split it out into its individual wavelengths, and that diffracted splay of the rainbow is shined onto a very narrow slit that only lets through a narrow part of the spectrum. The prism or grating is mounted on a rotatable platform that allows the spectrum beam to be slewed left or right to present the desired wavelength to the slit. Wikipedia has a good illustrated treatise on the devices: Wikipedia. Monochromators are lab instruments, and identifying them thusly apparently qualifies them for exorbitant prices. The cheapest one I could find new was the Dynasil Mini-Chrom, at about $1900US. The devices can be had used rather frequently, as they are harvested from manufacturing equipment and sold as surplus. Still, a lot of money to put out just to measure your couple of cameras...

It's also not trivial to present such light to a sensor. To measure just the light onto the sensor, the lens is removed from the camera, and the light is fed to the camera through a thing called an integrating sphere. It is just as its name implies: a sphere where the light is presented to the interior, bounces around, and exits into some kind of conduit to the camera's exposed sensor. The sphere's role is to uniformly diffuse the light without materially disturbing its character, so there's no specularity or other non-uniformities. You might think, easy-peasy, I'll just get a ping-pong ball, but no, the interior needs a treatment that is apparently costly to produce. The cheapest new ones I could find were north of $1000US, but there were bargains on eBay for as little as $150US. Optical equipment is expensive, if you haven't already gathered that. I did find one endeavor where they just pointed the fiber at the camera sensor, which sounds to me and my wallet like a fine idea...

And, a spectrometer. This device is needed to concurrently measure both the wavelength and power of the presented light: wavelength to confirm the monochromator calibration, power to calibrate the light intensity in the post-capture analysis. Surprisingly, I found this to be the most inexpensive article, with usable alternatives under $100US. Now, you do gets what you pays for: the more inexpensive options usually measure at, say, 6 or 8 discrete channels and leave it to you to interpolate. A good spectrometer will have a monochrome CCD array of something like 1x1000 pixels across which the diffracted light is splayed for nm-precision measurement.

To our endeavor, the monochromator output is directed, usually through fiber optic cables, to two destinations: 1) the spectrometer, and 2) the camera. The process is to set a wavelength, then take a picture and record the measured power from the spectrometer. Rinse and repeat, until you have the requisite number of pictures/power readings. The camera measurements are adjusted based on the power measurements (broadband light isn't uniform in power across the illuminating spectrum), the entire dataset is normalized to the range 0.0-1.0, and there you have it, your Holy Grail dataset. It just cost you somewhere just south of $3000US to obtain...
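The reduction step at the end is simple enough to sketch. All the numbers below are invented, just to show the divide-by-power, normalize-to-1.0 shape of the computation:

```python
# Post-capture reduction: divide each raw channel reading by the measured
# source power at that wavelength, then normalize the whole dataset to
# the 0.0-1.0 range. Readings are invented, for illustration only.

# (wavelength, raw_r, raw_g, raw_b, measured_power) per monochromator stop
readings = [(450, 120.0, 640.0, 2950.0, 0.92),
            (550, 110.0, 2780.0, 195.0, 1.10),
            (650, 780.0, 75.0, 15.0, 0.98)]

# power correction: broadband sources aren't flat across the spectrum
corrected = [(wl, r / p, g / p, b / p) for wl, r, g, b, p in readings]

# normalize so the largest response in the whole dataset is 1.0
peak = max(v for _, r, g, b in corrected for v in (r, g, b))
ssf = [(wl, r / peak, g / peak, b / peak) for wl, r, g, b in corrected]

for wl, r, g, b in ssf:
    print(f"{wl} {r:.4f} {g:.4f} {b:.4f}")
```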

I found variations on the monochromator theme in the literature, with the most promising being a scheme to present the wavelengths by filtering the broadband light through very narrow band-pass filters. The device used was an old slide projector, and the filters were mounted on 35mm slide blanks along with a neutral density filter for calibration. The camera was just pointed at the projector lens. Turns out the filters needed cost around $50-$70 apiece, which doesn't scale favorably to our endeavors.

Here are links to a few of these projects:

  • The folks who shine the light directly on the sensor. Note that the cameras they're measuring have the IR filter removed.
  • The narrow-band filter thesis. Note that the host domain,, offers a number of relevant products, including a commercialization of the thesis (camSPECS).
  • The project that yielded the camspec_database.txt, a collection of the spectral measurements for 28 cameras. This dataset has been used for reference and comparison in a number of other projects. The data range is 400nm to 720nm with an interval of 10nm. The database can be found here.
  • SPECTACLE, a project to accomplish spectral and radiometric calibrations of consumer cameras. The scope of this project exceeds spectral response (flat field, gain, ISO, etc.), and it also encompasses two methods, one with a double monochromator, and another using DIY spectroscope components. A database is also proposed, but is still in development.
  • Spectron, a github repo containing resources for integrating a monochromator-based measurement setup.

The Spectrum, One-Shot

The alternative to the monochromator setup is to simply remove the slit and take a picture of the entire spectrum produced by a broadband light source, take a second picture of another light source with power spikes at known wavelengths, then spend quality time with a spreadsheet of the data to figure out where the individual wavelengths sit in the first picture. A pretty good implementation of this is described at the Open Film Tools initiative. A bit of do-it-yourself is typically involved here, as the mechanism is just a rather coarse slit that presents the light to the diffractor, and the camera just points at the diffracted light. The Open Film Tools folk have a set of 3D printing files to make the enclosure. The major purchases would be a transmissive diffraction grating, a broadband light source, some kind of non-uniform light source for calibration, and the spectrometer (can't seem to shake that one). A few hundred dollars US, at most.

What you'd gain in more money for food and shelter you'd sacrifice in resolution and processing time. Alignment of the parts is critical to putting the spectrum on the sensor so it is lined up with the imaging array; errors here will confound your ability to use multiple rows of pixels to drive out measurement noise. You also need to figure out where the wavelengths lie on the sensor; this is dependent on the resolution of the spikes in the non-uniform light source. Oh, and knowing at what wavelength each spike resides. But, the Open Film Tools folks seemed to get good agreement with monochromator data, so it's not an impossible task. Of note regarding the Open Film Tools endeavor is that they've posted the data artifacts used to produce their camera characterizations: raw image files, spreadsheets, plots; with this, one can play with making the data product without actually having to build and assemble hardware.
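The "figure out where the wavelengths sit" step boils down to interpolating between the calibration spikes. A minimal sketch — the pixel columns below are invented; 436nm and 546nm are the (rounded) mercury emission lines you'd get from, say, a fluorescent lamp:

```python
# Map pixel columns in a one-shot spectrum image to wavelengths by
# interpolating between calibration spikes at known wavelengths.
# Spike pixel positions here are invented, for illustration.

def pixel_to_wavelength(px, spikes):
    """spikes: sorted list of (pixel_column, known_wavelength_nm) pairs."""
    p0, w0 = spikes[0]
    p1, w1 = spikes[-1]
    for (pa, wa), (pb, wb) in zip(spikes, spikes[1:]):
        if pa <= px <= pb:       # interpolate within the bracketing pair
            p0, w0, p1, w1 = pa, wa, pb, wb
            break
    return w0 + (px - p0) * (w1 - w0) / (p1 - p0)

# pretend the 436nm and 546nm mercury lines landed at columns 310 and 905
spikes = [(310, 436.0), (905, 546.0)]
print(pixel_to_wavelength(607.5, spikes))
```

With more spikes in the list, the piecewise interpolation also absorbs some of the nonlinearity in how the grating spreads the spectrum across the sensor.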

I hope this wasn't too tedious to follow. In the next thread, I'll describe my effort to do "one-shot spectrum" on the cheap. If that doesn't produce usable SSF data, I'll up the game with more expensive components, until I get good data or run out of money...


Thanks Glenn, that was an insightful read!

1 Like

But then, at least, you can also use that sphere for flat fielding, right?

1 Like

this is super cool, i’ll follow your findings with a lot of interest, would love to have a cheap way for the masses of measuring spectral responsivities of our cameras. i suppose you also found this approach?

1 Like

Here’s another that I found:

This is not really my domain, so I have no idea whether it makes sense or their claims are a bit too optimistic – but I thought I’d mention it just in case… at least, it doesn’t seem to require any special hardware so shouldn’t be hard to validate for someone motivated enough :slight_smile:

1 Like

Hadn’t thought of that, good one. As I start experimenting, I’m keeping an eye toward reuse like that; one of my purchases, a Lowell Pro tungsten halogen lamp, is going to also be used for copy work.

@hanatos, @agriggio: No, I hadn’t run across these, will read them today. Right now, any perspective on practical trades in doing this is welcome…

1 Like

One thing I’d like feedback about is the level of comprehension to which I wrote. I’m trying to strike a balance between mathematical purity and precision and “average-person” readability, and even my idea about the latter may not be on-target. By now, we should realize we’re a diverse group here, technicians, tinkerers, and artists all trying to mung photography to their own ends, and I want to take this topic out of the formality that vexes widespread comprehension.

Good feedback right now will help me shape what I write next; I’m right now staring at a wooden contraption that’ll be the subject of the next post, and I want to tell its story well… :smiley:

1 Like

@ggbutcher It might be worthwhile to look into contacting your local university. If they do physics and have an optical lab, they probably have a lot of the tools you would need to tinker with.

The SPECTACLE project you mentioned was partly done by researchers in Leiden, which is pretty close to where I live. Once our local corona prevention measures are lifted, I could contact them and maybe start a conversation on how to make their process feasible for the common folk.


We almost always end up in technobabble land. Go in depth where necessary. Otherwise, keep it tame. At least that is my aim when I write here. Sure, I might be taken for a fool, which I often am, but I am more concerned about losing the crowd than my pride.

As far as advice is concerned, @Thanatomanic’s suggestion is what I would suggest as well. Collaboration is key and makes the world a better place.


As someone who sees himself as an in-betweener I was able to follow your article rather well. I did, however, pick up a lot of terminology, concepts and RAW processing knowledge in the last couple months without which I would probably be somewhat lost. Most formulas that are mentioned (in this case in the linked articles) are out of my league, but in context I do understand what is being conveyed.

I think that if one is interested in the technical side of things and is willing to search for stuff that might be unknown this article sets the correct tone.

Just my 2c.

1 Like

Thanks Glenn for the excellent treatise as well as the other one on colour management. I thought I would add a few thoughts/clarifications which might be of interest to you and others.

  1. The 3x3 colour matrix may not sound sufficient to handle incoming camera colour information, but it actually would, if the spectral response of the camera in question was a match to the standard observer (read: a human). Unfortunately the colour filters cannot possibly match a human eye’s response which is why a 3x3 matrix is insufficient when the expectation is for the camera to behave like a colourimetric device, it is an approximation (this also means there is no such thing as a perfect camera profile).

This remark was made in error, matrix profiles can and do result in clipped colours for extremely saturated colours, typically in the blues. See post #19 for explanation.

  2. Typically matrix-only profiles deliberately render high-saturation camera colours with lower saturation than LUT profiles; this is so as to preserve better precision for more normal range colours. So, matrix profiles are not supposed to cause additional clipping in and of themselves.
  3. Unfortunately this is not true. Camera profile LUTs can affect the rendering of colours that are not represented in the CC24 target. Typically, Dcamprof/Lumariver uses LUTs to improve the rendering of high saturation colours beyond what is in the CC24.
  4. It’s hard to understand what you mean by this. There is no fixed “destination gamut” of a camera which is solely decided by the numbers of the 3x3 matrix-only profile, but perhaps I have misunderstood, would appreciate a clarification!
  5. I would highly recommend not to use targets like the IT8, which are reflective/glossy/semi-gloss, for general-purpose (2.5D) profiles. They are certainly useful for reproduction (3D) profiles. They are extremely difficult to photograph properly without a controlled setup, and any subtle glare and reflection will produce a significantly worse profile than one made using a CC24.
  6. Unfortunately this is not how it works, I wish it were so. SSF profiles require an illuminant to be specified; by default it is D50 in Dcamprof/Lumariver. You have to create illuminant-specific profiles if you have to deal with (your specific) LED light, or strobes you use in your studio etc. A single camera profile cannot accurately render colours when different parts of the scene are illuminated with spectrally-different light.
  7. Since I don’t know what these colours look like in real life, I cannot comment on the accuracy of their reproduction. But I note that such a transform can be made using specialist colour grading software that uses LUT manipulation, beyond what RT/ACR/LR/PS can do by default, assuming the colour in question is not clipped to begin with. Also, point #6.

For regular photographers who want to make general-use profiles, I would suggest that you don’t fret about obtaining SSFs. They do not necessarily produce better profiles. General-use profiles in Dcamprof/Lumariver use 2.5D LUTs, which more or less means that they will not map colours differently based on luminance, i.e. they are exposure independent, which is of course desirable. That’s why it is very difficult to achieve a better general-use profile than one made using a CC24 target. For reproduction, it’s a different story. X-rite/Gretag Macbeth CC24 targets are matte finish and not prone to glare and reflection, making the capture process significantly easier, which in turn results in better and easier-to-obtain profiles that are pretty much just as good.
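To make the “exposure independent” property concrete: a 2.5D LUT indexes its corrections on chromaticity only, discarding luminance, so brightening the exposure doesn’t change which correction a colour receives. A minimal Python sketch (the r/g projection below is a generic normalization for illustration, not Dcamprof’s actual internal coordinate system):

```python
import numpy as np

def chromaticity(rgb):
    """Project out luminance, keeping only the ratios between channels.
    A 2.5D LUT looks up its correction with coordinates like these, so
    the same correction applies at any exposure level."""
    s = rgb.sum()
    return rgb[:2] / s if s > 0 else np.zeros(2)

pixel = np.array([0.12, 0.30, 0.08])
one_stop_brighter = 2.0 * pixel

# Both exposures land on the same LUT coordinate,
# so they receive the same colour correction.
same = np.allclose(chromaticity(pixel), chromaticity(one_stop_brighter))
```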

I would also like to point out two more advantages (which I consider to be the most vital) of building your own general-use camera profiles, which were not mentioned, beyond achieving colours that are closer to what you perceive:

  • you can achieve a consistent look across various cameras
  • you can have a consistent starting point regardless of raw converter, assuming they properly support the camera profiles you use

You can also design your own “look” with Dcamprof/Lumariver and thus not be locked in to any software company’s default “look”, i.e. what your image looks like when opened in the raw converter with no adjustments made at all.

In addition to Glenn’s very helpful summary (which makes the content a lot more digestible), if you are still very interested, I also highly recommend taking the time to dive into the extensive details and mysteries surrounding camera profiling written by Anders Torger himself (the creator of Dcamprof/Lumariver). Here are the links:


One point that should be clarified here is if the important data is the relative sensitivity at each wavelength or the overall relative sensitivity.

If it’s the first:

  • you only care about relation between the three numbers for each single wavelength. For example, at 550nm the “green” site is 0.8710/0.0617=14.12 times more sensitive than the “blue”.
  • calibration of the light source is relatively easy, you only need to get the wavelength. For this, a “white” source with spectral spikes is useful (a fluorescent lamp, for example)

If it’s the second:

  • you also care about the relation between the three numbers at different wavelengths. For example, at 550nm the “green” site is 0.8710/0.6364=1.369 times more sensitive than the “green” site at 500nm.
  • calibration of the light source is hard, as in addition to the wavelength you also need the real spectral intensity of the lamp. Spectrometers with properly calibrated sensors (sensitivity vs. wavelength) are not cheap.

I’m guessing the second option is the correct one?

It’s the second one. I’ll clarify that in the original post, thanks for catching it.

The normalization done to all three channels is computed to the ‘max-of-the-max’ value for all three channels, anchored at 1.0. This is the convention I’ve observed in all the research I’ve reviewed, and is consistent with dcamprof input specifications.
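To make the two interpretations and the normalization concrete, a small Python sketch (the green values at 500 and 550 nm are the ones quoted above; the red and blue values are made up for illustration):

```python
import numpy as np

# Hypothetical SSF samples: rows are wavelengths, columns are R, G, B.
# The green values match the figures quoted above; the red/blue values
# are invented for illustration.
ssf = np.array([
    [0.0420, 0.6364, 0.2110],   # 500 nm
    [0.0930, 0.8710, 0.0617],   # 550 nm
])

# First interpretation: ratio between channels at a single wavelength
ratio_within = ssf[1, 1] / ssf[1, 2]    # G vs B at 550 nm, ~14.12

# Second interpretation: ratio of one channel across wavelengths
ratio_across = ssf[1, 1] / ssf[0, 1]    # G at 550 nm vs 500 nm, ~1.369

# 'Max-of-the-max' normalization: one common scale factor for all three
# channels, anchored so the single largest sample becomes 1.0
ssf_norm = ssf / ssf.max()
```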

OK. Then it is very important to know the real spectrum (intensity vs. wavelength) of the light source. Without it, any relative sensitivity measurement you take at the camera sensor is meaningless. That eliminates any cheap non-calibrated spectrometer and general light sources from the list of suitable hardware (although you could use a real black-body light source like a candle, an incandescent bulb, or the Sun, with some careful analysis work).
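As a sketch of that “careful analysis work” with a black-body source: Planck’s law gives the lamp’s relative spectral power from its colour temperature alone, which can then be divided out of the raw readings. This assumes the lamp really behaves as a ~3200 K black body, which a tungsten-halogen only approximately does:

```python
import numpy as np

# Physical constants
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck(wavelength_nm, temp_k):
    """Black-body spectral radiance (Planck's law), arbitrary scale."""
    lam = wavelength_nm * 1e-9
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K * temp_k))

# Relative lamp spectrum over the visible range for an assumed 3200 K
# tungsten-halogen source; at this temperature output rises steadily
# toward the red end of the visible band
wavelengths = np.arange(400.0, 701.0, 10.0)
lamp = planck(wavelengths, 3200.0)
lamp_rel = lamp / lamp.max()

# A measured channel response would then be corrected per wavelength:
#   corrected = measured / lamp_rel
```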

But the SSF itself, not the profile created from it, is illumination independent. I think @ggbutcher uses the term ‘mapping’ when he means ‘profile creation from SSF’. The latter then needs to specify an illuminant for which it is calculated.

Yes and I think this is not what Glenn wants to achieve. As far as I understood it is rather about how monochromatic/saturated colors (or gradients from low saturated to high saturated colors) get rendered in display space. I hope I understood you right, Glenn!

So maybe this:

is the magic (it is not magic, I know) that we can see in the examples above, and not the fact that the profile stems from an SSF? I am very curious about the answer to this!

Indeed, the SSF itself is illuminant independent, I should have been clearer on that, thanks. I was certainly thrown off when the example image used was of mixed lighting including blue LEDs while the illuminant of the profile was almost certainly D50. The rendition of those blues then has nothing to do with the fact that the profile was made using SSF data per se.

I’m not saying SSF data is not useful, in fact I should add that having the SSF is very convenient to simulate all sorts of different things without needing to make carefully photographed captures of colour charts. It allows for a wide range of experimentation on the computer after the data is at hand.

This behaviour is not unique to SSF-derived camera profiles, but rather the way the “looks” are designed in Dcamprof/Lumariver, which can be further user-adjusted in all sorts of ways. I’ve made SSF (from public data) and CC24 profiles for my Canon 5D II, and their behaviour compared was largely similar. Since I did not do the extensive work Glenn is trying to, to attain my camera-specific SSFs, I was uncertain about the validity of the data in relation to my camera, and was rather surprised to note that the way colours were rendered was not significantly different.


Good response; this is why I wrote this as a thread post, rather than an article: A lot of what I related is limited experience, and I wanted this sort of dialog to fully flesh out the concept, even if it pointed out errors. I’m never averse to being called out…

There are a few things you said I’d like to tug on, so I can understand them fully:

What I can’t get past about this is the non-linearity of the falloff of the CFA filters, and the corresponding thing in the CIE 1931 color matching functions. The D7000 Spectral Sensitivity plot above shows this; there’s a lot of mixing going on in the transitions between the filters, and it’s not “linear”, to overload that term.

Now, this is counter-intuitive to my experience. The stage images above illustrate what I’ve experienced between matrix and LUT camera profiles; the LUT profile image has less-saturated blue. It’s not a clipping that’s taking place, it’s a compression of the gamut based on the selected rendering intent. I didn’t include this link in the original post because it uses Adobe Flash; it’s a lesson page from Marc Levoy’s Digital Photography course at Stanford, and it is simply the best illustration of rendering intent behavior I’ve yet encountered:

Assuming relative colorimetric rendering intent, with the matrix profile, all the intent’s behavior can do is to drag the out-of-gamut color along the line to the white point until it’s in gamut. The LUT, from observing the behavior of the stage photos, provides more information to pull out-of-gamut colors into a more gradated result.
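That “drag along the line” behaviour can be sketched as a toy desaturation toward the neutral axis. Real relative colorimetric mapping happens in a connection space against the destination profile’s actual gamut boundary; the naive [0, 1] RGB cube and mean-grey anchor here are simplifications for illustration:

```python
import numpy as np

def toward_neutral(color):
    """Toy relative-colorimetric move: slide an out-of-gamut colour
    along the straight line toward a neutral grey of the same average
    level, stopping where it just fits in the [0, 1] cube. The hue
    direction is roughly kept; saturation drops. Assumes the grey
    anchor itself is in gamut."""
    grey = np.full(3, color.mean())
    d = color - grey
    t = 1.0
    for c, dc, g in zip(color, d, grey):
        if dc == 0.0:
            continue                      # channel is already neutral
        if c > 1.0:
            t = min(t, (1.0 - g) / dc)    # shrink until channel hits 1
        elif c < 0.0:
            t = min(t, -g / dc)           # shrink until channel hits 0
    return grey + t * d

# An over-saturated blue gets pulled just inside the cube:
# toward_neutral(np.array([0.0, 0.0, 1.4])) -> [0.2, 0.2, 1.0]
```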

Thinking this through, what the camera profile provides is the information to transform the colors from camera space to the connection space, typically XYZ. Then, the transform to display or file export space goes XYZ to that space, be it a calibrated display profile or sRGB. So, whether the camera profile is matrix or LUT affects the transform to XYZ; whether the display or export space is matrix or LUT has its own considerations.
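When both profiles are matrix-based, that two-hop chain is just two matrix multiplies. In this sketch the camera matrix is a stand-in (the sRGB primaries, rounded, rather than a real camera profile); the XYZ→sRGB matrix is the standard D65 one:

```python
import numpy as np

# Stand-in camera-to-XYZ matrix (rounded sRGB primaries; a real one
# would come from the camera profile)
CAM_TO_XYZ = np.array([
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
])

# Standard linear XYZ -> sRGB (D65) matrix
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def camera_to_srgb_linear(cam_rgb):
    """Camera RGB -> XYZ (camera profile) -> linear sRGB (display or
    export profile). Each hop is one matrix multiply; the sRGB gamma
    would be applied after this, and out-of-range results are where
    rendering intents come into play."""
    xyz = CAM_TO_XYZ @ cam_rgb
    return XYZ_TO_SRGB @ xyz
```

With these matrices, a neutral camera input maps back to a neutral display value, which is a quick sanity check on any such chain.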

I’ve made LUT profiles from CC24 target shots, and they don’t look any better than matrix profiles made from the same shots. That’s where I came to that conclusion; I’ll have to think about it…

I think Marc Levoy’s Gamut Mapping page illustrates what I’m trying to convey; from what I can see, his transforms are all matrix-based.

As I built the contraption I’m going to post about next, I got to thinking about the time and expense of measuring SSFs even cheaply vice just doing a well-constructed target shot. So, I’m going to borrow an IT8 target and give it a go, to compare results against an SSF profile. In this endeavor I’m primarily a photographer, not a lab rat, and it may be that a target shot of a sufficiently numerous set of patches will give the better “bang for the buck”, pardon my American colloquialism. I think the target I’m borrowing is one of the Wolf Faust targets prepared specially for camera calibration. That endeavor will get its own post…

I wondered about this when I set up my tungsten-halogen lamp to shoot my first spectra. Previously, I did some experimentation with using un-whitebalance-corrected target shot profiles to do whitebalance correction through the internal LittleCMS Bradford (?) transform, so I knew the CMS would handle the difference in white point. After all, for most raw processing, we’re relying on the dcraw-style D65 primary sets… :smiley:

I played with this early on, but I found using such tools requires effort to both get the desired intent and also not screw up the other colors. That might just be me… :stuck_out_tongue:

Thanks for giving me pause to consider; this is clearly for me an exploration of a previously unknown concept, and I’d prefer to be called on the carpet vice letting a misconception persist.


Indeed. I did some off-the-cuff plots of my uncalibrated spectra, and compared to the reference spectra I have for the same camera (acestoraw project, measured with a monochromator), the three curves are definitely inversely biased with respect to a generic tungsten-halogen spectrum power plot. All of the research projects referenced used some sort of power compensation per-wavelength.

Yes, you have.


Perhaps I should also mention that my motivations may be slightly different from yours. I am a landscape photographer working in natural light, so extreme accuracy of highly saturated colours is not necessary, and arguably accuracy of colours isn’t really that important either if one tends to manipulate the image anyway for all kinds of artistic effect. However, my overwhelming preference is for relatively lower-saturation, lower-contrast and closer-to-realistic colours for most of my work, and having a good custom camera profile is a great neutral starting point which I can use as a reference or reality check if I need to.

For this reason I don’t benefit much from using a target with deeply saturated colours, not to mention it would be a nightmare to photograph in daylight outdoors. For anyone who has similar needs to mine, all I really wanted to say is you can get fantastic results with just the CC24; the simplicity of it is quite wonderful given how well it works, and you can have this immediately (assuming you already have the CC24) without needing to figure out the issues of deriving your camera’s SSF. Anders Torger says about as much, and he would know :slight_smile:

Perhaps I may refer you to this, the relevant parts copied below: Making a camera profile with DCamProf

The most basic camera profile converts from the camera’s raw RGB channels to CIE XYZ with a matrix, and once in that space standard color management algorithms can translate further into RGB values suitable for our screens (and finally printers).

“Converting with a matrix” means simple multiplication, like this:

X = R * a1 + G * a2 + B * a3
Y = R * b1 + G * b2 + B * b3
Z = R * c1 + G * c2 + B * c3

That is, we have a 3x3 matrix of constants. If cameras had color filters which matched the XYZ color matching functions exactly, the matrix would be an “identity matrix”:

1 0 0
0 1 0
0 0 1

such that X = R, Y = G and Z = B. A perfect match would also be had if the spectral sensitivity functions were any linear combination of the color matching functions (that is, if there were a matrix different from the identity matrix that still always resulted in the same XYZ values as the XYZ color matching functions). This is called the Luther-Ives condition, but it’s never fulfilled by a real camera.

Instead, a matrix with the smallest error possible is derived. This error can be substantially smaller if we decide for which light this matrix should be optimized. That is, the matrix that works best for daylight will be different from the one for tungsten. This can seem somewhat strange, as the XYZ space just represents cone responses and has no white point, and indeed a Luther-Ives camera could have the same matrix for all types of light. However, when there is no exact solution, the best approximation will need to take the light (illuminant) into account.
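The quoted “smallest error possible” matrix is a least-squares fit, and the Luther-Ives condition is exactly the case where that fit has zero residual. A toy demonstration with synthetic spectra (random curves stand in for real SSFs and colour matching functions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_wavelengths = 20

# Synthetic target curves (stand-ins for the XYZ colour matching
# functions) and a synthetic real-world camera that does NOT satisfy
# the Luther-Ives condition
cmf = rng.random((n_wavelengths, 3))
cam = rng.random((n_wavelengths, 3))

# Best 3x3 matrix M minimising ||cam @ M - cmf||: a nonzero residual
# remains, which is the profiling error the quoted text describes
M, *_ = np.linalg.lstsq(cam, cmf, rcond=None)
fit_error = np.linalg.norm(cam @ M - cmf)

# A synthetic Luther-Ives camera: its curves are an exact linear
# combination of the target curves, so the fit is essentially perfect
mix = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.7, 0.1],
                [0.0, 0.2, 0.8]])
cam_li = cmf @ np.linalg.inv(mix)
M_li, *_ = np.linalg.lstsq(cam_li, cmf, rcond=None)
li_error = np.linalg.norm(cam_li @ M_li - cmf)
```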

Indeed you are right in this instance, though I was not referring to this specific example when I made that general statement, which is pretty much quoting Anders, for what it’s worth. I only mentioned clipping since you mentioned the problem of out-of-gamut colours with matrix profiles, which implies clipping. Anders has mentioned in several places that the LUTs can do rolloff/compression of various colours for a more pleasing effect, and this was how he designed his default look in Dcamprof. This is what you are seeing with your blue LED example.

It’s a pretty nifty demonstration (I had to use Internet Explorer to see the Flash app), but what does gamut mapping have to do with camera profiles? They do not do gamut mapping. They can do gamut compression but not gamut mapping, and they don’t offer gamut compression by default, and arguably shouldn’t; ideally we should have smart gamut compression/mapping done by the raw converter into your chosen output space.

I’m not sure what you mean in the context of camera raw images - there is no out-of-gamut colour for a camera. The rendered raw image might have clipped colours from poor processing choices or because of overexposure.

Indeed LUTs can make such effects (compression, rolloff), but note that this is a non-linear effect while the matrix profile is always linear. Anders apparently went to great lengths to ensure good smoothness for the non-linear LUTs generated by Dcamprof.

This should be of interest, and note the link about the handling of deep blues especially: DCamProf

If you use a matrix-only profile you will get negative values in the extreme range, and unless the raw converter has some special handling for this range it will be clipped flat, in the worst case to black but more commonly to a plain strongly saturated color with no tonality information left. This is perhaps the largest drawback of matrix-only profiles when it comes to general-purpose photography.

If you make an ICC or DNG LUT profile DCamProf will handle those extreme colors through gamut compression on the colorimetric profile level. DCamProf’s native color-correcting LUT will only work within the range where the matrix produces sane output. Outside the valid matrix range a generic gamut compression becomes active. Its purpose is to retain tonality (varying tones) where the camera captures tonality rather than being “correct”, as the profile and camera can’t be correct in any colorimetric sense in that range anyway. Some clipping will still take place, but it’s controlled and it keeps tonality.

The reason some clipping must take place is to be able to make a reasonable “increasing” gradient from neutral to full saturation clipping. Although this clipping doesn’t kill tonality, the optimum would be if no clipping took place at all. Unfortunately the only way to achieve this on some cameras (with extreme blue sensitivity) is to desaturate the whole profile so you get a “longer range” to play with. This can indeed be observed in some commercial profiles. I don’t recommend doing this as it sacrifices performance in the normal range, but DCamProf allows designing this type of profile too. An example can be found in the section describing custom deep blue handling.
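A toy numeric illustration of why controlled compression keeps tonality where flat clipping destroys it (the tanh rolloff and knee value are arbitrary choices for the sketch, not DCamProf’s actual gamut-compression math):

```python
import numpy as np

def hard_clip(x):
    """Flat clipping: everything above 1.0 collapses to the same value."""
    return np.clip(x, 0.0, 1.0)

def soft_compress(x, knee=0.8):
    """Roll values above the knee smoothly toward 1.0 instead of
    clipping flat, so a gradient keeps distinct tones."""
    over = (x - knee) / (1.0 - knee)
    return np.where(x <= knee, x, knee + (1.0 - knee) * np.tanh(over))

# A gradient running past the gamut edge:
gradient = np.linspace(0.9, 1.3, 5)
clipped = hard_clip(gradient)         # last four samples become identical
compressed = soft_compress(gradient)  # all five stay distinct and < 1.0
```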

That is certainly possible given the wide variety of ways to make profiles and evaluate them.

In the context of camera profiles, which do not do gamut mapping, maybe it would be best if we just call it gamut compression as Anders does. BTW this is handled by a different LUT in the profile:

First you need to understand the difference between the “Native-LUT” and the “Export-LUT”: Lumariver Profile Designer has its own internal LUT format, which actually isn’t a plain lookup table, but a mathematical model based on the target measurements and optimization parameters. This is the native LUT. When the profile is exported as a DNG or ICC profile this native LUT is sampled to create a real LUT in the format supported by the profile; this is the Export-LUT. As the exported LUT’s resolution is limited it will not exactly match the native LUT, but it should be very close. If not, there are probably smoothness issues with the LUT, that is, too-sharp bends (which should not be the case with default parameter settings).

The Export-LUT result is mainly relevant when you make a reproduction profile, as, if you apply a tone curve, tone reproduction operator, and gamut compression, there will be additional conversions made in the final LUT.

  • Patch Split: Reference vs Native-LUT — the default setting comparing the target reference and the result of the native LUT.
  • Patch Split: Reference vs Matrix — comparing the target reference with the matrix. Swap between this and “Reference vs Native-LUT” to see which improvements the LUT makes.
  • Patch Split: Matrix vs Native-LUT — compares the linear matrix result directly with the LUT. Ideally the difference should be quite small as that means the LUT is smooth.
  • Patch Split: Reference vs Export-LUT — compares the target reference with the actual exported LUT, which may not match the native LUT 100% due to limited resolution. If you make a reproduction profile (linear curve, no gamut compression) this presents the actual performance of the exported profile.
  • Patch Split: Matrix vs Export-LUT — compares the linear matrix result with the exported LUT, can be used as a final sanity check if you really need a LUT at all or if the adjustments are so small that you can go ahead with only a matrix.

Perhaps, but again with 2.5D general purpose profiles, Dcamprof is not able to make full use of the target by making different corrections to the same colours with different luminance. That’s why there is not much advantage over the CC24 target. I can’t find the examples Anders provided on this just now, but it’s there somewhere.

Thanks too for this brain exercise! It has also helped me to get a better understanding as I try to verbalise my thoughts. I too have much to learn. I’ve not yet explored the extensive customizability of Dcamprof/Lumariver to design my own look for the LUTs or even to modify the matrix from the default calculations. Neither have I spent as much effort as you have in looking into getting SSFs off my camera. It is most interesting and I would like to follow along as much as possible!
