Mapping scene exposure (lux-seconds) to raw pixel values

For example, if I photograph an 18% gray card under standard daylight with standard exposure, then what are the raw pixel values? What about before and after white balancing?

Does anybody know of any published studies of this type? I can’t find much by googling. I might be the only person nerdy enough about both computers and photographic film to care about this particular question.

Obviously it would vary by sensor and ISO and probably some other things.

It seems that exposure (H=Et) and log exposure were fundamental concepts in the film days but I never hear anybody discuss it now (especially in videography/cinematography). I’m a bit surprised that camera manufacturers don’t supply this information. I thought that maybe people with precision lighting needs (product photographers, cinematographers, etc.) would like to know this but I guess everybody is so used to camera firmware or off-the-shelf software handling their raw-to-image conversion that they don’t care. The impression I get–correct me if I’m wrong–is that there is much less pre-planning put into lighting and exposure these days since you can adjust on the fly by checking the histogram or a false color LUT or something.

Roger Clark’s web site has some information on photon counts/FWC, but I couldn’t find anything on this specific question. I tried to derive these characteristics for a 7D but I’m not sure of my results. I don’t have an external light meter and I was using the exposure value equation to estimate exposure from camera settings (on pictures of flat gray fields). Like I said, I don’t think my results are very good.

I’d like to buy a dedicated light meter to work on this and satisfy my own curiosity but those things are stupidly expensive these days. Even 30 year old models are going for much more than they did five years ago, if what I’ve been seeing is a general trend and not an outlier. $100+ is a lot to spend on satisfying curiosity.


For general info:

If you don’t know the well capacity of your sensor you can work backwards from an sRGB image, see:

In sRGB, mid-gray is 118/255 = 0.463 which, accounting for a gamma of 2.2, means (118/255)^2.2 = about 18% of the maximum digital raw level.
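A quick sketch of that arithmetic in Python, comparing the gamma-2.2 approximation with the exact piecewise sRGB transfer function (both land at roughly 18% of full scale):

```python
def srgb_to_linear_gamma22(v8):
    """8-bit sRGB value -> linear fraction, simple gamma-2.2 approximation."""
    return (v8 / 255.0) ** 2.2

def srgb_to_linear_exact(v8):
    """8-bit sRGB value -> linear fraction, piecewise sRGB EOTF (IEC 61966-2-1)."""
    v = v8 / 255.0
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

mid = 118
print(f"gamma 2.2:  {srgb_to_linear_gamma22(mid):.4f}")  # ~0.184
print(f"exact sRGB: {srgb_to_linear_exact(mid):.4f}")    # ~0.181
```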

But it’s not so easy to get back to absolute raw pixel levels from sRGB gray because equal raw RGB values do not necessarily result in sRGB gray with a color camera.

And I believe that some cameras mess with black and white levels when going from the ADC to the card, not to mention CLUTs …


White Balancing is a “red herring” because it is done in post-processing of the raw data.


You could buy this …

… satisfy your curiosity and then resell it.

What you are looking for sounds like a common need in VFX production.
I recommend checking white papers about image-based lighting, like this one:

You also have the much more advanced Physlight system:


P.S. The question is impossible to answer without knowing the exact camera model and its sensor-to-raw pipeline parameters.

ISO is amplification (gain) of the signal coming out of the sensor. The sensor itself has fixed sensitivity and well capacity. So the only parameters that actually change the exposure are aperture and shutter speed (and the optical quality of the lens, I suppose).

More about that here: ISO and Digital Cameras, ISO Myths


Not necessarily true, for example anything using Aptina’s dual conversion gain approach (almost any recent Sony, and many cameras using Sony sensors). These put a capacitor in parallel with the photosite to increase FWC at the cost of read noise.

But as to the OP’s question - I’m a bit rusty, but there are several ways in the standard (ISO 12232) to measure sensitivity, and the one basically everyone uses, Standard Output Sensitivity, is defined by the exposure needed to generate a value of 118 (mid-gray) in the output sRGB JPEG - which depends on the JPEG transfer function. (This is why Sony S-Log2 and S-Log3 increase ISO by 3 stops without changing sensor configuration.)
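From memory (so treat the constants as assumptions to verify against ISO 12232), the speed definitions are all of the form S = K / H, with H the sensor-plane exposure in lux-seconds:

```python
def iso_sos(h_sos):
    """Standard Output Sensitivity: h_sos is the exposure (lx*s) that
    yields sRGB 118 (mid-gray) in the camera's output JPEG."""
    return 10.0 / h_sos

def iso_saturation(h_sat):
    """Saturation-based speed: h_sat is the exposure (lx*s) that just
    saturates the raw channel."""
    return 78.0 / h_sat

# e.g. if 0.1 lx*s produces sRGB 118, the SOS rating is ISO 100:
print(iso_sos(0.1))  # 100.0
```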

Unfortunately there are a lot of variables that go into this - sensor quantum efficiency, sensor architecture (Aptina’s DCG will change the FWC) and hence the meaning of ADC maximum, spectral sensitivity. So even two sensors with identical silicon but different CFAs will behave very differently (with monochrome sensors being the most extreme example of this).

It is frustrating that almost no camera manufacturers publish their SSF - I was shocked, when I started digitizing my old negatives, to find that the datasheet for practically any current film includes a published SSF, which I was able to use to generate color profiles in dcamprof.

I don’t think there’s a way to determine an absolute reference for a camera without a calibrated scene.

Not sure what SSF is but my Sigma DSLR sensor has a published conversion efficiency of 7.14μV/electron and a well capacity of approximately 77,000 electrons per photodiode - but the usual operating point (for restricted non-linearity) corresponds to about 45,000 electrons …

… talking green light at about 550nm.
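A quick sanity check on those numbers (using only the figures quoted above): conversion gain times electron count gives the voltage swing at the photodiode.

```python
conv_gain_uV_per_e = 7.14   # published conversion efficiency, uV/electron
full_well_e = 77_000        # full-well capacity, electrons
operating_e = 45_000        # usual operating point for restricted non-linearity

# Voltage swing = gain * electrons, converted from uV to V:
print(f"full-well swing: {conv_gain_uV_per_e * full_well_e / 1e6:.2f} V")  # ~0.55 V
print(f"operating swing: {conv_gain_uV_per_e * operating_e / 1e6:.2f} V")  # ~0.32 V
```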

In the early days when Foveon was trying to sell sensors a lot of detailed information was published but the flow dried up after Sigma acquired the Company.

Indeed. SSF=Spectral Sensitivity Function, although most published data is not about absolute sensitivity, it’s usually normalized to 0-1, with the important aspect being the relative differences between the channels.

Thanks, I did wonder what SSF means …

So, like QE curves or like response curves (relative or absolute).


Yes, but with a focus on response vs. wavelength.

As @ggbutcher mentioned, a lot of publications are relative and not absolute, which is good enough for a color profile. The film profiles actually ARE absolute when you have Dmin from the characteristic curve, but it isn’t necessary for a color profile.

Fujifilm Superia X-Tra 400 (which seems identical or near-identical to the Superia 400 I shot with back in high school and college):

I’ve generated DCP profiles for Fuji Superia 400 and Kodak Gold 200 (which seems to apply well to Gold 400) from the published SSFs which work great for negative inversion.

But for a digital camera, unless it’s an industrial/scientific unit, there’s nothing absolute.

Thanks for going deeper into the film domain … I never got into film at all.

My main interest is in the Foveon sensor:

Curves are for the sensor with no UV/IR blocking filter. It looks like they are relative to the blue (top) layer, which is responsive well below 400 nm.

SSFs don’t just apply to film, they apply to digital too! For film or digital, it’s the most accurate way to fully profile a camera. If you know the SSF, you can generate a good profile for any illuminant.

That’s an interesting SSF there - each site has VERY broad spectral response, so there’s going to be lots of color channel crosstalk.

That’s the Foveon for ya and the curves are for layers - rather than “sites” a la CFA.

Yes, hence a pretty fierce matrix to get to XYZ:

The response at upper right includes an IR-blocking filter (CM500, IIRC).

It shouldn’t say “Sharpening”, IMO …

ISO is amplification (gain) of the signal coming out of the sensor. The sensor itself has fixed sensitivity and well capacity. So the only parameters that actually change the exposure are aperture and shutter speed (and the optical quality of the lens, I suppose).

By “exposure” I don’t mean the shutter speed and f-stop combination, I mean the quantity also called exposure, illuminance multiplied by time.


Well I’m talking about both the original raw pixel values and the pixel values after white balancing.

I was wondering if that would work, did it?

My fundamental question is based on those sensitometric curves. Which log exposure value (x axis) should 0-valued pixels be mapped to, and which log exposure value should 1-valued pixels be mapped to? That’s what I’m trying to figure out, at least for my camera, for a film-simulating hobby project I’m working on.
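For what it's worth, here is a minimal sketch of the mapping I'd start from, assuming an ideal linear sensor and the ISO 12232 saturation-based speed (H_sat = 78/S); real cameras add a nonzero black level and vary in headroom above the metered point, so these constants are assumptions, not a definitive answer:

```python
import math

def raw_fraction(H, iso_sat):
    """Map exposure H (lx*s) to a normalized raw value in [0, 1],
    assuming a linear sensor with saturation exposure H_sat = 78 / S."""
    h_sat = 78.0 / iso_sat
    return min(max(H / h_sat, 0.0), 1.0)

def log10_exposure_for_value(p, iso_sat):
    """Inverse: which log10 exposure maps to normalized raw value p (0 < p <= 1)."""
    h_sat = 78.0 / iso_sat
    return math.log10(p * h_sat)

# At ISO 100, half of full scale corresponds to H = 0.39 lx*s:
print(raw_fraction(0.39, 100))  # 0.5
```

Under this model, a 1-valued pixel maps to log10(78/S) and a 0-valued pixel has no finite log exposure (it's the black level), which is exactly why the toe of a film curve has no direct digital analogue.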

Yes. Extremely well. Although as @ggbutcher pointed out, for color profiling, you don’t need an absolute reference for SSF, everything is relative (well, unless you’re doing a reproduction profile - which I’m not.)

My focus so far has been not on simulating film and print behaviors (the latter is a critical part of simulating “film” unless you’re dealing with slide film!), just “undoing” film behaviors to try and recover the original scene, and I don’t have a problem with using exposure compensation afterwards.

Unfortunately nowadays so many film stocks are discontinued that you can’t compare the behaviors of a film from one ISO rating to another. They’re probably relatively similar, although I have noticed that Kodak Gold 800 shots seem to have some weird performance differences from Gold 200 that I haven’t quite been able to characterize and won’t be able to unless Kodak Alaris is willing to provide datasheets for legacy products upon request. So far Kodak Gold 400 seems close enough to Gold 200 that I haven’t had any issues.

If you’re forward-simulating the behavior of film, then not only do you have to somehow transform the camera color behaviors to film colors, but then use the spectral content of the light source when generating the print, the spectral response of the film dyes (published for some Fuji products, although it looks like only for slide film!), and the spectral response of the print media.

… and from the previously supplied link:

H = 0.65 × L × t / N² lx·s
where L = subject luminance in cd/m², t = exposure time in s, N = aperture (f-number).

This equation also appears in ISO 12232
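Plugging numbers in (my own illustrative figures, not measured): an 18% card in direct sun at roughly 70,000 lx, shot at "sunny 16" settings, lands almost exactly on the SOS mid-gray exposure of 10/ISO lx·s:

```python
import math

def exposure(L, t, N, q=0.65):
    """Sensor-plane exposure H = q * L * t / N^2 (ISO 12232 form).
    L: scene luminance in cd/m^2, t: shutter time in s, N: f-number."""
    return q * L * t / N**2

# 18% reflectance card under ~70,000 lx, assuming a Lambertian surface:
L = 0.18 * 70000 / math.pi     # ~4011 cd/m^2
H = exposure(L, t=1/100, N=16) # "sunny 16" at ISO 100
print(f"H = {H:.3f} lx*s")     # ~0.10, matching SOS mid-gray H = 10/100
```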