Mapping scene exposure (lux-seconds) to raw pixel values

It is not clear to me what is meant by “the pixel values”. For example, on my screen, “the pixel values” change every time I move a slider.

In the OP, you said:

Do you understand that white balancing does not affect the raw pixel values?

No, but the raw pixels do not represent a true white-balanced color. There are two things going on: 1) the spectral skew of the illuminant, which is what we normally think of as “white balance”, and 2) the spectral skew of the color filter array in front of the sensor. Consider this SSF plot for a Nikon Z 6:

[image: spectral sensitivity functions (SSF) for a Nikon Z 6 with a 50mm lens]

The channels don’t all peak at 1, so there’s a bias in the overall response…

Pedantically, the illuminant is the source of the illuminant white-balance skew, so the raw pixels do record that skew in their measurements.


For example, with almost any sane illuminant, despite that SSF having a higher blue peak, in reality the green channel almost always counts significantly more electrons than blue or red.

Further complicating this is that lux-seconds relates to the number of photons through the energy of each photon, which is wavelength-dependent.
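
A minimal numeric illustration (pure physics, nothing camera-specific; note too that lux is a photometric quantity, weighted by the luminous efficiency function V(λ), so a full lux-seconds-to-photons conversion also needs the source's spectrum):

```python
# Photon energy E = h*c/lambda: equal radiant energy means more photons at
# longer wavelengths, so equal energy never implies equal photon counts.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

def photons_per_joule(wavelength_nm):
    """Photons carried by 1 J of radiant energy at the given wavelength."""
    return (wavelength_nm * 1e-9) / (h * c)

print(photons_per_joule(650) / photons_per_joule(450))  # ~1.44: more red photons per joule
```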


The OP asked specifically about raw pixel values, with only a vague reference to white balancing. Also please see the title of this thread.

I hope that the OP knows what a “spectral skew” is, especially for an illuminant.

With the original post being a bit vague, I have purposely avoided delving into white balance, due to the diversity of opinions about it. After all, “white balance” is almost as bad as “exposure” or “resolution”, eh?

Yeah, I just came up with that while typing the post. The SSF chart I posted for the camera should illustrate that; for an illuminant, a spectral power distribution (SPD) chart is the appropriate equivalent. Here's a more colorful one for a tungsten bulb:

[image: spectral power distribution of a tungsten bulb]
ref: https://en.m.wikipedia.org/wiki/Incandescent_light_bulb#Color_rendering
Credit: Thorseth
License: https://creativecommons.org/licenses/by-sa/4.0/


Thanks … I love that “radient” … :slightly_smiling_face:


I'm sorry, I should have been clearer. I'm talking about the “exposure” (illuminance integrated over time) that appears on the horizontal axis of film sensitometric curve plots. So H = E·t in lux-seconds. Luminance is cd/m²; illuminance is cd·sr/m² (i.e. lux).

Actually, these plots give density (i.e. negative log transmittance) as a function of log exposure, and this is often overlooked in discussions and tutorials about getting a “film-like curve” in a digital image.
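
(In symbols, using only the definitions above: $H = E\,t$, $D = -\log_{10} T$, and the characteristic curve plots $D = f(\log_{10} H)$.)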

If you had a tiny little illuminance meter on the sensor where a pixel should be, I'm curious how the same scene exposure (E·t) would map to raw pixel values, because then I could work out what density it should map to. Right now I have to guess.

BTW, I'm using “raw pixel values” casually to mean any linear transformation of the raw pixel values, including white balance, normalization, black-level subtraction, etc., because for the purposes of my question they form an equivalence class. Again, apologies for not being clear enough.
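
To make the question concrete, here is a minimal sketch of the model I'm implicitly assuming: the sensor counts linearly between black level and clipping, with some unknown overall gain g (a hypothetical constant folding in ISO, aperture, and quantum efficiency; pinning it down is exactly my question):

```python
import numpy as np

def raw_from_exposure(H, g, black, white):
    """Hypothetical linear sensor: raw = g*H + black, clipped at saturation."""
    return np.clip(g * np.asarray(H) + black, black, white)

def log_exposure_from_raw(raw, g, black):
    """Invert the model to recover log10(H) for the film curve's x-axis."""
    return np.log10((np.asarray(raw) - black) / g)
```

Any of the linear transformations above (white balance, normalization, black-level subtraction) only changes g and black, which is why they form an equivalence class for this purpose.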


Hi, I've also been working on a negative film simulation + RA4 printing in the last few months, and it has been a lot of fun! I started showing the very first results in the latest Play Raw entries on the forum.

I found my way around the problem; let's see what you think of it.
The problem I tried to solve was converting linear RGB values from a digital picture (coming from a raw processor like darktable) to virtual raw analog values to use as the x-coordinates of the characteristic-curve (D-logE) plots. The linear RGB picture already includes demosaicing and white balance, plus conversion into a suitable standard linear RGB space.

I assume that the characteristic curves are measured using a densitometer (Status A or Status M) on a neutral gray target at several exposure levels.

I use an algorithm to convert linear RGB values to a virtual spectral distribution of reflected light. I start by feeding the algorithm a neutral grey value mimicking a grey target, e.g. arbitrarily [0.184, 0.184, 0.184] in a linear RGB color space. As the illuminant I use the intended white balance of the film, e.g. D55. For now I use only linear sRGB input, because the fastest spectral-reflectance-recovery algorithm I found can only deal with sRGB.

Then I can take the dot product of the spectral sensitivities with the simulated spectral reflectance of the neutral grey target. From this I get three values that I use to normalize the exposures evaluated through the same procedure for arbitrary linear RGB values, thereby setting what counts as exposure = 1 (log-exposure = 0) on the x-axis of the characteristic curve.
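
If it helps make the normalization concrete, here is a numpy-only sketch of that step; `reflectance_from_rgb` stands in for whatever spectral-recovery algorithm is used, and `ssf` and `illuminant` are assumed to be sampled on one common wavelength grid (all hypothetical names):

```python
import numpy as np

def channel_exposures(rgb, ssf, illuminant, reflectance_from_rgb):
    """Integrate sensitivity * illuminant * recovered reflectance per film layer."""
    reflectance = reflectance_from_rgb(rgb)  # shape (n_wavelengths,)
    return ssf @ (illuminant * reflectance)  # shape (3,): one exposure per layer

# The grey target defines exposure = 1 (log-exposure = 0) on the curve's x-axis:
# norm = channel_exposures(np.array([0.184] * 3), ssf, illuminant, recover)
# log_exposure = np.log10(channel_exposures(rgb, ssf, illuminant, recover) / norm)
```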

Later in the pipeline, when I try to reproduce the diffuse spectral density for midgray (also published) from the densities of the single CMY channels, I need to apply another linear correction to the channel densities to make it fit properly.


[plots]
Data from Kodak Vision3 50D.

You can see in the plot on the right that the dashed line is the simulated diffuse density of a grey target at a suitable exposure.

Maybe there are better ways to do all these steps, so I am very interested in this discussion! :smiley:


I’m too sleep-deprived and that was too quick for me to grasp your method.

For my part, I’m focusing almost entirely on tone mapping (mapping linear scene-referred pixel values to a range of exposures (scene illuminance x shutter time, not any other “exposure” you may have heard of), mapping that to log exposure, which maps to negative density, which maps to negative transmittance, which maps to darkroom exposure (with choice of where on the print film’s x-axis to put your ~4.4-stop range of darkroom exposure), which maps to print density, which maps to print transmittance, which maps to luminance levels on a projection screen (I am only interested in simulating motion picture film, not paper prints). Stock-specific color reproduction, grain, MTF, etc. are down the road. I also toyed with a few halation models that can look great on some pictures and very artificial and fake on others.

Right now I'm dealing with two difficulties: sRGB is too small, so I'm bumping up against its limits a lot, and it's hard to get good linear output from RawTherapee (which may be a subject for another thread in that forum). I have found that I can get decent results with sRGB for small-gamut scenes. I can get much better results using ProPhoto RGB, but I don't like the imaginary primaries. I tried Rec. 2020 and it sucked (everything was way too high-contrast and oversaturated out of the gate). And in general, finding wide-gamut ICC profiles that are linear (or have an easily reversible transfer function), work as output profiles, sit in a color space that takes well to tone mapping, and are handled well by available Python libraries like colour is tricky. I can open the same TIFF with an embedded profile in two applications and two Python libraries, and the same pixel will have different RGB percentages in all of them. It's frustrating.

Right now I'm trying to find out how to map 18% gray to linear pixel values in a variety of color spaces. I'd also need to map some lower or upper exposure bound to linear pixels, so I can get the slope and the intercept; then I'd have a big piece of my puzzle. Up till now I've been making ad hoc scaling/offset adjustments to get that initial mapping onto the negative's x-axis looking right.
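
The slope/intercept step itself is just two anchor points. Everything below is hypothetical numbers, since the right log exposure for middle gray is precisely the open question:

```python
import numpy as np

def logH_mapping(pixel_mid=0.18, logH_mid=-1.0, pixel_hi=1.0, logH_hi=-0.255):
    """Solve logH = a*log10(pixel) + b from two (pixel, logH) anchor points.

    For a truly linear camera, a should come out ~1 (here log10(1/0.18) ~ 0.745
    spans the same range on both axes, so a = 1 by construction)."""
    a = (logH_hi - logH_mid) / (np.log10(pixel_hi) - np.log10(pixel_mid))
    b = logH_mid - a * np.log10(pixel_mid)
    return a, b

a, b = logH_mapping()
log_exposure = lambda pixel: a * np.log10(pixel) + b
```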

When I’ve had more sleep I’ll talk more about my method. Can you do LaTeX in comments here?

You can write equations bracketed by double dollar signs:

4 = 2 + 2

I forget what markdown convention…

In my hack software I messed with intermediate working profiles for a bit, but finally settled on not using any, just working the camera-recorded values until file export for the final rendition. Works just fine, if I make sure the camera profile has good black and white references.

In this case, generate the profile yourself: either using LCMS2 (see Elle Stone's well-behaved ICC profile repository for some good example code), with the lcms support in the imagecodecs Python package (see pyimageconvert/imagecodec2tif.py at master · Entropy512/pyimageconvert · GitHub for one example), or with RT's ICC profile generator.

Keep in mind, again, that unless you are talking about slide film, you also need to take into account the properties of the photo paper onto which the negative is printed. Given the rather mild “knees” I've seen in many film datasheets for color negative film, it's my opinion that the traditional filmic “knee” in most film simulations originates not from the knee of the film itself, but from the toe of the print.

Of course slide/color reversal film is completely different here. That has both a significant toe and knee.
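
The negative-plus-print point above is easy to check numerically: compose a nearly straight negative curve with a print curve that has a pronounced toe, and a shoulder appears in the end-to-end response even though the negative alone has almost none. Toy logistic curves below, not fitted to any real stock:

```python
import numpy as np

def dlogE(logH, D_min, D_max, gamma, pivot):
    """Toy characteristic curve: a logistic in logH between D_min and D_max."""
    return D_min + (D_max - D_min) / (1.0 + 10.0 ** (-gamma * (logH - pivot)))

logH = np.linspace(-2.0, 1.0, 200)                # scene log exposure
D_neg = dlogE(logH, 0.2, 2.2, 0.7, -0.5)          # gentle, nearly straight negative
logH_print = -D_neg + 1.0                         # print exposure through the negative
D_print = dlogE(logH_print, 0.1, 2.5, 2.0, -0.8)  # print with a strong toe and shoulder
# Scene highlights land on the print's toe, producing the familiar "knee"
# in the end-to-end D_print vs. logH response.
```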

As far as getting linear output in RT goes, iccstore: Allow loading profiles from user-writable configuration directory by Entropy512 · Pull Request #6645 · Beep6581/RawTherapee · GitHub should have helped, and I'm pretty sure that's in 5.10.

Testing:

$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$

Okay it was single dollar signs.

That's what I would love to do. Inspired by this thread, I've made a dual-illuminant ICC profile for my camera (5D Mark III) based on a DCP profile I made earlier from ilia3101's data; I didn't know you could do that. I'm using the DCP file for my input, ProPhoto as my working space, and the ICC-from-DCP as my output profile. The output I get from it is nonlinear, sadly; RawTherapee seems to be applying a gamma expansion or something. If I raise the outputs to the 1/2.4 power the result is much more linear, but I'd like to know exactly what's going on.

It's really frustrating that rawpy/LibRaw doesn't have AMaZE demosaicing or denoising that works; the output I get from it is like early-2000s level. I'm wondering if I should look into using darktable as my demosaicer/denoiser; I need something I can script, since this is eventually going to be run on CinemaDNG files.

Using a middleman to handle the demosaicing and denoising also means I have to store the images as TIFFs, which, as I said, is problematic because no two ways of turning a TIFF into a numpy array seem to agree on exactly what to do. I'm trying to get OpenImageIO working, but I don't know how to set up the Python bindings; I think I'll have to build it myself. Ugh, I hate spending time on things that should be simple.

[quote="Entropy512, post:32, topic:43338"]
In this case, generate the profile yourself: either using LCMS2 (see Elle Stone's well-behaved ICC profile repository for some good example code), with the lcms support in the imagecodecs Python package (see pyimageconvert/imagecodec2tif.py at master · Entropy512/pyimageconvert · GitHub for one example), or with RT's ICC profile generator.
[/quote]

Thanks, I'll look into this. I've tried loading her linear profiles before, but I get nonlinear output from RawTherapee using them. Even when everything says the profile is linear (even profiles I make in RawTherapee's profile maker, with iccdump reporting linear, and with the camera matrix selected), I still get nonlinear output. I even tried using a linear XYZ ICC profile as my output profile along with the camera matrix, and the output was still not linear (i.e. not a linear function of the raw pixel values, which I can get from rawpy).

It seems I can get close by applying a gamma compression to RT's output, but it's frustrating that I don't know exactly which gamma (sRGB, Rec. 709, or something else?).
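
One way to stop guessing: fit the slope of log(output) against log(raw) over the mid-tones. A pure power law gives a constant slope (≈ 0.417 for 1/2.4), while a piecewise encode like sRGB's drifts near the shadows. A rough sketch (both arrays normalized to [0, 1]):

```python
import numpy as np

def estimate_gamma(raw, out, lo=0.05, hi=0.95):
    """Fit exponent g in out ~ raw**g over the mid-tones (both in [0, 1])."""
    raw, out = np.asarray(raw), np.asarray(out)
    mask = (raw > lo) & (raw < hi) & (out > lo)
    g, _ = np.polyfit(np.log(raw[mask]), np.log(out[mask]), 1)
    return g  # ~1.0 means linear; ~0.417 suggests a 1/2.4 power encode
```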

I have one profile that produces linear output directly from RT without fiddling, and it's one I made using these directions (I changed it to get a linear result, though).

[quote="Entropy512, post:32, topic:43338"]
Keep in mind, again, that unless you are talking about slide film, you also need to take into account the properties of the photo paper onto which the negative is printed. Given the rather mild “knees” I've seen in many film datasheets for color negative film, it's my opinion that the traditional filmic “knee” in most film simulations originates not from the knee of the film itself, but from the toe of the print.
[/quote]

Correct, except I'm simulating motion picture film, so I simulate black-and-white print film (I apply it to the color channels, though, and it looks great for the most part).

[quote="Entropy512, post:32, topic:43338"]
Of course slide/color reversal film is completely different here. That has both a significant toe and knee.
[/quote]

If/when I start trying to implement color, I’ll probably start with Ektachrome.

[quote="Entropy512, post:32, topic:43338"]
As far as getting linear output in RT goes, iccstore: Allow loading profiles from user-writable configuration directory by Entropy512 · Pull Request #6645 · Beep6581/RawTherapee · GitHub should have helped, and I'm pretty sure that's in 5.10.
[/quote]

I've tried adding all sorts of profiles that are described as “linear” but that don't give linear output. I have to apply a nonlinear global tone map to linearize RT's output (though I'm often guessing what that map is, or trying to use colour).

Why don’t my quotes work?

[quote="Preimage, post:36, topic:43338, full:true"]
Why don’t my quotes work?
[/quote]

I think the closing [/quote] needs to be on a separate line…


That's odd; I've never had any issue with nonlinear output from RT. I rely on a linear output profile to feed stuff to dcamprof - WIP: Bundled profile for reference image by Entropy512 · Pull Request #6646 · Beep6581/RawTherapee · GitHub

Possible causes:

- If your camera has a DCP profile available with a HueSatMap, the HueSatMap can remap luminance as a function of color. Also, RT applies an additional lookup table (the LookTable) from the DCP that I think it should not - DCP profile size inflated with redundant data · Issue #6467 · Beep6581/RawTherapee · GitHub
- I'm assuming you turned off the curve in Exposure.
- Lens profile correction could cause a perception of nonlinearity.

Don’t know how you’d do that. In an ICC profile, there’s only one white point tag.


It was probably also my explanation that wasn't very clear. I will try to write something down to post here on the forum when the pressure from work slows down a bit. :grin:

I am simulating a very similar pipeline: starting from linear scene-referred RGB, then simulating the density of a negative, projecting light through it in the printing process, then creating density on the printing paper, and finally a scan under a viewing illuminant. Plus halation and grain. All of it in some very hacky Python code. I still have several issues, e.g. Portra Endura printing paper severely lacking greens; I'm not sure if that's due to limitations of the data from Kodak datasheets or to things I'm doing wrong.
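
In case it helps compare notes on the final scan step, here is a numpy sketch of turning per-channel dye densities into XYZ under a viewing illuminant; `dye_density`, `illuminant`, and `cmfs` are assumed to be sampled on one common wavelength grid (all placeholder data):

```python
import numpy as np

def scan_to_XYZ(cmy_density, dye_density, base_density, illuminant, cmfs):
    """Per-channel densities -> spectral transmittance -> XYZ under a viewing light.

    cmy_density: (3,) channel densities; dye_density: (3, n) spectral dye curves;
    base_density, illuminant: (n,); cmfs: (3, n) color matching functions."""
    D = cmy_density @ dye_density + base_density  # total spectral density
    T = 10.0 ** -D                                # spectral transmittance
    k = 100.0 / np.sum(cmfs[1] * illuminant)      # normalize the illuminant's Y to 100
    return k * (cmfs @ (T * illuminant))          # (X, Y, Z)
```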

By the way, I found this website (Tech Documents • 125px) with datasheets for current and discontinued products from Fuji, Ilford, Kodak, and Polaroid.
