Mapping scene exposure (lux-seconds) to raw pixel values

Thank you, I would never have known :flushed:

In my world, illuminance is usually stated as lm/m^2.

Well I made an ICC profile from a dual-illuminant DCP file.

I stopped trying to diagnose the problem; I found a linear ProPhoto profile here. I’m moving on from this particular snag.

Yes, I tried disabling all of those things except lens corrections. I doubt it’s the lens correction, though, since I’m sampling gray-card readings from the center of the image, where lens profile correction has the least effect. Like I said, I’m using the linear ProPhoto profile I have that works, and I’m forgetting this frustrating little episode.

Since my interest is simulating motion picture film on emissive displays (projectors, TVs, monitors, etc.), I’m simply converting to an output display color space (sRGB, Rec.709, or maybe DCI-P3, but nothing bigger because my monitor only covers 90% of DCI-P3), so viewing illuminants aren’t a part of it. If and when I try simulating color, I’ll have to simulate the spectrum of a darkroom light, but I may not bother. I think simulating black-and-white negative/print films and then making little ad hoc color tweaks (keeping density constant) might be the way to go, since tone and white balance are 95% of it (ignoring lighting).

What I’m working on lately is making sure I map 18% gray to the part of the curve where the second derivative vanishes (allowing some wiggle room), and making sure that gray negative density gets mapped to the equivalent point on the print (again, with wiggle room). Before, I was making ad hoc scaling/offset adjustments to my input pixel values (prior to mapping to log exposure), but I want to really simulate the film process.
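Concretely, the mechanics look something like this toy sketch (a plain logistic standing in for the real characteristic curve, with made-up parameters): find the log exposure where the curvature changes sign, then scale input exposure so 18% gray lands there.

```python
# Toy sketch, not my real curves: find where the second derivative of a
# characteristic curve vanishes, then scale exposure so 18% gray lands there.
import numpy as np
from scipy.optimize import brentq

D_MIN, D_MAX, SLOPE, LOG_H_MID = 0.2, 2.2, 1.8, -1.0   # made-up parameters

def density(log_h):
    """Toy H&D curve: density vs log10(exposure in lux-seconds)."""
    return D_MIN + (D_MAX - D_MIN) / (1.0 + np.exp(-SLOPE * (log_h - LOG_H_MID)))

def d2_density(log_h, eps=1e-3):
    """Numerical second derivative of the toy curve."""
    return (density(log_h + eps) - 2 * density(log_h) + density(log_h - eps)) / eps**2

# The curvature changes sign at the inflection point, so bracket it and solve.
log_h_inflect = brentq(d2_density, LOG_H_MID - 2.0, LOG_H_MID + 2.0)

# Exposure scale k such that log10(k * 0.18) == log_h_inflect.
k = 10.0 ** log_h_inflect / 0.18
print(f"inflection at logH = {log_h_inflect:.3f}, exposure scale = {k:.3f}")
```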

Since I’m working with 1080p raw footage from my Canon 5D Mark III, grain is low on my list of priorities. Should I get a camera that does 4K RAW I’ll look into grain and MTF.

How are you doing halation? I’ve tried using published models for halation (apparently a single halation point actually makes a ring shape), ad hoc exponential decay models, and Gaussian models. I’m not trying to do anything too sophisticated with edge detection, but I’m trying a few crude tricks to keep the halation visible only in the shadows. It actually looks good on a few images I’ve tried, for things like windows and sunlight through treetops, and like barf on another image that has intense highlights on a brick pattern.
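One crude trick of that flavor, sketched here with made-up thresholds and weights (not exactly what’s in my script, just the idea): build a glow from the clipped highlights and let it show mostly where the base image is dark, with red weighted heaviest.

```python
# Crude illustration only (made-up numbers): spread the clipped highlights
# with a wide blur, then let the glow show mostly where the base image is
# dark, with red affected the most.
import numpy as np
from scipy.ndimage import gaussian_filter

def shadow_halation(img, threshold=1.0, sigma=25.0,
                    channel_gain=(0.30, 0.10, 0.03)):  # R, G, B weights (hypothetical)
    """img: scene-referred linear RGB, shape (H, W, 3); highlights may exceed 1.0."""
    luma = img.mean(axis=2)
    glow_src = np.clip(img - threshold, 0.0, None)      # only the clipped highlights
    glow = np.stack(
        [gaussian_filter(glow_src[..., c], sigma) for c in range(3)], axis=2
    )
    shadow_mask = np.clip(1.0 - luma / 0.18, 0.0, 1.0)[..., None]  # 1 in deep shadow
    return img + glow * np.asarray(channel_gain) * shadow_mask
```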

My brother. This is actually my first ever GUI program, and I didn’t know about the model-view-controller thing.

Oh man you should just give up.

I kid, I kid.

Yes, that’s where I got all of my data from. I used this app to digitize the plots (it’s not exactly a fast process) for Double-X and Plus-X at all the push/pull times, as well as 2302 (I think) print film. I fit the data points sampled from the datasheet curves to a sigmoid function I came up with; it has five parameters that scipy determines. Then I use a cubic spline to interpolate. The reason I fit the data points to an equation instead of just interpolating the data points directly (which I could do) is so that I could (a) better understand the problem and (b) interpolate smoothly between curves (so I don’t have to pick 5 minutes or 6.5 minutes, but any time in between).

Also it allows me to “create” a new virtual film stock by picking a D_min, D_gray, D_max, gamma, and minimum exposure (“black” point).
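The fitting itself is nothing exotic, something like scipy’s curve_fit on the digitized points. The stand-in sigmoid below is not my actual function; it just shows the shape of the workflow:

```python
# Stand-in sketch: my actual five-parameter sigmoid isn't shown here, this
# generalized logistic just illustrates the fit + interpolation workflow.
import numpy as np
from scipy.optimize import curve_fit

def hd_curve(log_h, d_min, d_max, slope, log_h_mid, asym):
    """Five-parameter sigmoid: density as a function of log10 exposure."""
    return d_min + (d_max - d_min) / (1.0 + np.exp(-slope * (log_h - log_h_mid))) ** asym

# Points digitized from a datasheet curve (fake numbers for illustration).
log_h = np.array([-2.5, -2.0, -1.5, -1.0, -0.5,  0.0,  0.5])
dens  = np.array([0.22, 0.30, 0.60, 1.10, 1.65, 2.00, 2.15])

p0 = [0.2, 2.2, 3.0, -1.0, 1.0]                       # rough initial guess
params, _ = curve_fit(hd_curve, log_h, dens, p0=p0, maxfev=10000)

# Development-time interpolation: fit each published time, then blend the
# fitted parameter vectors (or the resulting curves) for any time in between.
# A "virtual" film stock is just a parameter vector picked by hand instead.
```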


For now, I am using a simple, minimal model based on Gaussian blurring. Halation mainly comes from light being reflected back from behind the film after passing through it, so it mainly affects the last layer, i.e. the red one. What I do is blur the “effective exposure” RGB image before the non-linear transform through the characteristic curves, taking care that this happens in the scene-referred linear data. I blur the three channels with kernels of different sizes and add them to the unblurred image using different amounts per channel, with red being the most affected channel in both size and amount.
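In code it amounts to something like this (the sigmas and amounts below are placeholders, not my real numbers):

```python
# Sketch of the blur-and-add step described above; sigma values and amounts
# are placeholders, not the real ones.
import numpy as np
from scipy.ndimage import gaussian_filter

def add_halation(exposure_rgb,
                 sigmas=(18.0, 8.0, 4.0),     # kernel size per channel, red widest
                 amounts=(0.20, 0.05, 0.02)): # blend amount per channel, red strongest
    """exposure_rgb: scene-referred linear effective exposure, shape (H, W, 3)."""
    out = exposure_rgb.copy()
    for c, (sigma, amount) in enumerate(zip(sigmas, amounts)):
        blurred = gaussian_filter(exposure_rgb[..., c], sigma)
        out[..., c] = exposure_rgb[..., c] + amount * blurred
    return out  # this then goes through the per-channel characteristic curves
```

Dividing each channel by (1 + amount) afterwards would keep mid-gray roughly where it was, if that matters for the gray-point mapping.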

This is an exaggerated example, where I created a nonlinear ramp with 18% gray in the middle, and a maximum value of 800%.


I could experiment with different kernels and decays, that could be fun.

Probably that will be the final outcome, and I will slowly converge on simulating a generic film instead of sticking rigorously to the published data. :grin:

That actually sounds very cool! I’m very curious to follow the development! :slight_smile:

I found that one to have some really weird UI flaws in Chrome, with the developers seeming to have an attitude of “pay us if you want this to go away”. WebPlotDigitizer - Copyright 2010-2024 Ankit Rohatgi worked much better for me. The only thing the app you linked has that would be a big improvement is autotracing, but that’s a paid feature and I’m not digitizing enough to make it worth it! WPD, which I linked, is open source AND available offline without paying.

Edit: rgb_led_filmscan/density_plot.py at main · Entropy512/rgb_led_filmscan · GitHub may be useful for experimenting with modeling film density behaviors. I implemented fitting of the enhanced model I started working on in RawTherapee to real data.

One thing I haven’t figured out how I want to handle yet (partly because I’m a little less concerned about it for RT’s negative inversion even though it might be a nice-to-have) is that Kodak Gold 200 has a mild but noticeable “knee” in its response, while Fuji Superia X-Tra 400 does not. My model fits Superia very well up to the knee, while it only fits Kodak well below the knee. Already digitized CSV files for Kodak Gold 200 and Superia 400 density are in that repo.
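Purely hypothetical (not what density_plot.py does): one way a mild knee could be represented is by blending two straight-line slopes with a smooth hinge, so the curve bends gently above a knee exposure.

```python
# Hypothetical illustration only: a straight line of slope_lo below the knee
# that bends smoothly toward slope_hi above it, via a softplus-style hinge.
import numpy as np

def density_with_knee(log_h, d_base, slope_lo, slope_hi, log_h_knee, softness=0.15):
    """Density vs log10 exposure with a soft transition at log_h_knee."""
    hinge = softness * np.logaddexp(0.0, (log_h - log_h_knee) / softness)  # smooth max(0, x)
    return d_base + slope_lo * (log_h - log_h_knee) + (slope_hi - slope_lo) * hinge
```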


The resulting ICC profile can’t be ‘dual-illuminant’; only one of the two matrices/white points in the DCP was used. Per the specification, there’s only room for one set of data in an ICC profile.

In the Adobe system, DNG/DCPs can contain two color matrices and corresponding white points. They’re usually D65 (daylight) and StdA (tungsten), and the software using them is supposed to take the two matrices and interpolate a matrix corresponding to the calculated color temperature for that image. To do the same with ICC profiles you’d need two profiles, one for each white point, and software to do the same tag extraction and interpolation. The only software I know of even contemplating that, commercial or open-source, is vkdt.
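Roughly, the interpolation step looks like the sketch below. This is simplified (it ignores the forward matrices, calibration matrices, and the rest of the DCP machinery), and it assumes my reading of the spec is right that the weighting is done in inverse color temperature:

```python
# Simplified sketch of the two-matrix interpolation; not a full DNG/DCP
# implementation, just the blending idea.
import numpy as np

def interp_color_matrix(cct, matrix_std_a, matrix_d65,
                        cct_std_a=2856.0, cct_d65=6504.0):
    """Blend the two calibration matrices for the image's estimated CCT (Kelvin)."""
    cct = np.clip(cct, cct_std_a, cct_d65)
    # Linear weight in inverse-temperature (mired) space between the two points.
    w = (1.0 / cct - 1.0 / cct_d65) / (1.0 / cct_std_a - 1.0 / cct_d65)
    return w * np.asarray(matrix_std_a) + (1.0 - w) * np.asarray(matrix_d65)
```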

I’ve seen no quantitative analysis of the benefit of doing dual-illuminant. I’m a bit skeptical from the start due to the dodgy divination of color temperature from an encoded image. If I wanted good color reproduction for a given application, I’d make an ICC profile from a ColorChecker target shot taken under the same illumination as the scene, using the measured (or asserted, if the lighting is artificial) color temperature from the scene. THAT would be the gold standard…


I used this version of WebPlotDigitizer (possibly an older version) that has an automatic mode that works quite ok for simple plots that don’t cross too much.

Very interesting, thank you for sharing! :smiley:

Weirdly, the documentation for WPD hints at an automatic mode - I just don’t remember seeing anything about it. It looks like it might not work well if a graph has gridlines of the same color though.

Edit: It works OK with gridlines; you need to do a lot of point adjustments where the line you’re interested in crosses a gridline, but it’s better than manually clicking everything.

Interesting. Could you post the profile here?

Me too. If I’m bothered, I use a custom WB off of a Kodak gray card. Otherwise, the in-camera WB presets work well enough for my purposes …


Sorry, I just meant that I made an ICC profile from a dual-illuminant DCP profile. I went with D65. I could swear I left this comment already, but it isn’t loading right now…


I’ve always been puzzled by Adobe’s “dual-illuminant” approach, as opposed to degrees K plus tint.

Adobe … proud users of the ‘Melissa’ bastardized Kodak ROMM color space and inventors of the much-vaunted ‘AdobeRGB (1998)’ space with its incorrect primary … :upside_down_face:

Bruce Lindbloom said:

I have heard the rumor that the green primary for Adobe RGB came about by the accidental use of the NTSC green primary, used incorrectly since NTSC is defined relative to Illuminant C while Adobe RGB is defined relative to D65. After the mistake was discovered, Adobe decided to keep it since their experiences with this accidental reference space were favorable.

You can see some early examples here

Also, you confirmed something I was wondering about: whether I should embed the halation as part of the original input exposure or not. When I was toying with the concept, I was adding it after the fact and not getting very satisfactory results, so when I get back to this I think I’ll try it that way.

Because that would require coming up with a formula for every single parameter of the profile that was a function of color temperature and tint, as opposed to simple interpolation between two points based on color temperature.

Basically, you’d no longer have a DCP profile, but rather all of the metadata required (most notably the camera SSFs) to generate an illuminant-specific profile for any given illuminant.

It would be great to be able to do that, but the reality is that many cameras are only profiled from a color target, and camera manufacturers don’t want to publish SSFs, so the illuminant-interpolation approach used by Adobe for DNG and DCP is an effective compromise that works for color-target-derived profiles, since black-body illuminants are predictable as a function of color temperature. (Tint, on the other hand, rapidly diverges from the vast majority of real-world illuminant scenarios. Fluorescent lights are the closest, but there are vast differences in their spectral properties these days that don’t play nice with an interpolation technique.)


OK. I’m not a great fan of Adobe, so that makes me glad that my 20-yr old Sigma SD9 has a 3x3 color correction matrix for each and every illuminant preset (WB).

Pardon my pedantry - with my 84-yr old color vision, I probably wouldn’t notice the difference these days …


Just to be clear, if we’re talking about ISO’s Standard Output Sensitivity, the value is 118/255.
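For what it’s worth, 118 is just 18% linear gray pushed through the sRGB transfer curve and quantized to 8 bits (assuming that’s what the 118/255 figure refers to):

```python
# Quick arithmetic check: 18% linear gray through the sRGB transfer curve,
# quantized to 8 bits.
linear = 0.18
srgb = 1.055 * linear ** (1 / 2.4) - 0.055   # sRGB encoding for linear > 0.0031308
print(round(srgb * 255))                     # -> 118
```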

That would be for grayscale.

For color we go: