Any interest in a "film negative" feature in RT ?

Math: Part 0

I’ve read through all the messages on this thread, and it seems that there is a lot of confusion on the math of what happens during the scan. I suspect that this confusion leads to many decisions which lower the quality of the output, make it hard to manually tune the process, and complicate the UI.

To make the long story short, there are the following phenomena (ⓐ ⓑ ⓓ ⓔ match the explanations in the longer versions):

  • ⓐ There is cross-talk between the R,G,B channels in the camera. Compensating for this (by a linear mix) may significantly improve “linearity” of the overall process.
  • ⓑ There is cross-talk between the 3 pigments in the film. Assuming that ⓐ is done well enough, one can compensate for this by a linear transformation in log color space. (This has a chance to produce much better color fidelity.)
  • ⓓ Each pigment has a non-linear response⁰⁾ in shadows (and for some types of film, in the highlights as well). One should compensate for this both for better colors/contrast in shadows, and:
  • ⓔ … for the calculation of the powers in the power laws (this is currently done via picking light-gray and dark-gray patches).

⁰⁾ Here we assume that the power law is already applied, so the response is ALREADY linear in midtones. So “this non-linearity” is meant to be “on top of the power law”.



“In reality”, there are significant non-linearities involved in parts ⓐ and ⓑ. However, I think I found “a shortcut” which

  • uses only linear recalculations (combined with conversions to and from log color space) to separate colors (actually: to separate pigments).
  • has a good chance to be a MUCH better approximation than what is currently used in RT.
  • seems to be simple to explain:
    (α) Remove the camera colors’ cross-talk by mixing RAW channels with (largish) negative coefficients;
    (β) Take logarithms;
    (γ) Remove pigments’ cross-talk by mixing new color channels with (largish) negative coefficients;
    (δ) Correct the curves¹⁾ of non-linearity of the density of each pigment (I mean the curve exposure → density);
    (ε) Examine gray points ⇒ multiply by a suitable coefficient for each pigment;
    (ζ) Convert back by taking exponentials.
    After this the significant part²⁾ of non-linearities is removed, and one can process this as any other RAW image.³⁾ (See the code sketch after the footnotes below.)

¹⁾ If an unexposed part of the film is accessible, this might be done automatically (given the documentation for the film).
²⁾ If the light source has 3 narrow spectral peaks, this may even remove ALL the non-linearities!
³⁾ Conjecture: with a fourth (infrared???) channel, this would also get the info about the dust.
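
To make these steps concrete, here is a minimal sketch in Python/numpy. Every constant in it (the mixing matrices, the gray-point factors, the toe correction) is an illustrative placeholder rather than a calibrated value; it only shows the shape of the computation.

```python
import numpy as np

# Placeholder matrices/constants: real values must be calibrated per
# camera, film and light source.
CAM_MIX = np.array([[ 1.0,  -0.1,   0.0 ],    # (α) cancel camera cross-talk
                    [-0.1,   1.0,  -0.1 ],    #     (largish negative coefficients)
                    [ 0.0,  -0.1,   1.0 ]])
PIG_MIX = np.array([[ 1.0,  -0.2,  -0.05],    # (γ) cancel pigment cross-talk
                    [-0.2,   1.0,  -0.2 ],    #     (applied in log space)
                    [-0.05, -0.2,   1.0 ]])
GRAY    = np.array([1.0, 0.9, 1.1])           # (ε) per-pigment gray-point factors

def correct_toe(logv):
    # (δ) placeholder for straightening each pigment's exposure → density
    # curve in the shadows; identity here, a real curve would come from
    # the film's documentation or an unexposed-patch measurement.
    return logv

def separate_pigments(raw_rgb):
    """raw_rgb: float array of shape (..., 3) with linear RAW values."""
    x = np.maximum(raw_rgb @ CAM_MIX.T, 1e-6)  # (α) mix RAW channels, stay positive
    x = np.log(x)                              # (β) take logarithms
    x = x @ PIG_MIX.T                          # (γ) mix channels in log space
    x = correct_toe(x)                         # (δ) fix shadow non-linearity
    x = x + np.log(GRAY)                       # (ε) multiply each pigment (in log space)
    return np.exp(x)                           # (ζ) back to linear
```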

I already have one part of the longer version written (covering ⓐ and ⓑ). If there is some interest, I would post it soon.


As for the camera color channel crosstalk: that’s exactly why raw processors already have matrix-based camera input profiles. It’s also why rom9 states that RT’s white balance should be set such that it white-balances the backlight, not the negative.

However you raise an interesting point that perhaps the working colorspace of RT is not the appropriate one for negative inversion. The challenge may be that the appropriate working colorspace for negative inversion is likely film-dependent…


Hi @Ilyaz
Much appreciated. I can only ask for more.
I have a few comments and questions but mostly, I can’t wait for your next post.

Hello!
I have been following this topic for a long time and agree with @Ilyaz’s assessment that the problem has not been clearly defined. So, for what it is worth, here are some of my observations (defining the problem the way I see it).

The measured density curves are not parallel; rather, they appear to be scaled copies of each other (within the limitations of an analog process). I submit that this is by design and applies to all negatives using the C-41 process, regardless of brand and type of film. If they were parallel, the distance between them would be equal to the density of the dyes in the orange mask on unexposed areas. In other words, the orange mask would be a constant-density overlay, which we know is not the case.

After re-scaling of the measured values as per @rom9’s initial premise, the curves do become parallel, separated by the density of the dyes in the orange mask on unexposed areas, and that separation can be removed by a simple white balance. Now, if you check figures 5 through 8 in “The Chemistry of Color excerpt.pdf” posted by @nicnilov, you can see this is exactly how the orange mask works.
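
If it helps to make that concrete, here is how I read the re-scaling in code (a sketch only, not @rom9’s actual implementation; the per-channel scale factors are made up, and `base` stands for a sample of the unexposed film area):

```python
import numpy as np

# Scaling a channel's density by a factor s is the same as raising its
# linear transmittance to the power s; after re-scaling, the curves become
# parallel and the remaining constant offset (the orange mask on unexposed
# film) divides out against an unexposed-area sample.
SCALE = np.array([1.0, 1.3, 1.6])      # hypothetical per-channel density scales

def remove_orange_mask(trans, base):
    """trans, base: linear transmittance arrays of shape (..., 3)."""
    rescaled = trans ** SCALE          # re-scale measured values per channel
    return rescaled / (base ** SCALE)  # simple white balance on the film base
```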

Please note that this simple process for removal of the orange mask is a major achievement, and @rom9 deserves an extra pat on the back for discovering it.

Next, for white balance, I believe that looking for the neutral patches on the linear portions of the curves is trying to solve two separate problems at the same time.

The first problem is orange mask. The orange mask is removed by white balancing the unexposed area of the film (after re-scaling).

The second problem is spectral mismatch between the scene illumination and the film (e.g. daylight vs. tungsten). After orange mask removal, the negative is balanced to the design illumination of the film (e.g. a daylight film exposed under tungsten light will, after inversion, produce an image with the typical orange cast). The reason is that the density curves are misaligned along the scene-illumination axis. Therefore, white balance can align the toes, the shoulders, or the straight sections, but not all three (the color temperature setting will not work either, as the film’s tone response curves are “baked in”). We need the “curves” tool, preferably preset to a typical film tone response curve.

Finally, the color profiling is anything but simple as it seems that each roll of film would require a separate color profile for each camera/light setup.

Please do not shoot me if I got any or all of this wrong!

This is a very frequent confusion on this thread! Note my words above:

After this the significant part²⁾ of non-linearities is removed, and one can process this as any other RAW image.

And this “one can process” involves the matrix profile — but this profile should match the film-layers’ spectral curves, not the camera’s. (The camera’s story ended before the step (β) described above.)

So the pipeline I discussed is completely complementary to the usual RAW pipeline — it doesn’t replace anything in the usual pipeline. These supplementary steps convert one (linear) RAW file (from the camera’s sensels) into another (linear) RAW file (one “captured by the film layers”).

Conclusion: the “whole” pipeline I described has 3 matrix-mixing steps:

  • To cancel the cross-talk of camera’s sensels (in linear space).
  • To cancel the cross-talk of film’s pigments (in log-space).
  • To cancel the cross-talk of the film layers’ sensitivity (in linear space).

What you wrote matches the third one.

To me, this seems like complete lunacy. Nowhere¹⁾ in the physics of scanning is the backlight relevant — only its “mix” with the film’s base.

¹⁾ … except for the sprocket holes!


Math: Part I

The flow of light during the scan of a film goes like this:

  • Light source creates light intensity depending on wavelength as L(w).
  • This is filtered through the film base (substrate) with transparency S(w).
  • This is filtered through the red-capturing pigment with transparency ρ(w)ʳ; here r is the optical thickness of this pigment.
  • This is filtered through the green-capturing pigment with transparency γ(w)ᵍ; here g is the optical thickness of this pigment.
  • This is filtered through the blue-capturing pigment with transparency β(w)ᵇ; here b is the optical thickness of this pigment.
  • This is caught by a camera sensel with sensitivity C(w) (depending on the type of the sensel).

The resulting effect at wavelength w is L(w)S(w)ρ(w)ʳγ(w)ᵍβ(w)ᵇC(w). The sensel records the average value of this function of w.

Conclusion Ⓐ: only the product L(w)S(w)C(w) matters, not the individual factors L(w), S(w), or C(w). (Hence it makes absolutely no sense to color-balance light source + camera without taking into account the film’s base.)

Conclusion Ⓑ: if we could manage to make the product L(w)S(w)C(w) “monochromatic” (in other words, to be concentrated at one wavelength w₀ only), then above, instead of “the average value” (which is “very non-linear” in r,g,b) one gets a simple expression L(w₀)S(w₀)ρ(w₀)ʳγ(w₀)ᵍβ(w₀)ᵇC(w₀). Taking its logarithm immediately gives us a linear combination of r, g, b of the form r log ρ(w₀) + g log γ(w₀) + b log β(w₀) (plus a known constant). If we could also repeat this for two other wavelengths w₁ and w₂, we would get 3 linear equations in 3 unknowns, so it would be easy to find the FUNDAMENTAL DATA r, g, b.
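
Spelled out as a linear system (this is only a restatement of Ⓑ in standard notation, with Mᵢ denoting the sensel reading at wavelength wᵢ):

```latex
% Needs amsmath. Taking logarithms of
%   M_i = L(w_i) S(w_i) \rho(w_i)^r \gamma(w_i)^g \beta(w_i)^b C(w_i),  i = 0, 1, 2:
\[
\begin{pmatrix}
  \log\rho(w_0) & \log\gamma(w_0) & \log\beta(w_0)\\
  \log\rho(w_1) & \log\gamma(w_1) & \log\beta(w_1)\\
  \log\rho(w_2) & \log\gamma(w_2) & \log\beta(w_2)
\end{pmatrix}
\begin{pmatrix} r\\ g\\ b \end{pmatrix}
=
\begin{pmatrix}
  \log M_0 - \log\bigl(L(w_0)\,S(w_0)\,C(w_0)\bigr)\\
  \log M_1 - \log\bigl(L(w_1)\,S(w_1)\,C(w_1)\bigr)\\
  \log M_2 - \log\bigl(L(w_2)\,S(w_2)\,C(w_2)\bigr)
\end{pmatrix}
\]
```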

Conclusion Ⓑ₁: for the best result, one should use monochromatic L(w) (since L is the only parameter one can control). Taking 3 photos of the film with 3 different monochromatic light sources allows a complete reconstruction of r, g and b using simple matrix algebra.

Conclusion Ⓑ₂: to minimize the “crosstalk between the pigments”, the wavelengths of the monochromatic sources should be as widely separated as possible w.r.t. the absorption spectra of the 3 pigments. Essentially, this means “w₀, w₁, w₂ widely separated while staying near the 3 peaks of the spectra of the 3 pigments”. (Note: these are not the peaks of sensitivity of the layers of the film!)

Conclusion Ⓑ₃: if one has no control over L, for best linearity one should mix¹⁾ the RGB-channels of the camera so that the mixes have as narrow wavelength-bands of L(w)S(w)C(w) as possible.

¹⁾ In some situations, there are reasons to avoid mixing camera channels with some coefficients negative. Moreover, the situation when the sensitivity C(w) after mixing may take negative values is especially risky. However, the scanned film has quite a narrow “output gamut”. Because of this, using negative mixing coefficients is OK as long as all 3 mixed channels in actual film images remain positive. (Having them positive is crucial since the next step involves negative powers!)

Conclusion Ⓑ₄: One should use mixes of RAW RGB-channels R₀, G₀, B₀ like R = R₀ - 0.1 G₀, G = G₀ - 0.1 R₀ - 0.1 B₀, B = B₀ - 0.1 G₀. (I expect that 0.1 may be increased even more.)

Conclusion Ⓑ₅: If the light source is mostly concentrated at 3 frequencies not very far from the camera sensels’ peaks of sensitivity, the procedure of Ⓑ₄ can be tuned to match exactly the procedure with 3 monochromatic light sources.

⁜⁜⁜⁜⁜⁜ In what follows, I ignore non-linearities coming from L(w)S(w)C(w) being not narrow-band ⁜⁜⁜⁜⁜⁜⁜

Note that one can find L(w₀)S(w₀)C(w₀) from scans of the unexposed part of the film. Same for w₁ and w₂. Dividing R, G and B by these “offsets” would “cancel” the effects of backlight + film base + camera, giving products R’ = ρ(w₀)ʳγ(w₀)ᵍβ(w₀)ᵇ, likewise G’ for w₁ and B’ for w₂.

Conclusion Ⓓ: one can find “the opacity of the red pigment” eʳ as a product (R’)ᵏ(G’)ˡ(B’)ᵐ for appropriate k,l,m. Likewise for eᵍ and eᵇ with appropriately modified values of k,l,m.
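
To illustrate where k, l, m come from, here is a small numeric sketch (the per-unit log-transparencies are made up; only the structure matters):

```python
import numpy as np

# Row i = (log ρ(wᵢ), log γ(wᵢ), log β(wᵢ)): MADE-UP per-unit
# log-transparencies of the 3 pigments at the 3 chosen wavelengths.
LOGT = np.array([[-2.0, -0.3, -0.1],   # at w₀ the red-capturing pigment dominates
                 [-0.4, -1.8, -0.3],   # at w₁ the green-capturing pigment dominates
                 [-0.1, -0.5, -1.6]])  # at w₂ the blue-capturing pigment dominates

def pigment_thicknesses(Rp, Gp, Bp):
    """Recover (r, g, b) from the offset-corrected channels R', G', B':
    log R' = r·log ρ(w₀) + g·log γ(w₀) + b·log β(w₀), etc."""
    return np.linalg.solve(LOGT, np.log([Rp, Gp, Bp]))

# The exponents (k, l, m) of the power product (R')ᵏ(G')ˡ(B')ᵐ giving eʳ
# are simply the first row of the inverse matrix:
k, l, m = np.linalg.inv(LOGT)[0]
```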

Conclusion Ⓔ: in fact, the effect of cancelling L(w₀)S(w₀)C(w₀) on eʳ, eᵍ, eᵇ is multiplication by certain constants. So instead of eʳ = (R’)ᵏ(G’)ˡ(B’)ᵐ one may use eʳ = RᵏGˡBᵐ/c₀, which may be rewritten as (RG⁻ᴸB⁻ᴹ)⁻ᴷ/c₀. Here the constant c₀ depends only on the type of film (and can be found where r=0: on the unexposed part of the film).

Note that considering RG⁻ᴸB⁻ᴹ “cancels the cross-talk between pigments” at the frequency w₀, and the exponent −K reflects the power-law dependence of the “sensel output” on eʳ.

This finishes the process of reconstructing the density of pigments in the film. Recall the steps:

  • ⓐ Recalculate RAW channels R₀, G₀, B₀ linearly into “narrow-band” variants R, G, B, “cancelling” the RAW channels’ cross-talk.
  • ⓑ Calculate the “biased” opacity of each pigment (RG⁻ᴸB⁻ᴹ)⁻ᴷ, likewise (R⁻ⱽGB⁻ᵂ)⁻ᵁ and (R⁻ᴵG⁻ᴶB)⁻ᴴ (with appropriate K,L,M,U,V,W,H,I,J). This “cancels” the cross-talk between the pigments. (This is linear in log R, log G, log B, r, g, and b.)
  • ⓒ Calculate the “true opacity” of the pigments (RG⁻ᴸB⁻ᴹ)⁻ᴷ/c₀ (etc.; here c₀ is the value of (RG⁻ᴸB⁻ᴹ)⁻ᴷ on the unexposed part of the film).

Note that division by c₀ is equivalent to white-balancing in the midtones. Indeed, eʳ (etc.) depends linearly on the exposure of the film to the (red) light. (This presumes the correct choice of the power K, and holds in midtones, where the law exposure → opacity = eʳ is very close to linear. For many types of film this linearity also works in highlights.)
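
A minimal sketch of steps ⓑ and ⓒ for the red pigment (assuming K, L, M are already known; `unexposed` stands for a linear-RGB sample of the film base):

```python
import numpy as np

def biased_opacity_red(R, G, B, K, L, M):
    # Step (b): (R · G⁻ᴸ · B⁻ᴹ)⁻ᴷ, computed via logs for numerical safety.
    return np.exp(-K * (np.log(R) - L * np.log(G) - M * np.log(B)))

def true_opacity_red(R, G, B, K, L, M, unexposed):
    # Step (c): divide by c₀, the value of the same expression where r = 0,
    # i.e. on the unexposed part of the film. This is the midtone white
    # balance mentioned above.
    c0 = biased_opacity_red(*unexposed, K, L, M)
    return biased_opacity_red(R, G, B, K, L, M) / c0
```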

⁜⁜⁜⁜⁜⁜⁜ How to continue ⁜⁜⁜⁜⁜⁜⁜

③ Explain how to find the cross-talk coefficients L, M, V, W, I, J.
④ Explain how to find exponents K, U, H (essentially, the slopes on the optical density → pigment plots).
⑤ Recalculate eʳ, eᵍ and eᵇ into the amount of light captured by each of the 3 sensitive layers of the film.

However, these are completely different processes, so it is better to postpone them to a different post. Moreover, I see no simple way to do ③ — except for using some “reasonable values”, like 0.1 used in ⓐ :–(.


Have no wish to shoot you (yet)!

Nevertheless, I do not understand this obsession with the color mask(s). For example, above I considered the contribution of the film’s pigments as S(w)ρ(w)ʳγ(w)ᵍβ(w)ᵇ, with S describing the transparency of the (intact) color masks, ρ the transparency of 1 unit of the red-capturing pigment, and r the number of units of this pigment (etc.).

Of course, what I meant by ρ was “the total contribution” of this pigment — and, in the pedantic mode (the one you use! ;–), this contribution consists of two parts:

  • the pure transparency of this pigment ρₚ(w);
  • the cancellation of the opacity ρₘ(w) of the mask.

Assume that 1 unit of this pigment eats ⓡ units of the mask. Then the total contribution of this unit to the film’s transparency is ρₚ(w)ρₘ(w)⁻ⓡ. Hence the contribution of r units is (ρₚ(w)ρₘ(w)⁻ⓡ)ʳ.

Conclusion: proceeding in the pedantic mode is equivalent to using ρ(w) = ρₚ(w)ρₘ(w)⁻ⓡ in the “naive” mode of my notes above/below.

@Ilyaz, I have a problem with your explanation: the mask pigments are not in the image-making layer of the same color. In other words, the yellow mask pigments are in the magenta layer and are destroyed by magenta density (green light), while the red mask pigments are in the cyan layer and are destroyed by cyan density (red light). That is, the light creating the image layer and the light destroying the mask layer are from different parts of the spectrum, and the ratio of image creation to mask destruction varies from point to point depending on the color composition of the light falling onto that point of the film. So, while mask and image density pigments are measured together, they are independent of each other.

One can look at the orange mask as an analog implementation of the camera profile: a film camera has film as its sensor, so, as the sensor changes with every roll, it makes a lot of sense to bake the profile into the film. Thus removal of the orange mask is a necessary step irrespective of whatever else we do.

I thought about starting a new thread, but then saw this one. I’m just now getting into DSLR scanning (in my case, with a mirrorless Fuji X-T4). This is my first attempt. While I mostly like the results, it took some trial and error to get here. And some color tweaking in RT (courtesy of LAB adjustments). Thankfully the grain tank in the background offered up something resembling neutral gray for the white balance. Interestingly, these same settings didn’t transfer well to other frames shot on the same roll and scanned with the same settings.

First DSLR Scan

More info: My blog post.

I do think that if other shots on the same roll were taken at roughly the same point in time under the same lighting conditions, scanned the same way (shot with a DSLR in your case) and processed the same, they should develop pretty much the same.

I choose one process setting (as a starting point at least) for an entire roll all the time.

You’ll see the exposure differences between the shots, which can explain why some have a slightly different color balance. But mostly, once one frame is balanced, the entire roll is.

Differences occur when part of the roll is shot inside vs. other shots outside. Or if I let the roll sit in the camera and a few months have passed between shooting the first half and the second half, for example.

I see a lot of people with film scanners or DSLR-scanning setups who have some sort of auto exposure enabled during scanning, or who do not use the exact same DSLR settings for the entire roll… This will of course give differences from frame to frame.

Personally I take the scans of a roll, crop them, and then make a collage of all the scans without any room between the shots. So I force-resize all the scans to 512×512, for example (yes, messing with the aspect ratio), and glue them all together.

I then process that collage as a single image to determine a balance that seems to fit the entire roll. I then take the original scans and process them with exactly the same parameters to get my ‘starting point images’.
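
For what it’s worth, that collage trick is easy to script; a minimal sketch with Pillow (the file list and grid width here are hypothetical):

```python
from PIL import Image

def make_collage(paths, tile=512, cols=6):
    """Force-resize every cropped scan to a fixed tile size (aspect ratio
    intentionally ignored) and glue them into one grid, which can then be
    balanced as a single image."""
    rows = (len(paths) + cols - 1) // cols
    collage = Image.new("RGB", (cols * tile, rows * tile))
    for i, p in enumerate(paths):
        img = Image.open(p).resize((tile, tile))
        collage.paste(img, ((i % cols) * tile, (i // cols) * tile))
    return collage
```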


That mostly makes sense, and I think I can reproduce a similar process. I’ve been manually setting my WB when scanning, but I’ve also relied on auto exposure. That will of course change from frame to frame with each frame’s density. I’ll make an adjustment and go from there. Thanks for the feedback.

Yes, that is a good point. For some negatives you might get better scans by better utilizing the full density range your scanner can measure.
For DSLR scans this might be something to actually be aware of; a good film scanner has a real Dmax that makes this a non-issue.

Still, I like to balance a whole roll in one go. If that doesn’t seem possible, you scanned the frames differently (auto exposure or something) or you shot them in different lighting situations :).

You might find this interesting, from the creator of ART; it shows his WB approach, which should also work in RT as they are siblings of a sort: Raw Photo Editing in ART (screencasts) - #29 by agriggio

I just came back to this film scanning topic 5 years later and found this thread…

Just to answer this question, which puzzled me for a long time: the reason Status M measurements of films don’t show all-parallel lines is (apart from that being impossible to achieve completely) that Status M reads a different wavelength of light than a print or internegative would. If you were to plot the actual density using a spectral range closer to the actual spectral sensitivity range of the positive, the lines would be more parallel.

Two references for this: one is the Arriscan reference, the other is https://125px.com/docs/motionpicture/kodak_2018/2238_TI2404.pdf


Thanks for your reply. The different measurement spectra might explain the B line deviation, but I didn’t find a confirmation of that in those links. Maybe you could point out the relevant bits to me?

An important aspect is that Status M specifically targets measuring color negative film, which is also mentioned in the ANSI IT2.18-1996 standard, item 8.4 on p. 5.

If we consider that each film brand has somewhat different composition of the yellow mask, could the B line deviation on this Ektar chart just reflect the character (look) of that film compared to some “Status M standard”, more “neutral” film, and not a generic behavior of all color negative films?

It is just a standard. Yes, it is used for color film, but it is NOT designed just for the purpose of measuring contrast or designing a film. It is certainly used for process control, but that is different; i.e. it is close enough and narrow enough to how the positive responds, but it is not the same.

If you look at the diagram http://dicomp.arri.de/digital/digital_systems/DIcompanion/printdens.html you can see it, and there is also this JavaScript simulation: http://dicomp.arri.de/digital/digital_systems/DIcompanion/scannerresponse.html

They show how you can use Status M and then scale it, i.e. increase the gamma to match the print density.

If you also look at the color separations doc I posted (separations that can be made for archiving film using 2238 film), you can see how they explicitly scale one of the channels to make up for a slightly incorrect filter combination.

Not really, because then you would never have reasonable neutral colours for different film densities. Sure, there are slight imperfections that give film its character. Also, negative film IS designed to be copied many times to make internegatives and interpositives. Take for example all the blue-screen films like Star Wars in the 80s, where an original camera negative needed to be copied perhaps 10 times before making it to the cinema: several generations to do the magic needed for blue screen and masks, and then several copies to make the cinema print for distribution. Incidentally, this is where the orange mask is really needed; if just one generation is needed and you tolerate a slight quality loss, you could dispense with it.

With DSLR scanning using a broad white light source, you will get incorrect contrast for each channel. You can attempt to fix this with a curve and matrix. I have not had time to read the code properly. But ideally you use a light source that works with the camera’s spectral characteristics, so there is less of this work to do.

Color negative print film and internegative film actually have a blind spot at approximately 600 nm, which allows for a dim safelight.

This is correct. But to expand on that: the different spectral sensitivities of camera sensors and scanners will measure the density of each dye with a different slope. This is why unprocessed film scans always have a color shift from dark to light. The cyan dye in particular is recorded by the red sensor as less dense than expected. The correction is to re-scale the density of each channel to balance out the densities. Mathematically, this is a power function or gamma curve.
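
For illustration, the re-scaling could look like this (a sketch; the exponents are made-up values, in practice derived from gray patches or a step wedge):

```python
import numpy as np

# Per-channel density re-scaling: multiplying a channel's density by a
# factor equals raising its linear transmittance to that power, so the
# "gamma curve" correction is a per-channel power function.
GAMMAS = np.array([1.25, 1.00, 0.95])   # hypothetical R, G, B density scales

def rescale_densities(trans):
    """trans: linear transmittance (scan divided by a blank backlight frame)."""
    density = -np.log10(np.clip(trans, 1e-6, None))  # per-dye density
    return 10.0 ** (-GAMMAS * density)               # re-scaled, back to linear
```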

I cover the details of the process in my blog post: Scanning Color Negative Film.

This blog post’s approach looks very similar to the one implemented by @rom9 and discussed in this thread. This general group of approaches is essentially a reiteration of the (20+ years old) manual negative conversion procedure where the channels are adjusted using individual gammas. With that said, it would be nice to have some examples of pictorial images converted along with the narrative, since a color checker shot conversion is really a straightforward procedure.

Let me explain my personal grudge with this topic. Mainstream camera sensors are all linear and nearly standard. Any mainstream color negative film brand can be printed on any paper with acceptable results, which also hints at a standard. Therefore there should exist a process whereby any film negative shot on any raw-shooting camera could be converted in a straightforward and predictable manner to a representative digital image.

A successful process of digital negative conversion should not be bound to “your” specific camera or “your” specific film shots. It should just work. There should be no mandatory “adjust to taste” step which essentially masks the flaws of the process.

Personally, I see the main issue in the fact that the film-paper system aspect gets ignored most of the time. We attempt to convert a negative on its own, with no regard to the existence of the paper response, which in fact is (literally) the key to the negative’s interpretation. A negative scanned in broadband white light using a calibrated device should be convertible to a faithful image just by applying a profile that represents the paper response, shouldn’t it?


There were many points made in this and other similar threads, but they are mainly assumptions or conjectures. There are plenty of personal recipes and very little hard knowledge. Maybe it’s time we start coming up with that hard knowledge, in the form of generally applicable and repeatable experiments, as well as provable (and complete) theory.


No, that is the problem in a nutshell: if you use a broad-based illuminant and a normal DSLR, you will always begin with the wrong result as your starting point, which you will need to correct… Sure, you can use a 3D matrix and curves to get back to where you need to be. Which is essential, if you use such a setup.

As an example, your DSLR will see the spectrum of light where the print film is blind (there is a blind spot that allows for a safelight).

The correct way is one of two: either use a monochrome sensor with a light source of the correct (or close to correct) spectral distribution, or use a broad-based illuminant and a sensor with the correct sensitivity. The former IS used in several commercial products that give reliable results.

The Cineon document is hard knowledge, and there are plenty of sources for the correct way to do this if you know where to look. A DSLR is not necessarily ideal for this purpose as-is, but it can be adjusted to give very good results; it just wasn’t designed for this purpose.


While similar processes have been discussed before (and it’s the process used by Negative Lab Pro and RawTherapee), unfortunately this method remains relatively unknown. A search for how to process color negative scans yields many convoluted, difficult and frankly bad-looking methods.

My goal with the blog post is to explain what is actually needed for scanning all in one place, so that people can figure out the best way for their camera. Because every camera will see the film differently, and every type of film will be recorded differently. I do plan to add photos to the post, and feedback is appreciated.

This process is actually not too complex. After things like camera profiles, color profiles, and LUTs are prepared, there are just four adjustable parameters that will remain the same for many scans.


I like to think of a digital camera scan as having its own look/characteristics, just like photographic paper has its own look. I think that the colors already look accurate and true-to-life without further corrections. I suspect that a printing density scan would appear far more saturated and digital-looking than most film photographers would expect.
