Any interest in a "film negative" feature in RT ?

Ok, I tried the LCE method in Gimp, and I think I’ve got the point. In fact, it should be easy to automate, provided that the user selects the “meaningful” area of the negative, excluding any outliers (film holder, sprocket holes, etc.): it’s only a simple histogram analysis (see the sketch below).
I’m still not convinced it will work on really unbalanced negatives, with a strong color cast across the entire frame, like a sodium-bulb-lit street scene, or a LED-spotlight-infested concert…
Anyway, I can try to implement it.
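
For instance, here is a minimal sketch of that histogram analysis, assuming the user-selected area arrives as a float RGB array (names are hypothetical):

```python
import numpy as np

def negative_bounds(selected_rgb, lo_pct=0.1, hi_pct=99.9):
    """Per-channel bounds estimated from the user-selected area only.

    selected_rgb: float array of shape (H, W, 3) with raw values from
    the "meaningful" region (no film holder, no sprocket holes).
    Percentiles instead of true min/max keep the estimate robust
    against dust specks and any remaining outliers.
    """
    flat = selected_rgb.reshape(-1, 3)
    lo = np.percentile(flat, lo_pct, axis=0)  # densest areas: scene highlights
    hi = np.percentile(flat, hi_pct, axis=0)  # thinnest areas: near base + fog
    return lo, hi
```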

Related to the paper, I noticed that in the Fuji Crystal Archive paper datasheet you linked above, the characteristic curve plot was missing. Then, googling for the same title, I found this other, very similar datasheet, which has the curve plot, and here it is:

[image: characteristic curve plot from the Fuji Crystal Archive paper datasheet]

This is completely different from the film response curve! It’s a simple sigmoid, not mostly a straight line. Notice the size of the toe and shoulder.
So, I implemented the generalised logistic function in my test program, and it actually feels easier to adjust than the other curves I’ve tried before.
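
For reference, a minimal sketch of that curve, following the standard Richards (generalised logistic) formulation; the parameter defaults are only illustrative:

```python
import numpy as np

def generalised_logistic(x, A=0.0, K=1.0, B=1.0, nu=1.0, Q=1.0, M=0.0):
    """Richards' generalised logistic curve.

    A: lower asymptote; K: upper asymptote; B: growth rate;
    nu: asymmetry (shifts where maximum growth occurs);
    Q and M: place the curve along the x axis.
    """
    return A + (K - A) / (1.0 + Q * np.exp(-B * (x - M))) ** (1.0 / nu)
```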

No, the film was “scanned” with a digital camera (Sony A7). I have no means of doing any spectral measurements (neither of the illuminant, nor of the film or the sensor), so I don’t know where my peaks lie, but sure, I can apply those Status A coefficients to my channels… although I think it will be a shot in the dark…

Absolutely, there’s the Tone Curve tool in RT which is very flexible. It works downstream of the filmneg tool, so it can be used normally, as in a digital picture :slight_smile:

As it appears, I forgot to mention many important things. LCE will indeed not work well with unbalanced negatives, but there is an objective explanation, and a fix.

By unbalanced negatives we should understand those exposed under a light that differs significantly from the one the film is rated for. For example, say we shoot a winter scene facing north on Ektar 100, which is rated for “daylight”. The real color temperature of such a scene would be way higher. Not unexpectedly, what we get after the LCE conversion is a much stronger cool cast. This is similar to what we’d get with a digital camera set to e.g. 5500K; meaning, the behaviour is predictable, and maybe even useful.

Here is such an example. The correction is exactly the same, except for the dynamic range. The same exact Hue/Saturation adjustment is applied here, yet it is obviously not enough.
This was shot in February around 13:40, at the very end of the afternoon, the beginning of the sunset. The sunlight color was probably still pretty “daylight”. The camera is looking north, so all the shadows and everything not in direct sunlight should be quite cool (or hot, in Kelvin terms).

The fix is either going further with the Hue/Saturation adjustment, as I did before, or treating the cyan tint as an uncorrected WB and correcting it as such. Here is a cool thing. See what happens if we just change the WB (well, and also reduce the exposure a bit, since in Camera Raw changing the WB blows the highlights):

Ain’t it nice? So far the theory holds. The correction is still global so should be automatable.

Now that I think of it, maybe the amount of the “residual” cyan cast can be measured and provide an idea of the real scene white balance, a piece of info not otherwise obtainable from the frame.

As an extra, here is what happens when we replace the initial Hue/Saturation with a stronger WB correction, followed by a Hue/Saturation adjustment to reduce the cyan in the highlights:

Note that this may be less representative, as this state hits the color space boundaries and the colors change significantly when converted from ProPhoto RGB to sRGB. Also note, but disregard, the weird shades of the snow in the foreground: this Ektar is expired and that’s how it manifests, and the effect is exaggerated by the conversion to sRGB, meaning it’s a different issue.


Another new observation is the Ektar curves. Note how B is not actually parallel to G and R. Maybe that contributes to the extra cyan in the highlights.

Also note the Status M densitometry, as opposed to Printed Density. This does make sense, as this is what Status M is for: measuring the film response on its own. The whole film/paper system should be measured with Printed Density, but not the film individually. Nevertheless, this needs more thinking.

The thing is, film curves actually are sigmoids as well; it’s just that their shoulder is not shown on the charts. I remember reading about it in one of the books, but can’t remember where.

On this chart the channels more or less coincide, meaning the image is balanced. Can it mean a finished print measured for color reproduction? I’m not sure I understand what it tells us; could you please explain?

Hi everyone,
@rom9, I apologise for getting “lost” again here.

Note: I like medians :slight_smile: really, I consider them a good tool for a WB starting point.

  1. can you remind me how the default median WB (i.e. the one that’s computed when the filmneg module is first switched on) is tied to red and blue ratios?
  2. how hard would it be to have a button to re-compute the default median?

Both questions are related, though in different scenarios.

A. Sometimes, on a roll, I do not have enough suitable neutral-tone patches. So I can’t get the red and blue ratios comfortably where they should be. And I’m wondering if their being way off in the first place influences the initial median-based white balance.

B. More frequently, when I find a suitable frame (with nice neutral tones) for initial inversion and replication of settings (copy, paste processing profile, in RawTherapee UI terms) to other frames of the same roll, I find myself unhappy with the median WB value that’s being replicated.

  • → do I have a way to re-compute that median, locally, for a given frame (without having to unapply the entire processing profile)?

Thanks.

Math: Part 0

I’ve read through all the messages on this thread, and it seems that there is a lot of confusion on the math of what happens during the scan. I suspect that this confusion leads to many decisions which lower the quality of the output, make it hard to manually tune the process, and complicate the UI.

To make a long story short, there are the following phenomena (ⓐ ⓑ ⓓ ⓔ match the explanations in the longer versions):

  • ⓐ There is cross-talk between the R, G, B channels in the camera. Compensating for this (by a linear mix) may significantly improve “linearity” of the overall process.
  • ⓑ There is cross-talk between the 3 pigments in the film. Assuming that ⓐ is done well enough, one can compensate for this by a linear transformation in log color space. (This has a chance to produce much better color fidelity.)
  • ⓓ Each pigment has a non-linear response⁰⁾ in the shadows (and for some types of film, in the highlights as well). One should compensate for this both for better colors/contrast in the shadows, and:
  • ⓔ … for the calculation of the powers in the power laws (this is currently done via picking light-gray and dark-gray patches).

⁰⁾ Here we assume that the power law is already applied, so the response is ALREADY linear in midtones. So “this non-linearity” is meant to be “on top of the power law”.



“In reality”, there are significant non-linearities involved in parts ⓐ and ⓑ. However, I think I found “a shortcut” which

  • uses only linear recalculations (combined with conversions to and from log color space) to separate colors (actually: separate pigments).
  • has a good chance to be a MUCH better approximation than what is currently used in RT.
  • seems to be simple to explain (a code sketch follows the footnotes below):
    (α) Remove the camera colors’ cross-talk by mixing RAW channels with (largish) negative coefficients;
    (β) Take logarithms;
    (γ) Remove pigments’ cross-talk by mixing new color channels with (largish) negative coefficients;
    (δ) Correct¹⁾ the non-linearity of each pigment’s density curve (I mean the curve exposure → density);
    (ε) Examine gray points ⇒ multiply by suitable coefficient for each pigment;
    (ζ) Return back by taking exponent.
    After this the significant part²⁾ of non-linearities is removed, and one can process this as any other RAW image.³⁾

¹⁾ If an unexposed part of the film is accessible, this might be done automatically (given the documentation for the film).
²⁾ If the light source has 3 narrow spectral peaks, this may even remove ALL the non-linearities!
³⁾ Conjecture: with a fourth (infrared???) channel, this would also get the info about the dust.
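
In case it helps, here is a sketch of how steps (α)–(ζ) could look in code (made-up mixing coefficients, and a placeholder where the per-pigment curve correction (δ) would go):

```python
import numpy as np

# Illustrative guesses only; real coefficients must be calibrated.
CAM_MIX = np.array([[ 1.0,  -0.1,   0.0 ],   # (α) un-mix camera cross-talk
                    [-0.1,   1.0,  -0.1 ],
                    [ 0.0,  -0.1,   1.0 ]])
PIG_MIX = np.array([[ 1.0,  -0.2,  -0.05],   # (γ) un-mix pigment cross-talk
                    [-0.2,   1.0,  -0.2 ],
                    [-0.05, -0.2,   1.0 ]])

def density_curve(log_d):
    # (δ) placeholder: a real version would straighten the toe/shoulder
    # of each pigment's exposure -> density curve
    return log_d

def invert_negative(raw_rgb, gray_gain):
    """raw_rgb: (N, 3) linear camera values; gray_gain: per-pigment
    multipliers derived from the gray points (ε)."""
    rgb = raw_rgb @ CAM_MIX.T        # (α) linear channel mixing
    log_d = np.log(rgb)              # (β) to log space (values must stay positive)
    log_d = log_d @ PIG_MIX.T        # (γ) pigment un-mixing
    log_d = density_curve(log_d)     # (δ) per-pigment curve correction
    log_d += np.log(gray_gain)       # (ε) gray-point scaling
    return np.exp(log_d)             # (ζ) back to linear
```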

I already have one part of the longer version written (covering ⓐ and ⓑ). If there is some interest, I would post it soon.

As for the camera color channel crosstalk: that’s exactly why raw processors already have matrix-based camera input profiles. It’s also why rom9 states that RT’s white balance should be set such that it white-balances the backlight, not the negative.

However you raise an interesting point that perhaps the working colorspace of RT is not the appropriate one for negative inversion. The challenge may be that the appropriate working colorspace for negative inversion is likely film-dependent…

Hi @Ilyaz
Much appreciated. I can only ask for more.
I have a few comments and questions but mostly, I can’t wait for your next post.

Hello!
I have been following this topic for a long time and agree with @Ilyaz’s assessment that the problem has not been clearly defined. So, for what it is worth, here are some of my observations (defining the problem the way I see it).

The measured density curves are not parallel; rather, they appear as scaled copies of each other (within the limitations of an analog process). I submit that this is by design and applies to all negatives using the C-41 process, regardless of brand and type of film. If they were parallel, the distance between them would be equal to the density of the dyes in the orange mask on unexposed areas. In other words, the orange mask would be a constant-density overlay, which we know is not the case.

After re-scaling of the measured values as per @rom9’s initial premise, the curves do become parallel, separated by the density of the dyes in the orange mask on unexposed areas, which can then be removed by a simple white balance. Now, if you check figures 5 through 8 in “The Chemistry of Color excerpt.pdf” posted by @nicnilov, you can see this is exactly how the orange mask works.

Please note that this simple process for removal of the orange mask is a major achievement, and @rom9 deserves an extra pat on the back for discovering it.

Next, for white balance, I believe that looking for neutral patches on the linear portions of the curves is trying to solve two separate problems at the same time.

The first problem is the orange mask. The orange mask is removed by white-balancing the unexposed area of the film (after re-scaling).
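
In code terms, that first step is just a per-channel division (a sketch; base_rgb would be sampled from an unexposed strip, and the re-scaling is assumed already applied):

```python
import numpy as np

def remove_orange_mask(rescaled_rgb, base_rgb):
    """White-balance on the unexposed film area.

    rescaled_rgb: (H, W, 3) values after the per-channel re-scaling
    that makes the density curves parallel; base_rgb: mean of the
    same values over an unexposed patch. Dividing by the base cancels
    the constant mask + base density.
    """
    return rescaled_rgb / np.asarray(base_rgb)
```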

The second problem is a spectral mismatch between the scene illumination and the film (e.g. daylight vs. tungsten). After orange mask removal, the negative is balanced for the design illumination of the film (e.g. daylight film exposed under tungsten light will, after inversion, produce an image with the typical orange cast). The reason is that the density curves are misaligned on the scene illumination axis. Therefore, white balance can align the toes and shoulders, or the straight sections, but not all three (the color temperature setting will not work either, as the film tone response curves are “baked in”). We need the “curves” tool, preferably preset to a typical film tone response curve.

Finally, the color profiling is anything but simple, as it seems that each roll of film would require a separate color profile for each camera/light setup.

Please do not shoot me if I got any or all of this wrong!

This is a very frequent confusion on this thread! Note my words above:

After this the significant part²⁾ of non-linearities is removed, and one can process this as any other RAW image.

And this “one can process” involves the matrix profile — but this profile should match the film layers’ spectral curves, not the camera’s. (The camera’s story ended before step (β) described above.)

So the pipeline I discussed is completely complementary to the usual RAW pipeline — it doesn’t replace anything in the usual pipeline. These supplementary steps convert one (linear) RAW file (from the camera’s sensels) into another (linear) RAW file (one “captured by the film layers”).

Conclusion: the “whole” pipeline I described has 3 matrix-mixing steps (a compact code sketch follows below):

  • To cancel the cross-talk of the camera’s sensels (in linear space).
  • To cancel the cross-talk of the film’s pigments (in log space).
  • To cancel the cross-talk of the film layers’ sensitivity (in linear space).

What you wrote matches the third one.
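
To make the ordering concrete, a compact sketch (all three matrices are placeholders to be determined; note that only the second acts in log space):

```python
import numpy as np

def full_pipeline(raw_rgb, M_cam, M_pig, M_layer):
    """raw_rgb: (N, 3) positive linear camera values."""
    x = raw_rgb @ M_cam.T            # 1) camera sensel cross-talk (linear)
    x = np.exp(np.log(x) @ M_pig.T)  # 2) film pigment cross-talk (log space)
    return x @ M_layer.T             # 3) film-layer sensitivity cross-talk (linear)
```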

For me, this seems like complete lunacy. Nowhere¹⁾ in the physics of scanning is the backlight relevant — only its “mix” with the film’s base.

¹⁾ … except for the sprocket holes!

Math: Part I

The flow of light during the scan of a film goes like this:

  • Light source creates light intensity depending on wavelength as L(w).
  • This is filtered through the film base (substrate) with transparency S(w).
  • This is filtered through the red-capturing pigment with transparency ρ(w)ʳ; here r is the optical thickness of this pigment.
  • This is filtered through the green-capturing pigment with transparency γ(w)ᵍ; here g is the optical thickness of this pigment.
  • This is filtered through the blue-capturing pigment with transparency β(w)ᵇ; here b is the optical thickness of this pigment.
  • This is caught by a camera sensel with sensitivity C(w) (depending on the type of the sensel).

The resulting effect at wavelength w is L(w)S(w)ρ(w)ʳγ(w)ᵍβ(w)ᵇC(w). The sensel records the average value of this function of w.

Conclusion Ⓐ: only the product L(w)S(w)C(w) matters, not the individual factors L(w), S(w), or C(w). (Hence it makes absolutely no sense to color-balance light source + camera without taking into account the film’s base.)

Conclusion Ⓑ: if we could manage to make the product L(w)S(w)C(w) “monochromatic” (in other words, to be concentrated at one wavelength w₀ only), then above, instead of “the average value” (which is “very non-linear” in r,g,b) one gets a simple expression L(w₀)S(w₀)ρ(w₀)ʳγ(w₀)ᵍβ(w₀)ᵇC(w₀). Taking the logarithm immediately gives us a linear combination of r, g, b of the form r log ρ(w₀) + g log γ(w₀) + b log β(w₀) (up to the known constant log L(w₀)S(w₀)C(w₀)). If we could also repeat it for two other wavelengths w₁ and w₂, we would get 3 linear equations for 3 unknowns, so it would be easy to find the FUNDAMENTAL DATA r, g, b.

Conclusion Ⓑ₁: for best results, one should use monochromatic L(w) (since L is the only parameter one can control). Making 3 photos of the film with 3 different monochromatic light sources allows a complete reconstruction of r, g and b using simple matrix algebra.
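
Spelled out as code (a sketch with made-up numbers; in practice the log-transparency matrix would come from the pigments’ absorption data, and the offsets from the unexposed film):

```python
import numpy as np

# A[i] = [log ρ(wᵢ), log γ(wᵢ), log β(wᵢ)]: log-transparencies of the
# three pigments at the three wavelengths (illustrative numbers only)
A = np.array([[-2.0, -0.2, -0.1],
              [-0.3, -1.8, -0.3],
              [-0.1, -0.2, -2.1]])

def pigment_thicknesses(V, offsets):
    """V: the 3 sensel readings under the 3 monochromatic sources;
    offsets: log of L(wᵢ)S(wᵢ)C(wᵢ), measured on the unexposed film.
    Returns the fundamental data r, g, b."""
    return np.linalg.solve(A, np.log(V) - offsets)
```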

Conclusion Ⓑ₂: to minimize the “crosstalk between the pigments”, the wavelengths of the monochromatic sources should be as widely separated as possible w.r.t. the absorption spectra of the 3 pigments. Essentially, this means “w₀, w₁, w₂ widely separated while staying near the 3 peaks of the spectra of the 3 pigments”. (Note: these are not the peaks of sensitivity of the layers of the film!)

Conclusion Ⓑ₃: if one has no control over L, then for best linearity one should mix¹⁾ the RGB channels of the camera so that the mixes have as narrow wavelength bands of L(w)S(w)C(w) as possible.

¹⁾ In some situations, there are reasons to avoid mixing camera channels with negative coefficients. Moreover, the situation when the sensitivity C(w) after mixing may take negative values is especially risky. However, the scanned film has quite a narrow “output gamut”. Because of this, using negative mixing coefficients is OK as long as all 3 mixed channels in actual film images remain positive. (Having them positive is crucial since the next step involves logarithms and negative powers!)

Conclusion Ⓑ₄: One should use mixes of the RAW RGB channels R₀, G₀, B₀ like R = R₀ - 0.1 G₀, G = G₀ - 0.1 R₀ - 0.1 B₀, B = B₀ - 0.1 G₀. (I expect that 0.1 may be increased even more.)
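
As a sketch of Ⓑ₄ (the 0.1 coefficients are the guesses from above; the check enforces the positivity requirement from footnote ¹⁾):

```python
import numpy as np

MIX = np.array([[ 1.0, -0.1,  0.0],
                [-0.1,  1.0, -0.1],
                [ 0.0, -0.1,  1.0]])

def narrow_band_mix(raw_rgb):
    mixed = raw_rgb @ MIX.T
    # crucial: the following steps take logarithms / negative powers,
    # so every mixed channel of an actual film image must stay positive
    assert (mixed > 0).all(), "mixing coefficients too aggressive"
    return mixed
```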

Conclusion Ⓑ₅: If the light source is mostly concentrated at 3 frequencies not very far from the camera sensels’ peaks of sensitivity, the procedure of Ⓑ₄ can be tuned to match exactly the procedure with 3 monochromatic light sources.

⁜⁜⁜⁜⁜⁜ In what follows, I ignore non-linearities coming from L(w)S(w)C(w) being not narrow-band ⁜⁜⁜⁜⁜⁜⁜

Note that one can find L(w₀)S(w₀)C(w₀) from scans of the unexposed part of the film. Same for w₁ and w₂. Dividing R, G and B by these “offsets” would “cancel” the effects of backlight + film base + camera, giving products R' = ρ(w₀)ʳγ(w₀)ᵍβ(w₀)ᵇ, likewise G' for w₁ and B' for w₂.

Conclusion Ⓓ: one can find “the opacity of the red pigment” eʳ as a product (R')ᵏ(G')ˡ(B')ᵐ for appropriate k, l, m. Likewise for eᵍ and eᵇ with appropriately modified values of k, l, m.

Conclusion Ⓔ: in fact, the effect of cancelling L(w₀)S(w₀)C(w₀) on eʳ, eᵍ, eᵇ is multiplication by certain constants. So instead of eʳ = (R')ᵏ(G')ˡ(B')ᵐ one may use eʳ = RᵏGˡBᵐ/c₀, which may be rewritten as (R·G⁻ᴸ·B⁻ᴹ)⁻ᴷ/c₀. Here the constant c₀ depends only on the type of film (and can be found where r = 0: on the unexposed part of the film).

Note that considering R·G⁻ᴸ·B⁻ᴹ “cancels the cross-talk between pigments” at frequency w₀, and the exponent −K reflects the power-law dependence of the “sensel output” on eʳ.

This finishes the process of reconstructing the density of pigments in the film. Recall the steps:

  • ⓐ Recalculate the RAW channels R₀, G₀, B₀ linearly into “narrow-band” variants R, G, B, “cancelling” the RAW channels’ cross-talk.
  • ⓑ Calculate the “biased” opacity of each pigment (R·G⁻ᴸ·B⁻ᴹ)⁻ᴷ, likewise (R⁻ⱽ·G·B⁻ᵂ)⁻ᵁ and (R⁻ᴵ·G⁻ᴶ·B)⁻ᴴ (with appropriate K, L, M, U, V, W, H, I, J). This “cancels” the cross-talk between the pigments. (This is linear in log R, log G, log B, r, g, and b.)
  • ⓒ Calculate the “true opacity” of the pigments (R·G⁻ᴸ·B⁻ᴹ)⁻ᴷ/c₀ (etc.; here c₀ is the value of (R·G⁻ᴸ·B⁻ᴹ)⁻ᴷ on the unexposed part of the film).

Note that division by c₀ is equivalent to white-balancing in the midtones. Indeed, eʳ (etc.) depends linearly on the exposure of the film to the (red) light. (This presumes the correct choice of the power K, and holds in the midtones, where the law exposure → opacity = eʳ is very close to linear. For many types of film this linearity also works in the highlights.)
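
For the red pigment, steps ⓑ and ⓒ combined might look like this (a sketch; K, L, M are the exponents discussed below, and R_base, G_base, B_base denote the channel values on the unexposed part of the film, which supply c₀):

```python
import numpy as np

def red_opacity(R, G, B, K, L, M, R_base, G_base, B_base):
    """e^r = (R·G^-L·B^-M)^-K / c0, computed in log space for stability.
    c0 is the same expression evaluated on the unexposed film (r = 0)."""
    log_biased = -K * (np.log(R) - L * np.log(G) - M * np.log(B))
    log_c0 = -K * (np.log(R_base) - L * np.log(G_base) - M * np.log(B_base))
    return np.exp(log_biased - log_c0)
```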

⁜⁜⁜⁜⁜⁜⁜ How to continue ⁜⁜⁜⁜⁜⁜⁜

③ Explain how to find the cross-talk coefficients L, M, V, W, I, J.
④ Explain how to find exponents K, U, H (essentially, the slopes on the optical density → pigment plots).
⑤ Recalculate eʳ, eᵍ and eᵇ into the amount of light captured by each of the 3 sensitive layers of the film.

However, these are completely different processes, so it is better to postpone them to a different post. Moreover, I see no simple way to do ③ — except for using some “reasonable values”, like 0.1 used in ⓐ :–(.

Have no wish to shoot you (yet)!

Nevertheless, I do not understand this obsession with the color mask(s). For example, above I considered the contribution of the film’s pigments as S(w)ρ(w)ʳγ(w)ᵍβ(w)ᵇ, with S describing the transparency of the (intact) color masks, ρ the transparency of 1 unit of the red-capturing pigment, and r the number of units of this pigment (etc.).

Of course, what I meant by ρ was “the total contribution” of this pigment — and, in the pedantic mode (the one you use! ;–), this contribution consists of two parts:

  • the pure transparency of this pigment, ρₚ(w);
  • the cancellation of the opacity ρₘ(w) of the mask.

Assume that 1 unit of this pigment eats ⓡ units of the mask. Then the total contribution of this unit to the film’s transparency is ρₚ(w)·ρₘ(w)^(−ⓡ). Hence the contribution of r units is (ρₚ(w)·ρₘ(w)^(−ⓡ))ʳ.

Conclusion: proceeding in the pedantic mode is equivalent to using ρ(w) = ρₚ(w)·ρₘ(w)^(−ⓡ) in the “naive” mode of my notes above/below.

@Ilyaz, I have a problem with your explanation: the mask pigments are not in the image-making layer of the same color. In other words, the yellow mask pigments are in the magenta layer and are destroyed by magenta density (green light), while the red-orange mask pigments are in the cyan layer and are destroyed by cyan density (red light). That is, the light creating the image in a layer and the light destroying its mask come from different parts of the spectrum, and the ratio of image creation to mask destruction varies from point to point, depending on the color composition of the light falling onto that point of the film. So, while the mask and image density pigments are measured together, they are independent of each other.

One can look at the orange mask as an analog implementation of the camera profile: a film camera has film as its sensor, so, as the sensor changes with every roll, it makes a lot of sense to bake the profile into the film. Thus removal of the orange mask is a necessary step irrespective of whatever else we do.

I thought about starting a new thread, but then saw this one. I’m just now getting into DSLR scanning (in my case, a mirrorless Fuji X-T4). This is my first attempt. While I mostly like the results, it took some trial and error to get here. And some color tweaking in RT (courtesy of Lab adjustments). Thankfully the grain tank in the background offered up something resembling neutral gray for the white balance. Interestingly, these same settings didn’t transfer well to other frames shot on this same roll and scanned with the same settings.

First DSLR Scan

More info: My blog post.

I do think that other shots on the same roll, taken at roughly the same point in time and in the same lighting conditions, scanned (shot with a DSLR in your case) the same way and processed the same way, should develop pretty much the same.

I choose one process setting (as a starting point at least) for an entire roll all the time.

You’ll see the exposure differences between the shots, which can explain why some have a slightly different color balance. But mostly, if I have one frame right, I have the entire roll.

Differences occur when part of the roll is shot inside vs. other shots outside. Or if I let the roll sit in the camera and a few months have passed between shooting the first half and the second half, for example.

I see a lot of people with film scanners or DSLR scanning who have some sort of auto exposure enabled during scanning, and who with a DSLR do not use the exact same camera settings for the entire roll… This will of course give differences from frame to frame.

Personally, I take the scans of a roll, crop them, and then make a collage of all the scans without any room between the shots. So I force-resize all the scans to 512×512 for example (yes, messing with the aspect ratio) and glue them all together.

I then process that collage as a single image to determine a balance that seems to fit the entire roll. I then take the original scans and process them with exactly the same parameters to get my ‘starting point images’.
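
For what it’s worth, this collage trick is easy to script; here is a sketch with Pillow (the file list and tile size are placeholders):

```python
from PIL import Image

def make_collage(paths, tile=512, cols=6):
    """Force-resize every scan to tile x tile (yes, messing with the
    aspect ratio) and glue them into one grid for one-shot balancing."""
    tiles = [Image.open(p).convert("RGB").resize((tile, tile)) for p in paths]
    rows = (len(tiles) + cols - 1) // cols
    collage = Image.new("RGB", (cols * tile, rows * tile))
    for i, t in enumerate(tiles):
        collage.paste(t, ((i % cols) * tile, (i // cols) * tile))
    return collage
```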

That mostly makes sense, and I think I can reproduce a similar process. I’ve been manually setting my WB when scanning, but I’ve also relied on auto exposure. That will of course change from frame to frame with each frame’s density. I’ll make an adjustment and go from there. Thanks for the feedback.

Yes, that is a good point. For some negatives you might get better scans by better utilizing the full density range your scanner can measure, and for DSLR scans this might be something to actually be aware of. A good film scanner has a real Dmax that makes this a non-issue.

Still, I like to balance a whole roll in one go. If that doesn’t seem possible, you either scanned them differently (auto exposure or something) or you shot them in different lighting situations :).

You might find this interesting, from the creator of ART… it shows his WB approach, which should also work in RT, as they are siblings of a sort… Raw Photo Editing in ART (screencasts) - #29 by agriggio

I just came back to this film scanning topic 5 years later and found this thread…

Just to answer this question, which puzzled me for a long time: the reason Status M measurements of films don’t show fully parallel lines (apart from that being impossible to achieve completely) is that Status M reads a different wavelength of light than a print or internegative would. If you were to plot the actual density using a spectral range closer to the actual spectral sensitivity range of the positive, the curves would be more parallel.

Two references for this: one is the Arriscan reference, the other is https://125px.com/docs/motionpicture/kodak_2018/2238_TI2404.pdf

Thanks for your reply. The different measurement spectra might explain the B line deviation, but I didn’t find a confirmation in those links. Maybe you could point out the relevant bits to me?

An important aspect is that Status M specifically targets measuring color negative film, which is also mentioned in the ANSI IT2.18-1996 standard, item 8.4 on p. 5.

If we consider that each film brand has a somewhat different composition of the yellow mask, could the B line deviation on this Ektar chart just reflect the character (look) of that film compared to some “Status M standard”, more “neutral” film, and not a generic behavior of all color negative films?

It is just a standard. Yes, it is used for color film, but it is NOT designed just for the purpose of measuring contrast or designing a film. It is certainly used for process control, but that is different; i.e. it is close enough and narrow enough to how the positive responds, but it is not the same.

If you look at the diagram http://dicomp.arri.de/digital/digital_systems/DIcompanion/printdens.html you can see it, and there is also this JavaScript simulation: http://dicomp.arri.de/digital/digital_systems/DIcompanion/scannerresponse.html

They show how you can use Status M and then scale it, i.e. increase the gamma to match the print density.

If you also look at the color separations doc I posted, about how separations for archiving film can be made using 2238 film, you can see how they explicitly scale one of the channels to make up for a slightly incorrect filter combination.

Not really, because then you would never have reasonable neutral colours across different film densities. Sure, there are slight imperfections that give film its character. Also, negative film IS designed to be copied many times to make internegatives and interpositives. Take for example all the blue-screen films like Star Wars in the 80s, where an original camera negative needed to be copied perhaps 10 times before making it to the cinema: several generations to do the magic needed for blue screen and masks, and then several copies to make the cinema print for distribution. Incidentally, this is where the orange mask is really needed; if just one generation is needed and you can tolerate a slight quality loss, you could dispense with it.

With DSLR scanning using a broad white light source, you will get incorrect contrast for each channel. You can attempt to fix this with a curve and matrix. I have not had time to read the code properly. But ideally you use a light source that works with the camera’s spectral characteristics, so there is less of this work to do.

Color negative print film, and internegative film, actually has a blind spot at approximately 600 nm, which allows for a dim safe light.

This is correct. But to expand on that: the different spectral sensitivities of camera sensors and scanners will measure the density of each dye with a different slope. This is why unprocessed film scans always have a color shift from dark to light. The cyan dye in particular is recorded as less dense than expected by the red sensor. The correction is to re-scale the density of each channel to balance out the densities. Mathematically, this is a power function or gamma curve.
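
In other words (a sketch; the per-channel exponents would be found empirically, for example by matching two gray patches, as discussed earlier in the thread):

```python
import numpy as np

def rescale_densities(linear_rgb, exponents):
    """A per-channel power in linear space is exactly a per-channel
    scale of the density in log space: log(x**e) = e * log(x).
    linear_rgb: positive scan values, shape (H, W, 3); exponents: 3 floats."""
    return linear_rgb ** np.asarray(exponents)
```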

I cover the details of the process in my blog post: Scanning Color Negative Film.