Any interest in a "film negative" feature in RT ?

@Iliaz, I have a problem with your explanation: the mask pigments are not in the image-forming layer of the same color. The yellow mask pigments are in the magenta layer and are destroyed by magenta density (green light), while the red mask pigments are in the cyan layer and are destroyed by cyan density (red light). In other words, the light creating an image layer and the light destroying its mask come from different parts of the spectrum, and the ratio of image creation to mask destruction varies from point to point, depending on the color composition of the light falling onto that point of the film. So, while the mask and image densities are measured together, they are independent of each other.
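To make that independence concrete, here is a toy model (a deliberate oversimplification with made-up densities, not real film data): the residual mask in a layer shrinks as that layer's own image dye forms, while the density measured through any one filter sums an image dye with a mask living in a *different* layer.

```python
# Toy model of colored masking couplers; all density values are invented examples.
def measured_densities(magenta_formed, cyan_formed, yellow_formed):
    """Each *_formed argument is the fraction (0..1) of that layer's couplers
    converted to image dye by that layer's own exposure band."""
    Y_MASK_MAX = 0.25  # yellowish couplers living in the magenta layer
    R_MASK_MAX = 0.35  # reddish couplers living in the cyan layer
    # Residual mask shrinks as the *host* layer's image dye forms:
    yellow_mask = Y_MASK_MAX * (1.0 - magenta_formed)
    red_mask = R_MASK_MAX * (1.0 - cyan_formed)
    # Blue-filter density mixes yellow image dye with a mask controlled by GREEN exposure:
    d_blue = yellow_formed + yellow_mask
    # Green-filter density mixes magenta image dye with a mask controlled by RED exposure:
    d_green = magenta_formed + red_mask
    return d_blue, d_green
```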

One can look at the orange mask as an analog implementation of the camera profile: a film camera has film as its sensor, and since the sensor changes with every roll, it makes a lot of sense to bake the profile into the film. Thus, removal of the orange mask is a necessary step irrespective of whatever else we do.

I thought about starting a new thread, but then saw this one. I’m just now getting into DSLR scanning (in my case, with a mirrorless Fuji X-T4). This is my first attempt. While I mostly like the results, it took some trial and error to get here, plus some color tweaking in RT (courtesy of LAB adjustments). Thankfully, the grain tank in the background offered up something resembling neutral gray for the white balance. Interestingly, these same settings didn’t transfer well to other frames shot on this same roll and scanned with the same settings.

First DSLR Scan

More info: My blog post.

I do think that if other shots on the same roll, taken at roughly the same time under the same lighting conditions, are scanned (shot with a DSLR in your case) the same way and processed the same way, they should come out pretty much the same.

I choose one process setting (as a starting point at least) for an entire roll all the time.

You’ll see the exposure differences between the shots, which can explain why some have a slightly different color balance. But mostly, once I have a balance that works for one frame, I have it for the entire roll.

Differences occur when part of the roll is shot indoors and the rest outdoors, or if I let the roll sit in the camera and a few months pass between shooting the first half and the second half, for example.

I see a lot of people with film scanners, or doing DSLR scanning, who have some sort of auto exposure enabled during scanning, or who don’t use the exact same DSLR settings for the entire roll… This will of course give differences from frame to frame.

Personally, I take the scans of a roll, crop them, and then make a collage of all the scans with no room between the shots. So I force-resize all the scans to 512x512, for example (yes, messing with the aspect ratio), and glue them all together.

I then process that collage as a single image to determine a balance that seems to fit the entire roll. I then take the original scans and process them with exactly the same parameters to get my ‘starting point’ images.
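For anyone wanting to automate that contact-sheet step, here is a minimal sketch using Pillow; the folder name, file pattern, and the 512x512 tile size are just examples.

```python
# Build a gap-free collage of every scan on a roll (paths and tile size are examples).
from pathlib import Path
from PIL import Image

scans = sorted(Path("roll_scans").glob("*.tif"))
tile, cols = 512, 6
rows = -(-len(scans) // cols)  # ceiling division

sheet = Image.new("RGB", (cols * tile, rows * tile))
for i, path in enumerate(scans):
    img = Image.open(path).convert("RGB").resize((tile, tile))  # aspect ratio ignored on purpose
    sheet.paste(img, ((i % cols) * tile, (i // cols) * tile))
sheet.save("roll_collage.tif")
```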


That mostly makes sense, and I think I can reproduce a similar process. I’ve been manually setting my WB when scanning, but I’ve also relied on auto exposure, which will of course change from frame to frame with each frame’s density. I’ll make an adjustment and go from there. Thanks for the feedback.

Yes, that is a good point. For some negatives you might get better scans by better utilizing the full density range your scanner can measure, and for DSLR scans this might be something to actually be aware of. A good film scanner has a high enough real Dmax to make this a non-issue.

Still, I like to balance a whole roll in one go. If that doesn’t seem possible, you either scanned them differently (auto exposure or something) or you shot them in different lighting situations :).

You might find this interesting, from the creator of ART. It shows his WB approach, which should also work in RT, as they are siblings of a sort: Raw Photo Editing in ART (screencasts) - #29 by agriggio

I just came back to this film scanning topic 5 years later and found this thread…

Just to answer this question, which puzzled me for a long time: the reason Status M measurements of films don’t show fully parallel lines, apart from that being impossible to achieve completely, is that Status M reads different wavelengths of light than a print or internegative would. If you were to plot the actual density using a spectral range closer to the actual spectral sensitivity range of the positive, the lines would be more parallel.

Two references for this: one is the Arriscan reference, the other is https://125px.com/docs/motionpicture/kodak_2018/2238_TI2404.pdf


Thanks for your reply. The different measurement spectra might explain the B-line deviation, but I didn’t find confirmation in those links. Maybe you could point out the relevant bits to me?

An important aspect is that Status M specifically targets measuring color negative film, which is also mentioned in the ANSI IT2.18-1996 standard, item 8.4 on p. 5.

If we consider that each film brand has a somewhat different composition of the yellow mask, could the B-line deviation on this Ektar chart just reflect the character (look) of that film compared to some “Status M standard”, more “neutral” film, and not a generic behavior of all color negative films?

It is just a standard. Yes, it is used for color film, but it was NOT designed just for the purpose of measuring contrast or designing a film. It is certainly used for process control, but that is different: its responses are close enough and narrow enough to how the positive responds, but they are not the same.

You can see it in this diagram: http://dicomp.arri.de/digital/digital_systems/DIcompanion/printdens.html and in this JavaScript simulation: http://dicomp.arri.de/digital/digital_systems/DIcompanion/scannerresponse.html

They show how you can use Status M and then scale it, i.e. increase the gamma to match the print density.
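As a rough illustration of that scaling (not the ARRI pages’ actual coefficients, which you should take from the links above), converting a Status M reading toward printing density can be sketched as a per-channel gain applied to the density above the film base:

```python
# Sketch: per-channel scaling of Status M density above base toward print density.
# GAINS and D_MIN are placeholder values, not calibrated data.
import numpy as np

GAINS = np.array([0.95, 1.05, 0.80])   # hypothetical R, G, B density gains
D_MIN = np.array([0.25, 0.70, 1.05])   # hypothetical base-plus-mask densities

def status_m_to_print_density(d_status_m):
    d = np.asarray(d_status_m, dtype=float)
    return D_MIN + GAINS * (d - D_MIN)  # scale only the image density above base
```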

If you also look at the color separations document I posted, about separations made for archiving film using 2238 stock, you can see how they explicitly scale one of the channels to make up for a slightly incorrect filter combination.

Not really, because then you would never have reasonably neutral colours across different film densities. Sure, there are slight imperfections that give film character. Also, negative film IS designed to be copied many times to make internegatives and interpositives. Take for example all the blue-screen films like Star Wars in the ’80s, where an original camera negative needed to be copied perhaps 10 times before making it to the cinema: several generations to do the magic needed for blue screen and masks, and then several copies to make the cinema print for distribution. Incidentally, this is where the orange mask is really needed; if just one generation is needed and you tolerate a slight quality loss, you could dispense with it.

With DSLR scanning using a broad white light source, you will get incorrect contrast for each channel. You can attempt to fix this with a curve and matrix; I have not had time to read the code properly. But ideally you use a light source that works with the camera’s spectral characteristics, so there is less of this work to do.

Color negative print film, and internegative film, actually have a blind spot at approximately 600nm, which allows for a dim safelight.

This is correct. But to expand on that: the different spectral sensitivities of camera sensors and scanners will measure the density of each dye with a different slope. This is why unprocessed film scans always have a color shift from dark to light. The cyan dye in particular is recorded as less dense than expected by the red sensor. The correction is to rescale the density of each channel to balance out the densities; mathematically, this is a power function or gamma curve.
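In code, that rescaling amounts to raising each channel’s transmission to its own power (equivalently, multiplying its density by a per-channel factor). A minimal sketch, assuming linear raw values already normalized so the film base reads 1.0, with made-up exponents:

```python
# Per-channel density rescaling: t**-p == 10**(p * D), where D = -log10(t).
# The exponents below are illustrative; real values depend on camera and film.
import numpy as np

def negative_to_positive(t, exponents=(2.0, 2.7, 3.2)):
    t = np.clip(t, 1e-6, 1.0)                    # transmission; film base == 1.0
    d = -np.log10(t)                             # per-channel density
    return 10.0 ** (np.asarray(exponents) * d)   # inverted; still needs output scaling
```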

I cover the details of the process in my blog post: Scanning Color Negative Film.

This blog post’s approach looks very similar to the one implemented by @rom9 and discussed in this thread. This general group of approaches is essentially a reiteration of the (20+ years old) manual negative conversion procedure where the channels are adjusted using individual gammas. That said, it would be nice to have some examples of pictorial images converted alongside the narrative, since converting a color checker shot is a really straightforward procedure.

Let me explain my personal grudge with this topic. Mainstream camera sensors are all linear and nearly standardized. Any mainstream color negative film brand can be printed on any paper with acceptable results, which also hints at a standard. Therefore there should exist a process by which any film negative shot on any raw-shooting camera can be converted in a straightforward and predictable manner to a representative digital image.

A successful process of digital negative conversion should not be bound to “your” specific camera or “your” specific film shots. It should just work. There should be no mandatory “adjust to taste” step which essentially masks the flaws of the process.

Personally, I see the main issue in the fact that the film-paper system aspect gets ignored most of the time. We attempt to convert a negative on its own, with no regard for the existence of the paper response, which in fact is (literally) the key to the negative’s interpretation. A negative scanned in broadband white light with a calibrated device should be convertible to a faithful image just by applying a profile that represents the paper response, shouldn’t it?


Many points have been made in this and other similar threads, but they are mainly assumptions or conjectures. There are plenty of personal recipes and very little hard knowledge. Maybe it’s time we start producing that hard knowledge in the form of generally applicable and repeatable experiments, as well as provable (and complete) theory.


No, that is the problem in a nutshell: if you use a broad-based illuminant and a normal DSLR, you will always begin with the wrong result as your starting point, which you will then need to correct… Sure, you can use a matrix and curves to get back where you need to be, which is essential if you use such a setup.

As an example, your DSLR will see parts of the spectrum where the print film is blind (there is a blind spot that allows for a safelight).

The correct way to do this is either to use a monochrome sensor with a light source of the correct (or close to correct) spectral distribution, or to use a broad-based illuminant with a sensor of the correct sensitivity. The former IS used in several commercial products that give reliable results.

The Cineon document is hard knowledge, and there are plenty of sources for the correct way to do this if you know where to look. A DSLR is not necessarily ideal for this purpose as-is, but it can be adjusted to give very good results; it just wasn’t designed for this purpose.


While similar processes have been discussed before (and it’s the process used by Negative Lab Pro and RawTherapee), unfortunately this method remains relatively unknown. A search for how to process color negative scans yields many convoluted, difficult, and frankly bad-looking methods.

My goal with the blog post is to explain, all in one place, what is actually needed for scanning, so that people can figure out the best way for their camera, because every camera will see the film differently and every type of film will be recorded differently. I do plan to add photos to the post, and feedback is appreciated.

This process is actually not too complex. After things like camera profiles, color profiles, and LUTs are prepared, there are just four adjustable parameters, and they will remain the same for many scans.


I like to think of a digital camera scan as having its own look/characteristics, just like photographic paper has its own look. I think the colors already look accurate and true-to-life without further corrections. I suspect that a printing-density scan would appear far more saturated and digital-looking than most film photographers would expect.


I apologize for having been excessively harsh. I did learn something new from your article and would like to look closer. Also, thanks to @LaurenceLumi for nudging me in the right direction.


No harshness taken. I just wanted to point out that outside of forums like this one, this method is hard to find.

It is actually more complex than this.

The difference between Status M and the sensitivities of the print medium accounts for the non-parallel lines. The ideal way to fix this is a 3D LUT, but a gamma correction is in many cases clearly good enough.
However, normal colour negative film has a gamma less than 1, usually around 0.6. This is not just scaled back to 1 (like our linear working colour spaces are) but is further scaled by the print medium, which is a second analogue exposure. When optical printing and balancing, using either timed exposures or dichroic filters, this gamma in the print affects the white balancing result, and this also needs to be considered. That is, white balancing (though it is not called that in the printing world, at least not when I did this) will not have the same effect throughout the tonal range.

So even if you have straight lines to begin with (i.e. you correctly scan the film with the right sensitivities), if you photograph under a different illuminant than the film was designed for, the white balance will change slightly across the range, as the curves have a different slope at different points on the print. When optical printing, this is taken into account. (Keep in mind that the graphs used in photography typically plot density, itself a log quantity, against log(exposure); anything other than 45 degrees means a gamma different from 1.)
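A quick numeric illustration of that last point, with made-up gammas: if two channels have different slopes, offsets chosen to neutralize a mid-gray cannot also neutralize the shadows and highlights.

```python
# Density of a neutral gray series: D_c = gamma_c * logE (plus an offset chosen
# so the channels match at the midtone, logE = 0). Gammas are invented examples.
import numpy as np

gamma_red, gamma_blue = 0.60, 0.65
log_e = np.array([-1.0, 0.0, 1.0])        # shadow, midtone, highlight exposures

mismatch = (gamma_blue - gamma_red) * log_e
print(mismatch)   # [-0.05  0.    0.05] -> balanced midtone, tinted extremes
```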

Having read the code (and got to the point where I can change it), I can see that the approach used here ignores all this and just does a one-time gamma correction on the individual channels. Which, obviously, people are happy with.

In the analogue world there was, and still is, rigorous calibration of the various steps. There are very detailed procedures for calibrating motion picture film scanners for the film industry, etc. Kodak spent a lot of time on this in the past, and you can bet Fuji did too.

If I have some time, I hope to add some of these calibration steps to the existing code; applying the 80/20 rule, perhaps that will improve the process.

Remember the original process IS a multistep process, with different objectives.

Hope that helps


Yes, it would definitely be possible to get a better scan that is closer to paper sensitivity. Corrections could also be made to a white-light scan using a 3x3 matrix on the density values, or a 3D LUT, to better match printing density. But estimating these corrections would be nearly impossible, and an accurate calibration would require expensive and specialized equipment that most photographers, including myself, do not have access to.
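For what it’s worth, here is what such a density-space matrix correction would look like mechanically; the coefficients below are pure placeholders, since (as said) estimating them is the hard part.

```python
# Hypothetical 3x3 crosstalk correction applied to per-channel densities.
import numpy as np

M = np.array([[ 1.00, -0.05,  0.00],
              [-0.10,  1.10, -0.02],
              [ 0.00, -0.08,  1.05]])     # placeholder coefficients

def correct_toward_print_density(t):
    d = -np.log10(np.clip(t, 1e-6, None))  # scan transmission -> density
    d_print = d @ M.T                      # matrix applied in log (density) space
    return 10.0 ** -d_print                # back to transmission
```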

I would be very interested to know if you have a method of calibration that does not require additional hardware.

In my opinion, the per-channel density corrections (along with camera profiles and color spaces) are the 80/20 of film scanning. Adjusting for density balance is already a significant step beyond white balance. People are going to scan their film with whatever tools they have, and this method can get photos looking very close to accurate without too much investment.

My personal conclusion is that it’s not necessary to fully simulate the printing process, and that the appearance of the dyes scanned in white light can add a look to the film that is similar to a print.

Are you editing RawTherapee code? I’m not a RawTherapee developer; I’m just interested in the subject of film scanning.

Hacking both the RawTherapee and darktable code, though I only have RawTherapee compiling properly.

The tools to do this properly aren’t that specialised: a spectrometer and/or densitometer are all that’s needed, along with optical printing. But you need a lot of samples to be very accurate, and that costs time and money.

Nobody who does all that work wants to give it away. The Academy of Motion Picture Arts and Sciences has an American standard registered for the printing density of cinema print film, but you still need a LUT to characterise the print medium; you’re working with a CMY output.

Kodak and Fuji never did anything like that for still photographers, preferring to sell equipment to minilabs etc. ICC colour management ignores colour negative.

There was a time, for example, when digital FX work was mixed in with optical printing, so having a reasonable match was essential for predicting the output in motion picture work; so it can be done.

The Fuji Frontier, the Kodak Pakon, and Kodak Photo CD are some examples in the still world, all closed source or commercial trade secrets, etc.

With the very small market, there isn’t the demand for this kind of profiling; it is just too expensive, and it can’t be used as-is for faded stock anyway, because you have to work out what the film would have been before it faded, so you have to guess.

But I think there is room for improvement in the tools we have as still photographers.


Having recently digitized thousands of slides and color negatives, I will add my experience. Original color slides survived well with their color intact; copies of slides, however, faded horribly. Color negatives all faded to some extent, but worse, the color of the film stock (unexposed developed film) varied wildly, even within the same brand and film type. I was using an RGB backlight with the color adjusted to get full scale without clipping on each individual channel. I had to adjust the backlight color for every roll I digitized, and the color of every single frame, because the level of fading varied through each roll. On some of the worst frames, the fading varied across the frame, even to the point where the perforations of the piece of film stacked on top of the frame I was digitizing were clearly visible in the frame.
I think that if you are working with new film, developed within a year of when you digitize it, you have a chance of coming up with an algorithm to accurately recover the color. But in most cases you are more likely to be dealing with negatives that have been in storage for 20 to 80 years, and very likely faded. In that case, you are going to need to manually adjust the picture as best you can.


So after a few years I’m finally getting through the process of digitizing my negatives. The tool is working really well in my process (individual monochromatic-backlit captures for R, G, and B, as discussed in Digitizing film using DSLR and RGB LED lights, with a color profile for the film derived from SSF data, as I describe in Anyone have SSF Data for the A7III? - #13 by Entropy512).

Results are incredible for brightly lit pictures, but extremely poor for dimly lit pictures due to the black point offset.

Complicating this discussion is the fact that the UI has changed: instead of picking a reference “black” and a reference “white”, you now must pick two neutral points from the same image. Problematic if your reference “black” (e.g. the optical mask) is its own dedicated exposure.

I really think that a scenario where you cannot rely on the EC tool is a bad thing, user-interface-wise.

Which formula? I see the formula for optical density in Photographic film - Wikipedia, but what I’m seeing in the code ((1/v)^p = v^-p) is consistent with “the transmission coefficient of the developed film is proportional to a power of the reciprocal of the brightness of the original exposure”, with an improper assumption.

As to what that improper assumption is: the code assumes that a value of 1 in the image data is a transmission coefficient of 1. That would only be valid IF the image were exposed and white balanced such that the orange mask of unexposed film were EXACTLY 1.0, and it is not.

In reality, it is the raw data values of an unexposed but developed film area that correspond to a transmission coefficient of 1, and those will NOT have a raw value of 1!
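A minimal sketch of the normalization being argued for here (my illustration, not the actual RT code): divide by values sampled from the film rebate so the base really does read as transmission 1.0, then apply the per-channel power.

```python
# Sketch: normalize by the film base before the per-channel power (exponents are examples).
import numpy as np

def invert_negative(raw, base_rgb, exponents=(2.0, 2.7, 3.2)):
    """raw: linear demosaiced RGB array; base_rgb: values sampled from an
    unexposed-but-developed area (e.g. the rebate between frames)."""
    t = np.clip(raw / np.asarray(base_rgb), 1e-6, 1.0)  # base area -> t == 1.0
    return t ** -np.asarray(exponents)                  # output still needs scaling
```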

Edit: Very rough proof of concept at Commits · Entropy512/RawTherapee · GitHub

EDIT 2: After some further analysis and thinking, I don’t think my rough proof of concept is really doing anything useful. It’s basically fighting against the exposure compensation after conversion. At least I learned a lot about adding GUI parameters to something in RT.

Here’s the problem. This is a comparison of the current simple model implemented in RT’s film negative tool vs. the published density curves of Fuji Superia X-Tra 400 from https://asset.fujifilm.com/master/emea/files/2020-10/9a958fdcc6bd1442a06f71e134b811f6/films_superia-xtra400_datasheet_01.pdf

Note that in this plot I’ve normalized out the minimum density of the film (basically the color of the orange mask), assuming it’s been white-balanced and scaled out of consideration by the scaling code I’ve added.

[Plot: scene exposure vs. transmission coefficient, RT’s simple model compared with the published Superia X-Tra 400 curves]

Look at the right side of the graph, where the transmission coefficient is maximum and density is minimum: there is significant nonlinearity here, and as a result the simple model based solely on the Wikipedia formula (which Wikipedia itself basically invalidates with the graphs to the right of that formula…) lifts the black levels by 4 EV!

You can compensate for this with a toe in a tone curve in the Exposure Compensation tool, but the problem is that ALL curves I’m aware of are applied after EC and after filmneg’s “output level” adjustment, meaning that if any of these are adjusted, the tone curve must be adjusted too.

For consistency and ease of use, compensating for this curve needs to be done before any subsequent scaling.
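One way to do that pre-scaling compensation (my sketch, not anything currently in RT) is to invert the published characteristic curve directly: sample (log exposure, density) pairs from the datasheet and interpolate, instead of assuming the straight-line model.

```python
# Sketch: linearize via the characteristic curve before any subsequent scaling.
# The sample points below are placeholders, not actual Superia X-Tra 400 data.
import numpy as np

log_e_samples   = np.array([-3.0, -2.0, -1.0, 0.0, 1.0])    # log10 scene exposure
density_samples = np.array([0.25, 0.30, 0.70, 1.30, 1.75])  # measured density

def scene_log_exposure(density):
    """Invert the (monotonic) curve by interpolation; the toe and shoulder are
    handled automatically instead of being mistaken for extra exposure."""
    return np.interp(density, density_samples, log_e_samples)
```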
