Any interest in a "film negative" feature in RT ?

If you want the widest gamut possible when transferring the colors, don’t go for anything less than ProPhoto. In particular, if you choose sRGB or something similar, you restrict your gamut on export and lose color information for editing in NX-D.
I just tested NX-D and it correctly recognizes ProPhoto RGB as embedded profile.
image
So, in RT set things up like this:
image

P.S. Even though I only needed NX-D for less than a minute just now, I was vividly reminded of what crappy software it is :weary: You know you can do retouching in darktable as well?

Understood, thanks again and thanks for testing, @Thanatomanic. I apologise for the vivid memories it brought back.

I’ll give this a try. I know Nikon Capture NX-D is crappy :slight_smile:
The reason I still have it is for the occasional digital shot I do; since “Nikon Custom Picture Control” is part of my workflow, I like to be able to replicate it exactly as it’s set in-camera.
Anyway, off topic :smiley: sorry for that.

Honestly, I haven’t really tried my luck in darktable (apart from the film module, which I can’t use as nicely as RT’s). I’ll see what I can do there!

BTW, is this thread the only one that discusses this module, overall?

No, here are some other threads that i know of:

alberto


OK, @rom9, @Thanatomanic, I’ve been playing with colour negative inversion (mostly on fuji superia 200 and some kodak gold 200) for a few weeks, now.

Again, thanks for your initial responses that were very helpful to get me started.

I have more questions, if you do not mind, as I find it difficult to understand where we are, simply by following all the posts, here.

  • film rebate or frame border:

    1. is it necessary to have one for the inversion to work “properly”? How clean should it be (do I need to crop a bit to avoid unexpected darks)?
    2. how colour-neutral can it be? is it suited for scans that are balanced before acquisition (i.e. with proper lighting, be it dichroic, RGB, …)? if it’s completely balanced (i.e. almost clipped white), will this cause issues?
  • about the “pick neutral spots” button (that “dual-spot white balance”)
    image

    1. I’m not sure I understand why it is supposed to work (and not cause shifts in colours) if, as a neutral dark, the film rebate is sampled. While it can be black (and it’s better than having nothing to compare to), it’s not as reliable as a gray colour that’s in the picture itself - i.e. something that actually has density. ==> is it still true that you would recommend sampling the film border?
    2. what about the neutral white? why would it have to be dense? why not something just a bit lighter than the previous spot?
  • DCP question: When it comes to camera profile, I’m usually more than happy with the settings you’ve suggested, i.e. image
    but in some cases, for colours I know (or I have a “reference” of, as in an iPhone shot), I’m struggling a bit.

    1. I want to be sure nothing “distorts” the channels, relatively to one another, such as baked-in curves (hello, Nikon Custom Picture Control “landscape”, “neutral”, “portrait”, “flat”, “vivid”) ==> is it then really safe to tick all the boxes?
  • Exposure correction: what are the sliders / curves / modes you would recommend, for further processing? Exposure / blacks and then Lab? or mostly tone curves within the exposure section (which mode, btw?)?

This is all with colour rendition in mind.

Thanks!

Hi :slight_smile:

There is no need to crop, since the calculation works on the entire raw image (before cropping is applied).
With the current 5.8 release, you should try to take a close-up picture of the negative, with only small borders around the actual negative image.
If you build the current RT dev branch, instead, there is a new “Film Base” button that lets you pick an unexposed part of the negative so that the calculation is not influenced by the border anymore.

no issues at all, that’s even better :wink:

Totally agree. If you have 2 spots in the negative that you know were neutral gray in reality, by all means pick those (the typical example would be a picture of a Macbeth chart or some other reference target). But when you don’t have a dark gray or black spot available, the unexposed border should work well enough.

that would work too. Since we are estimating a curve from 2 points, choosing distant values should (in theory) give a better estimation.

Unfortunately I have no idea… I’ve simply noticed that if I deselect “Look Table” with my Sony profiles, I always get a cyan sky or other weird colors. And similar things happen with other profiles for different camera models. Tone Curve should not (if I understand correctly) alter the colors, just the contrast.
Don’t know about Base Table though, since my profiles don’t have it. Judging from the checkbox tooltip (“HueSatMap”), it might boost some specific hues? That would be bad… you can try disabling that one.
Sorry, i know nothing about color profiles, so my approach is just trial and error… :slight_smile:

Typically i just use exposure and tone curves (either Standard or Film-like modes) and nothing more. But i’m not a photographer, so i’m not very skilled in judging the result :smiley:

alberto

@rom9,
hi and thanks for your quick feedback.

EDIT: the screenshots are yuck, way over the top

to crop or…

  1. ok for the crop.
  2. Now what if the captured image does not have any border? How necessary/mandatory is the film base in the calculation? I am asking because, in some cases - due to light bleeding through sprocket holes into the film base - it is very desirable to mask out everything that is not the image being scanned, in order to avoid nasty bleed into the image itself.

estimating curves

  • are we estimating a curve or merely a slope in the log domain?

contrast (Reference exponent adjustment)

I had totally forgotten to ask this. From a signal processing pipeline standpoint,

  • should I strive to maximise the contrast (Reference Exponent) in the Film Module itself?
  • or should I not even bother and use low values, such as 1.0 instead of 1.5, and do my contrast and exposure with the exposure section?

example:
A. The film module inverts and gives me a bit too much contrast once the final WB is achieved. Should I reduce the Reference Exponent value, or should I play with the exposure / highlights and shadows compression sliders instead?
B. The film module inverts and gives me a very conservatively stretched histogram once the final WB is achieved. Should I then proceed to increase the Reference Exponent value in order to adequately stretch the histogram? Or should I just do it with the curves’ min. and max. levels?

exposure sliders

  • ok, I’ll spend some time reading how the sliders work :smiley: as I don’t find the exposure and blacks combination very intuitive.

DCP settings

  • :frowning_face: too bad you don’t know. Maybe people involved in this thread would know? do you know anyone in there? :smiley: Also found this.

  • I honestly abhor the trial-and-error approach for this kind of signal processing related issue - I need to grasp the compromises I make when I activate this or that.
    But, just so you see what irritates my eye, here is an obvious case of dull reds. The reds are not the only hues affected but they’re the most obvious in this picture.

  • how I adjusted the neutral blacks, neutral whites and white balance:
    image

Here’s the folder containing the .NEF and .pp3 files.

One additional example: look at the paint of the car and at the tail lights.

==> I am not saying it’s “perfect” without the look table but… here’s, for what it is worth, a slightly contrast-boosted iphone shot taken at the same moment: https://www.flickr.com/gp/franekn/yK3oZL

Reference:

:arrow_right: does anybody know the “advanced uses” mentioned here? :grin:

Thanks for reading and, in advance, thanks for your help / pointers.

side note, that I want to appear clearly (and not just as an edit to a post):

  • here’s a very interesting article on tri-color scanning
  • and here an interesting thread with pointers to sony A7 raw files containing our favourite passport :smiley:

That’s perfectly fine, borders are absolutely not necessary (in fact, i also cover the sprocket holes myself, because of light bleeding). If you build your DIY film holder, you can cut it a bit wider than the negative frame, in order to include the tiny slice of unexposed film between frames (16 pixels would be enough), so that you have an unexposed spot handy in every image.
On the other hand, you should not include much of the film holder itself: being completely dark, its reciprocal tends to infinity, and these huge values alter the channel medians that are used to calculate the final multipliers. So, the more film holder you include in the picture, the more you will have to compensate manually afterwards.
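To make the median-skew point concrete, here is a tiny, purely illustrative sketch (made-up numbers, not RT’s actual code) of how huge reciprocal values from a dark film holder drag a channel median away from the image content:

```python
from statistics import median

# Hypothetical inverted channel values: the image content sits in a
# narrow range, while the dark film holder inverts to huge reciprocals.
image_pixels = [1.0, 1.2, 1.4, 1.6, 1.8]
holder_pixels = [500.0, 800.0, 900.0]

print(round(median(image_pixels), 6))                  # → 1.4, represents the scene
print(round(median(image_pixels + holder_pixels), 6))  # → 1.7, already skewed by the holder
```

The more holder pixels enter the frame, the further the median (and hence any multiplier derived from it) drifts, which is why the manual compensation grows with the amount of holder in the shot.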
With the dev branch of RT, the Film Base sampling is persisted in the processing profile, so you can even take a shot of an empty space at the start of the film, sample the unexposed spot from that, and then apply the same film base values to the rest of the roll. This also makes the calculation insensitive to the dark film holder portion.

Sorry, I don’t know the correct mathematical terms; we know that the function is a simple exponentiation, so the procedure calculates the correct exponent starting from 2 known points.
If the values are very close to each other, the signal-to-noise ratio (where “noise” could come from the film grain, imperfections, uneven backlight, etc.) would be worse, so i think it should be preferable to choose more distant values, if available.
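For the curious, here is a minimal sketch of that two-point estimation, under the assumption (mine, inferred from the description above, not rom9’s actual code) that the transfer function is a pure power law y = x**k:

```python
import math

def exponent_from_two_points(x1, y1, x2, y2):
    # If y = x**k, then y1/y2 = (x1/x2)**k, so k follows directly:
    #   k = log(y1/y2) / log(x1/x2)
    return math.log(y1 / y2) / math.log(x1 / x2)

# Two points lying on y = x**2:
k = exponent_from_two_points(2.0, 4.0, 3.0, 9.0)
print(round(k, 6))  # → 2.0
```

When x1 and x2 are close together, both logarithms approach zero, so small noise in the samples swings k wildly - which matches the advice to pick distant values when available.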

Definitely not. The Reference Exponent should only be increased to get an additional contrast boost when you have a very old, faded negative, and the output is so low-contrast that it is difficult to manage with the standard controls. The film negative tool only applies a simple exponentiation without any “smoothing” at the extremes, so if you maximise contrast with it, you might get very noisy highlights. Moreover, keep in mind that it works before demosaic, so boosting the raw data too much could also affect the demosaic algorithm.
The usual contrast and tone curve controls, instead, happen after demosaic and already do a wonderful job of smoothing out the highlights.
So, your “B” example is definitely the way to go :wink:

I think this might explain the problem. My camera profile doesn’t have a base table (the checkbox is grayed out), so when I tried disabling the look table, I got bad results, as predicted above. This led me to the wrong conclusion that the look table must be enabled.
Instead, it turns out that, when a camera profile provides both, it might work better with the look table disabled. I tried your raw files and agree that those reds are way better.

Anyway, i doubt that this tool will ever produce perfectly accurate results, since it works by altering sensor data, before the camera profile is applied.
If you want more accuracy, maybe you could have more luck by processing your raw image normally, producing a (negative) output TIFF, and then opening the TIFF file and applying the film negative tool to that, instead of the raw file.
This is not yet supported in the current release, but is in the works on a separate branch.


Hi @nonophuran, sorry for the delay. Here’s an example of what i was referring to in my previous post: processing the raw file to a negative tiff (with “Camera Standard” as input profile), and then inverting the tiff (using this experimental RT branch) :
audi_tone_camstd_nonraw

Note how the yellow licence plate looks more similar to the iPhone shot, although the red shade is still not the same.

Regarding the Medium article about tri-color scanning, thanks for the link, it is very interesting indeed. I had to try it myself :smiley:
I tried using one of these cheap RGB bulbs, by taking 3 separate shots, with red, green and blue light respectively.
Then I created a composite, “fake” raw file where, for each corresponding pixel, I kept the largest value among the 3 images (this should be analogous to what “Lighten” does in the Medium article).
This is easily done with existing tools. Let’s say we have 3 raw files test_r.ARW, test_g.ARW and test_b.ARW.

Extract each raw file to a grayscale, linear tiff, without demosaic or colorspace conversion:

dcraw -v -T -o 0 -4 -H 1 -d test_{r,g,b}.ARW

Combine the 3 tiffs and keep the larger value for each pixel:

convert test_{r,g,b}.tiff -grayscale Rec709Luminance \
  -evaluate-sequence Max test_comp.tiff

Now rename the composite tiff to dng, and add some metadata to mark it as a mosaic, RGGB raw file:

mv test_comp.tiff test_comp.dng

exiftool -DNGVersion=1.1.0.0 \
  -PhotometricInterpretation="Color Filter Array" \
  -IFD0:CFAPattern2="0 1 1 2" \
  -IFD0:CFARepeatPatternDim="2 2" \
  test_comp.dng

At this point, RT can open this DNG as a normal raw file from an unknown camera model.

So first of all, to check whether it could work, I tested with a standard, positive picture of a color target (no film negative involved). I took three separate R, G, B shots, plus the same picture with a normal xenon flash for comparison.

I used a linear sRGB input profile (this one) for both the composite DNG and the flash picture to make sure colors were treated the same, despite different metadata. This is the result (RGB composite on the left, flash on the right):

The composite DNG is much more saturated, but besides that, there doesn’t seem to be any huge color deviation, which is good news.

So, i’ve tested with an actual negative. Here is an example with Kodak Portra 800 (left RGB composite, right xenon flash backlight):

In order to make a fair comparison, both pictures are processed using Linear sRGB as input profile.
Even if i try to boost Saturation and Chromaticity in the right picture, trying to match colors on the left, the result is never as good (note especially the yellow and yellow-green patches):

image

Here’s a real-world example, Kodak ColorPlus 200 (please excuse the artistic quality, i’m not a photographer :-D)

This is also processed with Linear sRGB as input profile, and just some tone curves and chromaticity boost.
My impression is that with this method it’s more straightforward to get a good result. Using a white backlight it is also possible to achieve a comparable result, but in some cases it may require more tweaking. The obvious disadvantages are having to take 3 shots each time, and being more sensitive to vibration: if one of the shots is slightly offset… bye-bye demosaic!

Anyway… maybe you could try this method on that Audi TT picture, and see what comes out… :wink:

alberto


? monochrome ?

well, that would require different processing in the filmneg tool, which currently only supports mosaic raw files… the “nonraw” branch will fix that though, as we can process the 3 components first, and then merge them as non-raw images.
Anyway, vibrations could still be an issue even if we demosaic the individual components first. To preserve sharpness, collimation of the component images is critical.

@rom9

My answer was not to be taken seriously!

… ooops :rofl:

Hi Rom9,
First of all, let me thank you for developing such a great feature! I have a couple of years of experience scanning negative film using different programs (Silverfast, Vuescan, NegativeLabPro) and I must say I’m now able to get THE BEST results using your code, as opposed to those other closed-source, paid programs. I was really in awe when I saw the colors I could get out of my color negs using RawTherapee; it was literally the second time I saw them correctly (the first time was with my eyes, when I shot the frames…).
You really hit the nail on the head with those formulas, and the idea of obtaining the correct coefficients from two shades of gray on the film looks very cool.

That being said, I must admit that I get such good results only when RawTherapee is paired with Photoshop.
And if we could solve that last part, it would be the final solution.
Let me explain what I mean.
First of all, I’ve built manually the latest code you have on your personal github, so I’m working with the latest version, the one which works past demosaicing, has film base sampling etc.

And the problem is with subtracting the film base: it still never gets me a real black point, and this makes it impossible to achieve correct-looking results via RawTherapee’s usual controls (exposure, black, etc.). For example, if I push Black too much, I see it also dulls the other colors; it’s not a pure black-level adjustment. And if I push Exposure too far to the left, the image becomes too dark.

Let me illustrate this with pictures (please forgive dull subject matter…). It was shot on Fuji Pro400H and digitized with Nikon D750.
Here’s unedited version from RawTherapee with negative inversion & film base point picked up:


If we open it in Levels in PS (Ctrl-L) and check the color channels, we’ll see that there’s plenty of space left on the left, and different channels have different amounts of that space, so simply dragging Exposure/Black in RawTherapee to the left will not fix the color cast!

Now, if in PS I edit each channel individually by dragging the left pointer until it reaches the left edge of that channel’s histogram (each channel needs a different amount!), save to tiff, and then apply exposure/WB corrections in RawTherapee, I get this almost ideal version, which looks just how my eyes saw this scene:

But imagine we don’t have PS. RawTherapee doesn’t allow you to easily cut the histogram in this way; there are controls, but you don’t see the histogram, so it becomes hit and miss, whereas in PS you just drag the left slider until it reaches the left edge of the histogram. So if we use the RawTherapee conversion alone, with film base applied, and dial down Exposure/Black, we get this much duller version. It may look OK until you see the potential that was obtained from the same source in PS with a purely mechanical action.
So, the pure RT variant:

I see two problems with the last picture (apart from it being a bad photo):

  1. No black is really black
  2. I cannot bring it down with the Black slider any more than it currently is, because that would make the car’s rear lights very discolored and the whole look overly dark

So my proposition is this: what if we

  1. Remove the film base picker altogether
  2. Instead, after you calculate the inverted pixels, just do what I’m doing manually in PS: find the leftmost non-zero point in each channel’s histogram, subtract it from the corresponding channel’s data, and then rescale each channel so that its maximum level stays the same as before the subtraction - this is what Photoshop does; I’ve checked it today with a pre-generated picture and then inspected its processed data.
  3. The reality is that the current film base picker is very dependent on each picture and on the contrast (green) used for each particular frame. The approach I’m proposing gets rid of that ever-changing factor. I’ve processed about 60 frames this way in the last few days, so I’m certain it works better.
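The proposed per-channel operation can be sketched like this (hypothetical helper names; this mirrors what Photoshop’s Levels left slider does, per channel):

```python
def reset_black_per_channel(channels):
    # Hypothetical helper: for each channel, subtract its minimum,
    # then rescale so the channel's maximum stays where it was before
    # the subtraction (a per-channel Levels black-point reset).
    out = []
    for ch in channels:
        lo, hi = min(ch), max(ch)
        scale = hi / (hi - lo) if hi > lo else 1.0
        out.append([(v - lo) * scale for v in ch])
    return out

r, g = [0.10, 0.50, 0.90], [0.20, 0.60, 0.80]
new_r, new_g = reset_black_per_channel([r, g])
print([round(v, 4) for v in new_r])  # → [0.0, 0.45, 0.9]
```

Since lo is computed independently for each channel, channels with different offsets from zero each get their own correction, unlike a single curve applied to all channels at once.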

I would be interested in hearing your thoughts, and thanks again for the feature! Even if you don’t implement what I propose, I can still work around it with Photoshop post-processing; it just means more manual effort.

P.S. Saved the source negative NEF here:


Hello @Ilya_Palopezhentsev and welcome! Thank you for testing :slight_smile:

If i understand correctly, it is certainly possible to do level adjustment in RT: just set a straight segment in “Tone Curve 1”, like this:

With the endpoints at the desired min and max values of the histogram; this should do exactly what you need :slight_smile:
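In other words, a straight segment from (lo, 0) to (hi, 1) is just a linear levels remap; roughly (illustrative helper, not RT code):

```python
def straight_segment(v, lo, hi):
    # Illustrative: a straight tone-curve segment from (lo, 0) to
    # (hi, 1). Values below lo clip to black, above hi to white,
    # and everything in between is stretched linearly.
    return min(1.0, max(0.0, (v - lo) / (hi - lo)))

print(round(straight_segment(0.30, 0.30, 0.90), 6))  # → 0.0
print(round(straight_segment(0.60, 0.30, 0.90), 6))  # → 0.5
print(round(straight_segment(0.95, 0.30, 0.90), 6))  # → 1.0
```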

One question: when you say

what repo/branch do you mean exactly? Because if you used rom9/dev, it is terribly out of date, since i forgot to push to it, sorry :smiley:

The latest film negative version is in the official upstream repo, dev branch.
With that, results should be pretty stable among different frames, provided the backlight and camera settings stay the same.

Edit: i’ve just pushed to my personal repo as well, just in case :smiley:


Hi rom9,

  1. You see, the curve you are showing affects all channels equally, but my point is that after inversion different channels have different offsets from zero, and that’s what I’m resetting in PS, in each channel individually. If this is not done, it leaves a color cast which then gets in the way of all subsequent RT manipulations. In this particular picture it’s not extreme, but I’ve seen cases where the difference in offset is quite prominent between the channels.
  2. I used the filmneg_nonraw branch from your personal GitHub; is that wrong? It had commits from April…
  3. What do you think about my proposal, which automates this? It would mean not having to bother with selecting the film base or working with curves, and anyway, as I said, RT doesn’t yet have a convenient per-channel curve editor with an overlaid histogram.

Oh, ok, I see. The filmneg_nonraw branch is indeed up to date, but the Film Base picker doesn’t work in that version (sorry about that; basically I made a mess in the code while hacking together a solution for tiff files…).
Also, keep in mind that, if you give it a raw file, that version still does its processing before demosaic, as usual.
So, if you digitize your negatives using camera raw files, please build the official upstream dev branch: the film base picker should work correctly and “anchor” the picked spot to black, no matter what reference exponent (green) you choose.

Thanks, will try that branch!

So I’ve tried the latest official dev. It works a bit better, but alas, I still don’t see a complete cancelling of black. When I increase the exposure, it pumps up the black point - which I assume it wouldn’t if the black point were truly zero (if I’m right in thinking that Exposure in RT is multiplicative in nature - in LR it seems to be). And when I pump up the exposure, the effect of the curve is undone: black becomes gray, and I have to move the curve slider again to match the new origin of the histogram. Also, when I touch the Lightness slider, the background histogram in the curve screen doesn’t move even though the brightness clearly changes.

All of this makes editing inconvenient. I still think that if the negative conversion, after its main step, subtracted the minimum value from each of the channels (tailored to each individual picture), rather than subtracting some film base color (perhaps sampled from another frame, with potentially different S/N ratios causing sampling imperfections), the end result would be better and faster to reach.

True, the values can’t go down to absolute zero, since the film base adjustment applies a coefficient. The film base becomes 1/250th of the maximum in the output, if I remember correctly.
I don’t think you should ever touch Exposure compensation when using the film negative tool. After you have selected the film base spot (which goes to near-black), if you find the rest of the picture to be extremely dark, you can raise the Reference Exponent a bit; you’ll see that the near-black spot stays at the same level, while the rest of the picture becomes brighter. Then you can fine-tune using just the tone curves.
Here’s an example to apply to your raw file above; note how exposure comp is zero:
Dslr0329.NEF.pp3 (12.2 KB)

Lightness is applied after the Tone Curves, so their histogram does not change.

The film negative implementation is based entirely on the formula from the Wikipedia article; if i throw an offset into the mix, it becomes a polynomial, not a simple exponentiation, and i’m not sure it would be correct.
Moreover, after you have picked the film base spot, the channels are aligned (you will have dark gray in the output), so you would have to subtract the same quantity from all channels. This is exactly what the Black level compensation does; if you don’t boost the exposure, you would only have to bring that 1/250th film base level down to zero; for such a small adjustment you won’t lose any saturation, and you can achieve absolute zero black if you want :slight_smile:
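Putting the pieces above together, here is an assumed, simplified sketch (my reading of the description, not the actual RT code) of why the picked film-base spot stays anchored near black while the reference exponent changes: the inversion is a pure exponentiation of the reciprocal, rescaled so the film-base value always lands at 1/250 of the output maximum:

```python
def make_inverter(film_base, exponent, out_max=1.0):
    # Assumed model: out = scale * (1/raw)**exponent, with scale
    # chosen so that raw == film_base maps to out_max/250
    # (the near-black anchor described above).
    anchor = out_max / 250.0
    scale = anchor / (1.0 / film_base) ** exponent

    def invert(raw):
        return scale * (1.0 / raw) ** exponent

    return invert

# The film-base output stays at the anchor for any exponent choice:
for k in (1.5, 2.0, 3.0):
    inv = make_inverter(film_base=0.9, exponent=k)
    print(round(inv(0.9), 6))  # → 0.004 each time
```

The leftover 1/250 constant is then the same in all channels, which is why a single Black level adjustment can take it to absolute zero without hurting saturation.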

Finding the “minimum value” automatically is not that straightforward, as some images might include the film sprocket holes or other spurious lights.
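A made-up numeric illustration of that difficulty: a clear sprocket hole transmits the full backlight, so after the reciprocal inversion it becomes a near-zero outlier, and the naive per-channel minimum picks it instead of the image’s true darkest tone:

```python
# Hypothetical inverted channel values (made-up numbers):
frame = [0.12, 0.15, 0.18, 0.22]   # the actual image content
leak = [0.001, 0.002]              # sprocket-hole bleed inverts to near-zero

naive = min(frame + leak)
print(naive)  # → 0.001, the leak, not the image's darkest tone (0.12)
```

Subtracting such an outlier minimum would leave the real color cast untouched, which is one reason a fully automatic black-point search is not straightforward.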