Any interest in a "film negative" feature in RT?

Hello, and thanks for testing! :slight_smile:

Unfortunately I can't give a very scientific answer, because I'm totally ignorant about color science :rofl:
From what I've seen, the best results are obtained with the standard color profile for your camera, the same one you normally use to develop regular digital pictures.
“No profile” is definitely not a good setting; I always get strong color deviations with it. This is true for my Sony A7 and my Sony A6000, but also for many other sample raw files that I got from other users here.
The Nikon D750 profile should be included in RT, so in your case just selecting “Auto-matched camera profile” should be enough. Otherwise, you can look for a suitable DCP profile online, or take it from Adobe DNG Converter.
Be sure to enable all the available checkboxes after selecting the profile, especially “Look Table” and “Tone Curve”.

These have nothing to do with the negative inversion itself, as they come into play later in the processing pipeline.
Working profile is what RT uses internally for calculations, after the picture has already been converted to a positive (which happens before demosaic); I always leave the default setting.
Output profile will be the color profile of your output file. This is also not related to the negative inversion, but rather to what you plan to do with your output file. If you want to export a TIFF for further editing in different software, you might want to use a wide-gamut profile, in order to retain as much color information as possible. Still, I'm not an expert on this, so please take a look at the wiki for more accurate info :slight_smile:

There's no need to match the Working and Output profiles. From what I understand, it's normal for the Working profile to have a wider gamut than the Output one, in order to leave more headroom for intermediate color calculations.

Not at all! Feel free to ask, and thanks again for testing :wink:

alberto


@rom9 has the truth of it. You can just leave Input profile on “Auto-matched”. Having “No profile” means that the RGB values your D750 produces are not correctly transformed to the colors they represent.
The Output profile you select gets embedded in the exported file. It is probably best to set this to Rec2020 or ProPhoto if you're going to do post-processing. However, if NX-D doesn't read this embedded profile, the colours will be messed up when you open your file. If everything works properly, the TIFF you import into NX-D should be identical to what you see on screen in RT before export.


@rom9, @Thanatomanic, thanks for your responses :smiley:

  • :bulb: I don’t really shoot on my DSLR - most of my work is colour and monochrome negatives :film_strip:
  • :heavy_check_mark: about the auto-matched camera profile: I think I understand, but I'm still thrown off balance when I choose the DCP/Tone curve option (huuuge impact). I was afraid that RT would attempt to mimic the baked-in Nikon Custom Picture Control… just as Adobe Camera Raw (ACR) offers those “landscape”, “portrait” and “neutral” modes depending on what's read from the file, though it does so imperfectly.
  • :heavy_check_mark: thanks for clarifying the Working vs Output colour profiles. I would just add a quote from the RawPedia colour management addon article:

Do you have a very high quality monitor which has a gamut close to AdobeRGB or WideGamutRGB? in this case take a wide gamut profile.

OK, so, since I do dust-n-scratches :broom: in Nikon Capture NX-D anyway, I'll set my RT output profile to NKAdobe or something like that. I initially tried the RT flavour of sRGB (RTv4) and, while the resulting TIFF file is successfully opened by Nikon Capture NX-D, it kind of fails to process it. It just remains stuck (that piece of software is far from perfect, so I'm not really surprised).


Now, mostly for the RT newcomers / newbies (aren't there many people looking for alternatives to LR+NLP?), I'll summarise (and add a few elements to) my workflow. Feel free to comment. Maybe it could walk people through the process and become an addition to RawPedia, especially the film negative article.

  • to digitise, I use a Nikon D750 with Expose-to-the-Right + Universal white balance (#ETTR #uniwb are the usual hashtags) + the Neutral* (-2 saturation) “picture control” (because I heard “Flat” is less far from linear than “Neutral” is).
  1. Open RawTherapee. In the “File Browser” tab, navigate to where the file you want to process is.
  2. Right-click the file and choose “Open”. This gets you to the “Editor” tab.
  3. In the panel that sits on the right, click the “colour” button.
    a. Scroll down to the “Colour Management” section.
    b. In the “Input Profile” subsection, pick the “auto-matched” option.
    c. In the “Working Profile” subsection, pick a wide-gamut profile, such as ProPhoto (provided your display has sufficiently wide gamut too), without any “Tone response curve”.
    d. In the “Output Profile” subsection, since we are going to use Nikon Capture NX-D, pick “NKsRGB”, or “NKAdobe”, or some other profile you know your next post-processing software can handle. The idea is to keep the gamut as wide as possible until the moment you generate an 8-bit JPG.
    e. I leave the “rendering intent” and the “Black Point Compensation” as they are.
  4. In the panel that sits on the right, click the “Raw” button.
    a. Activate the “Film Negative” section. Inversion happens automatically in the preview image. ==> output preview looks nasty, as film base compensation has not been refined yet.
    b. For further tweaking, if you have such reference points in your image, click the “Pick white and black spots” button and with the eyedropper tool you now have, pick one neutral highlight and one neutral shadow. ==> output preview undergoes subtle changes.
  5. In the panel that sits on the right, click the “colour” button.
    a. Activate the “White Balance” section.
    b. If you are feeling lucky, in the “Method” dropdown, pick “Auto”. ==> output preview looks fine now.
    c. If you want to do it manually, you can “Pick” the eyedropper tool and sample the black border. Or any other target that is supposed to be neutral in your final image.
  6. In the panel that sits on the right, click the “Exposure” button.
    a. In the “Exposure” section, right after the sliders, you may already have tone curves set up. Set them to “linear” and “standard”.
    b. If you want to proceed to curve and level adjustments, “Auto Levels” can give you a good starting point to stretch the overall contrast. Note (known issue): it works much better if you don’t have a border in the first place and the blacks in your picture differ from the border itself (hello, overexposure).
    c. Then play with the sliders and the curves (+ the blacks and whites clipping indicators).
    d. There are many more options you can play with for most of the colour adjustments; the flexibility on offer is almost overwhelming.
  7. In the panel that sits on the right, click the “Detail” button to adjust sharpening. The “Transform” button allows you to crop, straighten…
  8. Proceed to saving the file as a TIFF (floppy disk icon on the bottom bar, on the left of the main screen).
  9. Open this TIFF file in “Nikon Capture NX-D”
    a. do the dust and scratches,
    b. do the final adjustments to your levels (e.g. with RGB curves, …)
    c. convert to JPG.
    d. post online, get :+1: :heart: :heart_eyes_cat: and comments, be sleep-deprived because of it.
    e. shoot more fresh colour negative film.

out-of-scope EDIT: seems I can’t figure out how to export from Nikon Adobe RGB to sRGB in Capture NX-D :angry:

If you want the widest gamut possible when transferring the colors, don't go for anything less than ProPhoto. In particular, if you pick sRGB or something similar, you restrict your gamut on export and lose color information for the editing in NX-D.
I just tested NX-D and it correctly recognizes ProPhoto RGB as embedded profile.
image
So, in RT set things up like this:
image

P.S. Even though I only needed NX-D for less than a minute, I was vividly reminded what crappy software it is :weary: You know you can do retouching in darktable as well?

Understood, thanks again and thanks for testing, @Thanatomanic. I apologise for the vivid memories it brought back.

I’ll give this a try. I know Nikon Capture NX-D is crappy :slight_smile:
The reason I still have it is for the occasional digital shot I do; since “Nikon Custom Picture Control” is part of my workflow, I like to be able to replicate it exactly as it’s set in-camera.
Anyway, off topic :smiley: sorry for that.

Honestly, I haven't really tried my luck in darktable (apart from the film module, which I can't use as nicely as RT's). I'll see what I can do there!

BTW, is this thread the only one that discusses this module, overall?

No, here are some other threads that I know of:

alberto


OK, @rom9, @Thanatomanic, I've been playing with colour negative inversion (mostly on Fuji Superia 200 and some Kodak Gold 200) for a few weeks now.

Again, thanks for your initial responses that were very helpful to get me started.

I have more questions, if you don't mind, as I find it difficult to understand where things stand simply by following all the posts here.

  • film rebate or frame border:

    1. Is it necessary to have one for the inversion to work “properly”? How clean should it be (do I need to crop a bit to avoid unexpected darks)?
    2. How colour-neutral can it be? Is the tool suited for scans that are balanced before acquisition (i.e. with proper lighting, be it dichroic, RGB, …)? If the border is completely balanced (i.e. almost clipped white), will this cause issues?
  • about the “pick neutral spots” button (that “dual-spot white balance”)
    image

    1. I'm not sure I understand why it is supposed to work (and not cause colour shifts) if the film rebate is sampled as the neutral dark. While it can be black (and it's better than having nothing to compare to), it's not as reliable as a gray colour that's in the picture itself, i.e. something that actually has density. ==> is it still true that you would recommend sampling the film border?
    2. What about the neutral white? Why would it have to be dense? Why not something just a bit lighter than the previous spot?
  • DCP question: When it comes to camera profile, I’m usually more than happy with the settings you’ve suggested, i.e. image
    but in some cases, for colours I know (or I have a “reference” of, as in an iPhone shot), I’m struggling a bit.

    1. I want to be sure nothing “distorts” the channels relative to one another, such as baked-in curves (hello, Nikon Custom Picture Control “landscape”, “neutral”, “portrait”, “flat”, “vivid”) ==> is it then really safe to tick all the boxes?
  • Exposure correction: what sliders / curves / modes would you recommend for further processing? Exposure / blacks and then Lab? Or mostly tone curves within the exposure section (and which mode, btw)?

This is all with colour rendition in mind.

Thanks!

Hi :slight_smile:

There is no need to crop, since the calculation works on the entire raw image (before cropping is applied).
With the current 5.8 release, you should try to frame the negative closely, with only small borders around the actual negative image.
If you build the current RT dev branch, instead, there is a new “Film Base” button that lets you pick an unexposed part of the negative so that the calculation is not influenced by the border anymore.

No issues at all, that's even better :wink:

Totally agree. If you have two spots in the negative that you know were neutral gray in reality, by all means pick those (the typical example would be a picture of a Macbeth chart or some other reference target). But when you don't have a dark gray or black spot available, the unexposed border should work well enough.

That would work too. Since we are estimating a curve from two points, choosing distant values should (in theory) give a better estimate.

Unfortunately I have no idea… I've simply noticed that if I deselect “Look Table” with my Sony profiles, I always get a cyan sky or other weird colors. And similar things happen with other profiles for different camera models. Tone Curve should not (if I understand correctly) alter the colors, but just the contrast.
I don't know about Base Table, though, since my profiles don't have it. Judging from the checkbox tooltip (“HueSatMap”), it might boost some specific hues? That would be bad… you can try disabling that one.
Sorry, I know nothing about color profiles, so my approach is just trial and error… :slight_smile:

Typically I just use exposure and tone curves (either Standard or Film-like modes) and nothing more. But I'm not a photographer, so I'm not very skilled at judging the result :smiley:

alberto

@rom9,
hi and thanks for your quick feedback.

EDIT: the screenshots are yuck, way over the top

to crop or…

  1. OK for the crop.
  2. Now, what if the captured image does not have any border? How necessary/mandatory is the film base in the calculation? I am asking because in some cases, due to light bleeding through the sprocket holes into the film base, it is very desirable to mask out everything that is not the image being scanned, in order to avoid nasty bleed into the image itself.

estimating curves

  • are we estimating a curve or merely a slope in the log domain?

contrast (Reference exponent adjustment)

I had totally forgotten to ask this. From a signal processing pipeline standpoint,

  • should I strive to maximise the contrast (Reference Exponent) in the Film Module itself?
  • or should I not even bother and use low values, such as 1.0 instead of 1.5, and do my contrast and exposure with the exposure section?

example:
A. The film module inverts and gives me a bit too much contrast once the final WB is achieved. Should I reduce the Reference exponent value, or should I play with the exposure / highlight and shadow compression sliders instead?
B. The film module inverts and gives me a very conservatively stretched histogram once the final WB is achieved. Should I then increase the Reference exponent value in order to adequately stretch the histogram, or should I just do it with the curves' minimum and maximum levels?

exposure sliders

  • OK, I'll spend some time reading how the sliders work :smiley: as I don't find the exposure and blacks combination very intuitive.

DCP settings

  • :frowning_face: too bad you don't know. Maybe the people involved in this thread would know? Do you know anyone in there? :smiley: Also found this.

  • I honestly abhor the trial-and-error approach for this kind of signal-processing issue; I need to grasp the compromises I make when I activate this or that.
    But, just so you see what irritates my eye, here is an obvious case of dull reds. The reds are not the only hues affected but they’re the most obvious in this picture.

  • how I adjusted the neutral blacks, neutral whites and white balance:
    image

Here’s the folder containing the .NEF and .pp3 files.

One additional example: look at the paint of the car and at the tail lights.

==> I am not saying it's “perfect” without the look table but… here is, for what it is worth, a slightly contrast-boosted iPhone shot taken at the same moment: https://www.flickr.com/gp/franekn/yK3oZL

Reference:

:arrow_right: does anybody know the “advanced uses” mentioned here? :grin:

Thanks for reading and, in advance, thanks for your help / pointers.

Side note that I want to appear clearly (and not just as an edit to a post):

  • here’s a very interesting article on tri-color scanning
  • and here's an interesting thread with pointers to Sony A7 raw files containing our favourite passport :smiley:

That's perfectly fine, borders are absolutely not necessary (in fact, I also cover the sprocket holes myself, because of light bleeding). If you build your own DIY film holder, you can cut it a bit wider than the negative frame, in order to include the tiny slice of unexposed film between frames (16 pixels would be enough), so that you have an unexposed spot handy in every image.
On the other hand, you should not include much of the film holder itself: being completely dark, its reciprocal tends to infinity, and those huge values alter the channel medians that are used to calculate the final multipliers. So, the more film holder you include in the picture, the more you will have to compensate manually afterwards.
With the dev branch of RT, the Film Base sampling is persisted in the processing profile, so you can even take a shot of an empty space at the start of the film, sample the unexposed spot from that, and then apply the same film base values to the rest of the roll. This also makes the calculation insensitive to the dark film holder portion.
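
To picture why this happens, here is a tiny toy sketch in Python (not RT's actual code; the exponent, raw levels and pixel counts are all made up): nearly black pixels invert to huge values, which drag the channel median, and therefore the computed multiplier, upwards.

import numpy as np

exponent  = 1.5    # "reference exponent" (contrast), made-up value
film_base = 0.80   # raw level of the unexposed film base, made-up value

def invert(raw):
    # reciprocal-style inversion: the darker the raw value, the larger the positive
    return (film_base / raw) ** exponent

frame  = np.random.uniform(0.05, 0.70, 10_000)   # plausible densities inside the frame
holder = np.random.uniform(0.001, 0.005, 2_000)  # nearly black film holder pixels

print(np.median(invert(frame)))                            # a sensible median
print(np.median(invert(np.concatenate([frame, holder]))))  # median dragged upwards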

Sorry, I don't know the correct mathematical terms; we know that the function is a simple exponentiation, so the procedure calculates the correct exponent starting from two known points.
If the two values are very close to each other, the signal-to-noise ratio (where the “noise” could come from film grain, imperfections, uneven backlight, etc.) would be worse, so I think it is preferable to choose more distant values, if available.
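
To make the idea a bit more concrete, here is a simplified sketch (not the actual RT code; the reference exponent and the spot values are made up). In the log domain the inversion out = m * in^(-k) is a straight line, so asking that both picked spots come out equally gray in every channel ties each channel's slope k to the green reference exponent:

import math

ref_exp = 1.5   # green "reference exponent", made-up value
# raw channel values of the two picked neutral spots, made-up numbers
spot1 = {'r': 0.55, 'g': 0.40, 'b': 0.30}   # the lighter spot on the negative
spot2 = {'r': 0.25, 'g': 0.15, 'b': 0.10}   # the darker spot on the negative

def channel_exponent(ch):
    # ratio of log-domain slopes between this channel and the green reference
    return ref_exp * math.log(spot1['g'] / spot2['g']) / math.log(spot1[ch] / spot2[ch])

print({ch: round(channel_exponent(ch), 3) for ch in ('r', 'g', 'b')})

This also shows why two very close values are fragile: the logarithm of a ratio near 1 is tiny, so a little grain or uneven backlight in either spot moves the estimated exponent a lot.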

Definitely not. The Reference Exponent should only be increased to get an additional contrast boost when you have a very old, faded negative, and the output is so low-contrast that it is difficult to manage with the standard controls. The film negative tool only applies a simple exponentiation without any “smoothing” at the extremes, so if you maximise contrast with it, you might get very noisy highlights. Moreover, keep in mind that it works before demosaic, so boosting the raw data too much could also affect the demosaic algorithm.
The usual contrast and tone curve controls, instead, happen after demosaic and already do a wonderful job of smoothing out the highlights.
So, your “B” example is definitely the way to go :wink:

I think this might explain the problem. My camera profile doesn't have a base table (the checkbox is grayed out), so when I tried disabling the look table, I got bad results, as predicted above. This led me to the wrong conclusion that the look table must be enabled.
Instead, it turns out that, when a camera profile provides both, it might work better with the look table disabled. I tried your raw files and agree that those reds are way better.

Anyway, I doubt that this tool will ever produce perfectly accurate results, since it works by altering sensor data, before the camera profile is applied.
If you want more accuracy, maybe you could have more luck by processing your raw image normally, producing a (negative) output TIFF, and then opening the TIFF file and applying the film negative tool to that, instead of the raw file.
This is not yet supported in the current release, but is in the works on a separate branch.


Hi @nonophuran, sorry for the delay. Here's an example of what I was referring to in my previous post: processing the raw file to a negative TIFF (with “Camera Standard” as input profile), and then inverting the TIFF (using this experimental RT branch):
audi_tone_camstd_nonraw

Note how the yellow licence plate looks more similar to the iPhone shot, although the red shade is still not the same.

Regarding the Medium article about tri-color scanning, thanks for the link, it is very interesting indeed. I had to try it myself :smiley:
I tried using one of those cheap RGB bulbs, taking three separate shots with red, green and blue light respectively.
Then I created a composite, “fake” raw file where, for each corresponding pixel, I kept the larger value among the three images (this should be analogous to what “Lighten” does in the Medium article).
This is easily done with existing tools. Let's say we have three raw files: test_r.ARW, test_g.ARW and test_b.ARW.

Extract each raw file to a grayscale, linear tiff, without demosaic or colorspace conversion:

dcraw -v -T -o 0 -4 -H 1 -d test_{r,g,b}.ARW

Combine the 3 tiffs and keep the larger value for each pixel:

convert test_{r,g,b}.tiff -grayscale Rec709Luminance \
  -evaluate-sequence Max test_comp.tiff

Now rename the composite tiff to dng, and add some metadata to mark it as a mosaic, RGGB raw file:

mv test_comp.tiff test_comp.dng

exiftool -DNGVersion=1.1.0.0 \
  -PhotometricInterpretation="Color Filter Array" \
  -IFD0:CFAPattern2="0 1 1 2" \
  -IFD0:CFARepeatPatternDim="2 2" \
  test_comp.dng

At this point, RT can open this DNG as a normal raw file from an unknown camera model.

So, first of all, to check whether it could work, I tested with a standard, positive picture of a color target (no film negative involved). I took three separate R, G, B shots, plus the same picture with a normal xenon flash for comparison.

I used a linear sRGB input profile (this one) for both the composite DNG and the flash picture to make sure colors were treated the same, despite different metadata. This is the result (RGB composite on the left, flash on the right):

The composite DNG is much more saturated, but besides that, there doesn’t seem to be any huge color deviation, which is good news.

So, I've tested with an actual negative. Here is an example with Kodak Portra 800 (left: RGB composite, right: xenon flash backlight):

In order to make a fair comparison, both pictures are processed using Linear sRGB as input profile.
Even if I try to boost Saturation and Chromaticity in the right picture, trying to match the colors on the left, the result is never as good (note especially the yellow and yellow-green patches):

image

Here's a real-world example, Kodak ColorPlus 200 (please excuse the artistic quality, I'm not a photographer :-D).

This is also processed with Linear sRGB as input profile, and just some tone curves and chromaticity boost.
My impression is that using this method, it’s more straightforward to get a good result. Using a white backlight, it is also possible to achieve a comparable result, but in some cases it may require more tweaking. The obvious disadvantage is having to take 3 shots each time, and being more sensitive to vibration: if one of the shots is slightly offset… bye bye demosaic!

Anyway… maybe you could try this method on that Audi TT picture, and see what comes out… :wink:

alberto


? monochrome ?

Well, that would require different processing in the filmneg tool, which currently only supports mosaiced raw files… the “nonraw” branch will fix that, though, as we can process the three components first and then merge them as non-raw images.
Anyway, vibrations could still be an issue even if we demosaic the individual components first. To preserve sharpness, the collimation of the component images is critical.

@rom9

My answer was not to be taken seriously!

… ooops :rofl:

Hi Rom9,
First of all, let me thank you for developing such a great feature! I have a couple of years of experience scanning negative film using different programs (Silverfast, Vuescan, NegativeLabPro) and I must say I'm now able to get THE BEST results using your code, as opposed to those other closed-source, paid programs. I was really in awe when I saw the colors I could get out of my color negs using RawTherapee; it was literally the second time I saw them correctly (the first time was with my own eyes, when I shot the frames…).
You really hit the nail on the head with those formulas, and the idea of obtaining the correct coefficients from two shades of gray on the film is very cool.

That being said, I must admit that I only get such good results when RT is paired with Photoshop.
If we could solve that last part, it would be a complete solution.
Let me explain what I mean.
First of all, I've manually built the latest code you have on your personal GitHub, so I'm working with the latest version, the one that works past demosaicing, has film base sampling, etc.

The problem is with subtracting the film base: it still never gives me a real black point, and this makes it impossible to achieve correct-looking results via RawTherapee's usual controls (Exposure, Black, etc.). For example, if I push Black too far, I see that it also dulls the other colors; it's not a pure black-level adjustment. And if I push Exposure too far to the left, the image becomes too dark.

Let me illustrate this with pictures (please forgive the dull subject matter…). It was shot on Fuji Pro 400H and digitized with a Nikon D750.
Here's the unedited version from RawTherapee with negative inversion applied and the film base point picked:


If we open it in Levels in PS (Ctrl-L) and check the color channels, we'll see that there's plenty of empty space on the left, and that different channels have different amounts of it, so simply dragging Exposure/Black to the left in RawTherapee will not fix the color cast!

Now, if in PS I edit each channel individually by dragging the left pointer to the left edge of that channel's histogram (each channel needs a different amount!), save it to TIFF, and then apply exposure/WB corrections in RawTherapee, I get this almost ideal version, which looks just how my eyes saw the scene:

But imagine we don't have PS. RawTherapee doesn't make it easy to clip the histogram in this way: there are controls, but you don't see the histogram while using them, so it becomes hit and miss, whereas in PS you just dumbly drag the left slider until it reaches the left edge of the histogram. So if we use the RawTherapee conversion alone, with the film base applied, and dial down Exposure/Black, we get this much duller version. It may look OK until you see the potential that was obtained from the same source in PS with a purely mechanical action.
So, the pure RT variant:

I see two problems with the last picture (apart from it being a bad photo):

  1. No black is really black.
  2. I cannot bring it down with the Black slider any more than it currently is, because that would make the car's rear lights very discolored and the whole look overly dark.

So my proposition is this: what if we

  1. Remove the film base picker altogether.
  2. Instead, after you calculate the inverted pixels, just do what I'm doing manually in PS: find the leftmost non-zero point in each channel's histogram, subtract it from that channel's data, and then rescale the channel so that its maximum level stays the same as before the subtraction. This is what Photoshop does; I checked it today with a pre-generated picture and then inspected the processed data (see the rough sketch after this list).
  3. The reality is that the current film base picker is very dependent on each picture and on the contrast (green exponent) used for each particular frame. The approach I'm proposing gets rid of that ever-changing factor. I've processed about 60 frames this way over the last few days, so I'm certain it works better.
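
Here is a rough Python sketch of what I mean (illustration only; the clip quantile and the epsilon are made up, and I use a small quantile instead of the strict leftmost non-zero bin just to ignore stray pixels):

import numpy as np

def auto_black_point(img, clip=1e-4):
    # img: inverted positive as floats in [0, 1], shape (height, width, 3)
    out = np.empty_like(img)
    for c in range(3):
        ch = img[..., c]
        black = np.quantile(ch, clip)   # left edge of this channel's histogram
        white = ch.max()
        # subtract the channel's own offset, then rescale so its maximum stays put
        out[..., c] = np.clip((ch - black) * white / max(white - black, 1e-9), 0.0, 1.0)
    return out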

I would be interested in hearing your thoughts, and thanks again for the feature! Even if you don't implement what I propose, I can still work around it with Photoshop post-processing; it's just more manual effort.

P.S. Saved the source negative NEF here:
https://www.dropbox.com/s/1kd7gyq2lczr8kl/Dslr0329.NEF?dl=0


Hello @Ilya_Palopezhentsev and welcome! Thank you for testing :slight_smile:

If I understand correctly, it is certainly possible to do level adjustments in RT: just set a straight segment in “Tone Curve 1”, like this:

With the endpoints at the desired min and max values of the histogram; this should do exactly what you need :slight_smile:

One question: when you say

what repo/branch do you mean exactly? Because if you used rom9/dev, it is terribly out of date, since I forgot to push to it, sorry :smiley:

The latest film negative version is in the official upstream repo, dev branch.
With that, results should be pretty stable among different frames, provided the backlight and camera settings stay the same.

Edit: I've just pushed to my personal repo as well, just in case :smiley:


Hi rom9,

  1. You see, the curve you are showing affects all channels in an equal manner, but my point is that after inversion the different channels have different offsets from zero, and that's what I'm resetting in PS, in each channel individually. If this isn't done, it leaves a color cast which then gets in the way of all the RT manipulations. In this particular picture it is not extreme, but I've seen cases where the difference in offsets between the channels is quite prominent.
  2. I used the filmneg_nonraw branch from your personal GitHub, is that wrong? It had commits from April…
  3. What do you think about my proposal to automate this? It would mean not having to bother with selecting the film base or working with curves, and anyway, as I said, RT doesn't yet have a convenient per-channel curve editor with an overlaid histogram.

Oh, OK, I see. The filmneg_nonraw branch is indeed up to date, but the Film Base picker doesn't work in that version (sorry about that, basically I made a mess in the code while hacking together a solution for TIFF files…).
Also, keep in mind that, if you give it a raw file, that version still does its processing before demosaic, as usual.
So, if you digitize your negatives using camera raw files, please build the official upstream dev branch: the film base picker should work correctly and “anchor” the picked spot to black, no matter what reference exponent (green) you choose.