Any interest in a "film negative" feature in RT ?

No, it should be the same; actually, that’s what I was doing with the G’mic scripts above, and it worked. Hmm, interesting… I could try to implement the feature, but limited to linear TIFFs; that way I would avoid entering the gamma rabbit hole and still make a bunch of people happy. I guess scanner users should also be able to set their software to output linear TIFFs.
Good idea, thanks, I’ll try :slight_smile:

@matt_jones @Entropy512 : sorry for the delay, I didn’t forget about the multiplier issue we were discussing above. In fact, I’ve just created this PR, which adds a new button to compute multipliers from a crop. Here are a couple of examples of the results.
First of all, large dark borders around the negative don’t affect the calculations anymore, so the interesting part of the picture can be centered in the histogram without having to set extreme values of exposure and WB:

Moreover, reference values are saved in the .pp3, so that we can paste the same processing profile on all pictures from the same roll, and get pretty consistent results:

(… except for the 3 night shots under sodium light; I was using normal daylight film, so I guess it’s normal that they come out a bit too reddish)

Alberto

3 Likes

Excellent! I’ll give it a burl later this week.

:+1:

Hi Rom9,

Not sure what I am doing wrong. I compiled V5.7-135-g911cc85 and loaded up that pram negative, and it still gives me the same results with or without a crop border. I can manually move Ref exponent, Red Ratio and Blue Ratio to get a reasonable image, but I think it’s meant to calculate automatically? Any help appreciated. :slight_smile:

Hello matt_jones,
sorry, I didn’t specify it explicitly: this is still a PR; it isn’t merged into dev yet.

If you want to try it before it is merged, you can build my fork of RawTherapee by running the following commands from your home directory.

This makes a backup of the original source directory:

mv programs/code-rawtherapee programs/ORIG_code-rawtherapee

Then download my fork from GitHub:

git clone "https://github.com/rom9/RawTherapee.git" programs/code-rawtherapee

Switch to the branch that contains this feature:

cd programs/code-rawtherapee
git checkout filmneg_stable_mults
cd ../..

Finally, run the build-rawtherapee script with the -b option, which performs just the build, without updating the source from the upstream repo:

./build-rawtherapee -b

Then run the compiled executable; its path is reported in the last line printed on the console by the build script. It should be a path like
~/programs/rawtherapee-filmneg_stable_mults-release/rawtherapee

You should now see the new “Get ratios from crop” button in the film negative tool :slight_smile:

To go back to the original upstream source code, just delete the code-rawtherapee directory, rename ORIG_code-rawtherapee back to code-rawtherapee, and run the build script as usual.

Alberto

PS: I forgot a small but important piece of advice: if you plan to digitize an entire roll, please do not use your camera’s AWB (auto white balance) setting when photographing the negatives; instead, use a fixed WB setting for all pictures (daylight, tungsten, any setting will do as long as it stays constant across all shots).
This is because the channel multipliers chosen by the camera affect processing in RT, so with AWB these multipliers will differ slightly between pictures and produce noticeable shifts in the output, which then need to be compensated for by re-balancing each individual picture.
I’m trying to reverse this effect to make it transparent to the user, but I haven’t figured out a way to do it yet, so for now just remember to select a fixed WB setting and you’ll be fine :wink:
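As an aside, a toy numeric sketch of the problem (the numbers and helper functions are made up for illustration; this is not RT’s code): dividing out the camera’s own multipliers and applying one fixed reference set makes all frames land on the same balance, which is what a fix would have to achieve.

```python
# Toy sketch (made-up numbers and helper names, not RT code): why per-frame
# AWB multipliers shift the output, and how normalizing to one fixed
# reference set removes the shift.

def apply_wb(rgb, mults):
    """Scale raw channel values by per-channel WB multipliers."""
    return tuple(v * m for v, m in zip(rgb, mults))

def normalize_wb(rgb, camera_mults, reference_mults):
    """Undo the camera's multipliers, then apply one fixed reference set,
    so every frame ends up with the same channel balance."""
    return tuple(v / c * r for v, c, r in zip(rgb, camera_mults, reference_mults))

raw = (0.40, 0.50, 0.30)          # the same raw color seen in two frames
awb_frame1 = (2.0, 1.0, 1.5)      # multipliers AWB picked for frame 1
awb_frame2 = (2.2, 1.0, 1.3)      # slightly different multipliers for frame 2

# Same raw color, different AWB multipliers -> different output:
assert apply_wb(raw, awb_frame1) != apply_wb(raw, awb_frame2)

# After normalizing both frames to one fixed reference, they match again:
ref = (2.1, 1.0, 1.4)
a = normalize_wb(apply_wb(raw, awb_frame1), awb_frame1, ref)
b = normalize_wb(apply_wb(raw, awb_frame2), awb_frame2, ref)
assert all(abs(x - y) < 1e-9 for x, y in zip(a, b))
```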

Or, more simply, assuming you have already cloned the repo or run the build-rawtherapee script in the past:

cd ~/programs/code-rawtherapee/
git checkout dev
git pull
git checkout -b rom9-filmneg_stable_mults dev
git pull https://github.com/rom9/RawTherapee.git filmneg_stable_mults
# git will ask you to merge; commit the merge
./tools/build-rawtherapee -b

Hi all,
in the latest update of my PR (which you can try by following the instructions above), I’ve removed the “Get multipliers from crop” functionality and replaced it with “Pick film base color”, similar in concept to the “color of film material” setting found in Darktable’s “invert” module.
This way there’s no need to set any crop: you just click the button and pick a spot of unexposed film. The spot you pick will become the black level in the converted image, and will act as an “anchor point” for the exponent calculation. This means that if you change any of the exponent sliders, the black level will stay exactly the same, while the more exposed areas will change.
Moreover, the film base spot will be used to “pre-balance” the channels, so that the WB won’t have to be set to crazy values.
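The anchor-point behaviour can be sketched with a toy inversion formula (the formula and all numbers here are illustrative only, not the ones actually used in the PR):

```python
# Toy model of the anchor point (illustrative formula, not the PR's code).

def invert(v, base, exponent, black=0.02):
    """Toy negative inversion: the picked film-base value `base` maps to the
    fixed black level for ANY exponent, i.e. it acts as the anchor point."""
    return black * (base / v) ** exponent

# Hypothetical raw value of the unexposed film spot
# (the brightest area on a backlit negative):
base = 0.90

# The anchor holds for every exponent choice:
for exp in (1.5, 2.0, 3.0):
    assert abs(invert(base, base, exp) - 0.02) < 1e-12

# More exposed areas (denser film, darker when backlit) DO respond
# to the exponent:
assert invert(0.30, base, 1.5) < invert(0.30, base, 3.0)
```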

Note: if you don’t have a spot of unexposed film in your image (as in the pram sample above), you can simply pick the darkest spot in your scene (i.e. the least exposed spot on the negative) and then slightly compensate the exposure. In the pram sample, this could be a shadow spot on the concrete, or the dark blue cloth inside the pram.

Anyway, I strongly recommend taking at least one shot that includes some empty, unexposed film. You can take just one and then copy/paste the same parameters onto all subsequent images, as long as you keep the same backlight and camera exposure settings.
An easy way to do that is to digitize the first (or last) frame in a roll with the film “shifted” inside the holder, so that the beginning (or end) of the film is visible:

Last but not least, the problem with the AWB camera setting described in my previous post is now solved :wink:. The output channel balance is now normalized to a fixed, known color temperature, regardless of the camera WB setting.

Alberto

6 Likes

Hi!
I’m trying to use this tool to process images which are later stitched into one image (there are 9 overlapping shots for one negative frame).
I’ve built the version from the filmneg_stable_mults branch as per the instructions above, which has the “Pick film base color” button.

The problem is that the results seem to be inconsistent, even if I copy/paste the processing profile across all files. I’ve even created a simple script which verifies that all pp3 files have the same contents.
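Such a check can be as simple as comparing content hashes. A minimal sketch (my own hypothetical version, not the poster’s actual script):

```python
# Hypothetical sketch (not the poster's actual script): verify that all
# .pp3 sidecar files have byte-identical contents by comparing hashes.
import hashlib
import tempfile
from pathlib import Path

def all_identical(paths):
    """True if every file in `paths` has byte-identical contents."""
    hashes = {hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}
    return len(hashes) == 1

# Demo on throwaway files (placeholder contents, not real pp3 data):
with tempfile.TemporaryDirectory() as d:
    a = Path(d, "01.dng.pp3"); a.write_text("same content\n")
    b = Path(d, "02.dng.pp3"); b.write_text("same content\n")
    c = Path(d, "03.dng.pp3"); c.write_text("different content\n")
    assert all_identical([a, b])          # identical sidecars pass
    assert not all_identical([a, b, c])   # one differing file is caught
```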

When I try to stitch images processed by RT, it’s clearly visible that colors of the sky don’t match where they should.

I’ve noticed that thumbnails look more consistent when stitched.

Is thumbnail processing different from full-image processing? Am I missing something here, or is RT maybe not well suited to my use case?

Hard for us to tell if you don’t post your pp3 file.

Here’s my pp3 file (it’s exactly the same for all 9 images):
01_1c_20190412-212852-0000.dng.pp3 (11.6 KB)

1 Like

Hi @mrock, and thanks for testing! :slight_smile:
This is pretty weird… can you try again with Chromatic Aberration auto-correction disabled in the Raw panel? Although it seems strange that it would have such a strong impact…
If that doesn’t work, could you post two of the overlapping raw files via filebin.net, so I can take a look at the values?

It is indeed on a different code path, but just because of incompatible data formats. Conceptually, the same processing should be applied to both images.

alberto

1 Like

Hi @rom9, I’ve just tried with Chromatic Aberration auto-correction disabled, but the results are the same as before, as far as I can tell.
I’ve uploaded raw files here: https://filebin.net/l4fbf1l8oioeubid

I can’t read all 100+ posts here prior to posting, nor can I easily search for the posts related to VueScan using browser search, but I want to pipe up after reading the first ~25 posts concerning film color filtering. VueScan has a method of taking/cropping/selecting a piece of the orange film mask and calculating an RGB value for making the orange film mask transparent. This RGB value is then applied to the entire scanned negative or film strip.

This is a far better method, with far better color, than trying to find a black or white balance spot in a not-so-perfect world! The VueScan method kills two birds with one stone (sorry, tweeters): 1) it finds the film mask, which is usually film-vendor specific, not accounting for aging of the film; 2) once this is done, the white/black points are almost automatically set.
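As I understand it, the core of this kind of mask removal is just dividing each pixel by the sampled mask colour. A toy sketch (my own illustration with made-up numbers, not VueScan’s actual algorithm):

```python
# Toy illustration (made-up numbers, not VueScan's actual algorithm):
# dividing every pixel by the sampled orange-mask color makes the mask
# itself neutral, removing the orange cast before inversion.

def remove_mask(rgb, mask_rgb):
    """Divide each channel by the sampled mask color."""
    return tuple(v / m for v, m in zip(rgb, mask_rgb))

mask = (0.80, 0.45, 0.20)   # hypothetical sampled orange-mask RGB

# The sampled mask spot becomes perfectly neutral:
assert remove_mask(mask, mask) == (1.0, 1.0, 1.0)

# Denser (more exposed) areas stay below the mask level in every channel:
assert all(v < 1.0 for v in remove_mask((0.40, 0.30, 0.15), mask))
```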

@mrock You’re right, there is indeed a difference in channel multipliers between the two pictures. RT does a sort of auto-gain at the very start of processing, when calculating the channel coefficients, by averaging the whole picture (this does not happen in thumbnail processing, hence the different behaviour).
I’ve taken some measures in the film negative code to reverse the effect of this auto-gain, and it seemed to always work, but… in this specific case, the varying amount of border around the picture (corner vs. middle image) shifts the channel average by a lot, and my calculation isn’t accurate enough. I had never thought about this use case; I’ve always tested with similarly-framed shots.
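A toy illustration of the effect (made-up numbers and a deliberately simplified gain rule, not RT’s actual code):

```python
# Toy illustration (made-up numbers, simplified gain rule, not RT code):
# a gain derived from the whole-image average moves when a dark border
# occupies a varying fraction of the frame.

def auto_gain(pixels):
    """Simplified auto-gain: scale factor that would bring the mean to 0.5."""
    mean = sum(pixels) / len(pixels)
    return 0.5 / mean

scene = [0.4, 0.5, 0.6] * 100    # the picture content itself
border = [0.02] * 120            # dark film-holder border pixels

gain_centre = auto_gain(scene)            # frame filled by the picture
gain_corner = auto_gain(scene + border)   # same picture plus a large dark border

# The border drags the average down, so the computed gain goes up,
# shifting the two frames apart even with identical pp3 settings:
assert gain_corner > gain_centre
```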

In fact, if I comment out the auto-gain feature in the RT code, I get a nearly perfect match:


but I can’t solve it this way, because that would affect all other, normal non-film-negative processing.

Anyway, I think Hugin should be able to deal with that gap automatically when it does the stitching, so you should get a good result anyway.
In the meantime, I’ll try to figure out a more robust way to reverse the auto-gain. I have some ideas I’ll test tomorrow.
Thanks again for testing & reporting! :wink:

2 Likes

Where is this ‘auto gain’ located, if I may ask?

Here:

(note the last argument set to true). It has caused me sooo much headache and sleep deprivation… :sob:

Hmm I’ll have to take a look at that this weekend.

Maybe it’s something that needs a conditional that disables it when the negative inversion tool is enabled, or needs to be exposed in the white balance UI?

Unfortunately, it happens before preprocessing, in the load() method, and params aren’t yet available at that stage.
So I’m afraid the only reasonable solution is to “undo” the multiplier after the fact…
I’m open to suggestions, though :slight_smile:

Ping @heckflosse who maybe has some insight into the inner workings of RT here. ri->get_colorsCoeff seems like a pretty messy function to be calling twice and for non-obvious reasons. I don’t know why white balancing before demosaicing would be beneficial / necessary.

It’s said to give better results for CA correction and demosaicing, especially if the camera WB is off (for example on UniWB shots).