Any interest in a "film negative" feature in RT ?

Unfortunately, it happens before preprocessing, in the load() method, and params aren’t yet available at that stage.
So I’m afraid the only reasonable solution is to “undo” the multiplier after the fact…
I’m open to suggestions, though :slight_smile:

Ping @heckflosse who maybe has some insight into the inner workings of RT here. ri->get_colorsCoeff seems like a pretty messy function to be calling twice and for non-obvious reasons. I don’t know why white balancing before demosaicing would be beneficial / necessary.

It’s said to give better results for CA correction and demosaicing, especially if the camera WB is off (for example, on UniWB shots)

I understand the comment, but I wonder why. Forgive me for going off-topic here, but is there a reason why demosaicing would require a prescaling of the RGB values?
Also, the code has been untouched since 2015. Can we be sure this still applies? (I can do some tests)

Ping @Carmelo_DrRaw
IIRC he also did some tests, and PhotoFlow also applies WB before demosaicing

We had a discussion about this many years ago (in the old RT forum)… as far as I remember, it was Emil Martinec (the author of the AMaZE demosaicer) who said that all demosaicers work better with WB-prescaled values. So he chose to use the as_shot WB from the EXIF data.
Then came a test by Elle where RT had problems because the RAW sample used was a “UniWB” shot, so the as_shot WB was way off. Dcraw calculates the WB and had no problems…
The solution by Anders Torger was to use Dcraw’s pre-demosaic WB, and that is how it stands. Because he (we) was not exactly sure this is 100% safe, he opted to leave this boolean parameter to make it easy to toggle between calculated and as_shot WB coefficients.

The fact is that we see a lot of cases where the calculation of pre-demosaic WB fails when the raw frame includes a lot of unreliable data, e.g. when there is strong barrel distortion / vignetting or a lot of clipped highlights… and now we have one more case with the film’s borders :smile:

I think that an optional alternative (say, use EXIF data or user input) would help… but it would be better to find a clever way to detect the unreliable data and reject it from the calculations :wink:

@heckflosse Ingo… what if we use the median instead of the average? Hasn’t this worked fine for the pixel shift normalizations?
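To illustrate the median-vs-average idea, here is a quick sketch (not RT code, just a toy example with made-up numbers): when a channel’s normalization level is estimated from data containing unreliable samples (clipped highlights, film borders, vignetted corners), the average gets dragged toward the outliers while the median stays with the bulk of the data.

```python
import numpy as np

rng = np.random.default_rng(42)
reliable = rng.normal(1000.0, 50.0, size=10_000)  # plausible pixel values
clipped = np.full(500, 65535.0)                   # clipped highlights
samples = np.concatenate([reliable, clipped])

print(f"average: {samples.mean():.0f}")      # badly skewed by the outliers
print(f"median:  {np.median(samples):.0f}")  # stays near the true ~1000 level
```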


Related issue: Using "as_shot" WB in early stages instead of current WB results in artifacts when WB difference is large · Issue #2043 · Beep6581/RawTherapee · GitHub


Sorry for the delay, I’ve just pushed a change to the PR that should fix the problem.

@mrock: please, can you pull &amp; build the latest filmneg_stable_mults and try again? Results should be much more consistent now:

@Entropy512 @ilias_giarimis @heckflosse @Thanatomanic
I myself am totally not convinced by this change. It’s extremely ugly, but it has the advantage of being limited to the film negative code, so we have the guarantee that nothing will break when the feature is not used.
The fix involves re-setting the scaling multipliers, so that downstream processing behaves as if the auto-WB never happened. I had to use some obscure tricks to compensate the calculations for this change, and mutating the multipliers on the fly is hideous in itself.
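Conceptually, the “undo” amounts to something like this minimal sketch (hypothetical Python, not RT’s actual C++ code): if an auto-WB step has already multiplied each channel by a coefficient, dividing the same coefficients back out means every downstream stage sees values as if the auto-WB had never happened.

```python
# Hypothetical coefficients applied by an earlier auto-WB step in load()
auto_wb = {"r": 2.1, "g": 1.0, "b": 1.6}

def undo_wb(scaled_rgb):
    """Divide out the auto-WB coefficients applied earlier."""
    return tuple(v / auto_wb[c] for v, c in zip(scaled_rgb, "rgb"))

original = (100.0, 100.0, 100.0)
scaled = tuple(v * auto_wb[c] for v, c in zip(original, "rgb"))
print(undo_wb(scaled))  # back to ~(100.0, 100.0, 100.0), up to float rounding
```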

As a cleaner (but more risky) alternative, maybe we could move the code block with the get_colorsCoeff call from ::load into ::preprocess, and run it conditionally based on params. Maybe this could come in handy in other cases too?

What do you think? I can try to implement it, if you think it’s worth it.

alberto


Indeed, results are consistent now.

Great, thanks again for testing :wink:

@Entropy512 @ilias_giarimis @heckflosse @Thanatomanic
Regarding the call to get_colorsCoeff from preprocess, I meant something like this.
I haven’t created a PR yet; it’s just a quick test to show you what I meant. It seems to work…

Hi rom9,

Thanks a lot for your efforts. I’ve tested it in RT with a few hundred of my scanned negatives and I’m very pleased with the results.

However I have thousands of negatives to be scanned and my workflow is done usually in Darktable.

So I programmed a Python script with your gmic code to do the pre-processing automatically and then import the created TIFFs in DT. I’ve managed to eliminate problems with spots, sprocket holes, and other disturbing effects by applying some routines from Python’s cv2. The red and blue coefficients are also calculated automatically by finding the correct measurement spots. WB is quite good with your median approach.

My only problem is contrast and saturation. Most results are too flat. Experiments with gmic’s apply_gamma or apply_colors are not satisfying.

I’ve tried to interpret your RT code for the first slider of the negative film module, but I’m lost, as I have no knowledge of C++. I’d like to implement some function to do this automatically, or by applying a film-specific command line parameter.

What kind of parameters / formulas are changed by this slider?

Thanks in advance,
Guenther

Hi Guenther, thanks for testing :slight_smile:

don’t worry: neither have I ! :rofl:

The basic formulas (described in the Wikipedia article) are quite simple. In pseudo-code:

R = (R ^ rexp) * rmult
G = (G ^ gexp) * gmult
B = (B ^ bexp) * bmult

that is, each channel is raised to a different exponent. I’ve added the final multipliers just to adapt the output values to the acceptable range (0-65535).
Then, in order to make it easier to control the image contrast, I made it so that the first slider sets the “reference” exponent (which is applied to the Green channel), and the other two set the Red and Blue “ratio” to that reference. The actual Red and Blue exponents are “ratio” times the reference exponent. So the formulas become:

R = [R ^ (refexp * rratio)] * rmult
G = (G ^ refexp) * gmult
B = [B ^ (refexp * bratio)] * bmult

This way, by changing the reference exponent, all 3 channel exponents will be multiplied by the same factor, and the proportions between the 3 exponents will remain constant.
The result is that you change the image contrast without altering the colors.
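The formulas above can be transcribed directly into Python (a hypothetical helper, not RT’s actual implementation; sign conventions and normalization are simplified, with channel values assumed to be in (0, 1] before the power step):

```python
def invert_negative(r, g, b, refexp, rratio, bratio,
                    rmult=1.0, gmult=1.0, bmult=1.0):
    """Apply the per-channel exponents derived from the reference exponent
    and the Red/Blue ratios, then the output multipliers."""
    return (
        (r ** (refexp * rratio)) * rmult,
        (g ** refexp) * gmult,
        (b ** (refexp * bratio)) * bmult,
    )

# Changing only refexp scales all three exponents by the same factor,
# so the r:g:b exponent proportions (here 1.2 : 1 : 0.9) stay constant:
low  = invert_negative(0.5, 0.5, 0.5, refexp=2.0, rratio=1.2, bratio=0.9)
high = invert_negative(0.5, 0.5, 0.5, refexp=3.0, rratio=1.2, bratio=0.9)
print(low)   # contrast set by refexp=2.0
print(high)  # stronger contrast, same color balance
```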

Gmic’s apply_gamma should work, though.

… of course, there’s always rawtherapee-cli for your use-case :wink:

alberto

edit: to boost contrast in gmic, you can also add, for example:
-adjust_colors -20,20
at the end of the pipeline. The first argument decreases brightness by 20%, the second argument increases contrast by 20%.

Hi Alberto,

Thanks a lot!

The approach with the ratios works like a charm, and much better than gmic’s apply_colors, which I now use only at the end for some fine tuning of brightness and saturation.

rawtherapee-cli would also be a solution, but I have now put too much effort into my routines for finding the right spots for dynamics and WB to switch over. Over 99% of my scans are now right out of the box with no further need for WB adjustment. Of course the routines are tailored to my way of scanning, as I know where my unexposed areas are and so on…

Again thank you very much for your fast support.

My programming experience with C was almost 30 years ago, but maybe if I find some time I can implement film negative as a module in DT. It shouldn’t be too much work. That’s what I love about open source.

Guenther


Hi!
so I’m jumping in, after building your branch a couple of times these last days and trying to use it.

What I feel is that there’s a usability issue in the first place:
[screenshot: RawTherapee film negative point picker]

For instance, I take this neg at random (1024px resize of a V700 3600dpi scan), without even an unexposed border, and I am lost about where to pick the two points as instructed (neutral hue and different brightness???)

But if I just scan as positive with VueScan without color adjustments, pick the most suitable kind of WB (auto, sunny, etc.) in RT, and then invert in Gimp, I get an OK positive to optionally tweak further in two clicks.

If I use only the Negative Lab Pro LR plugin in default settings, it’s just one command:

In both cases there’s no need to pick points, and it’s easy to batch process a whole roll.

Thanks for the feedback. Very nice pic, by the way :wink:

Exactly. The tool needs two different levels of gray from the negative.
Try picking these 2 spots, they’re not ideal but might be good enough:
[screenshot: suggested sample spots]

Yes, I know it’s difficult when you have a “border-less” scan, but the 2 gray samples are necessary in order to estimate the correct proportions between channel exponents, because each film type needs different values.
The good news is that, after you have picked the 2 spots and found the correct values, you can use exactly the same values for all frames in the same roll, and most likely you will be able to use the same values on other rolls of the same type (unless the manufacturer decides to change the emulsion).
For example, the default slider values should be OK for Kodak ColorPlus 200, which should work out of the box.

So, if you tend to always shoot with the same film type, or a few of them, you’ll have to do this “calibration” very rarely. You can take one of your negatives where you know you have a white (or light gray) subject, scan it making sure that a piece of clear film border is included, pick the two neutral spots and use the resulting values on all other negatives of the same film type.

I believe that NLP solves this by letting you choose the film type from a list before processing, so the plugin knows the characteristic values of the film type in advance.

You can do something similar with RT by saving a processing profile for each film type, and then recalling it when you have to process another roll of the same type. This way you only need a bit of extra effort once per film type.

alberto


Hi there rom9, excellent work on the invert tool; I shoot a lot of film myself. I’ve also started experimenting with stitching medium format negs to get higher resolution and ran into the same problems as mrock. I wonder, is this fix in the main git tree? I’m not very familiar with compiling stuff manually (and completely unfamiliar with applying patches), as I’ve always compiled through the AUR on Manjaro.

I compiled and installed “Commit description: 5.7-419-g39824c12d” but I still get very inconsistent behaviour regarding white balance and uneven exposure/contrast.

Hmm, I mucked about a bit and finally managed to pull your branch (I think) and compile it; this is what ‘About’ in RT says:
Version: 5.6-657-g3a207dace
Branch: filmneg_stable_mults

But I still get this weird uneven contrast and white balance. I have 4 images of one negative, and it seems the two images with a border on the right are warmer and darker, and the ones to the left are colder and brighter. I’d upload the images, but I discovered something creepy and unsettling while looking closely, so I have to check something out before I dare upload them.

Hi,
it’s strange that you’re seeing “5.6-…” in the “about” dialog…
Have you used the “Pick film base color” button?

[screenshot: “Pick film base color” button]

Can you confirm you have the same “Film base RGB” values in both images?
If you don’t even have that button, then you somehow got the wrong version.
In that case, please start from scratch by following the instructions in this post.

Totally agree about NOT uploading creepy negs, whatever it is :smiley:

Yeah, nope don’t got that button :stuck_out_tongue: Something must’ve gone wrong! I’ll try again…