Any interest in a "film negative" feature in RT?

Here is what I do with some of my negatives where there are no obvious neutral spots, but I’m afraid I’m not going to reveal anything new. Since you seem to be asking primarily about “methodically”, I will take it as “not necessarily with RT”. Generally, RT/filmneg is what makes the best conversion for me, but sometimes I use Darktable with Negadoctor, specifically for the way it handles the highlights and shadows.

The approach, like you described, is indeed about matching the channel histograms, but at both ends. Channel histograms often have a well defined peak, and matching on it may give a good starting point. Sometimes, though, the peak may not be that well defined, or there may be more than one. In such cases a good conversion might be when all channel histogram peaks are matched simultaneously.

For this, the ability to shift and stretch an individual channel is necessary, and that’s where Darktable becomes relevant. Besides selecting the rebate color as the basis for the black point, it allows separate adjustments of the color cast in the shadows and in the highlights, as well as stretching and contracting the histogram in a pretty intuitive way.

Getting back to the “methodically”, the approach boils down to fitting all channels inside the histogram area; shifting them to get a neutral black point; and then adjusting the highlights so that their shape also syncs on the histogram. This requires a fair bit of compensation: when we try to e.g. move a balanced histogram as a whole to the left, the channels often get out of sync and need to be adjusted again. Nevertheless, this doesn’t get completely out of hand, and with some negatives it results in reasonably well controlled conversions.
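To make this concrete, here is a toy sketch of the peak-matching step (my own illustration, assuming a linear float RGB array; not what RT or Darktable actually compute):

```python
import numpy as np

def match_channel_peaks(img, bins=512):
    """Shift and stretch each channel so the histograms line up.

    img: inverted (positive) float RGB image in [0, 1].
    Anchors each channel's low end on its 0.1th percentile (the
    neutral black point) and aligns its main histogram peak with
    the green channel's peak.
    """
    out = np.empty_like(img)
    # Green is the reference channel for the peak position.
    ref_hist, ref_edges = np.histogram(img[..., 1], bins=bins, range=(0, 1))
    ref_peak = ref_edges[np.argmax(ref_hist)]
    for c in range(3):
        ch = img[..., c]
        black = np.percentile(ch, 0.1)              # per-channel black point
        hist, edges = np.histogram(ch, bins=bins, range=(0, 1))
        peak = edges[np.argmax(hist)]
        # Linear map: black -> 0, channel peak -> reference peak.
        scale = ref_peak / max(peak - black, 1e-6)
        out[..., c] = np.clip((ch - black) * scale, 0, 1)
    return out
```

What this crude linear map cannot do is adjust only the highlight end of a channel, which is exactly the missing piece discussed below.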

This is quite similar to what can be done in Photoshop using Levels. The difference is that Levels only lets you adjust the channel gamma, and that’s not precise enough. Darktable targets highlights and shadows more precisely, so that when one area changes, the other is not affected that much. In RT, when changing the red/blue ratios, the corresponding channel’s peak changes its height (which makes total sense, as the ratio of pixels belonging to the affected channels changes). Changing the white balance offsets shifts the channels in relation to each other without affecting their shape much (also makes sense, as a whitepoint adjustment). What is missing is the ability to affect e.g. only the highlights of an individual channel.

When converting such negatives using RT/filmneg, I often just select spots reasonably close to neutral and not on the toe or the shoulder, and from there I adjust freehand, meaning there is no hard color reference; it is of course very difficult to nail a good conversion this way.


With everything said above, this is also true. This makes me think the discussion is now not about a perfect automatic conversion anymore, but about finding some tools which would make the manual conversion of individual difficult frames easier and more reliable.

Here are some unorganized thoughts of mine.

  1. Quite often when I make an analog shot, I also make a digital (still or video) capture, just as a reference to the scene and as an exposure record. In some cases I make a duplicate digital capture too, for comparison or other reasons. With this, at the negative conversion step I have a reference. Typically it does not match the contrast very well, but it matches the hues closely enough. Hue, for me, is the most difficult part to get right. With it in place, saturation and lightness are kind of easier. So this could be one way out for new captures, albeit not applicable to old ones.

  2. Back to the “methodical” approach, an ability to separately affect the color in the shadows and the highlights seems to be important when converting negatives. Contrary to my previous opinion, there might be no way around having more sets of color controls just for this purpose. Yet, to make it more intuitive, maybe they should not be RGB but HSV, as this is where the most inconvenience comes from (for me): adjusting RGB to get a precise color mix. HSV may make it easier. To work well, this requires an informative and responsive histogram (not just the preview).

  3. Speaking of matching a given color from memory, I still believe it is not realistic, due to how color works, but that’s only for an individual spot color. If we try to match a whole hue group, that might work very well. Here is a project I came across which (among other things) extracts the dominant hues from an image: image analyser. With those extracted hues, which are the dominant colors of the prominent objects, as yet uncorrected, the task of coming up with a correction seems easier: they are already on screen as a reference; they are dominant, so not random; and they are much easier to remember as the color of a whole object, as opposed to the color of an individually picked spot. It should not be difficult to realize which direction the correction should take. (A rough sketch of extracting dominant colors follows this list.)

  4. There are also the examples of spectrograms. These could be helpful in matching the color of the same objects between shots, even if the shots were taken from different perspectives, with different lenses and film.

  5. This now seems a very good idea (it took me some time :blush:). I’ve been getting a lot of harsh highlights in my conversions, where the tonality is close to non-existent and the color noise is overwhelming. It didn’t happen this way with Darktable on the same negatives, probably due to the mentioned “nicer formula”.
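To illustrate point 3: extracting dominant colors is straightforward with a small k-means in RGB space (a minimal sketch of the general technique; the linked project’s actual method may differ):

```python
import numpy as np

def dominant_colors(img, k=5, iters=20, seed=0):
    """Tiny k-means returning the k dominant colors of an image.

    img: float RGB array in [0, 1]. Returns a (k, 3) array of cluster
    means, sorted by cluster population (most dominant first).
    """
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 3)
    # Subsample for speed; dominant colors survive subsampling.
    idx = rng.choice(len(pixels), min(len(pixels), 50_000), replace=False)
    pixels = pixels[idx]
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute means.
        dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    return centers[np.argsort(-counts)]
```

Showing such swatches next to the preview would give exactly the on-screen reference described above.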

Parts of this seem to be getting quite far from the direction filmneg has been taking so far, but maybe some of this can still be applied.

1 Like

Thanks for the input! :slight_smile:
I’m currently trying to take a more complete measurement of the film response curve, using the 5 target shots from the post above, taken at increasing exposure levels from -1EV to +3EV.
I wrote a small script that extracts the average channel value from each of the 6 gray patches on each of the pictures.
Then i did the same sampling on a digital picture of the same target, shot immediately before the negative was shot. From the 6 patch values of that single digital picture, i derived the corresponding values for the whole range of exposures (-1 … +3) by simply halving or doubling the initial values.
Finally, i made a log-log XY plot with the digital values on the X axis and the corresponding negative values on the Y axis; here’s what i got:

which is surprisingly close to what theory would suggest :slight_smile:
(note that the negative pictures were white-balanced on the film border, hence the channels coincide at their highest value).
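For reference, the gist of the script was something like this (a simplified sketch, not the actual code; the patch coordinates and file names are placeholders, and rawpy is used here just for illustration):

```python
import numpy as np
import rawpy                        # assumption: raws decoded with rawpy
import matplotlib.pyplot as plt

# Hypothetical (row, col, size) of the 6 gray patches in the frame.
PATCHES = [(400, 300 + i * 220, 80) for i in range(6)]

def patch_means(path):
    """Average linear RGB of each gray patch in one shot."""
    rgb = rawpy.imread(path).postprocess(gamma=(1, 1), no_auto_bright=True,
                                         output_bps=16).astype(np.float64)
    return np.array([rgb[r:r + s, c:c + s].mean(axis=(0, 1))
                     for r, c, s in PATCHES])

# Negative shots at -1 .. +3 EV (placeholder file names).
neg = np.concatenate([patch_means(f"neg_{ev:+d}ev.NEF") for ev in range(-1, 4)])

# Single digital reference shot; the other exposures are derived by
# halving/doubling, since the sensor response is linear.
ref = patch_means("digital_ref.NEF")
dig = np.concatenate([ref * 2.0 ** ev for ev in range(-1, 4)])

# Log-log plot: a power-law response should show up as a straight line.
for c, color in enumerate("rgb"):
    plt.loglog(dig[:, c], neg[:, c], "o", color=color)
plt.xlabel("digital (scene) value")
plt.ylabel("negative value")
plt.show()
```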

Unfortunately, this range of exposures is too small to show the real limits of the film, but it seems that there is already some slight “bending” at the extremes. To be able to “zoom out” on this graph, and see what happens at more extreme values, i need to shoot a gradient with a broader dynamic range.
For this purpose, i ordered a small 8x8 LED matrix, which i plan to drive via an arduino: starting from the array fully lit, and turning off one pixel at a time, at increasing delays. All this while taking a 1 sec or longer exposure, shot directly at the array. This should give a 64-step gradient spanning a huge range of light values.
I expect to reach the limit of the lens (due to lens glare) much sooner than the limit of the film; in this case, i’ll split the work in two, and take separate shots for the upper and lower limits.
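To get a feel for the numbers, the switch-off schedule can be computed like this (a sketch; the 16 EV target range is just an assumption):

```python
# 64-step gradient: all pixels start lit; pixel i is switched off after
# on_time[i] seconds while the camera integrates for t_total seconds.
# The integrated light of pixel i is proportional to on_time[i], so a
# geometric series of on-times gives evenly spaced steps in stops.
stops = 16              # dynamic range to cover, in EV (assumed)
steps = 64
t_total = 2.0           # total exposure time in seconds

ratio = 2.0 ** (stops / (steps - 1))        # per-step brightness ratio
on_time = [t_total / ratio ** i for i in range(steps)]
off_delay = [t_total - t for t in on_time]  # when to switch each pixel off
print(on_time[0], on_time[-1])              # 2.0 ... ~3e-05 s
```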

I hope to observe that all the channels behave the same way at the extremes; this would mean that we won’t need three separate adjustments for toe and shoulder. Also, we’d be very lucky if the curve behaves symmetrically on both ends … we’ll see :wink:

Maybe better handling of over- and underexposure could also lead to better and easier-to-achieve color accuracy, because sometimes, while adjusting our negative conversions, we might be misled by some color cast in a highlight area, and in fixing that we throw the rest of the picture off the rails… :thinking:

1 Like

Which… theory? :blush:

There is something bothering me about measuring the WB on the film rebate. Is it correct to treat it as a uniform color cast which just needs to be subtracted? If that were the case, the rebate could just as well be clear, like on a B&W film, couldn’t it?

Instead, the rebate reveals layers of the film which dynamically participate in the color formation. The color of the rebate is the combined color of the magenta and yellow color masks where they have not participated in the dye formation. The rebate color should therefore transform on the positive to neutral black. Notably, not to the middle gray where WB is typically measured, not to white, not even to a point on the straight portion of the curve. As the masks do participate in the dye formation, the more exposure a given point receives, the less of the mask remains there, meaning the less relevant the WB correction measured on the black becomes for that point, resulting in a progressively bigger color cast when moving from the shadows to the highlights.

Isn’t this what the diverging RGB curves demonstrate on your chart? Wouldn’t it make more sense, then, to arrive at a neutral middle gray (which is what is already done through the sampling of two neutral spots) and then correct the toe and shoulder channel divergence locally? Which is similar to, if not exactly the same as, what Darktable does. The benefit should be that the film’s exact dynamic range boundaries, and where the image is placed within them, become less important. As long as the black point, middle gray and white point are neutral, the image should be balanced.

With the middle gray anchored from the two spots selected by the user, and looking at the channel divergence at the toe and the shoulder, shouldn’t it be feasible to come up with the local compensation ratios automatically?
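To make the idea concrete, here is a toy sketch of the three-point anchoring (my own interpretation, not what RT or Darktable actually do):

```python
import numpy as np

def three_point_balance(img, black, gray, white):
    """Per-channel levels so the sampled spots become neutral.

    black, gray, white: (3,) per-channel values sampled from the
    user's spots. Maps black -> 0 and white -> 1 linearly, then
    solves a per-channel gamma so gray lands on 0.5 in every channel.
    """
    out = np.empty_like(img)
    for c in range(3):
        lin = np.clip((img[..., c] - black[c]) / (white[c] - black[c]),
                      1e-6, 1.0)
        # Solve lin(gray) ** g == 0.5 for g.
        g = np.log(0.5) / np.log((gray[c] - black[c]) / (white[c] - black[c]))
        out[..., c] = lin ** g
    return out
```

The remaining toe/shoulder divergence could then be measured on the result and compensated locally, per channel.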

Just an aside: color positive films may also use a colored film base to improve the color fidelity in conjunction with the projection light color, like the bluish cast of Kodachrome, made to compensate for the tungsten projector lamp.

In such a test the black level is likely to get skewed a lot by veiling glare, as you mentioned, and as described here: Vinland's Useful Dynamic Range Test (link credit to Elle). The reason reported there, though, is not internal lens glare but light bouncing between the film and the lens inside the camera (although in the referenced post it was the sensor, which is likely more reflective). Anyway, the solution suggested to reduce the glare is to use a pinhole.

This would be solid calibration data for a single film, but would it be generally applicable? Also, as this demonstrates, on film a whole density step (a quarter of the negative’s dynamic range) is dominated by noise, making the TRCs in that area rather arbitrary. Adding to the single-film argument, it also shows that film brands vary a lot in their response.

Actually, there is variability even within a brand between the formats. For example, Kodak Ektar 100 in 35mm has a darker, brownish film base and responds differently to non-normal exposure and development compared to Kodak Ektar 100 in 4x5, which has a much brighter orange-pinkish film base.

Maybe it is the density that should be measured instead, like you were pointing out before. After all, the exposure, the development, any variability there, ends up resulting just in the density range of the three channels. The user could be given a control to modify the detected density range, which would in essence define the dynamic range. Combined with the three-point channel normalization, that could work both in extracting the available tonal range and in achieving the color balance.

Here are some examples of under- and overexposed scenes. It is notable how consistent the Fuji Frontier SP3000 scanner is in interpreting the colors regardless of the exposure, provided that its scanning brightness remains constant.

3 Likes

With some further research (e.g. this, p. 9) it seems the relationship may not be dynamic after all. The mask transforms to dye where the exposure happens, so the dye replaces the mask. This might not be the whole answer though.

There are two masking layers: yellow for extra blue absorption, and pinkish for some extra blue and green absorption. Let’s focus on the green. If the exposure spectrum at any given point consists of mostly (or only) green, the pinkish mask will transform, but the yellow should remain. The color at this spot then has to end up skewed to blue (on the positive). If both masks received their target spectra, both will disappear and the color will be balanced.

When it so happens that both masks are converted to dye, it becomes appropriate to remove the mask by simple addition of its complementary color. But what about the scenario when one mask was converted but the other remains? The outcome would be not only dynamic but non-deterministic, for when a point has a bluish cast, how can we tell whether it is a true color or the effect of a yellow mask that has not been transformed?

Yet, analog prints from color negatives look balanced most of the time. Meaning I have to be missing something.

1 Like

Hi @rom9! I hope you are doing fine. Maybe you as the author will have some ideas on my problem…
After getting great results with RT on 35mm film, I’ve upgraded my rig to support 120 film as well and wanted to check how the latest RT behaves on non-raw images.
Why non-raw? Before, I was doing conversions solely from raws, as I was shooting 35mm. But now I want to extract more resolution from 6x7 frames than a single 24 MP D750 capture allows, so I capture a frame in parts, apply flat field in RT, reset all default auto-curves to make the images look flat, and export from RT to Lightroom (16-bit TIFF), where I use autostitching. Then I export the stitched negative from LR to a 16-bit TIFF. Then I go back to RT to convert the final stitched image as a whole using RT’s negative tool.
It works absolutely great for black and white images: using 8 shots I obtained a result visually indistinguishable from a scan of the same frame from a Nikon 9000 at max settings (100 MP in my case, 80 MP on the 9000).
Now the problem: it works badly for color negatives. The colors look all wrong when converting the stitched negative opened as a TIFF. I’ve ‘debugged’ it and narrowed the problem down a bit: I ruled out stitching, Lightroom, etc. I can now reproduce it staying entirely within RT and using one frame.

So, two scenarios.
Good:
Use the .NEF raw file to convert in RT, using our usual procedure.
Bad:
Export the .NEF negative (disabling all auto curves, etc.) to a 16-bit TIFF, then open the TIFF and use the same procedure as in the ‘Good’ scenario, i.e. resample grays, choose the same white point, etc.
No matter what I tried, I cannot achieve on a TIFF of a frame the same good result I can get from its source raw.

Let’s see an example.

  1. Here’s the Nikon 9000 lab scan for reference (not saying it’s great, but it’s okayish…)
  2. RT conversion of the Nikon D750 raw capture:

    Arguably, the colors look better than the Nikon 9000’s.
  3. RT conversion of the same negative file as in point 2, but exported to a 16-bit TIFF first, with the same steps then applied:

    The colors look unacceptable. I cannot improve them to match point 2.

If you want to try it yourself, here’s my .NEF + pp3 from point 2. You can use it as a reference, then remove all negative inversion, export to TIFF, and try converting the TIFF.
1911-1-0019.NEF (28.6 MB) 1911-1-0019.NEF.pp3 (13.6 KB)
Flat field:
1911-1-0003.NEF (27.2 MB)

It would be very interesting to hear your opinion on what I might be doing wrong, or maybe there’s something to improve in RT’s TIFF workflow.

1 Like

One potential solution is to stitch identically done negative inversions rather than invert one negative stitch. Here’s a proof of concept using a two-frame stitch. The colors are good, and I think better than the Nikon 9000’s (but in any case Portra 160 seems not that suited to such a winter scene…)
It can work on simple stitches. I’m afraid it won’t work with a large number of subframes in a stitch, where you cannot easily invert one small area and copy the settings to all the others, since they will have different curves, etc.
So it would be good if you could find a solution that allows inverting a TIFF stitch and getting the same good result!

Hi all, sorry for the huge delay! I’ll try to respond in order :slight_smile:

The one from the wikipedia article linked in the very first post of this thread, which states that … “the transmission coefficient of the developed film is proportional to a power of the reciprocal of the brightness of the original exposure.”
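In other words, T ∝ (1/B)^γ, i.e. log T = -γ·log B + const, which is exactly why a straight line was expected in the log-log plot above, with slope -γ.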

No no, sorry, i didn’t clarify this: i was not suggesting that we should white balance on the film border to do the conversion. This was just a test shot of the negative to evaluate its response curve, simply measuring the light passing through it. I just picked a WB spot to make the curves coincide on a well-known value (Dmin), common to all 5 shots.
And, in any case, there’s no subtraction going on: white balance applies channel multipliers.
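In toy form (illustration only, not RT’s actual code):

```python
# White balance is multiplicative: it scales each channel so the picked
# spot becomes neutral. Note that zero stays zero, unlike a subtraction.
spot = [0.80, 0.55, 0.30]                       # raw RGB of the film border
mult = [max(spot) / v for v in spot]            # per-channel WB multipliers
balanced = [v * m for v, m in zip(spot, mult)]  # -> [0.8, 0.8, 0.8]
```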

In theory, this would work for the toe around Dmin, by having the user also pick a spot on the film border, but we might not have data about the shoulder (around Dmax), since a particular negative might not have a neutral highlight spot reaching Dmax on all channels.
Hence i’m afraid that some manual adjustment will be needed.

I was planning to take two separate shots, one strongly underexposed to evaluate the behavior near Dmin, and the other strongly overexposed to evaluate the behavior near Dmax. This should mitigate the problem by not having extremely dark and bright areas in the same shot, so the dark areas are not “polluted” by glare and bounces from bright spots …

The same data would not be generally applicable, but hopefully the shape of the curve will be, after adjusting it with as few parameters as possible.

Yes, that’s the intent: letting the user adjust Dmin and Dmax. But i’d like to avoid having to adjust them for each channel, so i hope to observe that there is some general rule that can be applied to derive Dmin and Dmax for each channel.

The Fuji Frontier samples are outstanding indeed! :slight_smile:

Well, we could use the film base color to detect how close each channel is to Dmin; that would tell us which channel is completely clear and which one has some density… :thinking:
Anyway, since i have a RGB led array, i can test the channels separately as well :wink:

BTW, the problem with these leds is that they’re not perfectly matched: some are brighter than others when set at the same level… i need to build some sort of LUT and load it on the arduino :smiley:
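The LUT could be generated offline from a shot of the fully lit array and pasted into the sketch as a constant table; something like this (illustration with fake measurements):

```python
import numpy as np

# Fake per-LED brightness measurements, for illustration only (in
# practice, average each LED's patch from a photo of the lit array).
rng = np.random.default_rng(1)
measured = rng.uniform(0.85, 1.0, size=64)

# Scale every LED down to match the dimmest one, quantized to 8-bit PWM.
lut = np.round(measured.min() / measured * 255).astype(np.uint8)

# Emit a C array ready to paste into the arduino sketch.
print("const uint8_t led_lut[64] = {" + ", ".join(map(str, lut)) + "};")
```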

Thanks for the link to the very interesting pdf :slight_smile:

I guess the orange mask is there exactly as a compensation layer to make the print look correct :slight_smile:

Which output color profile did you use when saving the tiff? For the inversion to work correctly, the tiff file should be linear gamma. Can you try saving the tiff using this profile, and also selecting the same icc as input profile when re-opening the file in RT?
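To see why linearity matters, a toy illustration of the power-of-reciprocal model from the wikipedia quote earlier (the exponent value here is made up):

```python
# The model applies a power of the reciprocal per channel:
#   positive = (1 / negative) ** p
# On gamma-encoded data, negative_enc = negative ** (1 / 2.2), so
#   (1 / negative_enc) ** p == (1 / negative) ** (p / 2.2)
# i.e. the effective exponent changes, and the tones and channel
# balance come out wrong unless the tiff is linear.
neg = 0.25                      # linear transmission of one channel
p = 1.6                         # per-channel exponent (made-up value)
enc = neg ** (1 / 2.2)          # sRGB-like gamma encoding
print((1 / neg) ** p)           # ~9.19, intended result
print((1 / enc) ** p)           # ~2.74, what the encoded tiff yields
```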

Also, your very nice pictures deserve a better backlight! :slight_smile:
The one used in this sample is extremely “un-flat”… i know you can compensate for it via the flat field shot, but such large compensation levels could introduce problems…

1 Like

You are a wizard @rom9! Thanks very much for pointing this out; I couldn’t have figured it out myself, as I’m very bad with profiles (and maybe the UI could have made it more prominent that not just any tiff is going to work).

I studied profiles a bit, and instead of using those linear profiles from GitHub, I did the following (in case someone wants to do the same conversions):

  • Used RT’s built-in ICC profile generator: there I chose sRGB + linear gamma and saved it.
  • Then put the saved icc in the system folder so that both RT and LR can see it (a restart was needed for LR, plus selecting “also show Display profiles” in the export screen).
  • Then disabled all curves in RT, chose my new linear profile as the output profile, and saved the two partial negative stitches as 16-bit tiffs.
  • Then imported them into LR, made the stitch, and went to export:
  • there I chose TIFF + select custom profile → my linear profile → save the tiff; it’s now also linear.
  • I just opened the saved TIFF negative stitch and used the usual RT inversion as if it were RAW, and this time it worked great! I did not have to choose an input profile, since the default option of using the embedded one worked fine (it seems LR saved it to the TIFF as well).

Here’s the result of inverting in RT a TIFF stitch assembled in LR:

Thank you! Yes, I also noticed it’s uneven; the new rig, while allowing larger formats, is much more susceptible to room lights being on. I will have to fix it somehow. But we can see the flat field actually works wonders!

1 Like

Hi @rom9! Now that the feature is really stable and works great, maybe it’s time to merge it to develop? It will then get into the automated nightly builds, so it’s readily available for people not able to build it from source.
Yeah, the reason is also a bit selfish: I compiled it in an 8 GB RAM Linux VM, and it’s very slow to open the 60 Mpix tiffs I’m getting from my 6x7 camera scans (opening a dir with 10 frames takes more than an hour). The same files open normally on the 16 GB host machine, but that’s Mac/Windows, where building is too tedious compared to Linux, and the available nightly builds lack the new filmneg tool :slight_smile:

1 Like

That’s definitely too much. We have to inspect this…

It might not be a RawTherapee problem (why did I think darktable? Probably because of the recent macOS/ARM discussion…) but a VM with an improperly configured storage driver.

Side note: I really should try playing with WSL2, for personal reasons such as this in addition to work projects… It might make the “building is too tedious” part a bit easier.

One thing we can definitely rule out is the filmneg tool itself: I open a folder with ten 200 MB 16-bit compressed negative tiffs freshly exported from LR and don’t activate any inversion yet; just double-clicking on the folder in the RT browser stalls it for an hour, with each image preview appearing about once in ten minutes.
I don’t know, maybe it’s the general sluggishness of the VirtualBox shared folder (i.e. the images are in a Mac host machine folder mapped to the VM, and the host folder is itself a Samba share on a home NAS).
But the 30 MB NEFs were opening just fine in this config; the slowdown in building previews for the tiffs is definitely not linear.

And the non-VM installation of the RT develop branch opens the same folder in several minutes max, I think even faster (I have a late-2013 iMac, so it’s not very fast itself).

Hi! :slight_smile:
That might also be a memory problem, can you check whether your VM is using a lot of swap space while opening that folder?
I tried opening a folder with 8 sample GFX100 raw files (209 MB each), then another folder with several large tiff scans (~200 MB each), and it takes a few seconds, reading from a USB hard drive.
I noticed though that RT allocates quite a bit of RAM while opening the previews (unless they’re already cached), so maybe you’re having a disk swap issue.

Anyway, you’re right regarding merging this into the dev branch; i’ll ask on the github PR if everybody agrees :slight_smile:

3 Likes

So wait… It’s a network share that is being re-shared to the VM, as opposed to having the VM mount the NAS directly?

IIRC, VB implemented host filesystem access as a network-share approach, and re-sharing a network share a second time from a client can often have some really negative performance implications.

This might be of use: Run RawTherapee custom linux build on Windows 10 with sharp fonts on a HiDPI display · GitHub.

On Windows, with MSYS2, building is about as difficult as on Linux; the main task is to set up the MSYS2 toolset, but there is a guide in RawPedia. You get a kind of minimal Linux with all the tools and dependencies available (bash, git…).
I think I can use the DrSlony linux bash script as it is, but I really prefer to bundle all dependencies, as I regularly update the MSYS2 environment.

RawTherapee_filmneg_5.8-2702-g7db670bd3_W64_generic_201116.7z uploaded at https://keybase.pub/gaaned92/RTW64NightlyBuilds

What is your processor?

Yeah, now that you’ve pointed it out, it sounds stupid to me as well :slight_smile:
I will check if it gets better by mounting directly.

Yeah, I read the guide. I stopped at the point where it said I had to downgrade certain packages, etc. It’s too much compared to Ubuntu, sorry :slight_smile:
Although I understand the context; it’s not really RT’s fault.

i5-4570R @ 2.7 GHz
Note it doesn’t look to be CPU-bound, AFAIR from the system monitoring.

@rom9 Your LED RGB light source, what kind of spectrum does it have? Is it narrowband? Did you by any chance experiment with comparing captures made with such a backlight to ones made with a wideband source, in terms of the eventual conversion quality?