Any interest in a "film negative" feature in RT ?

Sure, here is the location of the settings I was referring to (in the “Color” tab):

image

Yes, I will, that’s a good idea :slight_smile:

But in this specific case, the current crop could include the sprocket holes, so we’d better not trust it :wink:

From film datasheets, for example:
e4050_portra_400.pdf (256.0 KB)
superia_xtra400_datasheet.pdf (346.1 KB)

I looked at the “Spectral Sensitivity Curves” on the last page.

I don’t think the problem is the size of the gamut. In the normal workflow, your input image is converted from camera space to working space. There, you perform your editing operations (manipulating numbers which are expressed as working space R,G,B), and finally the resulting numbers are converted to the output space, to be displayed on screen.
Now, especially since our operation here is an exponentiation (different on each channel), I suppose that performing it in different working spaces, with different matrices, would result in completely different colors after output conversion.
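A tiny sketch of why this happens (the matrices and exponents below are made-up placeholders, not real profile data): the per-channel exponentiation is non-linear, so applying it after two different camera → working matrices yields genuinely different numbers:

```python
# Sketch: per-channel exponentiation after two different
# camera -> working matrices. Matrices and exponents are invented
# for illustration; sign conventions are simplified.

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def invert(rgb, exps):
    # film-negative-style inversion: raise each channel to a
    # (negative) exponent; clamp to avoid 0 ** negative
    return [max(ch, 1e-6) ** e for ch, e in zip(rgb, exps)]

cam = [0.40, 0.30, 0.20]           # hypothetical camera-space values
exps = [-2.0, -2.3, -2.6]          # hypothetical per-channel exponents

matrix_a = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
matrix_b = [[0.8, 0.15, 0.05], [0.1, 0.8, 0.1], [0.05, 0.15, 0.8]]

out_a = invert(mat_vec(matrix_a, cam), exps)
out_b = invert(mat_vec(matrix_b, cam), exps)
print(out_a != out_b)  # exponentiation is non-linear, so they differ
```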

Does this make any sense? :slight_smile:

Ok, I’ve found the setting you meant, and yes, it was this auto-matched one. I’ve quickly tried your recommendation of setting it to “Camera standard” and Rec2020. Here are my impressions:

  1. Yes, main WB no longer affects the conversion much
  2. Yes, some frames become “better”
  3. But it’s not guaranteed at all… It gives some frames the ‘bad’ look I got from NLP before and hated. It’s not how the scene looked… Let me give an example:
    So, Fuji Pro400H, January 2020:

using previous automatched profile + high main WB temperature:

using Camera standard profile + Rec2020 + main WB@backlight temp:

Yes, the first one is also not perfect, but I probably like it a bit more than the second one. The second one looks washed out, and it’s not solved via curves. It was afternoon, close to sunset, the sun was out, yet we feel nothing of it in the second photo. Also the snow has this tone of ‘teal’ or “green”. Exactly what I was getting from NLP and couldn’t do any better with it.
Here is the raw, maybe you can use it for examination…
Dslr0344.NEF (27.7 MB)
The flat field is the same as for the cat photo above.

Oh, this should be fixed by changing the exponents; I see you kept the same values. Sorry I didn’t mention it above: when changing the working profile and/or camera profile, you might also have to re-set the Red ratio and Blue ratio.
Try adjusting the sliders manually, you should easily find a more pleasing result :wink:

Unfortunately, other values are much worse, i.e. unusable. I change them, then reset the white spot, and it all looks bad no matter what I try.
In other words, it’s actually much easier to follow my original way of using auto-matched + high main WB; at least that gives something usable.

The weeds should be golden from the sun, as in this photo from the same roll, inverted with high WB:

Well, yes and no :slight_smile: Yes, if you are saying the calculated value is a set of coordinates and the color at those coordinates differs depending on the color space we’re using. That makes it critical to pick a suitable working space. Ideally that would be the native color space of a given negative. But it also means this task may not be realistic, as each film brand and batch has its own color space.

And no, because as far as I’m aware, a matrix is not an attribute of a color space, but of a conversion between color spaces which is typically defined by a profile. Meaning, while we are in a given color space, a color transformation is just a math operation on a 3-component value, completely oblivious to its effect on the resulting color. Therefore it doesn’t matter that much what color space we perform the exponentiation in, as the results in terms of the calculated absolute value will be the same. But, since what we are calculating is the new coordinates, the gamut size does become relevant. The bigger it is, the smaller the chance to end up with imaginary colors.
Eventually we will indeed have to perform the conversion to the output space, and for that the relationship between colors (the matrix) has to be known, so the working space needs to be defined. But is it possible to generalize it, given the film variation mentioned above?

The more vivid colors are probably due to the auto-matched camera profile, which is more saturated. Keep in mind that you can always boost colors by using the “Saturation” and “Lab Adjustments”->“Chromaticity” settings in the first tab:


(I’m not saying this is a good result, it’s just to show the idea :wink: )

So, in general, don’t worry if you get washed out colors, you can always boost them. The important thing is to get well-balanced colors without weird color casts.
And my impression is that using Rec2020 with different samples gives slightly better results than ProPhoto, without needing much adjustment.

Also, to clarify, in my previous post I was not implying that the “Camera Standard” input profile and “Rec2020” working profile have to be used together, nor that we should always use Camera Standard as the input profile.
With “Auto-matched” vs “Camera Standard”, I wanted to provide an explanation for the differences that you observe by changing the main WB: that way, you are in fact altering the input profile. Which is a very good trick if you get a good result, but it may not work the same for all users with different cameras and setups.

Regarding the latest hack to keep colors constant while changing the main WB, I’ve tried to implement it correctly, but it’s much harder than I thought :sob:
I can’t save the spot coordinates in the processing profile because that would break batch processing. The only alternative I’ve found is to sample reference input values from the raw image, upstream of white balance, but that would bring back a bunch of issues that I had in previous versions, with raw channel scaling not being constant between pictures, and between the thumbnail and the main preview… it’s just too complex.

For now, since the release of RT v5.9 is getting close, I removed the dirty hack, and also removed the hard-coded “FilmNeg” profile, in order to leave the PR in a clean state with the minimal base of features that we know work and make sense. Then we’ll keep experimenting and add features on the dedicated branch.
Moreover, in the latest commits I made some improvements to the sliders: all are now exponential, and the balance sliders are centered at zero. They should now feel a bit smoother and more user-friendly :slight_smile:

I think this is what I meant, but you said it using the correct terms :slight_smile:
I know that the operation itself (exponentiation) is not affected by the chosen working profile. But, by using different working profiles, the input → working conversion will yield different absolute values, hence the result after exponentiation will be different, and finally the working → output conversion will represent different colors on screen.
I agree with you that maybe it’s just impossible to find one profile good for every possible film stock :worried:

Yeah, thank you! I’ve tried the latest version and have more or less learned to reproduce the results I obtained by changing the main WB, using your approach of the generic camera profile + Rec2020. For some photos to become comparable, I had to increase saturation to 20–40.

I also agree we will probably never find an ideal procedure without the reproducible hardware conditions that were provided by film scanners, plus software tailored to them by the manufacturer.

But what’s important is that, in its current state, RawTherapee lets us do very convincing conversions and manually fine-tune them to perfection in a great proportion of cases! I also have Negative Lab Pro; people often praise it, but personally I never got results as good with it as I do with RawTherapee, with all the same raw files. I think you should be really proud, @rom9, that your free offering beats commercial software considered by many to be the state of the art!

Big thank you again for your hard work! Waiting for the final 5.9!


I have created a “starting point” pp3; should I push it to git under rtdata/profiles together with Unclipped.pp3?

Here it is anyway:
Film Negative.pp3 (638 Bytes)

These are the default parameters set by this profile (everything else is left unchanged):

  • disable Capture Sharpening: since it’s quite CPU-heavy, and the image sharpness is limited by film grain anyway…
  • disable auto-matched tone curve
  • enable tone curve 1 for level adjustment, tone curve 2 with a pre-set S-curve, just as a hint for the user

image

  • set the working profile to Rec2020. Actually, the difference from ProPhoto is very minimal from a practical usage standpoint… I don’t know, maybe I could remove this setting :thinking:
  • set the input profile to “Camera Standard”, in order to use a simple matrix by default, for the cases (like some Sony cameras) where the auto-matched camera profile is a DCP with a built-in look table which can’t be disabled.

Opinions welcome! :slight_smile:

@Ilya_Palopezhentsev : thanks! :wink:


I’ve been doing some testing on my side as well, and results so far are indeed superior to Negative Lab Pro or any other method I’m aware of. Although the filmneg approach does not eliminate manual steps and does involve quite a bit of subjective judgement, the fundamental approach feels very solid. It is generally quite straightforward to arrive at a valid conversion, requiring only minor and (most importantly) predictable corrections.

I would like to share the flow I currently use which gives me good results. To some extent it is coordinated with what @Ilya_Palopezhentsev described in his blog post, for which I’m grateful. Parts of the flow were found empirically, so there are also some questions, and maybe mistakes someone might kindly correct.

As a note, I’m new to RT, so I’m probably missing many of its idioms.

My scanning setup at the moment is similar to what’s been posted here. It’s just a box with white inside walls, into which I point a flash from the top. Also on the top there is an opening for the negative, so it gets lit by rather diffuse light bouncing off the walls of the box.

I created an ICC profile of my backlight by shooting a ColorChecker placed at the bottom of the box. Before deriving the profile, I applied the flat field correction to this image.

Here is the flow:

  1. Open the negative and reset its state to Neutral
  2. Apply the flat field correction
  3. Set the backlight ICC profile as the Input profile
  4. Set the working color space to Rec2020
  5. Set the inversion color space to Working
  6. Pick the neutral spots (sometimes iteratively)
  7. Modify the reference exponent
  8. Modify the output level
  9. Add the black point - white point linear curve
  10. Revisit the reference exponent, output level, and adjust the tints
  11. Add a tone curve

There may be more iterations from step 6 forward, but mostly for fine tuning.

And here are the questions:

  1. Previous approaches involved setting the white balance, first on the film rebate, later on the flat field. I did experiment with setting the WB in various ways, but in the end abandoned it altogether. I think it may not be necessary because of the backlight-specific ICC profile, which balances the input, but am I missing something?

  2. With the last couple of filmneg versions I never had to adjust the red and blue exponent ratios. What would be a likely scenario for their use?

  3. Is it OK to change the neutral spots after the mentioned downstream steps, or is it necessary to reset somehow?

  4. I do have a ColorChecker shot on a negative, but so far I couldn’t find a good use for it. An ICC profile made from it blows the colors out completely. Is there a suggested way to use it?

@rom9 Thanks so much for the fine job!

In my testing the difference is quite pronounced and Rec2020 gives superior results.

UPDATE

There is also a usability suggestion. As the adjustable values have different scales and precision, it might make sense to make the correction delta proportional. Currently the “+” and “-” buttons change the least significant digit by a single step. That fits the tints well, but the output level not so much; there, a delta of 0.10 seems more appropriate. For the reference exponent I’d choose a delta of 0.050. I can’t speak much about the red and blue ratios as my experience with them is limited. Otherwise the inputs often need to be edited directly, which is not as convenient.


I think it’s still important to be able to change them to experiment. Even in the latest version, I sometimes sample what look to be valid greys to get one set of ratios; they give a good result for the picture on which I sampled them. But then I copy the settings to another frame on the roll and, after resampling the white spot, they give bad results. Then I resample grey points from another frame from the same roll, and they give quite different ratios already, and better results overall when applied to other frames of the roll.
So, imagine there was no other frame on the roll with grey spots, or none of the frames had good grey spots (quite likely when shooting ten 6x7 frames, e.g. in a forest…). Being unable to play with the ratios manually would make it impossible to correct.


Certainly. Rather than implying they have no use, I was looking for an example scenario when they become important.

As a matter of fact, after writing that I was doing some more conversions, this time of flatbed-scanned TIFFs. As these don’t have an accompanying backlight ICC profile and don’t really have obvious neutral gray spots, they have a pretty wide range of acceptable interpretations. That’s where I did find the red and blue ratios to be of major importance.

@Ilya_Palopezhentsev, do I understand it correctly that the colors you are getting on your images (which are very pleasant) are in the end the product of your visual and subjective assessment and not dictated by the outcome of the mostly automatic conversion?

Regarding the copying, I ended up accepting that the copied conversions do not work very well most of the time, even within the same roll. I found it more reliable to convert each frame manually from scratch. Maybe this is something the ColorChecker shot on the negative might help with, but so far I haven’t found a way to apply it.

This is not unexpected though. For frames sharing the general scene properties and the exposure, the copying would likely work quite well. For rolls where the scenes and the exposure vary more it is natural that the conversions are more difficult to reuse.

@Ilya_Palopezhentsev Thanks for the input on the ratios, it made total sense. I hope someone kindly gives me thorough explanations on the other questions as well.


In theory, you should set the WB to the exact color temperature of the backlight (either measured by your camera, or by picking a spot on the flatfield from RT) and forget about it, because this is what your custom profile was calibrated for.

In practice, since we don’t know the exact color profile of the negative, altering the input WB can be a clever trick to slightly alter the input profile (and thus, the output of the inversion), in ways that could not be possible by simply adjusting the exponents or the output balance multipliers.

Think of what happens mathematically: for each raw channel, we are computing a weighted sum of the 3 channels, raising the result to an exponent, multiplying by some factor, and that’s our result in working space.
By changing the input WB (which happens before the input matrix) we are in fact also changing the weights of the input matrix.
So… yes, this trick can be put to good use, but it can also get confusing. If you want to adjust colors very precisely to your taste, RT has plenty of solid tools for that (like L*a*b Adjustments, HSV Equalizer, Color Toning, etc.); I would suggest using those instead :wink:
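To illustrate the math (with a made-up matrix and made-up WB gains, just a sketch): applying per-channel WB gains before the input matrix is exactly equivalent to using a matrix whose columns have been rescaled, i.e. a matrix with different weights:

```python
# Sketch: white-balance gains applied before the input matrix are
# equivalent to using a modified matrix. Matrix, gains, and raw
# values are all invented placeholders.

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

M = [[0.70, 0.20, 0.10],
     [0.10, 0.80, 0.10],
     [0.05, 0.15, 0.80]]     # hypothetical camera -> working matrix
wb = [2.1, 1.0, 1.6]         # hypothetical per-channel WB gains
raw = [0.12, 0.30, 0.25]

# path 1: apply WB first, then the matrix
out1 = mat_vec(M, [g * ch for g, ch in zip(wb, raw)])

# path 2: fold the gains into the matrix columns
M2 = [[M[r][c] * wb[c] for c in range(3)] for r in range(3)]
out2 = mat_vec(M2, raw)

assert all(abs(a - b) < 1e-12 for a, b in zip(out1, out2))
```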

(see below)

In this version, yes, it is OK! :slight_smile:
Before, in RT 5.7 and 5.8, you had to re-balance each time you changed the exponents. Now, the spot that you choose with the “Pick white balance spot” button will be saved as an “anchor point”, and the multipliers will be automatically adjusted so that the same input will always produce the same output, regardless of the exponents.
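A sketch of the anchor-point idea (illustrative values only, not RT’s actual code): store the reference input from the picked spot, and whenever the exponent changes, recompute the channel multiplier so that the reference still maps to the same output:

```python
# Sketch of the "anchor point" idea: recompute the channel multiplier
# so a stored reference input always maps to the same output,
# regardless of the exponent. All values are illustrative.

def multiplier(ref_in, target_out, exponent):
    # solve  m * ref_in ** -exponent == target_out  for m
    return target_out / ref_in ** -exponent

ref_in, target_out = 0.25, 0.5   # hypothetical anchor spot

for exponent in (1.8, 2.0, 2.4):
    m = multiplier(ref_in, target_out, exponent)
    out = m * ref_in ** -exponent
    assert abs(out - target_out) < 1e-12   # anchor is preserved
```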

Unfortunately, for now, the only use I can suggest is to eyeball the result. :sob:
You can make the task a bit easier by using the Lockable Color Picker feature and comparing the values with the “official” color values of the target (which you should easily find online). Maybe the most effective way to evaluate color is to use HSV coordinates.

Rec2020 it will be :wink:

Very good idea! Thanks for that, I’ll change it.

You were both right: different exposures require different exponent ratios. Here’s an example.
I have a roll where most pictures work pretty well with the default settings. I copied the same settings (exponent/ratios, reference input values, output level/balance) to all frames, and all were digitized at the same exposure setting, so channel multipliers are consistent. This way it’s easy to evaluate whether a frame is over- or underexposed with respect to another.
Well, I have this frame, which is clearly overexposed. This is also evident from the naked-eye appearance of the neg:

image image

Let’s say I want to recover it: I simply lower the Output Level, and I get this:

image

The highlights, which were previously clipping to white, are now somewhat pink. I can try to adjust the output balance to make the facade gray, but then everything gets greenish:

image

This is because (I think), as the film approaches Dmax, each channel exponent starts to diverge from the nominal value. This is not explicit in the Fuji and Kodak datasheets, as the curves are abruptly cut:

image

but I’m pretty sure that if the manufacturers were to trace the curves further to the right, we would see a knee on each curve.

Anyway, to fix the picture I can now lower the Red ratio (which means lowering the red channel exponent, since red exponent = red ratio * reference exponent), and raise the Blue ratio a bit, until the subject becomes neutral gray.

image

Note that the overexposure of the film also makes the picture less contrasty. I could further raise the reference exponent, or tweak the tone curve… you get the point.
Also notice how the lower part of the picture, in shadow, now has a slight reddish color cast. That’s because the new exponents are not good for normal exposure levels; we are sacrificing the shadows in order to recover the overexposed part.
For reference, the values here were:

red ratio  = 1.151
blue ratio = 0.93
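For completeness, here are the per-channel exponents implied by those ratios; the reference exponent of 2.0 is just an assumed example value, not taken from the post:

```python
# Per-channel exponents from the ratios quoted above; the reference
# exponent value (2.0) is an assumption for illustration only.
ref_exponent = 2.0
red_ratio, blue_ratio = 1.151, 0.93

red_exponent = red_ratio * ref_exponent    # about 2.302
blue_exponent = blue_ratio * ref_exponent  # about 1.86
print(red_exponent, blue_exponent)
```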

In general, try to experiment with your overexposed negatives: you will be amazed at the amount of information that’s captured in there! :slight_smile:

Yes, you nailed it :wink:


Thank you! Yes, I don’t believe in fully automatic conversions, as negative film is an interpretative medium. We should definitely have a vision of which result is good for us and nudge the program in that direction. I choose a realistic vision, whereas I see that many prefer more alien colors, associating them with the ‘film look’. Personally, I find that wrong. I suspect such a preference may actually be caused by an inability to get natural colors, and accepting the inferior result.

I do believe, though, in copy-pasting settings. It would be very tiresome to search for exponents for each picture. For copying to work perfectly, I guess you should split the frames on a roll into groups: underexposed, normally exposed, overexposed. As we have all seen, they require different exponents to look good. But within each group it should be safe to copy-paste exponents and then edit just the curves/white spots to suit each frame.

Most of the frames will be normally exposed anyway, especially with modern AE cameras, so one just has to find a normally exposed frame to derive exponents that will suit most of the images on the roll, and then fix the rest.


@rom9 Thanks a lot for the very detailed post. The place of the WB correction in the workflow is now much better established in my understanding :slight_smile: The same goes for the red and blue ratios, although I still haven’t got enough experience with them to extract any guidelines for my workflow.

@Ilya_Palopezhentsev While it’s hard to argue with this, achieving natural-looking and pleasant color is not a trivial task. Looking at an image made to perfection in terms of realistic interpretation of color (especially side by side with its inferior version), it’s easy to know it’s good. Looking at an image where the color is already believable, it can be very difficult (at least for me) to tell whether it can be further improved.

Now that I better understand the correlation between the exposure and the exponent ratios, the varying outcome of the copy-pasting becomes clearer as well. Yet, because of what I said above, I’m cautious about any shortcuts that might make me miss something about the image.

Yeah, I often feel the same. For me it helps to step away from the screen for a while, then return and recognize that it looks wrong.
Also, RT has a handy compare/history tool at the bottom left where you can switch to a previous state and compare. Sadly, the history is not persisted after moving to another photo.

All that being said, there are certain frames that I can’t get to look right no matter what I try.


Fix pushed to git :wink:
Using 0.1 and 0.05 for level and reference exponent respectively seemed a bit too coarse, so I’ve used:

  • 0.05 for Output Level
  • 0.01 for Reference Exponent
  • 0.01 for Red/Blue ratios.

Keep in mind that, in addition to clicking the + and - buttons, you can also Shift + Mouse Wheel while hovering over the spinbutton. This is quick and precise at the same time :wink:

Also, I’ve increased the max value of the ratio sliders from 3 to 5, since I came across a very old negative where the blue channel is completely flat, and I needed a much higher exponent to recover some contrast in that channel.

Moreover, I was completely wrong about Capture Sharpening! Despite the film grain, it does a pretty good job on a negative:

Hence, I’m going to remove that setting from the “starting point” profile, so that if Capture Sharpening is enabled by default, applying Film Negative.pp3 will leave it enabled.


Interesting point about capture sharpening.
I have not really tried to find suitable settings for this, so I’ve remained “old-skool” with the Unsharp Mask, when it comes to enhancing detail. (Don’t tell me about using wavelets for both sharpening and noise reduction, I’m not there yet :smiley: (kidding! do tell me!))

Thanks for the update! I wasn’t aware of the hover action; with this, the delta becomes less of an issue. I agree 0.1 and 0.05 would have been way too coarse; now even with the smaller values I’m not sure it was a good idea after all :sweat_smile:

I was wondering, is it possible to explain the algorithm (or the steps)?

I once started tweaking the negfix8 script, and that resulted in me writing my own inversion tool in (simple) C++ using libvips, and I’m still experimenting with ImageMagick on the command line (using it for raw positive scans from a film scanner, not a digital camera).

I now find the darkest values for the R, G, and B channels in the scan. I divide those values by the pixel values (so in ImageMagick terms I do -fx “0.1233/u” for all the channels).

This inverts the image and sets a white point, but the blacks are still ‘dirty’. So I take a scan of just the filmstrip, average it, invert it the same way, and take the resulting values as my new black point. (Without a piece of scanned filmstrip, I search for the brightest values in my scanned image.)

I subtract those values from the inverted image so black becomes 0,0,0 in RGB again, and I multiply the channels by a respective amount to keep the same scale (basically a levels operation to set the black: if I subtract 500, I multiply by 65535/(65535-500)).

And then I often auto-balance the image by aligning the mean values of the three channels, and finally bring it out of linear space with a gamma correction (or I tag it with a linear gamma 1.0 ICC profile and convert it to regular sRGB or AdobeRGB).
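The steps above (invert by dividing the darkest value by each pixel, subtract the film-base black, rescale, gray-world balance, gamma) can be sketched like this; every constant is a made-up placeholder, and real tools of course work on full images rather than short lists:

```python
# Sketch of the inversion pipeline described above, on fake
# normalized (0..1) values. All constants are placeholders.

def invert(pixels, darkest):
    # "-fx darkest/u": darkest pixel maps to 1.0, denser areas darker
    return [darkest / p for p in pixels]

def set_black(pixels, black):
    # levels: subtract the black point, rescale to keep full range
    return [(p - black) / (1.0 - black) for p in pixels]

def gray_world(channels):
    # align the mean of each channel to their common average
    means = [sum(c) / len(c) for c in channels]
    target = sum(means) / 3
    return [[p * target / m for p in c] for c, m in zip(channels, means)]

def to_gamma(pixels, g=1 / 2.2):
    return [max(p, 0.0) ** g for p in pixels]

r  = [0.14, 0.30, 0.62]   # fake per-channel scan values
g_ = [0.16, 0.33, 0.60]
b  = [0.20, 0.40, 0.70]

darkest = [min(c) for c in (r, g_, b)]
inverted = [invert(c, d) for c, d in zip((r, g_, b), darkest)]
black = 0.1  # e.g. measured from an inverted blank filmstrip scan
leveled = [set_black(c, black) for c in inverted]
balanced = gray_world(leveled)
positive = [to_gamma(c) for c in balanced]
```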

What steps are you doing that I’m not, or any other insights?

Negfix8 calculates a gamma correction per channel to bring the final blacks in line (so R/G/B have the same minimum value) and then subtracts that. I find it yields brighter, flatter images. But mostly it seems the gain settings on my scanner have an impact on the final output brightness, which I don’t think should happen.

What I do now is merge all the images of a film roll into one big collage where they are fitted perfectly together, and each image has a dimension of about 255x256. I search for the darkest values in that combined image (with histogram binning to ignore some peak values I don’t want), and I invert the combined image and do the color balancing on that one. I then use the exact values used for the combined image on each of the separate scans at full size. This way I set the white point based on the entire film roll, and I color-balance the entire roll as one, while still having separate pictures to edit and tweak.
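The outlier-skipping part can be sketched as a percentile-style minimum (a stand-in for the histogram binning described above; the cutoff fraction and fake values are arbitrary):

```python
# Sketch: a robust per-channel "darkest value" that skips a few
# outlier pixels (dust specks, pinholes). The cutoff fraction
# is an arbitrary example.

def robust_min(pixels, ignore_fraction=0.005):
    ordered = sorted(pixels)
    k = int(len(ordered) * ignore_fraction)
    return ordered[k]  # the k darkest pixels are treated as outliers

# fake channel: 3 spurious dark pixels among 997 genuine ones
vals = [1, 1, 1] + [40] * 997
print(min(vals), robust_min(vals))  # plain min is fooled; robust_min isn't
```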

Doing it all in floating point also means I don’t have to lose the highlights, which I might otherwise clip when setting a white point.

I was wondering if I’m doing something weird and it’s just luck that I get good results, if I’m missing a small step or something, or if it’s basically the same as what RawTherapee (and darktable?) are doing.