Any interest in a "film negative" feature in RT ?

Yeah, I often feel the same. For me it helps to step away from the screen for a while and then return and recognize that it looks wrong.
Also, RT has a handy compare/history tool at the bottom left where you can switch to a previous state and compare. Sadly, the history is not persisted after moving to another photo.

All that being said, there are certain frames which I can’t get to look right no matter what I try.

1 Like

Fix pushed to git :wink:
Using 0.1 and 0.05 for level and reference exponent respectively seemed a bit too coarse, so i’ve used:

  • 0.05 for Output Level
  • 0.01 for Reference Exponent
  • 0.01 for Red/Blue ratios.

Keep in mind that, in addition to clicking the + and - buttons, you can also Shift + Mouse Wheel while hovering over the spinbutton. This is quick and precise at the same time :wink:

Also, i’ve increased the max value of the ratio sliders from 3 to 5, since i came across a very old negative where the blue channel is completely flat, and i needed a much higher exponent to recover some contrast in that channel.

Moreover, i was completely wrong about Capture Sharpening! Despite the film grain, it does a pretty good job on a negative.

Hence, i’m going to remove the setting from the “starting point” profile, so if it’s enabled by default, applying Film Negative.pp3 will leave it enabled.

3 Likes

Interesting point about capture sharpening.
I have not really tried to find suitable settings for this, so I’ve remained “old-skool” with the Unsharp Mask, when it comes to enhancing detail. (Don’t tell me about using wavelets for both sharpening and noise reduction, I’m not there yet :smiley: (kidding! do tell me!))

Thanks for the update! I wasn’t aware of the hover action; with this, the delta becomes less of an issue. I agree 0.1 and 0.05 would be way too coarse, and now, even with the smaller one, I’m not sure that was a good idea after all :sweat_smile:

I was wondering, is it possible to explain the algorithm (or the steps)?

I once started tweaking the negfix8 script, and that resulted in me writing my own inversion tool in (simple) C++ using libvips, and I’m still experimenting with ImageMagick on the command line (using it for raw positive scans from a film scanner, not a digital camera).

I first find the darkest values for the R, G and B channels in the scan. I then divide those values by the pixel values (so in ImageMagick terms I do -fx "0.1233/u" for all the channels).

This inverts the image and sets a white point, but the blacks are still ‘dirty’. So I take a scan of just the film strip, average it, invert it the same way and take the resulting values as my new black point. (Without a piece of scanned film strip, I search for the brightest values in my scanned image.)

I subtract those values from the inverted image so black becomes 0,0,0 in RGB again, and I multiply the channels by a respective amount to keep the same scale (basically a levels operation to set the black: if I subtract 500, I multiply by 65535/(65535-500)).

And then I often auto-balance the image by aligning the mean values of the three channels, and finally bring it out of linear space with a gamma correction (or I tag it with a linear gamma 1.0 ICC profile and convert it to regular sRGB or AdobeRGB).
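For reference, here is a minimal C++ sketch of those steps on an interleaved linear float RGB buffer; the per-channel darkest values, black-point values and output gamma are hypothetical example numbers, not values from my actual scans:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of the inversion pipeline described above, on an interleaved
// linear float RGB buffer in [0,1]. The darkest values, black-point
// values and output gamma are hypothetical example numbers.
void invertNegative(std::vector<float> &rgb) {
    const float darkest[3] = {0.1233f, 0.0980f, 0.0651f}; // darkest value per channel
    const float black[3]   = {0.0076f, 0.0081f, 0.0092f}; // inverted film-base color
    const float gamma      = 1.0f / 2.2f;                 // leave linear space at the end

    for (std::size_t i = 0; i < rgb.size(); ++i) {
        const std::size_t c = i % 3;
        float v = darkest[c] / rgb[i];               // invert and set the white point
        v = (v - black[c]) / (1.0f - black[c]);      // subtract black, rescale (levels)
        rgb[i] = std::pow(std::max(v, 0.0f), gamma); // gamma-encode for output
    }
}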

What steps are you doing that I’m not, or any other insights?

Negfix8 calculates a gamma correction per channel to bring the final blacks in line (so R/G/B have the same minimum value) and then subtracts that. I find it yields brighter, flatter images. But mostly it seems the gain settings on my scanner have an impact on the final output brightness, which I don’t think should happen.

What I do now is merge all the images of a film roll into one big collage where they are fitted perfectly together, each image at something like a 255x256 dimension. I search for the darkest values in that combined image (with histogram binning to ignore some peak values I don’t want), then I invert the combined image and do the color balancing on it. I then use the exact values obtained from the combined image on each of the separate scans at full size. This way I set the white point based on the entire film roll, and I color-balance the entire roll as one, while still having separate pictures to edit and tweak.
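The ‘histogram binning’ part could look something like this; a sketch assuming a linear float channel in [0,1], with a hypothetical 0.01% cutoff for the peak values to ignore:

#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// Robust "darkest value" for one channel: bin the values into a histogram
// and return the level below which only a tiny fraction of pixels fall,
// so isolated dark peaks (dust, frame edges) are ignored. The 0.01%
// cutoff is a hypothetical example.
float robustMin(const std::vector<float> &channel) {
    std::array<std::size_t, 4096> hist{};
    for (float v : channel) {
        int bin = static_cast<int>(v * (hist.size() - 1));
        bin = std::max(0, std::min(bin, static_cast<int>(hist.size()) - 1));
        ++hist[bin];
    }
    const std::size_t cutoff = channel.size() / 10000; // darkest 0.01% ignored
    std::size_t seen = 0;
    for (std::size_t b = 0; b < hist.size(); ++b) {
        seen += hist[b];
        if (seen > cutoff)
            return static_cast<float>(b) / (hist.size() - 1);
    }
    return 1.0f;
}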

Doing it all in floating point means I also don’t have to lose the highlights which I might clip when setting a white point.

I was wondering if I’m doing something weird and it’s just luck that I get good results, if I’m missing a small step or something, or if it’s basically the same as what RawTherapee (and darktable?) are doing.

Actually i have no idea how any of these tools work, i never do any sharpening… i only know about Capture Sharpening because i find it enabled by default in later versions :rofl:

No problem :slight_smile:
What if i decrease the Output Level step from 0.05 to 0.01 ?
The Reference Exponent already feels fine-grained enough at 0.01, do you agree?

The algorithm is still exactly the same as described in the first post: for each input channel, raise the value to a negative exponent, then apply a multiplier to balance the output picture.

If i understand correctly, i think you’re applying the per-channel gamma correction too late in the pipeline.
You should first raise each channel to its negative exponent (that’s the same as calculating the reciprocal and raising it to a positive exponent), then balance the resulting picture by applying appropriate multipliers, and finally do the subtraction+scaling for adjusting the levels.

In short, do not do any subtraction before entering the exponentiation stage :wink:
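In C++-like terms, the order i mean is the following (the exponents, multipliers and black level are hypothetical example values):

#include <cmath>

// Order of operations sketched above: exponentiation first, then the
// channel multipliers, then the levels adjustment. The exponents,
// multipliers and black level are hypothetical example values.
float processChannel(float x, int c) {
    static const float exponent[3]   = {-1.52f, -1.40f, -1.31f}; // per-channel negative exponents
    static const float multiplier[3] = {0.82f, 1.00f, 1.13f};    // balance the output picture
    static const float blackLevel    = 0.02f;                    // applied last, never before

    float y = std::pow(x, exponent[c]);            // 1) invert via the negative exponent
    y *= multiplier[c];                            // 2) balance the channels
    return (y - blackLevel) / (1.0f - blackLevel); // 3) subtraction + scaling (levels)
}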

Regarding the auto-balance, i do the same as you, except i use the median values instead of the mean. That is a bit more robust in case you have some very dark or bright areas around the frame, like a film holder, or some direct light visible (sprocket holes?).
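For illustration only, a minimal sketch of that median balancing, assuming separate linear float channel planes (std::nth_element finds the median without a full sort):

#include <algorithm>
#include <vector>

// Median-based auto-balance as described above: bring the red and blue
// medians to the green median. Channels are separate linear float planes;
// median() takes a copy because nth_element reorders its input.
float median(std::vector<float> v) {
    std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
    return v[v.size() / 2];
}

void autoBalance(const std::vector<float> &r, const std::vector<float> &g,
                 const std::vector<float> &b, float &rMult, float &bMult) {
    const float gMed = median(g);
    rMult = gMed / median(r); // scale red so its median matches green's
    bMult = gMed / median(b); // likewise for blue
}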

After months and months i finally made this test :slight_smile:
Here is a sequence of color target shots, in daylight, using Kodak ColorPlus 200 film.
The +0EV was the setting suggested by the camera meter (i’m not sure how reliable it is).

In the second column i calculated the parameters by sampling exponents and balance from the +0EV picture, then i copied those same parameters to all the other pictures.
In the third column, instead, i did the sampling individually on each picture, and adjusted the output level so that the median brightness of the pictures is roughly the same. Note how the contrast is completely gone at +3EV.
To the right of the third column i wrote the red and blue ratios that i got with the dual-spot feature, by sampling the second and fifth gray patches.
Remember: red ratio and blue ratio are ratios to the reference exponent, so we’re really talking about exponents here :wink:

Note how the red ratio drops between +0 and +1EV, then stays almost constant. While the blue ratio drops more significantly between +2 and +3EV.

If this behavior is common to all types of film, i could even compensate for it (that would mean the exponent itself would also be a function of the base, OMG! :rofl:)
But to do that, i would need an absolute input reference, so i would need to ask the user to sample the film base again… oh well.

Here are the raw files: link

2 Likes

The Reference Exponent is quite nice with a 0.01 delta; if the Output Level step is made the same, it will be nicer as well, and more consistent. Sorry for the misleading suggestion earlier :blush:

That’s a great test indeed!

About automatically compensating the ratios depending on the overexposure, I’m not sure. While most mentions of overexposed color negative film speak about loss of contrast and saturation, the effect may not be the same on all channels. After all, the dyes are the result of a chemical transformation which should differ between brands and manufacturers. It would be expected that the color cast appears at different points and with different intensities between films, so these jumps would vary.

Although, how can you tell from the base color whether the film is overexposed and the ratios need to be modified?

No problem :+1: Pushed to git.

If i have the absolute RGB value of the film base color, its ratio to any other pixel value in the picture will give me the amount of film density for that pixel.
So, i can modulate the exponent based on the film density: the higher the density, the lower the exponent. In other words:
y = x ^ f(b / x)

where
x is the input channel value
y is the (unbalanced) output
b is the film base value for that channel
f() is some function that we need to figure out :slight_smile:. It may involve some threshold value on density, and i agree with you that it’s unlikely to be the same for all films.
But if we find that the model of behavior is common, we might add some adjusters and let the user configure the thresholds manually. It would be kind of a “highlight recovery” tool, tailored specifically for film negatives.
Negadoctor takes this into account, if i remember correctly.
Anyway, i would like to do some additional measurements. I need to take a test shot with more steps of gray and a higher dynamic range. I could take multiple exposures with an LED in front of the camera and varying exposure times. I could even take individual R, G and B gradients… :thinking:

My English is generally pretty good, but in the case of math I have to think hard about how to translate it :).

‘Raise the value to a negative exponent’, that’s doing pow(value, -1), right? And then multiply it by a number to ‘balance out the picture’. pow(value, -1) * somenumber is the same as somenumber/value, so we appear to be doing exactly the same thing.

But I pick the numbers to balance out the white point of the final output picture, do you too? Because so far I haven’t touched the ‘film-base color’ yet, which feels weird :).

So after all that, I use the film-base color (supplied by ‘the user’) to set the black point. What I basically do is run the whole process on a scanned piece of film strip, and the color that comes out of it ‘should be black’.

You should first raise each channel to its negative exponent (that’s the same as calculating the reciprocal and raising it to a positive exponent), then balance the resulting picture by applying appropriate multipliers

If I read this, it seems to me you are doing pow(value, -1.125) * somenumber. Which makes me think: how do you determine the -1.125 (an example) in this case, and how do you determine ‘somenumber’? By trying to balance out the black point? The white point? A user-supplied gray patch?

Sorry, that’s most probably because of my bad English (and my bad math too) :rofl:

Correct, except that my negative exponent is not always “-1”, but some channel-specific negative value, as you pointed out below.
Regarding your black point compensation procedure, it should work, as long as you do that after the exponentiation part.
I don’t automate that, since the user can adjust it manually with RT’s tone curve, but it’s basically the same concept.

Yesss! Exactly, that’s all i do :wink:

I calculate the correct exponents by asking the user to select two neutral gray patches at different levels (a dark gray and a light gray), and then using the formula that you can find in the ODS spreadsheet attached to the first post. Look at the cells labeled “p=” (F18 - H18).
That spreadsheet uses the blue channel as a reference, while in RT i use the green channel, but the concept is the same.
Basically i get the ratio between the dark and bright gray value for each channel. Then i choose one channel as the reference, and i find the exponents needed for the other two channels so that their ratio between the dark and bright gray is the same as that of the reference.

Let’s say we have:

  • REFdark : the dark gray patch value in our reference channel.
  • REFbright : the bright gray patch value in the reference channel.
  • Cdark : the dark gray patch value in one of the other channels, for which we want to determine the exponent.
  • Cbright : the bright gray patch value in that channel.

The channel exponent will be:

p = log(REFdark / REFbright) / log(Cdark / Cbright)

that is, the logarithm of (REFdark / REFbright) taken in base (Cdark / Cbright).

Note that “dark” here means “dark in the original scene”, so it will be bright in the negative, and vice versa. Anyway, the result shouldn’t change if you swap both :slight_smile:
Also note that in the spreadsheet i calculated the reciprocals separately, so the resulting exponents are positive.
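In code form, the whole computation is a one-liner (the function and variable names here are just for illustration):

#include <cmath>

// Exponent for one channel from two neutral gray patches, as described
// above: find p such that (cDark / cBright)^p == refDark / refBright.
float channelExponent(float cDark, float cBright, float refDark, float refBright) {
    return std::log(refDark / refBright) / std::log(cDark / cBright);
}

// Example (hypothetical sampled patch values, green as the reference):
//   float redExponent = channelExponent(redDark, redBright, greenDark, greenBright);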

no and no…

Yess, exactly! That’s the function of the “Pick white balance spot” button :slight_smile:
And, while the user hasn’t picked the spot yet, i simply balance the channel medians of the whole image :wink:

1 Like

But how does the film density relate to the exposure in relation to the scene? It could be a bright underexposed scene or a dark overexposed one, and the density would be the same. Yet the contrast and saturation range should be different (I don’t have much experience here, though), as pulled film would have somewhat increased contrast and saturation while pushed film would have less contrast and saturation. How does one tell the difference based on the density alone (or the film rebate color)?

Sorry if I’m asking about something obvious, I really should read your code to get a better grasp of what’s going on.

Yes indeed, those situations would be the same, that is: the same amount of light would yield the same density, no matter the situation. :slight_smile:

I don’t know about pushing and pulling film, but anyway i’m not interested in evaluating the contrast range of the whole picture. I only look at the level of density at a particular spot, which is solely based on how many photons hit that particular spot during exposure.
I’m not trying to find a way to judge whether a picture as a whole is over- or underexposed; i’m only saying that maybe (i’m not sure), instead of applying a simple power curve to the input, i should apply a slightly different curve.
For example:


(you can tweak it here)

The black curve is what i do now: raising the input value to a constant exponent, y = x ^ 1.4 in this example.
The red curve is what i was referring to in my previous post, that is:

if (x<=1.0) then
  y = x ^ 1.4
else
  y = x ^ (1.4 / x)

Or: up to a certain threshold (1.0 in this example) of the input value, apply the constant exponent. Above that threshold, the exponent should decrease as the input increases. Basically it gives a “shoulder” to the curve, to smooth the top end of the output range. Each channel would have a different threshold and possibly a different degree of “smoothing”.
The threshold in this example is a fixed absolute value, but in reality it will be a fraction of the film base value for that channel. This will take into account different digitization exposure levels (basically, the exposure setting of your digital camera or scanner when you shoot your neg).
Note that this is just a crude example of such a formula, just to show the idea. Negadoctor has a much nicer formula to do this kind of smoothing, i could steal that one :slight_smile:
Also, i’ve used positive exponents for simplicity.
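Putting it together, a crude C++ sketch of that idea; the exponent and the fraction of the film base where the shoulder starts are hypothetical example values:

#include <cmath>

// Crude "shoulder" sketch: below the threshold, a constant exponent;
// above it, the exponent decreases as the input grows, smoothing the top
// end of the output range. The exponent and the base fraction are
// hypothetical example values; positive exponents are used for simplicity.
float shoulderCurve(float x, float filmBase) {
    const float exponent  = 1.4f;            // constant part of the curve
    const float threshold = 0.8f * filmBase; // shoulder starts at a fraction of the base

    if (x <= threshold)
        return std::pow(x, exponent);
    return std::pow(x, exponent * threshold / x); // continuous at x == threshold
}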

Don’t worry, nothing is obvious here (at least for me), i’m just taking some wild guesses :rofl:

Not quite, the density depends on the development as well. Meaning you can have two negatives of the exact same scene, one exposed and developed normally, and the other underexposed two stops and then pushed two stops in development (or one, or three) to compensate. The look will be quite different, as the channels will shift non-linearly, meaning a given point’s individual channel density may be the same between the two negatives, but the tonal distribution of the values on both sides of that point will be different. This will affect not only the tone but also the saturation and global contrast.

Overall the idea of smoothing looks interesting. I’m curious to see it in action as it is not quite clear how intuitive it will be to have this more dynamic relationship between the ratios.

I’ve been using negadoctor recently. I have some difficult-to-convert negatives which have no apparent middle grays. These are sheet film, so there are no adjacent frames to refer to. Although negadoctor’s approach to color balancing is far less intuitive compared to filmneg, in one case the result was closer than I could achieve with filmneg so far. Surprisingly, NLP is the one giving the best result overall for that particular frame. Maybe that says more about my skill than about the available software options, though :slight_smile:

What was a revelation to me regarding negadoctor is how excellent its approach is when applied to B&W negatives. The result is outstanding and the control is so straightforward (and actually quite in line with a physical darkroom). I’ve been focusing primarily on B&W for some time now, and the conversion part has been exceedingly difficult before. Something that might not come through as expected compared to the color negatives :joy:

Maybe a hybrid approach where the flow would resemble the negadoctor’s but the color balancing was based on filmneg’s exponents would be the ultimate solution.

1 Like

Thanks for doing the test @rom9! And I like your quoted idea. Before, I imagined something like a discrete slider (-3, -2, …) that the user would choose to specify how much an individual frame is overexposed, eyeballing what gives the best result. But at the same time I imagined frames which technically are exposed correctly, i.e. the middle part lies on the linear part of the curve, but due to the dynamic range of the scene the highlights/shadows just don’t fit the linear section and fall onto the nonlinear shoulders. For my imagined approach there would be nothing to choose except +0. Your approach sounds like a clever solution because it takes each pixel in isolation and determines its ratio to the film base, so the treatment of the midtones would differ from that of the highlights in such high-dynamic-range frames.

I wonder if you could further improve your idea if, along with the film base (let’s call it Dmin, or minimum density), you also knew Dmax, the maximum density of the film in question. Then, by looking at a pixel, you would also know whether you are getting close to Dmax, so it’s time to apply the shoulder correction. With just Dmin it’s not entirely clear when to start the shoulder correction, and different films have different latitude, so it will vary.

Hope I’m not saying total nonsense…

Hi everyone,

There’s so much activity, it’s getting hard to keep track of open questions, to be honest :smiley:

:open_book: One important thing: the rawpedia entry for the filmneg module (http://rawpedia.rawtherapee.com/Film_Negative) desperately needs a revision. I’d like to contribute (currently requesting an account on rawpedia). At least, that would help me understand a few more things :innocent: I guess I still need a clear view of the processing pipeline and how the module is integrated into it.

  1. Would you mind telling me more about the filmneg white balance sliders? Are they exactly on the same orthogonal axes as the usual sliders? (rawpedia mentions temperature is on the blue/yellow axis… contrary to what @rom9 said about it being blueish (azure?) / amber. It does not change much, as long as a 30-degree tilt is performed on the tint axis too.)
  2. Is there a way to revert the filmneg WB to the initial median? (Sorry for missing that info, if it was shared earlier.) It’s not crop-sensitive, right?
  3. Is the filmneg WB sensitive to the order in which the operations are performed? Specifically, say I:
    a. do WB with the spot tool,
    b. adjust the “Output level” slider (imagine I’m stupid, I bump it up way too high),
    c. maybe try some other WB spot?
    ==> Will it still work properly, or will it be biased / sensitive to some channel clipping?
    ==> Same question (but more related to the processing pipeline): does it change anything if the histogram is stretched with curves and then the filmneg spot WB is done again?

Also, the recent discussions about film behaviour with regard to (over)exposure remind me of darktable’s lengthy Filmic FAQ :smiley: there are quite a few technical aspects in there for us film users, so we don’t forget how film behaves.

1 Like

Yes, that’s correct, sorry i didn’t specify it before: everything i said above was in the context of a single film roll. If you then process another roll, developed in a different way, you’ll have different exponents, a different balance, etc., so basically you have to start all over again.

In these cases, finding the correct exponents is quite hard. I had an idea floating in my head about that, which i never tried implementing:
What if i let the user pick two spots, not neutral gray but two spots of any color, and then adjust those colors so that they resemble the real colors in the original scene? The adjustment would be done using two groups of RGB or even HSV sliders, basically two color pickers giving immediate feedback in the preview.
Those two pairs of input and output values would then allow me to find the appropriate exponents via logarithms.
Instead of having exponent/ratio and color balance sliders, we would have 3 sliders for spot #1 and 3 sliders for spot #2. Still 6 sliders in total, but it could be a bit more intuitive… what do you think?

No, that’s totally correct :slight_smile:
I will have to provide a “Latitude” slider to let the user choose when the shoulder kicks in.

Yes, i know, i also have to remember that the pipeline description needs updating (now film negative has moved downstream). But we shouldn’t modify it until RT 5.9 is out, i guess :slight_smile:

Not exactly. The sliders imitate the behaviour of the normal WB tool, so the blue/yellow slider doesn’t go straight from yellow to blue, but instead follows the planckian locus… i think… well, don’t trust me on these scientific topics :rofl:. The point is that RT provides a very handy utility class that easily converts between color temperature + green equalisation (tint) and a set of RGB multipliers.
I used that utility in order to have 2 sliders behaving the same as the normal WB, in the hope of making the process more familiar to the user.
The key difference is that i do not show an absolute color temperature value (it wouldn’t make sense, as it has no correlation to the actual illuminant of the original scene); instead i’ve made a very generic and vague “compensation” slider: if you leave it centered at zero, your spot stays neutral gray; if you move it left or right, you make the image cooler or warmer.
Regarding the magenta/green slider, that should be a simple green multiplier.
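(Purely as a hypothetical illustration, not RT’s actual code: the “compensation” slider could map to multipliers around the neutral point roughly like this, with an arbitrary strength factor.)

// Hypothetical sketch, not RT's actual ColorTemp code: map a generic
// warm/cool "compensation" slider to RGB multipliers around the neutral
// multipliers found via the spot. The 0.3 strength is arbitrary.
struct Multipliers { float r, g, b; };

Multipliers applyCompensation(Multipliers neutral, float compensation /* -1..+1 */) {
    const float k = 0.3f * compensation;
    return {
        neutral.r * (1.0f + k), // warmer: boost red...
        neutral.g,              // green is handled by the magenta/green slider
        neutral.b * (1.0f - k)  // ...and cut blue (cooler is the reverse)
    };
}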

No. Once you select a WB spot, you override the median estimate, and there is no way to redo the estimation other than deleting the processing profile and starting all over again.

Correct. Not in this version, planned for the next.

Absolutely not: there is no clipping involved in the sampling operation. If the image is way too bright and you pick a spot, the balancing should work the same. The output level will stay high, but just drag it back down and you should see everything correctly.

Absolutely not, both Exposure compensation and Tone Curves happen after the film negative tool :wink:

Thanks for the Filmic link, that’s plenty of info! I’ll give it a read :slight_smile:

3 Likes

Yes, I see how this could be the main usage scenario. In the context of a single roll, everything above makes sense. Yet single-frame scenarios are, I believe, no less important, whether it is sheet film or individual slides.

Technically that would work; realistically, I’m not sure. It is hard enough to match a given color with sliders when the example is already on the screen. Matching a color on screen to one’s real-world idea of it would be even more difficult. Of course, an approximation would work and it could be a useful tool, but how different would it be from adjusting the ratios?

Six sliders would not be very user-friendly. Darktable’s solution, where the overall balance is adjusted with RGB sliders, is hard for me to use. The reason is that in a real scene the dominant illumination color is rarely pure R, G or B; it is more often somewhere close to the planckian locus you mentioned, which in RGB terms is a mix. Maybe HSV would be more convenient, though. Or maybe a freehand choice from a color wheel would be even easier.

The scenes I mentioned before may well contain neutral gray spots, but they are lit by colored light, so nothing in the scene is neutral anymore. The difficult thing is to reproduce the color of that light.

Anyway, this is just one opinion. Maybe @Ilya_Palopezhentsev has a different one.

ok, I think I get it. I’m not 100% sure but it looks like both axes are decorrelated (orthogonal).
What I meant to say in the link I shared about the tilt is that:

  • either you have: one blue / yellow axis (as in Lab), opposed to a pseudo magenta / green axis,
  • or, if you have temperature on one axis, you have: one pseudo blue (blue mixed with cyan, so azure?) / amber (red mixed with yellow) axis and, this time, to keep it orthogonal, a true magenta / green axis.

As long as they’re designed this way, you’re fine when it comes to being able to adjust colour individually.

gee, I was brushing my teeth and I was thinking about spots, too xD but not exactly in the same way.
Honestly, I don’t know in advance how it’s going to feel, especially with white balance thrown into the middle. How repeatable is it going to be from frame to frame? Not sure. I can’t tell before trying it out. I have the same concerns as @nicnilov when it comes to UX.

I thought: what if you have some sort of multi-spot tool (make it resizable, too, right ? :stuck_out_tongue_winking_eye: ) for these exponents?

  • current behaviour: user picks two spots, and it’s in their interest that they be as far apart as possible to best approximate the exponentiation curve.
  • suggested idea: user picks two neutral spots or more (maybe up to 9 or 10?) and clicks a button to finish the sampling. This way, instead of having only two dots as material to estimate the curve, you have more samples. I know it adds challenges when it comes to managing outliers, but I thought I’d pitch this (see the sketch below) :grin:
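A sketch of how the estimation could use all the samples, assuming the same power-law model as the two-spot formula (least squares in log-log space; outlier handling omitted):

#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch: estimate a channel's exponent from N neutral spots
// instead of two, by linear regression in log-log space. Model per spot i:
//   log(ref[i]) = p * log(c[i]) + k   (power law with a free multiplier)
// With exactly two spots this reduces to the two-patch formula.
float fitExponent(const std::vector<float> &c, const std::vector<float> &ref) {
    const std::size_t n = c.size();
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        const double x = std::log(c[i]);   // sample in the channel to fit
        const double y = std::log(ref[i]); // same spot in the reference channel
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    return static_cast<float>((n * sxy - sx * sy) / (n * sxx - sx * sx));
}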

@nicnilov, I hear you! That’s actually why I’ve been back to WB basics: sometimes an image is just completely off and, by eye, I don’t know where to start, so I’m very happy if I’m given a tool such as an auto WB (auto RGB grey, or something more elaborate, such as the “auto iterate temperature correlation”) or a spot WB for those lucky frames containing a neutral object. At least, it gives a reasonably better place to start. Then, with the Temperature and Tint sliders, I know how to move towards the desired look (hello, amber sunset shots with absolutely no grey whatsoever).

(oh and it’s time we threw in our special negatives here! you know, those on daylight film, shot with FL-D, FL-W, or with temperature conversion 80/85 filters)

note: I’m mentioning this “auto iterate temperature correlation” as it’s, at least on paper, an interesting feature that aims at taking into account the overall shift in colour balance, provided the lighting has a sufficiently broad spectrum (i.e. forget about using this feature for fluorescent shots).

additional note: it’s going to be an off-topic conversation if we stray towards WB considerations :smiley: This makes me think of the Apple Aperture WB tool (probably present in Apple Photos too?): you can either target natural grey (and deviate from it with a temperature slider) or target skin tones (and also deviate from them with a slider). I thought I’d share it here, as I guess only “few” people know about this neat feature (which is now probably embedded in Apple’s apps).

2 Likes

This sounds nice!
You know what I thought, @rom9 & others? When we let people sample shades of grey that are too far apart in brightness, we are in fact exposing them to potential error: the points we click on may already be on the shoulder part of the characteristic curve and not on the linear part. So I feel that having more samples of not-very-bright and not-very-dark grey shades may instead provide a more honest measurement. Averaging them should also probably cancel out random noise from film grain and digital sampling.

2 Likes

Yeah, the RGB chooser is very inconvenient in practice. The best I have used is the color wheel in Silverfast. Although I wonder where the third coordinate is if we just use a 2D color wheel?..

This idea by @rom9 about sampling 2 colors sounds interesting, but I also think it’s hard to judge its usefulness without actually trying…

It is also unclear what would happen with the other colors apart from the 2 we sampled, and how the target white point would be applied (if at all). In practice I rarely leave the target color as it is after picking the white point; it almost always requires correcting to make the picture good. So I wonder how it would look with this new approach… I wouldn’t remove the current one in favor of this new extended one…