Any interest in a "film negative" feature in RT ?

Sorry, that’s most probably because of my bad English (and my bad math too) :rofl:

Correct, except that my negative exponent is not always “-1”, but some channel-specific negative value, as you pointed out below.
Regarding your black point compensation procedure, it should work, as long as you do that after the exponentiation part.
I don’t automate that, since the user can adjust it manually with RT’s tone curve, but it’s basically the same concept.

Yesss! Exactly, that’s all I do :wink:

I calculate the correct exponents by asking the user to select two neutral gray patches, at different levels (a dark gray and a light gray), and then using the formula that you can find in the ODS spreadsheet attached in the first post. Look into the cells labeled “p=” (F18 - H18).
That spreadsheet uses the blue channel as a reference, while in RT I use the green channel, but the concept is the same.
Basically I get the ratio between the dark and bright gray values for each channel. Then I choose one channel as the reference, and I find the exponents needed for the other two channels so that their ratio between the dark and bright gray is the same as that of the reference.

Let’s say we have:
REFdark, which is the dark gray patch value in our reference channel.
REFbright, which is the bright gray patch value.

Cdark is the dark gray patch value in one of the other channels, for which we want to determine the exponent.
Cbright is the bright gray patch value in that channel.

The channel exponent will be:

p = log(REFdark / REFbright) / log(Cdark / Cbright)

Note that “dark” here means “dark in the original scene”, so it will be bright in the negative, and vice versa. Anyway, the result shouldn’t change if you reverse both :slight_smile:
Also note that in the spreadsheet I calculated the reciprocals separately, so the resulting exponents are positive.
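In code form, the per-channel calculation might look like this (a minimal Python sketch; the names are mine, not the spreadsheet’s or RT’s):

import math

def channel_exponent(ref_dark, ref_bright, c_dark, c_bright):
    # p such that (c_dark / c_bright) ** p == ref_dark / ref_bright,
    # i.e. after exponentiation the channel's dark/bright ratio matches
    # the reference channel's.
    return math.log(ref_dark / ref_bright) / math.log(c_dark / c_bright)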

no and no…

Yess, exactly! That’s the function of the “Pick white balance spot” button :slight_smile:
And, while the user hasn’t picked the spot yet, I simply balance the channel medians of the whole image :wink:
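That fallback might amount to something like this (a sketch with invented names, not the actual RT code):

import numpy as np

def median_balance(img):
    # img: H x W x 3 float array. Returns multipliers that bring the
    # red and blue medians of the whole image onto the green median.
    med = np.median(img.reshape(-1, 3), axis=0)
    return med[1] / med[0], 1.0, med[1] / med[2]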


But how does the film density relate to the exposure in relation to the scene? It could be a bright underexposed scene or a dark overexposed one, and the density would be the same. Yet the contrast and saturation range should be different (here I don’t have much experience, though), as pushed film would have somewhat increased contrast and saturation while pulled film would have less of both. How does one tell the difference based on the density alone (or the film rebate color)?

Sorry if I’m asking about something obvious, I really should read your code to get a better grasp of what’s going on.

Yes indeed, those situations would be the same, that is: the same amount of light would yield the same density, no matter the situation. :slight_smile:

I don’t know about pushing and pulling film, but anyway I’m not interested in evaluating the contrast range of the whole picture. I only look at the level of density in a particular spot, which is solely based on how many photons have hit that particular spot during exposure.
I’m not trying to find a way to judge whether a picture as a whole is over- or underexposed; I’m only saying that maybe (I’m not sure), instead of applying a simple power curve to the input, I should apply a slightly different curve.
For example:

[plot comparing the two curves described below; the “(you can tweak it here)” link pointed to an interactive version]

The black curve is what I do now: raising my input value to a constant exponent, y = x ^ 1.4 in this example.
The red curve is what I was referring to in my previous post, that is:

if x <= 1.0:
    y = x ** 1.4
else:
    y = x ** (1.4 / x)

Or: up to a certain threshold (1.0 in this example) of the input value, apply the constant exponent. Above that threshold, the exponent should decrease as the input increases. Basically it gives a “shoulder” to the curve, to smooth the top end of the output range. Each channel would have a different threshold and possibly a different degree of “smoothing”.
The threshold in this example is a fixed absolute value, but in reality it will be a fraction of the film base value for that channel. This will take into account different digitization exposure levels (basically, the exposure setting of your digital camera or scanner when you shoot your neg).
Note that this is just a crude example of such a formula, to show the idea. Negadoctor has a much nicer formula for this kind of smoothing; I could steal that one :slight_smile:
Also, I’ve used positive exponents for simplicity.
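For instance, generalizing the crude example above so the threshold becomes a per-channel fraction of the film base (still just a sketch; frac and the decay law are placeholders, not what would actually ship):

def shoulder(x, p, base, frac):
    # Constant exponent below the threshold; above it, the exponent decays
    # with the input, giving the "shoulder". The two branches agree at
    # x == t, so the curve stays continuous.
    t = frac * base
    return x ** p if x <= t else x ** (p * t / x)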

Don’t worry, nothing is obvious here (at least for me), I’m just taking some wild guesses :rofl:

Not quite, the density depends on the development as well. Meaning you can have two negatives of the same exact scene, one exposed and developed normally, and the other underexposed two stops and then pushed in the development two stops (or one, or three) to compensate. The look will be quite different as the channels will shift non-linearly, meaning a given point’s individual channel density may be the same between the two negatives, but the tonal distribution of the values on both sides of that point will be different. This will not only affect the tone but also saturation and global contrast.

Overall the idea of smoothing looks interesting. I’m curious to see it in action as it is not quite clear how intuitive it will be to have this more dynamic relationship between the ratios.

I’ve been using negadoctor recently. I have some difficult-to-convert negatives, which have no apparent middle grays. These are sheet film, so there are no adjacent frames to refer to. Although negadoctor’s approach to color balancing is far less intuitive compared to filmneg, in one case the result was closer than I could achieve with filmneg so far. Surprisingly, NLP is the one giving the best result overall for that particular frame. Maybe it speaks more about my skill than the available software options though :slight_smile:

What was a revelation to me regarding negadoctor is how excellent its approach is when applied to B&W negatives. The result is outstanding and the control is so straightforward (and actually quite in line with a physical darkroom). I’ve been focusing primarily on B&W for some time now, and the conversion part has been exceedingly difficult before. Something that might not come through as expected compared to the color negatives :joy:

Maybe a hybrid approach where the flow would resemble the negadoctor’s but the color balancing was based on filmneg’s exponents would be the ultimate solution.


Thanks for doing the test @rom9! And I like your quoted idea. Before, I imagined something like a discrete slider (-3, -2…) that the user would choose to specify how much their individual frame is overexposed, trying to eyeball what gives the best result. But at the same time I imagined frames which technically are exposed correctly, i.e. the middle part lies on the linear part of the curve, but due to the dynamic range of the scene the highlights/shadows just don’t fit the linear section and fall onto the nonlinear shoulders. For my imagined approach there would be nothing to choose except +0. Your approach sounds like a clever solution because it takes pixels in isolation and determines each one’s ratio to the film base. So the approach for middle tones would vary from the approach for highlights in such high-dynamic-range frames.

I wonder if you could further improve your idea if, along with the film base (let’s call it Dmin, or minimum density), you also knew Dmax, the maximum density of the film in question. Then by looking at a pixel you would also know whether you are getting close to Dmax, so it’s time to apply the shoulder correction. With just Dmin it’s not entirely clear when to start the shoulder correction, and different films have different latitude, so it will vary.

Hope I’m not saying total nonsense…

Hi everyone,

There’s so much activity, it’s getting hard to keep track of open questions, to be honest :smiley:

:open_book: One important thing: the rawpedia entry for the filmneg module (http://rawpedia.rawtherapee.com/Film_Negative) desperately needs a revision. I’d like to contribute (currently requesting an account on rawpedia). At least, that would help me understand a few more things :innocent: I guess I still need a clear view of the processing pipeline and how the tool is integrated into it.

  1. would you mind telling me more about the filmneg white balance sliders? are they exactly on the same orthogonal axes as the usual sliders? (rawpedia mentions temperature is on the blue/yellow axis… contrary to what @rom9 said about it being blueish (azure?) / amber. It does not change many things, as long as a 30-degree tilt is performed on the tint axis, too)
  2. is there a way to revert the filmneg WB to the initial median? (sorry for missing that info, if it was shared earlier). It’s not crop-sensitive, right?
  3. is the filmneg WB sensitive to the order in which the operations are performed? Specifically, say I:
    a. do WB with spot tool,
    b. adjust “Output level” slider (imagine I’m stupid, I bump it up way too high),
    c. maybe try some other WB spot?
    ==> will it still work properly or will it be biased / sensitive to some channel clipping?
    ==> same question (but more related to processing pipeline): does it change anything if histogram is stretched with curves and then filmneg spot WB is done again?

Also, the recent discussions about film behaviour with regard to (over)exposure remind me of darktable’s lengthy Filmic FAQ :smiley: there are quite a few technical aspects in there for us film users, so as not to forget how film behaves


Yes, that’s correct, sorry I didn’t specify it before: everything I said above was in the context of a single film roll. If you then process another roll, developed in a different way, you’ll have different exponents, different balance, etc., so basically you have to start all over again.

In these cases, finding the correct exponents is quite hard. I had an idea floating around in my head about that, which I never tried implementing:
What if I let the user pick two spots, not neutral gray but of any color, and then adjust those colors so that they resemble the real colors in the original scene? The adjustment would be done using two groups of RGB or even HSV sliders, basically two colorpickers giving immediate feedback in the preview.
Those two pairs of input and output values would then allow me to find the appropriate exponents via logarithms.
Instead of having exponent/ratio and color balance sliders, we would have 3 sliders for spot #1 and 3 sliders for spot #2. Still 6 sliders in total, but it could be a bit more intuitive… what do you think?
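For each channel, that would amount to solving y = m * x^p through the two (picked input, adjusted output) pairs; a sketch of the math, not actual RT code:

import math

def fit_channel(x1, y1, x2, y2):
    # Exponent and multiplier of y = m * x**p passing through both pairs.
    p = math.log(y1 / y2) / math.log(x1 / x2)
    m = y1 / x1 ** p
    return p, m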

No, that’s totally correct :slight_smile:
I will have to provide a “Latitude” slider to let the user choose when the shoulder kicks in.

Yes, I know, I also have to remember that the pipeline description needs updating (now film negative has moved downstream). But we shouldn’t modify it until RT 5.9 is out, I guess :slight_smile:

Not exactly. The sliders imitate the behaviour of the normal WB tool, so the blue/yellow slider doesn’t go straight from yellow to blue, but instead follows the Planckian locus… I think… well, don’t trust me on these scientific topics :rofl:. The point is that RT provides a very handy utility class that easily converts a color temperature + green equalisation (tint) pair into a set of RGB multipliers.
I used that utility in order to have 2 sliders behaving the same as the normal WB, in the hope of making the process more familiar to the user.
The key difference is that I do not show an absolute color temperature value (it wouldn’t make sense, as it has no correlation to the actual illuminant of the original scene); instead I’ve made a very generic and vague “compensation” slider: if you leave it centered at zero, your spot stays neutral gray; if you move it left or right, you make the image cooler or warmer.
Regarding the magenta/green slider, that should be a simple green multiplier.
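If it helps to picture the net effect, here is a crude sketch (the names and the exponential warm/cool bias are mine; the real utility follows the Planckian locus as described above):

def spot_multipliers(spot_r, spot_g, spot_b, compensation=0.0, tint=1.0):
    # Multipliers that make the picked spot neutral when compensation == 0.
    mr, mg, mb = spot_g / spot_r, 1.0, spot_g / spot_b
    k = 2.0 ** compensation  # > 0 warms (more red, less blue), < 0 cools
    return mr * k, mg * tint, mb / k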

No. Once you select a WB spot, you override the median estimate, and there is no way to redo the estimation other than deleting the processing profile and starting all over again.

Correct. Not in this version, planned for the next.

Absolutely not: there is no clipping involved in the sampling operation. If the image is way too bright and you pick a spot, the balancing should work the same. The output level will stay high, but just drag it back down and you should see everything correctly.

Absolutely not, both Exposure compensation and Tone Curves happen after the film negative tool :wink:

Thanks for the Filmic link, that’s plenty of info! I’ll give it a read :slight_smile:


Yes, I see how this could be the main usage scenario. In the context of a single roll, everything above makes sense. Yet single-frame scenarios are, I believe, no less important, whether it is sheet film or individual slides.

Technically that would work; realistically, I’m not sure. It is hard enough to match a given color with sliders even when the example is already on the screen. Matching a color on screen to one’s real-world memory of it would be even more difficult. Of course, an approximation would work and it could be a useful tool, but how different would it be from adjusting the ratios?

Six sliders would not be very user friendly. Darktable’s solution, where the overall balance is adjusted by RGB sliders, is hard for me to use. The reason is that in a real scene the dominant illumination color is rarely R or G or B; it is more often somewhere close to the Planckian locus you mentioned, which in RGB terms is a mix. Maybe HSV would be more convenient, though. Or maybe a freehand choice from a color wheel would be even easier.

The scenes I mentioned before may well contain neutral gray spots, but they are lit by colored light, so nothing in the scene is neutral anymore. The difficult thing is to reproduce the color of that light.

Anyway, this is just one opinion. Maybe @Ilya_Palopezhentsev has a different one.

ok, I think I get it. I’m not 100% sure but it looks like both axes are decorrelated (orthogonal).
What I meant to say in the link I shared about the tilt is that:

  • either you have: one blue / yellow axis (as in Lab), opposed to a pseudo magenta / green axis,
  • or, if you have temperature on one axis, you have: one pseudo blue (blue mixed with cyan, so azure?) / amber (red mixed with yellow) axis and, this time, to keep it orthogonal, a true magenta / green axis.

As long as they’re designed this way, you’re fine when it comes to being able to adjust colour individually.

gee, I was brushing my teeth and I was thinking about spots, too xD but not exactly in the same way.
Honestly, I don’t know, in advance, how it’s going to feel, especially with white balance thrown in the middle. How repeatable is it going to be, from frame to frame? Not sure. I can’t tell before trying it out. I have the same concerns as @nicnilov, when it comes to UX

I thought: what if you have some sort of multi-spot tool (make it resizable, too, right ? :stuck_out_tongue_winking_eye: ) for these exponents?

  • current behaviour: user picks two spots, and it’s in their interest that they be as far apart as possible to best approximate the exponentiation curve.
  • suggested idea: user picks two neutral spots or more (maybe up to 9 or 10?) and clicks a button to finish the sampling. This way, instead of having only two dots as material to estimate the curve, you have more samples. I know it adds challenges when it comes to managing outliers, but I thought I’d pitch this :grin:

@nicnilov, I hear you! that’s actually why I’ve been back to WB basics: sometimes, an image is just completely off and, by eye, I don’t know where to start so I’m very happy if I’m given a tool such as an autoWB (auto rgb grey, or something more elaborate, such as the “auto iterate temperature correlation”) or a spot WB for those lucky frames containing a neutral object. At least, it gives a reasonably better place to start. Then, with Temperature and Tint sliders, I know how to move towards the desired look (hello amber sunset shots with absolutely no grey whatsoever).

(oh and it’s time we threw in our special negatives here! you know, those on daylight film, shot with FL-D, FL-W, or with temperature conversion 80/85 filters)

note: I’m mentioning this “auto iterate temperature correlation” as it’s, at least on paper, an interesting feature that aims at taking into account the overall shift in colour balance, provided the lighting has a sufficiently broad spectrum (i.e. forget about using this feature for fluo shots).

additional note: it’s going to become an off-topic conversation if we stray towards WB considerations :smiley: This makes me think of Apple Aperture’s WB tool (probably present in Apple Photos too?): you can either target natural grey (and deviate from it with a temperature slider) or target skin tones (and also deviate from them with a slider). I thought I’d share it here, as I guess only “few” people know about this neat feature (which is now probably embedded in Apple’s apps).


This sounds nice!
You know what I thought, @rom9 & others? When we let people sample shades of grey that are too far apart in brightness, we are in fact exposing them to potential error: the points we click on may already be on the shoulder part of the characteristic curve and not on the linear part. So I feel that having more samples of not-very-bright and not-very-dark grey shades may instead provide a more honest measurement? Averaging them should also probably cancel out random noise from film grain/digital sampling.
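If filmneg ever went multi-spot, the fit could be a least-squares slope in log-log space; a hypothetical sketch, which with exactly two patches reduces to the current two-spot formula:

import numpy as np

def fit_exponent(ref_vals, chan_vals):
    # Patch averages from the reference channel and from the channel whose
    # exponent we want. The exponent is the slope of log(ref) vs. log(chan);
    # averaging many mid-gray patches dampens grain and sampling noise.
    slope, _intercept = np.polyfit(np.log(chan_vals), np.log(ref_vals), 1)
    return slope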


Yeah, an RGB chooser is very inconvenient in practice. The best I’ve used is the color wheel in SilverFast. Although I wonder where the third coordinate is if we just use a 2D color wheel?..

This idea by @rom9 about sampling 2 colors sounds interesting, but I also think it’s hard to judge its usefulness without actually trying…

It’s also unclear what would happen with colors other than the 2 we sampled. And how the target white point would be applied (if at all). In practice I rarely leave the target color as it is after picking the white point. It almost always requires correcting to make the picture good. So I wonder how it would look with this new approach… I wouldn’t remove the current one in favor of this new extended one…

this intuition reminds me of a few contre-jour shots (exposed for the shadows, so typically exposed for 12LV when I’d measure 15LV in the other direction) with heavy specular sun reflections: sometimes, I’d attempt to sample the bright neutral white from there, with a rather unpleasing outcome.

you’ve been reading my mind. I know it’s not Christmas :mrs_claus: yet :christmas_tree:
The sampling tool could remain very simple in nature (a square or a resizable rectangle). But, for practical purposes, with an additional option, a tile (or the entire image) could go through a suitable median and gaussian blur filter stage. It may not be the best visual trick in the world, but I’ve used it, sometimes, to make the search for a suitable spot candidate easier.


So, my two cents in this discussion would be: be careful to expand a conceptually simple tool with further bells and whistles. I’m not against it, but please consider the effects on usability carefully. Also keep in mind that if the mechanics (strongly) differ from how the rest of RT works, it could reduce the intuitiveness of the tool and we need to resort to RTFM (which I like to avoid).


Yes, I’ve checked the code and confirm that the green tint parameter is only used as a simple multiplier for the green channel. (… don’t know why I called it “green equalisation” yesterday :rofl:)

I don’t know… sometimes you don’t have that many neutral spots in a picture. I’m afraid the user would end up picking non-neutral colors, and that would throw everything off, despite the averaging…

I know it gives very good results, but I’m not sure how easily the current implementation could be re-used downstream in the pipeline. And reimplementing it from scratch would be quite hard…

Cool! This could be a very nice shortcut in many cases. Since it comes from Apple, I hope the idea is not patented :smiley:

True, this is currently a problem when you click on an overexposed white :cry:
But what happens when there are no neutral spots in a frame? In that case the ability to pick more than two won’t help. That’s why I’d like to find an alternative to the dual-neutral-spot solution… :thinking:

You could still lock one of the spots to gray, and adjusting the other one would change the exponents :slight_smile:

Of course we’re talking about wild experimentation here, nobody will remove anything until we have something better :slight_smile:

Yes, that’s the key to a different approach I had in mind:

  • let the user select the whole frame (no borders)
  • get a good median and the dynamic range of the digitized negative frame.
  • stretch levels based on the input dynamic range
  • start with default exponents, balancing so that the input median maps to 50% gray
  • from that, select two “virtual spots” at 25% and 75% respectively, and let the user color-balance the dark and light areas separately. From those we can derive channel exponents and multipliers (see the sketch after this list).
  • two additional “Latitude” sliders can deal (if needed) with the extreme shadows and highlights, adjusting the curve’s toe and shoulder.
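A rough sketch of how those virtual spots could be derived (hypothetical code; per-channel quantiles standing in for user-picked patches):

import numpy as np

def virtual_spots(img):
    # img: H x W x 3 float array of the frame (borders excluded).
    # The per-channel 25% / 75% quantiles act as the "dark" and "light"
    # virtual spots; the user's corrections to those two colors would feed
    # the same two-point exponent/multiplier fit as real picked spots.
    flat = img.reshape(-1, 3)
    return np.quantile(flat, 0.25, axis=0), np.quantile(flat, 0.75, axis=0)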

Once we have the good median, another interesting experiment would be to lock the auto WB, so that whatever exponents we apply, the multipliers are adjusted so that the median is balanced. Maybe this could be of help for quickly finding the correct exponents… the first version did something similar, but with a bad median.

Absolutely agree! :slight_smile:
We’re just throwing ideas around for experimentation, but the end goal will always be to make the job easier.

Anyway, I need to try these things out.
Thank you all for the input!
I’ll try to put together a new “experimental” version ASAP … stay tuned :wink:


Hi :slight_smile:

I have a few images that do not really relate to any other on the same roll. Here’s what I perceive as a challenge:

  • these images err on the side of overexposure (+2EV, give or take, so they’re substantially denser),
  • they lack obvious neutral patches,
  • they’re not shot in a particularly easy light.

what I expect: overexposure leads to:

  • “clean” shadows, i.e. better colour in the darks,
  • worse contrast, overall,
  • some colour shifts in the highlights.

→ This leads me to having much trouble finding a proper WB. I get some tones right, but that’s it, and I seem to be stuck in a loop, adjusting the filmneg WB and the filmneg red/blue ratios.

How would you, folks, describe a way to get me in the ballpark, methodically? (other way of putting it: do we have a fully manual inversion walkthrough?)
my intuition would be to match RGB histograms in some obvious portions:

  1. start with a frame that has a border, so as to anchor the blacks.
    a. Should I adjust the red and blue ratios to have the red and blue histograms aligned with the green histogram’s blacks portion?
    b. probably adjust reference exponent to have “some” room on either side of the histogram
  2. <insert missing step here?>
  3. adjust filmneg WB sliders
    a. to obvious whites (not always easy :frowning: ) ?
    b. deviate from previous step to taste.

I’d appreciate your input / comments.

Thanks very much.

PS: I could be daydreaming, but is this not paving the way (a grand way of saying so) to an incremental update to the current median WB mechanism? i.e. a mechanism that would also attempt to find “suitable” coefficients automatically

I’m not sure about the histogram matching… I think you could have a good output even though the channel histograms do not match at all :thinking:

Here’s what I would do:

  • pick a spot, even if not gray, of a color that you can easily recognize/remember, and that was very bright in the original scene.
  • adjust the color balance sliders to reproduce the color of that spot as exactly as you can. Use the Output Level slider to make it the intended brightness.
  • Now, using only the red and blue ratio sliders (don’t touch the color balance anymore), adjust the dark and shadow parts of the picture so that any color cast goes away.
  • if you still have some color casts in highlights or shadows, but they’re not interesting parts, you can simply clip them away by narrowing the Tone Curve :wink:

Note: remember that, in this scenario, the ratio sliders will be inverted: raising the red ratio means less red in the shadows.

Let me know if this eyeball-o-metric method can be any good :slight_smile:

Here is what I do with some of my negatives where there are no obvious neutral spots, but I’m afraid, I’m not going to reveal anything new. Since you seem to be asking primarily about “methodically”, I will take it as “not necessarily with RT”. Generally, RT/filmneg is what makes the best conversion for me, but sometimes I use Darktable with Negadoctor, specifically for the way it handles the highlights and shadows.

The approach, like you described, is indeed about matching the channel histograms, but at both ends. Channel histograms often have a well-defined peak, and matching on it may give a good starting point. Sometimes, though, the peak may not be that well defined, or there may be more than one. In such cases a good conversion might be when all channel histogram peaks are matched simultaneously.

For this the ability to shift and stretch an individual channel is necessary, and that’s where Darktable becomes relevant. Besides selecting the rebate color as the basis for blackpoint, it allows separate adjustments of the color cast in shadows and in highlights, as well as stretching and contracting the histogram in a pretty intuitive way.

Getting back to the “methodically”, the approach boils down to fitting all channels inside the histogram area; shifting them to get a neutral black point; and then adjusting the highlights so that their shape also syncs on the histogram. This requires a fair bit of compensation, as when we try to e.g. move a balanced histogram as a whole to the left, the channels often get out of sync and need to be adjusted again. Nevertheless, this doesn’t get completely out of hand, and with some negatives results in reasonably well controlled conversions.

This is quite similar to what can be done in Photoshop using Levels. The difference is that Levels only allows adjusting the channel gamma, and that’s not precise enough. Darktable targets highlights and shadows more precisely, so that when one area changes, the other is not affected that much. In RT, when changing the red/blue ratios, the appropriate channel’s peak changes its height (which makes total sense, as the ratio of pixels belonging to the affected channels changes). Changing the white balance offsets shifts the channels in relation to each other without affecting their shape much (which also makes sense as a whitepoint adjustment). What is missing is the ability to affect e.g. only the highlights of an individual channel.
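For comparison, the Levels-style operation being described is essentially this, per channel (a generic sketch, not Darktable’s or Photoshop’s actual math):

def levels(x, black, white, gamma):
    # Normalize the channel between its black and white points, then bend
    # the midtones. The gamma bends the whole channel at once, which is
    # why it cannot target highlights and shadows separately.
    t = min(max((x - black) / (white - black), 0.0), 1.0)
    return t ** (1.0 / gamma)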

When converting such negatives using RT/filmneg I often just select spots reasonably close to neutral, and which are not on the toe or the shoulder, and from there I adjust freehand, meaning there is no hard color reference, and it is of course very difficult to nail a good conversion this way.


With everything said above, this is also true. This makes me think the discussion now is not about a perfect automatic conversion anymore, but about finding some tools which would make the manual conversion of individual difficult frames easier and more reliable.

Here are some unorganized thoughts of mine.

  1. Quite often when I make an analog shot, I also make a digital (still or video) capture, just as a reference to the scene and as an exposure record. In some cases I do a duplicate digital capture too, for comparison or other reasons. With this, at the negative conversion step I have a reference. Typically it does not match the contrast very well, but it matches the hues closely enough. Hue for me is the most difficult part to get right. With it in place, saturation and lightness are kind of easier. So this could be one way out for new captures, albeit not applicable to the old ones.
  2. Back to the “methodical” approach, an ability to separately affect the color in shadows and highlights seems to be important when converting negatives. Contrary to my previous opinion, there might be no way around having more sets of color controls just for this purpose. Yet, to make it more intuitive, maybe they should be not RGB but HSV, as this is where the most inconvenience comes from (for me): adjusting RGB to get a precise color mix. HSV may make it easier. To work well this requires an informative and responsive histogram (not just the preview).

  3. Speaking of matching a given color from memory, I still believe it is not realistic due to how color works, but that’s only for an individual spot color. If we try and match a whole hue group, that might work very well. Here is a project I came across which (among other things) extracts dominant hues from an image: image analyser. With those extracted hues, which are the dominant colors of the prominent objects, as of yet uncorrected, the task of coming up with a correction seems to be easier. They are, as a reference, already on screen; they are dominant, so not random; they are much easier to remember as the color of a whole object, as opposed to the color of an individually picked spot. It should not be difficult to realize which direction the correction should take.

  4. There are also the examples of spectrograms. These could be helpful in matching the color of the same objects between shots, even if the shots were taken from different perspectives, with different lenses and film.

  5. This now seems a very good idea (it took me some time :blush:). I’ve been getting a lot of harsh highlights in my conversions, where the tonality is close to non-existent and the color noise is overwhelming. It didn’t happen this way with Darktable on the same negatives, probably due to the mentioned “nicer formula”.

Parts of this seem to be getting quite off from the direction filmneg was taking so far, but maybe some of this can still be applied.


Thanks for the input! :slight_smile:
I’m currently trying to take a more complete measurement of the film response curve, using the 5 target shots from the post above, taken at increasing exposure levels from -1EV to +3EV.
I wrote a small script that extracts the average channel value from each of the 6 gray patches in each of the pictures.
Then I did the same sampling on a digital picture of the same target, shot immediately before the negative was shot. From the 6 patch values of that single digital picture, I derived the corresponding values for the whole range of exposures -1 … +3 by simply halving or doubling the initial values.
Finally, I made a log-log XY plot with the digital values on the X axis and the corresponding negative values on the Y axis; here’s what I got:

which is surprisingly close to what theory would suggest :slight_smile:
(note that the negative pictures were white-balanced on the film border, hence the channels coincide at their highest value).
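The X-axis construction amounts to something like this (a sketch; the patch values are invented, only the halving/doubling scheme is from the post):

import numpy as np

# Per-channel averages of the 6 gray patches from the single digital
# reference shot (invented numbers), extended across -1 ... +3 EV by
# doubling per stop.
digital_patches = np.array([0.02, 0.05, 0.11, 0.22, 0.45, 0.90])
x = np.concatenate([digital_patches * 2.0 ** ev for ev in range(-1, 4)])
# y: the matching patch averages sampled from the five negative shots;
# the chart is then a log-log scatter of (x, y).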

Unfortunately, this range of exposures is too small to show the real limits of the film, but it seems that there is already some slight “bending” at the extremes. To be able to “zoom out” on this graph, and see what happens at more extreme values, I need to shoot a gradient with a broader dynamic range.
For this purpose, I ordered a small 8x8 LED matrix that I plan to drive via an Arduino, starting from the array fully lit and turning off one pixel at a time, at increasing delays, all while taking a 1-second or longer exposure shot directly at the array. This should give a 64-step gradient spanning a huge range of light values.
I expect to reach the limit of the lens (due to lens glare) much sooner than the limit of the film; in that case, I’ll split the work in two and take separate shots for the upper and lower limits.

I hope to observe that all the channels behave the same way at the extremes; this would mean that we won’t need three separate adjustments for toe and shoulder. Also, we’d be very lucky if the curve behaves symmetrically at both ends… we’ll see :wink:

Maybe better handling of over- and underexposure could also lead to better and easier-to-achieve color accuracy, because sometimes, while adjusting our negative conversions, we might be misled by some color cast in a highlight area, and in fixing that we throw the rest of the picture off the rails… :thinking:


Which… theory? :blush:

There is something bothering me about measuring the WB on the film rebate. Is it correct to treat it as a uniform color cast which just needs to be subtracted? If that were the case, the rebate could just as well be clear, like on a B&W film, couldn’t it?

Instead, the rebate reveals layers of the film which dynamically participate in the color formation. The color of the rebate is the combined color of the magenta and yellow color masks where they have not participated in the dye formation. The rebate color should therefore transform on the positive to neutral black. Notably, not to the middle gray where the WB is typically measured, not to white, not even to the straight portion of the curve. As the masks do participate in the dye formation, the more exposure a given point receives, the less of the mask remains there, meaning the less relevant the WB correction measured on the black becomes for that point, resulting in a progressively bigger color cast when moving from the shadows to the highlights.

Isn’t this what the diverging RGB curves demonstrate on your chart? Wouldn’t it make more sense, then, to arrive at a neutral middle gray (which is what is already done through the sampling of two neutral spots) and then correct the toe and shoulder channel divergence locally? Which is similar to, if not exactly the same as, what Darktable does. The benefit should be that the film’s exact dynamic range boundaries, and where the image is placed within them, become less important. As long as the black point, middle gray and white point are neutral, the image should be balanced.

With the middle gray anchored from the two spots selected by the user, and looking at the channels divergence at the toe and the shoulder, shouldn’t it be feasible to come up with the local compensation ratios automatically?

Just an aside: color positive films may also use a colored film base to improve color fidelity in conjunction with the projection light color, like the bluish cast on Kodachrome made to compensate for the tungsten projector lamp.

In such a test the black level is likely to get skewed a lot because of veiling glare, like you mentioned, and as described here: Vinland’s Useful Dynamic Range Test (link credits to Elle). The reason reported there, though, is not internal lens glare but light bouncing between the film and the lens inside the camera (although in the referenced post it was the sensor, which is likely more reflective). Anyway, the solution to reduce the glare is to use a pinhole.

This would be solid calibration data for a single film, but would it be generally applicable? Also, as this demonstrates, on film a whole density step (a quarter of the negative’s dynamic range) is dominated by noise, making the TRCs in that area rather arbitrary. Adding to the single-film argument, it also shows film brands vary a lot in their response.

Actually, there is variability even within the brand between the formats. For example, Kodak Ektar 100 35mm has a more dark-brownish film base and responds differently to non-normal exposure and development compared to Kodak Ektar 100 4x5, which has much brighter orange-pinkish film base.

Maybe it is the density that should be measured instead, like you were pointing out before. After all, the exposure, the development, any variability there ends up resulting just in the density range of the three channels. The user could be given a control to modify the detected density range, which would in essence define the dynamic range. Combined with the three-point channel normalization, that could work both in extracting the available tonal range and in achieving the color balance.
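That “three-point normalization” could be as simple as this, per channel (a sketch; where black/gray/white come from, e.g. the rebate, the picked spots and the detected density range, is an assumption):

import math

def three_point(x, black, gray, white):
    # Stretch so black -> 0 and white -> 1, then pick the exponent that
    # maps the normalized middle gray onto 0.5.
    t = min(max((x - black) / (white - black), 0.0), 1.0)
    gamma = math.log(0.5) / math.log((gray - black) / (white - black))
    return t ** gamma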

Here are some examples of under and overexposed scenes. It is notable how consistent the Fuji Frontier SP3000 scanner is in interpreting the colors regardless of the exposure, provided that its scanning brightness remains constant.


With some further research (e.g. this, p. 9) it seems the relationship may not be dynamic after all. The mask transforms to dye where the exposure happens, so the dye replaces the mask. This might not be the whole answer though.

There are two masking layers, yellow for extra blue absorption, and pinkish for some extra blue and green absorption. Let’s focus on the green. If the exposure spectrum at any given point consists of mostly (or only) green, the pinkish mask will transform, but the yellow should remain. The color at this spot then has to end up skewed to blue (on the positive). If both masks received their target spectra, both will disappear and the color will be balanced.

When it so happens that both masks are converted to dye, it becomes appropriate to remove the mask by simple addition of its complementary color. But what about the scenario where one mask was converted but the other remains? The outcome would be not only dynamic but non-deterministic, for when a point has a bluish cast, how can we tell whether it is a true color or the effect of a yellow mask that has not been transformed?

Yet, analog prints from color negatives look balanced most of the time. Meaning I have to be missing something.
