Here is what I do with some of my negatives where there are no obvious neutral spots, but I’m afraid I’m not going to reveal anything new. Since you seem to be asking primarily about “methodically”, I will take it as “not necessarily with RT”. Generally, RT/filmneg gives me the best conversions, but sometimes I use Darktable with Negadoctor, specifically for the way it handles the highlights and shadows.
The approach, like you described, is indeed about matching the channel histograms, but at both ends. Channel histograms often have a well-defined peak, and aligning those peaks can give a good starting point. Sometimes, though, the peak may not be that well defined, or there may be more than one. In such cases a good conversion might be the one where all channel histogram peaks are matched simultaneously.
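To make the peak-matching idea concrete, here is a minimal numpy sketch; it is not anything from filmneg or Darktable, just the arithmetic: find the tallest histogram bin per channel and shift red and blue so their peaks land on the green one. It assumes a single well-defined peak and a float RGB image in [0, 1], and the function names are made up.

```python
import numpy as np

def channel_peaks(img, bins=256):
    """Find the most populated histogram bin of each channel.

    img: float RGB array in [0, 1], shape (H, W, 3).
    Returns the bin-centre value of the main peak per channel.
    """
    peaks = []
    for c in range(3):
        hist, edges = np.histogram(img[..., c], bins=bins, range=(0.0, 1.0))
        i = np.argmax(hist)                      # index of the tallest bin
        peaks.append(0.5 * (edges[i] + edges[i + 1]))
    return np.array(peaks)

def align_peaks_to_green(img):
    """Shift red and blue so their main peaks coincide with green's.

    Only a starting point: it matches one feature of the histograms
    and says nothing about the toe or the shoulder.
    """
    p = channel_peaks(img)
    offsets = p[1] - p          # move every peak onto the green peak
    return np.clip(img + offsets, 0.0, 1.0)
```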
For this, the ability to shift and stretch an individual channel is necessary, and that’s where Darktable becomes relevant. Besides selecting the rebate color as the basis for the black point, it allows separate adjustment of the color cast in the shadows and in the highlights, as well as stretching and contracting the histogram in a pretty intuitive way.
Getting back to “methodically”: the approach boils down to fitting all channels inside the histogram area, shifting them to get a neutral black point, and then adjusting the highlights so that their shapes also line up on the histogram. This requires a fair bit of compensation: when we try to, e.g., move a balanced histogram as a whole to the left, the channels often drift out of sync and need to be adjusted again. Nevertheless, this doesn’t get completely out of hand, and with some negatives it results in reasonably well-controlled conversions.
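As a rough illustration of the “shift then stretch” step, here is a sketch of a per-channel levels remap. The percentiles are only a stand-in for the black and white points one would actually pick by eye (rebate for black, a bright neutral area for white), and the parameter names are invented.

```python
import numpy as np

def levels_per_channel(img, black_pct=0.5, white_pct=99.5):
    """Per-channel linear remap: shift each channel so its chosen black
    point lands at 0, then stretch so its white point lands at 1.

    Percentiles stand in for points chosen by eye on the histogram.
    """
    out = np.empty_like(img)
    for c in range(3):
        lo = np.percentile(img[..., c], black_pct)   # per-channel black point
        hi = np.percentile(img[..., c], white_pct)   # per-channel white point
        out[..., c] = (img[..., c] - lo) / max(hi - lo, 1e-6)
    return np.clip(out, 0.0, 1.0)
```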
This is quite similar to what can be done in Photoshop using Levels. The difference is that Levels only lets you adjust the channel gamma, and that’s not precise enough. Darktable targets highlights and shadows more precisely, so that when one area changes, the other is not affected that much. In RT, changing the red/blue ratios changes the height of the corresponding channel’s peak (which makes total sense, as the ratio of pixels belonging to the affected channels changes), while changing the white balance offsets shifts the channels in relation to each other without affecting their shape much (which also makes sense for a white point adjustment). What is missing is the ability to affect, e.g., only the highlights of an individual channel.
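Something like that missing control could look like the sketch below: a gain on a single channel weighted by a smooth luminance mask, so the shadow end of that channel’s histogram barely moves. This is only an illustration of the idea, not how RT or Darktable implement their controls; the pivot and width values are arbitrary.

```python
import numpy as np

def adjust_channel_highlights(img, channel, gain, pivot=0.6, width=0.2):
    """Multiply one channel by `gain`, but only where the image is bright.

    A smoothstep ramp on luminance goes from 0 below (pivot - width)
    to 1 above (pivot + width), so the shadows keep their colour while
    the highlight end of that channel's histogram moves.
    """
    luma = img @ np.array([0.2126, 0.7152, 0.0722])    # Rec.709 luminance
    t = np.clip((luma - (pivot - width)) / (2.0 * width), 0.0, 1.0)
    mask = t * t * (3.0 - 2.0 * t)                     # smoothstep
    out = img.copy()
    out[..., channel] *= 1.0 + mask * (gain - 1.0)     # blend gain by mask
    return np.clip(out, 0.0, 1.0)
```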
When converting such negatives with RT/filmneg, I often just select spots that are reasonably close to neutral and not on the toe or the shoulder, and from there I adjust freehand, meaning there is no hard color reference; it is of course very difficult to nail a good conversion this way.
With everything said above, this is also true. It makes me think the discussion is no longer about a perfect automatic conversion, but about finding tools that would make the manual conversion of individual difficult frames easier and more reliable.
Here are some unorganized thoughts of mine.
- Quite often when I make an analog shot, I also make a digital (still or video) capture, just as a reference for the scene and as an exposure record. In some cases I make a duplicate digital capture too, for comparison or other reasons. With this, at the negative conversion step I have a reference. Typically it does not match the contrast very well, but it matches the hues closely enough. Hue is the most difficult part for me to get right; with it in place, saturation and lightness are somewhat easier. So this could be one way out for new captures, albeit not applicable to old ones.
- Back to the “methodical” approach, the ability to separately affect the color in the shadows and the highlights seems to be important when converting negatives. Contrary to my previous opinion, there might be no way around having extra sets of color controls just for this purpose. Yet, to make them more intuitive, maybe they should not be RGB but HSV, as that is where the most inconvenience comes from (for me): adjusting RGB to get a precise color mix. HSV may make that easier (a rough sketch of what I mean is after this list). To work well, this requires an informative and responsive histogram (not just the preview).
- Speaking of matching a given color from memory, I still believe it is not realistic due to how color works, but that is only true for an individual spot color. If we try to match a whole hue group, that might work very well. Here is a project I came across, an image analyser which (among other things) extracts dominant hues from an image. With those extracted hues, which are the dominant colors of the prominent objects, as yet uncorrected, the task of coming up with a correction seems easier (there is a small sketch of this, too, after the list). They are already on screen as a reference; they are dominant, so not random; and they are much easier to remember as the color of a whole object, as opposed to the color of an individually picked spot. It should not be difficult to tell which direction the correction should take.
- There are also the examples with spectrograms. These could be helpful in matching the color of the same objects between shots, even if the shots were taken from different perspectives, with different lenses and film.
- This now seems a very good idea (it took me some time). I’ve been getting a lot of harsh highlights in my conversions, where the tonality is close to non-existent and the color noise is overwhelming. It didn’t happen this way with Darktable on the same negatives, probably thanks to the “nicer formula” mentioned.
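On the HSV-in-shadows/highlights idea above, here is a sketch of what such a control could compute: a hue/saturation nudge faded in by a luminance mask so only the shadows are touched. It leans on matplotlib’s vectorized rgb_to_hsv/hsv_to_rgb purely for brevity; the threshold and the names are made up, and a real tool would want a better mask and a live histogram.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def shift_hsv_in_shadows(img, hue_shift=0.0, sat_scale=1.0, threshold=0.35):
    """Nudge hue/saturation only where the image is dark.

    hue_shift is in turns (0..1 wraps around), sat_scale multiplies
    saturation; threshold is the luminance below which the correction
    is strongest, fading out linearly above it.
    """
    luma = img @ np.array([0.2126, 0.7152, 0.0722])
    mask = np.clip(1.0 - luma / threshold, 0.0, 1.0)   # 1 in deep shadows, 0 above threshold
    hsv = rgb_to_hsv(np.clip(img, 0.0, 1.0))
    hsv[..., 0] = (hsv[..., 0] + hue_shift * mask) % 1.0
    hsv[..., 1] = np.clip(hsv[..., 1] * (1.0 + (sat_scale - 1.0) * mask), 0.0, 1.0)
    return hsv_to_rgb(hsv)
```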
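And on the dominant-hue extraction: a minimal sketch of what such an “image analyser” step might do, clustering a subsample of pixels with k-means and returning the cluster centres ordered by how much of the frame they cover. sklearn is assumed only to keep it short, and the linked project may well do something entirely different.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(img, k=5, sample=20000, seed=0):
    """Return k dominant RGB colours, most frequent first.

    img: float RGB array in [0, 1]. A random subsample keeps the
    clustering fast; the centres are what one would pin next to the
    preview as hue references.
    """
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 3)
    idx = rng.choice(len(pixels), size=min(sample, len(pixels)), replace=False)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pixels[idx])
    counts = np.bincount(km.labels_, minlength=k)
    order = np.argsort(counts)[::-1]                   # biggest cluster first
    return km.cluster_centers_[order]
```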
Parts of this seem to be drifting quite far from the direction filmneg has taken so far, but maybe some of it can still be applied.