How to get a super result using Film Negative?

This sounds very interesting. Can you briefly describe how it works in a bit more detail? Whenever I tried to read G'MIC code, I got lost pretty quickly…

The short answer is it searches for combinations of Red and Blue multipliers that produce the most variance in the Hue channel.

In more detail (a rough code sketch follows the list):

  1. Throw out pixels that are likely to be clipped at the high or low end, and discard some more to make things faster.
  2. Convert the remaining pixels from RGB to HSV and keep the Hue channel.
  3. Create a copy of the hue channel and rotate the hues 180 degrees, so that at least one of the two images does not have a sharp transition in the reds (where hue wraps from 360 degrees back to 0 degrees).
  4. Find the variance of those two images and take the minimum. This is the reference hue variance.
  5. Apply some Red and Blue channel multipliers to the image and find the hue variance as in the steps above. If the variance is greater, these Red and Blue multipliers are better.
  6. Repeat this procedure using a sensible search strategy to find the best Red and Blue channel multipliers.
  7. Apply the multipliers to the original image.
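For illustration, here is a rough brute-force sketch of those steps in Python/NumPy. This is not the actual G'MIC filter: the clipping thresholds, subsampling step, multiplier range and all names are placeholders I made up, and the real filter uses a smarter search strategy than a plain grid.

```python
import numpy as np

def hue_variance(rgb):
    """Steps 2-4: hue variance, taking the minimum over the hue image and a
    copy rotated by 180 degrees, to avoid the wrap-around at 0/360."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc, minc = rgb.max(axis=-1), rgb.min(axis=-1)
    delta = np.where(maxc == minc, 1e-9, maxc - minc)
    hue = 60.0 * np.where(maxc == r, ((g - b) / delta) % 6,
                 np.where(maxc == g, (b - r) / delta + 2,
                                     (r - g) / delta + 4))
    rotated = (hue + 180.0) % 360.0
    return min(hue.var(), rotated.var())

def auto_wb(rgb, mults=np.linspace(0.3, 3.0, 28)):
    # Step 1: drop pixels likely clipped at either end, then subsample for speed.
    keep = (rgb > 0.02).all(axis=-1) & (rgb < 0.98).all(axis=-1)
    sample = rgb[keep][::50]
    # Steps 5-6: try Red/Blue multiplier pairs and keep the pair that
    # maximises the hue variance (a plain grid search here, for simplicity).
    best_r, best_b, best_v = 1.0, 1.0, -1.0
    for r_mult in mults:
        for b_mult in mults:
            v = hue_variance(sample * np.array([r_mult, 1.0, b_mult]))
            if v > best_v:
                best_r, best_b, best_v = r_mult, b_mult, v
    # Step 7: apply the winning multipliers to the original image.
    return rgb * np.array([best_r, 1.0, best_b])
```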

I also have a version that changes the ‘a’ and ‘b’ channels in Lab and then measures the Hue as above.

I’m not sure exactly how the variance is calculated; I just use a built-in feature of G'MIC.

Edit: The idea comes from the fact that when the white balance is really off, as with this film negative, the image has a very narrow range of hues (in this case orange), but we know that a correctly white-balanced image will have lots of hues.


Hi,
I have read some comments about digitising prints/negatives, and here is my 5c.

I use a dedicated 35 mm scanner, a flatbed scanner and my DSLR to digitise old negatives and prints. The scanner is idiot-proof and gives reasonable results. Still, there are always those “special” cases of OLD stuff that require special treatment. Last Thursday I gained access to a 20x30 cm, more than 100 years old, corroded glass negative. The only way to preserve it was to use a DSLR, and that’s where I must say THANK YOU to @rom9. The whole discussion about using a DSLR for digitising being overkill (or sub-optimal) does not apply to the real world of museum conservation or old photo restoration, where medium format digital cameras are the way to go (I wish I could afford that). So: I love RawTherapee because it gives you the choice to use every tool the way you want, while other software says “In our opinion you would not need this for your job, so we are disabling it”.

Thanks to whole team. If I could wish something more it would be:

  1. enabling/rewriting film negative for TIFF files
  2. making an additional tool to counter silver mirroring

Hi @Iain,
An impressive result, I think! :blush:
I have no knowledge of or experience with the software you mention: G'MIC, RawDigger, GIMP and your auto white balance tool.
But since your result is so convincing, it seems to me that it would be a good solution to build your process into the film negative tool as an option, if that is possible. That way you would not have to rely on the white balance tool to do part of the job.

I’d like to see the method included in some raw processing software, but unfortunately I don’t have the skills for that.

This is because the film negative tool alters the raw values in the very early stages of processing. Strictly speaking, it applies a different exponent to each channel value ( details here ). The white balance tool operates later in the chain, immediately after demosaic, and it does not change the raw values, because it works on a different memory buffer (the demosaiced, “working” image buffer). So, the white balance setting cannot affect the input values that get exponentiated in the previous step.
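To make that ordering concrete, here is a toy sketch (my own simplification, not RawTherapee code; it uses a plain RGB array instead of a real Bayer mosaic, and the exponents and multipliers are arbitrary placeholders): the film negative exponents are applied to the raw buffer first, and the white balance gains are applied later to a separate working buffer, so changing the WB can never feed back into the exponentiation step.

```python
import numpy as np

def film_negative(raw, exponents=(-2.0, -1.7, -1.9)):
    # Early stage: each channel of the raw buffer is raised to its own
    # (negative) exponent, turning the negative into a positive.
    return raw ** np.array(exponents)

def white_balance(working, multipliers=(1.8, 1.0, 1.4)):
    # Later stage, after demosaic, on a different buffer: per-channel gains.
    return working * np.array(multipliers)

raw = np.random.default_rng(0).uniform(0.05, 0.8, size=(4, 4, 3))
positive = film_negative(raw)      # alters the "raw" values
result = white_balance(positive)   # cannot influence the previous step
```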

It is unique in that, normally, raw values are never altered (except for very small changes like the dead pixel filter or dark frame subtraction). You can think of the film negative tool as “preprocessing” that is applied to your picture before feeding it into RT; if you change the exponents, you basically have a different picture, and you must start all over again :slight_smile:

You’re welcome! Congrats on your wonderful use case :slight_smile:
And I must thank all the other devs for their support, and enormous patience :smile:

That’s next on my to-do list, at least for linear TIFFs. I’ll see if I can support other types of non-raw files as well.

Sorry, I’m not an expert: isn’t this a problem with prints? Does it also affect negatives?
In prints, it is solved by using polarizing filters, I think.
( Edit: I meant this )

I’d like to test that. I’ve downloaded gmic-qt 2.7.4 but I can’t find an auto-WB filter under the “Testing → Iain Fergusson” folder; can you point me to the script? Sorry for the dumb question, I’m not familiar with gmic-qt; maybe I have some misconfiguration preventing me from correctly updating the filter list.


You’re right, it is not showing up. I don’t know why. I didn’t check because I have my own copy on my machine.

@David_Tschumperle might know why Auto WB by hue variance is not showing up in Testing → Iain Fergusson

Same here. There was an update bug not long ago…

Done:
http://rawpedia.rawtherapee.com/Film_Negative

@rom9 I made the change as you asked, but since it doesn’t matter, then why did you recommend swapping the order of points 3 and 4? Wouldn’t it be easier to pick the gray spots after white-balancing the image?

@jdc might be interested in experimenting with @Iain 's “Auto WB by hue variance” (or does branch autowb already have something similar?)

When you pick the gray spots (unless you pick the same ones you’ve already picked before), the exponents will change, the values in rawData will change, and so the image that is fed into the white balance tool will have a different channel balance.
Most likely, you will first do an initial WB to get an almost colour-accurate overview of the image, then pick the gray spots, and then do a final WB on the newly calculated raw (making those gray spots really gray). But the first WB is not actually needed by the tool itself.
Now that you make me think of it, maybe I should automate the last step, so that when picking the gray spots, the film negative tool also implicitly triggers a spot WB on one of those spots…


It affects everything that has silver in gelatin and was not archived properly (too hot and humid, which makes the gelatin softer; pollution; electric charge that draws the Ag ions to the surface, where they oxidise). Old gelatin negatives on glass are the cases I have (some) experience with. Sometimes the owner does not allow the mirroring to be chemically cleaned; sometimes a previous cleaning made the gelatin too wet and it started to peel off from the glass (it expands while the glass does not).

In that case, shining light through it and taking a photo is the best way to go. Polarising the light does not work (the light is not reflected!). The silver does not “mirror” but appears as a brown sediment (blue after inversion). I can provide examples if needed.

[my case is sooo specialized that I don’t consider it “a must”, just a wish that would make my work even easier]

In the case of prints, you are right.

First, excuse my bad English; white balance is a complex thing, and writing in English is difficult for me.

This method of optimizing the variance by adjusting the R and B channels is one of the many methods I have experimented with.

It can be found in the academic literature.

I was very interested in it, but after reading the papers I could find, it appeared that it does not solve the problem of images with a bad “green”. That does not mean we cannot use it, especially in this specific “Film Negative” case.

The second problem is that of relevance: just because the variance (or standard deviation) is at its minimum, is the result always good?

On this basis I did further research in academic papers and found a study without any algorithm or code.
The idea seems close to the one described above, but goes further.
Instead of RGB channels it uses xyY, which is more relevant in colorimetric terms.
And instead of the “variance”, it uses a comparison between samples taken from the image on the one hand and defined spectral colors on the other; this comparison is carried out with a Student’s t-test.

In the case of “autowb” (whose algorithm has not changed for a year, while I am still waiting to merge it with dev), I compare a sufficient number of samples from more than 150 areas of the image against 200 reference colors.
This comparison is performed by varying “Temp”: each variation changes the xyY values of the image and the xyY values of the spectral data. The algorithm is complex and needs a lot (200) of spectral data points in the visible domain.
The best result is the one with the minimum Student statistic.
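As a very loose illustration of that comparison (all names and numbers below are invented for the example; this is not the actual autowb code), one could score each candidate temperature by comparing the xy chromaticities of the image samples against those of the spectral reference colors with a two-sample Student’s t-test, and keep the temperature that minimises the statistic:

```python
import numpy as np
from scipy import stats

def score(image_xy, reference_xy):
    # image_xy, reference_xy: (N, 2) arrays of x, y chromaticities,
    # both recomputed for one candidate temperature.
    t_x, _ = stats.ttest_ind(image_xy[:, 0], reference_xy[:, 0])
    t_y, _ = stats.ttest_ind(image_xy[:, 1], reference_xy[:, 1])
    return abs(t_x) + abs(t_y)

def best_temperature(temps, image_xy_at, reference_xy_at):
    # image_xy_at / reference_xy_at: callables mapping a temperature to the
    # (N, 2) xy arrays for the image samples and the spectral references.
    scores = [score(image_xy_at(t), reference_xy_at(t)) for t in temps]
    return temps[int(np.argmin(scores))]
```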

But, and there is a big “but”, these algorithms assume that the green is already good, or that a manual adjustment is required… which is rather absurd for an automatic WB.

I searched for a long time and finally created two loops that interact with each other: one for the green (Y), and one for Red and Blue (x, y).

In more than 95% of cases the result is very good, but in some cases where the illuminant does not have a good CRI (color rendering index), such as some LED or halogen lights, the algorithm fails.

It must be remembered that the WB problem is mathematically indeterminate, which explains: a) the number of works on this subject; b) the imperfection of the results.

Jacques


That is a very good idea.
That way, the user doesn’t risk forgetting to apply the white balance tool again. The need to reapply seems strange unless the user has read your thorough explanation on the subject…

Before posting my question to you, I tried to pick the white/black spots and then either reset the white balance tool or skip resetting it. There is clearly a difference, which I didn’t understand at the time. However, it can be difficult to decide which version is best, because the image needs a lot of further editing.

Thank you for the explanation and all your effort. I think that you and the other RT developers do a fantastic job.


@rom9 docs updated.

I’m not very fond of calling them “white and black spots” since that’s not what they are. How about renaming the button to “Pick neutral spots” and leaving the explanation to the docs?


Agreed, much better. I’ll change it this evening.
And what about the button tooltip? It currently says:
Calculate exponents by picking two neutral reference spots in the image; one white (light gray) and one black (dark gray). The order does not matter. The exponents will be updated after the second spot is picked.

Should I remove the “white” and “black” terms from there, too?

@rom9 I’ll do it if you don’t mind, as it also requires changing all translations.

I didn’t know about this problem, thanks for the explanation. I’m afraid that colour shift is quite “spotty” and not constant across the frame, so it would be very hard to solve by software means.
Just a wild idea: since we’re talking about B&W negatives (I suppose), what about shining an orange/yellow light through it instead of white? If the problem is more evident in the blue channel, removing it from the source backlight could mitigate that.

Hi jdc, thanks for your explanation. I know nothing about color spaces, so I’m talking as a complete layman here, but that is the exact use case that makes me dubious about auto-WB methods. I have some concert pictures that were shot under awful red or purple LED light. The pictures have a strong tint all across the frame. How can an auto-WB method restore the original tint? I would rather take the color balance from another “reference” frame from the same film and apply that to the other frames (I’m working on it at the moment and I’ll post some examples later).
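For example, something along these lines (a rough NumPy sketch with made-up names, assuming linear RGB frames and a patch in the reference frame that should be neutral):

```python
import numpy as np

def gains_from_neutral_patch(reference_frame, y0, y1, x0, x1):
    # Average the patch that should be neutral and normalise so green stays 1.
    mean = reference_frame[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    return mean[1] / mean

def apply_reference_balance(frame, gains):
    # Reuse the reference gains on every other frame from the same film,
    # keeping the red/purple stage light instead of "correcting" it away.
    return frame * gains
```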
Anyway, I’ll try the hue variance method as soon as I can get my hands on the G'MIC filter :wink:

Sure, thanks! :slight_smile: