Scanning film with a DSLR will never be optimal, because you need:

to revert the flaws of the DSLR:
- demosaic the file (interpolation)
- neutralize the white balance of the camera
- profile the camera colour space (but let me tell you, the 3×3 matrix used as an input profile in most software is a very poor way to do so; it should be profiled in spectral space)
- fix the lens issues (CA, distortion, etc.)

then, to revert the flaws of the film:
- invert it
- compensate for the colour of the film substrate
- compensate for the white balance and colour issues
- fix the lens issues
I just realized that, in darktable, the inversion cannot be performed after the camera input profile, which expects Bayer-mosaiced RGB, so you need to do everything at once (cc @houz @hanatos).
I would advise just getting a proper film scanner. For 35 mm, they are not super expensive, and with software like SilverFast, you even get colour profiles for every film × scanner combination.
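As a rough illustration of the "invert it" and "compensate for the substrate" steps listed above, here is a minimal Python/NumPy sketch. It assumes a demosaiced, linear-RGB scan and a sampled film-base colour; the function name, the division-by-base mask removal, and the reciprocal inversion are my own illustration, not any tool's actual pipeline.

```python
import numpy as np

def invert_negative(linear_rgb, substrate_rgb):
    """Toy inversion of a linear-RGB scan of a colour negative.

    linear_rgb:    float array (H, W, 3), demosaiced, linear values in (0, 1]
    substrate_rgb: (3,) sample of the unexposed film base (the orange mask)

    Hedged sketch only: divide out the orange mask, then invert by taking
    the reciprocal (dense negative -> dark positive), then renormalise.
    """
    eps = 1e-6
    # Divide by the base colour to neutralise the orange mask
    masked = np.clip(linear_rgb / substrate_rgb, eps, None)
    # Reciprocal inversion: bright on the negative becomes dark on the positive
    positive = 1.0 / masked
    # Normalise back into [0, 1] for display
    return positive / positive.max()
```

Real tools use more elaborate per-channel models than this, but the sketch shows why the substrate compensation and the inversion are separate steps.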
I’ve been working on an auto white balance tool in GMIC and decided to put it to the test with this image. Turns out it works really well. Here is the result, with the input being a linear TIFF exported out of RawDigger, imported into GIMP, and processed in the GMIC plug-in. No other processing.
The filter works by trying to maximize the variation in hues, so you don’t need any neutral grey areas at all. In fact, neutral grey would not work for this method, and often you don’t need to adjust anything.
I’ve put the filter in Testing → Iain Fergusson in the GMIC plugin if you want to test it out.
Thank you for the clarification. This answers my question and I think that it should be included in the Rawpedia text.
Maybe you could, for my benefit at least, explain how/why the film negative tool is not affected by a previous white balance setting but requires a white balance setting afterwards? RT has a fixed pipeline, and if you choose exactly the same spot for setting the white balance every time, then the result should be the same, since, at least in my understanding, the image has to be reprocessed every time you make a change.
Otherwise, this dependency between the two modules seems unique.
I’m sure you are right, and I would of course invest in more suitable equipment if I were launching a project to scan a lot of negatives. I tested the film negative tool primarily out of curiosity, and I think it is amazing what good results you can get using the equipment at hand and RT!
I wonder…
Taking into account the separating power of a good lens, the resolution and dynamic range of the film, the exposure parameters, the quality of the chemistry, adherence to the development parameters, and finally the lighting balance of the subject, it seems to me (and maybe one should try to quantify it) that the equivalent of 4 to 5 MP for a black-and-white negative, and maybe 10 to 15 MP for a slide, would be the maximum real resolution. Therefore, a digital camera of 20 to 24 MP or more would be more than enough to get the most out of a film, especially since the final use of the file would probably not call for perfect resolution anyway.
In my humble opinion, we must ask ourselves whether we want to recover a faithful and balanced image, or to recover even the shape of the silver grains and the defects of the substrate of the silver layers.
As for the Bayer matrix, I am not sure that the difference between a Bayer and a Foveon sensor, for example, would survive the thoroughly imperfect processing chain that surrounds photon capture.
But I can be wrong…
this sounds very interesting. can you briefly describe how it works in a bit more detail? whenever I tried to read gmic code, I got lost pretty quickly…
The short answer is that it searches for the combination of Red and Blue multipliers that produces the most variance in the Hue channel.
In more detail:

1. Throw out pixels that are likely to be clipped at the high or low end, and throw out some more to make things faster.
2. Convert the remaining pixels from RGB to HSV and keep the Hue channel.
3. Create a copy of the hue channel and rotate the hues 180 degrees, so that we have one image that does not have a sharp transition between values in the reds (from 360 degrees to 0 degrees).
4. Find the variance of those two images and take the minimum. This is the reference hue variance.
5. Apply some Red and Blue channel multipliers to the image and find the hue variance with the procedure above. If the variance is greater, then these Red and Blue multipliers are better.
6. Repeat this procedure, using a sensible search strategy, to find the best Red and Blue channel multipliers.
7. Apply the multipliers to the original image.
I also have a version that changes the ‘a’ and ‘b’ channels in Lab and then measures the Hue as above.
I’m not sure how the variance is calculated; I just use a built-in feature of GMIC.
Edit: The idea comes from the fact that when the white balance is really off, as with this film negative, images have a very narrow range of hues (in this case, orange). But we know that the correctly white balanced image will have lots of hues.
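The steps above can be sketched in a few lines of Python/NumPy. This is my own brute-force reading of the description, including the 180-degree rotation trick for the hue wrap-around at red; it is not the actual G'MIC filter. The grid search stands in for whatever "sensible searching strategy" the filter really uses, and the clipped-pixel rejection step is omitted for brevity.

```python
import numpy as np

def hue_deg(rgb):
    """Hue in degrees [0, 360) for a flat list of pixels, shape (N, 3)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    mx = rgb.max(axis=1)
    mn = rgb.min(axis=1)
    d = np.where(mx == mn, 1.0, mx - mn)  # avoid division by zero for greys
    h = np.where(mx == r, (g - b) / d % 6,
        np.where(mx == g, (b - r) / d + 2, (r - g) / d + 4))
    h = np.where(mx == mn, 0.0, h)        # greys get an arbitrary hue of 0
    return (h * 60.0) % 360.0

def wraparound_variance(h):
    """Hue variance that tolerates the 0/360 wrap: also measure the hues
    rotated by 180 degrees and keep the minimum of the two variances."""
    return min(np.var(h), np.var((h + 180.0) % 360.0))

def auto_wb(rgb, mults=np.linspace(0.5, 3.0, 26)):
    """Brute-force search for the Red/Blue multipliers that maximise the
    hue variance. Hypothetical sketch, not the actual G'MIC code."""
    best, best_var = (1.0, 1.0), -1.0
    for rm in mults:
        for bm in mults:
            scaled = rgb * np.array([rm, 1.0, bm])
            v = wraparound_variance(hue_deg(scaled))
            if v > best_var:
                best_var, best = v, (rm, bm)
    return best
```

On a synthetic scene with varied colours and a strong orange cast, the multipliers found this way spread the hues far more than the cast image, which is the whole premise of the method.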
Hi,
I have read some comments about digitising prints/negatives, and here is my 5c.
I use a dedicated 35 mm scanner, a flatbed scanner, and my DSLR to digitise old negatives and prints. The scanner is idiot-proof and gives reasonable results. Still, there are always those “special” cases of OLD stuff that require special treatment. Last Thursday I gained access to a 20×30 cm, more than 100 years old, corroded glass negative. The only way to preserve it was to use a DSLR, and that’s where I must say THANK YOU to @rom9. The whole discussion about using a DSLR for digitising being overkill (or sub-optimal) does not apply to the real world of museum conservation or old photo restoration, where medium format digital cameras are the way to go (I wish I could afford that). So: I love RawTherapee because it gives you the choice to use every tool the way YOU want, while other software says “in our opinion you would not need this for your job, so we are disabling it”.
Thanks to the whole team. If I could wish for something more, it would be:
Hi @Iain

An impressive result, I think!
I have no knowledge of or experience with the software you mention: GMIC, RawDigger, GIMP, and your auto white balance tool.
But since your result is so convincing it seems to me that it would be a good solution to build your process into the film negative tool as an option, if that is possible. In this way you would not have to rely on the white balance tool to do part of the job.
This is because the film negative tool alters the raw values at a very early stage of the processing. Strictly speaking, it applies a different exponent to each channel value (details here). The white balance tool operates later in the chain, immediately after demosaic, and it does not change the raw values, because it works on a different memory buffer (the demosaiced, “working” image buffer). So, the white balance setting cannot affect the input values that get exponentiated in the previous step.
It is unique in that normally, raw values are never altered (except for very small changes like the dead pixel filter or dark frame subtraction). You can think of the film negative tool as a “preprocessing” step that is applied to your picture before feeding it into RT; if you change the exponents, you basically have a different picture, and you must start all over again.
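A toy sketch of that per-channel exponentiation idea, in Python/NumPy. The exponent values and the function name are hypothetical, and RawTherapee applies this to the Bayer-mosaiced data rather than to an RGB array:

```python
import numpy as np

def film_negative_preprocess(raw, exponents):
    """Raise each channel to a negative per-channel exponent, as the film
    negative tool is described to do on the raw values. Because this
    happens before demosaic and white balance, a later WB multiplier
    cannot influence these inputs."""
    raw = np.clip(np.asarray(raw, float), 1e-6, None)  # avoid 0 ** negative
    return raw ** -np.asarray(exponents, float)        # inversion by exponentiation
```

Changing the exponents effectively produces a different input picture, which is why everything downstream (including the white balance) must be redone afterwards.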
You’re welcome! Congrats on your wonderful use case!
And I must thank all the other devs for their support and enormous patience.
that’s next on my todo list, at least for linear TIFFs. I’ll see if I can support other types of non-raw files as well.
Sorry i’m not an expert, isn’t this a problem with prints? Does it also affect negatives?
In prints, it is solved by using polarizing filters i think.
( Edit: i meant this )
I’d like to test that; I’ve downloaded gmic-qt 2.7.4, but I can’t find an auto-WB filter under the “Testing → Iain Fergusson” folder. Can you point me to the script? Sorry for the dumb question; I’m not familiar with gmic-qt, and maybe I have some misconfiguration preventing me from correctly updating the filter list.
@rom9 I made the change as you asked, but since it doesn’t matter, then why did you recommend swapping the order of points 3 and 4? Wouldn’t it be easier to pick the gray spots after white-balancing the image?
When you pick the gray spots (unless you pick the same ones you’ve already picked before), the exponents will change, the values in rawData will be changed, and so the image that is fed into the white balance tool will have a different channel balance.
Most likely, you will first do an initial WB to get an almost-colour-accurate overview of the image, then pick the gray spots, and then do a final WB on the newly calculated raw (making those gray spots really gray). But the first WB is not actually needed for the tool itself.
Now that you make me think of it, maybe i should automate the last step, so that when picking the gray spots, the filmnegative tool also implicitly triggers a spot-WB on one of those spots …
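For what it’s worth, here is my reading of the two-gray-spot math in Python/NumPy: we want the ratio between the two spot values, after exponentiation, to be equal in every channel, so that a single WB multiplier can neutralise both spots at once. This is a hedged sketch derived from the explanation above; the function name, the reference green exponent, and the exact formulation are assumptions, not RawTherapee’s actual code.

```python
import numpy as np

def exponents_from_gray_spots(spot1, spot2, green_exp=1.5):
    """Derive per-channel exponents from two raw RGB samples of spots that
    should both come out neutral gray. Requiring

        (spot1_c / spot2_c) ** k_c  ==  (spot1_g / spot2_g) ** k_g

    for every channel c gives  k_c = k_g * log(r_g) / log(r_c),
    where r_c = spot1_c / spot2_c and g is the green channel."""
    spot1 = np.asarray(spot1, float)
    spot2 = np.asarray(spot2, float)
    log_ratio = np.log(spot1 / spot2)
    return green_exp * log_ratio[1] / log_ratio
```

On synthetic data generated from known exponents and two neutral spots, this recovers those exponents exactly, which matches the observation that picking different gray spots changes the exponents and hence the raw values.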
It affects everything that has silver in gelatine and was not archived properly (too hot and humid, which makes the gelatine softer; pollution; electric charge that draws the Ag ions to the surface, where they oxidise). Old gelatine negatives on glass are the actual cases I have (some) experience with. Sometimes the owner does not allow chemically cleaning the mirroring; sometimes a previous cleaning made the gelatine too wet and it started to peel off the negative (it expands while the glass does not).
In that case, shining light through and taking a photo is the best way to go. Polarising the light does not work (it is not reflected!). The silver does not “mirror” but appears as a brown (blue after inversion) sediment. I can provide examples if needed.
[my case is sooo specialized that I don’t consider it “a must”, just a wish that would make my work even easier]