Honestly I like it
I took the .cr2 and developed it with my own toolchain, which only removes the orange cast and inverts the negative. The result of that was a bit over-exposed, so I also applied a curve to pull stuff down:
You’re starting with rather saturated green, IMHO…
The lichen is kind of orange-yellow, maybe somewhere between your result and what I got with Rec2020.
I think your result as a finished image is the best so far. My Rec2020 image probably has a bit too much red in it, but I can’t remember how green or yellow the grass actually was. It is 24 years old.
Well, not that green. And this is just a random example. I’ve got this on all images I’ve tried the Standard color matrix on. Also, if your inversion was correct, the rock would be grey, not blue.
This is what I got with ART after using the Film Negative tool, picking neutral spots and then doing a white balance and some exposure adjustments (no other colour changes made):
The grass is still quite saturated but I’m inclined to say that it’s close to its natural colour.
1997_0001.cr2.arp (10.8 KB)
Need to keep in mind here that the camera doing the capture of the negative is not trying to align scene colors, it’s trying to align the negative colors. The film recorded the scene colors, long ago…
My inversion is four steps:
- Set black and white points for the red channel to the red data min/max
- Set black and white points for the green channel to the green data min/max
- Set black and white points for the blue channel to the blue data min/max
- Apply a tone curve, control points=0,255,255,0
The first three steps align the channels to remove the orange cast; the fourth step inverts the negative. What’s left is what the film emulsion layers recorded, which obviously needs white balancing.
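Assuming linear data, the four steps above can be sketched in a few lines of NumPy. This is a toy illustration of the described procedure, not the actual toolchain; the 2×2 "negative" values are made up:

```python
import numpy as np

def invert_negative(raw_rgb):
    """Per-channel black/white point alignment, then inversion.

    raw_rgb: float array of shape (H, W, 3) with linear camera data.
    """
    out = np.empty_like(raw_rgb, dtype=np.float64)
    for c in range(3):
        ch = raw_rgb[..., c].astype(np.float64)
        lo, hi = ch.min(), ch.max()
        # Steps 1-3: stretch each channel to [0, 1] using its own
        # min/max; this removes the orange cast, which lifts each
        # channel's minimum by a different amount.
        out[..., c] = (ch - lo) / (hi - lo)
    # Step 4: the (0,255)->(255,0) curve is a straight inversion.
    return 1.0 - out

# Toy 2x2 "negative" with an orange-ish cast (red channel lifted most).
neg = np.array([[[0.90, 0.60, 0.30], [0.50, 0.40, 0.20]],
                [[0.70, 0.50, 0.25], [0.60, 0.45, 0.22]]])
pos = invert_negative(neg)
```

After this, each channel spans the full range and the densest spot in the negative becomes the brightest in the positive, which is why white balancing is still required afterwards.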
What I’m trying to point out is that the saturated green is what the film recorded.
I don’t think that’s true. I posted a long reply about what’s going on over in the previous thread (Input color profile to use for negatives - #13 by NateWeatherly), but essentially it’s that the 6500 K (ish) light emitted by an iPad screen is nothing like the D65 daylight used to profile cameras. The iPad is an excellent light source for scanning negatives (better than flash, tungsten, or any “white” LED panel), but when you start using narrow-band RGB light sources, normal camera profiles will result in extreme saturation and clipping, especially when inverted. Camera profiles can only deal with color; they don’t work when you’re using narrow-band illumination to turn your camera into a sort of densitometer.
I read your long post, but I’m not quite getting it. It would seem to me that a light source that provides power across the visible spectrum would be preferable for reproduction, as it would provide power to tickle all of the reflectances of the subject. The color rendering index (CRI) was developed specifically to characterize this, and by that measure tungsten light is one of the best illuminators:
Narrow-band light mixtures rely on human metamerism to make colors. Shining such a light on a subject will only tickle the reflectances that “resonate”, to borrow an audio term, and the metameric amalgam may not be quite what a full bath of the spectrum would produce.
So, if the illuminator used for the negative capture had leanings to certain places in the spectrum, that would have some influence on the hue produced by the camera and its matrix characterization. But, that’s not a problem of the matrix, it’s a problem of the illuminator.
N’est-ce pas?
I think that is the critical issue. I’m using a Dörr LED Flex Panel, set to 4300 K, with a CRI of > 95 Ra. This gives me an excellent base for getting natural-looking colors after inversion and compensation of the orange mask. Unfortunately the technical specifications I found for this light source do not contain spectra. I had to run several series of test shots to find optimal parameters (combinations of light source settings / WB settings of the DSLR / WB settings of darktable).
Now I am a bit confused. From the parallel thread I thought I had learned that illumination at 3 distinct wavelengths would be much better for negatives, as the orange mask would not spill into red and green. Now a high-CRI spectrum (i.e., more white, including orange) gives better results. Confusing. Any thoughts?
After playing around a while inverting the colors by “hand”, I am quite convinced that they should look somehow like this. Looks “filmic” enough to my eye. 1997_0001_01.cr2.xmp (7.9 KB)
I am reporting the results of my experiments. I started with a tablet as light source, with poor results, and ended up with the configurable LED Flex Panel. As said, I do not know the spectrum of this device. I took test shots of the empty focusing screen of the film holder, setting the panel from 3000 to 5000 K in 50 K steps. I adjusted the wb of the DSLR to get neutral grey images, and I adjusted the wb in darktable to a neutral grey. In a next step I introduced negatives with different base colors. As a result of this large number of experiments I found an optimum (a compromise of color cast for different film bases) at 4300 K. I have two options now: I can choose the wb of the DSLR, or I can set the wb manually in dt, whichever gives me the best results.
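This kind of sweep can also be evaluated programmatically: measure the mean RGB of each empty-holder shot and pick the setting closest to neutral grey. A sketch, where the measurements are random stand-ins for real data:

```python
import numpy as np

# Panel settings tested: 3000-5000 K in 50 K steps.
temps = np.arange(3000, 5001, 50)

# Stand-in data: mean RGB of an empty-holder shot at each setting.
# Real values would come from the test shots described above.
rng = np.random.default_rng(0)
mean_rgb = 0.5 + 0.05 * rng.standard_normal((len(temps), 3))

def cast_error(rgb):
    # Distance of a patch from neutral grey (R = G = B).
    gray = rgb.mean()
    return float(np.sqrt(((rgb - gray) ** 2).sum()))

errors = np.array([cast_error(p) for p in mean_rgb])
best_temp = int(temps[errors.argmin()])
```

With real measurements, `best_temp` would land at whatever setting minimizes the residual cast (4300 K in the experiments described above).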
If I’m not mistaken, there is no real difference between these two options if you shoot raw.
Hm, sounds like you already put a lot of work into your investigations. I think I have to think about the topic a bit more.
On the other hand, I could start shooting positive slides to circumvent the issue.
C’est! Everything you’ve said is true—for color reproduction. But scanning film isn’t about reproducing how the negative looks to the human eye/brain. In the world of human-vision-referred color appearance reproduction there is no difference between a portion of the negative that appears orange because it transmits a specific proportion of 540 nm (green) and 640 nm (red) light and a portion that appears orange because it transmits only 590 nm (orange-appearing) light. But for scanning, one of these is the orange mask, and the other becomes blue in the inverted image.
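The distinction between the two "oranges" can be made concrete with a toy calculation. All curves below are invented Gaussian bands, not calibrated responsivities; the point is only that a narrow-band RGB "observer" separates two patches that a broad observer conflates:

```python
import numpy as np

wl = np.arange(400, 701)  # wavelength grid in nm

def band(center, width):
    # Toy Gaussian spectral band (not a calibrated curve).
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Patch A: transmits a green line + a red line (appears orange to the eye).
patch_a = band(540, 8) + band(640, 8)
# Patch B: transmits only a single 590 nm orange line.
patch_b = band(590, 8)

# A narrow-band RGB "scanner observer", e.g. RGB LED illumination
# combined with the camera's channels.
scan = {"r": band(640, 12), "g": band(540, 12), "b": band(460, 12)}

resp_a = {c: float((s * patch_a).sum()) for c, s in scan.items()}
resp_b = {c: float((s * patch_b).sum()) for c, s in scan.items()}
# Patch A lights up the red and green channels strongly; patch B barely
# touches any channel, so the two "oranges" invert to very different colors.
```

Both patches would look orange to a human, but the narrow-band system records them completely differently, which is exactly the mask-vs-image-color distinction above.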
Here’s an excerpt from the Bible of digital color, Digital Color Management Encoding Solutions by Edward J. Giorgianni, that does a better job explaining it than I can:
In previous system examinations, standard colorimetric measurements were used to quantify the trichromatic characteristics of color stimuli produced by the output media. It was logical to make such measurements because the media involved formed positive images intended for direct viewing. But since negatives are not meant to be viewed directly, it would not seem logical to also measure them using methods based on the responses of a human observer.
If standard CIE colorimetry is ruled out, how else can images on negative films be measured? Is there a different type of “standard observer” that would be more appropriate for negative images? Answering these questions requires a closer look at how photographic negative films are designed and how they are used in practice.
In typical applications, photographic negatives are optically printed onto a second negative-working photographic medium (Figure 8.4). The second medium might be photographic paper, in which case the final image is a reflection print. Negatives also can be printed onto special clear-support films, such as those used to make motion picture projection prints from motion picture negatives. In either case, the resulting print is a directly viewable positive image.
What is important to appreciate is that each photographic negative film is designed to be optically printed onto one or more specific print films or papers, using specific printer light sources. That fact provides the key to making meaningful measurements of color-negative images.
Optical printing is an image-capture process that is quite similar to several others discussed earlier. As shown in Figure 8.5, there is an object (the negative image) that is illuminated (by the printer light source) and “viewed” by an “observer” (in this case, a print medium). The print medium for which the negative film is intended, then, should be considered the “standard observer” for that film, and measurements should be made according to the particular red, green, and blue spectral responsivities of the intended print medium. In other words, measurements should be made based on what the intended print medium will “see” and capture when it “looks” at the illuminated negative in the printer.
Photographic print media “see” light very differently from the human eye, and therefore also very differently from digital cameras and their profiles, which are designed to mimic human vision.
Here are the spectral sensitivity curves of the human eye:
Here are the sensitivity curves for a camera (for those who aren’t already familiar):
And here are the responsivity curves of the specific combination of light source and print medium sensitivity that film is designed to be “seen” by:
So film is designed to be seen by an “observer” with much more deep red (even near infrared) and much less yellow/orange and cyan sensitivity than the human eye or cameras. Because we can’t change the sensitivity of the sensor we have to use the light source to shape the overall system responsivity to be as similar to that which film is designed to be seen by.
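Since the effective responsivity of the whole system is the wavelength-wise product of light-source emission and sensor sensitivity, shaping via the light source can be sketched numerically. The curves below are invented Gaussians (a hypothetical camera red channel at 600 nm, a hypothetical deep-red LED at 660 nm), not real measured data:

```python
import numpy as np

wl = np.arange(400, 701)  # wavelength grid in nm

def band(center, width):
    # Toy Gaussian curve; stand-in for a real measured spectrum.
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical broad camera red-channel sensitivity peaking at 600 nm.
sensor_r = band(600, 40)

# Two candidate light sources: flat broadband vs. a narrow deep-red LED.
broad_white = np.ones_like(wl, dtype=float)
deep_red_led = band(660, 15)

# Effective system responsivity = source emission x sensor sensitivity.
sys_broad = broad_white * sensor_r
sys_narrow = deep_red_led * sensor_r

peak_broad = int(wl[sys_broad.argmax()])    # stays at the sensor peak
peak_narrow = int(wl[sys_narrow.argmax()])  # pulled toward the deep red
```

The narrow deep-red source drags the effective red responsivity toward the deep red, which is the direction print film’s “observer” sits in relative to human vision.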
Here are a couple other resources that really helped me start to understand the interplay between light source, negative, sensor, and digital profiling/processing:
I read the post of @NateWeatherly in the parallel thread, and I hope I understood most of it correctly, especially the section about the problematic orange mask. But aside from these theoretical considerations, my personal experimental findings differ on the point of the “optimal light source”. In my experimental investigations I found:
It is extremely difficult, sometimes impossible, to get well-balanced colors over the whole spectrum if one uses a light source with (more or less) sharp peaks and deep valleys in its spectrum.
For this reason I switched to the LED panel with a high CRI. This panel “builds” its spectrum by mixing the emission of two different types of white LEDs (one reddish, one bluish). Unfortunately I do not know what LEDs they are using for the panel, but from Wikipedia I get the spectra shown below:
Spectrum of a white LED showing blue light directly emitted by the GaN-based LED (peak at about 465 nm) and the more broadband Stokes-shifted light emitted by the Ce3+:YAG phosphor, which emits at roughly 500–700 nm. (source: Light-emitting diode - Wikipedia)
Superposing the spectra of two such diode types (one reddish, one bluish) results in a broad spectrum that is much smoother than the spectrum of an iPad (or any other tablet). And my experimental findings show that I get the most pleasing results if I set the color temperature to the middle of the range the panel offers (in other words: using both types of LEDs and mixing them fifty-fifty).
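A toy model of that superposition illustrates why the 50/50 mix helps. The blue pump and phosphor humps below are Gaussian stand-ins with invented amplitudes, not measured LED spectra:

```python
import numpy as np

wl = np.arange(400, 701)  # wavelength grid in nm

def band(center, width, amp=1.0):
    # Toy Gaussian component; not a measured LED spectrum.
    return amp * np.exp(-0.5 * ((wl - center) / width) ** 2)

# Each white LED = blue pump near 465 nm + a broad phosphor hump;
# the "warm" type has a weaker pump and a redder, stronger hump.
cool_white = band(465, 12, 1.0) + band(550, 55, 0.6)
warm_white = band(465, 12, 0.5) + band(600, 60, 1.0)

# Fifty-fifty mix, as when the panel is set to the middle of its range.
mix = 0.5 * cool_white + 0.5 * warm_white

def valley_depth(spectrum):
    # Level of the cyan "valley" (480-500 nm) relative to the peak;
    # higher means a better-filled, smoother spectrum.
    region = spectrum[(wl >= 480) & (wl <= 500)]
    return float(region.min() / spectrum.max())
```

In this toy model the mix fills in the cyan valley better (relative to its peak) than either LED type alone, which matches the experience that the mid-range setting gives the smoothest effective illumination.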
Another point: to avoid changing conditions caused by diffuse ambient light, I take the shots in a darkened room. I spent some time optimizing the lighting conditions, but now I save a lot of time by having well-defined conditions and an easy and very fast workflow.
I did my initial tests with diapositives.
If you choose the camera white balance, you use the camera’s algorithms, resulting in corresponding entries in the EXIF metadata. If you choose the manual white balance in dt, you use dt’s algorithms and are free to select any area of the image. In most cases the results will be different.
The screenshot below shows the results of all the available wb settings of my camera; the second to last is my custom setting, the result of the experiments. All shots were taken with the LED panel at 4300 K.
Hm, for further experiments I wonder what light source would be narrow-band and tunable over the whole visible spectrum, but not a laser … To me the theory sounds valid that 3 distinct narrow illuminations would reduce orange spill-over into the red and green channels, but experiments do not seem to clearly prove it.
And I wonder what light source my reflecta crystalscan 7200 is using; how can I measure its spectrum without a lab? Any thoughts?