Any interest in a "film negative" feature in RT ?

side note: if you guys don’t know about patent US6100924A, I thought it might help :slight_smile: (just to reassure us that we’re indeed on track)
Starting from page 25, column 5, line 48
image

2 Likes

Hi all! :slight_smile:

Well, if you mean the cheapo Osram light bulb that i used in this test earlier this year, i have no idea… i couldn’t find any datasheet about it.

Regarding the new led matrix that i’m using right now to generate the gradient, it uses WS2812 leds.
From the datasheet, they don’t seem very narrowband:

Red : 620-630nm
Green : 515-530nm
Blue : 465-475nm

… although i admit i don’t know how narrow a light source should be to be considered “narrowband” :slight_smile:

I’ve had all sorts of trouble with generating the gradient but i’m almost there. Once i’m done with that, i will absolutely test this led matrix as a light source for composite-RGB neg scanning, and report the results :wink:

Wow! Thanks, this is so cool :smiley:
This other part, on the same page, col. 6, line 16, is quite surprising:

Each intensity value ranges from “0” to “255”. To convert a negative intensity value to a positive intensity value, the value is simply shifted from one end of the range to the other. For example, if a red negative intensity value were “0”, this would be shifted to the other end of the range which would be “255”. Each intensity value is shifted, then, according to the equation, P=255-N, where P represents a positive intensity value and N represents a negative intensity value.

What?? We’re back to linear inversion once again? Where’s the reciprocal? :scream:

This is quite decently narrowband :slight_smile: Yet it’s not the match I hoped for with this example, which is being discussed in the parallel threads investigating the particulars of the backlight spectrum:


The blue and green are easy, but the red band is quite far to the right. Curious about the difference it might make with the filmneg conversion approach.

Fret not :smiley:, there’s gamma correction ahead. Here’s an excerpt.

The first step in the color correction analysis is a preliminary expansion. Each image is made up of a series of pixels with each pixel having three intensity values (red, green, and blue) associated with it. Each intensity value can range from “0” to “255”, but will typically be found somewhere near the middle of this range. The intensity values for all pixels in all images are examined to find the lowest intensity value for each color, and the highest intensity value for each color. This color correction data is then stored. When it is time to make the color correction, the range of intensity values for each color is expanded by mapping the lowest intensity value to “0”, the highest intensity value to “255”, and linearly mapping all other values between “0” and “255”. For example, if the lowest red intensity value were “100” and the highest red intensity value were “200”, “100” would be changed to “0”, and “200” would be changed to “255”. If there were a red intensity value of “150”, it would be changed to “127”. Similarly, all other values in the range “100” to “200” would be linearly mapped to the range “0” to “255”.

After the preliminary expansion, the next step in the color correction analysis is a negative-to-positive conversion. This conversion involves creating three sets of positive intensity data from the three sets of negative intensity data. Each intensity value ranges from “0” to “255”. To convert a negative intensity value to a positive intensity value, the value is simply shifted from one end of the range to the other. For example, if a red negative intensity value were “0”, this would be shifted to the other end of the range which would be “255”. Each intensity value is shifted, then, according to the equation, P=255-N, where P represents a positive intensity value and N represents a negative intensity value. If a positive film such as E6 slide film is being scanned, the negative-to-positive conversion is not performed.

The third step in the color correction analysis is a gamma correction. The exposure (E) to which film is subjected is defined as the incident intensity (I) multiplied by the exposure time (T). A popular way to describe the photosensitive properties of photographic film is to plot the density (D) of silver deposited on a film versus the logarithm of E. Such a curve is called a characteristic curve or a “D log E” curve of a film. An example of a typical D log E curve for a photographic negative is shown in FIG. 6. As can be seen from FIG. 6, as E increases, so does the density of silver deposited. However, D peaks at the shoulder region and a further increase in E will not increase D. Similarly, at low values of E, D remains essentially constant until E reaches the toe region, at which point D begins to increase. The region of the curve between the toe and the shoulder is linear, and the slope of this portion of the curve is commonly referred to as “gamma.” The greater the value of gamma, the higher the contrast of the film. Since there is a nonlinear relation between density and exposure, the intensity data must be adjusted to compensate for this nonlinearity. This adjustment is called a gamma correction. In the present invention, there are three gamma corrections calculated. One gamma correction is calculated for each set of intensity data (red, green, and blue). The gamma correction is simply a mapping of the intensity data according to the gamma correction curve shown in FIG. 7. The horizontal axis of the curve represents the range of original intensity values, and the vertical axis represents what the original values are mapped to (i.e., the gamma corrected intensity values). Since the value of gamma tends to vary from film to film, a different gamma correction curve may be used for each type of film. Alternatively, a gamma correction curve can be generated using a typical or average value of gamma, and the same curve can be used for all types of film.

The final step in the color correction analysis is a secondary expansion. The secondary expansion is essentially the same process as the preliminary expansion except that the process is performed on each individual image separately rather than all images together. The secondary expansion involves finding, in each image, the lowest and the highest intensity values of each of the three colors. The lowest value for each color is mapped to “0”, the highest value is mapped to “255”, and all other values are linearly mapped between “0” and “255”.

After the color correction data is obtained in the color correction analysis, each image is reduced in size, the color correction data is used to perform a color correction on the reduced images, and the reduced images are then displayed on the monitor along with their corresponding frame numbers.

If the film that is scanned is black and white film, an additional step must be performed prior to display of the images. The three sets of intensity data must be converted to a single set of intensity data, with the single set of intensity data representing varying levels of gray in the film image.

image
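For what it’s worth, here is a rough single-image sketch of those four steps (not RT code; note that in the patent the preliminary expansion pools the min/max over all images on the roll, while the secondary expansion is per image, so in this one-image sketch the two expansions look the same):

```python
import numpy as np

def expand(x):
    """Stretch each channel so its lowest value maps to 0 and its highest to 255."""
    lo = x.min(axis=(0, 1))
    hi = x.max(axis=(0, 1))
    return (x - lo) / np.maximum(hi - lo, 1e-9) * 255.0

def patent_color_correction(neg, gamma_curve):
    """neg: 8-bit RGB negative scan; gamma_curve: a 256-entry lookup table
    standing in for the curve of FIG. 7 (same curve used for all channels here)."""
    x = expand(neg.astype(np.float64))        # preliminary expansion
    x = 255.0 - x                             # negative-to-positive, P = 255 - N
    x = np.asarray(gamma_curve, dtype=np.float64)[np.clip(x, 0, 255).astype(np.uint8)]  # gamma mapping
    return expand(x).astype(np.uint8)         # secondary expansion
```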

They are in fact both logarithmic. Density is log10 and exposure is log2, so the relationship between them is linear. The purpose of the gamma correction is strictly the opposite: to convert a sensitometrically linear image to a perceptually linear one, since perception itself is logarithmic.

The approach may work, but for different reasons than explained.
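To spell out why a power law shows up at all (my own summary, using log10 exposure for simplicity): in the straight-line region of the characteristic curve,

D = gamma * log10(E / E0)   =>   T = 10^(-D) = (E / E0)^(-gamma)

so a linear scan value, which is proportional to the transmittance T, relates to the original exposure by a reciprocal power law; that is where the “reciprocal” inversion discussed above comes from.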

Here is another quote, from Color negatives at the demise of silver halides, page 6.

Luckily, a processed color film is quite optically homogeneous, thus it is possible to model it as a linear system, assuming that at any wavelength the absorbance of each emulsion layer is proportional to the local dye concentration (and to its extinction coefficient and layer thickness), and the overall absorbance of the film is equal to the sum of the absorbances of the single layers.
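In symbols (my notation, not the paper’s), per emulsion layer i at wavelength λ:

A_total(λ) = Σ_i  c_i * ε_i(λ) * d_i

where c_i is the local dye concentration, ε_i the extinction coefficient and d_i the layer thickness.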

This is in line with the results of @rom9’s measurements before.

Hi all, sorry for the late response

Yes, i saw the gamma correction part, but that’s not the same thing. The formula described in this patent would be:
(255 - n) ^ gamma

while the formula from the Wikipedia article is:
(1/n) ^ gamma , or n ^ -gamma

which is not the same “shape” (values chosen to make the curves overlap for comparison):
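For anyone who wants to reproduce the comparison, a quick sketch (the gamma value and the normalization are arbitrary, only the shapes matter):

```python
import numpy as np
import matplotlib.pyplot as plt

n = np.linspace(1, 254, 254)
gamma = 2.2                          # arbitrary, just to compare shapes
patent = (255.0 - n) ** gamma        # "shift, then gamma" as in the patent
reciprocal = n ** (-gamma)           # reciprocal form from the Wikipedia article

plt.plot(n, patent / patent.max(), label="(255 - n) ^ gamma")
plt.plot(n, reciprocal / reciprocal.max(), label="n ^ -gamma")
plt.xlabel("negative value n"); plt.legend(); plt.show()
```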

i wonder why that is … :thinking:

True, those peaks are way off… i’ve made a test using the led matrix as a backlight, creating the composite before demosaicing with the same DNG metadata trick used in my earlier tests. This way i am in fact canceling out any crosstalk from the Bayer filter, meaning that the red pixel value won’t be affected by the green or blue shots, and so on. Here’s a comparison:

Greens seem to look better in the composite shot; about yellow and orange i’m not so sure. The problem with this approach is that i have no idea which input color profile or white balance to use, since the actual channel multipliers depend on several factors: LED color balance, LED wavelength mismatch, etc.; the result above was obtained only via random trial and error.
I’ve also tried to process each monochrome shot separately, then sum the resulting (linear) TIFFs, and apply the film negative tool to the merged non-raw file. The result is also very good, but still, i had to choose the input profile and WB only by eyeballing. I’m afraid that figuring out a consistent and generally applicable way to process these composite shots will be no joke… :frowning:

Regarding the gradient tests… after over a month of messing with that led matrix, i realized that i’m an idiot! :rofl:
There was a much easier way to obtain the gradient pattern: i could simply find an old Samsung smartphone and shoot its OLED display, which has no backlight, so the contrast ratio is huge.
So, i borrowed an ancient Samsung Galaxy Advance S from the office, created a basic html page with a stepped gradient and took some shots of the display.
First, i created a 256 step linear gradient with all possible gray levels from #000000 to #ffffff:

To measure the values from the image, i cropped the area of the gradient, saved it as a linear tiff, then wrote a simple script which splits the picture into blocks and extracts the channel values.
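Something along these lines (file name and patch layout are made up; the point is just to average a central window of each patch so edge bleed doesn’t bias the readout):

```python
import numpy as np
import imageio.v3 as iio

img = iio.imread("gradient_crop_linear.tif").astype(np.float64)

rows, cols = 4, 16                      # hypothetical patch layout
ph, pw = img.shape[0] // rows, img.shape[1] // cols

values = []
for r in range(rows):
    for c in range(cols):
        # keep only the central half of each patch, in both directions
        patch = img[r*ph + ph//4 : r*ph + 3*ph//4,
                    c*pw + pw//4 : c*pw + 3*pw//4]
        values.append(patch.reshape(-1, img.shape[2]).mean(axis=0))

np.savetxt("patch_values.csv", np.array(values), delimiter=",")
```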

The measured values reveal the gamma correction, and maybe some posterization (the actual DAC might have less than 8 bits per channel?); anyway, it should be good enough for our purposes.

Then i stored the 255 values in a LUT, so i can get the exact CSS color value to apply, based on the desired output value, without having to worry about the translation between the “software” value and the actual light output.
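In code, the idea is just an inverse lookup through the measured display response (the file name is hypothetical; it would hold the values extracted above):

```python
import numpy as np

# measured linear light output for each CSS gray level, one value per line
measured = np.loadtxt("oled_response.csv")

def css_gray_for(target):
    """Return the gray level whose measured output is closest to `target`."""
    return int(np.argmin(np.abs(measured - target)))

# e.g. pick the levels for a 64-step gradient that is linear in light output
levels = [css_gray_for(t) for t in np.linspace(measured.min(), measured.max(), 64)]
```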
This way i created several 64-step (16x4) gradients: gray linear, gray gamma 2.2, red, green, blue and shot them with the analog camera at varying exposure levels.

I’ve also created an “extended” gradient, by splitting it into two sections and tweaking the exposure time with some JavaScript in the gradient page:

the first 13 steps turn off after 1/4 sec, while the others stay lit for N times as long, while the camera is doing a long exposure. I know this might not be a very useful measurement due to reciprocity failure of the film, but hey, why not? :slight_smile:

Finally, i also included some shots done with the LED matrix (since i had already done all the effort to make it work).

We’ll see as soon as i get the developed film back from the lab (their waiting times are loooong…)

2 Likes

Hi all, sorry again for the huge delay!

So, i got back my negatives from the lab, analyzed the pictures, and … it was quite a failure. Maybe not a complete failure, but still the data is not good enough.
Here’s one of the shots of the gradient from the OLED display. On the right, there is a plot of the readouts that i’ve got from my script, slicing the image and averaging the values of each patch. On the X axis are the values measured from the digital picture of the gradient, while on the Y axis are the values measured from the digitized negative.

The neg looks quite good at first glance, but from the plot you can see that on each “line wrap” (every 16 patches) there is a big “jump”, which was not so severe in digital pictures of the same gradient, shot with the same Konica Hexanon lens adapted to my Sony A7.
Here is a plot of the measurements from the digital picture:

Initially, i thought the problem might be caused by the unevenness of the backlight, so to verify this, i rotated the negative and “re-scanned” it:

As you can see, the plot is nearly identical (of course i also reversed the order of values on the X axis). This means that the relationship between input and output is the same regardless of the orientation of the negative, so the backlight was not the culprit.
Also, i scanned a completely empty, unexposed frame, and the measured values were quite stable all over the picture.

After much mumbling and googling, i came to the conclusion that i must have been a victim of some effect of the film negative itself, like halation.
I doubt it can be caused by lens flare alone, because in that case it should have also been as evident in the digital picture data.

In any case, those patches are too close together, and their halos (whatever the root cause) make them interfere with each other, biasing the readout.

So, i decided to follow a different approach: i used my 64x64 RGB Led matrix as a single light source for my lightbox, and took several shots of the opening, at increasing brightness levels.

This way i’ll get a single patch at the center of each frame, hence there will be enough separation to avoid any interference. Unfortunately the number of steps is limited to 36 (i did 34 to have 2 spares, just in case), but it should give better measurements.

Also, using the entire matrix as a single light source has the advantage of providing great flexibility in adjusting the color and brightness level; each channel can be adjusted in 255 steps (excluding zero, of course), so there are 255*64 = 16320 possible levels per channel (starting from 1 single led turned on at level 1, up to all 64 leds turned on at full power).

[boring_technicalities]

In order to take full advantage of all this “adjust-ability”, i used my smartphone camera to repeatedly measure the light from the lightbox, subsequently adjusting the led values to reach a target RGB value via successive approximations.
My smartphone is capable of capturing raw pictures at 10 bits per channel (better than nothing), so i grabbed the source code of the Android Camera2 API example and modified it so it can be remote-controlled from the PC via a TCP connection.
Then, i wrote an “integration” script, running on the PC, talking to both the Arduino (used to drive the led matrix) and to the smartphone app.

With this setup, i can automate the entire process by applying a simple bisection method: given a target RGB value that i want to achieve, i light up the led panel at mid level, take a picture, measure the average RGB values of a patch at the center of the frame, compare those to the target RGB, and try again with the upper or lower half of the range. Rinse and repeat until it stops improving :wink:
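A minimal sketch of that loop, per channel (the two callables stand in for the real Arduino and smartphone I/O; only the search itself is shown):

```python
def bisect_channel(target, set_level, read_value, lo=1, hi=16320):
    """Home in on the panel level whose measured value matches `target`.
    `set_level(level)` would drive one LED channel (Arduino side) and
    `read_value()` would return the measured value from the phone shot;
    both are placeholders here."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        set_level(mid)
        if read_value() < target:
            lo = mid      # too dark: keep the upper half of the range
        else:
            hi = mid      # too bright: keep the lower half
    return lo
```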

Incidentally, the bisection function doing its job is also quite entertaining to watch … :rofl:

In order to find the correct target RGB values i used the Spot WB function of my Sony A7 to confirm that the white patch was correctly balanced (5500K).

[/boring_technicalities]

At this point, i calculated the RGB values for the 34-step gradient, split into 2 parts, like the “extended gradient” from the previous post. This time, instead of adjusting the exposure time, i shot the first part through an ND64 filter, and the second part without any filter:

inputGradient

Keeping the exposure time constant should avoid any timing inconsistencies in the mechanical shutter, and interference from reciprocity failure of the film.

The light levels from the lightbox have a contrast ratio of about 320:1, so the final gradient, as observed by the film, should span 320*64 = 20480 ~= 14 stops.

I shot this gradient on Kodak ColorPlus 200, the neg is currently at the lab being developed. Let’s see what comes out … :slight_smile:

6 Likes

well, someone’s been positively busy :smiley:

At the end of January I acquired an M1 Mac, and now I’m struggling with compilation xD (nomar500 on GitHub), so not exactly photography just yet.

I’m writing to you to share a DIY thing you could be interested in, since you’re playing with illumination evenness.

See this video: https://www.youtube.com/watch?v=8JrqH2oOTK4
I’m sure you’ll be tempted to try something out :smiley:

I hope you’re staying sane.

BTW, that phenomenon you are talking about, I’m not sure if it’s halation; I haven’t researched it. But could it be that less dense areas are bleeding light into the other, more dense areas?

2 Likes

Yes, i saw some of these “LCD panel mod” videos before. I didn’t try because i don’t have any broken monitors lying around.

I don’t know… it’s certainly cool for a general-purpose lightbox, but for digitizing film, i still prefer bouncing light off a white surface. It’s a simpler and more flexible approach.

If you do that LCD mod and then, after some time, you find some leds with better CRI and want to try them out… you need to disassemble the whole thing and start again.

With a simple empty box, on the other hand, it’s trivial to try out a different light bulb: you just place it in front of the opening.
To optimize the “flatness” of the backlight, you can simply move the light source further away, or change its angle, while checking the histogram on the camera display.

In that regard, i’ve recently bought one of these SoLux 4700K halogen lamps. They’re quite expensive (~26 euros) but the light quality is very good.
There’s nothing special about the halogen bulb itself; all the magic is in the dichroic reflector. Compared to a cheap dichroic spotlight, this one is much more transmissive at longer wavelengths: all of the blue portion of the spectrum is reflected forward, while most of the red/yellow portion passes through. The resulting light output looks very good, and film “scans” made with it seem at least comparable to xenon light (i need to do more extensive testing).

And, with the same simple lightbox i’ve also tried the cheapest and most effective high-CRI source: actual sunlight! :rofl:
Shining on the piece of paper at the bottom, and bouncing up through the negative (for very short periods of time :wink:). It works, although it is a bit inconvenient.

Yes, it could be. In that case, all the more reason to shoot a single step per frame and keep them well separated. So, we might be on the right track… fingers crossed for the response from the lab :wink:

Ok, my ColorPlus 200 film roll was developed, and… i’ve got a much better dataset this time!

Here’s an overview of the thumbnails:

The first gradient step is invisible to the naked eye, but it’s there: i had to stretch the histogram a lot in RT to locate the faint rectangle.
The last frame was not just overexposed: “roasted” might be a better term :grin:

And here is a log-log plot of the measurements:

On the X axis are the measured channel values from the digital camera shot of the light source, taking into account the multiplication factors of the ND64 filter (remember the gradient was split into 2 sub-gradients; see the previous post for details).

  • the first 18 steps were shot through the ND64 filter

  • the next 16 steps were shot directly

  • with the remaining 3 frames (i didn’t make any mistakes :sunglasses: ) i continued shooting the last step, while opening the lens more and more. The entire 34-step gradient was shot at f/11, then for the last 3 frames i went to f/5.6, f/2.8 and f/1.4.

I had previously measured the exact multiplication factors of both the ND filter and the different apertures using the digital camera, in order to reconstruct a relative value of light intensity hitting the film.

  ---  R,G,B ratios ---
f/11  : 1                  1                  1
f/5.6 : 4.36557059961315   4.42335766423358   4.40650406504065
f/2.8 : 3.78489144882588   3.83580858085809   3.84481344813448
f/1.4 : 2.94866842259292   2.96875672187567   3.01492935217275

ND64      :  1                1                 1
No filter : 59.231746031746  60.7208931419458  61.0385232744783
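To be explicit about how these numbers are used: each aperture row is the ratio relative to the previous step, so the per-step factors get chained to reconstruct the relative intensity, e.g. for the red channel:

```python
import numpy as np

# per-step red-channel ratios from the table above; chaining them gives the
# factor of each aperture relative to f/11
aperture_steps = [1.0, 4.36557059961315, 3.78489144882588, 2.94866842259292]
aperture_factor = np.cumprod(aperture_steps)    # f/11, f/5.6, f/2.8, f/1.4
nd64_factor = 59.231746031746                   # factor for removing the ND64 filter
# e.g. a no-filter shot at f/1.4 saw roughly 2900x the light of an ND64 shot at f/11
print(aperture_factor[-1] * nd64_factor)
```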

The Y axis indicates the pixel values from the digitized negatives. We can see that there is indeed a linear segment in each of these curves, but it’s not so huge. Keep in mind this is a log-log plot, so by “linear” we mean that the input is raised to a constant exponent.

The current version of RT’s Film Negative tool does just that: it applies a constant exponent to the whole image, as if everything behaved like that linear segment.
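In other words, something like this per channel (a minimal sketch of the idea, not the actual RT code):

```python
import numpy as np

def constant_exponent_inversion(channel, dmin, exponent):
    """Treat every pixel as if it sat on the linear segment of the log-log
    curve: positive value = (Dmin value / film value) ^ exponent."""
    return (dmin / np.asarray(channel, dtype=np.float64)) ** exponent
```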

If we can model this curve in a reasonable way (without too many knobs to adjust) we might achieve better color reproduction across a wider dynamic range.

So, let’s rotate this plot, in order to see what we need to achieve: going the other way around, from the digitized film value, to the original scene value. Here it is:

The digitized film images are 16-bit linear sRGB, white balanced on the unexposed film base color. Thus, the pixel value of Dmin appears as a neutral gray with all channels at ~18500.

We can see that, for example, the red channel is only linear between ~3000 and ~13000. I wouldn’t be much concerned about the non-linearity near Dmin (lower densities), since those areas will become deep shadows, where color casts are hardly visible anyway.

Now, let’s overlay the histogram of a real-world picture to this plot:


This picture was also white-balanced on the film base, and the curve values were scaled in order to align the tip of the curve with the actual Dmin value of the picture (notice the small notch on the far right of the histogram). This makes for a fair comparison, since everything is relative to Dmin.

In the blue channel, we can see that a significant number of pixels are outside the linear portion: applying the same constant exponent there would not give the correct result.

So, not knowing anything about math, i did some random googling and found this little function: the Inverse Hyperbolic Secant.

This function has some interesting properties:

  • it somewhat resembles our curve

  • it’s defined between 0 and 1, so it can be easily mapped to our range

  • it has a slope of exactly -2 at x = 0.70710… (i.e. 1/√2), so this can be used as a “junction point” to stitch pieces together and make a piece-wise curve (see the sketch below).

I then made a very crude function, cutting at the junction point and inserting a line segment in between. These are the tweakable parameters:

  • shoulder size
  • linear segment width
  • toe size
  • slope of linear segment

and here’s how it fits the data (dots are the measured film values, lines are the function plots).

BTW: of course the function is in linear domain, so i had to log the input, and exponentiate the output.
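Here’s a quick sketch of the junction idea (just a toy demonstration of the slope property and of attaching a straight segment at that point; it’s not the actual fitting function, which also has the shoulder/toe/width parameters above):

```python
import numpy as np

def asech(x):
    """Inverse hyperbolic secant, defined for 0 < x <= 1."""
    x = np.asarray(x, dtype=np.float64)
    return np.log((1.0 + np.sqrt(1.0 - x * x)) / x)

def asech_slope(x):
    """Analytic derivative: d/dx asech(x) = -1 / (x * sqrt(1 - x^2))."""
    return -1.0 / (x * np.sqrt(1.0 - x * x))

x0 = 1.0 / np.sqrt(2.0)
print(asech_slope(x0))         # -> -2.0, the junction-point property

def stitched(x):
    """asech for x < x0, continued by a straight line of slope -2 through
    the junction point, so value and slope are both continuous there."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x < x0,
                    asech(np.clip(x, 1e-9, 1.0)),
                    asech(x0) - 2.0 * (x - x0))
```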

At this point, i stitched together a very primitive processing program in order to experiment a bit with this function (without having to mangle the RT source code every time :rofl:), and the results are somewhat encouraging.
The parameters found by fitting the plot were not perfect at the first try; anyway, the ability to change the length of the linear segment and the size of the curve “shoulder” feels like an advantage, in that it allows fixing some color casts in the highlights while keeping the rest of the picture unchanged.

Now, the really hard part is minimizing the number of parameters that the end user must tweak manually. One possible way that comes to mind is dividing the work into two “stages”:

  • first, deal with the middle part of the histogram and adjust the slopes, while actively masking the highlights (similar to what the clipping indicator does), so that the user is not distracted.
  • then, uncover the highlights and let the user adjust the linear segments / shoulder size, leaving the slopes unchanged.

As always, ideas and suggestions are welcome :slight_smile:

In the meantime, i’ve shot another test roll, this time with a Fuji Xtra 400; same exact procedure except halved exposure times (400 vs 200 ISO), and i’ve got 36 frames instead of 37 (maybe i’ve been too wasteful when loading the roll… or Fujifilm is a bit less generous than Kodak :grin:).

I’m curious to see how the measurements compare between different film types.

6 Likes

@rom9 What happens if you don’t white balance on the film base?

The curves would appear translated (log-log plot) along the X axis, since each channel would be multiplied by some factor. I chose to white balance on the film base, just to make the curves “start” from a common point, so they’re easier to compare.

By doing that aren’t you changing the relationship between the channel values at any given point? Without this translation some parts of the histogram might map to different, non-linear parts of the (per channel) curve. This way the eventual correction in those areas would preserve the original relationship between the channel values on film.

Could you post an example of the results you’re getting so far?

The channels are treated separately anyway, so the relationship between them should not matter … i hope :thinking:
What counts is the ratio between the channel value of Dmin and the channel value of a specific pixel; that should give an indication of the relative density at that pixel.
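In symbols (my notation, assuming the capture itself is linear), the density of a pixel relative to the film base is:

D_pixel - D_base = log10( v_Dmin / v_pixel )

since the channel values v are proportional to the transmittance of the film.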

Oh, maybe you mean that i should not use WB, since that is applied before the input profile conversion, with the matrix mixing things around? That makes sense; i can try re-processing the gradient images at “native” WB and then normalizing the channel values after the input profile conversion. Let’s see what comes out.

Well, nothing outstanding, really:

with constant exponents, i was having trouble in the sky/clouds… the ability to tweak the highlights per-channel was helpful (although i know the result is still not even decent).
Aside from the above-mentioned curve, this test program only applies a global gamma, and a soft-clip above 60k.

Kind of. Curious to see the result without the WB correction on the film base (the backlight color should be WB-corrected though).

This does look promising. The dynamic range and global contrast are pleasing, and the tonal separation is there. I wonder what happens with the blue cast when no WB is applied to the film base.

I’ve re-processed the gradient images, this time white balancing on the backlight; here are the curves:

If we “normalize” the values by multiplying by some factor, to make the curves “start” at the same point, and compare them to the previous curves with WB on the film base, we see that there’s also a slight change in the slopes:

This is most probably caused by the “mixing” of channel values during input profile conversion, which sums a fraction of one channel into the others.

Here is the “inversion” function fitting the new, backlight-balanced data:

… and here is the result, using the new parameters from the above plot. The picture was also white-balanced on the backlight:

Definitely not good, still needs some manual adjustments.

I’ve noticed that there is also a simpler function that might fit the toe part a bit better:

exp(SHOULDER * (-x - LINEAR)) + (-x * SLOPE) + TOE * log(-x)

I’ll try this one, too :wink:
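For reference, the same candidate function as code (assuming x is the log of the film value relative to Dmin, hence negative, so that -x and log(-x) are well defined; parameter names as in the formula above):

```python
import numpy as np

def candidate(x, shoulder, linear, slope, toe):
    """Candidate toe/shoulder curve; expects x < 0 (log domain)."""
    x = np.asarray(x, dtype=np.float64)
    return np.exp(shoulder * (-x - linear)) + (-x * slope) + toe * np.log(-x)
```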

2 Likes

No, it’s not. The blue-magenta tint seems even stronger in the highlights.

What if, instead of normalizing to a black point, you found a correction such that the linear parts of all three curves become parallel? (Assuming that in your original experiment they are not parallel before the normalization step.) If the channels are then normalized in terms of exposure, the whole midtone range will become properly corrected.

Maybe not only that, but the shoulder and toe might get into sync as well.

1 Like

I am not sure where we are going :slight_smile: but these density studies are interesting

  • white balance (of the final image) should come from an area with density (I understand unexposed film is not appropriate though it may, in some cases, get you in the ball park; very dense whites are not good candidates either)

  • white balance of the “capture” itself, also to my limited understanding, if set to multipliers close to 1, allows for optimal expose-to-the-right shots, thereby improving the SNR for the inversion.

(stating the obvious?)

Hi everyone,
@rom9, I have a question pertaining to the standard “White Balance” tool.

Should it be active, at all, in the current state of the “Film Negative” module?

Thanks.

Hi,
yes, White Balance should be active, and should be set to balance the color of the backlight.