Digitizing film using a DSLR and RGB LED lights

The idea came from posts in https://discuss.pixls.us/t/any-interest-in-a-film-negative-feature-in-rt/12569/135. The LEDs I use are Philips Luxeon Rebels in red, green, and royal-blue. They have a half-power bandwidth of 20 nm (30 nm for green) and peak wavelengths of 627 nm, 530 nm, and 447.5 nm.

The photos are not professional, but I do have some professional deformations…

Awesome :sunglasses:
I see a few familiar nicknames here :smile:
I am happy @NateWeatherly brought over some of the knowledge from the private FB groups that discuss this on a daily basis.

While we are at it, elaborating on an ideal capture when it comes to colour "purity", I have a question, and it pertains to sensor design. Since most sensors are Bayer arrays (RGGB), shouldn't we also consider acquiring N times more shots on the spatially deficient channels (red and blue), typically with sensor shifts? (N would be what… two or four?)

If you're shooting tri-color RGB exposures for each image, capturing pixel-shifted images won't really affect the overall color-channel purity, but it would theoretically give better detail and color resolution at the pixel level. I.e., the overall image color would look the same, but if you zoom in to 100% there might be more color variation and depth in the film grain.

That said, by the time you shoot R/G/B pixel-shifted exposures you'd probably be looking at 500 MB+ per image haha, plus all the processing time. Also, depending on the resolution of your camera and what lens you're using, you're going to hit the region of diminishing returns pretty quickly, I'd think. With most lenses the best point of focus is going to vary between the extremely saturated LED colors, so to see the benefits of the extra color resolution you might even need to refocus between shots. But then you have to deal with alignment issues as the lens magnification changes slightly, etc. I could be wrong though, and I'd love to see some comparisons. The full-color pixels of dedicated film scanners (or monochrome sensors) certainly have more micro detail and depth in small areas… maybe pixel shift would make a big difference?

Hello Nate,

Where can I find such RGB LED panels?

Joseph

There’s a DNG validation tool in the Adobe SDK you can use to check.

I wrote a small program (makeDNG) to rewrap my mosaiced TIFFs into (Cinema) DNG. You may be able to adapt it to your needs.

Separate captures to increase resolution (by skipping demosaicing) are problematic. Alignment has to be perfect, which it almost certainly won’t be if you change focus. I use a micrometer to set focus (DOF ~5 microns) on my film telecine and I can barely see focus move between dye layers. I have to stare hard at the per-channel histograms.

FWIW, for 1:1 slides and negatives, I don’t use pixel shift on my A7RIII.

White light is still the reference for the end result, but to maximize dynamic range and color accuracy you want to capture using something else so you can use UniWB and ETTR. A narrow-band LED matched to the film stock is a good idea. I have independent LED control in my telecine and could do separate captures, but I'm not convinced it's worth the complexity. At least that was my conclusion years ago, but for science and curiosity I may have to try again.

As for a good camera profile, I use a machine vision camera in my telecine, so I had to make a profile from scratch. I used dcamprof and the SSFs from the datasheet. I have big dreams to use the old monochromator I’ve had lying around for years to make something more accurate, but I’m already pretty happy with my results. There’s been some recent discussion on such things here.

My problem is that I’m mostly dealing with Super 8 Ektachrome 160G. The G is for garbage. It’s supposed to be both tungsten and daylight balanced…somehow. The blue channel is trash (huge grains) and the color temperature changes with exposure (typical of color reversal). The trouble is that the exposure (and therefore temperature) can change per frame. I don’t know whether to blame the lab or the in-camera metering more. The 8mm Kodachrome I have seems much more well-behaved. YMMV with slides or other film stock.


May I ask why you don't use pixel shift? With a very sturdy setup, it should improve the results because demosaicing is not needed.

Pixel shift certainly helps, just not enough to want to use it regularly/by default. The file sizes are significantly larger and while compression would have helped, there were compatibility issues with RT and I don’t want to archive something only ACR could open. I’ll try to reproduce and make a proper bug report.

I have a 14-element Scanner Nikkor on rails with a PS-5. Similar to this. It’s rigid enough.

If you were to measure your A7RIII and share the SSF with me you would be my hero. And if that isn’t enough to convince you to do it, I’d pay you for the data :laughing:. I’ve been searching for SSF data for my A7III forever and from everything I’ve seen the A7III and A7RIII have exactly the same CFA.

Also, great point about being able to use UniWB and ETTR with tunable RGB LEDs. It’s astounding how much better the image quality becomes when you do that.

Well, that's the trick. It all depends how picky you want to be about matching the ideal wavelengths. I haven't found a ready-made RGB LED panel/strip etc. with 460, 540, and 660 nm LEDs (which are the optimal wavelengths for digitizing negative film). Most RGB LEDs on the market use 460, 540, and 625 nm LEDs, which can also give great results (in fact, that's what some scanners like the Fuji SP3000 use), but different film stocks might need a bit of digital profiling for optimal results.

If you're handy with electronics you can solder your own custom diodes and program a control board… I am not that handy and just bought an RGBAW video light called the Luxli Viola2, which has an iOS app that lets you program sequences of different colors, save color balance presets for different film stocks, etc. It may not be 100% ideal if you don't plan to do separate RGB exposures, because you can't turn on all three R, G, and B channels without the "white" channel also coming on. That's great for photos and videos, but not great for single-shot scanning.

There are a lot of other RGB panels and strips that you could use to make your own setup, but I'm not familiar enough with them to recommend a specific brand.

If there are issues opening Sony ARQ files in RT, please report them.

You guys made this into a science. I used a much simpler setup and processing: I found an old Soviet lens (Gelyios, aka Helios, 40a), an adjustment ring and distancing rings, put the negatives into the frame of my old enlarger, backlit it with a simple ambient LED (with a white grocery plastic bag as a diffuser), and shot it from a tripod. I did that eight years ago, and the results are as good as I had the patience for. The setup would move a bit, until I fastened the tripod to the table leg with bungee cords and taped the cardboard box corner which held the frame to the tabletop with masking tape. And I had to shoot (on my old Canon 40D then) with a 2 s delay, which is how long it took to stop vibrating from the pressure of my finger.

I processed it then in LR (4? 3? don’t remember), and I’m satisfied with the results, because I wasn’t looking for professional quality. The originals weren’t that good, it’s mostly family history. No numbers were hurt in the process (gamma and the rest).

(p.s. greetings to the neighbour, from a neighbour)


Hello! Time for a progress report.

Having failed to find software that would do what I needed, I decided to have a look at the makeDNG program @ifb wrote. Now, my only previous C programming experience was a program for an Arduino controlling my LEDs. To make things worse, I only recently installed Mint 19.3 (dual boot) and am still familiarizing myself with it.

The modifications to makeDNG were easy (I made the thumbnail size fixed and corrected a small incompatibility with my version of libtiff) and it works very well.

Next, from snippets of code cut from examples found on the web and pasted together, I cobbled together a small raw image extractor. That one I promptly enlarged to read three separation TIFFs and merge them into one raw image by interleaving the red pixels from the red separation TIFF, the green pixels from the green TIFF, and the blue pixels from the blue TIFF, following the CFA layout of one of the input separation files.
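
Roughly, the interleaving looks like this (a hypothetical sketch assuming an RGGB CFA and three already-aligned mosaiced buffers of equal size; the names are illustrative, not my actual code):

/* Pick each pixel from the separation shot whose LED matches the CFA site. */
#include <stddef.h>
#include <stdint.h>

void merge_separations_rggb(const uint16_t *red_shot,
                            const uint16_t *green_shot,
                            const uint16_t *blue_shot,
                            uint16_t *merged, int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            size_t i = (size_t)y * width + x;
            int row_even = (y % 2) == 0;
            int col_even = (x % 2) == 0;
            if (row_even && col_even)
                merged[i] = red_shot[i];    /* R site: take the red exposure   */
            else if (!row_even && !col_even)
                merged[i] = blue_shot[i];   /* B site: take the blue exposure  */
            else
                merged[i] = green_shot[i];  /* G sites: take the green exposure */
        }
    }
}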

It seemed logical that there is very little point in writing the raw image into a file only to immediately read it back into makeDNG, so I merged the two programs into one. I tested it with the output of dcraw (un-interpolated uncompressed TIFF) as well as Adobe DNG Converter (un-interpolated uncompressed DNG), and it seems to work well (the separations were correctly assembled into a synthetic raw DNG).

What is left to do is adding more camera-specific tags to the resulting DNG file that could be useful to a raw processor in the future. This is where I need some advice.

Looking at the tags in the DNG file that Adobe DNG Converter created from a Nikon NEF:
Some tags are obvious, but there are six 3x3 matrices (ColorMatrix1 & 2, CameraCalibration1 & 2, and ForwardMatrix1 & 2) which may or may not be needed and likely have the wrong coefficients. Then there is ProfileHueSatMap with close to 7000 floats and ProfileLookTable with another 14000 floats that I do not know what to do with. Reading the TIFF and DNG specifications is of no help.

Words of wisdom from people developing raw developers would be greatly appreciated.

I do minimal processing of raws in my hack software, and I’ve found you “need” these three things:

  1. camera white balance coefficients. Even then, those can be determined after-the-fact. I like to have them for convenience in batch-producing proof images.
  2. black subtraction number: doesn't apply to all cameras; my Nikon D7000 didn't need one, but the Z6 does. For Nikon, I think it should be a single value, but some cameras deliver a number for each channel in the Bayer or X-Trans array.
  3. color profile. This is the 3x3 matrix used to convert the raw image from the camera colorspace to whatever colorspace you desire next. The raw processor probably already has suitable numbers, but it probably won't assign them to TIFFs. In a DNG, those would be stored in one of the ColorMatrix tags, the one whose corresponding CalibrationIlluminant tag is set to D65. I snarfed the D7200 numbers from RT's camconst.json file for you:

"dcraw_matrix": [ 8322,-3112,-1047,-6367,14342,2179,-988,1638,6394 ], // adobe dng_v9.0 d65

There's probably more needed to support your workflow, but this should get you going. These numbers need to be divided by 10000.0 to produce the float values needed in the DNG tag.
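
For example, a trivial sketch of that conversion (the D7200 values hard-coded purely for illustration):

#include <stdio.h>

int main(void)
{
    /* D7200 D65 matrix from RT's camconst.json, row-major, XYZ -> camera */
    const int dcraw_matrix[9] = {  8322, -3112, -1047,
                                  -6367, 14342,  2179,
                                   -988,  1638,  6394 };
    double color_matrix[9];

    for (int i = 0; i < 9; i++)
        color_matrix[i] = dcraw_matrix[i] / 10000.0;   /* camconst stores values x10000 */

    for (int r = 0; r < 3; r++)
        printf("% .4f % .4f % .4f\n",
               color_matrix[3 * r], color_matrix[3 * r + 1], color_matrix[3 * r + 2]);
    return 0;
}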

BTW, if you are going through the channel separation to retain the original measurements in constructing the RGB pixels, you might investigate just using the 'half' demosaic algorithm. In dcraw this is invoked with -h; what it does is make an RGB image half the size of the original, with each 2x2 quad of sensor pixels producing a single RGB pixel from the original measurements of the quad. FWIW…
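
For reference, here's a minimal sketch of the idea, assuming an RGGB CFA (the buffer names are made up; this is not dcraw's actual code):

#include <stdint.h>

/* Each 2x2 quad of raw values becomes one RGB pixel; the two greens are
   averaged, so no values are interpolated/invented. */
void half_demosaic_rggb(const uint16_t *raw, int raw_w, int raw_h,
                        uint16_t *rgb /* sized (raw_w/2) * (raw_h/2) * 3 */)
{
    int out_w = raw_w / 2;
    for (int y = 0; y + 1 < raw_h; y += 2) {
        for (int x = 0; x + 1 < raw_w; x += 2) {
            uint16_t r  = raw[y * raw_w + x];            /* R at (0,0) */
            uint16_t g1 = raw[y * raw_w + x + 1];        /* G at (0,1) */
            uint16_t g2 = raw[(y + 1) * raw_w + x];      /* G at (1,0) */
            uint16_t b  = raw[(y + 1) * raw_w + x + 1];  /* B at (1,1) */
            uint16_t *out = rgb + ((y / 2) * out_w + (x / 2)) * 3;
            out[0] = r;
            out[1] = (uint16_t)(((uint32_t)g1 + g2) / 2);
            out[2] = b;
        }
    }
}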


The three tags you mentioned are already included in makeDNG. As for their values: the white balance (AsShotNeutral, I believe) will be very close to 1,1,1 (I control the R, G, and B exposures); the black subtraction levels from the camera are per CFA channel but identical in value, so one value is probably enough; and the ColorMatrix2 (produced by Adobe DNG Converter) is identical to the one you listed. Now, the matrix values are most likely wrong, because the color separations effectively take the camera color space out of the equation. I think I need to look into the ICC profile linked by rom9 in 'Any interest in a "film negative" feature in RT?' (post #177) and see if I can extract something from there.

There are, of course, other mysteries in the NEF processed by Adobe DNG Converter. The white level listed is 15892, while the dcraw TIFF has all saturated pixels at 16383.

In addition, the pixel values appear to have been scaled (in camera or during decompression) because the histogram has more or less evenly spaced gaps (every 6th or 7th value has zero samples for red and blue, and every 40th value for green); irrelevant but annoying.

Come to think of it, this was the rabbit hole I tried to avoid…

RawTherapee’s camconst.json file has interesting information on this:

Down the rabbit hole we go… :scream:


Thanks for the insight. The raw image values from dcraw as well as the Adobe DNG files clip at 16383, so specifying a lower white level seems pointless. In addition, the red and blue channels seem to have been boosted by 18% and green by 2.5% straight after digitizing in camera, as the raw images are identical. I will use the Adobe white point for all it is worth.

I looked into @rom9's ICC profile FilmNegRGB_650_550_460-elle-V4-g10.icc and it contains this matrix:
Media White Point : 0.9642 1 0.82491
Chromatic Adaptation : 1 0 0 0 1 0 0 0 1
Red Matrix Column : 0.47734 0.18016 0.0
Blue Matrix Column : 0.1425 0.0294 0.81795
Green Matrix Column : 0.34436 0.79044 0.00696

Of course, I have no idea if I should use it together with, or instead of, ColorMatrix1.
Still a long way from home…
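
One guess: since those ICC matrix columns map RGB to (D50) XYZ, while a DNG ColorMatrix goes the other way (XYZ to camera values), maybe the inverse of that 3x3 could serve as a candidate ColorMatrix. This is pure speculation on my part, and it ignores the chromatic adaptation between the D50 PCS and the DNG calibration illuminant, but here is a rough sketch of the arithmetic:

#include <stdio.h>

/* Invert a 3x3 matrix (row-major). Returns 0 if the matrix is singular. */
int invert3x3(const double m[9], double inv[9])
{
    double det = m[0] * (m[4] * m[8] - m[5] * m[7])
               - m[1] * (m[3] * m[8] - m[5] * m[6])
               + m[2] * (m[3] * m[7] - m[4] * m[6]);
    if (det == 0.0)
        return 0;
    inv[0] =  (m[4] * m[8] - m[5] * m[7]) / det;
    inv[1] = -(m[1] * m[8] - m[2] * m[7]) / det;
    inv[2] =  (m[1] * m[5] - m[2] * m[4]) / det;
    inv[3] = -(m[3] * m[8] - m[5] * m[6]) / det;
    inv[4] =  (m[0] * m[8] - m[2] * m[6]) / det;
    inv[5] = -(m[0] * m[5] - m[2] * m[3]) / det;
    inv[6] =  (m[3] * m[7] - m[4] * m[6]) / det;
    inv[7] = -(m[0] * m[7] - m[1] * m[6]) / det;
    inv[8] =  (m[0] * m[4] - m[1] * m[3]) / det;
    return 1;
}

int main(void)
{
    /* Red, Green, Blue matrix columns from the ICC profile, arranged row-major */
    const double rgb_to_xyz[9] = { 0.47734, 0.34436, 0.14250,
                                   0.18016, 0.79044, 0.02940,
                                   0.00000, 0.00696, 0.81795 };
    double candidate_color_matrix[9];
    if (invert3x3(rgb_to_xyz, candidate_color_matrix))
        for (int r = 0; r < 3; r++)
            printf("% .5f % .5f % .5f\n",
                   candidate_color_matrix[3 * r],
                   candidate_color_matrix[3 * r + 1],
                   candidate_color_matrix[3 * r + 2]);
    return 0;
}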

In the Adobe workflow, ColorMatrix1 and ColorMatrix2 are interpolated between to arrive at a matrix tailored to the color temperature of the scene lighting. In the "regular folks" workflow, the D65 ColorMatrix2 is used and white balance is corrected separately, a legacy from dcraw.
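
If it helps, here's a hedged sketch of how that interpolation typically works; per the DNG spec the weighting is linear in inverse correlated color temperature, with the two calibration illuminants as endpoints (the function and its calling convention are made up for illustration):

#include <stdio.h>

/* Blend ColorMatrix1 (illuminant 1, low CCT, e.g. StdA ~2856 K) and
   ColorMatrix2 (illuminant 2, e.g. D65 ~6504 K) for a scene at `cct`,
   weighting linearly in 1/CCT and clamping outside the range. */
void interpolate_color_matrix(const double m1[9], double cct1,
                              const double m2[9], double cct2,
                              double cct, double out[9])
{
    double w;
    if (cct <= cct1)
        w = 1.0;
    else if (cct >= cct2)
        w = 0.0;
    else
        w = (1.0 / cct - 1.0 / cct2) / (1.0 / cct1 - 1.0 / cct2);
    for (int i = 0; i < 9; i++)
        out[i] = w * m1[i] + (1.0 - w) * m2[i];
}

int main(void)
{
    const double m1[9] = { 1.0 };  /* placeholder ColorMatrix1 (StdA) */
    const double m2[9] = { 0.9 };  /* placeholder ColorMatrix2 (D65)  */
    double m[9];
    interpolate_color_matrix(m1, 2856.0, m2, 6504.0, 5000.0, m);
    printf("first coefficient at 5000 K: %f\n", m[0]);
    return 0;
}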

The camconst.json prose I linked to describes why one would want to back the white point off of the integer high-value, 16383 in the case of 14-bit raw data.

My thinking would be that you’d want to use a color matrix that was made for the camera used to capture the digitization, but in this digitizing film business there seems to be some homage required to a matrix corresponding to the original film response, which I don’t understand. @rom9 does, I’d bet… :smiley:

Hi all, sorry for the delay :slight_smile:

@damirk, if I understand your process correctly, it seems to me that you are actually eliminating the filter cross-talk between channels.
For example: when you process the red light shot, the values you read in the green pixels actually contribute to the red channel in the output image, right? In other words, you redirect the interference values to the channel where they belong…
Well, in this case, I would think that the original camera matrix is not useful, or even misleading.

After all, the camera matrix is a way of describing how the sensor responds to a scene illuminated by broad-spectrum white light (D65 or D50), hitting all channels at once. And that description is strongly influenced by the CFA cross-talk; if we remove that by pre-processing three mono shots, I'd think the camera matrix won't tell the truth anymore.
(…please note this is entirely based on intuition, I have zero science to support it :rofl:)

The ICC profile I've made up is based on three spectral primaries at about the same wavelengths as the peak sensitivities of the film. Since we are interested in getting the "amount of light" that each color dye has recorded, I think it would make sense to use those wavelengths as the primaries of our input colorspace.
That said, I'm not sure that choosing spectral primaries is the right choice: color dye sensitivity curves are not so narrow-band, so the "true" primaries that better describe film sensitivity could lie inside or outside the spectral locus… who knows, I have no idea how to measure that. Hopefully, though, choosing the peak wavelengths should at least take us closer to the truth :slight_smile:

Regarding the interpolation between ColorMatrix1 & 2 that @ggbutcher pointed out: if any of the above makes sense, by extension I would guess that the same matrix should be used regardless of the color temperature? So I would write the same matrix in both tags, to be sure :wink:


@rom9, when designing my process I considered using the leaks into the other channels, but decided against it. For example: when processing the red light shot, the values in the green pixels (and particularly in the blue pixels) are way too low, and thus too noisy if boosted by white balance to acceptable levels. Overexposing and combining (HDR style) is also way too complicated for the benefit (I am already going overboard with this!). So I combine only the red pixels from the red light shot (discarding its green and blue pixels) with only the green pixels from the green light shot and only the blue pixels from the blue light shot. I end up with the equivalent of a white light shot minus the cross-talk between the color channels, and without worries about the camera color space.

As for ColorMatrix1 and ColorMatrix2, I can easily make them the same, but what coefficients should I use? The ones from the DNG profile? The ones from your ICC profile? Something else? I understand that your profile may not be a perfect match, but it is likely closer than the camera profile.

I still do not know what ForwardMatrix1 & 2, ProfileHueSatMap, and ProfileLookTable are all about, but I think it is safe to ignore them. I will copy the white level from the Adobe DNG although, to me, it does not make any sense and nobody seems to use it (Adobe included).

@NateWeatherly, your first three points are correct. On the fourth point: I do not plan to process the captured images immediately. Rather, I intend to archive them as raw DNG files encapsulating as much of the available info as possible for future processing (after the original separation NEF and DNG files have long been discarded). The way I see it, should I get run over by a truck, even with the best note keeping nobody will be able to do anything with thousands of separation NEFs, but a synthetic raw DNG should be OK. So raw processing is scheduled for after all the images have been digitized (and by then, who knows which raw processors will be available).

Hey guys, I got linked here via a private Facebook group dedicated to camera scanning, and it's really cool to see how many people are trying to work out these problems. I've actually been banging my head against them for over a year now, and I wrote an article about the color space issues involved in camera scanning with RGB lights: Tri-Color Scanning, Color Negative Film & Color Spaces | by Alexi Maschas | Medium

I’ve also (coincidentally) been working on a way to convert the three RGB images back into DNG so that I can maintain my RAW workflow, and I have to say it’s… not easy. I’m a software developer by profession so I dug into this method really early on as a possible way to keep inverted negatives in RAW for my white light scans (there’s actually a method to correct for the color space transforms without using RGB light if you can manipulate the RAW pixel values). Here’s how I went about learning how to write RAW files:

You need a library that will read RAW files, preferably DNG. The DNG spec is actually an extension of the TIFF spec, so you can use something like LibTiff, but LibRaw has better raw-specific functionality (these are all C/C++ libraries, for context). LibRaw is essentially dcraw reworked into a library.
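
For example, here's a minimal sketch (error handling mostly omitted) of getting at the undemosaiced sensor values with LibRaw's C API; build with -lraw:

#include <stdio.h>
#include <libraw/libraw.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;

    libraw_data_t *lr = libraw_init(0);
    if (libraw_open_file(lr, argv[1]) != LIBRAW_SUCCESS ||
        libraw_unpack(lr) != LIBRAW_SUCCESS) {
        libraw_close(lr);
        return 1;
    }

    /* rawdata.raw_image is the undemosaiced CFA buffer for Bayer sensors */
    unsigned short *cfa = lr->rawdata.raw_image;
    int w = lr->sizes.raw_width, h = lr->sizes.raw_height;
    printf("%d x %d, first value: %u\n", w, h, (unsigned)(cfa ? cfa[0] : 0));

    libraw_close(lr);
    return 0;
}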

The tricky thing is that the only good way to write RAW files is with the Adobe DNG SDK, which is a giant, horribly documented mess of code. You can technically write a DNG by creating a TIFF with the extra DNG-specific metadata, but that’s actually harder and you may end up with corrupted DNGs.

At the point I got distracted from my DNG experiments, I had managed to open a DNG, access the array of RAW sensor values, and start playing with manipulating the output and writing out RAW files. I didn’t get much further than that though. I’ll probably get back to it at some point in the next couple of months, but if I can help anyone else along with the process I’m happy to.

Hi, not sure if this will be useful, but a few months ago I also had a similar idea of combining multiple raw files. I started with the DNG SDK, but had some issues building it under Windows and figuring out all the required dependencies, and eventually gave up. Then I found the raw2dng Linux utility on GitHub, which I was able to easily build under Ubuntu running in VirtualBox. So I gave it a try and added quick-and-dirty functionality to read multiple source DNG files and combine them into a single DNG file. So far it has worked fine for my DNG files (converted from PEF files), as far as I can tell.

I’ve just pushed my fork if anyone is interested.

Building under Ubuntu 18 works like this:

sudo apt install git cmake build-essential libexiv2-dev libjpeg-dev libraw-dev libexpat-dev zlib1g-dev
git clone https://github.com/mrociek/raw2dng.git
cd raw2dng && cmake . && make && sudo make install

For some reason it refuses to build under Ubuntu 20, but I didn’t have time to investigate the problem.

Basic usage:

raw2dng -o target.dng -g g.dng -b b.dng r.dng