Digitizing film using DSLR and RGB LED lights

I do minimal processing of raws in my hack software, and I’ve found you “need” these three things:

  1. Camera white balance coefficients. Even these can be determined after the fact; I like to have them for convenience in batch-producing proof images.
  2. Black subtraction number: this doesn’t apply to all cameras; my Nikon D7000 didn’t need one, but the Z6 does. For Nikon I think it should be a single value, but some cameras deliver a number for each channel in the Bayer or X-Trans array.
  3. Color profile. This is the 3x3 matrix used to convert the raw image from the camera colorspace to whatever colorspace you desire next. The raw processor probably already has suitable numbers, but it probably won’t assign them to TIFFs. In a DNG, those would be stored in one of the ColorMatrix tags, the one whose corresponding CalibrationIlluminant tag is set to “D65”. I snarfed the D7200 numbers from RT’s camconst.json file for you:

"dcraw_matrix": [ 8322,-3112,-1047,-6367,14342,2179,-988,1638,6394 ], // adobe dng_v9.0 d65

There’s probably more to this needed to support your workflow, but it should get you going. These numbers need to be divided by 10000.0 to produce the float numbers needed in the DNG tag.
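To make the scaling concrete, here is a small Python sketch using the D7200 numbers above; it just divides by 10000.0 and reshapes the flat list into the 3x3 row-major layout the ColorMatrix tag describes:

```python
# The integer dcraw_matrix entries from camconst.json, scaled by 10000.0
# to get the float values a DNG ColorMatrix tag expects (3x3, row-major).
dcraw_matrix = [8322, -3112, -1047, -6367, 14342, 2179, -988, 1638, 6394]
color_matrix = [v / 10000.0 for v in dcraw_matrix]

# Reshape into rows for readability; the DNG tag stores them flat as rationals.
rows = [color_matrix[i * 3:(i + 1) * 3] for i in range(3)]
print(rows[0])  # [0.8322, -0.3112, -0.1047]
```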

BTW, if you are going through the channel separation to retain the original measurements in constructing the RGB pixels, you might investigate just using the ‘half’ demosaic algorithm. In dcraw this is invoked with -h; it makes an RGB image half the size of the original, where each 2x2 quad of sensor pixels becomes a single RGB pixel built from the quad’s original measurements. FWIW…
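A minimal pure-Python sketch of what ‘half’ demosaic does, assuming an RGGB pattern (other CFA layouts would need the offsets adjusted, and real raw data would come in as a 2D array rather than nested lists):

```python
def half_demosaic(raw):
    """Collapse each 2x2 Bayer quad into one RGB pixel (RGGB assumed).

    The R and B measurements are used as-is; the two G sites in the
    quad are averaged. The output is half the size in each dimension,
    and no values are interpolated from neighboring quads.
    """
    h, w = len(raw) // 2, len(raw[0]) // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            r = raw[2 * y][2 * x]
            g = (raw[2 * y][2 * x + 1] + raw[2 * y + 1][2 * x]) / 2
            b = raw[2 * y + 1][2 * x + 1]
            row.append((r, g, b))
        out.append(row)
    return out

# Tiny 2x2 mosaic: R=100, G=200 and 220, B=300
print(half_demosaic([[100, 200], [220, 300]]))  # [[(100, 210.0, 300)]]
```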


The three tags you mentioned are already included in makeDNG. As for their values: the white balance (‘AsShotNeutral’, I believe) will be very close to 1,1,1 (I control the R, G, and B exposures); the black subtraction levels from the camera are per CFA channel but identical in value, so one value is probably enough; and the ColorMatrix2 (produced by Adobe DNG Converter) is identical to the one you listed. Now, the matrix values are most likely wrong, because the color separations effectively take the camera color space out of the equation. I think I need to look into the ICC profile linked by rom9 in ‘Any interest in a “film negative” feature in RT?’ (post #177) and see if I can extract something from there.

There are, of course, other mysteries in an Adobe DNG Converter-processed .nef. The white level listed is 15892, while the dcraw TIFF has all saturated pixels at 16383.

In addition, the pixel values appear to have been scaled (in camera or during decompression) because the histogram has more or less evenly spaced gaps (every 6th or 7th value has zero samples for red and blue, and every 40th value for green); irrelevant but annoying.
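As an aside, that kind of comb pattern is easy to spot programmatically. A toy Python sketch; the 1.18 factor here is only an assumed illustration of an ~18% in-camera boost, not a value measured from the files:

```python
def gap_positions(values, maxv):
    """Return the histogram bins (codes) that received zero samples."""
    hist = [0] * (maxv + 1)
    for v in values:
        hist[v] += 1
    return [i for i, n in enumerate(hist) if n == 0]

# Toy illustration: scaling integer codes by an assumed 1.18x boost leaves
# periodic output codes that no input maps to, i.e. evenly spaced gaps.
scaled = sorted({round(v * 1.18) for v in range(100)})
gaps = gap_positions(scaled, max(scaled))
print(gaps[:3])  # the gaps recur roughly every 6th or 7th code
```

This reproduces the observed symptom: a multiplicative rescale of integer data cannot fill every output code, so zero-count bins appear at regular intervals.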

Come to think of it, this was the rabbit hole I tried to avoid…

RawTherapee’s camconst.json file has interesting information on this:

Down the rabbit hole we go… :scream:


Thanks for the insight. The raw image values from dcraw as well as from the Adobe DNG files clip at 16383, so specifying a lower white level seems pointless. In addition, the red and blue channels seem to have been boosted by 18% and green by 2.5% in camera, straight after digitizing, as the raw images are identical. I will use the Adobe white point for all it is worth.

I looked into the @rom9 ICC profile FilmNegRGB_650_550_460-elle-V4-g10.icc and it contains this matrix:
Media White Point : 0.9642 1 0.82491
Chromatic Adaptation : 1 0 0 0 1 0 0 0 1
Red Matrix Column : 0.47734 0.18016 0.0
Blue Matrix Column : 0.1425 0.0294 0.81795
Green Matrix Column : 0.34436 0.79044 0.00696
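For what it’s worth, those column tags assemble into the RGB→XYZ matrix like this (a quick Python check). Note that the rows sum to the Media White Point, which is how ICC matrix profiles are constructed, and that a DNG ColorMatrix runs in the opposite direction (XYZ→camera), so it would need the inverse of this matrix:

```python
# Assemble the 3x3 RGB->XYZ matrix from the ICC "Matrix Column" tags.
# Each ICC tag holds one *column* of the matrix.
red   = (0.47734, 0.18016, 0.0)
green = (0.34436, 0.79044, 0.00696)
blue  = (0.1425, 0.0294, 0.81795)

rgb_to_xyz = [[red[i], green[i], blue[i]] for i in range(3)]

# Sanity check: mapping pure white (1,1,1) must land on the media white
# point, i.e. each row sums to (0.9642, 1.0, 0.82491) as listed above.
white = [sum(row) for row in rgb_to_xyz]
print(white)
```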

Of course, I have no idea if I should use it together with, or instead of, ColorMatrix1.
Still a long way from home…

In the Adobe workflow, ColorMatrix1 and ColorMatrix2 are interpolated between to arrive at a matrix tailored to the color temperature of the scene lighting. In the “regular folks” workflow, the D65 ColorMatrix2 is used and white balance is corrected separately, a legacy from dcraw.
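For anyone implementing that interpolation, the DNG spec describes the blend weight as linear in reciprocal color temperature (mireds). A hedged Python sketch, assuming the common calibration illuminants Standard A (~2850 K) for ColorMatrix1 and D65 (~6504 K) for ColorMatrix2:

```python
def interpolate_color_matrix(cm1, cm2, temp_k, t1=2850.0, t2=6504.0):
    """Blend ColorMatrix1 (illuminant ~t1) and ColorMatrix2 (~t2).

    The weight is linear in 1/T (mired space); outside [t1, t2] the
    nearer matrix is used as-is. cm1/cm2 are 3x3 nested lists.
    """
    if temp_k <= t1:
        return cm1
    if temp_k >= t2:
        return cm2
    w = (1.0 / temp_k - 1.0 / t2) / (1.0 / t1 - 1.0 / t2)  # weight of cm1
    return [[w * a + (1.0 - w) * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(cm1, cm2)]
```

One consequence worth noting: if the same matrix is stored in both tags, the interpolation becomes a no-op for any temperature.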

The camconst.json prose I linked to describes why one would want to back the white point off of the integer maximum, 16383 in the case of 14-bit raw data.

My thinking would be that you’d want to use a color matrix that was made for the camera used to capture the digitization, but in this digitizing film business there seems to be some homage required to a matrix corresponding to the original film response, which I don’t understand. @rom9 does, I’d bet… :smiley:

Hi all, sorry for the delay :slight_smile:

@damirk, if I correctly understand your process, it seems to me that you are actually eliminating the filter cross-talk between channels.
For example: when you process the red-light shot, the values you read in the green pixels actually contribute to the red channel in the output image, right? In other words, you redirect the interference values to the channel where they belong…
Well, in this case, I would think that the original camera matrix is not useful, or even misleading.

After all, the camera matrix is a way of describing how the sensor responds to a scene illuminated by broad-spectrum white light (D65 or D50), hitting all channels at once. And that description is strongly influenced by the CFA cross-talk; if we remove that by pre-processing three mono shots, I’d think that the camera matrix won’t tell the truth anymore.
(…please note this is entirely based on intuition, I have zero science to support it :rofl:)

The ICC profile I’ve made up is based on three spectral primaries at about the same wavelengths as the film’s peak sensitivities. Since we are interested in getting the “amount of light” that each color dye has recorded, I think it would make sense to use those wavelengths as the primaries of our input colorspace.
That said, I’m not sure that choosing spectral primaries is the right choice: color dye sensitivity curves are not so narrow-band, so the “true” primaries that better describe film sensitivity could lie inside or outside the spectral locus… who knows, I have no idea how to measure that. Hopefully, though, choosing the peak wavelengths should at least take us closer to the truth :slight_smile:

Regarding the interpolation between ColorMatrix1 & 2 that @ggbutcher pointed out: if any of the above makes sense, by extension I would guess that the same matrix should be used regardless of the color temperature. So I would write the same matrix in both tags, to be sure :wink:


@rom9, when designing my process, I considered using the leaks into the other channels but decided against it. For example: when processing the red-light shot, the values in the green pixels (and particularly in the blue pixels) are way too low, and thus too noisy if boosted to acceptable levels by white balance. Overexposing and combining (HDR style) is also way too complicated for the benefit (I am already going overboard with this!). So I combine only the red-channel pixels from the red-light shot (discarding its green and blue pixels) with only the green pixels from the green-light shot and only the blue pixels from the blue-light shot. I end up with the equivalent of a white-light shot minus the cross-talk between the color channels, and without worries about the camera color space.
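My reading of that combination step, as a pure-Python sketch (an RGGB layout is assumed; the three inputs are the undemosaiced mosaics from the red-, green-, and blue-light exposures, and the output is the synthetic single-shot mosaic):

```python
def combine_shots(red_shot, green_shot, blue_shot):
    """Build one synthetic Bayer mosaic from three single-light shots.

    Keep only the R sites of the red-light shot, the G sites of the
    green-light shot, and the B sites of the blue-light shot (RGGB
    assumed: R at even/even, B at odd/odd, G elsewhere).
    """
    h, w = len(red_shot), len(red_shot[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if y % 2 == 0 and x % 2 == 0:      # R site
                out[y][x] = red_shot[y][x]
            elif y % 2 == 1 and x % 2 == 1:    # B site
                out[y][x] = blue_shot[y][x]
            else:                              # G sites
                out[y][x] = green_shot[y][x]
    return out

print(combine_shots([[1, 2], [3, 4]],
                    [[5, 6], [7, 8]],
                    [[9, 10], [11, 12]]))  # [[1, 6], [7, 12]]
```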

As for ColorMatrix1 and ColorMatrix2, I can easily make them the same, but what coefficients should I use? The ones from the DNG profile? The ones from your ICC profile? Something else? I understand that your profile may not be a perfect match, but it is likely closer than the camera profile.

I still do not know what ForwardMatrix1 & 2, ProfileHueSatMap, and ProfileLookTable are all about, but I think it is safe to ignore them. I will copy the white level from the Adobe DNG although, to me, it does not make any sense and nobody seems to use it (Adobe included).

@NateWeatherly, your first three points are correct. On the fourth point: I do not plan to process the captured images immediately. Rather, I intend to archive them as raw DNG files encapsulating as much of the available info as possible for future processing (after the original separation NEF and DNG files have long been discarded). The way I see it, should I get run over by a truck, even with the best note-keeping nobody will be able to do anything with thousands of separation NEFs, but a synthetic raw DNG should be OK. So raw processing is scheduled for after all images have been digitized (and by then, who knows which raw processors will be available).

Hey guys, I got linked here via a private Facebook group dedicated to camera scanning, and it’s really cool to see how many people are trying to work out these problems. I’ve actually been banging my head against them for over a year now, and I wrote an article about the color space issues involved in camera scanning with RGB lights: Tri-Color Scanning, Color Negative Film & Color Spaces | by Alexi Maschas | Medium

I’ve also (coincidentally) been working on a way to convert the three RGB images back into a DNG so that I can maintain my RAW workflow, and I have to say it’s… not easy. I’m a software developer by profession, so I dug into this method really early on as a possible way to keep inverted negatives in RAW for my white-light scans (there’s actually a method to correct for the color space transforms without using RGB light, if you can manipulate the RAW pixel values). Here’s how I went about learning how to write RAW files:

You need a library that will read RAW files, preferably DNG. The DNG spec is actually an extension of the TIFF spec, so you can use something like LibTiff, but LibRaw has better raw-specific functionality (these are all C/C++ libraries, for context). LibRaw is descended from dcraw, reworked into a library.

The tricky thing is that the only good way to write RAW files is with the Adobe DNG SDK, which is a giant, horribly documented mess of code. You can technically write a DNG by creating a TIFF with the extra DNG-specific metadata, but that’s actually harder and you may end up with corrupted DNGs.

At the point I got distracted from my DNG experiments, I had managed to open a DNG, access the array of RAW sensor values, and start playing with manipulating the output and writing out RAW files. I didn’t get much further than that though. I’ll probably get back to it at some point in the next couple of months, but if I can help anyone else along with the process I’m happy to.

Hi, not sure if this will be useful, but a few months ago I also had a similar idea to combine multiple raw files. I started with the DNG SDK, but had some issues building it under Windows and figuring out all the required dependencies, and eventually gave up. Then I found the raw2dng Linux utility on GitHub, which I was able to easily build under Ubuntu running in VirtualBox. So I gave it a try and added quick-and-dirty functionality to read multiple source DNG files and combine them into a single DNG file. So far it has worked fine for my DNG files (converted from PEF files), as far as I can tell.

I’ve just pushed my fork if anyone is interested.

Building under Ubuntu 18 works like this:

sudo apt install git cmake build-essential libexiv2-dev libjpeg-dev libraw-dev libexpat-dev zlib1g-dev
git clone https://github.com/mrociek/raw2dng.git
cd raw2dng && cmake . && make && sudo make install

For some reason it refuses to build under Ubuntu 20, but I didn’t have time to investigate the problem.

Basic usage:

raw2dng -o target.dng -g g.dng -b b.dng r.dng