Digitizing film using DSLR and RGB LED lights

I do minimal processing of raws in my hack software, and I’ve found you “need” these three things:

  1. camera white balance coefficients. That said, those can be determined after the fact; I like to have them for convenience in batch-producing proof images.
  2. black subtraction number: doesn’t apply to all cameras; my Nikon D7000 didn’t need one, but the Z6 does. For Nikon, I think it should be a single value, but some cameras deliver a number for each channel in the Bayer or X-Trans array.
  3. color profile. This is the 3x3 matrix used to convert the raw image from the camera colorspace to whatever colorspace you desire next. The raw processor probably already has suitable numbers, but it probably won’t assign them to TIFFs. In a DNG, those would be stored in one of the ColorMatrix tags, the one whose corresponding CalibrationIlluminant tag is set to D65. I snarfed the D7200 numbers from RT’s camconst.json file for you:

"dcraw_matrix": [ 8322,-3112,-1047,-6367,14342,2179,-988,1638,6394 ], // adobe dng_v9.0 d65

There’s probably more needed to support your workflow, but this should get you going. These numbers need to be divided by 10000.0 to produce the floats the DNG tag wants.
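To make that concrete, here’s a minimal sketch of the scaling, using the D7200 numbers above:

```python
# The camconst.json "dcraw_matrix" entries are the DNG ColorMatrix values
# scaled by 10000; divide to recover the floats the tag wants.
dcraw_matrix = [8322, -3112, -1047, -6367, 14342, 2179, -988, 1638, 6394]

color_matrix = [v / 10000.0 for v in dcraw_matrix]

# The nine values are row-major; regroup into the 3x3 matrix.
rows = [color_matrix[i:i + 3] for i in range(0, 9, 3)]
print(rows[0])  # [0.8322, -0.3112, -0.1047]
```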

BTW, if you are going through the channel separation to retain the original measurements in constructing the RGB pixels, you might investigate just using the ‘half’ demosaic algorithm. In dcraw, this is invoked with -h; it makes an RGB image half the size of the original, using each 2x2 quad of sensor pixels to form a single RGB pixel from the quad’s original measurements. FWIW…
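A minimal numpy sketch of the ‘half’ idea, assuming an RGGB layout (real code would branch on the actual CFA pattern):

```python
import numpy as np

def half_demosaic(bayer, pattern="RGGB"):
    """dcraw's -h idea: collapse each 2x2 quad into one RGB pixel, so
    every output value is an original measurement (greens averaged)."""
    if pattern != "RGGB":          # sketch handles the common layout only
        raise NotImplementedError(pattern)
    r  = bayer[0::2, 0::2].astype(np.float64)
    g1 = bayer[0::2, 1::2].astype(np.float64)
    g2 = bayer[1::2, 0::2].astype(np.float64)
    b  = bayer[1::2, 1::2].astype(np.float64)
    return np.dstack([r, (g1 + g2) / 2.0, b])

mosaic = np.arange(16).reshape(4, 4)   # toy 4x4 mosaic
rgb = half_demosaic(mosaic)
print(rgb.shape)  # (2, 2, 3): half the size in each dimension
```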

2 Likes

The three tags you mentioned are already included in makeDNG. As for their values: the white balance (‘AsShotNeutral’ I believe) will be very close to 1,1,1 (I control R, G, and B exposures); the black subtraction levels from the camera are per CFA channel but identical in value, so one value is probably enough; and the ColorMatrix2 (produced by Adobe DNG Converter) is identical to the one you listed. Now, the matrix values are most likely wrong because the color separations effectively take the camera color space out of the equation. I think I need to look into the ICC profile linked by rom9 in ‘Any interest in a “film negative” feature in RT?’ (post #177) and see if I can extract something from there.

There are of course other mysteries in the Adobe DNG Converter-processed .nef. The white level listed is 15892, while the dcraw TIFF has all saturated pixels at 16383.

In addition, the pixel values appear to have been scaled (in camera or during decompression) because the histogram has more or less evenly spaced gaps (every 6th or 7th value has zero samples for red and blue, and every 40th value for green); irrelevant but annoying.
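If anyone wants to verify the gaps, a small numpy sketch finds the empty histogram bins; the integer 7/6 scaling here is a made-up stand-in for whatever rescaling the camera or decoder applies:

```python
import numpy as np

def empty_bins(samples):
    """Raw code values that never occur; evenly spaced empty bins are a
    telltale that integer values were rescaled somewhere in the chain."""
    hist = np.bincount(np.asarray(samples).ravel())
    return np.flatnonzero(hist == 0)

# hypothetical stand-in: scale 14-bit codes by 7/6 in integer arithmetic,
# which leaves every 7th output code unpopulated
codes = np.arange(0, 14000)
scaled = (codes * 7) // 6
gaps = empty_bins(scaled)
print(gaps[:4])  # first few missing code values, spaced 7 apart
```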

Come to think of it, this was the rabbit hole I tried to avoid…

RawTherapee’s camconst.json file has interesting information on this.

Down the rabbit hole we go… :scream:

1 Like

Thanks for the insight. The raw image values from dcraw as well as from Adobe DNG files clip at 16383, so specifying a lower white level seems pointless. In addition, the red and blue channels seem to have been boosted by 18% and green by 2.5% straight after digitizing in camera, as the raw images are otherwise identical. I will use the Adobe white point for all it is worth.

I looked into the @rom9 ICC profile FilmNegRGB_650_550_460-elle-V4-g10.icc and it contains matrix:
Media White Point : 0.9642 1 0.82491
Chromatic Adaptation : 1 0 0 0 1 0 0 0 1
Red Matrix Column : 0.47734 0.18016 0.0
Blue Matrix Column : 0.1425 0.0294 0.81795
Green Matrix Column : 0.34436 0.79044 0.00696

Of course, I have no idea if I should use it together with, or instead of, ColorMatrix1.
Still a long way from home…
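For what it’s worth, each ICC “Matrix Column” is the XYZ coordinate of one primary, so stacking the three columns (in R, G, B order, whatever order exiftool prints them in) gives the RGB-to-PCS matrix, and summing across each row should recover the Media White Point listed above. A quick numpy check:

```python
import numpy as np

# Matrix columns as reported for FilmNegRGB_650_550_460-elle-V4-g10.icc.
# Each ICC "Matrix Column" is the XYZ of one primary, so the RGB -> XYZ
# (PCS, D50 white) matrix is just the three columns side by side.
red   = [0.47734, 0.18016, 0.0]
green = [0.34436, 0.79044, 0.00696]
blue  = [0.1425,  0.0294,  0.81795]

rgb_to_xyz = np.column_stack([red, green, blue])

# Sanity check: RGB (1,1,1) must land on the Media White Point tag value.
white = rgb_to_xyz @ np.ones(3)
print(white)  # ~ [0.9642, 1.0, 0.82491]
```

That the white point comes out exactly as tagged is a nice confirmation that the columns were read correctly.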

In the Adobe workflow, ColorMatrix1 and ColorMatrix2 are interpolated to arrive at a matrix tailored to the color temperature of the scene lighting. In the “regular folks” workflow, the D65 ColorMatrix2 is used and white balance is corrected separately, a legacy from dcraw.
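A sketch of that interpolation, following the DNG spec’s scheme: linear in reciprocal color temperature between the two calibration illuminants, commonly standard illuminant A (~2850 K) for ColorMatrix1 and D65 (~6504 K) for ColorMatrix2:

```python
def interp_color_matrix(cm1, cm2, cct, t1=2850.0, t2=6504.0):
    """Blend ColorMatrix1 (calibrated for ~StdA, t1) and ColorMatrix2
    (~D65, t2) linearly in reciprocal color temperature, clamping
    outside the calibration range, per the DNG spec's scheme."""
    cct = min(max(cct, t1), t2)
    w = (1.0 / cct - 1.0 / t2) / (1.0 / t1 - 1.0 / t2)   # weight of cm1
    return [a * w + b * (1.0 - w) for a, b in zip(cm1, cm2)]

cm2 = [0.8322, -0.3112, -0.1047, -0.6367, 1.4342, 0.2179,
       -0.0988, 0.1638, 0.6394]
cm1 = cm2   # writing the same matrix in both tags makes the blend a no-op
print(interp_color_matrix(cm1, cm2, 6504.0) == cm2)  # True at the D65 end
```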

The camconst.json prose I linked to describes why one would want to back the white point off of the integer high-value, 16383 in the case of 14-bit raw data.

My thinking would be that you’d want to use a color matrix that was made for the camera used to capture the digitization, but in this digitizing film business there seems to be some homage required to a matrix corresponding to the original film response, which I don’t understand. @rom9 does, I’d bet… :smiley:

Hi all, sorry for the delay :slight_smile:

@damirk, if I correctly understand your process, it seems to me that you are actually eliminating the filter cross-talk between channels.
For example: when you process the red light shot, the values you read in the green pixels actually contribute to the red channel in the output image, right? In other words, you redirect the interference values to the channel where they belong…
Well, in this case, I would think that the original camera matrix is not useful, or even misleading.

After all, the camera matrix is a way of describing how the sensor responds to a scene illuminated by broad-spectrum white light (D65 or D50), hitting all channels at once. And that description is strongly influenced by the CFA cross-talk; if we remove that by pre-processing three mono shots, I’d think the camera matrix won’t tell the truth anymore.
(…please note this is entirely based on intuition, I have zero science to support it :rofl:)

The ICC profile I’ve made up is based on three spectral primaries at about the same wavelengths as the film’s peak sensitivities. Since we are interested in getting the “amount of light” that each color dye has recorded, I think it would make sense to use those wavelengths as the primaries of our input colorspace.
That said, I’m not sure that choosing spectral primaries is the right choice: color dye sensitivity curves are not so narrow-band, so the “true” primaries that better describe film sensitivity could lie inside or outside the spectral locus… who knows, I have no idea how to measure that. Hopefully, though, choosing the peak wavelengths should at least take us closer to the truth :slight_smile:

Regarding the interpolation between ColorMatrix1 & 2 that @ggbutcher pointed out: if any of the above makes sense, by extension I would guess that the same matrix should be used regardless of the color temperature? So, I would write the same matrix in both tags, to be sure :wink:

1 Like

@rom9, when designing my process, I considered using the leaks into the other channels but decided against it. For example: when processing the red light shot, the values in the green pixels (and particularly in the blue pixels) are way too low, and thus too noisy if boosted to acceptable levels by white balance. Overexposing and combining (HDR style) is also way too complicated for the benefits (I am already going overboard with this!). So I combine only the red channel pixels from the red light shot (discarding its green and blue pixels) with only the green pixels from the green light shot and only the blue pixels from the blue light shot. I end up with the equivalent of the white light shot minus the cross-talk between the color channels, and without worries about the camera color space.
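The plane combination described above can be sketched with numpy. Here `cfa_colors` plays the role of a per-photosite color index (as rawpy exposes via `raw_colors`), and the constant 2x2 mosaics are a toy example, not real data:

```python
import numpy as np

def merge_separations(red_shot, green_shot, blue_shot, cfa_colors):
    """Keep only the red CFA sites from the red-light shot, green sites
    from the green-light shot, and blue sites from the blue-light shot.
    cfa_colors is a per-photosite color index (0=R, 1/3=G, 2=B), the
    same shape as the mosaics, e.g. rawpy's raw_colors."""
    out = np.empty_like(red_shot)
    out[cfa_colors == 0] = red_shot[cfa_colors == 0]
    green = (cfa_colors == 1) | (cfa_colors == 3)
    out[green] = green_shot[green]
    out[cfa_colors == 2] = blue_shot[cfa_colors == 2]
    return out

# toy RGGB quad: constant mosaics make the selection easy to see
cfa = np.array([[0, 1], [3, 2]])
r = np.full((2, 2), 10); g = np.full((2, 2), 20); b = np.full((2, 2), 30)
print(merge_separations(r, g, b, cfa))  # red site 10, greens 20, blue 30
```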

As for ColorMatrix1 and ColorMatrix2, I can easily make them the same, but what coefficients should I use? The ones from the DNG profile? The ones from your ICC profile? Something else? I understand that your profile may not be a perfect match, but it is likely closer than the camera profile.

I still do not know what ForwardMatrix1 & 2, ProfileHueSatMap, and ProfileLookTable are all about, but I think it is safe to ignore them. I will copy the white level from the Adobe DNG although, to me, it does not make any sense and nobody seems to use it (Adobe included).

@NateWeatherly, your first three points are correct. On the fourth point: I do not plan to process the captured images immediately. Rather, I intend to archive them as raw DNG files encapsulating as much of the available info as possible for future processing (after the original separation NEF and DNG files have long been discarded). The way I see it, should I get run over by a truck, even with the best note keeping nobody will be able to do anything with thousands of separation NEFs, but a synthetic raw DNG should be OK. So, raw processing is scheduled for after all images have been digitized (and by then, who knows which raw processors will be available).

Hey guys, I got linked here via a private Facebook group dedicated to camera scanning, and it’s really cool to see how many people are trying to work out these problems. I’ve actually been banging my head against them for over a year now, and I wrote an article about the color space issues involved in camera scanning with RGB lights: Tri-Color Scanning, Color Negative Film & Color Spaces | by Alexi Maschas | Medium

I’ve also (coincidentally) been working on a way to convert the three RGB images back into DNG so that I can maintain my RAW workflow, and I have to say it’s… not easy. I’m a software developer by profession so I dug into this method really early on as a possible way to keep inverted negatives in RAW for my white light scans (there’s actually a method to correct for the color space transforms without using RGB light if you can manipulate the RAW pixel values). Here’s how I went about learning how to write RAW files:

You need a library that will read RAW files, preferably DNG. The DNG spec is actually an extension of the TIFF spec, so you can use something like LibTiff, but LibRaw has better raw-specific functionality (these are all C/C++ libraries, for context). LibRaw is actually a ground-up rewrite of dcraw as a library.

The tricky thing is that the only good way to write RAW files is with the Adobe DNG SDK, which is a giant, horribly documented mess of code. You can technically write a DNG by creating a TIFF with the extra DNG-specific metadata, but that’s actually harder and you may end up with corrupted DNGs.

At the point I got distracted from my DNG experiments, I had managed to open a DNG, access the array of RAW sensor values, and start playing with manipulating the output and writing out RAW files. I didn’t get much further than that though. I’ll probably get back to it at some point in the next couple of months, but if I can help anyone else along with the process I’m happy to.

1 Like

Hi, not sure if this will be useful, but a few months ago I also had a similar idea to combine multiple raw files. I started with the DNG SDK, but had some issues building it under Windows and figuring out all the required dependencies, and eventually gave up. Then I found the raw2dng Linux utility on GitHub, which I was able to easily build under Ubuntu running in VirtualBox. So I gave it a try and added quick-and-dirty functionality to read multiple source DNG files and combine them into a single DNG file. So far it has worked fine for my DNG files (converted from PEF files), as far as I can tell.

I’ve just pushed my fork if anyone is interested.

Building under Ubuntu 18 works like this:

sudo apt install git cmake build-essential libexiv2-dev libjpeg-dev libraw-dev libexpat-dev zlib1g-dev
git clone https://github.com/mrociek/raw2dng.git
cd raw2dng && cmake . && make && sudo make install

For some reason it refuses to build under Ubuntu 20, but I didn’t have time to investigate the problem.

Basic usage:

raw2dng -o target.dng -g g.dng -b b.dng r.dng
3 Likes

So it’s been a LONG time since I started working on this - I bought an RGB LED light nearly two years ago, poked at it a bit, but then shelved it. ( Just bought me an Aputure MC. Looks like it is going to be a love affair. - #15 by Entropy512 )

I finally picked the project back up today, and finished a Python script that:

  • Controls a Neewer RGB176 light - Amazon.com - sadly no longer available, so if anyone else wants to use this, hope that Neewer’s other RGB lights use the same BLE protocol. Amazon.com may be fully compatible protocol wise since it claims to be a refresh of what I have, but TBD
  • Captures a red-only, green-only, and blue-only image and displays the maximum value for the captured CFA plane so that you can adjust the light intensity and shutter time.
    – Suggested tuning: use a segment of unexposed negative and initially set all brightnesses to maximum. Adjust shutter speed until the dimmest channel is within range with some margin, then adjust the backlight brightnesses of the other channels to bring them into range.
  • Merges those three planes into a single image and writes it as a DNG using the same techniques as my Python image conversion tools. Due to a few small things not yet implemented in tifffile, I save as a floating-point TIFF. An advantage of this is that I could stack multiple exposures if there’s ever a scenario where that might be beneficial, but so far that seems unlikely.
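The per-plane maximum readout described in the second bullet can be sketched in numpy; the 4x4 RGGB mosaic below is a toy stand-in for real sensor data (the actual script reads the camera’s CFA layout from the raw file):

```python
import numpy as np

def plane_maxima(mosaic, cfa_colors):
    """Max raw value per CFA plane (0=R, 1/3=G, 2=B), for judging how
    close each channel is to clipping while tuning light intensities."""
    return {
        "R": int(mosaic[cfa_colors == 0].max()),
        "G": int(mosaic[(cfa_colors == 1) | (cfa_colors == 3)].max()),
        "B": int(mosaic[cfa_colors == 2].max()),
    }

cfa = np.tile(np.array([[0, 1], [3, 2]]), (2, 2))   # toy 4x4 RGGB layout
mosaic = np.arange(16).reshape(4, 4)
print(plane_maxima(mosaic, cfa))  # {'R': 10, 'G': 14, 'B': 15}
```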

Right now the DNG color metadata recycles the camera’s. That’s not really valid for this use case. I don’t know if the best solution is to set the ColorMatrix so the primaries match the LED wavelengths, or set the RT filmneg tool to do all work in raw colorspace and also disable the camera input profile. Probably best would be to find a way to characterize the primaries of the film itself… No bueno if your shots are 25 years old. (Edit: Not necessarily! Current Gold 200 and recently-discontinued-but-datasheet-still-available Superia X-Tra 400 are really close matches to Gold 400 from the mid-late 1990s and “S-400” from the very late 1990s/early 2000s, and making a DCP profile using the published SSFs has AMAZING results.)

Current scripts are on github at GitHub - Entropy512/rgb_led_filmscan: RGB LED capture of film negatives, inspired by NateWeatherly on discuss.pixls.us

Edit: Also for reference, so far I’m using Amazon.com for holding my negatives, which has a built-in diffuser since the Neewer light’s diffuser isn’t so hot.

3 Likes

It would be helpful if you could show us the difference between the results of a normal scan and an RGB separation scan with a sample image.

I only have one before/after example where I feel comfortable getting permission from the people in it to post publicly. This shows not just the RGB backlight difference and the associated color management changes (such as using a DCP profile derived from the film SSF), but also my work on fixing Film Negative - Improper assumption of mapping between raw values and transmission coefficient · Issue #7063 · Beep6581/RawTherapee · GitHub

Before (Me in December 1998, that film camera’s clock was clearly WAY off as the RPI/Union weekend was December 4/5 according to Cornell’s men’s hockey schedule…):


After:

In general, trying to bump up the saturation of the “old” version to compensate would result in the hue of the reds being way off. In the “new” process, it’s proper Cornell red. Or as close as sRGB can get to it for a recently issued pepband shirt.

I’m trying to find the original raw capture of the “old” version to see what the difference is if I do the “film SSF profile” and “toe adjustment” changes. The “before” example was from May 2018.

2 Likes

Hello! I’ve been working on a workflow for negative film inversion and found your approach to building input profiles from SSF data extremely interesting.
I’m trying to understand some of the details, but more than a few parts are still unclear to me, and I’d really appreciate any guidance or clarification you might be willing to share. I also reached out briefly by email, but wanted to follow up here in case this is a better place to connect. Thanks in advance for your time.

1 Like

Do keep us updated on how you do. I’d like to try this some day.

Welcome to the forum. I’ve done a lot with using spectral data to make profiles, including measuring my cameras with a spectroscope. If you have data for your camera, the FOSS command-line dcamprof and its commercial gui-sibling Lumariver are the only tools I’m aware of that box the process into something usable. This is a good read on camera profiling in general, oriented to using dcamprof:

https://rawtherapee.com/mirror/dcamprof/camera-profiling.html

About halfway down the table of contents you’ll find “Making a profile from SSFs”, a specific how-to for your interest.

If you don’t have SSF data for a specific camera you’re interested in, post the make/model here and I’ll look for it in my collection. There have been a number of projects to measure such, mostly as university research; whenever I run across one I stash their data. Some of it, if licensed appropriately, I’ve posted here:

https://github.com/butcherg/ssf-data

Of note, the ‘gold standard’ for measuring a camera’s SSF is using a lab-grade monochromator to produce a sequence of ‘single-frequency’ wavelengths, with a luminance meter to correct illuminant bias. Some of the data in my collection was measured that way, but other sets use various alternatives involving diffraction gratings (my method), filters, etc. My experience is, they’re all close enough for the purpose, giving deltaE numbers from reference spectra well better than what you get from using a ColorChecker target shot.

3 Likes

Cripes, the camera-profiling.html doesn’t have the specific how-to, that’s located in the dcamprof documentation, here:

https://rawtherapee.com/mirror/dcamprof/dcamprof.html#workflow_ssf

Sorry about that…

2 Likes

Hello Glenn,

I really appreciate your reply! However, I think the approach I’m trying to take with SSF profiling is a bit different from typical camera profiling.

I was wondering if it might be possible to have a quick chat, or if I could send you a message to explain things a bit better. I’d really appreciate it.

Thanks!

1 Like

Ah, get that now, reading above your post.

I haven’t replied to your email yet because I’ve been sick for the past few days, but in general, I vastly prefer to have discussions like this in the open.

I’m pretty sure I linked my github repo where I have my capture utility, but in general:
  • I use a controllable RGB LED light from Neewer to backlight the negative. Brightness on the LEDs is adjustable so that the green channel lit by green light, the blue channel lit by blue light, etc. are all fairly close to each other in raw values.
  • Three captures are taken, one for each color (red, green, blue).
  • The red CFA sites from the red capture, green from green, and blue from blue are combined into a synthetic DNG that, at least in theory, has the camera’s CFA behaviors completely eliminated. This is similar to how professional film scanners work - usually a monochrome sensor combining red, green, blue, and IR (for scratch/dust detection/compensation) exposures. All of this is done automagically by rgb_led_filmscan/capture_negative.py at main · Entropy512/rgb_led_filmscan · GitHub

Right now calibrating/choosing the RGB intensities for the Neewer is a manual process, but you only need to do it once per capture session. I just realized I should modify the script to allow not providing an output file, in which case it’ll just print out the max/min level statistics so you can do the brightness tuning. The rough tuning process is: set RGB to full brightness on all channels; adjust the shutter speed until one of the channels is not clipping and has some margin from clipping; then adjust the light RGB values for the other two channels downwards until those channels are also not clipping.

Because I’ve almost completely eliminated any effect the camera’s CFA has on color management, I can now generate a DCP profile based on the film datasheet SSFs, and use THAT as the camera profile in RawTherapee. The process for doing so is EXACTLY the same as doing so for a digital sensor - I used dcamprof’s documentation with no special steps at all. So all of the tips @ggbutcher gave you are actually valid for this workflow!

Note that RawTherapee’s film negative tool defaults to performing inversion after color conversion. I strongly disagree with this and consider it to be bad practice. It is absolutely the wrong thing to do when working with these composited captures. Fortunately the filmneg module can be reconfigured to operate prior to color conversion.

So the rough pipeline that is occurring in RT is:
  1. Invert the color channels with the filmneg tool.
  2. Demosaic.
  3. Convert colors to the working profile, using a DCP generated from the film’s spectral sensitivities instead of one for the capture device - as far as RT is concerned, the film is the camera with respect to color management!

The key here is that the capture method greatly simplifies the color management task, allowing you to just tell RT that you’re using a “camera” with a color profile that is derived from the film datasheet SSFs, completely ignoring the actual spectral behavior of the camera that was used to capture the negative.

rgb_led_filmscan/film_data at main · Entropy512/rgb_led_filmscan · GitHub has data for a few types of film I was able to get datasheets for (I just noticed I need to fix the formatting of README.md to make it easier to read). The CSVs can be converted to dcamprof JSON using rgb_led_filmscan/ssfcsv_to_json.py at main · Entropy512/rgb_led_filmscan · GitHub
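As a rough illustration of what such a conversion involves (this is not the actual ssfcsv_to_json.py; the field names follow dcamprof’s documented SSF JSON layout, so double-check them against the dcamprof docs before relying on this):

```python
import json

def ssf_csv_to_json(csv_text, name="film-ssf"):
    """Convert 'wavelength,R,G,B' CSV lines into the JSON layout dcamprof
    reads for SSF input (field names per dcamprof's docs; verify against
    your dcamprof version)."""
    bands, r, g, b = [], [], [], []
    for line in csv_text.strip().splitlines():
        w, rv, gv, bv = (float(x) for x in line.split(","))
        bands.append(w); r.append(rv); g.append(gv); b.append(bv)
    return json.dumps({"camera_name": name, "ssf_bands": bands,
                       "red_ssf": r, "green_ssf": g, "blue_ssf": b})

# toy three-band example (real SSFs would be sampled every few nm)
csv_text = "400,0.01,0.02,0.90\n550,0.05,0.95,0.03\n650,0.92,0.04,0.01"
print(ssf_csv_to_json(csv_text))
```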

Now, the work I had in my RT fork at Commits · Entropy512/RawTherapee · GitHub is a whole other animal - it doesn’t deal with SSFs at all; it’s more about handling film response nonlinearities, and it’s a huge work in progress that I need to revisit at some point.

1 Like

Hi Andy,

Thank you so much for your reply, I really appreciate it. And no worries at all about the email, I genuinely hope you start feeling better over the next few days.

Thanks as well for the feedback. Just to give you a bit more context, I don’t usually perform my inversion process inside RawTherapee. I tend to do the inversion manually in DaVinci Resolve, and I use RawTherapee more as a way to prepare the input that I’ll later work with in Resolve.

For a long time, I’ve felt the need to find a better way to manage the color of the negative before inversion. I’ve explored a number of approaches, including a tool called negicc (GitHub - arufahc/negicc: ICC Profile for Color Negative Film · GitHub). I’ve been trying to build something with it, although I haven’t fully managed to gather the resources to do so yet.

It was through that process that I came across your method and your proposal of using SSF data to create an input profile. I don’t really have much programming experience, almost none to be honest, so I’ve found it quite challenging to follow some of the instructions. I was also a bit confused by the file structure and naming in the repository. Still, I think I managed to compile, using dcamprof, an input profile based on the data you provided for Kodak Gold 200.

When I use that profile to prep a file using RawTherapee, the processing improves significantly. The colors of the negative become much more balanced, and the whole process feels coherent and appropriate to me.

Regarding your point about performing inversion before color conversion, I assume that means my workflow isn’t technically the most correct approach, but it’s what has been working for me so far.

If possible, I’d really appreciate any guidance you could share on how to gather the SSF data from film datasheets using the plotting tool. I’d love to build an input profile for Portra 400, which is the film I use the most, but I honestly don’t know how to turn those plotted curves into the kind of data that ends up in your CSV files.
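Not speaking for Andy, but a common route is to trace the datasheet curves with a plot digitizer (WebPlotDigitizer or similar), then resample the unevenly spaced points onto a uniform wavelength grid, remembering that the published curves are usually log10 sensitivity. A sketch with made-up values:

```python
import numpy as np

# Hypothetical points clicked off a datasheet log-sensitivity plot with a
# digitizer; they come out unevenly spaced in wavelength.
wl   = np.array([440.0, 462.0, 501.0, 540.0])   # nm, as digitized
logs = np.array([-0.3, 0.0, -0.5, -1.2])        # log10 relative sensitivity

# Resample onto a uniform grid and undo the log, since SSF data is
# normally expressed as linear sensitivity per wavelength band.
grid = np.arange(440.0, 541.0, 10.0)            # uniform 10 nm grid
sens = 10.0 ** np.interp(grid, wl, logs)
print(len(grid), float(sens[0]))
```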

Any information you could share about this, or anything else related to the topic, would be incredibly helpful. I’m truly very grateful in advance.

Thank you again, and I hope you continue to feel better over the next few days.