inverting photos of color negatives with darktable

Because the thread title originally refers to darktable, here’s my quick try with it.

A tone curve for inverting, white balance set manually from the lateral frame stripes, a second S-shaped tone curve, plus exposure correction and mirroring. No further optimization done.
I did not really try to match the coloring of the original print, but the result looks decent to my eyes.

polar_bear
polar_bear.cr2.xmp (3.5 KB)

Here is the image with some more optimization:
polar_bear_01
polar_bear.cr2.xmp (5.1 KB)


There is even structure in the sky …

house
house.cr2.xmp (4.8 KB)


Not having used ImageJ before, it was not at all clear to me that I have to select the different channels and adjust them individually. After I found “Image > Color > Channels Tool”, I was able to reproduce your result. :slight_smile:

If I understood the GIMP auto levels code correctly at first glance, it searches for the first (and last) point in the histogram that clips at least 0.6 % of the pixels (and is not an empty point in the histogram) and adjusts each channel to that range. So it’s not terribly complicated… I also spent a few minutes digging through the ImageJ code, but didn’t immediately find the autoscaling code. I doubt it is much different.
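Just for illustration, the per-channel stretch could look roughly like this in Python/numpy (the 0.6 % clip fraction and the histogram handling are only my reading of the description above, not the actual GIMP code, and the function names are mine):

import numpy as np

def stretch_channel(channel, clip=0.006, bins=256):
    # find the first/last histogram bin beyond which ~0.6 % of the pixels get clipped
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    cum = np.cumsum(hist)
    lo = np.searchsorted(cum, clip * channel.size)
    hi = bins - 1 - np.searchsorted(np.cumsum(hist[::-1]), clip * channel.size)
    black, white = edges[lo], edges[hi + 1]
    # map [black, white] to [0, 1] and clip the rest
    return np.clip((channel - black) / max(white - black, 1e-6), 0.0, 1.0)

def auto_levels(img):  # img: float array of shape (H, W, 3), values 0..1
    return np.dstack([stretch_channel(img[..., c]) for c in range(3)])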

That is awesome and really helpful. So what actually works is to use the tone curve to invert the image, not the base curve or the invert module. I gave up too early on the tone curve because I kept “scale chroma” on automatic instead of setting it to manual and inverting the a and b curves like you did. I also found that “automatic in RGB” does the trick as well.

Another question for someone who understands darktable: From what I can tell, the color balance module in darktable can be used to set the black and (indirectly, via gain) white points for each RGB channel, so it could be used to apply the RGB histogram adjustment directly in darktable. A) Is this correct? B) Is that the only module or did I miss something? C) Is there an automatic adjustment in darktable, similar to GIMP’s auto levels or the ImageJ auto adjustment?

So at this point, I see 3 options to get to some automated workflow:

  1. Crop the image in darktable (can be done automatically with a fixed negative holder), export tiff, use GIMP in batch mode to auto-adjust colors. (I’ll post scripts later.)
  2. Edit one picture similar to what @Thomas_Do did in darktable and just use that on all pictures without any auto adjustments. I’ll have to test how the results are with different pictures.
  3. Crop the image in darktable, export tiff, implement my own histogram adjustment and write the calculated parameters for the color balance module back to the xmp sidecar. Sounds like a lot of effort, but it might work out.

Hi!
Here’s my take with darktable :slight_smile:
Two tone curves and lift/gamma/gain for the color balance


house.cr2.xmp (40.3 KB)


I quickly want to share the results of some experiments with doing the auto adjustment in GIMP. I didn’t fully automate all steps from the raw file to a jpeg (yet), but for the GIMP part I only had to slightly modify an example I found to add the inverting step.

batch-invert-levels-stretch.scm (785 Bytes)
batch-levels-stretch.scm (727 Bytes) (for a version without inverting)

The files are placed in ~/.gimp-2.x/scripts/ or, when using the flatpak (as 2.10 has support for 16 bit tiffs), in ~/.var/app/org.gimp.GIMP/config/GIMP/2.10/scripts/. Then the following command processes the tiffs in-place:

gimp -i -b '(batch-invert-levels-stretch "*.tif")' -b '(gimp-quit 0)'

Using that, the workflow is

  1. in darktable, crop and disable the base curve
  2. export as 16bit tiff
  3. batch-process the tiffs using the GIMP command above

Here is an overview of the results in comparison with the flatbed scanner results (and also a zip with the full size jpgs). I also tested not disabling the base curve and doing the color inversion using darktable (tone curve with auto RGB mode), but the results are always worse. Also a few other test pictures (that I don’t want to share here because they show people) confirm that. Note that I didn’t do any other processing (e.g. contrast) on purpose.

processing_examples_gimp_batch processed_pictures_gimp_batch.zip (54.3 MB)

I think the automatic results are good enough for me as a first step. Manual processing is always an option to do later (or in a few decades…) for the pictures that are important enough.

Thanks again to everybody who has posted here! And of course, if someone still has thoughts on how to better process the negatives or can explain why the invert module in darktable is not working as expected, I’m always interested. :slight_smile:


Apologies if previously posted. Here’s a post I made back in July for a similar thread:

I re-reference it here because I think the histograms are particularly telling. If you look at the histogram in the second screenshot, you can see the plots of the three channels are similar, but shifted - note where the peak in each occurs. For an image without a dominant color, this histogram illustrates the shift in the channels that the orange cast represents, and points to what needs to be done to correct it. The rest of the thread describes the application of three separate curves, one each to the individual R, G, and B channels, specifically to put their black and white points at the bottom and top of the respective data, so to speak. The next curve is a full RGB one to do the inversion, and ta-da! - positive image, appropriate colors. After this, I’ve found white balance is still usually needed, probably due to the difference between the scene’s light and the temperature the film is balanced for.

Okay, the above doesn’t directly address darktable, but it might provide insight into why what you’re seeing is occurring.


It wasn’t posted here yet, but I’ve seen your post before, just didn’t give it the attention it deserved. :wink:

Without testing rawproc yet, I took a quick look at your code for determining the black and white point and at the default settings (I hope I found the right code snippets), and it seems that, in contrast to GIMP (which clips a fixed fraction of the histogram to black or white), you look for the first/last histogram value that reaches 5% of the maximum. Just out of curiosity, did you compare different approaches and choose this one for a reason, or did you just implement this and it worked?
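For comparison, the criterion as I read it would be something like the following sketch (again just my interpretation, not rawproc’s actual code; bin count and variable names are made up):

import numpy as np

def bounds_from_peak(channel, frac_of_max=0.05, bins=256):
    # first/last histogram bin whose count reaches 5 % of the highest bin,
    # instead of clipping a fixed fraction of all pixels as GIMP does
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    big = np.nonzero(hist >= frac_of_max * hist.max())[0]
    return edges[big[0]], edges[big[-1] + 1]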

I’m also curious about what happened to your box full of negatives. If you already processed many pictures: Are there some cases in which the automatic processing totally fails? Also, did you already write about your physical setup (light source, negative holder etc) somewhere?

Yes, I’ve found the coded defaults for blackwhitepoint to be too intrusive in a few cases, so I’ve amended them in my .conf file to be 0.005.

Note that the screenshots in the thread use the curve tool, and those points are set “by hand”, that is, I used the histogram to find the lower and upper extents of the “data clump”, and manually dragged the black and white points over to them.

The dev branch of rawproc has a “tool list” tool that lets you apply a list of commands from a text file right after the tool selected in the tree. Here’s my tool list for color negative conversion:

set tool.blackwhitepoint.auto=1
blackwhitepoint:red
blackwhitepoint:green
blackwhitepoint:blue
curve:0,255,255,0
whitebalance:auto

This list assumes a pre-cropped image. Also, the ‘set’ command is only applicable for the tool list; whatever setting existed before the tool list is applied is back in play afterward. It’s great fun to open a negative and apply this; it just zips through the operations lickety-split.

Not there yet. Recently procured a macro lens with this project in mind, need to settle on a holder of some sort. I will probably make it, as that sort of endeavor scratches my ‘tinker itch’…

This endeavor will be a priority, as my mom is now 90 and one of the last lucid persons of her generation in my family…

I’ve not tried darktable but am getting good results with RawTherapee… but only because I’m using the Negative.png HaldCLUT referred to above.

Removing the orange mask from colour negs is not as straightforward as one may hope… refer to Filmshooting is under construction
The process there (if I’ve read it right… and I stand to be corrected) requires the subtraction of inverted colour channels, which can be done using Photoshop or GIMP… if you know how. Not being a regular user of either program, I have found this too complicated, and I don’t think it would be well suited to processing a large number of photos.

However, the Negative.png HaldCLUT does a fairly good job of emulating this, although it works better for some film stocks than others (it seems good for Konica vx100 but not quite so good for Kodak Gold film). Only colour temperature and exposure adjustments are needed to get an image that is at least as good as the original shop print.

I should mention that I needed to set the working color profile to sRGB and increase the saturation to avoid reds being blown out.

RawTherapee allows a very easy and fast workflow. Once a profile has been created, it can rapidly be applied to a folder of images and minor adjustments made to each (including cropping) before queuing them for final processing. I shoot the negs in raw but use the invert function in my Pentax K1 so that the embedded jpg is positive and therefore easier to make sense of. I can capture a 36-shot film in around 15 minutes and process it to jpgs about as fast.

I’ve a feeling that if the channel mixer had an invert switch for the “side” channels then it could probably emulate the orange mask removal process described in the above link, i.e. if, for the Red channel, there was an “invert” switch for both the green and blue channels, and so on. Technical comments/corrections welcome. Would anybody support a feature request for this to the development team?
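If I read that right, such a switch would amount to mixing the inverted side channels into each output channel; a rough Python sketch of the idea (this is only my interpretation of the proposed feature, and the function name and coefficients are made up):

import numpy as np

def mix_inverted_sides(img, coeffs):
    # out_R = a*R + b*(1-G) + c*(1-B), and analogously for G and B;
    # img: (H, W, 3) floats in 0..1, coeffs: 3x3 mixing weights (rows = outputs)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    sources = [(r, 1 - g, 1 - b), (1 - r, g, 1 - b), (1 - r, 1 - g, b)]
    out = np.empty_like(img)
    for ch, (s0, s1, s2) in enumerate(sources):
        w0, w1, w2 = coeffs[ch]
        out[..., ch] = w0 * s0 + w1 * s1 + w2 * s2
    return np.clip(out, 0.0, 1.0)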

My objective is to efficiently digitize all of our old photos to pass on to our children, because there is no way that they would be able to manage the large number of photo albums we have… let’s face it… a few cubic centimeters of a couple of USB drives is going to be much easier for them to look after than a cubic meter or so of old photo albums!!

Negative.png is contained in the download at Film Simulation - RawPedia


Looking at the link in the previous post I must admit that it seems to me to contradict what I read in the literature.

I refer to two sources. One is the book by R.W.G. Hunt (former assistant director of research with Kodak), “The reproduction of colour”. In chapter 15 he gives a detailed description of why masking improves the quality of colour negatives and how it works. However, the important sentence is in chapter 31.9 on white balance. Quote: “grey world: The average RGB signals are made equal to one another. (This method works quite well on photographic negatives …)”. To a first approximation, this is what you get by setting the black point and white point in the histograms for each channel. This is also the method used by David Dunthorn in CFS-244 “Negative to Positive” on page 10, which was quoted here several times.
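To make the grey-world idea concrete, a tiny Python sketch (my own illustration, not Hunt’s or Dunthorn’s procedure): scale each channel so that its average matches the common average.

import numpy as np

def grey_world(img):
    # img: (H, W, 3) float array; make the average R, G and B equal
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-9)
    return np.clip(img * gains, 0.0, 1.0)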

A second, very good reference with an explanation of why the orange mask is needed and how it works may be found in the document “Fotografie” on page 27/28. The authors are from the Agfa company.

In the expert dialog of SilverFast one can nicely see the main steps needed for the conversion of colour negative scans to colour images (gamma correction, inversion, removal of orange mask via histogram settings).

Coming back to how I understood the initial question of this thread, these steps can be performed in batch mode by the image processing software of your choice. The results will already be quite good and enable selection of those images which deserve a detailed manual treatment for optimization.

Hermann-Josef


I tested darktable’s invert module with the images provided above and with my own “real” scans, and it did not work in the raw case. Since I did not see this stated clearly above, I think this is a bug in darktable and have therefore created a ticket: https://redmine.darktable.org/issues/12347. Feel free to add information there. I did not add the test pictures to the bug tracker since there is no license information given, but @dani_l, if you would do so yourself, I think it would ease the devs’ work to have all the data in the bug tracker.


Thanks for looking into it. Whether this is a bug in darktable or I was missing something is exactly why I created this topic, so it’s good to know that you also think it’s a bug.

I uploaded the pictures there. For future reference: Feel free to use the samples posted here (and my edits of them) as CC-BY-SA.

As I just wrote in the issue, it is also possible to use the color of the unexposed part of the film to invert the photo in GIMP, and the results are quite good. (I think I first read about the method on this website, but it’s currently down so I cannot check.)

Based on a tiff exported using darktable with the basecurve disabled:

  • pick the color of the unexposed part of the film
  • create a new layer with that color and choose the blend mode “divide”
  • merge the two layers
  • invert the image (colors > invert)

The image still looks flat (which can easily be solved), but the colors are quite good.
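In code, those GIMP steps boil down to a per-channel division by the film-base colour followed by an inversion; a minimal numpy sketch (my own illustration, assuming a linear 16-bit tiff already scaled to 0..1; the film-base numbers are just an example taken from later in this thread):

import numpy as np

def invert_via_film_base(img, film_base=(0.259, 0.078, 0.051)):
    # img: (H, W, 3) float array in 0..1; film_base: colour picked from an
    # unexposed border of the film
    divided = np.clip(img / np.asarray(film_base), 0.0, 1.0)  # "divide" blend with a flat layer
    return 1.0 - divided                                      # colors > invert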

Here are some results for this method (second last column):

Luckily, the web archive saved this article: https://web.archive.org/web/20170702193137/https://www.iamthejeff.com/post/32/the-best-way-to-color-correct-c-41-negative-film-scans.

And thanks for adding to the bug report :smile:.

Hm, this seems more difficult than I thought. I had a look at the code (darktable/invert.c at master · darktable-org/darktable · GitHub) and did not understand much, but to me it seems that the colour picker is not the source of this behaviour; it appears to work on the visible colour data only. However, in the code the inversion itself is handled differently depending on the kind of input data (Bayer raw, X-Trans raw or full RGB).

It would be a great addition if the colour coordinates selected by the invert module were shown in the module and if it were possible to enter the colour coordinates directly. This would be convenient not only for understanding the behaviour here but in general, I think. As a workaround one could decode them from the xmp file for better understanding, but it seems I do not have more time today.

Yes, I also found that and even wrote about it in the first post. :wink: I had recently started trying to understand the code better and to modify things to find out what is going on. Your post pushed me to continue, and I think I have found out what is happening and how it could potentially be fixed.

What darktable is doing is film_color - in (for each channel, where film_color is the value of the unexposed film and in is the input value of the pixel). When inverting in GIMP using the divide layer, the operation basically is 1 - in/film_color. The difference between the operations is a multiplication with film_color, which means each color channel is scaled differently.
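A quick numeric check of that difference (the film-base values are the ones from the invertscan example further down the thread, the pixel value is made up):

film = (0.259, 0.078, 0.051)   # unexposed film base per channel
pixel = (0.10, 0.03, 0.02)     # some arbitrary negative pixel

subtract = [f - p for f, p in zip(film, pixel)]      # what invert.c does
divide = [1 - p / f for f, p in zip(film, pixel)]    # what the GIMP divide layer does
# subtract[c] == film[c] * divide[c] for every channel, so the subtractive
# version scales each channel by its own film-base value -> colour cast
print(subtract)  # ≈ [0.159, 0.048, 0.031]
print(divide)    # ≈ [0.614, 0.615, 0.608] (roughly neutral)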

I have hacked the divide version into the invert module (in a not very nice way by disabling the SSE implementation – I don’t have opencl anyway). It turns out that when disabling white balance (as picking the film color already does white balance) and also disabling the base curve (why?), I get an inverted picture with the right colors. :grinning: It is still quite dull, so I cropped it and stretched the histogram using a tone curve with RGB chroma scaling:


polar_bear.xmp (3.6 KB)

Here is the quick and dirty patch I used (won’t work if you have OpenCL enabled I guess):
0001-quick-hack-to-make-inverting-sort-of-work.patch (1.4 KB)

I also just wrote about it in the bug report, but I’m not really sure if this is already the full, proper fix and, if so, how to develop it into a patch that could be merged.


Great that you already made it that far. I am not sure either if it is the correct way to fix this.
Maybe we can pull somebody into the discussion that understands much more about colour science than me: @hanatos?

At a later stage, I would guess that even if it is, SSE code would be beneficial, and OpenCL most probably as well. But first the “scientific” part, I would guess…

I wrote this recently:

This is the output after a couple of simple tweaks in RT.

The code is still missing an important calibration step, but it works quite well regardless. It is modelled on documentation I got hold of on the Kodak Cineon system.

One comment: your shots of the negatives are very underexposed; you would make much better use of the digital sensor if you increased the exposure by perhaps 2 stops or so.

The orange mask is actually quite useful, in that if it is a problem then you know you are going down the wrong path…

Thanks! I’ve stumbled across your project before but haven’t tried it yet. I suppose for the two pictures you used “invertscan” in some way? Can you describe your steps in more detail so I can try to reproduce them? (How did you generate the .tif input for it, and what did you modify in RT?)

I agree that there is still room to optimize the exposure (which I will do once the setup is complete), but I don’t think it will be 2 stops if I do not want clipping at the unexposed parts of the film. (There are also some people using blue filters to make sure all color channels are used more or less equally, but I am not sure if that will be worth the effort and the risk of finding out about side effects later.)

Yes, first I opened the file in RT and saved it as a tiff, then converted it to gamma 1.0 using ImageMagick, but you could use RT to do that. I have not figured out the steps to do this correctly in dcraw yet…

Then I did
#invertscan -c1 -fb 0.259,0.078,0.051 -dmax auto house.tif

0.259,0.078,0.051 are normalised values of the film base. The red exposure is only 25.9% of the maximum so you have almost two stops to go before you clip… You can get these values from RT.

This then results in an image where both the colours and the tonal range are roughly linear in relation to the scene. You can think of it as a sort of “film raw of the scene”. You can then use RT to balance the colours linearly and apply an appropriate tone curve.

I intend to explain this in much more detail in the future, but I have just been too busy. One of the outstanding steps is to properly calibrate the sensor to match the printing density that negative/positive density is built around. This, IMHO, is an essential step to emulate the result you would get from printing it traditionally.

If you get the exposure right you will see an improvement; at the moment the blue layer has a maximum intensity of 5%, so two stops should give an improvement.
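For reference, the available headroom in stops is just log2 of the reciprocal of those normalised film-base values; a quick check in Python:

from math import log2

film_base = {"red": 0.259, "green": 0.078, "blue": 0.051}
for name, value in film_base.items():
    print(f"{name}: {log2(1 / value):.1f} stops before clipping")
# red: 1.9, green: 3.7, blue: 4.3 -> red is the limiting channel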

I realised that I made the first invertscan run with the wrong values :slight_smile:. I adjusted them and tweaked the result in RT to match the example of the actual print you gave in your first post (very quickly).

-g 1 1

This gives you the image in so-called “linear gamma”, an identity transfer function. Or, in formal terms, goezintas = comesoutas
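As a toy check in Python (a simplification that only covers the power-law part and ignores dcraw’s second, toe-slope argument): with 1 1 the curve is the identity.

def gamma_encode(x, gamma):
    # simplified pure power law; with gamma = 1.0, output equals input
    return x ** (1.0 / gamma)

assert gamma_encode(0.42, 1.0) == 0.42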