Any interest in a "film negative" feature in RT ?

Hi, this is my first post on the forum. I’d like to ask the RT developers if there is any interest in adding a feature to RT to facilitate DIY “scanning” of color film negatives with a digital camera.
The traditional methods described in the RawPedia “Negative” page have some annoying drawbacks (inverted control sliders, the need for manual tweaking to get the colors right, etc.). After some tinkering with the RT source code, I found a simple solution that looks promising.
This method is different from the one used in Darktable’s “invert” and digiKam’s “film negative” modules, and seems to give better results (see below for details). My idea is to work directly with the raw values from the sensor, upstream of white balance, and to add a “Film Negative” tool panel in the Raw page, where the user can select the film type or manually enter the necessary parameters for the formula.

So, what do you think? Should I go on and try to implement this? Would it be a desirable feature in RT? Or do you think it would just increase software bloat, for a functionality that is rarely used? I realize that DIY film scanning is a very niche problem, so I don’t know if such a feature really belongs in RT.
Maybe it would be better to do this in an external “pre-processor” program, creating an intermediate file? That would be closer to the UNIX philosophy… in that case, see the end of the post for a ready-to-use G’MIC command line that already works pretty well.

In any case, I’d like to hear some opinions. Thanks :slight_smile:

== Details ==

The approach I found is nothing new; actually, I took it word for word from Wikipedia:


this article contains the following sentence:

“[…] the transmission coefficient of the developed film is proportional to a power of the reciprocal of the brightness of the original exposure.”

This is different from what happens when we apply the negative HaldCLUT, for example: in that case, we’re doing MAX - v (where MAX is the maximum value and v is the current value for a channel).
Instead, the article suggests we should be doing k*(1/v)^p, which is quite different.
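To make the difference concrete, here is a minimal C++ sketch of the two mappings (the function names and the default MAX value are my own, just for illustration):

```cpp
#include <cmath>

// Plain inversion, as done by a negative HaldCLUT: v -> MAX - v
float invertSubtract(float v, float maxVal = 65535.f)
{
    return maxVal - v;
}

// Reciprocal-power model from the Wikipedia quote: v -> k * (1/v)^p
// k and p are per-channel parameters that depend on the film type.
float invertReciprocal(float v, float k, float p)
{
    return k * std::pow(1.f / v, p);
}
```

Unlike the subtraction, the reciprocal-power curve is non-linear in v, which is why a single set of channel multipliers cannot neutralize a subtractive inversion across the whole gray range.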
In fact, lurking around the forum, I’ve also found an old discussion pointing out this exact issue:

Then I had a look at the Darktable and digiKam source code (both have a film negative module), and to my surprise I discovered that both seem to use inversion (MAX - v).

Darktable:

digiKam:

So I modified the RT source code in RawImageSource::scaleColors with a quick and dirty patch in order to compute the formula. For now, I read the exponents and coefficients from a text file, just to try things out (no GUI or integration with settings data).
Then I created a LibreOffice spreadsheet to calculate the parameters based on known values sampled from a test picture.

The most annoying part of the traditional “inversion” methods, in my opinion, is getting white balance right: most of the time I pick a light gray spot for WB and all seems OK, but then I notice that another, darker gray spot has become somewhat red. So I pick that one, and the previous, brighter spot becomes blueish. So I adjust the RGB curves, and after some tweaking I finally get perfectly balanced grays all across the range, BUT… I am now bound to that exact brightness level: if I make a slight change in exposure compensation, brightness, contrast, tone curves or whatever, the histogram moves along the X axis in the RGB curves and the balance is gone. I can only use Lab controls from that point on.
The same applies when I process another negative: if I change the light source or exposure slightly, the RGB curves created before need to be retouched.

So I decided to concentrate on this white balance problem. To make a cheap test, I took a picture of a color checker displayed on my PC monitor (I don’t have a real one; at least the screen is factory calibrated :smiley: ) on a Kodak ColorPlus 200 film roll.
Then I digitized the developed film using a Sony A7 and a speedlight (xenon) as the light source.

In the attached spreadsheet you can find the channel values from the 6 gray patches in the bottom row of the checker. These were obtained by reading the channel values inside RawImageSource::scaleColors (so they are normalized to 0…65535) and averaging an area of 32x32 pixels.

negativeCalc_curve.ods (25.5 KB)

The “p” and “k” values are the exponent and coefficient, respectively, for each channel. The B channel was used as the reference. These parameters are calculated from just the first and last patch values, without taking the intermediate values into account. In the graph on the right, you can see the results: the curves are not perfectly overlapped, but not too bad, either.
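For the record, here is a hedged C++ sketch of that two-patch calculation (my own naming, not RT code): with B as the reference channel at exponent 1, we require k·v^(−p) to match 1/b at both endpoint patches, which pins down p and k.

```cpp
#include <cmath>

struct ChannelParams {
    double p; // exponent
    double k; // coefficient
};

// v1, v2: raw values of the channel being calibrated at the two endpoint patches.
// b1, b2: raw values of the reference (B) channel at the same patches.
// We solve k * v^(-p) = 1/b at both patches; dividing the two equations
// eliminates k and leaves a ratio of logarithms for p.
ChannelParams calibrateChannel(double v1, double v2, double b1, double b2)
{
    const double p = std::log(b1 / b2) / std::log(v1 / v2);
    const double k = std::pow(v1, p) / b1;
    return { p, k };
}
```

The intermediate patches then act as a sanity check of how well the power-law model fits, which is what the overlap of the curves in the spreadsheet graph shows.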
At this point I fed the parameters into the formula in RT (multiplying by a global factor to re-normalize the output to the range 0…65535), and the test chart looked pretty good. If I change exposure compensation, white balance, or brightness/contrast, everything now remains stable.
I had a couple of color patches that were definitely off, but to my surprise those were easily fixed by using my camera’s DCP profile and enabling the look table. I thought those corrections would not work after mangling the channel values… anyway, this doesn’t matter: I could also have fixed those patches via the Lab hue equalizer.
The final result was not bad. I’ve also tried digitizing the same checker negative using my smartphone (which produces raw DNG) and, with the same parameters, the output was good (using the DNG-embedded camera profile here, too).
Then I tried some other negatives (also from previous rolls of the same type), and the results seem quite stable.
Here you can see two examples showing the difference between inverting the tone curve and using the formula described above.

inv

As you can see, the light gray patch has a blue cast, while the dark gray tends towards red. It is impossible to white balance without touching the RGB curves.

recip

This instead uses the formula with the parameters calculated in the spreadsheet, as it appears “out of the box”, with just white balance and exposure compensation. The WB is much more consistent across the entire gray range.

ex1 ex2

Here are two other examples from a different roll of the same type, processed using the same parameters calculated from the checker above. No tweaking, just WB, exposure and contrast.

As a bonus, below you can find a G’MIC command line that implements the same formula, with the same parameters from the spreadsheet. For example, try downloading the CR2 raw file from this tutorial

http://www.frogymandias.org/imagery/camera-scanning-dcr.html

(download link on top of the page). Get a linear tiff from the raw file using dcraw:

dcraw -T -4 -r 1 1 1 1 -o 0 -h /tmp/60D_11930_negative.cr2

and finally, run this G’MIC pipeline:

gmic /tmp/60D_11930_negative.tiff \
  -fill '[(R^-1.57969792927765)*149.305039862836,(G^-1.15851358827476)*3.91704903038636, B^-1 ]' \
  -fill '[R*1.3,G/1.3,B/3.2]' \
  -cut 0,{ic*4} \
  -apply_gamma 2.2 \
  -adjust_colors 0,30

The first “fill” command contains the formula. Here I used negative exponents instead of taking the reciprocal first; the result is the same.
The second “fill” command does white balance; those coefficients were just eyeballed.
The “cut” command limits the maximum value to something not too far from the median value of the picture. Then gamma and contrast to taste :slight_smile:

Note that the same parameters calculated for the ColorPlus 200 also work fine with the Kodak Gold 200 used in the tutorial, and with a different camera used to perform the “scanning”.

That’s all. Hope this is useful to somebody.
Sorry for the long post, and for my terrible English :smiley:

alberto


No, it is well written, both the English and the content.

Yes, that is what I typically do. It makes a lot more sense energy-wise anyway. Other equations have a similar form; e.g.,

[image of a similar equation]

Go for it. The beauty of open source is that anyone can contribute.


I can’t speak for all “the RT developers”, but I’m personally interested in such a feature (and was talking about the color film problem with @heckflosse lately). I can offer you my help getting this into RT.

Definitely. I don’t think DIY film scanning is a niche; there are so many negatives lying dormant and waiting to be recovered. A ready-to-use solution in RT would bring that to the masses. :grin:

While we’re all fans of the UNIX philosophy, this wouldn’t help John Doe (who isn’t aware of something like the command line).

Hmm. :wink:

(Shush! Look at this. Maybe you could ask the one with the color target to lend it to you.)

Great! :+1:

That’s a lot. Now let’s make it useful for everybody. Fork RT on GitHub and make a pull request. When you need a helping hand (or two), don’t hesitate to ask here, in a PM, or in the PR at GitHub. Really looking forward to it.

Best,
Flössie


Cool! Thank you for the replies. OK, I’ll try to do it. I don’t have much spare time so I can’t really commit to a deadline, but I’ll try to get this done.
Stay tuned :wink:


Here is an improvement on the G’MIC pipeline. I added a rudimentary form of auto WB, because you have to manually white balance the resulting image anyway.
Basically, I read the median of each individual channel and use the medians as coefficients to keep the channel values roughly aligned. This way the resulting image will be almost balanced, and you only have to make slight adjustments by hand.
There is then no need to have the coefficients as parameters, so a film type is characterized by just 2 exponents.

film_neg:
  # Calculate formula with Kodak ColorPlus 200 parameters
  -fill [(R^-1.57969792927765),(G^-1.15851358827476),B^-1]
  # Split the image in separate channels and read channel medians
  --split c rk={ic#1} gk={ic#2} bk={ic#3}
  # Discard single channel images
  -rm[1,2,3]
  # Divide each channel by its median, to bring them all roughly into the same ballpark
  -fill[0] [(R/$rk),(G/$gk),(B/$bk)]
  # Clip away outliers; threshold arbitrarily chosen as 6 times the global median
  -cut 0,{ic*6}
  # Gamma and contrast to taste
  -apply_gamma 2.2
  -adjust_colors -10,10
  # Normalize to 16bit range in case you want to save the result
  -normalize 0,65535

This is more or less what I plan to implement inside RawTherapee. It turns out G’MIC is a perfect tool for prototyping!
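As a rough C++ sketch of that median pre-balance idea (my own code, not necessarily what will land in RT):

```cpp
#include <algorithm>
#include <vector>

// Median of a channel's samples; the vector is taken by value because
// nth_element reorders it. For even sizes this picks the upper middle
// element, which is accurate enough for a rough pre-balance.
float channelMedian(std::vector<float> samples)
{
    const auto mid = samples.begin() + samples.size() / 2;
    std::nth_element(samples.begin(), mid, samples.end());
    return *mid;
}

// Scale a channel in place so that its median becomes 1.0. Doing this to
// all three channels brings them into the same ballpark, leaving only a
// slight manual white balance to the user.
void preBalance(std::vector<float>& channel)
{
    const float m = channelMedian(channel);
    if (m > 0.f) {
        for (float& v : channel) {
            v /= m;
        }
    }
}
```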

I’ve tried digitizing some very old Kodak negatives from 1969; the color cast is different from more modern film. Using the exponents calculated for the ColorPlus 200, the result is quite bad, and of course I can’t get a color checker shot on a 1969 film because I don’t have a time machine :smiley:
BUT… I noticed you can still get decent results by using the beginning of the film as the white/black patches to calculate the coefficients. You know, the part of the film that sticks out of the roll when you buy it: that part has been out in the light and is overexposed, while the part inside the roll, before the first frame, is completely blank. For example, using this (they cut the dark part very short in the lab, and put a sticker on it):

start

…yields this:

neg2

Far from perfect of course, but not too bad for a starting point.


Hello,
I signed up specifically to voice my support for your idea! I hope this shows how much I’d like you to add negative film scan support to RT. :wink:


Thank you @xfer :slight_smile:

Lately I’ve made some progress. Here is a somewhat usable version:

It does not introduce any additional dependency, so to try it, just check out the filmnegative branch from the above repo and compile normally, as per the RawPedia instructions.
I wanted to limit my modifications to the bare minimum, but unfortunately I had to touch a bunch of files in order to support parameter load/save and connect all the layers between the tool panel and rawimagesource.
I tried to figure things out by looking at similar interactions made by other tool panels like gradient.cc and whitebalance.cc. Code-wise, it doesn’t seem too crazy to me, but I’m not sure, since I’m not a C++ programmer.

Here is how it works (or should work) from a user standpoint:

  • open a raw image (only Bayer sensors are supported at the moment; otherwise the tool is not shown in the panel)
  • select the “Raw” page.
  • scroll down and enable the “Film Negative” tool. This should give you a positive image, using the default exponents, which should be OK for a modern Kodak ColorPlus 200.
  • now perform white balance as usual; the actual values will not be the same as you would have with a normal, positive picture. The easiest way is to use spot WB.
  • adjust exposure / contrast / tone curves as you would with a normal, positive picture. You’re done.
  • bonus: to get more accurate exponents for any film type, you can use the “Pick white and black spots” button. Think of it as a sort of “dual spot white balance”. Click on it, then select a piece of clear, unexposed film (for example the strip between frames, or the sides where the perforations are). Then click on a dense, highly exposed spot (one that was white or bright gray in the original scene). Actually, the order in which the spots are picked does not matter. After you select the second spot, the exponent sliders are updated with values guessed from the spots, and processing is started.
    Here is an example; the green arrows show location of the chosen spots. The result is shown below:
    spots
    You only have to do this exponent calculation once per roll. If you have several film rolls of the same type and the same age, they will most probably work fine with the same exponents.
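My guess at the math behind the spot picking, sketched in C++ (this is an assumption on my part; the actual RT code may differ, and the choice of green as the reference channel is arbitrary here): the exponents are chosen so that both picked spots come out neutral after v → v^(−p), i.e. the dense/clear contrast ratio must match across channels once the exponents are applied.

```cpp
#include <array>
#include <cmath>

// clearSpot / denseSpot: averaged raw R, G, B values of the two picked spots.
// The green channel is pinned to refExp; the other exponents are scaled so
// that (dense/clear)^(-p) is identical for all three channels, making both
// spots neutral after the reciprocal-power formula is applied.
std::array<double, 3> guessExponents(const std::array<double, 3>& clearSpot,
                                     const std::array<double, 3>& denseSpot,
                                     double refExp = 1.0)
{
    const double refContrast = std::log(denseSpot[1] / clearSpot[1]);
    std::array<double, 3> exps{};
    for (int c = 0; c < 3; ++c) {
        exps[c] = refExp * refContrast / std::log(denseSpot[c] / clearSpot[c]);
    }
    return exps;
}
```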

Here is what’s still missing:

  • only raw, Bayer-sensor images are supported. Can someone post, or PM me, a Fuji X-Trans raw file that I can use for testing?
  • thumbnails still appear as negatives. Other processing functions like exposure, contrast and WB are reflected in the thumbnail, but not the film negative raw calculations. I didn’t track down the exact path in the source code, but I guess the thumbnail is obtained from the JPEG preview embedded in the raw file.
  • after applying the reciprocal and exponent, RawImageSource::filmNegativeProcess does a “pre-balance” of the image in order to make WB easier. The channel multipliers are guessed from the median of each channel and are not saved as parameters in the processing profile. This may not be very future-proof: should we ever change the way those multipliers are “guessed”, an existing processing profile made with a previous version of RT would yield a different result, forcing the user to re-balance all their previous photos. I wonder if it would be wise to always save the “guessed” multipliers along with the exponents, to make sure the same pp3 will always produce the same output, and re-guess new multipliers only when the user changes an exponent.

Should I send a PR now, or try to fix the above issues first? Do you see other problems in the code? What about the usability?

Please send opinions / feedback / suggestions …

Thanks! :wink:


I assume that this needs an image taken of a film negative. I have an X-Trans camera but no film negatives. Is there any way I can fake it?

I am also active on the Fuji sub in the DPR forum and could possibly give a shout out there for a sample file.

Actually no, any raw photo would do; I just need to activate the X-Trans code path in RT.
You gave me an idea: there are plenty of raw files available for download in the DPR galleries and camera test pages! I hadn’t thought about it :smiley:
I downloaded a couple of .RAF files; let’s see if I can make it work.

Thanks!

There’s also http://rawtherapee.com/shared/test_images/

IMHO, you can send the PR early and we can discuss further ideas there. @heckflosse?


Yes, please :+1:

Added support for X-Trans. It was quite straightforward.

Seems to work fine:

dpr
shared

I had to duplicate the entire loop code blocks in RawImageSource::filmNegativeProcess, because I didn’t want to put the sensor type check inside the hot path, and I couldn’t figure out a way to assign a function pointer to FC or XTRANSFC before the loop.
I tried this:

    unsigned int (*theFC)(int,int);
    if(ri->getSensorType() == ST_BAYER) {
        theFC = &this->FC;
    } else if(ri->getSensorType() == ST_FUJI_XTRANS) {
        theFC = &ri->XTRANSFC;
    }

but the compiler barked at me:

ISO C++ forbids taking the address of an unqualified or parenthesized non-static member function to form a pointer to member function.

Does anybody have some advice? Is code duplication really the best way to go in this case? I saw there are other places in RawImageSource where code is duplicated for Bayer/X-Trans/Foveon.

Sorry again, I’m not a C++ programmer.

Thank you all for the support :wink:


Sure. :slightly_smiling_face:

Use a member function pointer:

diff --git a/rtengine/rawimagesource.cc b/rtengine/rawimagesource.cc
index 9a9c9d08f..4c76a1129 100644
--- a/rtengine/rawimagesource.cc
+++ b/rtengine/rawimagesource.cc
@@ -3592,51 +3592,32 @@ void RawImageSource::filmNegativeProcess(const procparams::FilmNegativeParams &p
 
     float exps[3] = { (float)params.redExp, (float)params.greenExp, (float)params.blueExp };
     
+    unsigned (RawImage::* const the_fc)(unsigned, unsigned) const =
+        ri->getSensorType() == ST_BAYER
+            ? &RawImage::FC
+            : &RawImage::XTRANSFC;
+
     MyTime t1, t2, t3,t4, t5;
     t1.set();
 
-    if(ri->getSensorType() == ST_BAYER) {
 #ifdef _OPENMP
-        #pragma omp parallel
-#endif
-        {
-
-#ifdef _OPENMP
-            #pragma omp for nowait
-#endif
-            for (int row = 0; row < H; row ++) {
-                for (int col = 0; col < W; col++) {
-                    float val = rawData[row][col];
-                    int c  = FC(row, col);                        // three colors,  0=R, 1=G,  2=B
-
-                    // Exponents are expressed as positive in the parameters, so negate them in order
-                    // to get the reciprocals. Avoid trouble with zeroes, minimum pixel value is 1.
-                    val = pow_F(max(val, 1.f), -exps[c]);
-
-                    rawData[row][col] = (val);
-                }
-            }
-        }
-    } else if(ri->getSensorType() == ST_FUJI_XTRANS) {
-#ifdef _OPENMP
-        #pragma omp parallel
+    #pragma omp parallel
 #endif
-        {
+    {
 
 #ifdef _OPENMP
-            #pragma omp for nowait
+        #pragma omp for nowait
 #endif
-            for (int row = 0; row < H; row ++) {
-                for (int col = 0; col < W; col++) {
-                    float val = rawData[row][col];
-                    int c  = ri->XTRANSFC(row, col);                        // three colors,  0=R, 1=G,  2=B
+        for (int row = 0; row < H; row ++) {
+            for (int col = 0; col < W; col++) {
+                float val = rawData[row][col];
+                int c  = (ri->*the_fc)(row, col);                        // three colors,  0=R, 1=G,  2=B
 
-                    // Exponents are expressed as positive in the parameters, so negate them in order
-                    // to get the reciprocals. Avoid trouble with zeroes, minimum pixel value is 1.
-                    val = pow_F(max(val, 1.f), -exps[c]);
+                // Exponents are expressed as positive in the parameters, so negate them in order
+                // to get the reciprocals. Avoid trouble with zeroes, minimum pixel value is 1.
+                val = pow_F(max(val, 1.f), -exps[c]);
 
-                    rawData[row][col] = (val);
-                }
+                rawData[row][col] = (val);
             }
         }
     }

The syntax is awful, but this is fast.

Here’s a solution with a non-capturing lambda that is more readable but has one more indirection:

diff --git a/rtengine/rawimagesource.cc b/rtengine/rawimagesource.cc
index 9a9c9d08f..0a3b255d7 100644
--- a/rtengine/rawimagesource.cc
+++ b/rtengine/rawimagesource.cc
@@ -3592,51 +3592,38 @@ void RawImageSource::filmNegativeProcess(const procparams::FilmNegativeParams &p
 
     float exps[3] = { (float)params.redExp, (float)params.greenExp, (float)params.blueExp };
     
+    const auto the_fc =
+        ri->getSensorType() == ST_BAYER
+            ? [](const RawImage& ri, unsigned row, unsigned column) -> unsigned
+            {
+                return ri.FC(row, column);
+            }
+            : [](const RawImage& ri, unsigned row, unsigned column) -> unsigned
+            {
+                return ri.XTRANSFC(row, column);
+            };
+
     MyTime t1, t2, t3,t4, t5;
     t1.set();
 
-    if(ri->getSensorType() == ST_BAYER) {
-#ifdef _OPENMP
-        #pragma omp parallel
-#endif
-        {
-
-#ifdef _OPENMP
-            #pragma omp for nowait
-#endif
-            for (int row = 0; row < H; row ++) {
-                for (int col = 0; col < W; col++) {
-                    float val = rawData[row][col];
-                    int c  = FC(row, col);                        // three colors,  0=R, 1=G,  2=B
-
-                    // Exponents are expressed as positive in the parameters, so negate them in order
-                    // to get the reciprocals. Avoid trouble with zeroes, minimum pixel value is 1.
-                    val = pow_F(max(val, 1.f), -exps[c]);
-
-                    rawData[row][col] = (val);
-                }
-            }
-        }
-    } else if(ri->getSensorType() == ST_FUJI_XTRANS) {
 #ifdef _OPENMP
-        #pragma omp parallel
+    #pragma omp parallel
 #endif
-        {
+    {
 
 #ifdef _OPENMP
-            #pragma omp for nowait
+        #pragma omp for nowait
 #endif
-            for (int row = 0; row < H; row ++) {
-                for (int col = 0; col < W; col++) {
-                    float val = rawData[row][col];
-                    int c  = ri->XTRANSFC(row, col);                        // three colors,  0=R, 1=G,  2=B
+        for (int row = 0; row < H; row ++) {
+            for (int col = 0; col < W; col++) {
+                float val = rawData[row][col];
+                int c = the_fc(*ri, row, col);                        // three colors,  0=R, 1=G,  2=B
 
-                    // Exponents are expressed as positive in the parameters, so negate them in order
-                    // to get the reciprocals. Avoid trouble with zeroes, minimum pixel value is 1.
-                    val = pow_F(max(val, 1.f), -exps[c]);
+                // Exponents are expressed as positive in the parameters, so negate them in order
+                // to get the reciprocals. Avoid trouble with zeroes, minimum pixel value is 1.
+                val = pow_F(max(val, 1.f), -exps[c]);
 
-                    rawData[row][col] = (val);
-                }
+                rawData[row][col] = (val);
             }
         }
     }

Please file a PR on GitHub. It will ease testing and discussion. Thanks!

Best,
Flössie


I think it’s fine to duplicate the loops, because for the SSE version we would have to do that anyway (for Bayer we need += 4 and process one vector of 4 elements per iteration, while for X-Trans we need += 12 and process 3 vectors of 4 elements per iteration).

Thanks!

PR created, sorry for the delay :wink:

Hi rom9, thanks for your effort, this feature looks very interesting!

And sorry for my OT; I was reading this page
https://redmine.darktable.org/issues/12347
where a generic procedure for inverting negatives in GIMP is described:

  • pick the color of the unexposed part of the film
  • create a new layer with that color and choose the blend mode “divide”
  • merge the two layers
  • invert the image (Colors > Invert)

Actually, darktable is doing this:
film_color - in
The proposed patch will do this:
1 - in/film_color

My doubt is that GIMP inverts the colors using the sRGB gamma even if the image is linear (it converts linear to sRGB before the invert tool; the linear invert tool gives different results).

Should it be something like this to match GIMP?
( 1 - (in/film_color)^(1/2.2) )^2.2
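A small C++ sketch of the two variants being compared, assuming values normalized to 0…1 and a pure 2.2 power curve (the real sRGB transfer function is piecewise, so this only approximates what GIMP does):

```cpp
#include <cmath>

// Inversion performed in linear light: divide by the film base color,
// then invert.
double invertLinear(double in, double filmColor)
{
    return 1.0 - in / filmColor;
}

// The same inversion performed in gamma-encoded space and converted back
// to linear: ( 1 - (in/filmColor)^(1/2.2) )^2.2
double invertGammaSpace(double in, double filmColor)
{
    return std::pow(1.0 - std::pow(in / filmColor, 1.0 / 2.2), 2.2);
}
```

For the same input the two give noticeably different mid-tone values, which would explain results differing between GIMP and a linear pipeline.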

It’s now very easy to get quite a good starting point for negative processing using the code from @rom9:

  1. load file and apply neutral.
  2. Enable Film Negative in raw tab:
  3. Set Green exponent to 2
  4. Pick a point for spot white balance

Just 4 steps to get a reasonable result \o/


Hi @age, thank you :slight_smile:

I don’t think this will give much different results, because it’s still doing a plain inversion (subtraction) instead of the reciprocal (1 / in), as per the Wikipedia article linked at the top of this thread. You can visualize the difference by plotting a chart with LibreOffice.
Regarding the GIMP instructions, the first 2 steps should be sufficient: they already give you film_color / in, which is the reciprocal you need.
See this for an example (sorry for the small resolution; floating point GIMP files are huge):
https://filebin.net/sux7nt0pn1u2c9qc
But this GIMP solution does not consider exponents: from my understanding of the Wikipedia article, we should think of a negative as an image where each channel has its own gamma correction. This was the reason for the instability of the white balance across the 6 gray patches in my original post.
So the complete formula should be (1/in) ^ p, where p is an exponent that you have to guess. To do that, you need not one but two “film_color” spots: one of the clear, unexposed film, and the other of a dense, exposed area. From these, you can derive the right exponents (see the spreadsheet above).

Anyway, the best way to experiment with this stuff is G’MIC, either on the command line or via the GIMP plugin. That tool is outstanding :slight_smile:

Gamma correction is a whole new topic that I have skipped so far by limiting this feature to raw only. I will have to deal with it to fix the thumbnail problem. I guess the final formula should be
(1 / (in ^ 1/2.2)) ^ p
but I’m not sure… stay tuned :wink:


Couldn’t that be due to (or just be exaggerated by) the input color profile and white balance being applied in an inverse “color space”?

I don’t think so, because before finding the Wikipedia article I had already made various other tests with only offsets and coefficients, and all of them gave the same bad results. :wink:

If you look at the 6 input values in the spreadsheet, you will see that the R, G and B curves have a slightly different “concavity”: if you make their endpoints overlap exactly, just by shifting and stretching, you’ll see that one is more “flat” and another is more “curved”. This, I think, demonstrates that the difference between the channels is non-linear.
This is mostly intuition; unfortunately I don’t have any formal maths education :cry:
