Any interest in a "film negative" feature in RT?

Thank you @xfer :slight_smile:

Lately I’ve made some progress. Here is a somewhat usable version:

It does not introduce any additional dependency, so to try it, just check out the filmnegative branch from the above repo and compile normally as per the RawPedia instructions.
I wanted to limit my modifications to the bare minimum, but unfortunately I had to touch a bunch of files in order to support parameter load/save and connect all the layers between the toolpanel and rawimagesource.
I tried to figure things out by looking at similar interactions made by other toolpanels like gradient.cc and whitebalance.cc. Code-wise, it doesn’t seem too crazy to me, but I’m not sure, since I’m not a C++ programmer.

Here is how it works (or should work) from a user standpoint:

  • open a raw image (only Bayer sensors are supported at the moment; otherwise the tool is not shown in the panel)
  • select the “Raw” page.
  • scroll down and enable the “Film Negative” tool. This should give you a positive image, using the default exponents, which should be OK for a modern Kodak ColorPlus 200.
  • now, perform white balance as usual; the actual values will not be the same as you would have with a normal, positive picture. The easiest way is to use spot WB.
  • adjust exposure / contrast / tone curves as you would do with a normal, positive picture. You’re done.
  • bonus: to get more accurate exponents for any film type, you can use the “Pick white and black spots” button. Think of it as a sort of “dual spot white balance”. Click on it, then select a piece of clear unexposed film (for example the strip between frames, or the sides where the perforations are). Then click on a dense, highly exposed spot (one that was white or bright gray in the original scene). Actually, the order in which the spots are picked does not matter. After you select the second spot, the exponent sliders are updated with values guessed from the spots, and processing starts.
    Here is an example; the green arrows show location of the chosen spots. The result is shown below:
    spots
    You only have to do this exponents calculation once per roll. If you have several film rolls of the same type and the same age, most probably they will work fine with the same exponents.
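For the curious, here is a minimal sketch of how the two-spot exponent guess could work, assuming the inversion model out = (1/in)^p and a fixed reference value for the green exponent. The function and variable names are mine, not from the RT source:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Given the raw channel values of the two picked spots -- "clear" (unexposed
// film base) and "dense" (a highly exposed, originally white/gray area) --
// derive per-channel exponents for the model out = (1/in)^p.
// We want the brightness ratio between the two spots to come out the same on
// all three channels, i.e. p[c] * log(clear[c]/dense[c]) must be constant, so
// each exponent is inversely proportional to that channel's log density range.
// The green exponent is anchored to a fixed reference value.
std::array<float, 3> guessExponents(const std::array<float, 3>& clearSpot,
                                    const std::array<float, 3>& denseSpot,
                                    float greenRef = 1.5f)
{
    const float greenRange = std::log(clearSpot[1] / denseSpot[1]);
    std::array<float, 3> exps;
    for (int c = 0; c < 3; ++c) {
        exps[c] = greenRef * greenRange / std::log(clearSpot[c] / denseSpot[c]);
    }
    return exps;
}
```

With exponents guessed this way, the two spots differ only by a single global brightness factor after inversion, so one set of white-balance multipliers can neutralize both at once.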

Here is what’s still missing:

  • only raw, Bayer-sensor images are supported. Can someone post or PM me a Fuji X-Trans raw file that I can use for testing?
  • thumbnails still appear as negatives. Other processing functions like exposure, contrast and WB are reflected in the thumbnail, but not the film negative raw calculations. I haven’t tracked down the exact path in the source code, but I guess the thumbnail is obtained from the JPEG preview embedded in the raw file.
  • after applying the reciprocal and exponents, RawImageSource::filmNegativeProcess does a “pre-balance” of the image in order to make WB easier. The channel multipliers are guessed from the median of each channel and are not saved as parameters in the processing profile. This may not be very future-proof: should we ever change the way those multipliers are “guessed”, an existing processing profile made with a previous version of RT would yield a different result, forcing the user to re-balance all their previous photos. I wonder if it would be wise to always save the “guessed” multipliers along with the exponents, to make sure the same pp3 will always produce the same output, and re-guess new multipliers only when the user changes an exponent.
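A simplified sketch of a median-based pre-balance, in the spirit of the description above (this is my own illustration, not the actual RT code):

```cpp
#include <algorithm>
#include <array>
#include <vector>

// Hypothetical sketch of a median-based pre-balance: compute per-channel
// multipliers that equalize the channel medians, anchoring on green.
// 'channelValues' holds the inverted pixel values, grouped per channel
// (taken by value because nth_element reorders its input).
std::array<float, 3> guessMultipliers(std::array<std::vector<float>, 3> channelValues)
{
    std::array<float, 3> medians;
    for (int c = 0; c < 3; ++c) {
        auto& v = channelValues[c];
        auto mid = v.begin() + v.size() / 2;
        std::nth_element(v.begin(), mid, v.end()); // partial sort up to the median
        medians[c] = *mid;
    }
    // Scale red and blue so their medians match the green median.
    return { medians[1] / medians[0], 1.f, medians[1] / medians[2] };
}
```

Saving the three returned multipliers in the pp3 alongside the exponents, as suggested, would pin the output even if this guessing heuristic changes later.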

Should I send a PR now, or try to fix the above issues first? Do you see other problems in the code? What about the usability?

Please send opinions / feedback / suggestions …

Thanks! :wink:


I assume that this needs an image taken of a film negative. I have an X-Trans camera but no film negatives. Is there any way I can fake it?

I am also active on the Fuji sub in the DPR forum and could possibly give a shout out there for a sample file.

Actually no, any raw photo would do; I just need to activate the X-Trans code path in RT.
You gave me the idea: there are plenty of raw files available for download in the DPR galleries and camera test pages! I hadn’t thought of that :smiley:
I downloaded a couple of .RAF files; let’s see if I can make it work.

Thanks!

There’s also Index of /shared/test_images

IMHO, you can send the PR early and we can discuss further ideas there. @heckflosse?


Yes, please :+1:

Added support for X-Trans. It was quite straightforward.

Seems to work fine:

dpr
shared

I had to duplicate the entire loop code blocks in RawImageSource::filmNegativeProcess, because I didn’t want to put the sensor-type check inside the hot path, and I couldn’t figure out a way to assign a function pointer to FC or XTRANSFC before the loop.
I tried this:

    unsigned int (*theFC)(int,int);
    if(ri->getSensorType() == ST_BAYER) {
        theFC = &this->FC;
    } else if(ri->getSensorType() == ST_FUJI_XTRANS) {
        theFC = &ri->XTRANSFC;
    }

but the compiler barked at me:

ISO C++ forbids taking the address of an unqualified or parenthesized non-static member function to form a pointer to member function.

Does anybody have some advice? Is code duplication really the best way in this case? I saw there are other places in RawImageSource where code is duplicated for Bayer/X-Trans/Foveon.

Sorry again, I’m not a C++ programmer.

Thank you all for the support :wink:


Sure. :slightly_smiling_face:

Use a member function pointer:

diff --git a/rtengine/rawimagesource.cc b/rtengine/rawimagesource.cc
index 9a9c9d08f..4c76a1129 100644
--- a/rtengine/rawimagesource.cc
+++ b/rtengine/rawimagesource.cc
@@ -3592,51 +3592,32 @@ void RawImageSource::filmNegativeProcess(const procparams::FilmNegativeParams &p
 
     float exps[3] = { (float)params.redExp, (float)params.greenExp, (float)params.blueExp };
     
+    unsigned (RawImage::* const the_fc)(unsigned, unsigned) const =
+        ri->getSensorType() == ST_BAYER
+            ? &RawImage::FC
+            : &RawImage::XTRANSFC;
+
     MyTime t1, t2, t3,t4, t5;
     t1.set();
 
-    if(ri->getSensorType() == ST_BAYER) {
 #ifdef _OPENMP
-        #pragma omp parallel
-#endif
-        {
-
-#ifdef _OPENMP
-            #pragma omp for nowait
-#endif
-            for (int row = 0; row < H; row ++) {
-                for (int col = 0; col < W; col++) {
-                    float val = rawData[row][col];
-                    int c  = FC(row, col);                        // three colors,  0=R, 1=G,  2=B
-
-                    // Exponents are expressed as positive in the parameters, so negate them in order
-                    // to get the reciprocals. Avoid trouble with zeroes, minimum pixel value is 1.
-                    val = pow_F(max(val, 1.f), -exps[c]);
-
-                    rawData[row][col] = (val);
-                }
-            }
-        }
-    } else if(ri->getSensorType() == ST_FUJI_XTRANS) {
-#ifdef _OPENMP
-        #pragma omp parallel
+    #pragma omp parallel
 #endif
-        {
+    {
 
 #ifdef _OPENMP
-            #pragma omp for nowait
+        #pragma omp for nowait
 #endif
-            for (int row = 0; row < H; row ++) {
-                for (int col = 0; col < W; col++) {
-                    float val = rawData[row][col];
-                    int c  = ri->XTRANSFC(row, col);                        // three colors,  0=R, 1=G,  2=B
+        for (int row = 0; row < H; row ++) {
+            for (int col = 0; col < W; col++) {
+                float val = rawData[row][col];
+                int c  = (ri->*the_fc)(row, col);                        // three colors,  0=R, 1=G,  2=B
 
-                    // Exponents are expressed as positive in the parameters, so negate them in order
-                    // to get the reciprocals. Avoid trouble with zeroes, minimum pixel value is 1.
-                    val = pow_F(max(val, 1.f), -exps[c]);
+                // Exponents are expressed as positive in the parameters, so negate them in order
+                // to get the reciprocals. Avoid trouble with zeroes, minimum pixel value is 1.
+                val = pow_F(max(val, 1.f), -exps[c]);
 
-                    rawData[row][col] = (val);
-                }
+                rawData[row][col] = (val);
             }
         }
     }

The syntax is awful, but this is fast.
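For reference, here is a minimal, self-contained illustration of the pointer-to-member-function syntax used in the diff (toy class and function names, not the RT ones):

```cpp
#include <cassert>

struct Sensor {
    // Two const member functions with the same signature, standing in for
    // RawImage::FC and RawImage::XTRANSFC.
    unsigned bayerColor(unsigned row, unsigned col) const  { return (row + col) % 2; }
    unsigned xtransColor(unsigned row, unsigned col) const { return (row + col) % 3; }
};

unsigned colorAt(const Sensor& s, bool isBayer, unsigned row, unsigned col)
{
    // Pointer-to-member-function: note the Sensor::* in the declarator and
    // the &Sensor:: when taking the address -- this is exactly what the
    // compiler error message quoted above was asking for.
    unsigned (Sensor::*fc)(unsigned, unsigned) const =
        isBayer ? &Sensor::bayerColor : &Sensor::xtransColor;
    return (s.*fc)(row, col); // invoke through the object with .*
}
```

The choice is made once, outside the hot loop, and every call inside the loop goes through the same pointer, just as in the diff.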

Here’s a solution with a non-capturing lambda that is more readable but has one more indirection:

diff --git a/rtengine/rawimagesource.cc b/rtengine/rawimagesource.cc
index 9a9c9d08f..0a3b255d7 100644
--- a/rtengine/rawimagesource.cc
+++ b/rtengine/rawimagesource.cc
@@ -3592,51 +3592,38 @@ void RawImageSource::filmNegativeProcess(const procparams::FilmNegativeParams &p
 
     float exps[3] = { (float)params.redExp, (float)params.greenExp, (float)params.blueExp };
     
+    const auto the_fc =
+        ri->getSensorType() == ST_BAYER
+            ? [](const RawImage& ri, unsigned row, unsigned column) -> unsigned
+            {
+                return ri.FC(row, column);
+            }
+            : [](const RawImage& ri, unsigned row, unsigned column) -> unsigned
+            {
+                return ri.XTRANSFC(row, column);
+            };
+
     MyTime t1, t2, t3,t4, t5;
     t1.set();
 
-    if(ri->getSensorType() == ST_BAYER) {
-#ifdef _OPENMP
-        #pragma omp parallel
-#endif
-        {
-
-#ifdef _OPENMP
-            #pragma omp for nowait
-#endif
-            for (int row = 0; row < H; row ++) {
-                for (int col = 0; col < W; col++) {
-                    float val = rawData[row][col];
-                    int c  = FC(row, col);                        // three colors,  0=R, 1=G,  2=B
-
-                    // Exponents are expressed as positive in the parameters, so negate them in order
-                    // to get the reciprocals. Avoid trouble with zeroes, minimum pixel value is 1.
-                    val = pow_F(max(val, 1.f), -exps[c]);
-
-                    rawData[row][col] = (val);
-                }
-            }
-        }
-    } else if(ri->getSensorType() == ST_FUJI_XTRANS) {
 #ifdef _OPENMP
-        #pragma omp parallel
+    #pragma omp parallel
 #endif
-        {
+    {
 
 #ifdef _OPENMP
-            #pragma omp for nowait
+        #pragma omp for nowait
 #endif
-            for (int row = 0; row < H; row ++) {
-                for (int col = 0; col < W; col++) {
-                    float val = rawData[row][col];
-                    int c  = ri->XTRANSFC(row, col);                        // three colors,  0=R, 1=G,  2=B
+        for (int row = 0; row < H; row ++) {
+            for (int col = 0; col < W; col++) {
+                float val = rawData[row][col];
+                int c = the_fc(*ri, row, col);                        // three colors,  0=R, 1=G,  2=B
 
-                    // Exponents are expressed as positive in the parameters, so negate them in order
-                    // to get the reciprocals. Avoid trouble with zeroes, minimum pixel value is 1.
-                    val = pow_F(max(val, 1.f), -exps[c]);
+                // Exponents are expressed as positive in the parameters, so negate them in order
+                // to get the reciprocals. Avoid trouble with zeroes, minimum pixel value is 1.
+                val = pow_F(max(val, 1.f), -exps[c]);
 
-                    rawData[row][col] = (val);
-                }
+                rawData[row][col] = (val);
             }
         }
     }

Please file a PR on GitHub. It will ease testing and discussion. Thanks!

Best,
Flössie


I think it’s fine to duplicate the loops, because for the SSE version we would have to do that anyway: for Bayer we need += 4 and process one vector of 4 elements per iteration, while for X-Trans we need += 12 and process 3 vectors of 4 elements per iteration.
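The stride argument can be illustrated without intrinsics: along a row, a Bayer CFA repeats its colors with period 2, so a 4-wide vector always sees the same color pattern when the loop advances by 4, while an X-Trans CFA repeats with period 6, making 12 (three 4-wide vectors) the smallest stride that is a multiple of both. A scalar sketch of the per-stride pattern reuse (my own illustration, not the RT SSE code):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Apply val = pow(max(val, 1), -exps[color]) over one row, precomputing the
// CFA color pattern for a whole stride so the inner loop needs no per-pixel
// pattern computation. 'fc' returns the CFA color for a column (row fixed);
// 'stride' would be 4 for Bayer and 12 for X-Trans.
void invertRow(std::vector<float>& row, const float exps[3],
               unsigned (*fc)(unsigned col), int stride)
{
    std::vector<unsigned> pattern(stride);
    for (int i = 0; i < stride; ++i) {
        pattern[i] = fc(i); // the color pattern repeats every 'stride' pixels
    }
    for (std::size_t col = 0; col + stride <= row.size(); col += stride) {
        // This inner block is what the SSE version would vectorize.
        for (int i = 0; i < stride; ++i) {
            float val = std::max(row[col + i], 1.f);
            row[col + i] = std::pow(val, -exps[pattern[i]]);
        }
    }
}
```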

Thanks!

PR created, sorry for the delay :wink:

Hi rom9, thanks for your effort, this feature looks very interesting!

And sorry for my OT, I was reading this page:
https://redmine.darktable.org/issues/12347
It describes a generic procedure for inverting negatives in GIMP:

  • pick the color of the unexposed part of the film
  • create a new layer with that color and choose the blend mode “divide”
  • merge the two layers
  • invert the image (colors > invert)

Actually darktable is doing this: film_color - in
The proposed patch will do this: 1 - in/film_color

My doubt is that GIMP inverts the colors using sRGB gamma even if the image is linear (it converts linear to sRGB before the invert tool; the linear invert tool gives different results).

Should it be something like this to match GIMP?
(1 - (in/film_color)^(1/2.2))^2.2
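To see how the inversions discussed here relate, a small numeric sketch of the three single-channel formulas (my own illustration, not code from any of the projects):

```cpp
#include <cassert>
#include <cmath>

// Three candidate inversions for one channel value 'in', given the sampled
// film base color 'base':
float dtInvert(float in, float base)    { return base - in; }       // darktable: subtraction
float patchInvert(float in, float base) { return 1.f - in / base; } // 1 - in/film_color
float reciprocal(float in, float base)  { return base / in; }       // film_color/in
```

The subtraction-based and division-based forms differ only by a per-channel scale factor (patchInvert = dtInvert / base), while the reciprocal is genuinely non-linear in the input, which is why the two families give visibly different tonality.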

It’s very easy now to get a quite good starting point for negative processing using the code from @rom9:

  1. Load the file and apply the Neutral profile.
  2. Enable Film Negative in the Raw tab.
  3. Set the green exponent to 2.
  4. Pick a point for spot white balance.

Just 4 steps to get a reasonable result \o/


Hi, @age , thank you :slight_smile:

I don’t think this will give much different results, because it’s still doing a plain inversion (subtraction) instead of the reciprocal (1/in) described in the Wikipedia article linked at the top of this thread. You can visualize the difference by plotting a chart in LibreOffice.
Regarding the GIMP instructions, the first two steps should be sufficient: they already give you film_color / in, which is the reciprocal you need.
See this for an example (sorry for the small resolution, floating-point GIMP files are huge):

But this GIMP solution does not consider exponents: from my understanding of the Wikipedia article, we should think of a negative as an image where each channel has its own gamma correction. This was the reason for the instability of the white balance across the 6 gray patches in my original post.
So the complete formula should be: (1/in)^p, where p is an exponent that you have to guess. To do that, you need not one, but two “film_color” spots: one on the clear unexposed film, and the other on a dense, exposed area. From these, you can derive the right exponents (see the spreadsheet above).

Anyway, the best way to experiment with this stuff is using G’MIC, either command line or via the gimp plugin. That tool is outstanding :slight_smile:

Gamma correction is a whole new topic that I have skipped so far by limiting this feature to raw only. I will have to deal with it to fix the thumbnail problem. I guess the final formula should be
(1 / in^(1/2.2))^p
but I’m not sure… stay tuned :wink:
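One small algebraic note: that candidate formula collapses to a plain power, since (1 / in^(1/2.2))^p = in^(-p/2.2), meaning a gamma-encoded input would only rescale the exponents. A quick numeric check (my own sketch):

```cpp
#include <cassert>
#include <cmath>

// Verify that (1 / in^(1/2.2))^p equals in^(-p/2.2) for positive inputs.
double candidate(double in, double p)  { return std::pow(1.0 / std::pow(in, 1.0 / 2.2), p); }
double simplified(double in, double p) { return std::pow(in, -p / 2.2); }
```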


Couldn’t that be ( or just exaggerated) due to input color profile and white balance applied in an inverse “color space” ?

I don’t think so, because before finding the Wikipedia article I had already made various other tests with only offsets and coefficients, and all of them gave the same bad results. :wink:

If you look at the 6 input values in the spreadsheet, you will see that the R, G, B curves have a slightly different “concavity”; if you make their endpoints overlap exactly just by shifting and stretching, you’ll see that one is more “flat” and another is more “curved”. This, I think, demonstrates that the difference between the channels is non-linear.
This is mostly my intuition; unfortunately I don’t have any formal maths education :cry:


Hi,
thanks for interesting contribution.

I tried inverting some scanned negatives and… the tool is not visible when TIFF files are opened. I can only invert raw files. Is there any chance someone could enable it for non-raw files in the near future? I am preparing to print some old scans and I am eager to test the new approach.

Hi @cuniek,
yes, this feature was intended only for raw files, because raw values from the sensor are simply proportional to the amount of light, which simplifies things a lot.
When dealing with non-raw files, I guess I should translate from the source color profile to linear before applying the formula, and I don’t know how to do that reliably for all possible devices/firmwares and color profiles.

Can you get 16-bit linear TIFFs from your scanner? If so, could you please try using the G’MIC pipeline from this post above? Assuming you save the pipeline to a file called filmneg.gmic, this would be the command line to use it:

gmic your_neg_scan.tiff -m filmneg.gmic -film_neg -output result.png

That pipeline applies the same formula used inside the RT feature, so we can get an idea of whether it can work or not. If there’s some hope, I can try implementing it in RT, but I don’t promise anything :smiley:
In the meantime, you can already use the G’MIC pipeline as a working solution, by pre-processing the negatives with it and then fine-tuning the (positive) result with RT as usual :wink:


Sorry for REALLY late reply, but I was very busy.

  1. I failed to try your procedure on my scanned negatives: I get a “syntax error in expression -fill…”. I read a few tutorials and could not find any obvious error in your script or on my side.

  2. I have tested Film Negative on one of my OLD negatives that I am restoring now. They are old and dirty, so not really a good point of reference, but the results are great. Previously I had to work much harder to get something similar - thumbs up.

  3. I think I found a bug, or maybe I just overdid the film negative? There are big, colorful halos around high-contrast edges when “contrast” is increased. Is this a known limitation? Should I report it?

Whoa, how did I not notice this gem before?

I right now have three semi-independent “workflow backlogs” I’m trying to churn through. Digitizing my old negatives from 20 years ago is part of that. I’m definitely interested in improvements to the workflow and hope to take a look at your approach over the weekend.

An observation I have on the darktable workflow (or, why I shelved this project a year ago to revisit later):

  1. Choosing the inversion point needs to be done with something other than a color picker, because even small errors here can lead to huge visual artifacts in the result.
    1b) The inversion point color should not be derived directly from raw data without some form of correction to make it look at least semi-intuitive. dt’s implementation does not take typical white-balance multipliers into account, so basically any inversion point looks like puke green in the color picker.
  2. The inversion should happen after correction for the capture device’s distortion and (most importantly) vignetting. If you invert without vignetting correction, you effectively have a black point that shifts throughout the frame.

It sounds like you’ve addressed item 1 but possibly there might still be some considerations for item 2 if the capture setup has any vignetting?

I’m going to take @rom9’s RT pull request and test it tonight or tomorrow. I’m really excited by this.

It’s already in RT dev


Even better!

Edit: So far, it’s a completely different workflow that will take some getting used to. Still, I’m fairly easily able to get better results from the negatives I’ve digitized so far than I ever was able to get with dt’s invert module + whitebalance-after-invert (doing WB-before-invert took a little getting used to…)

I think I’ll be good once I figure out rotations more than +/-45 degrees… Time to RTFM. :slight_smile:

Edit 2: Works great on some of my negatives, but on a few others there’s a green color cast: the “black” level of the image is way off no matter what I pick in the black-after-inversion areas.