Developing a FOSS solution for inverting negatives

RawTherapee is not set up to deal with RGB values outside the display range of 0.0f to 1.0f, either for importing or for exporting.

RawTherapee is not set up to allow the user to directly make mathematical operations on linear RGB, which is what I’m fairly sure you want to do, but maybe I misunderstand you.

RawTherapee is an awesomely excellent raw processor and has some amazing algorithms that allow for accomplishing various editing tasks. But I have a feeling it’s not the right software for the specific tasks that you have in mind.

For importing and exporting floating point images, you need an image editor that allows you to import and export floating point tiffs or openexr images. One or both (depending on the software) can be imported and exported by Krita, GIMP 2.9, darktable, and PhotoFlow.

There’s a bunch of other software that also can be used to import/export floating point tiffs and/or openexr files. I only listed software that I use on a regular basis. @ggbutcher - can rawproc import and export floating point images?

There are some other file formats that can support, or partially support, RGB channel values outside the display range, but these other formats are not as widely supported across applications as floating point tiffs and openexr images.

Please note that a “tonemapped” HDR image, though commonly referred to as “HDR”, is no longer an actual HDR image, precisely because it’s been tonemapped to fit in the display range (just as an interpolated raw file is no longer a raw file after it’s been interpolated and saved as some other format).

Maybe the task you have in mind is tonemapping the scans specifically using RawTherapee algorithms, after you’ve processed the scans using other software? In this case RT has some very nice tonemapping options. But if the “other software” has produced channel values outside the display range, you’ll need to save the processed scans in a file format that supports floating point values. And then you’d need to open these files with software that can open floating point files, and then do an exposure adjustment to bring all the channel values to fit below 1.0f, and then export as a 16-bit integer image, which RawTherapee can open.
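As a rough sketch of that last step, here is what the exposure adjustment and 16-bit quantization might look like in Python. The function name and the sample values are mine, purely for illustration:

```python
def fit_to_display_range(pixels):
    """Scale floating point channel values so the maximum is <= 1.0 (a simple
    exposure adjustment), then quantize to 16-bit integers, the kind of file
    RawTherapee can open."""
    peak = max(pixels)
    if peak > 1.0:
        pixels = [p / peak for p in pixels]
    return [round(p * 65535) for p in pixels]

# A channel value of 3.2 is "out of display range"; dividing by the peak
# brings everything at or below 1.0 before quantizing.
out = fit_to_display_range([0.2, 1.0, 3.2])
```

Note this preserves the ratios between channel values (so chromaticity is unchanged), it only darkens the image overall.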

Yes, exactly. To get a handle on what “out of gamut”, “outliers”, “high dynamic range”, etc actually means, and a handle on what floating point processing allows you to do, experimenting is the best way to get started.

For experimenting, I’d suggest using my GIMP-CCE via the appimage that @Carmelo_DrRaw puts together (see the Community-built software thread), working in a linear gamma RGB color space, and experimenting specifically with the Addition/Subtract/Multiply/Divide blend modes, with Exposure, Auto-Stretch, and Invert, and with the PhotoFlow plug-in for tonemapping, maybe starting with the filmic tonemapping options.

I uploaded a couple of test images, in case you might find them useful:

  • A linear gamma sRGB high dynamic range (not tonemapped) 32-bit floating point tiff: ninedegreesbelow.com/files/linear-srgb-dcraw-matrix-no-alpha-no-thumbnail-402px.tiff

  • A linear gamma sRGB test image with gray blocks running from 0.0f to 1.0f. The top set of blocks increases by equal Lab Lightness increments. The bottom set of blocks increases by equal steps in linear gamma RGB. Try adding this image to itself in GIMP-CCE and color-picking the resulting RGB values. It’s also a nice image for exploring what different tone-mapping algorithms do to different tonal ranges; I put this image together for exploring the PhotoFlow filmic tonemapping.
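To illustrate what the Addition experiment shows, here is a hypothetical sketch in Python (the gray values are made up, not the actual block values from the test image):

```python
# Hypothetical linear-light gray values, like the bottom row of blocks.
blocks = [0.0, 0.25, 0.5, 1.0]

# The Addition blend mode in linear light literally doubles each value,
# i.e. a one-stop exposure increase.
added = [a + b for a, b in zip(blocks, blocks)]
```

Notice that the 1.0 block becomes 2.0, which is exactly how editing at floating point produces values above the display range.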


Hello,
having read the various contributions, I must admit I do not see what you are aiming at. Just some basic remarks:

The linear (so-called) raw scanner data are in a device-specific colour space. The only way to convert them into sRGB or AdobeRGB is via an ICC-profile set up for your scanner (see contribution by Morgan).

To make calculations with up to 32-bit images you might want to look into ImageJ, which also supports conversion into other bit depths. However, this package knows nothing about colour management.

Isn’t handling out of gamut colours the task of the rendering intents? To my understanding they determine how these colours are to be transformed into the colour space.

Hermann-Josef

My apologies, I have a very bad habit of saying “out of gamut” to refer to two very different situations:

  1. In a floating point unbounded ICC profile conversion, sometimes colors in the destination color space will have one or two channel values that are less than 0.0f. These colors fall outside the triangle defined by the color space chromaticities. This can also happen during floating point editing, for example when adding saturation. This is “out of gamut”.

  2. In floating point processing, it’s entirely possible to have channel values that are greater than 1.0. As long as the resulting color has a chromaticity inside the triangle defined by the color space chromaticities (I mean on the xy plane of the xyY color space), these colors are “high dynamic range” but not “out of gamut”. A better term might be “out of display range”.
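A hypothetical Python helper makes the distinction concrete (the name and the wording of the labels are mine, not from any particular application):

```python
def classify(rgb):
    """Rough classification of one linear RGB triplet, per the two cases above."""
    if any(c < 0.0 for c in rgb):
        return "out of gamut"          # negative channel: outside the chromaticity triangle
    if any(c > 1.0 for c in rgb):
        return "out of display range"  # bright but all-positive: HDR, chromaticity still inside
    return "in display range"
```

The key point: scaling an all-positive triplet by a positive exposure factor leaves its chromaticity unchanged, so "out of display range" colors can be brought back by a simple exposure adjustment, while genuinely out of gamut colors cannot.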

As to handling out of gamut colors using rendering intents, yes, you can use perceptual intent when converting to a destination profile. But unless the destination profile has a perceptual intent table, what you get is relative colorimetric intent, which clips any out of gamut colors, well, at least if the conversion is done at integer precision. And even if the destination profile does have a perceptual intent table, some colors might be outside the color space that was used to make the perceptual intent table, in which case you’d still get clipping.

Hello,

one more caveat:

If you want to remove gamma-correction and are already in the integer space, this transformation is not unique for the higher intensities, since different values in linear space project to the same value in gamma-corrected space (integer).
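A quick Python illustration of that non-uniqueness, using a simplified pure power curve rather than the exact sRGB transfer function:

```python
def encode8(linear, gamma=2.2):
    """Gamma-encode a linear value with a simplified power curve,
    then quantize to 8 bits."""
    return round(linear ** (1.0 / gamma) * 255)

# Two distinct linear intensities near white...
a, b = encode8(0.990), encode8(0.992)
# ...land on the same 8-bit code, so the original linear values
# cannot be recovered by undoing the gamma correction.
```

The gamma curve compresses the highlights, so near the top of the range many linear values collapse onto one integer code.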

Hermann-Josef

That’s great, I will explore that and see where it goes! BTW, all the outliers are positive, i.e. they are the highlights only.

On one hand I agree with you. To do everything properly is way too ambitious, and I never envisioned doing it 100% properly. But I am a firm believer in the 80/20 rule. So far maybe I have 15% of the work complete; with a bit more thought and help I might complete another 5%, which is realistically my limit. This, I feel, would be an improvement over what’s on offer.

Certainly I have got some good ideas.

Very possible. I don’t yet fully understand what ‘tonemapping’ is, but I certainly like RT.

Thanks very much once again, especially the image. I was doing some experiments with a step wedge, and doing tests against commercial scanner software. I will explore that properly this week.

That could be my problem… :grinning:

The equation I posted is supposed to represent the analogue inversion of a film negative. It is presented in a paper by Dunthorn. It took me several days to understand it because I don’t have those skills or brains, but I do understand other crucial parts of the process. In this software space your choices are very restricted, and in FOSS non-existent (at least as far as I am aware): you can use what is embedded in your scanner software, and the good one is expensive and annoying. However, from a workflow perspective even the optimal setting by definition creates some clipping, so that’s a restriction in itself. You can of course ignore all that and ‘attempt’ a fix later (that’s essentially what ColorPerfect does, with a linear scan that is not clipped), or use your own methodology. I am quite happy to purchase ColorPerfect (and I have), but I then need to work in Photoshop or PhotoLine.

So my aims are (now):

To create a FOSS solution for inverting negatives that:

  1. Provides a practical workflow, such that no significant decision needs to be taken at scan time, i.e. scan once and never again.
  2. Provides a very high quality inversion that needs minimal refinement.
  3. Provides an input into tools like RT that is optimised to complete that minimal refinement.

Ha, not yet… right now, just 8- or 16-bit, one-color or three-color images. I really didn’t know fp was a supported pixel type for TIFF until this thread. I’m adding it to the 0.7 punchlist. That’s a no-brainer, as rawproc’s internal representation is fp; any image opened is converted to fp 0.0-1.0, including 8-bit JPEGs.

rawproc has RGB min/max stats in the Image Information popup menu you get when you right-click on any tool in the chain. With that, you can see how the tone range is exceeded in either direction, tool-by-tool.

Hmm quite a rabbit hole this thread, as expected with any mention of colour space!

Some questions occur regarding the inversion… what number range is the input? Any 0 values will obviously produce an invalid result. Is there a link to the paper somewhere (maybe I even missed it on this page!)? It seems simple enough that perhaps layer blending modes could accomplish the same.
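One common guard against the zero-input problem, sketched in Python (the function name and floor value are hypothetical, not from the paper):

```python
def invert_reciprocal(j, k=1.0, floor=1e-4):
    """Reciprocal-style inversion k / j, clamping the input to a small
    positive floor first so that a 0 value cannot divide to infinity."""
    return k / max(j, floor)
```

The floor effectively caps how bright a fully clear (zero-density) region of the negative can become; any real scan noise floor would serve the same purpose.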

G’MIC could also do that part easily - internally it doesn’t care about ranges, a float is just a float and data is just data (not even necessarily an image). Yes I’m also a big G’MIC fan :slight_smile:


Here’s a link to what I think is the paper to which you refer:
http://www.c-f-systems.com/PhotoMathDocs.html

I’m working through it, interleaved with housekeeping chores on my own contrastingly simple software… :smile:


Thanks, that doesn’t look too onerous at a glance. There’s a lot of explanation which would take time to read, but it does mention dealing with values at the limits and the problems encountered (which may be the hardest part). I too hope to get some time to read it all!

Typically normalised values of 0.1 to about 0.5.

I need to finish some other stuff that the script does to do with cropping, but hopefully I will post it on GitHub next week.

Elle, I have moved a little further along with what I was working on, and realized that the last piece of the puzzle I need to implement is some tonemapping of the highlights (or just math that will work well enough).

For the moment this needs to be done outside of RT, in ImageMagick. But I am looking for clues/inspiration on how to implement a curve like this: specifically the curve inside the box, but flipped around to work on the highlights only.

[Screenshot from 2017-11-07: the curve in question]
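One possible shape for such a highlight rolloff, sketched in Python. The knee position and the exponential form are my assumptions, not the curve from the screenshot:

```python
import math

def shoulder(x, knee=0.8):
    """Identity below the knee, exponential rolloff above it:
    out-of-range highlights are compressed smoothly toward
    (but never past) 1.0."""
    if x <= knee:
        return x
    span = 1.0 - knee
    return knee + span * (1.0 - math.exp(-(x - knee) / span))
```

Because the curve and its slope are continuous at the knee, there is no visible break where the compression starts, and even a value of 2.0 maps just under 1.0 instead of clipping.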

Hmm, this is a short question with many possible answers. I’ve been experimenting with PhotoFlow filmic curves:

There are links in the above thread to some nice papers. But the PF filmic stuff is not easy to control, and further discussion of the PF filmic curves should probably be done in the above thread (where I’m planning to make a post in the next hours or couple of days).

The base curves in darktable are well worth looking at - there are presets that emulate various camera styles from various camera manufacturers - these curves are all basically “film like” and can be modified until you see something you like, and then you can save a new preset.

What you are asking for is basically just Curves, but finding the right Curves algorithm and interface isn’t the easiest thing to do. I’ve been trying to figure out those HD curves myself, but I suspect you know a lot more about them than I do.

Any “film-like” or “print-like” curves shown in various digital imaging forums are suspect from the get-go unless they specify the actual RGB working space.

Most algorithms and LUTs that purport to be film-like fail to mention to the user that the actual tonality of a paper print made using traditional darkroom processing depends on the film, the film exposure and processing, the paper on which the film is printed, and the process and chemicals with which the paper is developed. So at the very least that nice highlights rolloff in a paper print is the result of not one, but two “wet darkroom” transfer curves, each of which depends on many variables. This is why I find the PF filmic tonemapping both intriguing and frustrating: all the ingredients seem to be in place, but I’m having trouble getting the sliders to correlate with what I think I know about the things the sliders seem to be emulating.

Other people hopefully will chime in with references to algorithms in various softwares for emulating the rolloff in the highlights achievable using traditional darkroom techniques.


Like this?

[image: example curve]

Couldn’t you just apply a curve or two to the data using -fx (IM) or fill (G’MIC) and then compare the result with the analog counterparts, rinse and repeat?

As people have pointed out, the shape of the curves will vary depending on your input and output data and what your expectations are. I think your post, #42 (which is apparently the Answer to the Ultimate Question of Life, the Universe, and Everything), would have been a great opener to this thread. Now we know what you are aiming to do. In general, it is better to put this up front so we know where you are going with your questions.

BTW, I just changed the title, category and tags to reflect the thrust of this thread. Let me know if this title is okay and / or feel free to refine it if you have the privileges.

G’MIC can be useful for testing curves:

gmic 256,1,1,1,x/256 -dg 400,400,1,1

Which gives:

[output: straight-line ramp plot]

So perhaps a mirrored gamma curve: 1-(1-x)^g

gmic 256,1,1,1,x/256 -oneminus -^ 2.4 -oneminus -dg 400,400,1,1

[output: mirrored gamma curve plot]

In any case a LUT will usually help speed wise.
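The same mirrored gamma curve as a precomputed LUT, sketched in Python for 8-bit input (the helper name is mine):

```python
g = 2.4
# Precompute 1-(1-x)^g once for all 256 possible 8-bit input values.
lut = [1.0 - (1.0 - i / 255.0) ** g for i in range(256)]

def apply_curve(img8):
    """Map 8-bit pixel values through the precomputed curve by indexing,
    instead of evaluating the power function for every pixel."""
    return [lut[v] for v in img8]
```

For float input you would interpolate between LUT entries instead of indexing directly, but the speed argument is the same: one pass of 256 pow() calls instead of one per pixel.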


Thanks for those. I think I’ve managed to do all the heavy lifting now, in that what comes out of the script is what would be projected by an enlarger: its colors and intensities are correct. This curve is for the paper, the ‘personality’ of which I am not trying to model, as it has a very constant gamma except for the toe, which is where I want a little bit of highlight compression.

This will allow me to use a ‘better’ exposure in my inversion.

Yes, that’s what I intend to do, but the maths is hard for me, and the toe curve has to be applied at the time of inversion, as optimum exposure creates values greater than 1.

At the moment I just let them clip, or choose a less than optimum exposure and adjust in RT.

The size and shape should be fairly straightforward, as I believe it is precisely defined; there are not many choices left in the world when it comes to photographic color paper. This toe, which is now a shoulder, will suffice.

Thanks and this thread has helped me a lot.

Will have a look at that; maybe it will help me figure out how to apply such a curve ‘inline’ with my inversion.

The curve needs to be applied to values of:

[Screenshot from 2017-11-10: the inversion formula]

Where K is the exposure, Jn is the normalised intensity of the pixel, and Yp is typically 1.8.

at that font size I can take off my glasses :wink:


I find the source of that formula tricky to follow, but not because of the maths - certain ‘leaps’ seem to be made without explanation. Still reading it on occasion…

If the optimal settings result in values being clipped and you’re looking for a way to compress rather than ‘lose’ them, it suggests the curve/formula itself has an issue for your input.

Edit: some explanation of it may help

The formula written in another way is just (K/J^p)^c where J represents the set of all pixels (but this equally applies to any one pixel individually). Rewritten it can be K^c * J^(-pc) and given that K is some constant of your choosing and we can normalize to any range, it may as well just be J^(-pc).

In other words, this is equivalent to reciprocal then gamma, or simply a negative gamma exponent : J^-g
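That algebraic rewrite is easy to check numerically. The constants below are hypothetical placeholders, not values from the paper (the thread only says Yp is "typically 1.8"):

```python
K, p, c = 1.8, 1.0, 1.8   # hypothetical exposure and exponents

def original_form(j):
    return (K / j ** p) ** c        # (K / J^p)^c

def rewritten_form(j):
    return K ** c * j ** (-p * c)   # K^c * J^(-pc)
```

Both forms agree to floating point precision, and both are decreasing in J, which is the inversion: darker (denser) negative pixels come out brighter.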

A demonstration of this using gmic… let’s start with a cat image, negate in the ‘digital’ sense then normalise to your input range:

gmic -sp cat -negate -n 0.1,0.5

[output: negated, normalised cat image]

At that point it’s negated. Now try a negative exponent and normalise at the end:

gmic -sp cat -negate -n 0.1,0.5 -^ -0.2 -n 0,255

[output: cat image after the negative exponent]

That’s really all that’s happening with the formula.

Just to be clear, it’s basically saying negate with 1/x instead of 1-x. Then you can apply a usual gamma/tone curve in whatever editor you like so long as it has sufficient gamma range. That of course ignores any colourspace or normalisation…
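A tiny Python sketch of the distinction between the two negations (function names are mine):

```python
def negate_subtractive(x):
    """The usual 'digital' negate: reflect around the range endpoints."""
    return 1.0 - x

def negate_reciprocal(x):
    """Reciprocal negate: film density is logarithmic in transmittance,
    so inverting a negative is multiplicative rather than subtractive."""
    return 1.0 / x
```

Both are decreasing functions, but the reciprocal form expands the shadows of the negative (the highlights of the scene) enormously, which is why the result then needs a gamma/tone curve and normalisation.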


Not sure I can explain it fully, as I have followed the approach contained in the published documents, which I believe are the foundation of the ColorPerfect product.

What does the negate do to the pixels in G’MIC? Is it something like (QuantumRange - pixel)?

I get what you say, though it takes me a lot longer. :blush: Your discussion has made me realise that the highlight compression done by the photographic paper can probably be implemented independently, and perhaps that step is not that critical after all, as it can be done later. Though I guess only testing will tell.

It’s probably best that I finish off the last bits of the work I was working on and get it on GitHub.