Developing a FOSS solution for inverting negatives

At the end of the day, that is what it is all about. Thanks @everyone who is a dev for dealing with the hard stuff! Fortunately, for the purposes of most FOSS photography, parity of values among sensor readout, mathematics and computing isn’t that critical. I mean, there are ways around certain problems, but they would be computationally expensive and not as fun to use. Who wants to hear: [App3084] is so slow! Why can’t it be fast like [App567]?

For a short time his website was actually off-line, leaving only a single page with a rather cryptic quote from a George Harrison song. I guess when the website was put back up, maybe the stuff that depended on Java wasn’t added back in. But this is just a guess. The site is pretty important as a reference site for programmers and anyone interested in color science. I hope Bruce Lindbloom himself is doing OK.

Anyway, thanks for checking! I was kinda hoping it was just because I don’t have Java installed, but it looks like that nice 3D viewer is really gone.

I understand enough about the principles of floating point, and I appreciate the excellent help I have received on the thread. I now understand enough about sRGB to move forward with some confidence.

What I don’t understand is whether there is a way, using 32-bit float or any other “practical” format, to export out-of-range sRGB values to a file and have RT import that file. I could see, for example, how I could store the out-of-range values in another image and merge them as required. That’s probably the most efficient way to store this anyway, as it should compress well. But that’s probably biting off way more than I could possibly chew.
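
Just to make that split-and-merge idea concrete, here is a minimal sketch, assuming numpy arrays of linear float data and the tifffile library; the file names are only illustrative, and this isn’t something RT itself understands:

import numpy as np
import tifffile

# Hypothetical linear float scan with highlight values above 1.0
img = tifffile.imread("scan_linear_float.tif").astype(np.float32)
# Base image: everything clipped to the displayable 0.0-1.0 range
base = np.clip(img, 0.0, 1.0)
# Residual image: only whatever exceeds 1.0 (zero almost everywhere, so it should compress well)
residual = np.maximum(img - 1.0, 0.0)
tifffile.imwrite("scan_base.tif", base)
tifffile.imwrite("scan_residual.tif", residual)
# Merging them back later recovers the original out-of-range data
merged = tifffile.imread("scan_base.tif") + tifffile.imread("scan_residual.tif")
assert np.allclose(merged, img)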

Or is there something in RT’s image-stacking features (which I’ve never used) that I could explore?

@LaurenceLumi The best way to find out is to experiment. I don’t think that RT would be able to process a secondary image that contains just the outliers. However, what you can do is create an HDR image [1]. I believe that RT can handle those without a problem. Be careful with negative values, which indicate out-of-gamut colors; as of this moment, I still can’t decide what to do with them.

[1] Edit: Well, RT’s website says the features are:

  1. 96-bit (floating point) processing engine.
  2. Can load most raw files including 16-, 24- and 32-bit raw HDR DNG images, as well as standard JPEG, PNG (8- and 16-bit) and TIFF (8-, 16- and 32-bit logluv) images.
  3. Can save JPEG, PNG (8- and 16-bit) and TIFF (8- and 16-bit) images.

@Elle Among the reasons that I thought of, the most likely one would be that the Java applet itself is too old for modern browsers. The applet web page is dated June, 24, 2008 [sic].

RawTherapee is not set up to deal with RGB values outside the display range between 0.0f and 1.0f, neither for importing nor for exporting.

RawTherapee is not set up to allow the user to perform mathematical operations directly on linear RGB, which is what I’m fairly sure you want to do, but maybe I misunderstand you.

RawTherapee is an awesomely excellent raw processor and has some amazing algorithms that allow for accomplishing various editing tasks. But I have a feeling it’s not the right software for the specific tasks that you have in mind.

For importing and exporting floating point images, you need an image editor that can import and export floating point TIFFs or OpenEXR images. One or both (depending on the software) can be imported and exported by Krita, GIMP 2.9, darktable, and PhotoFlow.

There’s a bunch of other software that can also import/export floating point TIFFs and/or OpenEXR files. I only listed software that I use on a regular basis. @ggbutcher - can rawproc import and export floating point images?

There are some other file formats that can support or partially support RGB channel values outside the display range, but these other file formats are not as widely supported across various software as floating point TIFFs and OpenEXR images.

Please note that a “tonemapped” HDR image, though commonly referred to as “HDR”, is no longer an actual HDR image, precisely because it’s been tonemapped to fit in the display range (just as an interpolated raw file is no longer a raw file after it’s been interpolated and saved as some other format).

Maybe the task you have in mind is tonemapping the scans specifically using RawTherapee algorithms, after you’ve processed the scans using other software? In this case RT has some very nice tonemapping options. But if the “other software” has produced channel values outside the display range, you’ll need to save the processed scans in a file format that supports floating point values. And then you’d need to open these files with software that can open floating point files, and then do an exposure adjustment to bring all the channel values to fit below 1.0f, and then export as a 16-bit integer image, which RawTherapee can open.
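
As a rough illustration of that last step (scaling into the display range and exporting a 16-bit integer file), here is a minimal sketch using numpy and tifffile; scaling by the maximum is just one simple kind of exposure adjustment, and the file names are only illustrative:

import numpy as np
import tifffile

# Load the 32-bit floating point TIFF produced by the "other software"
img = tifffile.imread("processed_scan_float.tif").astype(np.float32)
# One simple exposure adjustment: scale so the brightest channel value lands at 1.0
peak = img.max()
if peak > 1.0:
    img = img / peak
# Clip any remaining negative (out-of-gamut) values and export as a 16-bit integer
# image, which RawTherapee can open
img16 = np.round(np.clip(img, 0.0, 1.0) * 65535.0).astype(np.uint16)
tifffile.imwrite("processed_scan_16bit.tif", img16)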

Yes, exactly. To get a handle on what “out of gamut”, “outliers”, “high dynamic range”, etc actually means, and a handle on what floating point processing allows you to do, experimenting is the best way to get started.

For experimenting, I’d suggest using my GIMP-CCE via the appimage that @Carmelo_DrRaw puts together (Community-built software), working in a linear gamma RGB color space, and experimenting specifically with the Addition/Subtract/Multiply/Divide blend modes, with Exposure, Auto-Stretch, and Invert, and with the PhotoFlow plug-in for tonemapping, maybe starting with the filmic tonemapping options.

I uploaded a couple of test images, in case you might find them useful:

  • A linear gamma sRGB high dynamic range (not tonemapped) 32-bit floating point tiff: ninedegreesbelow.com/files/linear-srgb-dcraw-matrix-no-alpha-no-thumbnail-402px.tiff

  • A linear gamma sRGB test image with gray blocks running from 0.0f to 1.0f. The top set of blocks increases by equal Lab Lightness increments; the bottom set increases by equal steps in linear gamma RGB. Try adding this image to itself in GIMP-CCE and color-picking the resulting RGB values. It’s also a nice image for exploring what different tone-mapping algorithms do to different tonal ranges - I put this image together for exploring the PhotoFlow filmic tonemapping.
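
For anyone who wants to generate a similar wedge themselves, here is a rough numpy sketch using the standard CIE L*-to-Y relationship for the top row and plain linear steps for the bottom row; the patch count and block size are just illustrative, not necessarily how the actual test image was built:

import numpy as np
import tifffile

def lab_L_to_linear_Y(L):
    # CIE L* (0-100) to relative luminance Y (0-1)
    L = np.asarray(L, dtype=np.float64)
    return np.where(L > 8.0, ((L + 16.0) / 116.0) ** 3, L / 903.3)

steps, block = 11, 64  # 11 gray patches, 64 pixels per patch edge
top = lab_L_to_linear_Y(np.linspace(0.0, 100.0, steps))   # equal L* increments
bottom = np.linspace(0.0, 1.0, steps)                     # equal linear RGB increments
row_top = np.tile(np.repeat(top, block), (block, 1))
row_bottom = np.tile(np.repeat(bottom, block), (block, 1))
gray = np.vstack([row_top, row_bottom]).astype(np.float32)
img = np.stack([gray, gray, gray], axis=-1)               # neutral gray, linear gamma
tifffile.imwrite("linear_gray_wedge.tif", img)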

Hello,
having read the various contributions, I must admit I do not see what you are aiming at. Just some basic remarks:

The linear (so-called) raw scanner data are in a device-specific colour space. The only way to convert them into sRGB or AdobeRGB is via an ICC profile set up for your scanner (see contribution by Morgan).

To make calculations with images of up to 32-bit depth you might want to look into ImageJ, including conversion into other bit depths. However, this package knows nothing about color management.

Isn’t handling out-of-gamut colours the task of the rendering intents? To my understanding, they determine how these colours are to be transformed into the destination colour space.

Hermann-Josef

My apologies, I have a very bad habit of saying “out of gamut” to refer to two very different situations:

  1. In a floating point unbounded ICC profile conversion, sometimes colors in the destination color space will have one or two channel values that are less than 0.0f. These colors fall outside the triangle defined by the color space chromaticities. This situation can also happen while editing at floating point, for example by adding saturation. This is “out of gamut”.

  2. In floating point processing, it’s entirely possible to have channel values that are greater than 1.0. As long as the resulting color has a chromaticity inside the triangle defined by the color space chromaticities (I mean on the xy plane of the xyY color space), these colors are “high dynamic range” but not “out of gamut”. A better term might be “out of display range”.
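
To make that distinction concrete, here is a tiny sketch of how individual RGB triplets could be labelled; it is just an illustration of the terminology above, not any program’s actual behavior:

import numpy as np

def classify(rgb):
    # Label one linear RGB triplet according to the two situations above
    rgb = np.asarray(rgb, dtype=np.float64)
    if np.any(rgb < 0.0):
        # A negative channel means the chromaticity falls outside the
        # triangle defined by the color space primaries
        return "out of gamut"
    if np.any(rgb > 1.0):
        # No negative channels, but at least one above 1.0: inside the
        # gamut triangle, just too bright for the display range
        return "out of display range (HDR)"
    return "within display range"

print(classify([0.5, -0.02, 0.3]))  # out of gamut
print(classify([1.8, 1.2, 0.9]))    # out of display range (HDR)
print(classify([0.2, 0.4, 0.6]))    # within display range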

As to handling out of gamut colors using rendering intents, yes, you can use perceptual intent when converting to a destination profile. But unless the destination profile has a perceptual intent table, what you get is relative colorimetric intent, which clips any out of gamut colors, well, at least if the conversion is done at integer precision. And even if the destination profile does have a perceptual intent table, some colors might be outside the color space that was used to make the perceptual intent table, in which case you’d still get clipping.

Hello,

one more caveat:

If you want to remove the gamma correction and are already in integer space, this transformation is not unique for the higher intensities, since different values in linear space project to the same integer value in gamma-corrected space.
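
A quick numeric illustration of this point, using the standard sRGB encoding curve (a sketch only, not tied to any particular scanner software):

import numpy as np

def srgb_encode(linear):
    # Standard sRGB encoding: linear 0-1 in, gamma-encoded 0-1 out
    linear = np.asarray(linear, dtype=np.float64)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

# Gamma-encode every possible 16-bit linear value and quantize to 8 bits
linear16 = np.arange(65536) / 65535.0
encoded8 = np.round(srgb_encode(linear16) * 255.0).astype(np.uint8)
# How many distinct 16-bit linear values collapse onto a single 8-bit code?
print(np.count_nonzero(encoded8 == 10))   # near black: roughly 20
print(np.count_nonzero(encoded8 == 250))  # near white: more than 500
# Undoing the gamma from the 8-bit integer cannot tell those hundreds of
# linear values apart, hence the inverse is not unique in the highlights.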

Hermann-Josef

That’s great, I will explore that and see where it goes! BTW all the outliers are positive, i.e. they are the highlights only.

On one hand I agree with you. To do everything properly is way too ambitious, and I never envisioned doing it 100% properly. But I am a firm believer in the 80/20 rule. So far maybe I have 15% of the work complete; maybe with a bit more thought and help I might complete another 5%, which is realistically my limit. This, I feel, would be an improvement on what’s on offer.

Certainly I have got some good ideas.

Very possibly - I don’t yet fully understand what ‘tonemapping’ is, but I certainly like RT.

Thanks very much once again, especially for the image. I was doing some experiments with a step wedge and doing tests against commercial scanner software. I will explore that properly this week.

That could be my problem… :grinning:

The equation I posted is supposed to represent the analogue inversion of a film negative. It is presented in a paper by Dunthorn. It took me several days to understand it because I don’t have those skills or brains, but I do understand other crucial parts of the process. In this software space your choices are very restricted, and in FOSS non-existent (at least none that I am aware of): you can use what is embedded in your scanner software, and the good one is expensive and annoying. However, from a workflow perspective even the optimal setting by definition creates some clipping, so that’s a restriction in itself. You can of course ignore all that and ‘attempt’ a fix later - that’s essentially what ColorPerfect does, with a linear scan that is not clipped - or use your own methodology. I am quite happy to purchase ColorPerfect (and I have), but then I need to work in Photoshop or PhotoLine.

So my aims are (now):

To create a FOSS solution for inverting negatives that:

  1. Provides a practical workflow, such that no significant decision needs to be taken at scan time, i.e. scan once and never again.
  2. Provides a very high quality inversion that needs minimal refinement.
  3. Provides an input into tools like RT that is optimised to complete that minimal refinement.

Ha, not yet… right now, just 8- or 16-bit, one-color or three-color images. I really didn’t know fp was a supported pixel type for TIFF until this thread. I’m adding it to the 0.7 punchlist. That’s a no-brainer, as rawproc’s internal representation is fp; any image opened is converted to fp 0.0-1.0, including 8-bit JPEGs.

rawproc has RGB min/max stats in the Image Information popup menu you get when you right-click on any tool in the chain. With that, you can see how the tone range is exceeded in either direction, tool-by-tool.

Hmm quite a rabbit hole this thread, as expected with any mention of colour space!

Some questions occur regarding the inversion… what number range is the input? Any 0 values will obviously produce an invalid result. Is there a link to the paper somewhere (maybe I even missed it on this page!)? It seems simple enough that perhaps layer blending modes could accomplish the same.

G’MIC could also do that part easily - internally it doesn’t care about ranges, a float is just a float and data is just data (not even necessarily an image). Yes I’m also a big G’MIC fan :slight_smile:

Here’s a link to what I think is the paper to which you refer:
http://www.c-f-systems.com/PhotoMathDocs.html

I’m working through it, interleaved with housekeeping chores on my own contrastingly simple software… :smile:

Thanks, that doesn’t look too onerous at a glance. There’s a lot of explanation which would take time to read; it does mention dealing with values at the limits and the problems encountered (which may be the hardest part). I too hope to get some time to read it all!

Typically normalised values of 0.1 to about 0.5.

I need to finish some other stuff that the script does to do with cropping, but hopefully will post it on github next week.

Elle, I have moved a little further along with what I was working on, and realized now that the last piece of the puzzle I need to implement is some tonemapping of the highlights (or just math that will work well enough).

For the moment this needs to be done outside of RT, in ImageMagick. But I am looking for clues/inspiration on how to implement a curve like this - specifically the curve inside the box, but flipped around to work on the highlights only.

Screenshot from 2017-11-07 22-51-35
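
As a rough sketch of one possible shape (not the exact curve in the screenshot - the knee value and the Reinhard-style shoulder below are just illustrative assumptions), a rolloff that only touches the highlights above a chosen knee could look like this:

import numpy as np

def highlight_rolloff(x, knee=0.8):
    # Identity below the knee; a Reinhard-style shoulder above it that
    # approaches 1.0 asymptotically, so only the highlights are touched
    x = np.asarray(x, dtype=np.float64)
    head = 1.0 - knee                    # room left between the knee and 1.0
    over = np.maximum(x - knee, 0.0)
    rolled = knee + head * (over / (over + head))
    return np.where(x <= knee, x, rolled)

for v in (0.5, 0.8, 1.0, 2.0, 5.0):
    print(v, round(float(highlight_rolloff(v)), 4))  # 0.5, 0.8, 0.9, 0.9714, 0.9909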

Hmm, this is a short question with many possible answers. I’ve been experimenting with PhotoFlow filmic curves:

There are links in the above thread to some nice papers. But the PF filmic stuff is not easy to control, and further discussion of the PF filmic curves should probably be done in the above thread (where I’m planning to make a post in the next hours or couple of days).

The base curves in darktable are well worth looking at - there are presets that emulate various camera styles from various camera manufacturers - these curves are all basically “film like” and can be modified until you see something you like, and then you can save a new preset.

What you are asking for is basically just Curves, but finding the right Curves algorithm and interface isn’t the easiest thing to do. I’ve been trying to figure out those HD curves myself, but I suspect you know a lot more about them than I do.

Any “film-like” or “print-like” curves shown in various digital imaging forums are suspect from the get-go unless they specify the actual RGB working space.

Most algorithms and LUTs that purport to be film-like fail to mention to the user that the actual tonality of a paper print made using traditional darkroom processing depends on the film, the film exposure and processing, the paper on which the film is printed, and the process and chemicals with which the paper is developed. So at the very least that nice highlight rolloff in a paper print is the result of not one but two “wet darkroom” transfer curves, each of which depends on many variables. This is why I find the PF filmic tonemapping both intriguing and frustrating - all the ingredients seem to be in place, but I’m having trouble getting the sliders to correlate with what I think I know about the things the sliders seem to be emulating.

Other people hopefully will chime in with references to algorithms in various software for emulating the rolloff in the highlights achievable using traditional darkroom techniques.

Like this?

image

Couldn’t you just apply a curve or two to the data using -fx (IM) or fill (G’MIC) and then compare the result with the analog counterparts, rinse and repeat?

As people have pointed out, the shape of the curves will vary depending on your input and output data and what your expectations are. I think your post, #42 (which is apparently the Answer to the Ultimate Question of Life, the Universe, and Everything), would have been a great opener to this thread. Now we know what you are aiming to do. In general, it is better to put this up front so we know where you are going with your questions.

BTW, I just changed the title, category and tags to reflect the thrust of this thread. Let me know if this title is okay and/or feel free to refine it if you have the privileges.

G’MIC can be useful for testing curves:

gmic 256,1,1,1,x/256 -dg 400,400,1,1

Which gives:

flat

So perhaps a mirrored gamma curve: 1-(1-x)^g

gmic 256,1,1,1,x/256 -oneminus -^ 2.4 -oneminus -dg 400,400,1,1

mirrored-gamma

In any case a LUT will usually help speed wise.
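
For instance, a rough numpy sketch of precomputing that mirrored-gamma curve as a 16-bit LUT and applying it by lookup rather than per-pixel math (just an illustration of the LUT idea; the image here is a stand-in):

import numpy as np

g = 2.4
size = 65536  # one LUT entry per possible 16-bit input value
x = np.arange(size) / (size - 1)
# Precompute the mirrored gamma curve 1-(1-x)^g once...
lut = np.round((1.0 - (1.0 - x) ** g) * (size - 1)).astype(np.uint16)
# ...then applying it to a whole 16-bit image is a plain index lookup
img = np.random.randint(0, size, (2000, 3000, 3), dtype=np.uint16)  # stand-in image
out = lut[img]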
