Developing a FOSS solution for inverting negatives

With HDRI it doesn’t. See ImageMagick – High Dynamic-Range Images.

I believe that they have made it easier to do than before. Also, their Q16 binary includes HDRI now, which is wonderful and useful for most applications.

It’s actually far worse than that, unfortunately. If anyone is interested, this is an excellent read (maybe what @Elle was mentioning above?)


Hi @agriggio - Yes, that’s the article I was thinking about. Or if it’s not, then I was thinking about an article that covers exactly those topics.

Hi @ggbutcher - as is too often the case when I try to answer questions, I totally overlooked the important stuff :slight_smile: such as what these floating point numbers look like in the image editor. As you say, 0.0 to 1.0 covers the integer range, but at floating point the channel values can go above 1.0 and, depending on the editor and the operations, also below 0.0.

@LaurenceLumi - As Glenn pointed out, at some point in the editing process something has to be done to bring out-of-gamut channel values back into the display range from 0.0 to 1.0 floating point. Many tonemapping algorithms have been developed to do this, and sometimes you might want to take care of the problem “by hand”, depending on the image and your editing goals. In case it might be helpful, a couple of articles on my website talk about high dynamic range images and dealing with out-of-gamut channel values in GIMP:

Models for image editing: Display-referred and scene-referred
https://ninedegreesbelow.com/photography/display-referred-scene-referred.html

Autumn colors: An Introduction to High Bit Depth GIMP’s New Editing Capabilities
Edit tonality and color separately using high bit depth GIMP - and specifically the section on out of gamut colors

Yes. The farther apart the primaries are as located in XYZ space, the larger the resulting color gamut.

This article explains how you get from three primaries to an actual color gamut - a 3D volume - in XYZ space: Programmer's Guide to XYZ, RGB
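
To make that concrete, here’s a rough sketch (my own illustration of the math described in that article, not code taken from it) of how three xy chromaticities plus a white point become an RGB-to-XYZ matrix. The sRGB/D65 chromaticities below are the standard published values; everything else is just illustrative:

```cpp
#include <cstdio>

struct Vec3 { double x, y, z; };
struct Mat3 { double m[3][3]; };

// Convert an xy chromaticity to XYZ with Y = 1.
static Vec3 xyToXYZ(double x, double y) {
    return { x / y, 1.0, (1.0 - x - y) / y };
}

static Vec3 mul(const Mat3& a, const Vec3& v) {
    return { a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z,
             a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z,
             a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z };
}

// Invert a 3x3 matrix by cofactors (fine for these well-behaved inputs).
static Mat3 inverse(const Mat3& a) {
    const double (*m)[3] = a.m;
    double det = m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
               - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
               + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
    Mat3 r;
    r.m[0][0] =  (m[1][1]*m[2][2] - m[1][2]*m[2][1]) / det;
    r.m[0][1] = -(m[0][1]*m[2][2] - m[0][2]*m[2][1]) / det;
    r.m[0][2] =  (m[0][1]*m[1][2] - m[0][2]*m[1][1]) / det;
    r.m[1][0] = -(m[1][0]*m[2][2] - m[1][2]*m[2][0]) / det;
    r.m[1][1] =  (m[0][0]*m[2][2] - m[0][2]*m[2][0]) / det;
    r.m[1][2] = -(m[0][0]*m[1][2] - m[0][2]*m[1][0]) / det;
    r.m[2][0] =  (m[1][0]*m[2][1] - m[1][1]*m[2][0]) / det;
    r.m[2][1] = -(m[0][0]*m[2][1] - m[0][1]*m[2][0]) / det;
    r.m[2][2] =  (m[0][0]*m[1][1] - m[0][1]*m[1][0]) / det;
    return r;
}

int main() {
    // sRGB primaries and D65 white point, as xy chromaticities.
    Vec3 R = xyToXYZ(0.6400, 0.3300);
    Vec3 G = xyToXYZ(0.3000, 0.6000);
    Vec3 B = xyToXYZ(0.1500, 0.0600);
    Vec3 W = xyToXYZ(0.3127, 0.3290);

    // Columns of the un-scaled matrix are the primaries' XYZ values.
    Mat3 M = {{ { R.x, G.x, B.x }, { R.y, G.y, B.y }, { R.z, G.z, B.z } }};
    // Scale each column so that RGB = (1,1,1) maps exactly to the white point.
    Vec3 S = mul(inverse(M), W);
    for (int i = 0; i < 3; ++i) { M.m[i][0] *= S.x; M.m[i][1] *= S.y; M.m[i][2] *= S.z; }

    // The color gamut is the 3D volume swept out by M * (r,g,b) for r,g,b in [0,1].
    for (int i = 0; i < 3; ++i)
        std::printf("%8.5f %8.5f %8.5f\n", M.m[i][0], M.m[i][1], M.m[i][2]);
}
```

The printed matrix should come out very close to the familiar published sRGB-to-XYZ matrix, and the gamut is simply everything that matrix can produce from RGB values between 0 and 1.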

This page has gamut outlines in LAB space for various older RGB working spaces, but doesn’t include any of the newer spaces such as the ACES color spaces and Rec.2020: http://brucelindbloom.com/index.html?WorkingSpaceInfo.html

The same Lindbloom page used to allow you to view and compare 3D color gamuts in different reference working spaces, including LAB and XYZ. There used to be a 3D gamut viewer at the bottom of the page (as shown here: https://web.archive.org/web/20130406193252/http://brucelindbloom.com/index.html?WorkingSpaceInfo.html). But it seems to have disappeared. @afre - can you see the gamut viewer on the current brucelindbloom page? I seem to recall it required java (not just javascript) installed to actually be used, and I don’t have java installed on my computer any more.

Such a 3D gamut viewer would be nice to have in free/libre software - is there such a thing? icc_examin has a bit of what the Lindbloom viewer had, and ArgyllCMS can generate 3D views, but I think only in LAB space, and only if the user is willing to work at the command line.

This page has some basic definitions for discussing ICC profiles, and also shows a sample printer, monitor, and camera input profile in LAB space: Color spaces inside CIELAB

Yes. Camera manufacturers are not usually so obliging as to just tell us what the effective primaries of their cameras are. So instead we have to make profiles for use when interpolating camera raw files, by shooting a target chart to get a set of known color patches. Scanners also can be profiled, starting with a scan of a target chart, if the scanner allows you to disable the conversion to sRGB.

With monitors the process involves first calibrating the monitor, and then using profiling software together with a color measuring device connected to the computer and monitor. The software sends known color patches to the screen to be read by the device, and that information is used by the profiling software to create a monitor profile.

With respect to the word “primaries” - this implies a matrix RGB profile, which is the most commonly used type of profile for cameras and monitors.


Replying to my own post, is this like talking to one’s self? :smiley:

It occurs to me to point out that ‘overflow’ in floating point isn’t unique to the range 0.0-1.0. @David_Tschumperle, correct me if I’m wrong, but I think G’MIC uses an internal representation of 0.0-255.0, and the same concept applies: in manipulation you can get tones like 257.0843, and they have to be corralled back to display range. Thus, the G’MIC tutorial on -cut and -normalize. For my hack software, I chose 0.0-1.0 because Little CMS handles floating point in the same range.
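
For anyone who hasn’t met -cut and -normalize, here’s a tiny sketch of what the two approaches boil down to (my own C++ illustration, not G’MIC’s actual code):

```cpp
#include <algorithm>
#include <vector>

// "Cut": clamp every value to [lo, hi]; out-of-range detail is simply discarded.
void cut(std::vector<float>& px, float lo, float hi) {
    for (float& v : px) v = std::min(std::max(v, lo), hi);
}

// "Normalize": linearly rescale the whole range onto [lo, hi]; nothing is
// discarded, but every in-range tone moves too.
void normalize(std::vector<float>& px, float lo, float hi) {
    if (px.empty()) return;
    auto [mn, mx] = std::minmax_element(px.begin(), px.end());
    float range = *mx - *mn;
    if (range <= 0.0f) return;                 // flat image, nothing to do
    for (float& v : px) v = lo + (v - *mn) * (hi - lo) / range;
}
```

The trade-off is the same whether the range is 0.0-1.0 or 0.0-255.0: clamping keeps in-range tones untouched but throws away the overflow, normalizing keeps everything but shifts every tone.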

The key difference to realize in how floating point vs integer representations behave is that, if you use the full range of an integer data type like unsigned short in C++ to store fully-scaled 16-bit tone values, pushing a value off the high end in a math expression ‘rolls’ that data around to the bottom of the range on every computer I know of, mixing it in with the shadow values. Once that happens, it’s harder (hmmm, impossible?) to retrieve it for a highlight recovery algorithm.
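
A short demo of that difference, if anyone wants to see it happen (just an illustration, not code from any particular raw processor):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    uint16_t tone_i = 60000;                 // a bright 16-bit tone
    float    tone_f = 60000.0f / 65535.0f;   // the same tone, scaled to 0.0-1.0

    tone_i = static_cast<uint16_t>(tone_i + 10000);  // wraps around to the bottom of the range
    tone_f = tone_f + 10000.0f / 65535.0f;           // just goes past 1.0

    std::printf("integer: %u\n", tone_i);   // 4464 - now indistinguishable from a shadow value
    std::printf("float:   %f\n", tone_f);   // ~1.068 - still recoverable as a highlight
}
```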

Oh, one more thing: @Elle is right, the same behavior is available at the bottom of the floating point data range. If black=0.0, manipulations that “push left” result in negative numbers, e.g., -0.00023. But all digital imaging I know of anchors the analog measurements at 0, so there’s less interest in recovering that arbitrariness, and there’s no informed consideration of such a thing as ETTL…

When processing raw files, as you say usually it’s best to clip values that are less than the camera-indicated black point, to chop off what is mostly noise. And in some cameras even in the raw file the noise has been clipped before the raw file is saved to disk. But for cameras that don’t clip to the black point, sometimes that “really mostly noise” information can be useful, so not clipping it automatically is a nice option that some raw processors do provide.

The situation changes considerably when you consider unbounded floating point ICC profile conversions, because then the negative channel values can result simply from colorful colors in the image that exceed the color gamut of the destination color space.

This whole business of editing using unbounded 32f is kind of new, at least for photographers. But consider an RGB image editor such as GIMP that lets you work at 32f without clipping: suppose you add saturation, you might push it so high that one or two channels go negative as expressed in whatever working space you might be using. My “Autumn colors” tutorial talks about dealing with this situation.
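
In case a concrete number helps, here’s a small sketch of how a saturation boost in linear RGB can drive a channel negative. The simple “scale about luminance” operator is just my illustration (it isn’t GIMP’s actual saturation code), and the Rec.709 luminance weights are the standard ones for linear sRGB:

```cpp
#include <cstdio>

int main() {
    // A fairly saturated linear sRGB teal.
    float r = 0.05f, g = 0.60f, b = 0.55f;

    // Luminance using the standard Rec.709 weights for linear sRGB.
    float Y = 0.2126f * r + 0.7152f * g + 0.0722f * b;

    // Push saturation: move each channel further away from the luminance axis.
    float sat = 1.8f;
    float r2 = Y + sat * (r - Y);
    float g2 = Y + sat * (g - Y);
    float b2 = Y + sat * (b - Y);

    std::printf("before: %f %f %f\n", r, g, b);
    std::printf("after:  %f %f %f\n", r2, g2, b2);  // the red channel goes negative
}
```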

Also consider user-generated actions that might produce temporary negative channel values, when the user actually intends this to happen, or at least knows it might happen, because subsequent editing will bring the channel values back above zero. Clipping the channel values just because they happen to be negative at the moment risks throwing away valid color information. It’s kind of a whole new world of editing possibilities, and also of editing traps and pitfalls to be avoided.

Every time I expose to the right I end up exposing to the left to avoid clipped highlights :slight_smile: Well, at least when taking pictures outdoors in direct sunlight.

Yes, that occurred to me when I was replacing FreeImage with my homegrown library and chose to have just one internal pixel representation, fp 0.0-1.0. So I just let all the Apply operations operate unbounded, and I don’t clip until I either go to display or to file output. Now that I’m understanding both tone and color gamut better, for my next iteration of rawproc I’m going to include some sort of out-of-gamut display depiction, or “blinkies” as they’re colloquially called. A highlight reconstruction tool is also under consideration.
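
For what it’s worth, the “blinkies” idea really is about as simple as it sounds; something along these lines (purely illustrative, not rawproc’s actual implementation):

```cpp
#include <cstddef>
#include <vector>

struct RGB { float r, g, b; };

// Returns one bool per pixel: true = any channel falls outside the 0.0-1.0
// display range, so the display code can paint it in a warning color.
std::vector<bool> outOfDisplayRange(const std::vector<RGB>& img) {
    std::vector<bool> mask(img.size());
    for (std::size_t i = 0; i < img.size(); ++i) {
        const RGB& p = img[i];
        mask[i] = p.r < 0.0f || p.r > 1.0f ||
                  p.g < 0.0f || p.g > 1.0f ||
                  p.b < 0.0f || p.b > 1.0f;
    }
    return mask;
}
```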

I’m still messing with an ETTR shooting flow, but can’t settle on something that I can manage in real-time with varying scene contrasts. I also screwed up my thinking helping a friend with film exposure, where ETTL is what you want to do for reasons manifest in the fundamental notion of a “negative”. Ack, all this hard thinking, just when my age-related dementia is settling in…

I disabled Java for the browser for security reasons. I will enable it and restart Firefox to give it a try. The non-archive web page doesn’t have the 3D Gamut Viewer section. Perhaps Lindbloom removed it for one reason or another?

Edit: I enabled Java and launched Firefox in Safe Mode and the viewer doesn’t work. There may be a litany of reasons but that would be off-topic.

Pretty comfortable with those concepts.

But I am still confused about how 32f works in this context. I thought it was just greater precision, still the same normalized range of 0-1. I can understand out-of-range values being used for a temporary calculation, which is how I thought IM worked even with 16-bit. Or is there a way to interchange these out-of-range values, or does the 32-bit standard reserve some bits for values greater than the nominal value?

@LaurenceLumi I don’t understand this myself. Maybe if I read all of the links offered in this thread carefully I would get a better picture… I just remember from school that there was something called significant figures, and that apps like MATLAB allow the user to define and work with different data types and structures.

Edit: I just remembered that I took a course called Numerical Methods for fun back in university. Of course, I can’t recall any of it :rofl:.

This course provides a theoretical and practical introduction to numerical methods for approximating the solution(s) of linear and nonlinear problems in the applied sciences. The topics covered include: solution of a single nonlinear equation; polynomial interpolation; numerical differentiation and integration; solution of initial value and boundary value problems; and the solution of systems of linear and nonlinear algebraic equations.

Actually, precision is not a good thing to expect from floating point numbers. It’s very hard to express a precise fp value, whereas in integers, 128 is exactly that: 128. Now, your next opportunity to be precise is 129, and there are an infinite number of quantities between 128 and 129, some of which you may care about. fp gives you the means to dig between the integer values, but it’s still a digital representation, and further, the gaps between quantities change as you move about the number line.
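
Here’s a tiny demo of that last point, in case anyone wants to watch the gaps change (plain standard C++, nothing image-specific):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    float values[] = { 0.001f, 1.0f, 128.0f, 65535.0f };
    for (float v : values) {
        // Distance from v to the next representable float above it.
        float gap = std::nextafterf(v, 2.0f * v) - v;
        std::printf("near %10.3f the next float is %.10g away\n", v, gap);
    }
}
```

The gap near 0.001 is tiny, the gap near 65535 is a sizable fraction of a tone step, which is exactly why “precision” isn’t a single number for floating point.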

With respect to image tones, you’re just using a small part of the total range of 32-bit fp, which means there’s room on either end for overflow. fp doesn’t know the difference between in-gamut and out-of-gamut, that’s for the poor programmer to keep track of… :slight_smile:

At the end of the day, that is what it is all about. Thanks @everyone who is a dev for dealing with the hard stuff! Fortunately, for the purposes of most FOSS photography, parity of values among sensor readout, mathematics and computing isn’t that critical. I mean, there are solutions around certain problems, but they would be computationally expensive and not as fun to use. Who wants to hear: [App3084] is so slow! Why can’t it be fast like [App567]?

For a short time his website was actually off-line, leaving only a single page with a rather cryptic quote from a George Harrison song. I guess when the website was put back up, maybe the stuff that depended on java wasn’t added back in. But this is just a guess. The site is pretty important as a reference site for programmers and anyone interested in color science. I hope Bruce Lindbloom himself is doing OK.

Anyway, thanks for checking! I was kinda hoping it was just because I don’t have java installed, but it looks like that nice 3D viewer is really gone.

I understand enough about the principles of floating point, and I appreciate the excellent help I have received in this thread. I now understand enough about sRGB to move forward with some confidence.

What I don’t understand is whether there is a way, using 32f or any other “practical” method, to export out-of-range sRGB values in a file and have RT import that file. I could see, for example, how I could store the out-of-range values in another image and merge them as required. That’s probably the most efficient way to store this anyway, as it should compress well. But that’s probably biting off way more than I could possibly chew.

Or is there something in RT’s image-stacking stuff (which I’ve never used) that I could explore?

@LaurenceLumi The best way to find out is to experiment. I don’t think that RT would be able to process a secondary image that contains just the outliers. However, what you can do is create an HDR image [1]. I believe that RT can handle those without a problem. Be careful with negative values, which indicate out-of-gamut colors; as of this moment, I still can’t decide what to do with them.

[1] Edit: Well, RT’s website says the features are:

  1. 96-bit (floating point) processing engine.
  2. Can load most raw files including 16-, 24- and 32-bit raw HDR DNG images, as well as standard JPEG, PNG (8- and 16-bit) and TIFF (8-, 16- and 32-bit logluv) images.
  3. Can save JPEG, PNG (8- and 16-bit) and TIFF (8- and 16-bit) images.

@Elle Among the reasons that I thought of, the most likely one would be that the Java applet itself is too old for modern browsers. The applet web page is dated June, 24, 2008 [sic].

RawTherapee is not set up to deal with RGB values outside the display range between 0.0f and 1.0f, neither for importing nor for exporting.

RawTherapee is not set up to allow the user to perform mathematical operations directly on linear RGB, which is what I’m fairly sure you want to do, but maybe I misunderstand you.

RawTherapee is an awesomely excellent raw processor and has some amazing algorithms that allow for accomplishing various editing tasks. But I have a feeling it’s not the right software for the specific tasks that you have in mind.

For importing and exporting floating point images, you need an image editor that can import and export floating point tiffs or openexr images. One or both (depending on the software) can be imported and exported by Krita, GIMP 2.9, darktable, and PhotoFlow.

There’s a bunch of other software that also can be used to import/export floating point tiffs and/or openexr files. I only listed software that I use on a regular basis. @ggbutcher - can rawproc import and export floating point images?

There are some other file formats that can support, or partially support, RGB channel values outside the display range, but these other file formats are not as widely supported across various software as floating point tiffs and openexr images.

Please note that a “tonemapped” HDR image, though commonly referred to as “HDR”, is no longer an actual HDR image, precisely because it’s been tonemapped to fit in the display range (just as an interpolated raw file is no longer a raw file after it’s been interpolated and saved as some other format).

Maybe the task you have in mind is tonemapping the scans specifically using RawTherapee algorithms, after you’ve processed the scans using other software? In this case RT has some very nice tonemapping options. But if the “other software” has produced channel values outside the display range, you’ll need to save the processed scans in a file format that supports floating point values. And then you’d need to open these files with software that can open floating point files, and then do an exposure adjustment to bring all the channel values to fit below 1.0f, and then export as a 16-bit integer image, which RawTherapee can open.
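
To make that last step concrete, here’s a rough sketch of the “exposure adjustment plus export as 16-bit integer” idea (my own illustration with a hypothetical function name, not any particular editor’s exporter; real exporters also handle color profiles, metadata, dithering, and so on):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Scale a floating point image so its brightest channel value fits below 1.0,
// then quantize to 16-bit integers suitable for a TIFF that RawTherapee can open.
std::vector<uint16_t> exportAs16Bit(const std::vector<float>& img) {
    if (img.empty()) return {};
    float maxval = *std::max_element(img.begin(), img.end());
    float scale = (maxval > 1.0f) ? 1.0f / maxval : 1.0f;   // the exposure adjustment

    std::vector<uint16_t> out(img.size());
    for (std::size_t i = 0; i < img.size(); ++i) {
        float v = std::clamp(img[i] * scale, 0.0f, 1.0f);   // negative values still need clipping
        out[i] = static_cast<uint16_t>(v * 65535.0f + 0.5f);
    }
    return out;
}
```

Scaling by the maximum preserves all the highlight information, at the cost of darkening the whole image; that’s why the exposure adjustment is usually something you want to do deliberately rather than have the exporter do silently.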

Yes, exactly. To get a handle on what “out of gamut”, “outliers”, “high dynamic range”, etc actually means, and a handle on what floating point processing allows you to do, experimenting is the best way to get started.

For experimenting, I’d suggest using my GIMP-CCE via the appimage that @Carmelo_DrRaw puts together (Community-built software), working in a linear gamma RGB color space, and experimenting specifically with the Addition/Subtract/Multiply/Divide blend modes, with Exposure, Auto-Stretch, and Invert, and with the PhotoFlow plug-in for tonemapping, maybe starting with the filmic tonemapping options.

I uploaded a couple of test images, in case you might find them useful:

  • A linear gamma sRGB high dynamic range (not tonemapped) 32-bit floating point tiff: ninedegreesbelow.com/files/linear-srgb-dcraw-matrix-no-alpha-no-thumbnail-402px.tiff

  • A linear gamma sRGB test image with gray blocks running from 0.0f to 1.0f. The top set of blocks increases by equal Lab Lightness increments. The bottom set of blocks increases by equal steps in linear gamma RGB. Try adding this image to itself in GIMP-CCE and color-picking the resulting RGB values. It’s also a nice image for exploring what different tone-mapping algorithms do to different tonal ranges - I put this image together for exploring the PhotoFlow filmic tonemapping:


Hello,
having read the various contributions, I must admit I do not see what you are aiming at. Just some basic remarks:

The linear (so-called) raw scanner data are in a device-specific colour space. The only way to convert them into sRGB or AdobeRGB is via an ICC profile set up for your scanner (see the contribution by Morgan).

To make calculations with up to 32-bit images you might want to look into ImageJ, including conversion into other bit depths. However, this package knows nothing about color management.

Isn’t handling out-of-gamut colours the task of the rendering intents? To my understanding, they determine how these colours are to be transformed into the destination colour space.

Hermann-Josef

My apologies, I have a very bad habit of saying “out of gamut” to refer to two very different situations:

  1. In a floating point unbounded ICC profile conversion, sometimes colors in the destination color space will have one or two channel values that are less than 0.0f. These colors fall outside the triangle defined by the color space chromaticities. Also, sometimes this situation can happen while editing at floating point, for example by adding saturation. This is “out of gamut”.

  2. In floating point processing, it’s entirely possible to have channel values that are greater than 1.0. As long as the resulting color has a chromaticity inside the triangle defined by the color space chromaticities (I mean on the xy plane of the xyY color space), these colors are “high dynamic range” but not “out of gamut”. A better term might be “out of display range”. (See the small sketch below for the distinction.)
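
A tiny sketch of the distinction, in case it helps (the helper name and the simple sign test are just my illustration):

```cpp
#include <cstdio>

enum class PixelClass { InRange, OutOfDisplayRange, OutOfGamut };

// A negative channel means the chromaticity lies outside the working space's
// triangle (case 1 above); values above 1.0 with no negatives are merely
// outside the display range (case 2 above).
PixelClass classify(float r, float g, float b) {
    if (r < 0.0f || g < 0.0f || b < 0.0f)
        return PixelClass::OutOfGamut;
    if (r > 1.0f || g > 1.0f || b > 1.0f)
        return PixelClass::OutOfDisplayRange;
    return PixelClass::InRange;
}

int main() {
    std::printf("%d\n", static_cast<int>(classify(-0.1f, 0.8f, 0.2f)));  // out of gamut
    std::printf("%d\n", static_cast<int>(classify( 1.6f, 1.2f, 0.9f)));  // out of display range
}
```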

As to handling out of gamut colors using rendering intents, yes, you can use perceptual intent when converting to a destination profile. But unless the destination profile has a perceptual intent table, what you get is relative colorimetric intent, which clips any out of gamut colors, well, at least if the conversion is done at integer precision. And even if the destination profile does have a perceptual intent table, some colors might be outside the color space that was used to make the perceptual intent table, in which case you’d still get clipping.