# Unbounded Floating Point Pipelines

(Ingo Weyrich) #21

Sometimes you need to clip. For example, when using a Hald CLUT.

(Luis Sanz RodrĂ­guez) #22

@Elle, well, I was just thinking about the raw data pipeline: applying black and white point, CA, impulse noise and other corrections, white balancing, demosaicing, noise reduction, luminance recovery, etcetera. After converting the raw data to a color space, I'm not sure if it's feasible at all.

#23

We should totally use that from now on.

(Glenn Butcher) #24

I would like to add the word 'scooch' to the technical lexicon. With respect to curve control points, it refers to their slight movement to achieve a particular image transform. For each scooch, I'm sure there's a polynomial somewhere...

(Alan Gibson) #25

Negative (more generally, out-of-gamut) values confuse the heck out of me, but they are a sad fact of life.

They can arise even when in monochrome, even when not messing with profiles. For example: suppose an image has only three pixels. They are black, white and 25% gray. We resize the image to be twice as large, six pixels. What gray levels are they?

It depends on the resizing method we use. The default ImageMagick HDRI (floating-point) resizing gives:

convert xc:black xc:white "xc:gray(25%)" +append -resize "6x1^!" txt:

0,0: srgb(-3.08924%,-3.08924%,-3.08924%)
1,0: srgb(24.6607%,24.6607%,24.6607%)
2,0: srgb(76.497%,76.497%,76.497%)
3,0: srgb(83.3832%,83.3832%,83.3832%)
4,0: srgb(44.1264%,44.1264%,44.1264%)
5,0: srgb(22.6831%,22.6831%,22.6831%)

Thatâs squirrelly.

What has happened? Think of a graph that passes through 3 points: 0, 1 and 0.25. Resizing will re-sample the graph. The algorithm used by IM assumes the graph is a curve, rather than a pair of straight lines. So the resampled points can be outside the values bounded by the inputs.
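The overshoot is easy to reproduce. Below is a sketch using Catmull-Rom cubic interpolation, a common resampling kernel (not necessarily the exact filter ImageMagick chose here), applied to the same three-pixel row with edge replication at the borders:

```python
def catmull_rom(p0, p1, p2, p3, t):
    # Cubic interpolation between p1 and p2, for t in [0, 1].
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)

# Three pixels: black, white, 25% gray, edge-replicated at the borders.
pixels = [0.0, 1.0, 0.25]
padded = [pixels[0], pixels[0]] + pixels + [pixels[-1], pixels[-1]]

# Densely resample every segment and track the extremes.
samples = []
for i in range(1, len(padded) - 2):
    for k in range(101):
        samples.append(catmull_rom(padded[i - 1], padded[i],
                                   padded[i + 1], padded[i + 2], k / 100))

print("min =", min(samples))   # below 0.0: undershoot before the rising edge
print("max =", max(samples))   # above 1.0: overshoot just past white
```

The fitted cubic dips below zero just before the black-to-white edge and rises above one just after it, exactly the behaviour the ImageMagick output shows.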

Feature request: save as floating-point
(Alan Gibson) #26

Another example: unsharp masking can push values out of gamut. This is obvious if you think about what USM does, but here's an example. Make an image that is black and white only, 3x3 white pixels surrounded by a black border 3 pixels thick:

convert -size 3x3 xc:white -bordercolor Black -border 3 -format "MIN=%[fx:minima]\nMAX=%[fx:maxima]" info:

MIN=0
MAX=1

convert -size 3x3 xc:white -bordercolor Black -border 3 -unsharp 0x2 -format "MIN=%[fx:minima]\nMAX=%[fx:maxima]" info:

MIN=-0.197727
MAX=1.75652

Eek! Values now range from -19.7% to +176%.
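The mechanism can be reproduced outside ImageMagick. Here is a minimal 1-D sketch of unsharp masking (the original plus a scaled difference from a blur; a box blur stands in for the Gaussian) applied to a black/white row like the one above:

```python
def box_blur(row, radius):
    # Simple 1-D box blur with edge replication.
    n = len(row)
    out = []
    for i in range(n):
        window = [row[min(max(i + d, 0), n - 1)]
                  for d in range(-radius, radius + 1)]
        out.append(sum(window) / len(window))
    return out

def unsharp(row, radius=2, amount=1.5):
    # USM: add back the difference between the image and its blur.
    blurred = box_blur(row, radius)
    return [p + amount * (p - b) for p, b in zip(row, blurred)]

# A black border around a white patch, as in the ImageMagick example.
row = [0.0] * 3 + [1.0] * 3 + [0.0] * 3
sharpened = unsharp(row)
print("min =", min(sharpened))  # negative: darker-than-black halo
print("max =", max(sharpened))  # above 1.0: brighter-than-white halo
```

Any kernel with this add-back-the-difference structure behaves the same way; the Gaussian in -unsharp only changes the exact numbers.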

Yes, OOG values (especially the negative ones) will cause some algorithms to fail, eg:

f:\web\im>%IMDEV%convert xc:gray(-10%) -evaluate Pow 2.2 txt:

0,0: gray(-nan%)

Double-eek! The pixel is not-a-number!

Software routinely avoids the problem by clipping, but this is a sledgehammer technique that destroys data. We can do better, I hope.

#27

@snibgo This becomes increasingly apparent as I use G'MIC and IM more regularly. It confuses the heck out of me as well. I suppose it is case by case, but how have you tackled the problem besides clipping? No one seems to be ready to discuss this in practical terms whenever I bring this up, but we have to start somewhere.

(Glenn Butcher) #28

I think it's just the inevitable consequence of manipulation. Almost every operation inflicted upon digital data requires an implicit decision to 'bucket' the result into an adjacent discrete value, losing accuracy. This starts with the sensor ADC.

The effects described in earlier posts are new and interesting to me, compelling me to reconsider some indiscriminate image manipulation. Well, maybe not; the resulting images appealed to me, and the loss of data didn't take anything from that. Might be a different story if a museum came to me with a proposal for a showing, all images in 6' x 8' (that's feet) size. Fat chance of that...

What I decided when I took on a floating point internal format for rawproc was to let the over/underruns accumulate, and clip for output. This intuitively seems prudent, as it's the data I'll see that matters to me. Indeed, I've had some images where I've had to lose some clarity in light sources in order to properly develop the rest of the scene, pushing the light off the visible range. Can't afford a D850, for the dynamic range.

Passing unbounded image data to, say, GIMP opens the opportunity to recover some of the spill with tools I'll never be able to implement. That's what I propose...

#29

My current strategy, if you can call it that, is to favor manipulations that donât deviate from the bounds as much. For me, +ve values are scoochable but the -ve OOG ones are squirrel-aggressive.

(Glenn Butcher) #30

We need to write a paper...

(Alberto) #31

maybe (in fact, likely...) I'm being naive, but if you don't know how to handle out of range values, why don't you just leave them untouched? i.e.:

1. apply transformations that are identity for values out of the [0,1] interval, and

2. clip upon reading the pixels instead of when writing (i.e. if you are combining the pixel with its neighbour, just pretend that it is in range)

why wouldn't this work (besides introducing overhead if you read more than you write, which is likely the case)? experts please enlighten me
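For what it's worth, the two rules above can be sketched concretely. A 1-D box blur that leaves out-of-range pixels untouched (rule 1) and clamps neighbours only as it reads them (rule 2) might look like this; it is purely illustrative, and no real program is claimed to work this way:

```python
def clamp(v, lo=0.0, hi=1.0):
    return min(max(v, lo), hi)

def gentle_blur(row, radius=1):
    n = len(row)
    out = []
    for i in range(n):
        if row[i] < 0.0 or row[i] > 1.0:
            out.append(row[i])  # rule 1: identity for out-of-range pixels
            continue
        # rule 2: clamp neighbours on READ; the stored data is untouched.
        window = [clamp(row[min(max(i + d, 0), n - 1)])
                  for d in range(-radius, radius + 1)]
        out.append(sum(window) / len(window))
    return out

row = [-0.2, 0.0, 1.0, 1.6, 1.0]
print(gentle_blur(row))  # the -0.2 and 1.6 survive; the rest stays in range
```

The obvious cost, as noted, is that every pixel must be range-checked (and possibly clamped) on every read, and that neighbourhood operations see a slightly different image than the one actually stored.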

#32

Thanks for your input. Compared to my level, you are all experts.

(Alan Gibson) #33

Software could give users a choice about out of gamut (OOG) values, such as:

1. Never clip or tame OOG values.

2. Always clip after every operation.

3. Always clip before every operation that OOG would cause to fail.

4. As (1) but apply a non-destructive process to put OOG inside the box.

5. As (2) but apply a non-destructive process to put OOG inside the box.

Non-destructive processes ('scooching'?) include applying a gain and bias (eg ImageMagick's '-auto-level'), and methods I show in Putting OOG back in the box. Other methods are possible, eg the four ICC profile intents (only two of which have a precise definition).

This choice could be a generic user-preference, and/or selected for individual operations.

(Andrew) #34

I have limited time to give to this at the moment, but some thoughtsâŚ

1. @snibgo, in your 3 pixel example I can see, I think, why applying some maths in the upscaling would lead to negatives, but it makes no physical sense if we're talking black and white and colour spaces are not involved. Of course with lots of these operations, the boundary conditions have to be dealt with, and with only 3 pixels in a line (if I understand correctly) that's much more boundary than interior!

2. I've not seen anyone address the point I made recently in the other thread where values are multiplied by 2 and then clipped; as opposed to just multiplying by the max suitable value and not needing to clip.

3. Elle's site has warnings about unbounded operations ( https://ninedegreesbelow.com/photography/unbounded-srgb-as-universal-working-space.html#editing-operations-fail ) but on a quick read these problems perhaps go away as long as the working space is big like prophoto (and better still if linear gamma is also used). Though I have yet to try linear gamma properly.

4. Re. further discussion of this and the 'white paper', maybe an approach is to limit it initially to photos intended to be realistic / natural-looking, thus perhaps simplifying the issue by not considering some of the more complicated operations, e.g. blend modes that are rarely needed (though I don't know what most of them do in any case!). Consider the complication raised where two negative values are multiplied, hence a plus. I can understand multiplying an image by say 1.6 to increase contrast, sort of, but 2 negatives, i.e. multiplying one OOG colour by another OOG colour: what does that mean in reality?!

sounds good, does this mean it scales all the time (in fl.point) to keep everything in gamut? (haven't tried IM)

1. Say you're editing / RT-ing in prophoto and want to see how it's going re. gamut. How about a nice new OOG tool where you say what space you're thinking of converting to, and it highlights the OOG parts, but in addition you have a slider, which at one end says 'show everything that's OOG' and at the other 'just show the most OOG part'. Like soft-proofing but more generalised. It shows where the problems are; then it's up to you what you want to do about it.

(Alan Gibson) #35

My 3-pixel example was chosen for simplicity. The same problem can occur with mega-pixel images: if an image has an area of high contrast, a USM can increase contrast enough to push values OOG.

Yes, operations can be designed to never generate OOG values. This would mean an unsharp mask would be adaptive, with varying effect depending on how much it would push values OOG. That's not a trivial problem.

Sadly, no. The issues I show apply to any bounded working space. No matter how large it is, there will always be operations that can push pixels OOG.

Of course, for many purposes we just donât care. If we just want a pretty picture and a handful of pixels out of a few million get clipped, so what? But how about 1%? Or 10%? Or what if we are preparing images for use in a larger project, where damage by clipping might be magnified?

The ImageMagick '-auto-level' operation is called only when the user asks for it. So I can call it after the USM, eg:

convert -size 3x3 xc:white -bordercolor Black -border 3 -unsharp 0x2 -auto-level -format "MIN=%[fx:minima]\nMAX=%[fx:maxima]" info:

MIN=0
MAX=1

In this case, '-auto-level' pushes all values inwards, so they span just 0.0 to 1.0. Aesthetically, this is often a bad idea. Imagine a photo with all pixels in gamut except for one pixel that is at 200%. '-auto-level' will halve the brightness of all pixels.
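The gain-and-bias arithmetic, and the problem with the single 200% pixel, fit in a few lines. This is a sketch of what '-auto-level' does in spirit, not its exact implementation:

```python
def auto_level(vals):
    # Gain and bias: map the smallest value to 0.0 and the largest to 1.0.
    lo, hi = min(vals), max(vals)
    if hi == lo:
        return [0.0] * len(vals)
    return [(v - lo) / (hi - lo) for v in vals]

# The unsharp-mask example earlier spanned -0.197727 .. 1.75652;
# rescaling brings every value back into [0, 1] without discarding data.
print(auto_level([-0.197727, 0.0, 1.0, 1.75652]))

# The aesthetic cost: one pixel at 200% drags everything else down.
# Here an in-gamut white (1.0) lands at 0.5, half its brightness.
print(auto_level([0.0, 0.25, 0.5, 1.0, 2.0]))
```

Nothing is clipped away, but every in-gamut pixel pays for the outliers, which is exactly the trade-off described above.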

Yes, I think we need better OOG tools. And my point in these posts is to highlight that OOG occurs frequently, but we just donât notice because software silently clips the issue away.

(Glenn Butcher) #36

rawproc might be the basis for a case study in what tools do. Without considering the things weâve discussed here, I simply let all my tools work the data, and what will be, will be. Clipping doesnât occur except for 1) display, and 2) output to one of the image formats, currently JPEG and integer TIFF, soon to add PNG and unbounded floating point TIFF.

I have a reference image I use for testing, slightly underexposed (in both ETTR and JPEG terms) except for a glaring locomotive headlight, which clips even in the original linear capture. I have an image stats dialog box which I can call for any tool in the chain; it reports, among other things, min/max RGB values. I was somewhat surprised to see maxes around 2.0 in some channels on this image after applying a series of tools, but it then occurred to me I was just pushing more of the headlight into oblivion.

Currently, all looks well in display because the image pulled for display is clipped to the display range, but I'm considering a selectable 'blinkies' mode (don't worry, I in no way am going to make pixels blink, probably some cyan/magenta coloration) to show out-of-bounds regions of pixels.

Even then, I'll not consider automatically clipping or recovering OOB values in the tools, save maybe to implement a separate 'recover' tool. I'm probably not smart enough to do that, however, so I look forward to the opportunity to save to unbounded floating point TIFF, then open that image intact in GIMP to do recovery magic...

#37

The white paper this thread most reminds me of is about audio technology: the dbx Type IV A/D conversion system, where input is limited logarithmically after a linear segment in order to extend headroom before reaching digital clipping. It is a form of dynamic range compression, and introduces aurally pleasing artifacts (third harmonic distortion) instead of a hard clip.

ftp://ftp.dbxpro.com/pub/pdfs/WhitePapers/Type%20IV.pdf
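The shape of that limiter (a linear segment followed by a gentle taper toward a ceiling, rather than a hard clip) carries over directly to pixel values. Here is a sketch using an exponential approach to the ceiling; the dbx paper's actual law is logarithmic, so this only imitates the idea:

```python
import math

def soft_limit(v, knee=0.8, ceiling=1.0):
    # Identity below the knee; above it, approach the ceiling
    # asymptotically instead of hard-clipping at 1.0.
    if v <= knee:
        return v
    headroom = ceiling - knee
    return knee + headroom * (1.0 - math.exp(-(v - knee) / headroom))

for v in (0.5, 0.9, 1.2, 1.75652):
    print(v, "->", round(soft_limit(v), 4))
```

The curve is continuous in value and slope at the knee, monotonic, and never reaches the ceiling, so highlight ordering is preserved where a hard clip would flatten everything to 1.0.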

(Andrew) #38

I like this because you can see very clearly something is happening and deal with it. Suppose the tools in RT or Gimp had an option to do the operation either with clipping or auto-level, and you could flick between them and see the difference. Or at least have the ability to do the operation with auto-level for some tools. If you were increasing contrast, I suppose it would work as expected until pixels hit the limit, then the image would mostly go darker and less contrasty the more you increased the slider. This would indicate an OOG or highlights issue and then depending on its importance in the photo, you could decide to do the contrast adjustment with clipping, or e.g. do a local edit and reduce the saturation of the offending parts (or the whole image), etc.

#39

@RawConvert Unsure whether it is wise to expose an auto-level vs clip option, etc., in user-facing tools. That would over-complicate and clutter the GUI. I would rather have the devs make most of those decisions for us. Also, certain things only work as intended with clipping, at least visually. If you are interested in that stuff, I suggest you try using G'MIC and / or IM, if you aren't already doing so.

(Elle Stone) #40

@RawConvert is right to note that when starting from a raw file, converting the interpolated image to a smaller color space such as sRGB, using an unbounded ICC profile conversion at floating point precision, is much more likely to produce RGB values with channel values less than 0.0 and/or greater than 1.0 ('out of display range', let's say 'oodr' for short) than converting to a larger color space such as Rec.2020.

As @snibgo notes, some operations can cause 'oodr' channel values even when that's not the intention and regardless of how large the RGB color space might be, such as resizing and sharpening. For unsharp mask, personally I use a layer mask to mask out highlight and shadow areas. For resizing, maybe Normalize followed by Curves to restore the shadow/highlight tonality? Or as this only affects the brightest/darkest portions of an image, I suppose one could decrease the dynamic range, resize, and then restore the dynamic range; either way might work.

In terms of which operations cause completely squirrelly colors with 'oodr' channel values, that's easy enough to answer:

• Any operation that involves multiply or divide, except when at least one of the colors is gray, white, or black (there's an exception that's too complicated to explain here, but see Section D of this article, in which 'multiply' essentially acts like addition: https://ninedegreesbelow.com/photography/combining-painting-and-photography.html)

• Many blend modes such as Soft Light, Hard Light, etc that use a midpoint to determine whether a color gets darker or lighter - multiply/divide are used here.

• 'Gamma' operations, which also ultimately involve multiply/divide

• Etc.

Operations that don't produce squirrelly colors include:

• operations that are elaborations on addition/subtract: gaussian blur, most or all Transform operations such as rotate, scale, etc

• operations that convert from RGB to XYZ (and maybe then to other color spaces such as LAB/LCH/xyY), such as Luminance conversions to black and white and the various LAB/LCH operations.

• Levels, Curves, Brightness, Exposure, etc, when all three RGB channels are changed by the same amount. On the other hand, changing channels individually is like multiplying/dividing by a color, so if the operation causes colors to go 'oodr', squirrelly colors can result.
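The multiply case, including the two-negatives question raised earlier in the thread, is easy to see with a bare per-channel multiply blend (a sketch, not any particular program's blend code):

```python
def multiply_blend(a, b):
    # Per-channel multiply blend of two RGB pixels.
    return [x * y for x, y in zip(a, b)]

# In range, multiply darkens and stays in range:
print(multiply_blend([0.5, 0.5, 0.5], [0.5, 0.2, 0.8]))

# With an 'oodr' channel the result misbehaves: a negative times a
# positive stays negative (and is rescaled)...
print(multiply_blend([-0.1, 1.2, 0.5], [0.5, 0.5, 0.5]))

# ...and a negative times a negative flips sign entirely, so two
# darker-than-black channels "multiply" to a positive value.
print(multiply_blend([-0.5, 0.0, 0.0], [-0.5, 1.0, 1.0]))
```

For in-range operands the result is always in range, which is why multiply is safe on display-range data; once a channel goes negative, the operation no longer has any obvious physical meaning.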

GEGL and GIMP code has already made a lot of progress dealing with 'when is it better to clip' and 'when should the operations be kept unclipped', as searching through open and closed bug reports will show. Plus there is a 'clip' operation the user can invoke that allows setting shadow and highlight clipping separately.

There is no 'one size fits all' solution. It depends on where the values came from, whether the user made such values deliberately and plans subsequent edits to deal with them, and what the ultimate editing goals are. For example, Normalize (Auto-stretch contrast in GIMP) is often a good step when the goal is to bring all colors to within display range. But a sprinkling of really, really 'oodr' values can completely defeat getting good results out of this operation, as I found when I filed a bug report about a layer mask turning to solid gray: https://bugzilla.gnome.org/show_bug.cgi?id=777836

Before even beginning to talk about these issues, it's important to understand the difference between 'oodr' channel values, which include any channel value that's greater than 1.0 or less than 0.0, and channel values that are only greater than 1.0, which of course are required for HDR images (my 'Models for image editing: Display-referred and scene-referred' referenced above goes through the difference).

I wrote a GIMP tutorial that works through the process of generating and dealing with 'oodr' channel values while editing separately for tonality and color (which for some people is a sort of 'holy grail' of editing). As with most of my articles about GIMP, the tutorial is somewhat out of date. But the tutorial (which includes a file to download and process, to follow along with the steps) does go through some issues with, and benefits of, keeping 'oodr' channel values even when the ultimate goal is to output a display-range image:

Autumn colors: An Introduction to High Bit Depth GIMP's New Editing Capabilities
https://ninedegreesbelow.com/photography/high-bit-depth-gimp-tutorial-edit-tonality-color-separately.html