Out-of-gamut again: what to do with those negative baddies

This topic weaves in and out of those lengthy and spicy threads from time to time, yet I don’t think I have ever created a topic on it. In my PlayRaw entries, whenever you see the words interpolate, fix or filter (bad or unwanted) pixels, you know something is up. In that step, I attempt to deal with the baddies commonly known as out-of-gamut colours, which manifest as negative numbers in an RGB pixel. They can affect one channel, two or all three.

The following two posts reinvigorated my interest in continuing this discussion. (Obviously, an echo of previous posts we have written.)

The issue has to do with negative values after demosaicing and ICC conversion by the raw processor. The common way of dealing with them is to clip, except that clipping introduces colour shifts, or simply black if all three channels are negative. “Modulation”, as proposed by @snibgo, is out of the question in this case because I don’t think these negative values were simply values near the boundary being pushed out.
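To make the colour-shift point concrete, here is a tiny sketch (my own illustration, not from any particular app) showing that clipping just the negative channel changes both the pixel's luminance and its channel balance:

```python
# Clipping only the negative channel of a linear RGB pixel changes its
# luminance (and its R:G:B balance), i.e. it introduces a colour shift.

def luma_709(rgb):
    """Rec.709 luminance of a linear RGB triple (example weights)."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

pixel = (-0.05, 0.20, 0.30)                 # one out-of-gamut channel
clipped = tuple(max(0.0, c) for c in pixel)

print(luma_709(pixel))    # ~0.1541
print(luma_709(clipped))  # ~0.1647: brighter, and with different channel ratios
```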

What I am doing currently is inpainting them, at the risk of introducing more artifacts. In this thread, I would like to compile a list of the techniques and options available in apps so far, and also establish where the state of the art is. Perhaps I can find a better way than my current strategy. Naturally, I prefer simplicity because I am not a math, programming or colour genius by any stretch of the imagination. I just like to learn and try new things.

I’ll stick my neck out, someone must have thought of this…
Would it be possible to make some hopefully well-contained s/ware changes to help with negatives, along these lines:
Before demosaicing, take the raw values with the black point subtracted, and let’s say they are in the range 0–14000. Now scale them linearly to the range 500–14000, where 500 is a guess right now. Then go into the unchanged existing processing to demosaic and apply the camera input profile. Hopefully, if 500 was a good choice, there will be no negative values for any pixel. Now leave it to the user to make the shadows darker if they need to, and/or adjust to taste. :face_with_hand_over_mouth:

How do you scale from [0;14000] to [500;14000] linearly?

Just to clarify: In my opinion the raw processing is completed after demosaicing. Further steps are not raw processing anymore as they work on demosaiced data in whatever color space…

I know. There are two aspects to the problem:
1. Strategies to preserve as much information as possible for as long as possible, and their pros and cons. Also, how far is too far?
2. At some point, a decision has to be made on what the boundaries are and what to do with the resultant out-of-bounds values.

For 1, I tweak things in the raw and ICC profile parts of the pipeline to minimize negative values, but it is haphazard and uninformed at best. For 2, as mentioned above, I inpaint.


PS Negative values are okay as long as they have context and can be used meaningfully if the user so chooses. At the moment, at least in my workflow, they are non-data.

Surely y = 500 + 13500x/14000 ?
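In code, that lift is a one-liner. A sketch, using the placeholder numbers from the proposal above (0, 500 and 14000 are the discussion's guesses, not values from any real camera):

```python
# Sketch of the proposed black-level lift: map raw values linearly from
# [0, 14000] to [500, 14000]. All endpoints are placeholder numbers.

LO, HI, LIFT = 0, 14000, 500

def lift(x):
    """y = 500 + 13500*x/14000, per the formula above."""
    return LIFT + (HI - LIFT) * (x - LO) / (HI - LO)

print(lift(0))      # 500.0
print(lift(14000))  # 14000.0
```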

How do you get negative values by inverting the equation?

Ingo, I don’t understand the question.

(My reasoning is that it is the darkest parts which produce negative pixel components with standard processing, so if the raw values are lifted a bit first, perhaps fewer or no negative values will occur.)

Where does that happen? Is that caused by wrong black levels or not clipping to zero after subtracting black levels?

I wasn’t thinking of those; rather, I thought negative values tended to arise from applying the camera input profile and transforming from one colour space to another. But you probably know better than me: where do you think it happens?

Got curious, so I opened a raw, assigned a camera profile, whitebalanced with the camera multipliers, demosaiced, and then did a colorspace convert to Rec2020. After that, here are the pixel stats:

channels:
rmin: -0.000740	rmax: 0.587706
gmin: -0.000597	gmax: 0.296857
bmin: -0.000544	bmax: 0.362339

That’s a Nikon D7000 NEF, with a sunlight-based Argyll-processed camera profile of a 24-patch ColorChecker, and a linear gamma Rec2020 profile.
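Small negatives like those are easy to reproduce with nothing but matrix math: any colour whose chromaticity sits outside the destination primaries' triangle comes out with at least one negative component. A sketch (my own, not the pipeline above; it uses ProPhoto's green primary as a stand-in for a saturated camera colour, and skips chromatic adaptation between the D50 and D65 whites for brevity):

```python
import numpy as np

def rgb_to_xyz_matrix(prims, white):
    """Build an RGB->XYZ matrix from xy primaries and an xy white point
    (standard construction; no chromatic adaptation, for brevity)."""
    cols = []
    for x, y in prims:
        cols.append([x / y, 1.0, (1 - x - y) / y])    # XYZ of primary at Y=1
    P = np.array(cols).T
    wx, wy = white
    W = np.array([wx / wy, 1.0, (1 - wx - wy) / wy])  # white point XYZ
    S = np.linalg.solve(P, W)                         # scale so white maps to W
    return P * S

# ProPhoto RGB (D50 white) and Rec.2020 (D65 white) xy chromaticities
prophoto = rgb_to_xyz_matrix([(0.7347, 0.2653), (0.1596, 0.8404), (0.0366, 0.0001)],
                             (0.3457, 0.3585))
rec2020  = rgb_to_xyz_matrix([(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)],
                             (0.3127, 0.3290))

# A fully saturated ProPhoto green, expressed in Rec.2020 coordinates:
rgb = np.linalg.solve(rec2020, prophoto @ np.array([0.0, 1.0, 0.0]))
print(rgb)   # at least one channel comes out negative
```

A camera input profile behaves the same way: the camera's effective "primaries" don't coincide with the output space's, so saturated colours can land outside the destination triangle.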

Curiouser and curiouser, I changed the Rec2020 profile to also apply 1.8 gamma. That resulted in:

channels:
rmin: 0.000000	rmax: 0.744408
gmin: 0.000000	gmax: 0.509446
bmin: 0.000000	bmax: 0.569076

These transforms are all relative_colorimetric, on 32-bit float data, with LittleCMS2 version 2.9.

I then did a blackwhitepoint set, and that moved data negative, but that’s because I have percentage thresholds that do precisely that.

Here’s my surmise: black is black, and the raw data from the camera won’t be any lower, because it’s supposed to represent measurements of light intensity. Negative light would open a black hole and suck the whole of what we know into it. :smile:

We make data go negative by doing our modifications of tone. Thing is, most of those modifications drive the data toward white, because we’re intrinsically interested in those effects. Recently, though, I’ve developed certain images with the intention of driving some shadows negative, because I wanted to separate them from the subject. In that case, I am satisfied with how they just desaturated to R=G=B=0 when saved to a file. In the output JPEGs sized for display, there were no discernible artifacts as the remaining shadows above zero came into play. Works for me…

Raw data from the camera can be darker than black because of read noise.


Is that in cases where the camera has a positive black point?

Yes.


I’m currently working in this area, and don’t have definite views. But it seems that:

  1. Some images have a few pixels that are slightly negative, e.g. -0.00001%. I expect this is due to imprecise arithmetic, and clamping these to zero will be fine.

  2. Sometimes the camera matrix isn’t accurate for the photograph. This results in pixels that are outside the CIE chromaticity horseshoe, and that even have negative X and Y values in XYZ space. I suspect the best cure is to use a correct camera matrix. Failing that, use an algorithm that pushes values into the horseshoe, or into the triangle of RGB primaries. I currently do this by adjusting x and y in xyY space (so lightness doesn’t change), but it might be better in Lu’v’ or some other space.

I’ve been concentrating on chromaticity. I haven’t investigated “bad” lightness. If that is (also) negative, I simply make it zero.


I should add:

  1. When editing in (linear) rgb space, operations such as resize and sharpen can create values substantially outside the range 0 to 100%. This might happen to one, two or all three channels of a pixel. Clamping those values to the range 0 to 100% probably changes chromaticity and lightness, so we can instead use an algorithm that operates on all channels while maintaining hue, at least.
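One simple hue-preserving alternative to per-channel clamping is to pull the pixel toward its own luminance, just far enough that all three channels land back in [0, 1]. A sketch of that general idea (mine, not any particular app's implementation; it assumes the pixel's luminance is itself in range):

```python
def clamp_toward_luma(rgb):
    """Scale a linear RGB pixel toward its Rec.709 luminance just enough to
    bring every channel into [0, 1]. The offsets from the achromatic axis
    keep their ratios, so hue doesn't shift as it does with per-channel
    clipping. Assumes the luminance y itself lies in [0, 1]."""
    r, g, b = rgb
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    t = 1.0
    for c in (r, g, b):
        if c < 0.0:
            t = min(t, y / (y - c))          # how far before c reaches 0
        elif c > 1.0:
            t = min(t, (1.0 - y) / (c - y))  # how far before c reaches 1
    return tuple(y + t * (c - y) for c in (r, g, b))

print(clamp_toward_luma((1.3, 0.4, -0.1)))   # all three channels now in [0, 1]
```

In-range pixels pass through unchanged (t stays 1), so the operation only touches the out-of-bounds ones.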

Please could you expand on this. What does the workflow look like, what s/ware, what tools etc.

The workflow is: dcraw debayers etc. and writes the image in XYZ space. Then I use ImageMagick, with either the module “barymap”, which can read the XYZ image, manipulate xy values and write XYZ or RGB, or with the script “sqshxyy”, which can currently only read and write xyY, so it needs some pre- and post-processing.

Barymap needs some numbers at the command line to tell it how to manipulate chromaticity. It uses one or three triangles where the corners are the primaries and white point, and it maps chromaticity values from input triangles to output triangles. The parameters to barymap are not highly intuitive, and difficult to calculate automatically.

So sqshxyy is simpler. It squishes the xy chromaticity into the CIE horseshoe, or into any triangle of primaries. (Actually, it could squish the chromaticity into any arbitrary shape.) Squishing changes chroma only, either by linear division, clamping or a filmic curve. It does this automatically, without needing a cryptic series of numbers as parameters. At every hue, it calculates the maximum permitted chroma from the shape (horseshoe, triangle, etc.). At every pixel, it calculates the current chroma, and then squishing is easy.
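A sketch of that core idea (my own illustration, not the actual sqshxyy script): cast a ray from the white point through the pixel's xy chromaticity, find where it exits the gamut triangle, treat that distance as the maximum permitted chroma at that hue, and clamp. The sRGB primaries and D65 white below are just example values.

```python
# Clamp an xy chromaticity to a triangle of primaries along the ray from the
# white point, i.e. chroma changes but "hue" (the ray direction) does not.
# A linear division or filmic curve could replace the hard clamp at the end.
import math

SRGB = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]   # example primaries
D65 = (0.3127, 0.3290)                              # example white point

def squish_xy(xy, tri=SRGB, white=D65):
    dx, dy = xy[0] - white[0], xy[1] - white[1]
    if dx == 0.0 and dy == 0.0:
        return xy                           # achromatic: nothing to do
    t_max = math.inf
    for i in range(3):                      # intersect the ray with each edge
        (ax, ay), (bx, by) = tri[i], tri[(i + 1) % 3]
        ex, ey = bx - ax, by - ay
        det = dx * ey - dy * ex             # solve white + t*(dx,dy) = a + u*(ex,ey)
        if abs(det) < 1e-12:
            continue                        # ray parallel to this edge
        t = ((ax - white[0]) * ey - (ay - white[1]) * ex) / det
        u = ((ax - white[0]) * dy - (ay - white[1]) * dx) / det
        if t > 0.0 and -1e-9 <= u <= 1.0 + 1e-9:
            t_max = min(t_max, t)
    if t_max >= 1.0:
        return xy                           # already inside the triangle
    return (white[0] + t_max * dx, white[1] + t_max * dy)

print(squish_xy((0.35, 0.35)))   # interior point: unchanged
print(squish_xy((0.80, 0.20)))   # exterior point: pulled back to the boundary
```

Swapping the triangle for a polygonal approximation of the spectral locus would give the horseshoe case.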

Barymap is C code, and needs ImageMagick to be re-built. Sqshxyy is a Windows BAT script that uses barymap and other ImageMagick modules.

I haven’t finished writing/testing/documenting either process yet. When I do, I’ll publish them on http://im.snibgo.com .