Exposure Fusion and Intel Neo drivers

So, let’s take a pixel from the red chair in an image posted by @ggbutcher in another thread.

Here’s a comparison of a single pixel: a simple +3 EV push in darktable vs. a tone equalizer attempt to raise the shadows without blowing the highlights.

In [15]: 85.8/47.1
Out[15]: 1.8216560509554138

In [16]: 72.8/23.5
Out[16]: 3.097872340425532

In [17]: 73.6/24.3
Out[17]: 3.0288065843621395

Pretty consistent with the red oversaturation I see visually.

As for your provided example: I haven’t posted my patch yet (it’s become clear to me that everything currently hardcoded in darktable needs to be exposed in the UI, and I admit I’m not great at UI/UX work), but your example actually proves that an exposure fusion workflow in linear space gives poor results, since that’s exactly what darktable is currently doing.

Except it isn’t a generic transfer function.

Since we’re blending images that are generated from the same image by an exposure shift, we get (for two exposures; it obviously gets a bit harder to show the math for three):

(1 - w) * x^a + w * c * x^a

Where c is a constant derived from the exposure shift and w is our weight. I’ll still need to go through the full derivation, but factoring out x^a it boils down to a single multiplier:

(1 - w + w*c) * x^a

The same multiplier applies for each of x = r, x = g, x = b, so the relative relationship between r, g, and b is preserved - see the gradient I posted as an example.
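A quick numeric sketch of that claim (my own numbers, not from the thread): blend two gamma-encoded copies of a pixel, where the second copy is an exposure-shifted version. In gamma space, (m*x)^a = m^a * x^a, so the shift becomes a constant multiplier c = m^a.

```python
a = 1.0 / 2.2        # assumed encoding gamma exponent
m = 2.0 ** 3         # assumed +3 EV exposure shift (linear multiplier)
c = m ** a           # the constant c from the post
w = 0.5              # blend weight

rgb = (0.40, 0.12, 0.10)  # made-up linear r, g, b values for one pixel

# (1 - w) * x^a + w * c * x^a  ==  (1 - w + w*c) * x^a
blended = [(1 - w) * x ** a + w * c * x ** a for x in rgb]
factored = [(1 - w + w * c) * x ** a for x in rgb]

# Since the blend is one common multiplier times x^a, the r:g:b ratios
# of the gamma-encoded values are untouched by the blend:
print([b / blended[0] for b in blended])
print([x ** a / rgb[0] ** a for x in rgb])  # same ratios
```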

The one thing that is significantly different between doing this in linear space and in gamma space is the perceptual result of a 0.5 weight, as seen at the midpoint of the gradient I posted. This perceptual difference is likely why performing the blending in linear space (current darktable master) gives such poor results.
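To illustrate that midpoint difference with made-up numbers (a sketch, not darktable’s actual pipeline): take a dark and a bright exposure of the same tone and blend them 50/50, once on the linear values and once on the gamma-encoded values.

```python
a = 1.0 / 2.2                  # assumed encoding gamma exponent
x_dark, x_bright = 0.05, 0.8   # assumed linear values of the two exposures

# Blend in linear space, then gamma-encode (what current master does):
linear_result = (0.5 * x_dark + 0.5 * x_bright) ** a

# Gamma-encode first, then blend (the gamma-space workflow):
gamma_result = 0.5 * x_dark ** a + 0.5 * x_bright ** a

print(linear_result, gamma_result)
```

The linear-space blend lands noticeably brighter than the gamma-space one: it is pulled toward the bright exposure instead of sitting at the perceptual midpoint of the two tones, which matches the difference visible at the middle of the gradient.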