Exposure Fusion and Intel Neo drivers

I’m certain they won’t break compatibility with old edits; that just isn’t something that can reasonably happen.


x^a*y^a = (xy)^a

Now, for fixed y, and x1 = r, x2 = g, x3 = b
r^a * y^a = (ry)^a
g^a * y^a = (gy)^a
b^a * y^a = (by)^a

hey look, no change in the ratios of r/g/b whether you multiply by y inside the gamma relationship or by y^a outside

your mind = blown

You keep on ranting about chrominance shifts… Chrominance shifts that I’ve only seen when playing with your tone equalizer.

Undesired luminance changes? Maybe. But, perceptually (as opposed to mathematically), averaging two values in gamma space gives a result closer to the perceived midpoint between those two values than averaging in linear space prior to converting to gamma. I need to do more digging, but it probably explains why (despite your insistence that blending in nonlinear space will cause haloing) blending in linear space causes haloing not seen when blending/exposure fusing in gamma-2.4 space.

Upper is interpolation in linear space. Lower is interpolation in gamma-space (specifically, gamma=2.4) - and yes, I readily admit I’m currently not dealing with the linear transfer at very low values seen in sRGB (where the gamma outside of said linear region is 2.4 but the average is 2.2-2.3)
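
For anyone who wants to check the gradient numerically, here’s a quick numpy sketch - the orange pixel is made up and it assumes a pure 2.4 power with no sRGB toe, exactly as admitted above:

import numpy as np

gamma = 2.4
bright = np.array([1.0, 0.5, 0.1])   # linear RGB of the bright end (made-up orange-ish pixel)
dark = bright / 8.0                  # same chromaticity, 3 EV darker

mid_linear = 0.5 * bright + 0.5 * dark                                  # blend in linear space
mid_gamma = (0.5 * bright**(1/gamma) + 0.5 * dark**(1/gamma))**gamma    # blend in gamma space, decode

print(mid_linear)   # ~[0.563, 0.281, 0.056]; red channel encodes to ~0.79, well above the encoded midpoint
print(mid_gamma)    # ~[0.440, 0.220, 0.044]; red channel encodes to ~0.71, exactly halfway between 1.0 and ~0.42
print(mid_linear / bright)   # constant across channels -> no change in r:g:b ratios
print(mid_gamma / bright)    # also constant across channels -> no chroma shift either way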

BTW, where’s the chrominance shift you insist MUST be occurring in such an operation as shown in this image?

Yeah. Now try f(x)^a * f(y)^a where f is a general transfer function defined implicitly by interpolation of nodes. What is it equal to?

Or maybe I have more experience than you do on these matters and I know how to make these things break. You are one year too late for this conversation. Chrominance shifts can’t happen in my code, because it’s a simple exposure compensation, so I wonder what you have seen.

You are annoying.

is called before

where the Gaussian pyramid blending and exposure blending happen. So the blending is not done on pure log, pure gamma, or linearly encoded data, but on whatever comes out of the tone curve, which is usually an S-shaped curve with raised midtones.

About these chroma shifts, I’m not inventing them.

Original:

Base curve with exposure blending:

Filmic + tone equalizer:

(settings obviously cranked up for illustrative purposes). Blue becomes purple. Not good. Why? Theory violated. Period.

So, let’s take a pixel from the red chair in an image posted by @ggbutcher in another thread

Comparison of a single pixel between a simple +3EV push in darktable and a tone equalizer attempt to raise shadows without blowing highlights.

In [15]: 85.8/47.1
Out[15]: 1.8216560509554138

In [16]: 72.8/23.5
Out[16]: 3.097872340425532

In [17]: 73.6/24.3
Out[17]: 3.0288065843621395

Pretty consistent with the red oversaturation I see visually.

As to your provided example: since I haven’t posted my patch yet (it’s become clear to me that all of the stuff currently hardcoded in DT needs to be exposed in the UI, and I admit I suck at UI/UX stuff), you just proved with your example that an exposure fusion workflow in linear space provides poor results, since that’s what darktable is currently doing.

Except it isn’t a generic transfer function.

Since we’re blending images that are generated from the same image by exposure shift, we get (for two exposures; it obviously gets a bit more tedious to write out for three):
(1-w)*x^a + w*(c*x)^a

where c is the constant exposure-shift factor between the two images and w is our weight. Since (c*x)^a = c^a * x^a, this boils down to:
((1-w) + w*c^a) * x^a

For each of x=r, x=g, x=b - the relative relationship between r, g, and b will be preserved - see the gradient I posted as an example.
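
A quick numeric check of that claim, with made-up values (pure 2.4 power again, c = one stop of exposure shift, arbitrary weight):

import numpy as np

a = 1 / 2.4                        # encoding exponent
rgb = np.array([0.6, 0.3, 0.1])    # some linear pixel (arbitrary values)
c = 2.0                            # linear exposure factor between the two fused copies
w = 0.3                            # arbitrary blend weight

blended = (1 - w) * rgb**a + w * (c * rgb)**a   # blend the two gamma-encoded exposures
scale = (1 - w) + w * c**a                      # the predicted per-pixel factor

print(blended / rgb**a)   # -> [scale, scale, scale]: every channel scales by the same amount,
print(scale)              #    so the r:g:b ratios survive decoding back to linear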

The one thing that is significantly different between doing this in linear space and in gamma space is the perceptual result of a 0.5 weight - as seen by the midpoint of the gradient posted. This perceptual difference is likely why performing blending in linear space (current darktable master) gives such poor results.

Because this exposure fusion doesn’t happen in linear space at all, but after the base curve is applied, which is not linear and not even pure-power, and I tried to show you the code to justify my claim, but you still don’t understand. So it fails precisely because it does not happen in linear space. And you got your assumptions wrong because, like most open-source hackers, you treat RGB code values as random numbers and don’t look at the whole pipeline that goes from scene-linear light ratios to whatever broken transfer function you are trying to hack.

I mean, if you want to invalidate Parseval’s theorem, do it. You might even get a Fields medal from it. Until you do, I’m right, you are wrong. You don’t blend, blur, convolve, feather, or merge anything outside of a linear space, where RGB codevalues are proportional to light energy. Full stop.

There is absolutely no relationship preserved between R, G and B at the output of the base curve, because it’s not linear, and it happens before the exposure fusion. Once again.


In the current code, it does, and it looks like garbage. Which invalidates your claim that linear operation is the One True Way. We’ll get to why that is in a bit.

Sure it is.
(screenshot: linear_setting_fusion)

A bunch of code not really relevant to a discussion of exposure fusion and how to make it operate better.

So, since you’re going to be highly insulting: you’re arrogant, overconfident, and repeatedly make claims that don’t stand up to reality. Over and over you say that operations in gamma space will cause chromaticity shifts and destroy colors, even when provided visual evidence that contradicts your claims. You are also apparently incapable of discussing item B (exposure fusion) if it has ever been, in any way, associated with item A (basecurve) without bringing in your prejudices against A. (I’m not the one who shoved exposure fusion into the basecurve module. If you have issues with them being in the same module, take that up with whoever did it, not with me.) You also seem to be ignoring the fact that Edgardo has been working on fixing the issues you’ve pointed out with the basecurve module (apparently you’re so prejudiced against it you’ve ignored the RGB norms work? And yes, I know that the fusion pipeline currently isn’t using his fixes; that has become first on my TODO list before any further work, even though I’ve been eliminating that issue as illustrated above.)

Again:

No chromaticity shifts when interpolating between the two brightnesses, despite your repeated claims it is guaranteed to happen.

As to the luminance relationship at the “halfway” mark, you keep forgetting that human vision is NOT linear.

You can rant about theory all you want. Then you need to remember that while it’s science-heavy, photography is partially an art form. And thus human perception is important.

So let’s try perception. All three images here were generated using the EXACT same settings (shown above). Tell me which one was weighted in linear and blended in linear, which one was weighted in gamma-2.4 and blended in linear, and which one was weighted and blended in gamma-2.4:

There is absolutely no relationship preserved between R, G and B at the output of the base curve, because it’s not linear, and it happens before the exposure fusion. Once again.
Once again:
(screenshot: linear_setting_fusion)

Exposure fusion and basecurve are only linked by the fact that someone jammed the functionality of one into the other. It’s well proven that the two can be decoupled, as shown in the image above. But you’re apparently incapable of preventing your prejudices against the prior state of basecurve (ignoring the recent work that’s gone into it) from causing you to froth at the mouth, see red, and repeatedly state BASECURVE BAD BASECURVE BAD BASECURVE BAD BASECURVE AND ANYTHING EVEN REMOTELY ASSOCIATED WITH IT BAD, even when provided direct evidence that your claims don’t hold up when you get down to the most important thing: visual results to human eyes.

If you’ve got a recommended easy method for MEASURING the chromaticity shift you swear must be happening (i.e. proving that it’s actually happening), I’m all ears. (HSV decomposition in GIMP sort of works, but the 8-bit intermediate causes quantization errors and it’s just tedious… I will say that using that method, I’m not seeing fusion with blending and weighting in gamma space cause chromaticity shifts, except in regions where the output luminance is near enough to black that quantization renders the result untrustworthy. But I’m amenable to a different measurement method.)
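
If it helps, this is roughly how I’d measure it myself - a sketch assuming float TIFF exports already decoded to linear RGB with sRGB/Rec.709 primaries; “before.tif”/“after.tif” are placeholders, and the matrix needs swapping if the exports use another working space:

import numpy as np
import imageio.v3 as iio

# linear sRGB / Rec.709 -> XYZ (D65)
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def xy_chromaticity(rgb_linear):
    XYZ = rgb_linear @ M.T
    s = np.maximum(XYZ.sum(axis=-1, keepdims=True), 1e-9)
    return XYZ[..., :2] / s          # (x, y): luminance-independent, so it isolates chroma shifts

before = iio.imread("before.tif").astype(np.float64)
after = iio.imread("after.tif").astype(np.float64)

shift = np.abs(xy_chromaticity(after) - xy_chromaticity(before))
print(shift.max(), shift.mean())     # near-zero everywhere = no chromaticity shift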

I’ve (tried) to split this into its own topic, since it was so far off the topic of the new-ish tone equalizer module that @anon41087856 is working on. I hope that thread can get back on track, since the module looks quite promising.

Now, about other things:

To be honest, you both have been fairly condescending towards each other through most of the posts. It’d be really awesome if you’d both just stop, and then, perhaps, you can get the information you’re after.


This certainly doesn’t mean anything, but I took the middle one (the brightest one) and just applied a tone curve preset gamma 2.0 and got this:

BUT BUT BUT… GAMMA TRANSFORMS ARE ALWAYS BAD!!!

side note - performing the gamma transform after linear blending as you have done may cause a chromaticity shift:
(Edit: Image replaced since last night, I had made changes to the wrong curve. However chroma shift is still there):

Maybe I’ve not been entirely clear, but when I say “blend and weight in gamma space” - I take the input and convert it to gamma space prior to compute_features, and then undo the gamma transform after blending is fully complete. “weight in gamma and blend in linear” undoes the gamma transform immediately after calculating weights and returns to linear. linear/linear is current DT master.
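
In rough Python terms (not the actual darktable C code - a simplified sketch with a plain weighted average standing in for the Laplacian-pyramid blend, and a pure 2.4 power standing in for the real transfer):

import numpy as np

GAMMA = 2.4
encode = lambda x: np.power(x, 1.0 / GAMMA)
decode = lambda x: np.power(x, GAMMA)

def exposure_weight(img, target=0.5, sigma=0.2):
    # Mertens-style "well-exposedness" weight on a simple luma estimate
    return np.exp(-((img.mean(axis=-1) - target) ** 2) / (2 * sigma ** 2))

def fuse(exposures, weight_space="gamma", blend_space="gamma"):
    # exposures: list of linear-RGB float arrays, exposure-shifted copies of the same image
    w_src = [encode(e) if weight_space == "gamma" else e for e in exposures]
    weights = [exposure_weight(img) for img in w_src]
    norm = np.sum(weights, axis=0) + 1e-12

    b_src = [encode(e) if blend_space == "gamma" else e for e in exposures]
    out = np.sum([w[..., None] * b for w, b in zip(weights, b_src)], axis=0) / norm[..., None]
    return decode(out) if blend_space == "gamma" else out

# fuse(exps, "linear", "linear")  -> current master
# fuse(exps, "gamma", "linear")   -> weight in gamma, blend in linear
# fuse(exps, "gamma", "gamma")    -> what I'm describing above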

Also thanks for causing me to give Pierre the answer to at least one image. :frowning:

Weighting in linear space can be shown to be highly suboptimal, at least for the weighting function given by Mertens et al. and used by darktable. (The X axis is EV shift from the assumed clip point of 1.0f; I forgot to add axis labels.)

Note the pretty much constant weighting in the lower EV values. Also note that trying merely to shift the optimal exposure target down to 0.18 (middle grey in linear space) has an even worse “constant weight in shadows” problem. Part of my work has been to expose the optimal exposure and variance/width as sliders, they’re hardcoded right now. :frowning:
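
Roughly what I plotted, in case anyone wants to reproduce it - treat the 0.5 target and 0.2 sigma as illustrative stand-ins for the hardcoded constants:

import numpy as np
import matplotlib.pyplot as plt

ev = np.linspace(-10, 0, 500)    # EV relative to the assumed clip point at 1.0
lin = 2.0 ** ev

def weight(x, target=0.5, sigma=0.2):
    return np.exp(-((x - target) ** 2) / (2 * sigma ** 2))

plt.plot(ev, weight(lin), label="weighting on linear values")
plt.plot(ev, weight(lin ** (1 / 2.4)), label="weighting on gamma-2.4 values")
plt.xlabel("EV below clip point")
plt.ylabel("well-exposedness weight")
plt.legend()
plt.show()
# the linear curve is nearly flat below roughly -4 EV: the "constant weight in shadows" problem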

What kicked this off was that back in October, I was never under any circumstances able to get visually acceptable results with any settings for fusion in darktable, as compared to exporting three JPEGs with varying exposure compensation and feeding them to enfuse (inefficient, pain in the ***, etc.). Back then, it caused my family to yell at me simultaneously for having my laptop screen on after they wanted to sleep and also for my camera having what seemed to be write-only-memory. It took a while before I had a chance/motivation to revisit it and see if I could actually fix it. I’ve come close, although I still have some corner case handling that I need to dig into and, as I’ve mentioned before - some code copypasta followed by code divergence in the basecurve computing flow that really needs to be resynchronized. I was planning to publish the current state of my patch tonight, but it’s painfully clear that I cannot proceed with further work on it without fixing the remaining underlying issues with basecurve (or more appropriately, ensuring Edgardo’s fixes are available under all conditions and use cases), even if those issues can be worked around for now by setting a linear “curve”.

Perhaps you failed to read this, but you should read it:


The paper/algorithm was very clearly tested with sRGB JPEGs. They seem to imply it won’t depend on a particular camera OETF or gamma, but that doesn’t make sense with a “well exposed” value of 0.5 like you say (and they don’t test it). Anyway, it seems it should be a final rendering at the end of the pipe with sRGB inputs - however problematic that may be.

Yeah. Working in gamma space has the benefit that the Gaussian weighting function is more perceptually symmetric around the target. Working in pure log2 space would possibly be even better, but then you have to deal with that pesky log2(0) corner case. (Although technically, while many say human perception is logarithmic, it’s closer to a power function - see Stevens’s power law - and gamma happens to be a nice, convenient match to our perception here.)

As to why blend in gamma space, it’s best illustrated in one of the earlier gradients I posted to prove that chromaticity was not shifting. Note that at the midway point between the two ends, the gradient that is being combined in linear space is MUCH brighter than the one being combined in gamma space. Perceptually, blending in gamma space results in the midpoint being perceived as much closer to a brightness halfway between the ends, while blending in linear space results in the midpoint being perceived as pretty close to the brighter end of things. Which explains why the image is perceptually washed out/has the histogram pulled up into the highlights.

Right now I’m using a self-written function call to convert into and out of gamma space, but I have no issue with using existing darktable color management code to go in and out of the sRGB transfer curve instead… Just haven’t gotten around to that yet.
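
For reference, the two transfers in question - the pure power I’m using now versus the real sRGB curve with its linear toe - differ only slightly, mostly in the deep shadows (a quick check, nothing darktable-specific):

import numpy as np

def srgb_encode(x):
    x = np.asarray(x, dtype=np.float64)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1 / 2.4) - 0.055)

def srgb_decode(y):
    y = np.asarray(y, dtype=np.float64)
    return np.where(y <= 0.04045, y / 12.92, np.power((y + 0.055) / 1.055, 2.4))

def power_encode(x, gamma=2.4):
    return np.power(x, 1 / gamma)

x = np.linspace(0.0, 1.0, 1001)
print(np.max(np.abs(srgb_encode(x) - power_encode(x))))   # ~0.05, largest just above the linear toe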

As to doing it at the very end of the pipeline - I’m not opposed to splitting this out into a separate module at some point (it would let us fix things without having to handle old XMPs designed around the old system - although I question how many of those exist where someone was actually happy with the result… Looking at the commit history, it seems like there were a lot of attempts to tweak the relative weights in various ways to compensate for working in linear space, and none of THOSE parameters were exposed in the UI). On the flip side, moving to a separate module without breaking compatibility in the old module results in duplicated functionality without any clear indication that one of the implementations exists solely for backwards compatibility. It might be better to have a “preserve legacy compatibility” checkbox that re-activates the old weighting approach. (The relative weighting approach makes it nearly impossible to allow both old and new approaches without exploding our slider count. Especially since operating without clipping means we have few if any use cases where saturation or contrast weighting are beneficial. Contrast weighting might be relevant once Edgardo’s fixes are in the fusion code path too, but it seems like the contrast weighting and the basecurve-with-norms would just wind up fighting each other? Might also lead to haloing. TBD - I’m not going to mess with that until I can get Edgardo’s fixes into the fusion pipeline.)

Also, based on some results from @afre, I strongly suspect that when presented with linear data, enfuse is internally converting to gamma-encoded sRGB for its work. Unfortunately enfuse isn’t exactly the easiest code to read, due to very heavy use of C++ templates and the vigra toolbox. I’m going to try to run some tests on that this weekend.

It is hard to reply across threads! And with the awkward blocks of text and back and forth, I don’t feel like I have the stamina to follow closely. Please don’t ping me anymore until things have settled in my personal life. Some final thoughts:

Are you familiar with the enfuse manual? It contains most of the info you would need to know. I suspect you already have read it, but some of the statements seem to indicate otherwise. From a skim, it does look like the default colour space is sRGB, but there are several conditionals regarding colour transformation - details I will leave you to read. One of the flaws in my testing is that I forgot to re-specify the colour space after I did my processing in G’MIC. G’MIC ignores colour space completely, so the output files are profile-less.


I’m out of here. Keep fixing tools that work in broken models as much as you want. When you have to pull a Firefox out of your Netscape, you will understand why it was a bad idea in the first place.

Fix models, not tools.


It’s been a long time since I read it; I had forgotten that they have a fairly long treatise on colorspace management in there.

Of interest: it appears they handle FP images very differently from others (see the log transform), but it isn’t clear exactly what they do beyond that. It may explain your unexpected results. As far as gamma encoding goes, they only briefly mention it once. However, replacing equation 6.1 with 6.3 is by far the easiest way to describe what my changes to the fusion pipeline do.

As to the comments at the bottom of page 70 (their page numbering, not PDF page numbering), an analysis of the real-world impact of that is a large portion of the goal I had when generating the blending gradients I’ve used as examples. Note that at the beginning of section 6, they state: “Here, we collectively speak of blending and do not distinguish fusing, for the basic operations are the same.” - while the basic operations may be the same, in terms of the behavior of input data, they are not. Blending of different images (such as panoramic stitching) is a more “general” set of use cases where the two input pixels might have different color. In the case of exposure fusion where all exposures are generated from the same input, we’ll always be blending the “same” color. (again, assuming that the user doesn’t break other parts of the basecurve module and/or someone, likely myself, gets Edgardo’s fixes working inside the fusion pipeline)

@aurelienpierre Partly why I don’t use enfuse much anymore. However, when I do, I know what it is and what I want to do with it.

That’s how you show you understand nothing about what you are doing. There are no colours at this place in the pipe. Colours exist only in the human brain. Mapping the RGB tristimulus (which encodes light emissions, aka a spectrum reduced to a flat vector) as recorded by the sensor to human-related colours (XYZ space) is the job of the input profile (which still does it awfully badly, but that’s how it was done 10 years ago…), and it happens in the pipe after the base curve.

I have put resources in darktable’s wiki for developers who code faster than light: Developer's guide · darktable-org/darktable Wiki · GitHub

This one especially, https://www.visualeffectssociety.com/sites/default/files/files/cinematic_color_ves.pdf, will explain to you why your careless use of the word “gamma” (and of gamma encoding…) is damaging.

Also: https://medium.com/the-hitchhikers-guide-to-digital-colour/the-hitchhikers-guide-to-digital-colour-question-1-what-the-f-ck-happens-when-we-change-an-rgb-b47e70582e8b

TL;DR: gamma is the electro-optical transfer function of CRT screens. The ICC clowns found it clever to reuse the same word to describe an integer encoding scheme meant to prevent quantization artifacts (aka banding, aka posterization) in 8- and 16-bit integer files, even though it has nothing to do with the original gamma; it only helped put a mess into everyone’s head. But “gamma encoding” is at least clear on the fact that it’s an exchange format that needs to be decoded before actually working on the pixels. Not all the IEEE geeks in universities seem to be aware of that, and obviously none of them has pushed pixels seriously in their lives. And now “gamma” refers loosely to any kind of exponent transfer function intended to raise mid-tones while not affecting black and white (that’s called a lightness adjustment), or, worse, to any attempt to encode values perceptually (ever heard of EVs? That’s log2, not some arbitrary exponent, and - yes - log(0) is a problem, because black does not exist outside of black holes, so zero energy can’t be found on Earth and it’s not a real problem in practice).

Because, again, RGB encodes light emissions, so everything you do in a pipeline should use light-transport models, aka numerical simulations of what would have physically happened if you had played with actual light on the real scene - thus scene-linear encoding, proportional to light energy. As it happens, pushing pixels in perceptually encoded spaces fails miserably, for reasons that are well understood (by most, at least). That is common knowledge among VFX and colour studios, but for some reason hobbyists find it “intuitive” to push pixels in perceptual spaces (because that’s how we see, right? Well, photons don’t care). Theory is the only small thing we can hang on to; violate it and, while you may not notice it immediately, it will break in your face sooner rather than later.

Not that I hope it will change your mind, but I have spent the last year trying to re-educate people here about best practices in colour handling and reasonable workflows, and you are spreading fake knowledge and pushing us 10 years back, to when gamma issues disappeared into the 8 EV dynamic range of the average DSLR, so it was kind of OK.

Final note: of all the gamma-encoded RGB spaces to work with, the dumbest is probably sRGB, because its gamma is actually a linear function under a threshold, so doing blending in that non-uniformly spaced thing is the silliest thing I’ve heard this week.


@anon41087856 @Entropy512
If you allow me, I would like to contribute my 2 cents to this discussion, even though I am not at all a color scientist…

Let me begin by speculating on this statement:

RGB values are one possible representation of color, even when they directly represent the light emission values recorded by the camera sensor.

RGB values represent some color. Which color exactly is undefined, unless you associate with them a set of RGB primaries and a white point. If you have that, then an RGB color in the camera colorspace is as valid as a color in Rec.2020. Colors are present at each stage of the pipeline; they are just encoded differently.

Then the only things you are allowed to do are basically exposure compensations and white balance corrections. Any other non-linear adjustment, including filmic curves, does not have a physical counterpart, because for a given scene illumination you cannot physically change the relative strength of the light diffusion in shadows and highlights, right?

Notice also that exposure compensation and WB work equally well on RGB values that are encoded with a pure power function (as already mentioned by @Entropy512), therefore linear encoding is not strictly needed from a mathematical point of view, at least in this case.

I totally agree with you, but a safer rephrasing of the above sentence might be we’ll always be blending RGB triplets that have the same channel ratios, because they are all obtained from the same initial RGB triplet by linear scaling (aka exposure compensation). I do not know the internals of DT’s base curve module, but this might only be valid if the base curve is a straight line…

Now let me try to give you my point of view on some of the facts and myths about linear vs. “gamma” encodings…

Color shifts
When you apply a tone curve to the RGB values, color shifts are generally not a consequence of whether the RGB values are linear or not. If a non-linear tone curve is applied to the individual RGB channels, color shifts are always there.
The usual “workaround” to avoid this is to apply the tone curve to some RGB norm (like luminance), compute the out/in luminance ratio, and then scale the RGB values by this ratio (a rough sketch follows the list below).
This is where the need for linear RGB values comes into play, because:

  • luminance must be computed from linear RGB values
  • the RGB scaling by the luminance ratio is equivalent to an exposure scaling only if the RGB values are linear (or encoded with a pure power function, but there is no advantage in doing that in this specific case)
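
In code, the workaround looks roughly like this - Rec.709 luminance as the norm and an arbitrary curve, purely for illustration:

import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])

def apply_curve_per_channel(rgb_linear, curve):
    # the naive way: hue/saturation shift whenever the curve is non-linear
    return curve(rgb_linear)

def apply_curve_on_norm(rgb_linear, curve):
    # the "workaround": tone-map a norm, then scale RGB by the out/in ratio (a pure exposure scaling)
    luma_in = rgb_linear @ REC709
    ratio = curve(luma_in) / np.maximum(luma_in, 1e-9)
    return rgb_linear * ratio[..., None]

curve = lambda x: np.power(np.clip(x, 0.0, None), 0.7)   # arbitrary midtone-lifting curve
pixel = np.array([[0.5, 0.2, 0.05]])
print(apply_curve_per_channel(pixel, curve))   # channel ratios change -> colour shift
print(apply_curve_on_norm(pixel, curve))       # channel ratios preserved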

When linear is visually better
One domain in which AFAIK linear encoding is truly needed is pixel blending.
Here is a classical example of blending a pure red (on the left) and a pure cyan (on the right) color at 50% opacity (in the middle). First image is obtained in sRGB encoding, second one in linear:

I guess we all agree that the second one looks “more correct”.

The same happens when downscaling images, because downscaling involves blending neighbouring pixels. Downscaling in linear encoding is better.
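
The arithmetic behind that red/cyan example, assuming the standard sRGB transfer for the encoded version:

import numpy as np

def srgb_decode(y):
    y = np.asarray(y, dtype=np.float64)
    return np.where(y <= 0.04045, y / 12.92, np.power((y + 0.055) / 1.055, 2.4))

def srgb_encode(x):
    x = np.asarray(x, dtype=np.float64)
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1 / 2.4) - 0.055)

red = np.array([1.0, 0.0, 0.0])    # sRGB-encoded pure red
cyan = np.array([0.0, 1.0, 1.0])   # sRGB-encoded pure cyan

blend_in_srgb = 0.5 * red + 0.5 * cyan                                  # -> [0.5, 0.5, 0.5], a darkish grey
blend_in_linear = srgb_encode(0.5 * srgb_decode(red) + 0.5 * srgb_decode(cyan))
print(blend_in_srgb, blend_in_linear)                                   # -> ~[0.735, 0.735, 0.735], noticeably lighter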

When linear encoding is NOT good
One clear case in which linear encoding is not “good” from the practical point of view is when building luminosity masks. For example, one would expect that a basic luminosity mask has an opacity of 50% where the image roughly corresponds to mid-grey, right? However, mid-grey is 18% in linear encoding…

The relationship with enfuse
Correct me if I am wrong, but in a simplified form and for two images, the exposure blending boils down to something like

(1-W)*x_{1} + W*x_{2}

where x1 and x2 are the RGB channels of images 1 and 2 respectively. W is a factor that gives more weight to “well exposed” pixels, right? And W is derived starting from some RGB norm, like luminance, right?

To me, this looks a lot like blending through luminance masks, which can be split into two parts:

  • the determination of the weights W, for which perceptual encoding seems to be more appropriate, since you want your weight function to be peaked at 50% and perceptually symmetric around mid-gray, right?
  • the blending of RGB values weighted by W, for which linear encoding is more appropriate.

Therefore, I wonder if the following approach would yield correct results (rough sketch after the list):

  • take linear RGB values as input
  • compute the RGB luminance, and encode it with a power function with exponent \gamma = 1/2.45, so that mid-grey is roughly mapped to 0.5
  • use the power-encoded luminance to compute the weights
  • blend the linear RGB values with the obtained weights
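
In rough numpy terms, I imagine something like this (the luminance coefficients and the Gaussian width are placeholders, just to fix ideas):

import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])

def fuse_proposal(exposures, gamma=2.45, target=0.5, sigma=0.2):
    # exposures: list of linear-RGB float arrays (the exposure-shifted copies)
    # 1) weights from power-encoded luminance, roughly symmetric around mid-grey
    luma = [np.power(np.clip(e @ REC709, 0.0, None), 1.0 / gamma) for e in exposures]
    weights = [np.exp(-((l - target) ** 2) / (2 * sigma ** 2)) for l in luma]
    norm = np.sum(weights, axis=0) + 1e-12
    # 2) blend the *linear* RGB values with those weights
    out = np.sum([w[..., None] * e for w, e in zip(weights, exposures)], axis=0)
    return out / norm[..., None]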

Would this make sense?

P.S.: this answer has nothing to do with the initial issue with Intel Neo drivers, so do not hesitate to split it into a separate thread if you think it would be better!


Carmelo - VERY well written, thanks!

Looks good to me.

Yup. It sounds like Pierre works in the cinema industry, and one thing to keep in mind is that in the majority of cinematic productions, a major part of the production is controlling the actual light present in the scene with modifiers (scrims, reflectors, etc.) and additional lighting. This means some of the extreme dynamic range management tricks such as exposure fusion and one of your approaches at PhotoFlow - new dynamic range compressor tool (experimental) - #36 by afre aren’t nearly as necessary, if at all. (Nice approach by the way; I think my next task is to take a look at DT’s implementation of that approach and figure out why it underperformed. FYI, fixing the poor performance of enfuse in highlights that you show as an example in that thread is exactly what I’ve been trying to do. The current implementation performs horribly in highlights.)

Ideally you do this even in photography - but sometimes you’re hiking on a trail up multiple flights of stairs and merely putting your camera and tripod in the backpack has already increased your burden significantly. There is no way I’m bringing a monobloc and scrims/reflectors on the trail!

So we’re left with the problem of a scene with a very high dynamic range, a camera that can capture it if you expose to preserve highlights, but stuck with the lowest common denominator of a typical SDR display. So the challenge is to not make things look like crap on such displays, even if what is displayed is now at best a rough approximation of the real scene.

Of note, a lot of these problems go away with a wide-gamut HDR display. As an experiment, I’ve exported a few of the images I find need exposure fusion to linear Rec. 2020 RGB, and then fed them to ffmpeg to generate a video that has the Hybrid Log Gamma (HLG) transfer curve and appropriate metadata. The result looks AMAZING without any need for exposure fusion at all in most cases. Sadly, for still images, we have no good way to deliver content to such displays, even if those displays are getting more common. However, if you ever expect your content to be viewed on a phone or tablet, 99% of them are SDR displays and it’s going to be that way for many years to come. :frowning: Which happens to be why you’ll have to trust my word that exporting to HLG looks gorgeous on a decent HLG display - there’s simply no way to convey that visually through this forum software to the displays that 99% of readers here have.

Yup, and the math behind this is the identity (ax)^y = a^y * x^y, as mentioned before.

Yeah, that’s a better way of wording it.

This is, as far as I can tell, the fundamental reason Pierre hates basecurve so much. However, this issue with basecurve was fixed:

It happens that it was not fixed for the fusion flow (a result of two code paths getting branched, apparently to eliminate a single multiply operation on the “fast” branch ages ago)

I’ll be submitting a pull request later today that reorganizes some of these code paths such that Edgardo’s changes can be used in combination with fusion.

Side note: Getting to the science vs. art discussion I mentioned previously, in some cases such chromaticity shifts actually look really nice. For one particular sunset example, applying the “sony alpha like” transfer curve in the old per-channel way gives a much more “fiery” look to the clouds. Is it in any way a correct representation of the physical realities of the scene? Not at all. Does it look impressive? Yup. Obviously, this is the sort of thing that should be used with caution and should not be the default behavior (which is indeed the case going forward in DT)

Yup, no issues with this. All of the gradient examples I posted were of blends between two pixels that were linear scalings of each other (e.g. channel ratios are constant). If channel ratios aren’t constant, things get funky.

Yup, and this is a major part of why DT’s current exposure fusion workflow tends to pull everything into the highlights and then crush the highlights. It also pulls quite a bit up past the point at which it’ll clip later in the pipeline.

Exactly! The equation you give is in the enfuse manual as equation 6.1. Alternatively, the manual gives a second equation (6.3), which is ((1-w)*x_{1}^(1/y) + w*x_{2}^(1/y))^y. (Effectively, what I did in darktable’s fusion implementation is to replace equation 6.1 with 6.3, where y = 2.4.)
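
In code form, the difference between the two is just this (y = 2.4 being what I use; the formulas are exactly as quoted above):

def blend_eq_6_1(x1, x2, w):
    # plain linear blend (enfuse eq. 6.1)
    return (1 - w) * x1 + w * x2

def blend_eq_6_3(x1, x2, w, y=2.4):
    # blend the power-encoded values, then decode (enfuse eq. 6.3)
    return ((1 - w) * x1 ** (1 / y) + w * x2 ** (1 / y)) ** y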

Yup!

I’m not so sure of that. Let’s take the extreme example of blending black with white, with w = 0.5

So x_{1} = 0, x_{2} = 1.0, w = 0.5

Plug this into your equation (corresponds to equation 6.1 in the enfuse manual), and you get 0.5

Plug this into equation 6.3 with y = 2.4 and you get something around 0.18

So perceptually, the blending in linear space gives a result that is significantly brighter than the perceptual midpoint between the two inputs. You can see this in one of the orange gradients I posted.
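
(For anyone who wants to check those two numbers:)

w, x1, x2, y = 0.5, 0.0, 1.0, 2.4
print((1 - w) * x1 + w * x2)                                # 0.5    (eq. 6.1)
print(((1 - w) * x1 ** (1 / y) + w * x2 ** (1 / y)) ** y)   # ~0.189 (eq. 6.3)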

That was, in one of the examples I posted above, described as “weight in gamma-space, blend in linear”. The end result was an image that was very bright and washed out. Better than the current lin/lin approach, but still not visually pleasing. Someone posted an example of applying a power transform to that image to make it look much better, that was the case where I responded that doing so was one of the cases where a chromaticity shift could occur (see the gradient I posted with two different shades of orange).

I don’t think anyone has talked about that in a while, and I don’t expect any more conversation on that particular topic outside of common/opencl_drivers_blacklist: Only blacklist NEO on Windows by Entropy512 · Pull Request #2797 · darktable-org/darktable · GitHub - yes, I submitted a pull request to un-blacklist Neo on non-Windows platforms since it appears that the root cause of failures on Linux was identified and corrected. OpenCL + NEO is working great on my system.