Exposure Fusion and Intel Neo drivers

Carmelo - VERY well written, thanks!

Looks good to me.

Yup. It sounds like Pierre works in the cinema industry, and one thing to keep in mind is that in the majority of cinematic productions, a major part of the production is controlling the actual light present in the scene with modifiers (scrims, reflectors, etc.) and additional lighting. This means some of the extreme dynamic range management tricks such as exposure fusion and one of your approaches at PhotoFlow - new dynamic range compressor tool (experimental) - #36 by afre aren’t nearly as necessary, if at all. (Nice approach, by the way - I think my next task is to take a look at DT’s implementation of that approach and figure out why it underperformed. FYI, fixing the poor performance of enfuse in highlights that you show as an example in that thread is exactly what I’ve been trying to do; the current implementation performs horribly in highlights.)

Ideally you do this even in photography - but sometimes you’re hiking on a trail up multiple flights of stairs and merely putting your camera and tripod in the backpack has already increased your burden significantly. There is no way I’m bringing a monobloc and scrims/reflectors on the trail!

So we’re left with the problem of a scene with a very high dynamic range, a camera that can capture it if you expose to preserve highlights, and a viewer stuck with the lowest common denominator of a typical SDR display. The challenge is to not make things look like crap on such displays, even if what is displayed is now at best a rough approximation of the real scene.

Of note, a lot of these problems go away with a wide-gamut HDR display. As an experiment, I’ve exported a few of the images I find need exposure fusion to linear Rec. 2020 RGB, and then fed this to ffmpeg to generate a video that has the Hybrid Log Gamma (HLG) transfer curve and appropriate metadata. The result looks AMAZING, without any need for exposure fusion at all in most cases. Sadly, for still images, we have no good way to deliver content to such displays, even though those displays are getting more common. However, if you ever expect your content to be viewed on a phone or tablet, 99% of them are SDR displays and it’s going to be that way for many years to come. :frowning: Which happens to be why you’ll have to trust my word that exporting to HLG looks gorgeous on a decent HLG display - there’s simply no way to convey that visually through this forum software to the displays that 99% of readers here have.
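
For anyone wanting to reproduce the experiment, the sketch below shows the general shape of the ffmpeg invocation. It is illustrative rather than my exact command: the file names are made up, and the zscale conversion options are assumptions that may need tuning for your ffmpeg build and source files.

```python
import subprocess

# Rough sketch: wrap linear Rec. 2020 TIFF exports into an HLG-tagged
# H.265 video.  Assumes ffmpeg was built with libzimg (zscale) and libx265,
# and that the stills are named export_000.tif, export_001.tif, ...
cmd = [
    "ffmpeg",
    "-framerate", "1",                 # hold each still for one second
    "-i", "export_%03d.tif",           # linear Rec. 2020 exports
    # convert the linear transfer characteristic to HLG (options may need tuning)
    "-vf", "zscale=transferin=linear:transfer=arib-std-b67:"
           "primariesin=2020:primaries=2020:matrix=2020_ncl",
    "-pix_fmt", "yuv420p10le",         # 10-bit output
    "-c:v", "libx265",
    # tag the stream as BT.2020 / HLG so players apply the right transfer
    "-color_primaries", "bt2020",
    "-color_trc", "arib-std-b67",
    "-colorspace", "bt2020nc",
    "hlg_test.mp4",
]
subprocess.run(cmd, check=True)
```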

Yup, and the math behind this is the identity (ax)^y = (a^y)(x^y), as mentioned before.
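
A quick numerical illustration of why that matters (values of my own choosing): if one pixel is just an exposure scaling of another, a per-channel power keeps them as scalings of each other, so the relationship between their chromaticities is preserved.

```python
# Per-channel power applied to two pixels that differ only by exposure.
y = 2.4
a = 0.5
pixel1 = [0.8, 0.4, 0.1]            # some arbitrary chromaticity
pixel2 = [a * c for c in pixel1]    # same chromaticity, half the exposure

powered1 = [c ** y for c in pixel1]
powered2 = [c ** y for c in pixel2]

# Every channel of powered2 is powered1 scaled by the same factor a**y,
# i.e. the two pixels still differ only in exposure, not chromaticity.
print([p2 / p1 for p1, p2 in zip(powered1, powered2)])  # [0.189..., 0.189..., 0.189...]
print(a ** y)                                           # 0.189...
```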

Yeah, that’s a better way of wording it.

This is, as far as I can tell, the fundamental reason Pierre hates basecurve so much. However, this issue with basecurve was fixed:

It just happens that it was not fixed for the fusion path (a result of the code having been split into two paths ages ago, apparently to eliminate a single multiply operation on the “fast” branch).

I’ll be submitting a pull request later today that reorganizes some of these code paths such that Edgardo’s changes can be used in combination with fusion.

Side note: Getting to the science vs. art discussion I mentioned previously, in some cases such chromaticity shifts actually look really nice. For one particular sunset example, applying the “sony alpha like” transfer curve in the old per-channel way gives a much more “fiery” look to the clouds. Is it in any way a correct representation of the physical realities of the scene? Not at all. Does it look impressive? Yup. Obviously, this is the sort of thing that should be used with caution and should not be the default behavior (which is indeed the case going forward in DT)

Yup, no issues with this. All of the gradient examples I posted were of blends between two pixels that were linear scalings of each other (i.e. the channel ratios are constant). If the channel ratios aren’t constant, things get funky.

Yup, and this is a major part of why DT’s current exposure fusion workflow tends to pull everything into the highlights and then crush the highlights. It also pulls quite a bit up past the point at which it’ll clip later in the pipeline.

Exactly! The equation you give is in the enfuse manual as equation 6.1. Alternatively, the manual gives a second equation (6.3), which is ((1 - w) x_{1}^(1/y) + w x_{2}^(1/y))^y. Effectively, what I did in darktable’s fusion implementation is to replace equation 6.1 with 6.3, with y = 2.4.

Yup!

I’m not so sure of that. Let’s take the extreme example of blending black with white, with w = 0.5

So x_{1} = 0, x_{2} = 1.0, w = 0.5

Plug this into your equation (corresponds to equation 6.1 in the enfuse manual), and you get 0.5

Plug this into equation 6.3 with y = 2.4 and you get roughly 0.19

So perceptually, the blending in linear space gives a result that is significantly brighter than the perceptual midpoint between the two inputs. You can see this in one of the orange gradients I posted.
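
For concreteness, here is a tiny sketch of the two blends (equation numbering per the enfuse manual; the function and variable names are just for illustration):

```python
def blend_linear(x1, x2, w):
    """enfuse eq. 6.1: weighted average in linear space."""
    return (1 - w) * x1 + w * x2

def blend_gamma(x1, x2, w, y=2.4):
    """enfuse eq. 6.3: weighted average of the gamma-encoded values, re-linearized."""
    return ((1 - w) * x1 ** (1 / y) + w * x2 ** (1 / y)) ** y

# Blending black (0.0) with white (1.0) at w = 0.5:
print(blend_linear(0.0, 1.0, 0.5))  # 0.5
print(blend_gamma(0.0, 1.0, 0.5))   # ~0.189
```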

That was, in one of the examples I posted above, described as “weight in gamma-space, blend in linear”. The end result was an image that was very bright and washed out - better than the current lin/lin approach, but still not visually pleasing. Someone posted an example of applying a power transform to that image to make it look much better; that was the case where I responded that doing so was one of the situations where a chromaticity shift could occur (see the gradient I posted with two different shades of orange).
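
To put some illustrative numbers of my own on that chromaticity-shift point: when the two endpoints do not share channel ratios, it matters whether a per-channel power is applied before or after the blend, and the channel ratios (i.e. the chromaticity) of the result change accordingly.

```python
# Two "oranges" with different channel ratios, blended 50/50, with a
# per-channel power of 1/2.4 applied either after or before the blend.
def power(pixel, p):
    return [c ** p for c in pixel]

def blend(p1, p2, w=0.5):
    return [(1 - w) * c1 + w * c2 for c1, c2 in zip(p1, p2)]

a = [1.0, 0.40, 0.05]   # one shade of orange
b = [0.9, 0.15, 0.02]   # a different shade (different G/R and B/R ratios)

after  = power(blend(a, b), 1 / 2.4)                   # blend in linear, then power
before = blend(power(a, 1 / 2.4), power(b, 1 / 2.4))   # power, then blend

# The green/red ratio of the midpoint differs between the two orderings;
# if b were just a scaled copy of a, both results would be scalar multiples
# of the same pixel and the chromaticity would not depend on the ordering.
print(after[1] / after[0])    # ~0.60
print(before[1] / before[0])  # ~0.58
```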

I don’t think anyone has talked about that in a while, and I don’t expect any more conversation on that particular topic outside of common/opencl_drivers_blacklist: Only blacklist NEO on Windows by Entropy512 · Pull Request #2797 · darktable-org/darktable · GitHub. Yes, I submitted a pull request to un-blacklist NEO on non-Windows platforms, since it appears that the root cause of the failures on Linux was identified and corrected. OpenCL + NEO is working great on my system.