Exposure Fusion and Intel Neo drivers

When I am done defusing bombs for you. Segfaults, bugs, use cases and presets… :wink:

Overexposure has nothing to do with RGB spaces; it has to do with poor settings. Basecurve does indeed work in RGB, but its mistake is to apply a single 1D transfer function to all three channels, which does not preserve chrominance.

This is exactly what we want to avoid. Working in fully gamma-encoded pipelines breaks light-transport models and messes up colours. If one module needs perceptually encoded data, it can encode at its input and decode at its output, but the pipe should stay linear, and every kind of masking/blending will fail if this condition is not met (the reason is grounded in maths; see Parseval’s theorem).
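Something like this sketch, to make the encode/decode pattern concrete (the names and the cube-root encoding are purely illustrative, not darktable’s API):

```python
import numpy as np

def run_perceptual_module(rgb_linear, process):
    """Illustrative only: a module that needs perceptually encoded data
    encodes at its input, does its work there, and decodes at its output,
    so the rest of the pipe stays linear (scene-referred)."""
    encoded = np.cbrt(rgb_linear)   # stand-in perceptual encoding (roughly L*)
    processed = process(encoded)    # the module's own work, in perceptual space
    return processed ** 3           # decode back to linear before handing off
```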

No good algorithm should rely on the assumption that middle grey is 18%, 50% or any other fixed value, because those are assumptions from 1976 that are valid for low-dynamic-range imagery (< 8 EV) and assume that 100% is diffuse white (i.e. the shot was exposed for mid-tones). Today’s average APS-C DSLR hits 10 EV of dynamic range, so please leave these assumptions to history where they belong: we now expose for highlights, 100% luminance is likely to be a specular highlight, and middle grey can be anywhere between 0 and 18%.
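A back-of-the-envelope illustration, assuming (purely for the example) 3 EV of specular headroom above diffuse white:

```python
headroom_ev = 3                        # assumed: specular peak sits 3 EV above diffuse white
diffuse_white = 1.0 / 2 ** headroom_ev
middle_grey = 0.18 * diffuse_white
print(middle_grey)                     # 0.0225 -> middle grey lands around 2%, nowhere near 18%
```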

HDR blending methods are mostly garbage anyway. All of them.

@Entropy512 enfuse doesn’t require specific conditions to work per se as long as the input files have the same format and you use the correct parameters. I have used it on various colour spaces, TRCs and data, and it worked splendidly.

Not quite, but I would say that many papers get the basics terribly wrong, invalidating most of their findings. However, as a hobbyist, I still read them because they aren’t completely rotten. There are still good ideas to be found.


Yes, in spades. I’ve found that keeping the data in its original relationship compels me to do far fewer (if any) saturation “pet tricks” to restore pleasing colors.

Well, sometimes you just have to do it. There’s a lot of dynamic range out there, and it’ll be a long time before cameras can cleanly capture every presentation of it in one exposure.

We’re talking about Beignet/Neo, not CPU implementations.

Not once in that link are Beignet/Neo discussed.

Now, by doing some digging, one eventually finds the issue that drove that commit:
https://redmine.darktable.org/issues/12541

So, there are two procedural problems identified here:

  1. As I already said, the commit in question has no description whatsoever, and does not reference the issue that drove it in any way, shape or form. No commit message, no comment in the code. One should not have to spend 20 minutes of Google searching to find something that should have been in the commit message or comments.
  2. The developers immediately blamed the driver and blacklisted it across the board, for all GPU variants and all possible driver versions, based on a single report - a report of a problem that turned out to be within darktable itself (a failure to perform the required cache invalidation of compiled kernels). The improper caching was solved by pull 2033 - yet the driver is still blacklisted.

Improper settings? OK, so - please tell me which settings of exposure EV and exposure bias in the exposure fusion mode of basecurve will not cause this:

to turn into this:


(exposure fusion, three exposures, exposure shift +1.232, exposure bias +1.00, darktable master as of an hour or so ago - note that the highlights of the rocks on the upper right have been raised so high as to be nearly blown)

As opposed to this:


(same settings, but applying a 2.4 gamma curve immediately after the basecurve step of the exposure fusion pipeline, then returning to linear after fusion is completed, brightness of the rocks only raised slightly)
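Roughly what that experiment amounts to, as a sketch (`fuse()` stands in for the Mertens-style weighting and pyramid blend, not darktable’s actual code):

```python
import numpy as np

def fuse_in_gamma(exposures_linear, fuse, gamma=2.4):
    """Sketch of the experiment described above: gamma-encode each generated
    exposure right after the basecurve step, run the exposure-fusion blend
    there, then return to linear once fusion is complete."""
    encoded = [np.power(np.clip(e, 0.0, None), 1.0 / gamma) for e in exposures_linear]
    fused = fuse(encoded)            # weighting + gaussian/laplacian pyramid blend
    return np.power(fused, gamma)    # back to linear for the rest of the pipe
```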

It isn’t exactly the best use case for exposure fusion (unfortunately, my current preferred test picture has clearly identifiable people who I don’t think would want to be used for demonstration purposes in a public forum): even after trying to make the algorithm less likely to blow highlights, it still blows them to some degree in the shown test image - though not as severely as the current hardcoded target of 0.54 linear does.

For reference, the test image comes from a rather low-end 360-degree camera. As a result the input dynamic range isn’t that great, so the potential benefit of exposure fusion is smaller, but it’s the test case that initially had me wondering why darktable’s exposure fusion implementation was so vastly inferior to feeding enfuse three separate sRGB JPEGs, despite claiming to be derived from the same algorithm.

I’ve never seen enfuse mess up colors when feeding it sRGB-encoded images. I assume you’re talking about what happens when you disable “preserve colors” in basecurve. Perhaps there’s some confusion resulting from the fact that the exposure fusion function in darktable has been “baked in” to basecurve?

Perhaps I should have been clear that this is what I did. However, your assertion that every kind of masking/blending will fail is false for the algorithm implemented by enfuse (and implemented in darktable based on the algorithm described by Mertens et al. in Exposure Fusion). Calculating weights in gamma but masking/blending in linear gives horribly ugly results (haloing, etc.).

What an average APS DSLR or MILC can do is irrelevant when you’re delivering to an sRGB display, other than making the exposure fusion approach described in compressing dynamic range with exposure fusion | darktable possible. In fact, if you expose for highlights, then the rest of the scene will be severely underexposed without some method of bringing up the shadows - and darktable’s exposure fusion attempts to be one such method. The problem is that by operating in linear space instead of gamma-encoded, the results obtained aren’t even remotely what the algorithm was designed to do.

Darktable hardcodes a target brightness of 0.54 linear (where did this come from?) and a standard deviation in the weighting calculation of 0.5 (where did this come from? For reference, the original paper by Mertens et al. uses a target of 0.5 and a std dev of 0.2). As shown above, this results in everything getting pulled way up into the highlights instead of towards the midtones.
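For reference, the well-exposedness term from the paper is just a Gauss curve around a target intensity; plugging in both sets of constants shows how different the behaviour is (illustrative Python, not darktable’s code):

```python
import numpy as np

def well_exposedness(i, target=0.5, sigma=0.2):
    """Mertens et al.'s well-exposedness weight: a Gauss curve around the
    target intensity (paper defaults: target 0.5, sigma 0.2)."""
    return np.exp(-((i - target) ** 2) / (2.0 * sigma ** 2))

i = np.linspace(0.0, 1.0, 5)
print(well_exposedness(i))                          # paper constants: strongly favour mid-tones
print(well_exposedness(i, target=0.54, sigma=0.5))  # 0.54 / 0.5: much flatter, barely
                                                    # penalises the highlights
```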

I’m going hiking tomorrow, hopefully I’ll come back with some better use case example images that don’t have people in. (I found my modified algorithm to be highly suited to images from a family wedding a few weeks ago - but as mentioned above, I don’t feel comfortable posting images from that wedding here.)

Got hints on reliable and consistent content delivery of stills to HDR10/HLG displays?

I’ve found no reliable way to do this. It looks gorgeous when you manage it (and eliminates the need for dynamic-range compression tricks), but for the majority of displays out there you need to encode your stills as a slideshow in a 10-bit HEVC video; otherwise the display assumes SDR content with an sRGB transfer curve and we’re back to square one. You’ve already established that modern cameras can capture much wider dynamic range in a single exposure than an sRGB JPEG can deliver to a typical user’s display.

I agree it could be better, but it is what it is. Things are never perfect, we all do the best we can. Glad you found your answer.

Perhaps they just didn’t want to deal with it, nor with the bug reports that come with it. The good news is that it’s proper free software, so you can patch out the blacklist with ease.

You can override it with: darktable --conf opencl_disable_drivers_blacklist=yes -d opencl


Just use masked dodging and burning, like in the old days in the darkroom. If it worked then, it should work better now.

That’s off topic. All the tests of Intel OpenCL we have done have failed, so that GPU codepath is not portable. Since dt is already optimized for CPU with SSE2, and SSE4/AVX/AVX2 support is incoming, there is no point investing effort in fixing Intel OpenCL: it will not bring additional performance, and its absence does not prevent using dt on CPU.

There is one rule with base curve: don’t use base curve.

I don’t know what enfuse does or how it does it, but the theory is pretty clear now: use light-transport models, i.e. encodings proportional to the energy of the light emission.

Every time you convolve, you need to respect the conservation of energy. Convolving non-linear data breaks the conservation of energy. Every kind of weighted average (blurring or otherwise) is a convolution.
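A tiny toy example of what that means (1D soft edge, normalised box kernel):

```python
import numpy as np

edge   = np.array([0.1, 0.1, 0.1, 0.8, 0.8, 0.8])   # linear light across a soft edge
kernel = np.ones(3) / 3.0                            # normalised box blur

blur_linear = np.convolve(edge, kernel, mode="valid")
blur_gamma  = np.convolve(edge ** (1 / 2.4), kernel, mode="valid") ** 2.4

print(blur_linear)  # [0.1, 0.333, 0.567, 0.8] -- the local mean of the light is preserved
print(blur_gamma)   # [0.1, 0.248, 0.478, 0.8] -- the transition comes out darker: energy is lost
```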

That’s utterly wrong, since all you are trying to do is remap camera dynamic range to display dynamic range. RGB values encode light emissions; you can’t treat them as black boxes of numbers. Please stop trolling off-topic and read https://medium.com/the-hitchhikers-guide-to-digital-colour


We see reports of crashes with Intel and OpenCL almost every second week, as the blacklist doesn’t work on Windows. So the rumor that it affects only a few people is wrong.


I intend to do so.

Let’s go back to the OP’s original comment:

Here’s the user perspective:
darktable is working great
Hey, a new version of darktable!
Why is darktable now taking 4x as long for my workflow?

Most users are not going to do what I did and dig through git logs to figure out WHY darktable slowed down when they tried building from git. Even people who DO compile from source will likely initially assume they simply made a mistake when compiling by forgetting an important optional library (I did).

This increases the time I spend on a large number of images that I’ve exposed to preserve highlights, from <20 seconds per image for a rough first pass to >10 minutes for something I’m happy with.

OK, I guess you opened a bit of a can of potentially off-topic worms with your comment in your first post, which I quoted (otherwise I would not have responded to this topic).

Define “failed”. On an i5-7200U on Ubuntu 19.04 with the distribution-packaged Neo, I see a nearly 4x speedup from CPU to GPU with no discernible visual penalty. In the discussions of at least one pull request (the one where caching was fixed), not a single person indicated they had an issue with the Neo drivers (at least on Linux), with the exception of the cache-invalidation issues.

dartkable_cpu_perf.txt (1.4 KB)

dartkable_neo_perf.txt (42.3 KB)

Then why does the module exist in darktable?

So the darktable team added a feature inspired by another program, specifically cited the program/algorithm, made a blog post about it - but implemented the algorithm so improperly that it’s impossible to get visual results even remotely resembling those of the algorithm they cite as their inspiration.

Now the position is that not only should that feature not be fixed (because obtaining visually acceptable results violates some rule of mathematical purity), but that the feature shouldn’t really exist in the first place, nor should the module it was added to.

Anyone who points out the issues with said feature (despite the “slow and inefficient” original workflow having worked for them for years - that workflow being: export one JPEG, turn on the exposure module at +1 to +2 EV, export that, duplicate the exposure module with the same settings, export THAT, then feed the results to enfuse) is just a troll?

Given your statement made in your first post, you’re sounding REALLY hypocritical.

I apparently wasted my time whipping up a patch to make my darktable workflow vastly more efficient by fixing a feature that doesn’t work the way it was claimed to. It’s pretty clear it won’t get accepted if I finish up the OpenCL side of things and attempt to submit it, and that any form of the patch that provides visually acceptable results will be rejected.

The new module I’m working on (remember, that was the topic of this thread) makes a dodging and burning mask from a guided filter in 50 ms. Every other method will ruin colours in unpredictable ways.
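For the curious, the core of it is a guided filter (He et al.); here is a minimal grey-scale sketch of that filter (not the module’s actual code, scipy used only for the box blur):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Minimal grey-scale guided filter: fits a local linear model
    q = a * guide + b, giving an edge-aware smoothing usable as a
    dodging/burning (exposure) mask."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_I, mean_p = box(guide), box(src)
    var_I  = box(guide * guide) - mean_I * mean_I
    cov_Ip = box(guide * src) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a) * guide + box(b)
```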

Because of 10 years of legacy to stay compatible with. You don’t have to use every single module in the software. It will go away in the darktable 3.x branch if we decide to break compatibility with the 2.x branch (which would be sane, I guess).

You don’t get it, do you? Working on gamma-encoded data (aka display-referred) is what Photoshop, Lightroom and the whole legacy image-processing stack do. Guess what? It sucks at HDR! Why? Because it breaks light-transport models, so it messes up colours and produces halos while blending. So, instead of patching rotten algos from another era, let’s embrace the future and get a chance to do things properly, in scene-referred space, with linear operators, where 18% or 50% are just regular luminances.

Because, yes, I have spotted halos in https://mericam.github.io/papers/exposure_fusion_reduced.pdf (p.6), and I told you they would happen even before looking at the paper (sure, they hide them to some extent by working on a multiscale pyramid, but that doesn’t make it right). Get your theory right, then unroll the algo, not the other way around.

Self-compiled or packaged dt?

I’m not a dt dev or user, but I’ve been here long enough to know that if you have a cogent patch to dt, it will be considered. You’re sparring with a person who has demonstrated just that in his efforts to re-orient dt, and others’ thinking, toward a linear processing pipeline. A lot of the thinking behind that has occurred in threads on this forum; you’ve jumped into that context with a few specific assertions without a real understanding of that context. That goes for the module performance, and very likely for the OpenCL consideration.

One thing that needs to happen is that this thread should be teased apart into its separate topics.


OK, I played with it. I can’t yet say whether I prefer its results or those of a fixed enfuse workflow. It’s still significantly more effort (adjusting 5-6 sliders, vs flipping a switch and twiddling one slider). In terms of potential theoretical visual improvement vs. effort required, I’m not a fan so far.

I’ve been using the enfuse approach for years and have been very happy with its results, however much you may hate aspects of the algorithm. It happens to have been “baked in” to the basecurve module, for better or for worse.

I’m not seeing haloing there. Unless you’re talking about the low-frequency artifacts that they explicitly state are a known (rare) corner case for that particular image.

The only time I’ve seen haloing with an exposure fusion algorithm derived from enfuse is with the current state of darktable master (with one example given in an image posted above). Guess what? That was the one which was entirely processed in linear space.

Self-compiled - for months I assumed the slowdown was a mistake on my part, but in reality it was the result of going from a release prior to Neo being blacklisted to one after.

The same thing will happen to any user who updates from an older prepackaged version of dt to one after the blacklist. All they’ll see is that darktable is now significantly slower - at least on mobile Kaby Lake CPU/GPU combinations such as the i5-7200U with the Neo driver packaged with Ubuntu 19.04.

Disable the blacklist - boom, major performance increase. Users shouldn’t have to dig into darktablerc to make it perform well.

OK, I need to head out to enjoy today’s nice weather, but it sounds like that’s a good idea. Either a mod can split posts regarding enfuse to a different thread, or I’ll start a new thread when I get back.

No, the base curve blends non-linear data: it applies the non-linear tone curve first, then blends the “exposures”.

Except tone curves cannot preserve the chrominance of your pictures, and they destroy the linearity of the pipe too early. If you are willing to lose colours just to avoid setting four more things, well, maybe stick to your camera JPEG.

What would “breaking compatibility” mean? Would I lose tons of already-finished edits, or would legacy modules still be available for legacy images (e.g. if I have to re-export an image, maybe to get a version with the crop removed for a given print layout)?


I’m certain they won’t break compatibility with old edits, that just isn’t a thing that can reasonably happen.


x^a * y^a = (x*y)^a

Now, for fixed y, and x1 = r, x2 = g, x3 = b:

r^a * y^a = (r*y)^a
g^a * y^a = (g*y)^a
b^a * y^a = (b*y)^a

Hey look, no change in the ratios of r/g/b whether you multiply by y inside the gamma relationship or by y^a outside of it.

your mind = blown

You keep on ranting about chrominance shifts… Chrominance shifts that I’ve only seen when playing with your tone equalizer.

Undesired luminance changes? Maybe. But, perceptually (as opposed to mathematically), averaging two values in gamma space gives a perception closer to the midpoint between those two values than averaging in linear space and then converting to gamma. I need to do more digging, but it probably explains why (despite your insistence that blending in nonlinear space will cause haloing) blending in linear space causes haloing not seen when blending/exposure-fusing in gamma-2.4 space.

Upper is interpolation in linear space. Lower is interpolation in gamma space (specifically, gamma = 2.4) - and yes, I readily admit I’m currently not handling sRGB’s linear segment at very low values (where the gamma outside that segment is 2.4 but the overall average is 2.2-2.3).
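A quick numeric version of the same point (pure power 2.4, ignoring sRGB’s linear segment):

```python
# Averaging black (0.0) and white (1.0):
linear_mid = (0.0 + 1.0) / 2                                     # 0.5 linear, ~0.74 once gamma-encoded: looks bright
gamma_mid  = ((0.0 ** (1 / 2.4) + 1.0 ** (1 / 2.4)) / 2) ** 2.4  # ~0.19 linear: far closer to a perceived mid-grey
print(linear_mid, gamma_mid)
```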

BTW, where’s the chrominance shift you insist MUST be occurring in such an operation, as shown in this image?

Yeah. Now try f(x)^a * f(y)^a where f is a general transfer function defined implicitly by interpolation of nodes. What is it equal to?
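To make that concrete, take any non-power curve (a smoothstep here, standing in for a curve interpolated from nodes) and the identity above stops holding:

```python
def s_curve(x):
    """Toy S-shaped tone curve (smoothstep), standing in for a curve
    defined implicitly by interpolated nodes."""
    return x * x * (3 - 2 * x)

x, y = 0.4, 0.5
print(s_curve(x * y), s_curve(x) * s_curve(y))
# 0.104 vs 0.176: the identity breaks for a general curve, so the ratios of the
# curved channels drift as exposure changes (a chrominance shift).
```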

Or maybe I have more experience than you do on these matters and I know how to make these things break. You are one year too late for this conversation. Chrominance shifts can’t happen in my code, because it’s a simple exposure compensation, so I wonder what you have seen.

You are annoying.

The curve application is called before the code where the gaussian pyramid blending and exposure blending happen. So the blending is not done on pure-log, pure-gamma or linearly encoded data, but on whatever comes out of the tone curve, which is usually an S-shaped curve with raised midtones.

About these chroma shifts, I’m not inventing them.

Original:

Base curve with exposure blending:

Filmic + tone equalizer:

(settings obviously cranked up for illustrative purposes). Blue becomes purple. Not good. Why? Theory violated. Period.