Exposure Fusion and Intel Neo drivers

I agree it could be better, but it is what it is. Things are never perfect, we all do the best we can. Glad you found your answer.

Perhaps they just didn’t want to deal with it or the bug reports that come with it. Good news is that it’s proper free software, so you can patch out the blacklisted module with ease.

You can override it with: darktable --conf opencl_disable_drivers_blacklist=yes -d opencl


Just use masked dodging and burning, like in the old days in the darkroom. If it worked then, it should work better now.

That’s off-topic. All the tests of Intel OpenCL we have done have failed, so the GPU codepath is not portable. Since dt is already optimized for CPU with SSE2, and SSE4/AVX/AVX2 support is incoming, there is no point investing effort to fix Intel OpenCL: it will not bring additional performance, and it does not prevent using dt on the CPU.

There is one rule with base curve: don’t use base curve.

I don’t know what enfuse does or how it does it, but the theory is pretty clear now. Use light transport models, i.e. encodings proportional to light emission energy.

Every time you convolve, you need to respect the conservation of energy. Convolving non-linear data breaks the conservation of energy, and every kind of weighted average (blurring or otherwise) is a convolution.
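
To make the energy argument concrete, here is a tiny numpy sketch (illustrative only, not darktable code) of the same box blur run on a linear radiance edge and on gamma-encoded values that are decoded afterwards: the gamma-space blur no longer averages physical energy, so the transition decodes darker than the true mean.

import numpy as np

linear = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # a hard edge in linear radiance
kernel = np.ones(3) / 3.0                            # box blur, i.e. a convolution

blur_linear = np.convolve(linear, kernel, mode="same")

gamma = 2.4
encoded = linear ** (1.0 / gamma)                    # gamma-encode, blur, decode back
blur_encoded = np.convolve(encoded, kernel, mode="same") ** gamma

print(blur_linear)     # samples next to the edge hold 1/3 and 2/3 of the energy
print(blur_encoded)    # the same samples decode to ~0.07 and ~0.38: energy is not conserved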

That’s utterly wrong, since all you are trying to do is remap the camera dynamic range to the display dynamic range. RGB values encode light emissions; you can’t treat them as black boxes of numbers. Please stop trolling off-topic and read https://medium.com/the-hitchhikers-guide-to-digital-colour


We see reports of crashes with Intel and OpenCL almost every second week, as the blacklist doesn’t work on Windows. So the rumor that it affects just a few people is wrong.


I intend to do so.

Let’s go back to the OP’s original comment:

Here’s the user perspective:
Darktable is working great
Hey, a new version of darktable!
Why is darktable now taking 4x as long for my workflow?

Most users are not going to do what I did and dig through git logs to figure out WHY darktable slowed down when they tried building from git. Even people who DO compile from source will likely initially assume they simply made a mistake when compiling by forgetting an important optional library (I did).

This increases the time I spend on a large number of images which I’ve exposed to preserve highlights from <20 seconds per image for a rough first pass to >10 minutes for something I’m happy with.

OK, I guess you opened a bit of a can of potentially off-topic words with the comment in your first post which I quoted (otherwise I would not have responded to this topic).

Define “failed”. On an i5-7200U on Ubuntu 19.04 with the distribution-packaged Neo, I see a nearly 4x speedup from CPU to GPU with no discernible visual penalty. In the discussions of at least one pull request (the one where caching was fixed), not a single person indicated they had an issue with the Neo drivers (at least on Linux), with the exception of the cache-invalidation issue itself.

dartkable_cpu_perf.txt (1.4 KB)

dartkable_neo_perf.txt (42.3 KB)

Then why does the module exist in darktable?

So the darktable team added a feature inspired by another program, specifically cited the program/algorithm, made a blog post about it - but implemented the algorithm in such a way that it’s impossible to get visual results even remotely resembling those of the algorithm they claimed as their inspiration.

Now the position is that not only should that feature not be fixed, because obtaining visually acceptable results violates some rule of mathematical purity, but that the feature shouldn’t really exist in the first place, nor should the module it was added to.

Anyone who points out the issues with said feature (despite the “slow and inefficient” original implementation of that feature having worked for them for years - the “slow and inefficient” approach being to export one JPEG, turn on the exposure module with +1 to +2 EV, export that, duplicate the exposure module with the same settings, export THAT, then feed everything to enfuse) is just a troll?

Given your statement made in your first post, you’re sounding REALLY hypocritical.

I apparently wasted my time whipping up a patch to make my darktable workflow vastly more efficient by fixing a feature that doesn’t work as it was intended to. It’s pretty clear it won’t get accepted if I finish up the OpenCL side and submit it, and any form of the patch that provides visually acceptable results will be rejected.

The new module I’m working on (remember, that was the topic of this thread) makes a dodging and burning mask from a guided filter in 50 ms. Every other method will ruin colours in unpredictable ways.
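
For reference, and not as darktable’s actual implementation, a minimal grey-scale guided filter sketch (after He et al.) gives an idea of the kind of edge-aware smoothing such a mask is built from; the radius and eps values here are placeholders:

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    # Fit a local linear model src ~ a * guide + b over box windows,
    # then smooth the per-pixel coefficients and re-apply them.
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    var_I = uniform_filter(guide * guide, size) - mean_I * mean_I
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

# Run self-guided on a log-luminance plane, this produces a smooth,
# edge-preserving exposure map usable as a dodging and burning mask.
log_lum = np.log2(np.random.rand(256, 256) + 1e-6)   # stand-in for an image’s luminance
mask = guided_filter(log_lum, log_lum, radius=16, eps=0.25)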

Because there are 10 years of legacy to stay compatible with. You don’t have to use every single module in the software. It will go away in the darktable 3.x branch if we decide to break compatibility with the 2.x branch (which would be sane, I guess).

You don’t get it, do you? Working on gamma-encoded data (aka display-referred) is what Photoshop, Lightroom and the whole legacy image-processing stack do. Guess what? It sucks at HDR! Why? Because it breaks light transport models, so it messes up colours and produces halos while blending. So, instead of patching rotten algos from another era, let’s embrace the future and get a chance to do things properly, in scene-referred space, with linear operators, where 18% or 50% are just regular luminances.

Because, yes, I have spotted halos in https://mericam.github.io/papers/exposure_fusion_reduced.pdf (p.6), and I told you they would happen even before looking at the paper (sure, they hide them to some extent by working on a multiscale pyramid, but that doesn’t make it right). Get your theory right, then unroll the algo, not the other way around.

Self-compiled or packaged dt?

I’m not a dt dev or user, but I’ve been here long enough to know that if you have a cogent patch to dt, it will be considered. You’re sparring with a person who has demonstrated just that in his efforts to re-orient dt, and others’ thinking, toward a linear processing pipeline. A lot of the thinking behind that has occurred in threads on this forum; you’ve jumped into that context with a few specific assertions without a real understanding of that context. That goes for the module performance, and very likely for the OpenCL considerations.

One thing that needs to happen is that this thread should be teased apart into its separate topics.


OK, I played with it. I can’t yet be certain whether I like the results from it or from a fixed enfuse workflow better. It’s still significantly more effort (adjusting 5-6 sliders vs. flipping a switch and twiddling one slider). In terms of potential theoretical visual improvements vs. effort required, I’m not a fan so far.

I’ve been using the enfuse approach for years, and I’ve been very happy with its results, however much you may hate aspects of the algorithm. It happens to have been “baked in” to the basecurve module, for better or for worse.

I’m not seeing haloing there. Unless you’re talking about the low-frequency artifacts they explicitly state for that particular image as being a known (rare) corner case.

The only time I’ve seen haloing with an exposure fusion algorithm derived from enfuse is with the current state of darktable master (with one example given in an image posted above). Guess what? That was the one which was entirely processed in linear space.

Self-compiled - for months I assumed it was a mistake on my part, but in reality it was the result of going from a release prior to Neo being blacklisted to one after.

The same thing will happen to any user who updates from an older prepackaged version of dt to one after the blacklist. All they’ll see as a user is that darktable is now significantly slower, at least on mobile Kaby Lake CPU/GPU combinations such as the i5-7200U with the Neo driver packaged with Ubuntu 19.04.

Disable the blacklist - boom, major performance increase. Users shouldn’t have to dig into darktablerc to make it perform well.

OK, I need to head out to enjoy today’s nice weather, but it sounds like that’s a good idea. Either a mod can split posts regarding enfuse to a different thread, or I’ll start a new thread when I get back.

No, the base curve is blending non-linear data, i.e. it applies the non-linear tone curve, then blends the “exposures”.

Except tone curves cannot preserve the chrominance of your pictures, and they destroy the linearity of the pipe too early. If you are willing to lose colours to avoid setting 4 more things, well, maybe stick to your camera JPEG.

What would “breaking compatibility” mean? Would I lose tons of already finished edits, or would legacy modules still be available for legacy images (e.g. if I have to re-export an image, maybe to get a version with the crop removed to be used in a given print layout)?


I’m certain they won’t break compatibility with old edits, that just isn’t a thing that can reasonably happen.


x^a * y^a = (xy)^a

Now, for fixed y, and x1 = r, x2 = g, x3 = b:

r^a * y^a = (ry)^a
g^a * y^a = (gy)^a
b^a * y^a = (by)^a

hey look, no change in the ratios of r/g/b whether multiplying by y inside the gamma relationship or by y^a outside

your mind = blown

You keep on ranting about chrominance shifts… Chrominance shifts that I’ve only seen when playing with your tone equalizer.

Undesired luminance changes? Maybe. But perceptually (as opposed to mathematically), averaging two values in gamma space gives a perception closer to the midpoint between those two values than averaging in linear space prior to converting to gamma. I need to do more digging, but this probably explains why (despite your insistence that blending in non-linear space will cause haloing) blending in linear space causes haloing not seen when blending/exposure-fusing in gamma-2.4 space.

The upper gradient is interpolation in linear space. The lower is interpolation in gamma space (specifically, gamma = 2.4) - and yes, I readily admit I’m currently not handling the linear segment at very low values in the sRGB transfer function (where the gamma outside that linear region is 2.4 but the average is 2.2-2.3).
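
For anyone who wants to reproduce that comparison, a sketch with assumed endpoint colours (the endpoints of the posted gradient aren’t stated) could look like this; the two rows differ mainly in how bright the midpoint appears, which is the perceptual point being argued:

import numpy as np

c0 = np.array([0.05, 0.10, 0.60])    # assumed linear RGB endpoints
c1 = np.array([0.90, 0.40, 0.10])
t = np.linspace(0.0, 1.0, 256)[:, None]
gamma = 2.4

grad_linear = (1.0 - t) * c0 + t * c1                   # interpolate in linear light
grad_gamma = ((1.0 - t) * c0 ** (1.0 / gamma)
              + t * c1 ** (1.0 / gamma)) ** gamma       # interpolate on encoded values, decode back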

BTW, where’s the chrominance shift you insist MUST be occurring in an operation such as the one shown in this image?

Yeah. Now try f(x)^a * f(y)^a, where f is a general transfer function defined implicitly by interpolation of nodes. What is it equal to?
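
A quick numeric sketch of both sides of this exchange, with made-up pixel values and curve nodes: a pure power law passes a global gain through as the same constant on every channel, while a curve defined by interpolated nodes does not.

import numpy as np

a = 1.0 / 2.4
rgb = np.array([0.08, 0.20, 0.55])        # made-up linear pixel
gain = 2.0                                # +1 EV push

power = lambda x: x ** a
print(power(rgb * gain) / power(rgb))     # the same factor (gain**a) per channel: ratios kept

# a generic transfer function defined implicitly by interpolation of nodes
nodes_x = np.array([0.0, 0.1, 0.3, 0.7, 1.0])
nodes_y = np.array([0.0, 0.2, 0.55, 0.9, 1.0])
curve = lambda x: np.interp(x, nodes_x, nodes_y)
print(curve(rgb * gain) / curve(rgb))     # a different factor per channel: ratios drift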

Or maybe I have more experience than you do on these matters and I know how to make these things break. You are one year too late for this conversation. Chrominance shifts can’t happen in my code, because it’s a simple exposure compensation, so I wonder what you have seen.

You are annoying.

The curve application is called before the code where the gaussian pyramid blending and exposure blending happen. So, the blending is not done on pure-log, pure-gamma, or linearly encoded data, but on whatever comes out of the tone curve, which is usually an S-shaped curve with raised midtones.

About these chroma shifts, I’m not inventing them.

Original:

Base curve with exposure blending:

Filmic + tone equalizer:

(settings obviously cranked up for illustrative purposes). Blue becomes purple. Not good. Why? Theory violated. Period.

So, let’s take a pixel from the red chair in an image posted by @ggbutcher in another thread.

Comparison of a single pixel after a simple +3 EV push in darktable vs. the tone equalizer’s attempt to raise shadows without blowing highlights.

In [15]: 85.8/47.1
Out[15]: 1.8216560509554138

In [16]: 72.8/23.5
Out[16]: 3.097872340425532

In [17]: 73.6/24.3
Out[17]: 3.0288065843621395

Pretty consistent with the red oversaturation I see visually.

As for the example you provided - since I haven’t posted my patch yet (it’s become clear to me that all of the stuff currently hardcoded in DT needs to be exposed in the UI, and I admit I suck at UI/UX stuff), your example just proved that an exposure fusion workflow in linear space provides poor results, since that’s what darktable is currently doing.

Except it isn’t a generic transfer function.

Since we’re blending images that are generated from the same image by an exposure shift, we get (for two exposures; it obviously gets a bit more difficult to show the math for three):

(1-w)*x^a + w*(c*x)^a

where c is a constant derived from the exposure shift and w is our weight. I’ll need to go through the derivation, but it should boil down to:

((1-w) + w*c^a) * x^a

i.e. a constant scale factor, for each of x = r, x = g, x = b - so the relative relationship between r, g, and b is preserved - see the gradient I posted as an example.

The one thing that is significantly different between doing this in linear space and in gamma space is the perceptual result of a 0.5 weight, as seen at the midpoint of the posted gradient. This perceptual difference is likely why performing the blending in linear space (current darktable master) gives such poor results.
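
A numeric check of the two-exposure formula above, with made-up values (this is not the darktable fusion code): blending the encoded pixel with an encoded exposure-shifted copy multiplies every channel by the same constant, so the channel ratios are untouched.

import numpy as np

a = 1.0 / 2.4
c = 4.0                                   # +2 EV shift in linear light
w = 0.35                                  # blend weight
rgb = np.array([0.06, 0.18, 0.45])        # made-up linear pixel

encoded = rgb ** a
blend = (1.0 - w) * encoded + w * (c * rgb) ** a     # = ((1-w) + w*c**a) * encoded

print(blend / encoded)    # identical value for r, g and b: ratios preserved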

Because this exposure fusion doesn’t happen in linear space at all, but after the base curve is applied, which is not linear and not even a pure power function. I tried to show you the code to justify my claim, but you still don’t understand. So it fails precisely because it does not happen in linear space. And you got your assumptions wrong because, like most open-source hackers, you treat RGB code values as random numbers and don’t look at the whole pipeline that goes from scene-linear light ratios to whatever broken transfer function you are trying to hack.

I mean, if you want to invalidate Parseval’s theorem, do it. You might even get a Fields medal for it. Until you do, I’m right and you are wrong. You don’t blend, blur, convolve, feather, or merge anything outside of a linear space, where RGB code values are proportional to light energy. Full stop.

There is absolutely no relationship preserved between R, G and B at the output of the base curve, because it’s not linear, and it happens before the exposure fusion. Once again.


In the current code, it does, and it looks like garbage. Which invalidates your claim that linear operation is the One True Way. We’ll get to why that is in a bit.

Sure it is.
linear_setting_fusion

A bunch of code not really relevant to a discussion of exposure fusion and how to make it operate better.

So, since you’re going to be highly insulting: you’re arrogant, overconfident, and repeatedly make claims that don’t stand up to reality. Over and over you say that operations in gamma space will cause chromaticity shifts and destroy colors, even when provided visual evidence that contradicts your claims. You are also apparently incapable of discussing item B (exposure fusion) if it has ever been, in any way, associated with item A (basecurve) without bringing in your prejudices against A. (I’m not the one who shoved exposure fusion into the basecurve module. If you have issues with them being in the same module, take that up with whoever did it, not with me.) You also seem to be ignoring the fact that Edgardo has been working on fixing the issues you’ve pointed out with the basecurve module - apparently you’re so prejudiced against it that you’ve ignored the RGB norms work? And yes, I know that the fusion pipeline currently isn’t using his fixes; making it do so is now first on my TODO list before any further work, even though I’ve been eliminating that issue as illustrated above.

Again:

No chromaticity shifts when interpolating between the two brightnesses, despite your repeated claims it is guaranteed to happen.

As to the luminance relationship at the “halfway” mark, you keep forgetting that human vision is NOT linear.

You can rant about theory all you want. Then you need to remember that while it’s science-heavy, photography is partially an art form. And thus human perception is important.

So let’s try perception. All three images here were generated using the EXACT same settings (shown above). Tell me which one was weighted in linear and blended in linear, which one was weighted in gamma-2.4 and blended in linear, and which one was weighted and blended in gamma-2.4:

There is absolutely no relationship preserved between R, G and B at the output of the base curve, because it’s not linear, and it happens before the exposure fusion. Once again.
Once again:
linear_setting_fusion

Exposure fusion and basecurve are only linked by the fact that someone jammed the functionality of one into the other. It’s well proven that the two can be decoupled, as shown in the image above. But you’re apparently incapable of preventing your prejudices against the prior state of basecurve (ignoring the recent work that’s gone into it) from causing you to froth at the mouth, see red, and repeatedly state BASECURVE BAD BASECURVE BAD BASECURVE BAD BASECURVE AND ANYTHING EVEN REMOTELY ASSOCIATED WITH IT BAD, even when provided direct evidence that your claims don’t hold true when you get down to the most important thing - visual results to human eyes.

If you’ve got a recommended easy method for MEASURING the chromaticity shift you swear must be happening (i.e. proving that it’s actually happening), I’m all ears. (HSV decomposition in GIMP sort of works, but the 8-bit intermediary causes quantization errors and it’s just tedious. I will say that using that method, I’m not seeing fusion with blending and weighting in gamma space cause chromaticity shifts, except in regions where the output luminance is near enough to black that quantization renders the result untrustworthy. But I’m amenable to a different measurement method.)
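
One possible answer to that question, offered as a sketch rather than an established tool: do the HSV comparison in floating point instead of through GIMP’s 8-bit decomposition. The sample pixel values below are made up.

import colorsys

def chroma_shift(rgb_before, rgb_after):
    # Both inputs are display-referred RGB triplets in [0, 1]; working in float
    # avoids the 8-bit quantization of an HSV decomposition in GIMP.
    h0, s0, _ = colorsys.rgb_to_hsv(*rgb_before)
    h1, s1, _ = colorsys.rgb_to_hsv(*rgb_after)
    dh = min(abs(h1 - h0), 1.0 - abs(h1 - h0))   # hue is circular, wrap around 1.0
    return dh * 360.0, s1 - s0                   # hue shift in degrees, saturation delta

print(chroma_shift((0.20, 0.35, 0.80), (0.28, 0.40, 0.78)))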

I’ve (tried) to split this into its own topic, since it was so far off the topic of the new-ish tone equalizer module that @anon41087856 is working on. I hope that thread can get back on track, since the module looks quite promising.

Now, about other things:

To be honest, you both have been fairly condescending towards each other through most of the posts. It’d be really awesome if you’d both just stop, and then, perhaps, you can get the information you’re after.


This certainly doesn’t mean anything, but I took the middle one (the brightest one), just applied a gamma 2.0 tone curve preset, and got this: