"Aurelien said : basecurve is bad"

… but learn from it?

I don’t think that this kind of tone leads to constructive discussion.

(I am assuming that you are aware that there is a lot of discussion and work that comes between someone proposing a feature with a screenshot and an actual implementation in a stable release, so I won’t expand on that here).

2 Likes

Yes, and such work was being done by myself and Edgardo. Then Aurelien called in his rabid attack dog, who came in and started making (clearly false) assumptions about how the thing was intended to work or actually worked, and outright ignored unambiguous proof that his primary complaint about the module (that it sat too early in the pipeline) didn’t actually hold, because modules could be moved.

Do you think new module: exposure fusion by edgardoh · Pull Request #2846 · darktable-org/darktable · GitHub , which was his very first comment on the PR, is constructive? You’ll notice I tried to continue to be constructive, up to the point where it became pointless - new module: exposure fusion by edgardoh · Pull Request #2846 · darktable-org/darktable · GitHub

(For reference, if someone had suggested moving the default position of basecurve later as part of the process, as opposed to s**ting all over it because it was currently too early, I would have been perfectly fine with that, because it clearly worked better.)

I tried as hard as I could to have a constructive discussion with Edgardo and Pascal, but Aurelien is incapable of such discussion.

2 Likes

I have no horse in this race; I was just responding to your comment. I am sorry to hear that other discussions had other issues, but that’s not a good reason to spoil this one.

Apparently you guys can’t take a hint, so let’s be clear: drop the exposure fusion thing, it’s in the past, or we can lock this thread.

2 Likes

I will just share one experience as a photography and imaging tutor. I was teaching a new class of students with varying abilities to use DT. We opened an image using base curve as the starting point and made simple adjustments. We took a snapshot. We then reopened that image with no base curve or filmic applied. I taught the students how to create their own curve, which they generally found better than the base curve. Finally we opened the image in Filmic V3, and with just a few simple tweaks 100% of the students preferred the look created by Filmic. Since then, Filmic V4 and V5 have only made the job simpler. I can’t wait to try V6. I especially love the ability in V5 to adjust contrast without clipping the extreme highlights or shadows. I also liked the improved saturation controls that came with V4.

6 Likes

Do you use Windows or Linux…I posted a portable build for Windows with V6 if you want to try it…no install required…

I am using Windows for my edits, as I have a SpyderX-calibrated 43 inch monitor attached to my laptop. I have a Linux desktop which I wish to make my editing computer, but I have so far failed to get the SpyderX to work on Linux, as I am a bit of a dummy when it comes to Linux. I am trying to walk away from Windows eventually, as I like the philosophy of the Linux world. Where will I find your V6 download? I am currently running a 3.9 install from a couple of weeks ago. Is V6 planned for inclusion in 3.9 anytime soon?

If you want help, start a new thread. You should just install DisplayCAL and plug it in.

I am not sure… the last comments were a couple of weeks old.

For now you can try it. I think some changes have been made since I did this… my PC is dead, so I am not set up to build and can’t update beyond this until it arrives… but it should give you some idea. Just unzip it (even in your downloads directory) and run it using the little batch file, which keeps everything local to that directory.

You are kidding, right?

Exposure fusion mixes local and global tone mapping using a shitty Gaussian blur that has been proven many times to produce halos, and it doesn’t even let the user define the radius.

The only local tone mapping I use is tone EQ with its exposure-invariant guided filter, or simple dodging and burning, again with masks feathered by a guided filter.
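(For reference, the distinction at stake, sketched in numpy; this is not darktable’s code, just an illustration of blind blurring versus edge-aware feathering of a mask:)

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def feather_gaussian(mask, radius):
    # plain gaussian blur: ignores image edges, so the mask bleeds
    # across them and produces halos
    return gaussian_filter(mask, radius)

def feather_guided(mask, guide, radius, eps=1e-3):
    # guided filter (He et al.): feathers the mask while following the
    # edges of the guide image (e.g. the luminance channel)
    size = 2 * radius + 1
    mean_I  = uniform_filter(guide, size)
    mean_p  = uniform_filter(mask, size)
    corr_Ip = uniform_filter(guide * mask, size)
    var_I   = uniform_filter(guide * guide, size) - mean_I ** 2
    a = (corr_Ip - mean_I * mean_p) / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```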

Everything else is bullshit and you know it. The proofs you made up were carefully chosen examples; in the general case it looks like shit.

No.

darktable had colour problems that could not be solved in the framework it used because the problems were actually the framework itself. These problems were made obvious by modern cameras used in harsh conditions that challenged the pipeline more than before (aka pulling shadows like never before). Technical solutions have been proposed to fix these problems. Fixing a buggy framework meant changing the framework itself.

Many users never witnessed those problems. To them, darktable just got more complicated for no good reason and it’s almost impossible to reason with them and try to explain why things are better now. We just broke their toy.

What’s more important is that image processing has always been hard and difficult to understand. But the easy toys managed to keep all that hidden, meaning very few people got a chance to understand what really happens when they push sliders.

The problem is that the new framework undoes some core assumptions of the previous one, assumptions that were never explicitly stated: things like grey = 50%, white = 100%, ergo never clip highlights, always pull the middle of the tone curve.
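(A made-up numeric example of what that assumption shift means in practice:)

```python
# hypothetical pixel value, 2 EV above diffuse white in scene-linear terms
linear_pixel = 4.0

# display-referred assumption: white = 100% = 1.0, so the value has to be
# clipped (or squashed) before any curve can act on it
display_referred = min(linear_pixel, 1.0)   # -> 1.0, highlight detail gone

# scene-referred: the value stays unbounded through the pipeline until the
# output transform (filmic, sigmoid, ...) maps it into display range
scene_referred = linear_pixel               # -> 4.0, detail still available
```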

Again, it’s almost impossible to try to explain how the assumptions have changed and what it actually changes in practice, since those assumptions were never clear or known before.

It’s easy to adapt to things you understand. But being unable to adapt to something hard to understand probably means you did not really understand things before either. You just had muscle memory.

Scene-referred is more simple. It removes intermediate layers that were broken. It requires fewer modules to achieve the same result. It better splits apart colour properties (hue, chroma, lightness, brightness, saturation) for independent control, which by the way was grounded in darktable’s design from day one (hence the use of CIE Lab).

People don’t want to understand that the laziness they could afford when shooting 8 EV pictures and feeding them into an 8 EV image pipeline is not affordable anymore with their 14 EV cameras.

People don’t want to understand how colour works, past the HSL sliders. People don’t want to understand that having used GUIs for decades, they still don’t get what they are doing. And now they are presented the bill, and they don’t like it.

My advice: try watercolour. Only then will you find out for yourself that more simple does not mean more easy.

13 Likes

One thing I truly appreciate about the scene-referred changes is that they prompted me to learn. Through these changes, their surrounding discussions, and not least of all Aurelien’s many explanations, I got a chance to think through so many topics I didn’t know were important. Now I understand so much more about image processing.

And that helps me not just in Darktable, but in any tool. For that, I am even more grateful than for the changes themselves, even though they are awesome, too. Thank you!

8 Likes

What genuine bugs are you referring to?

let me not help this discussion by saying that i really like exposure fusion, too. and that it has nothing to do with opinionated discussions about how to name or where to place a curve module in the pipeline.

i really like bart wronski’s blog post about exposure fusion, and it also has webgl source code that is really fast.

in vkdt i use local laplacian pyramids for this purpose, but the implementation is not too different from exposure fusion.

6 Likes

Thanks for the link - it’s a good read. I also never realized Ryan Geiss was involved “under the hood” with pipeline optimizations in the official Google implementation.

If I recall correctly, the fusion implementation was yours? It was very close; the only flaw was that it missed a small detail buried deep in Mertens’ paper - or possibly, when you wrote it, the implementation worked as intended and a later change elsewhere in Darktable broke its design assumptions. (Essentially, Mertens’ paper assumes that inputs and outputs are already in a reasonably perceptually even space. Camera JPEG output was close enough to usually work OK, but feeding fusion with linear data leads to bad behavior.)
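For illustration, this is roughly what that buried detail amounts to - a numpy sketch of the well-exposedness weight with my own made-up names, not darktable’s actual code:

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Mertens-style weight: favour pixels near mid-grey (0.5).

    Implicitly assumes img is display/perceptually encoded in [0, 1].
    """
    return np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))

# scene-linear mid-grey sits around 0.18, not 0.5, so feeding linear data
# makes the weight treat correctly exposed pixels as "too dark":
linear_grey = 0.18
encoded_grey = linear_grey ** (1.0 / 2.2)   # rough display encoding

print(well_exposedness(np.array([linear_grey, encoded_grey])))
# ~0.28 for the linear value vs ~0.98 for the encoded one
```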

Subsequent poking at the whole thing made me realize some additional things: the ideal place for it would be the final stage of a node-based pipeline, where the two synthetic exposures could go through two paths (see the saturation modification in Google’s Night Sight enhancements - http://graphics.stanford.edu/papers/night-sight-sigasia19/night-sight-sigasia19.pdf - interestingly, Hasinoff shows up again; he was a co-author on the earlier local laplacian paper cited by a few here). If I recall correctly, what you’re working on is node-based and could branch a pipeline with a later merge?

1 Like

right, i think i implemented that. i suppose darktable’s pipeline was always too rigid and now it’s messy to work with. probably one of the sources of all the discussions around curves.

the local laplacian pyramid in the fast implementation proceeds much like exposure fusion: apply curves (with emphasis on shadows or highlights) to the image in a few variants, then use a laplace pyramid interpolation scheme to merge. in the default vkdt pipeline i have it after the regular filmcurve, so not in linear space (though it kinda works in both places).
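roughly the textbook merge, sketched here in numpy for illustration rather than the actual vkdt shaders (single-channel buffers, no attempt at speed):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gauss_pyr(img, levels):
    # blur-and-decimate gaussian pyramid
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(gaussian_filter(pyr[-1], 1.0)[::2, ::2])
    return pyr

def laplace_pyr(img, levels):
    # detail (laplacian) pyramid: each level minus its upsampled parent
    g = gauss_pyr(img, levels)
    lp = [g[i] - zoom(g[i + 1], 2.0, order=1)[:g[i].shape[0], :g[i].shape[1]]
          for i in range(levels - 1)]
    return lp + [g[-1]]

def fuse(variants, weights, levels=6):
    # blend: gaussian pyramid of (normalised) weights times laplacian
    # pyramid of the curve variants, summed per level
    norm = np.sum(weights, axis=0) + 1e-12
    out = [0.0] * levels
    for img, w in zip(variants, weights):
        lp = laplace_pyr(img, levels)
        gw = gauss_pyr(w / norm, levels)
        out = [o + l * g for o, l, g in zip(out, lp, gw)]
    # collapse: upsample the coarse result and add the detail back in
    res = out[-1]
    for lvl in reversed(out[:-1]):
        res = zoom(res, 2.0, order=1)[:lvl.shape[0], :lvl.shape[1]] + lvl
    return res
```

the per-variant weights (contrast, saturation, well-exposedness in mertens’ formulation) would be computed before calling fuse().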

i hadn’t thought about this possibility you mention, fork the processing early and merge in the end. probably due to performance woes. light processing such as colour transform or exposure aren’t too expensive though (vkdt would do full res raw buffer processing like this in half a millisecond).

indeed in the node graph it’s very easy to output multiple buffers from a node or just connect the same output to two branches and recombine at any later point. there’s quite a bit of that for image alignment (also same/overlapping authors).

Yeah, your blog post at compressing dynamic range with exposure fusion | darktable was honestly one of the things that caused me to start looking more heavily at darktable. However, at least as of roughly 2019-ish, any result I got from the implementation was vastly inferior to what’s in your post, and vastly inferior to manually creating 2-3 separate exposures and feeding them to enfuse (sloooow!), hence my poking at the implementation. I suspect something outside of the module changed in a way that impacted it, because your results in the blog post indicate a module operating the way I would expect.

I’ll have to take a look at your implementation; I know at least one other person was experimenting with it. Another algorithm Hasinoff had a hand in. :slight_smile:

As to “early” - I would not do it too early; it should be pretty close to the end of the pipeline, since it is fundamentally a “compress dynamic range to fit into a display” transform. Pretty much exposure and (potentially) saturation, then the curve, then fuse, and that should be the last step in the pipeline. The flaw in doing it within basecurve is that it’s tied to basecurve - you can’t use Jandren’s sigmoid stuff, etc. The flaw in edgardoh’s implementation was that it was entirely curve-independent. It still worked pretty well, but I suspect some of the corner cases where postprocessing with enfuse still did vastly better were due to curve interactions. I have one particular shot from a few years ago that I want to revisit - it’s one where no approach I’ve ever tried has come close to what Google’s pipeline got for a shot of the Epcot globe at night.

re: early: i meant branch early, merge at the end. so you would have a few completely independent more or less full pipelines running in parallel. because exposure would likely play a role in this, and you’d normally do that early or at least in linear. other than that i agree, fusing is totally an output transform, and when it comes to pipelines that run in parallel: the shorter the better.

1 Like

Having completely independent full pipelines would, obviously, be bad for performance. A lot of the operations should, in theory, be exposure-independent, and even for those that aren’t, in many cases it might be beneficial to just have them operate on a “rough” exposure for the majority of the pipeline.

In a non-fusion pipeline, it’s usually: curve (linear in, nonlinear out), ODT, done.
For a fusion split pipeline, it would be: split, exposure shift (in linear space), possibly exposure-dependent color adjustments (in linear, see Google’s NS paper I linked), curve (linear in, nonlinear out), ODT, fuse.

Things like demosaic, white balance, sharpening, scaling, lens correction, etc. would all be done prior to the split, since most should be exposure-independent; even those that are exposure-dependent might behave strangely in a fusion pipeline and are best used pre-split on a “rough approximation” exposure, unless you want to sacrifice performance to do something really funky.
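A sketch of that structure (hypothetical function names standing in for the real modules, assuming linear float buffers):

```python
import numpy as np

def fusion_split_pipeline(raw, develop, curve, odt, fuse, ev_shifts=(0.0, 2.0)):
    """Hypothetical shape of the pipeline: shared work once, then one branch
    per synthetic exposure, with exposure fusion as the very last step."""
    # shared, mostly exposure-independent work done once on a rough exposure:
    # demosaic, white balance, lens correction, sharpening, scaling, ...
    linear = develop(raw)

    branches = []
    for ev in ev_shifts:
        b = linear * (2.0 ** ev)   # exposure shift in linear space
        # optionally exposure-dependent colour tweaks here (cf. Night Sight)
        b = curve(b)               # curve: linear in, nonlinear out
        b = odt(b)                 # output/display transform
        branches.append(b)

    return fuse(branches)          # fuse the branches to fit the display
```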

Edit: BTW, in Mr. Wronski’s demo, the mip slider (in the dt implementation, this is the limit on the number of levels) gives a great example of how performance optimizations can affect haloing. The default enfuse implementation will (except in extreme circumstances) use as many levels as necessary to get down to a single-pixel pyramid tip. In Wronski’s example, and in dt’s implementation, there’s a performance optimization to stop early. You can see that this optimization leads to halos, with the halos becoming sharper/more noticeable the earlier in the pyramid you stop.
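To make the difference concrete (the exact cutoffs in dt and enfuse may differ, this just shows the orders of magnitude):

```python
import math

def full_pyramid_levels(width, height):
    # levels needed to shrink the image down to a single-pixel tip,
    # which is roughly what enfuse does by default
    return int(math.ceil(math.log2(max(width, height)))) + 1

print(full_pyramid_levels(6000, 4000))   # 14 levels for a 24 MP frame

# stopping after, say, 6 levels leaves the coarsest blend on a ~188x125
# buffer, so low-frequency weight transitions can never be spread wider
# than that, and the leftover seams show up as halos at full resolution
```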