"Aurelien said : basecurve is bad"

let me not help this discussion by saying that i really like exposure fusion, too. and that it has nothing to do with opinionated discussions about how to name or where to place a curve module in the pipeline.

i really like bart wronski’s blog post about exposure fusion, and it also has webgl source code that is really fast.

in vkdt i use local laplacian pyramids for this purpose, but the implementation is not too different from exposure fusion.

6 Likes

Thanks for the link - it’s a good read. I also never realized Ryan Geiss was involved “under the hood” with pipeline optimizations in the official Google implementation.

If I recall correctly, the fusion implementation was yours? It was very close; the only flaw was that it missed a small detail buried deep in Mertens’ paper - or possibly the implementation was working as intended when you wrote it, and a later change elsewhere in darktable broke its design assumptions. (Essentially, Mertens’ paper assumes that inputs and outputs are already in a reasonably perceptually even space. Camera JPEG output was close enough to usually work OK, but feeding fusion with linear data leads to bad behavior.)
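To make that concrete, here is a minimal sketch (my own illustration, not the darktable code) of why the well-exposedness weight from Mertens’ paper misbehaves on linear data: the Gaussian is centred on 0.5, which matches mid-grey in a gamma-like encoding but not in scene-linear.

```python
import numpy as np

def well_exposedness(v, sigma=0.2):
    # Mertens-style weight: a Gaussian centred on mid-grey = 0.5,
    # which presumes a roughly perceptually even (gamma-like) encoding.
    return np.exp(-((v - 0.5) ** 2) / (2.0 * sigma ** 2))

linear_mid_grey = 0.18                            # scene-linear mid-grey
encoded_mid_grey = linear_mid_grey ** (1 / 2.2)   # crude gamma approximation, ~0.46

print(well_exposedness(linear_mid_grey))    # ~0.28: treated as badly exposed
print(well_exposedness(encoded_mid_grey))   # ~0.98: treated as well exposed
```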

Subsequent poking at the whole thing made me realize some additional things: the ideal place for it would be in the final stage of a node-based pipeline, where the two synthetic exposures could go through two separate paths (see the saturation modification in Google’s Night Sight enhancements - http://graphics.stanford.edu/papers/night-sight-sigasia19/night-sight-sigasia19.pdf - interestingly, Hasinoff shows up again; he was a co-author on the earlier local Laplacian paper cited by a few here). If I recall correctly, what you’re working on is node-based and could branch a pipeline with a later merge?

1 Like

right, i think i implemented that. i suppose darktable’s pipeline was always too rigid and now it’s messy to work with. probably one of the sources of all the discussions around curves.

the local laplacian pyramid in the fast implementation proceeds much like exposure fusion: apply curves (with emphasis on shadows or highlights) to the image in a few variants, then use a laplace pyramid interpolation scheme to merge. in the default vkdt pipeline i have it after the regular filmcurve, so not in linear space (though it kinda works in both places).
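roughly, the merge step looks like this (a quick numpy/opencv sketch of the general scheme described above - apply a few curve variants, then blend laplacian pyramids by per-pixel weights - not the vkdt or darktable code):

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = []
    for i in range(levels - 1):
        h, w = gp[i].shape[:2]
        lp.append(gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(w, h)))
    lp.append(gp[-1])                          # coarsest level is kept as-is
    return lp

def fuse_variants(variants, weights, levels=6):
    # variants: list of float32 HxWx3 images (the curve variants of one frame)
    # weights:  list of float32 HxW per-pixel weight maps, one per variant
    wsum = np.sum(weights, axis=0) + 1e-12
    weights = [w / wsum for w in weights]      # normalise weights per pixel
    fused = None
    for img, w in zip(variants, weights):
        lp = laplacian_pyramid(img, levels)
        wp = gaussian_pyramid(w, levels)
        blended = [l * g[..., None] for l, g in zip(lp, wp)]
        fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]
    out = fused[-1]                            # collapse the blended pyramid
    for lvl in reversed(fused[:-1]):
        hh, ww = lvl.shape[:2]
        out = cv2.pyrUp(out, dstsize=(ww, hh)) + lvl
    return out
```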

i hadn’t thought about the possibility you mention, forking the processing early and merging at the end. probably due to performance woes. light processing such as colour transform or exposure isn’t too expensive though (vkdt would do full-res raw buffer processing like this in half a millisecond).

indeed in the node graph it’s very easy to output multiple buffers from a node or just connect the same output to two branches and recombine at any later point. there’s quite a bit of that for image alignment (also same/overlapping authors).

Yeah, your blog post at compressing dynamic range with exposure fusion | darktable was honestly one of the things that caused me to start looking more heavily at darktable. However, at least as of roughly 2019-ish, any result I got from the implementation was vastly inferior to what’s in your post, and vastly inferior to manually creating 2-3 separate exposures and feeding them to enfuse (sloooow!), hence me poking at the implementation. I suspect something outside of the module changed in a way that impacted the module, because your results in the blog post indicate a module operating in the way that I would expect.

I’ll have to take a look at your implementation; I know at least one other person was experimenting with it. Another algorithm Hasinoff had an involvement in. :slight_smile:

As to “early” - I would not do it too early, it should be pretty close to the end of the pipeline since it is fundamentally a “compress dynamic range to fit into a display” transform. Pretty much exposure and (potentially) saturation, and then the curve, then fuse, and that should be the last step in the pipeline. The flaw in doing it within basecurve is that it’s tied to basecurve - can’t use Jandren’s sigmoid stuff, etc. The flaw in edgardoh’s implementation was that it was entirely curve-independent. It still worked pretty well, but I suspect some of the corner cases where postprocessing with enfuse was still doing vastly better were due to curve interactions. I have one particular shot from a few years ago that I want to revisit - it’s one where no approach I’ve ever tried has come close to what Google’s pipeline got for a shot of the Epcot globe at night.

re: early: i meant branch early, merge at the end. so you would have a few completely independent more or less full pipelines running in parallel. because exposure would likely play a role in this, and you’d normally do that early or at least in linear. other than that i agree, fusing is totally an output transform, and when it comes to pipelines that run in parallel: the shorter the better.

1 Like

Having completely independent full pipelines would, obviously, be bad for performance. A lot of the operations should, in theory, be exposure-independent, and even for some that aren’t, in many cases it might be beneficial to just have them operate on a “rough” exposure for the majority of the pipeline.

In a non-fusion pipeline, it’s usually: curve (linear in, nonlinear out), ODT, done.
For a fusion split pipeline, it would be: split, exposure shift (in linear space), possibly exposure-dependent color adjustments (in linear, see Google’s NS paper I linked), curve (linear in, nonlinear out), ODT, fuse.

Things like demosaic, white balance, sharpening, scaling, lens correction, etc. would all be done prior to the split, since most should be exposure-independent; even those that are exposure-dependent might behave strangely in a fusion pipeline and are best applied pre-split on a “rough approximation” exposure, unless you want to sacrifice performance to do something really funky.
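For illustration, here is a rough sketch of that ordering; every stage here is a hypothetical placeholder (identity or toy stand-in), not an existing darktable/vkdt module:

```python
import numpy as np

# Placeholder stages, purely to show the ordering of a fusion split pipeline.
demosaic = white_balance = lens_correct = scale = sharpen = lambda img: img
def apply_exposure(img, ev): return img * (2.0 ** ev)                    # linear gain
def exposure_dependent_color(img, ev): return img                        # e.g. Night Sight-style saturation tweak
def tone_curve(img): return img / (img + 1.0)                            # stand-in curve: linear in, nonlinear out
def output_transform(img): return np.clip(img, 0.0, 1.0) ** (1 / 2.2)    # stand-in ODT
def fuse(variants): return np.mean(variants, axis=0)                     # stand-in for the pyramid merge

def render_fusion(raw, exposures=(-1.0, 0.0, +2.0)):
    # shared, roughly exposure-independent front end (pre-split)
    img = sharpen(scale(lens_correct(white_balance(demosaic(raw)))))

    variants = []
    for ev in exposures:                          # split
        v = apply_exposure(img, ev)               # exposure shift, in linear
        v = exposure_dependent_color(v, ev)       # optional, still in linear
        v = tone_curve(v)                         # curve: linear in, nonlinear out
        v = output_transform(v)                   # ODT
        variants.append(v)

    return fuse(variants)                         # the very last step: merge the branches
```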

Edit: BTW, in Mr. Wronski’s demo, the mip slider (in the dt implementation, this is the limit on the number of levels) gives a great example of how performance optimizations can affect haloing. The default enfuse implementation will (except in extreme circumstances) use as many levels as necessary to get to a single-pixel pyramid tip. In Wronski’s example, and in dt’s implementation, there’s a performance optimization to stop early. You can see that this optimization leads to halos, with the halos becoming sharper/more noticeable the earlier in the pyramid you stop.
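For reference, a full pyramid reaches a single-pixel tip after roughly log2 of the longer image dimension levels; a quick sketch (my own, illustrative only) of the difference between that and an early stop:

```python
import math

def pyramid_levels(width, height, stop_early_at=None):
    # Levels needed to reach a single-pixel pyramid tip. A 'stop_early_at' cap
    # mimics the performance optimization mentioned above; the earlier you stop,
    # the larger the lowest-frequency band that never gets blended per-pixel,
    # which is where the halos come from.
    full = int(math.ceil(math.log2(max(width, height)))) + 1
    return full if stop_early_at is None else min(full, stop_early_at)

print(pyramid_levels(6000, 4000))      # 14 levels for a 24 MP frame
print(pyramid_levels(6000, 4000, 8))   # capped: faster, but more prone to halos
```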

Something I’m a little unsure about as a complete novice (I picked up my first camera just over a week ago at the time of writing) is that the Filmic RGB module is enabled by default. I’m all for the scene-referred workflow, but my knowledge of image processing is limited to the little bit of contact I’ve had with Photoshop and GIMP.

The manual’s Workflow introduction covering processing recommends a few steps, but as a novice I’m unsure whether the manual is referring to a “blank slate” workflow where the Filmic RGB module hasn’t been enabled yet, or whether I should stick to the “defaults” that get applied. I’m still in the process of training my eye to know how to achieve what I want, but the extra bit of freedom is leaving me slightly more frustrated when I end up “chasing my tail” without knowing what I’m really doing wrong.

1 Like

@LazyGameDevZA The filmic module is enabled by default in the scene-referred workflow and, if you are following the guide in the manual, filmic should be enabled from the start.

2 Likes

Thanks for the clarification @elstoc. I’d suggest mentioning that in the manual to help a bit with confusion for newbies like me :slight_smile:

1 Like

Done

4 Likes

@LazyGameDevZA Good suggestion and welcome to the forum.

1 Like

This video targets exactly that… it might help sort it out… Rapidfire Fix #3 - Demystifying Darktable Import and Preview - YouTube

1 Like

If you are going to blend exposures, at least do it based on something physiological: https://repositorium.sdum.uminho.pt/bitstream/1822/17831/1/ledda_afrigraph.pdf

We expect stuff like pigment bleaching in highlights, and that Laplacian pyramid stuff just looks like forced textures where the Naka-Rushton model clearly says gradients should be compressed.
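For anyone curious what that model looks like, here is a minimal sketch of the Naka-Rushton response function (parameter values are illustrative, not taken from the linked paper):

```python
import numpy as np

def naka_rushton(L, sigma, n=0.73):
    # Naka-Rushton photoreceptor response: R/Rmax = L^n / (L^n + sigma^n).
    # L is luminance, sigma the semi-saturation constant (adaptation level),
    # n the sensitivity exponent (values around 0.7-1 are commonly used).
    Ln = np.power(L, n)
    return Ln / (Ln + np.power(sigma, n))

# The response compresses smoothly toward 1 as luminance rises far above the
# adaptation level, which is the gradient compression referred to above.
L = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
print(naka_rushton(L, sigma=1.0))
```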

2 Likes

Interesting, and timely for me. I’ve been shooting family pictures of late in high-DR situations, interior/shade with bright daylight backgrounds. I got to taking a good look at the scenes and really, when I concentrate on the subjects in the interiors, the backgrounds are indeed “pastelized”. So, with those situations I’ve just started to shape the curve shoulder to keep just some definition and not worry about making it look as color-rich and contrasty as the interior. Here’s an example:

Just like I remembered, warm interior with washed-out exterior. I shot a colorchecker under the interior light source; I’m going to pick at it later to figure out where it would want white balance, but I’m not inclined to change the current rendition.

7 Likes

Glenn: wouldn’t that depend on whether you’d prefer a clinical white or want to convey a certain mood?

Have fun!
Claes in Lund, Sweden

Actually, I’m starting to pay more specific attention to what I actually see in the scene. If the light on the subject is warm, I want to keep that notion rather than neutralize it…

2 Likes

@Claes I think that is a key point…technical or look…I think some will still say do a correct WB and get the look from other tools but I’m a rule breaker :slight_smile:

@ggbutcher
I find the same… in DT I often use the CC module to get a “neutralized WB”, then I just back off the chroma slider and slowly remove it, or slide it all the way to 0 and slowly bring it back until it’s pleasing to the eye… sometimes I use skin tones too and correct those to my liking and let that dictate the correction for lighting. In your photo I would try something like this on those plates that are directly hit by the light, or on that lovely young lady’s skin tones, making sure she captures the light but doesn’t look alien after my tinkering :slight_smile:

You know, the CAT16 model has a degree of adaptation built in, and the dt libs have it, but the color calib module sets it to 1 (full adaptation) all the time. It’s 8 lines of code to make that hard-set 1 a user-defined parameter.
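For context, here is a sketch of the idea - the published CAT16 matrix with a simple von Kries-style blend exposing the degree of adaptation D as a parameter. This is my own simplified illustration, not darktable’s color calibration code:

```python
import numpy as np

# CAT16 matrix (XYZ -> cone-like RGB), as published with CAM16.
M16 = np.array([[ 0.401288, 0.650173, -0.051461],
                [-0.250268, 1.204414,  0.045854],
                [-0.002079, 0.048952,  0.953127]])

def cat16_adapt(xyz, xyz_src_white, xyz_dst_white, D=1.0):
    # Von Kries-style adaptation in the CAT16 space with an explicit degree of
    # adaptation D: D=1 fully corrects to the destination white (what a
    # hard-coded 1 amounts to), D=0 leaves the colour untouched.
    rgb    = M16 @ np.asarray(xyz, dtype=float)
    rgb_ws = M16 @ np.asarray(xyz_src_white, dtype=float)
    rgb_wd = M16 @ np.asarray(xyz_dst_white, dtype=float)
    gain = D * (rgb_wd / rgb_ws) + (1.0 - D)   # blend full scaling with identity
    return np.linalg.inv(M16) @ (gain * rgb)

# CAM16 also defines a default D from the adapting luminance L_A and surround F:
#   D = F * (1 - (1/3.6) * exp((-L_A - 42) / 92)), clamped to [0, 1]
illum_A = np.array([109.85, 100.00, 35.585])   # warm scene illuminant (A)
d65     = np.array([95.047, 100.00, 108.883])  # target white (D65)
pixel   = np.array([45.0, 40.0, 25.0])
print(cat16_adapt(pixel, illum_A, d65, D=1.0))  # fully neutralised
print(cat16_adapt(pixel, illum_A, d65, D=0.7))  # keeps some of the warm cast
```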

1 Like

I do love the module, and honestly I don’t often have to mess with it other than for using the other tabs or your new color match features. But I do sometimes flip the CAT to custom after the initial correction and make small tweaks on the chroma to boost or pull back how the cast is corrected… might be blasphemy… many times I just go back to where it was, but sometimes a small tweak just looks better to my eyes… some interaction of technical and perceptual in my brain :slight_smile:

Make it so. :wink::wine_glass:

4 Likes