Better highlights with enfuse

I would normally agree with this, but enfuse is a special case here. It’s unique in that it’s about the only “multiple LDR in, tonemapped LDR out, no intermediary” solution out there. Enfuse actually often performs poorly (sometimes EXTREMELY poorly) with TIFF input, especially “typical” TIFFs, i.e. 16-bit linearly encoded ones. Feed 16-bit linear TIFFs to enfuse and you wind up with the same problem darktable’s built-in fusion has.

Enfuse is usually the very last step in the workflow, so your concerns about 8-bit JPEGs being unsuitable for further editing do not apply - there is no further editing. Anyone doing further editing should not be using enfuse; they should be doing something like feeding multiple bracketed RAWs to hdrmerge. If they REALLY need to generate an editing-suitable HDR intermediate from multiple LDR inputs, enfuse is not the tool - LuminanceHDR or another implementation of Debevec’s or Robertson’s algorithm is needed. (Warning: in my experience, OpenCV’s Robertson implementation is vastly inferior to LuminanceHDR’s in most scenarios, even “easy” ones like performing response recovery on a simple smooth gradient, which should be the easiest case for a response recovery algorithm to handle…)

Looking at the one example provided by the OP:
Exposure weight of 0 is not suitable for this use case. The only scenario where you would ever want an exposure weight of 0 is focus stacking (in which case you want contrast weight and nothing else). In fact, looking at the command provided, that command is only suitable for focus stacking.
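To make the distinction concrete, here’s a hedged sketch of the two weight sets - exposure fusion vs. focus stacking. Filenames are placeholders, and the commands are only printed here (not executed) so the flags are easy to compare; check `enfuse --help` before trusting the exact defaults:

```shell
# Exposure fusion: exposure weight dominates, contrast weight stays at 0
# (roughly enfuse's current defaults).
fusion='enfuse --exposure-weight=1 --saturation-weight=0.2 --contrast-weight=0 -o fused.jpg bracket_*.jpg'

# Focus stacking: contrast weight only, plus a hard mask so every output
# pixel comes from exactly one input frame.
stack='enfuse --exposure-weight=0 --saturation-weight=0 --contrast-weight=1 --hard-mask -o stacked.tif stack_*.tif'

printf '%s\n%s\n' "$fusion" "$stack"
```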

The “highlights” input may still have been too overexposed, potentially due to the RAW itself being overexposed (clipped highlights).

I’ll try and dig up my Christmas tree example, but the workflow was:
Feed bracketed shots to hdrmerge to generate a float32 HDR DNG for input to RawTherapee. (Note: when I dig up my example, there will be artifacts from this - I’m working on trying to fix those bugs in hdrmerge; there are documented open issues relevant to the failure. In fact, you reminded me that I need to see if filebin is accepting new submissions again so I can upload example raw sequences that trigger the bugs…)

Perform color/etc processing in rawtherapee

Generate a sequence of exposure-shifted outputs using a “typical” tone curve from RT - at this point 8-bit sRGB JPEG is fine unless the desired final output is a wider gamut.
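For the hdrmerge step above, HDRMerge can also run headless. A rough sketch of the invocation as I remember it - flags may be off, so check `hdrmerge --help`; filenames are placeholders and the command is just printed here:

```shell
# -o    output DNG name
# -b 32 store the merged data as 32-bit float (16/24/32 are the choices,
#       if I recall the option correctly)
cmd='hdrmerge -o merged.dng -b 32 IMG_0001.CR2 IMG_0002.CR2 IMG_0003.CR2'
echo "$cmd"
```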

95%+ of the time, RawTherapee’s “dynamic range compression” tool is more than sufficient (it probably would be in this particular shot). In the cases where it isn’t (so far, in my experience, colored LED lighting whose color you WANT preserved), feed those exposure-shifted JPEGs to enfuse. So far the defaults are usually fine for me; sometimes I’ll bump up saturation weight, but Christmas tree lights are an atypical use case here. In general, if there’s a contrast issue (highlights pulled down too much, shadows pulled up too much), I adjust the inputs to enfuse before I’ll adjust the weights. For example, if the highlights are too dim, I’ll remove the most negatively exposure-shifted (dimmest) JPEG from the enfuse input. Note that in my initial adventures with exposure fusion, I considered saturation weight to be unnecessary/pointless, but I was wrong - since most tone curve implementations will boost saturation in the high-contrast midtones, saturation weight is relevant.
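A sketch of that “adjust the inputs, not the weights” step. All filenames are hypothetical, and the enfuse calls are only printed rather than run:

```shell
# Exposure-shifted JPEGs, assumed to sort dimmest -> brightest:
inputs="ev-2.jpg ev-1.jpg ev0.jpg ev+1.jpg"

# First pass: enfuse defaults over everything.
echo "enfuse -o fused.jpg $inputs"

# If the highlights come out too dim, drop the dimmest frame and retry:
trimmed=$(printf '%s\n' $inputs | tail -n +2 | tr '\n' ' ')
echo "enfuse -o fused.jpg $trimmed"
```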

I’ll try and take a crack at the OP’s original raw image tonight.


…maybe someone will turn up their noses because it’s a result from paid software, but this is the result you get by default, without any correction, with Aurora HDR. I always use it because I take interior photos for work, and the results are absolutely natural as long as I don’t pump up colors and effects - things I never do, as I look for naturalness in the environments I photograph.

Wow, we’ve been using it very differently then :).

Enfuse processes in a non-RGB colourspace, often something ‘Lab-like’ to put it simply. So gamma-corrected input or not should have absolutely no effect.

But gamma-corrected is not really the same as ‘linear’ or not. The moment you start messing with contrast, the data isn’t linear anymore.

So, if as you say it’s ‘multiple LDR in, tonemapped LDR out’, then using input that is already processed and to your liking will absolutely help.
But that’s not how I see enfuse :). ‘Multiple raw LDR in, single raw LDR out, to feed into another program of choice’ to get natural results is how I see it.

But let’s be clear , no wrongs here!

I do wish the OP would post the raw file for the *4996 exposure, because I’m convinced that’s all that’s needed and there is no need for any bracketing here.

Also, something to be clear about.

‘Do not use jpg, use tiff’ absolutely stands in my opinion when talking about compression. Every step in the workflow towards the output should be kept lossless, and only your final, sized, media-ready delivery file should be compressed, if that’s appropriate.

@Entropy512, it seems, is talking about ‘using edited files as input to enfuse, or completely neutral untouched files’. And here I find it funny that the thinking is completely reversed from mine - but what works, works! And from what I understand of the algorithm enfuse uses, it could work just as well. If you want an ‘almost ready’, good-looking file out of enfuse, using input that already looks good to you makes sense in this case.

The OP set contrast-weight to 1, probably to preserve as much contrast as possible in the output. But it’s also one of the reasons the lamp stays clipped: the boundary between clipped and non-clipped pixels is contrast, so you’re telling enfuse to preserve that contrast :).
What you actually want is to preserve the contrast of the input files except around overexposed parts - there you want less contrast, to preserve the contents.

There are also settings for the local contrast enfuse adds, but I can’t wrap my head around those.

It shouldn’t - the problem is, at least as of 2018 and I haven’t seen any notable change here, the weights are applied to raw input data regardless of the ICC profile of the input. As a result the weighting comes out completely wrong, which is one of the two core problems with darktable’s fusion implementation in the basecurve module. (The other being that dt’s module performs blending in linear space when the pyramid blending is, per the original Mertens paper, supposed to be done in a perceptually uniform colorspace.)

Using enfuse as input pretty much violates the whole scene-referred concept for any subsequent editing - you’re not only operating on data that isn’t scene-linear, it’s also now tonemapped.

Compare using basecurve (with or without fusion) at the beginning of a pipeline (bad) vs. at the very end (mostly fine, other than basecurve’s now quite outdated color preservation modes). Fusion just takes that even further. In its original implementation it was for fusing JPEGs, not for further editing (you could, but that’s basically a scene-referred workflow); in Google’s HDR+ implementation it was one of the last steps in the pipeline; in darktable’s fusion implementation it is wherever you put basecurve.

It’s completely and totally different than HDRMerge (multiple semi-HDR in, even higher dynamic range out - individual RAW files fit most definitions of HDR at this point…) or response function recovery from JPEGs (Debevec’s or Robertson’s algorithm).

enfuse defaulted contrast-weight to 0 years ago because it behaved unpredictably for typical workflows. It really only makes sense when using enfuse for focus stacking, not exposure fusion.


This is a nice result. Sometimes we have to be willing to pay for software to get the best job done. I have always been pleased with Lightroom’s merging, but not so impressed with their subscription-based model. Since I still have LR6, I will use that.

Thank you for the detailed and informative reply. This should be very helpful to the OP. I am happy to stand corrected.

@Entropy512 The --exposure-weight=0 and the rest of the attributes were taken from the manual page I referred to in my first post. I just tried some combinations to see the result. I have no deep understanding of what each of these parameters does and how it does it. That is why this post started.

I understand that passing those RAWs through RawTherapee and its “dynamic range compression” module for all five images, then converting them to jpgs and finally feeding them to enfuse will produce better results - with the default parameters in enfuse, I guess… I’ll try that, plus I will upload the RAW files for the rest of you to have a go.

@jorismak In this case I could play with a single RAW file and bring the highlights to my needs, yet this is only an example I’ve chosen to upload. There are other cases where interiors and exteriors are both in the scene, so there is a need for an HDR merge from bracketed photos. Still, in this example it is obvious that the highlights need fixing after the enfuse process on those 5 pictures. If I understood correctly, the fixing must be done beforehand.

Just a quick remark on this:
I also didn’t claim to then throw it in a tool expecting it to behave just like an original RAW. I said it needs to be thrown into a tool for finishing touches. Where a complete commercial tool does merging + mapping + tweaks, I see enfuse as the merging and (a bit of) the tone-mapping. Yes, basically just exposure fusion.
By that I basically mean: tweak the command-line parameters so you have all the signal you want (basically, nothing clipped that you don’t want clipped), and then try to make it nice with some basic image tweaks in any tool of your liking. I never got likable contrast or saturation out of enfuse without it clipping things I didn’t want clipped…

About your post:
Quite interesting info to know about the algorithms there! I never used darktable in the basecurve days, so I didn’t know it had fusion somewhere, let alone the problems with it. The same goes for the weights being applied to raw input data… it doesn’t seem like that from the help text. What are the other colourspaces for then - just the masking after the weights have been calculated?!

Enlightening at least!

The link you provided takes you to the subsection of the manual for focus stacking - but focus stacking does not appear to be what you are doing?

In general, use the DRC tool OR use enfuse - I’ve never combined both and I can’t really see a use case for combining both.

As I think I’ve mentioned, the DRC tool meets my needs 95%+ of the time, it’s only a few use cases (typically some VERY extreme dynamic range scenarios, or ones that have colored lighting that I want to preserve the saturation of like LED Christmas tree lights) where I’ve found that I need to use enfuse.

At some point now that RT has made some pipeline adjustments for other needs, I may add an exposure fusion implementation similar to what darktable has as part of basecurve, but with fixes for the deficiencies in the dt implementation, and also the ability to use any of the tone curve modes supported by RT. But I’ve got a huge amount of stuff on my TODO list that is suffering from severe procrastination already. (Taking a crack at the OP’s stuff fell into this “impacted by severe procrastination” thing…)

As to other colorspaces - I haven’t touched the dt basecurve module in ages and don’t remember there being any other colorspaces (except possibly the all-modules-support-it blending modes?). Among other things, the patchset I was working on included blending in more suitable colorspaces and calculating weighting in more suitable colorspaces - [WIP] iop/basecurve: Rework exposure fusion by Entropy512 · Pull Request #2828 · darktable-org/darktable · GitHub

For the record, I mentioned that you probably only need a single raw file to get the desired result (at least in this case).

One of the darker RAWs that doesn’t clip the lamp, loaded with a normal film curve preset in RawTherapee, and then a full-image log encoding applied, is probably all you need.

Besides learning and playing with tools - which is a lot of what I do :wink: - I don’t see a need for enfuse here. The dynamic range isn’t that extreme.

Enfuse shines when there are multiple correct exposures in the scene, in my opinion. Things like windows and interiors combined: you have an ideal exposure for the outside, but a different one for the inside. This often involves masking in other tools; enfuse (or other forms of exposure fusion, which basically ARE masking, just automatic) often handles this well.

This scene doesn’t have that multiple-exposure issue. It’s just a single-exposure scene, with a lot of difference between dark and light because of the lamp. Modern cameras handle that very well (if you expose low enough to not clip anything).

Yes, I understood that. Yet I am not satisfied with the results that enfuse produces on the highlights, even with this simple example I am presenting here. I thought that it might have some other parameters to play around with to fix this issue.

Have you got an example that would show this? If so, please present the enfuse CLI that managed to do what you claim. I would like to see how those highlights on windows behave when merging the multiple exposures with enfuse.

  1. I’m not a pro :).

  2. I like this result. But the sky, for instance, loses some colour it seems, because it brightens. It’s not clipped though.

The playraw from this thread: Improve window look - Processing / Play Raw -
I took the ‘1612’ DNG file, loaded it into RawTherapee, reset to neutral, then loaded the default ‘film-look curve ISO medium’ preset, or whatever it’s called.

I did the same for the 1614 DNG file, but I lowered exposure quite a bit (around -2.6 EV?) to bring the bright parts more in line with where I want them.

I then fed those two exports (as 16-bit TIFF files, but regular sRGB, just as RawTherapee normally outputs) to enfuse:

enfuse -l -2 --blend-colorspace=CIECAM02 --exposure-optimum=0.6 --exposure-weight=1 --saturation-weight=0 -o output.jpg IMG_1612.TIF IMG_1614.tif

This results in:

The -l -2 means ‘two levels less than normal’. Fewer levels means ‘use more local features, fewer global features’. Too much can - quickly - become messy, but a few levels less can work if you think enfuse preserves too much of the global look of the images. Then again, enfuse is meant to make a natural-looking image, so global features are normally wanted.

CIECAM02 is (much) slower but often works better for me. I do try CIELAB / CIELUV and ‘identity’ just to see what the difference is. Here it helps in preserving some colour in the sky outside.

--exposure-optimum=0.6 tells enfuse we’d like a slightly brighter result than it produces by default. If you use more input images, it also tells enfuse which pixels to use more from which image (i.e., pixels with a brightness closer to 0.6 get used more, those further from 0.6 get used less; 0 is dark, 1.0 is full bright, and you’ll probably stop seeing colour the closer you get to 1.0).

--exposure-weight=1 is the default.

--saturation-weight=0 turns off saturation weighting. The default is 0.2. To make things ‘easier to understand’ I turn it off, then at the end start raising it a bit to see if it helps the result. In this case it didn’t, so it stays off.

A parameter to play with normally is --exposure-cutoff. You can use it to say ‘do not use pixels darker than xxx, or brighter than xxx’. It can help in telling enfuse to simply not use brightly exposed pixels at all.
But in this case, the ceiling inside is also just at the edge of clipping, and will not be used if you give it an exposure cutoff like 0:-10 (use every pixel with value 0 and up, but do not use pixels that are within ‘10’ of the limit; you can also use percentages here).
So, if we tell it that, it will not use clipped pixels - but the ceiling is clipped in the 1612 image. Nicely clipped, though; we want that :). By using exposure cutoff, those clipped pixels are no longer used, so it switches to the very dark pixels of the 1614 image, and you get weird blending on the ceiling. Maybe using more in-between images here would actually help.
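For reference, the two cutoff shapes mentioned above look roughly like this on the command line. This is a sketch only: filenames are placeholders, the commands are printed rather than run, and you should check the enfuse manual for the exact LOWERCUTOFF:UPPERCUTOFF syntax:

```shell
# Absolute values, as described above: use pixels from 0 up, but skip
# anything within '10' of the top of the range.
abs='enfuse --exposure-cutoff=0:-10 -o out.tif in_dark.tif in_bright.tif'

# The same idea expressed as a percentage of the range:
pct='enfuse --exposure-cutoff=0:-4% -o out.tif in_dark.tif in_bright.tif'

printf '%s\n%s\n' "$abs" "$pct"
```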

A fun thing about enfuse in this case is the more advanced way to use it: saving/loading the masks it creates. You can call it with --save-masks and it will dump some tif files with the masks it generated. You can edit them, and then call enfuse again with --load-masks to use the edited masks. I sometimes use it just to generate the masks for me, which I then load in a photo tool to alter the image or do the blending manually there.
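The mask round trip spelled out. Filenames are placeholders and the commands are only printed here; the dumped mask names are something like softmask-0000.tif / hardmask-0000.tif, if I remember the default templates right:

```shell
# 1. First run dumps the generated weight masks alongside the output:
save='enfuse --save-masks -o first.tif in_dark.tif in_bright.tif'

# 2. After editing the dumped softmask-*.tif files in an image editor,
#    a second run reuses them instead of recomputing weights:
load='enfuse --load-masks -o final.tif in_dark.tif in_bright.tif'

printf '%s\n%s\n' "$save" "$load"
```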

What makes this demo shot tricky are the fine mesh curtains in front of the window. They are black, but you don’t want them too dark, because it’ll look weird with the bright window behind them.


I’m going to add.

I did another attempt ‘my way’.
I exported 1612 and 1614 (with the same -2.6 EV, both with ‘inpaint opposed’ highlight reconstruction enabled), but starting from the ‘neutral’ preset, with capture sharpening / noise reduction enabled and demosaic set to RCD, to have something of a base starting point.

In the 1612 image, there is now a clear difference between the clipped parts in the window, and the ceiling. Because there is no global contrast applied to the images yet, the highlights aren’t ‘sandwiched together’ yet.

film-curve 1612:

linear 1612 (but still in regular sRGB export):

If I blend them in enfuse with:

enfuse --exposure-cutoff=0:-1% --exposure-weight=1 --exposure-optimum=0.55 --saturation-weight=0 --compression=none -d 16 -o output.tif linear_1612.tif linear_1614.tif

I don’t need any fancy enfuse parameters. The exposure cutoff is here to keep clipped pixels from being taken into consideration at all (which is now possible since the ceiling isn’t clipped anymore); saturation weighting is disabled again, and the optimum exposure is raised just a bit above the default.

I get a (linear) blend like this:

See how much is left from the sky outside this way?

Load that tif into RawTherapee, load the default film-like curve, and drag the highlights down a bit in the curve. You could then even play with shadows/highlights, tone equalizer / tone mapper / whatever in RawTherapee (or another tool) to get the look that you want.

Not saying it’s perfect, but I do find I have more ‘grip’ on what I’m doing this way :).


I don’t know which way is ‘correct’, or what problems are caused by what. I think both ways work with enfuse :).

Instead of messing with exposure masks in a raw converter, let enfuse do it for you, and edit the result as if it were a base raw-image starting point.

Thank you very much for the detailed guidance on enfuse from your point of view. Very helpful and explanatory indeed.

I’ve tried what you did myself on another set of 5 photos (maybe they are too many), working from tifs to jpg and to tif, with the same commands you gave. (Note that --blend-colorspace=CIECAM02 has been replaced with just -c at the command prompt.)

I too processed the RAWs in RT as far as shadows and highlights are concerned, plus some angle corrections on all of them and highlight compression where needed, and exported them as tifs.

I am presenting here the jpg that I managed to produce with your options in enfuse (the first command, which produces a jpg from tifs). Yet I like the darker one that the second command produces (the one with a tif as output).

I am not entirely satisfied with the highlights though.