[Solved] hdr workflow - ask for guidance

@afre My point is why this enfuse result (with default settings applied) reveals details in the sky,

… and this one doesn’t (also using enfuse with defaults)

given that both +3 EV frames show completely white sky


In other words, I’d like to check if I’m doing something wrong before considering the alternatives you’ve pointed out.

@CriticalConundrum Yes, it is.
Would you like to share your workflow?

I don’t know exactly what you mean but my guess is that the two images are very different in terms of content, contrast and proportion. Using only the defaults is like opening a raw processor and doing nothing. That won’t work for every photo.

If you are still following @Carmelo_DrRaw’s example, ask him to elaborate on how he arrived at his results.

Who knows, maybe this is a tutorial in the making! :wink:


Maybe that’s what’s happening here and I will end up splitting this workflow into two parts, one for the ground and one for the sky, as you mentioned before.

Meanwhile I’ll wait a bit more to see if some new idea comes into play.

Thanks anyway for your support.

@gadolf It’s really fast. I export the HDRmerge raw with a neutral tone curve into a 32-bit float format. I then open this in a photo editor and duplicate it into two layers, one for the shadows and one for the highlights. I apply the S curve for the highlights, ignoring its effect on the shadows, and restrict it to the highlights region using Blend If. Then I do the same for the shadows. Ideally you try to get the contrast about equal across the board, and then you can apply one last S curve to everything.
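The idea above can be sketched in numpy terms: apply an S curve, but let a feathered luminance mask (a stand-in for Blend If) confine its effect to highlights or shadows. The curve shape, mask thresholds, and function names here are my own illustrative choices, not the actual editor settings:

```python
import numpy as np

def s_curve(x, a=2.0):
    # Contrast-increasing S curve (a > 1 steepens it); x in [0, 1].
    return x**a / (x**a + (1.0 - x)**a)

def smoothstep(x, lo, hi):
    # Smooth 0..1 ramp between lo and hi, like Blend If's feathered range.
    t = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def blend_if_s_curves(img):
    # img: float RGB array in [0, 1], e.g. a 32-bit export from HDRmerge.
    luma = img.mean(axis=-1, keepdims=True)      # crude luminance proxy
    curved = s_curve(img)
    hi_mask = smoothstep(luma, 0.5, 0.8)         # confine curve to highlights
    lo_mask = 1.0 - smoothstep(luma, 0.2, 0.5)   # confine curve to shadows
    out = img + hi_mask * (curved - img)         # "highlights" layer
    out = out + lo_mask * (s_curve(out) - out)   # "shadows" layer
    return np.clip(out, 0.0, 1.0)
```

Mid-tones near 0.5 fall outside both masks and pass through unchanged, which is what Blend If's feathered ranges buy you over a global curve.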

I did in fact finish off with a Tonal Contrast effect from the NIK Collection, which is free, but honestly it’s kinda like icing on the cake.

@CriticalConundrum In which color space?

EDIT: I agree, working with the hdrmerge output is easy. I got something like this, without layers:

But with that I lose all the compressed dynamic range provided by enfuse. In the workflow I’m trying to achieve, I create images with different exposures from the hdrmerge output and then merge them with enfuse. That compresses the whole dynamic range into a single image, which can then be edited, revealing tonalities to taste. Also, by doing those steps in a wider-gamut color space, you make sure you don’t lose any data. See this post.
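The "create images with different exposures from the hdrmerge output" step amounts to multiplying the linear HDR data by 2^EV, clipping, and encoding each frame for enfuse. A minimal sketch, with a plain 2.2 gamma standing in for the real sRGB transfer curve and made-up function names:

```python
import numpy as np

def make_exposures(hdr_linear, evs=(-2, -1, 0, 1, 2)):
    # hdr_linear: float RGB array in linear light (highlights may exceed 1).
    # Returns one clipped, gamma-encoded frame per EV offset, ready to be
    # saved as TIFFs and fed to enfuse.
    frames = []
    for ev in evs:
        exposed = np.clip(hdr_linear * (2.0 ** ev), 0.0, 1.0)
        encoded = exposed ** (1.0 / 2.2)   # simple gamma, not exact sRGB
        frames.append(encoded)
    return frames
```

Each positive EV doubles the exposure and clips more of the sky; each negative EV pulls highlight detail back below 1.0, which is exactly the bracket enfuse then fuses.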
Working with enfuse gives me much better results on the shadows:

My goal is also to achieve the same results in the highlights, which I thought I would be able to do in the same editing session. But, as @afre said, I will probably need to split this workflow into three parts: one for the shadows, one for the highlights, and a final merge of the two outputs.

Sorry for being late to this party, but life kept me very busy recently…

This is my “best” result with enfuse, obtained in the following way:

  • I saved images at 1 EV intervals, from -3 EV to +3 EV, as 16-bit TIFFs in perceptually-encoded sRGB
  • merged the images with the following command:
enfuse --exposure-weight=1 --saturation-weight=0 --blend-colorspace=IDENTITY --gray-projector=luminance --depth=r32 --output=CRW_3486-3488-enfuse.tif CRW_3486-3488??.tif

Here is the result:

The result is quite flat, particularly in the sky…

Recently I have started to play with another method, which is based on the bilateral filter. This method is not new, and there is quite a lot of scientific literature on the subject (see for example this and this article). It boils down to performing a two-level decomposition of the image, where a blurred version of the image is subtracted from the original to isolate the high-frequency details. The contrast of the blurred image is then flattened to compress the overall dynamic range, and finally the high-frequency level is added back to restore the local details.

The key is in the blur operator: with a simple Gaussian kernel, the result shows visible halos around strong edges. This problem can be mitigated, or even practically removed, by using an edge-aware blur operator like the bilateral filter. Such operators are able to “distinguish” fine-grained texture (which gets blurred) from hard edges (which are left unchanged). As a result, the high-frequency level contains the fine-grained details but not the hard edges, which are kept in the blurred version.
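The two-level decomposition described above can be sketched as follows. This is a deliberately slow, brute-force bilateral filter for clarity, not PhotoFlow's implementation; the parameter values and function names are illustrative. As in the literature, the compression is applied to a log-luminance channel:

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    # Brute-force bilateral filter on a 2-D float image (slow, for clarity).
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # distance weight
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weight: pixels with very different values barely count,
            # so hard edges are preserved instead of blurred into halos.
            rng = np.exp(-((win - img[y, x])**2) / (2 * sigma_r**2))
            wts = spatial * rng
            out[y, x] = (wts * win).sum() / wts.sum()
    return out

def compress_dynamic_range(log_lum, factor=0.5):
    # Two-level decomposition: edge-aware base + high-frequency detail,
    # flatten only the base, then add the detail back.
    base = bilateral(log_lum)
    detail = log_lum - base
    return factor * base + detail
```

Because the hard edges stay in the base layer, scaling the base compresses the scene's overall range while the re-added detail layer keeps local texture intact.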

Here is what I could obtain with a simple application of the above method:

The result gets even better after restoring a bit of contrast to the merged image:

Here is a PhotoFlow preset that automates the procedure:
SH-recovery-bilateral.pfp.zip (1.8 KB)

Looks better than your enfuse output. I experimented with a similar method of my own, but it didn’t perform well. I guess I will revisit it sometime.

@Carmelo_DrRaw Thank you very much for taking your time and sharing your “new” workflow!
I loaded this image, applied your preset, and exported it to a TIFF file. I did the same with two other images that form a panorama.
Unfortunately, Hugin doesn’t seem to like these files: after setting up the parameters (finding the control points, geometric and photometric optimization), it crashes when I click the GL button (to start building the panorama). I tried this with Rec. 2020 TIFFs as well as with sRGB ones. I reinstalled it, but no luck.
When I have time, I’ll test this Hugin/Ubuntu 18.04 setup on other panoramas to see if the problem is with the new Ubuntu.
Thanks once more!

Try (1) deleting Hugin’s preference files and (2) reading its logs.

my try

  • HDRmerge
  • Rawtherapee with fattal and some tweaks

Not sure if an HDR workflow is useful in this case (only 1 EV between the darkest and brightest photos).

CRW_3486-3488.jpg.out.pp3 (11.4 KB)

I do not understand the whole debate. I would open the darkest photo (3488) in a raw editor and adjust the photo with it. Where’s the beef?


One can always edit the darkest photo alone. However, merging multiple exposures gives cleaner shadows with much reduced noise. While this might not be crucial in this specific case, it is at the basis of any HDR workflow (which is the initial topic of this thread).


In which format did you export the images? I am not sure that all Hugin versions are compatible with 32 bit floating point tiffs…

@Carmelo_DrRaw Please, forget that.

@afre You just pointed me in the right direction, thanks for that. It was a system environment issue: I had a different enfuse version installed, incompatible with the Hugin version packaged for Ubuntu. After I removed it and reinstalled Hugin, the crashes stopped.

After a couple of tweaks, I could recover some good information from the raw data.
Thanks to all, especially to @Carmelo_DrRaw for sharing this method.


I understand the HDR workflow, but why first merge the photos with HDRMerge and then make different exposures for enfuse? Why not apply enfuse directly to the original images?

HDRMerge is great because it merges raws without demosaicing; hence results in less noise. However, I think it can only handle minor alignment issues (by a few pixels).

I think it merges raws by selecting the best pixel from the different photos.
Here, one photo is mostly selected, except for one part of the bright sky, so the noise is that of that single photo and there is no noise gain.

Then I don’t understand the point of generating five photos with different exposures from the HDR photo and blending them with enfuse. Again there is no noise gain, since the noise is exactly the same in all five photos.
Or is it a way to tone map the HDR photo? Is it better than tone mappers such as Fattal or Mantiuk?