[Solved] hdr workflow - ask for guidance

I do not understand the whole debate. I would open the darkest photo (3488) in a
raw editor and adjust the photo with it. Where's the beef?

agreed

One can always edit the darkest photo alone. However, merging multiple exposures gives cleaner shadows with much reduced noise. While this might not be crucial in this specific case, it is the basis of any HDR workflow (which is the original topic of this thread).
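To make the noise argument concrete, here is a small numpy sketch (all numbers are made up for illustration): photon shot noise gives SNR roughly equal to the square root of the photon count, and pushing a dark shot up in post scales signal and noise alike, so shadows taken from a longer exposure stay cleaner.

```python
import numpy as np

rng = np.random.default_rng(0)

# Photon counts for a shadow region: the +1 EV shot collects 4x the
# photons of the -1 EV shot (2 stops apart). Values are hypothetical.
photons_minus1 = rng.poisson(lam=25, size=100_000)   # -1 EV shot
photons_plus1 = rng.poisson(lam=100, size=100_000)   # +1 EV shot

# Bring both to the same brightness by pushing the dark shot +2 EV.
pushed = photons_minus1 * 4.0
snr_pushed = pushed.mean() / pushed.std()
snr_plus1 = photons_plus1.mean() / photons_plus1.std()

print(f"SNR, -1 EV shot pushed +2 EV: {snr_pushed:.1f}")  # ~ sqrt(25) = 5
print(f"SNR, +1 EV shot:              {snr_plus1:.1f}")   # ~ sqrt(100) = 10
```

This is why a merged output that takes its shadows from the brightest usable frame beats a single pushed dark frame.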


In which format did you export the images? I am not sure that all Hugin versions are compatible with 32-bit floating-point TIFFs…

@Carmelo_DrRaw Please, forget that.

@afre You just pointed me in the right direction, thanks for that. It was a system environment issue: I had a different enfuse version installed, incompatible with the Ubuntu version of Hugin. After I removed it and reinstalled Hugin, it no longer crashed.

After a couple of tweaks, I was able to recover some good information from the raw data.
Thanks to all, especially to @Carmelo_DrRaw for sharing this method.


I understand the HDR workflow, but why first merge the photos with HDRMerge and then generate different exposures for enfuse? Why not use enfuse directly on the original images?

HDRMerge is great because it merges raws without demosaicing, which results in less noise. However, I think it can only handle minor alignment issues (a few pixels).

I think it merges raws by selecting the best pixel from the different photos.
Here, one photo is mostly selected, except for part of the bright sky, so the noise is that of the selected photo and there is no noise gain.

Then I don't see the point of generating 5 photos at different exposures from the HDR photo and blending them in enfuse. Again, there is no noise gain, since the noise is exactly the same in all 5 photos.
Or is it a way to tonemap the HDR photo? Is it better than tone mappers such as Fattal or Mantiuk…?

If HDRMerge takes the "best" pixels from the three original images, then the result is a 32-bit composite with the best of each image, without demosaicing.

Then you must recover all that good information, aka high dynamic range, and make it visible in a low dynamic range image.

Generating the 5 exposures is just a way of coaxing enfuse into seeing all the available dynamic range for its compression.

Why 5 and not 3? Well…to give enfuse more dynamic range layers?
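For illustration, here is a minimal numpy sketch of what "generating N exposures from one HDR image" amounts to (the function name and EV values are hypothetical, not HDRMerge's or enfuse's actual API): each fake bracket is the linear HDR data multiplied by 2^EV and clipped to the displayable 0..1 range.

```python
import numpy as np

def fake_brackets(hdr, evs=(-2, -1, 0, 1, 2)):
    """Simulate bracketed exposures from one linear HDR image.

    hdr: float array in linear light; evs: exposure offsets in stops.
    Each 'exposure' multiplies by 2**ev and clips to the 0..1 display
    range, roughly what exporting at different exposure offsets does.
    """
    return [np.clip(hdr * 2.0 ** ev, 0.0, 1.0) for ev in evs]

# Hypothetical HDR values spanning deep shadow to bright highlight.
hdr = np.array([0.01, 0.1, 0.5, 1.5, 4.0])
for ev, img in zip((-2, -1, 0, 1, 2), fake_brackets(hdr)):
    print(f"{ev:+d} EV: {img}")
```

The darkest bracket keeps the highlights unclipped, the brightest one lifts the shadows well into enfuse's preferred mid-tone range.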

I feel I'm only scratching the surface, and it would be nice to hear a more scientific answer from @Carmelo_DrRaw, but this superficial rationale (due to my lack of deep understanding of what's involved here) is what drives my attempt to use this workflow (besides the fact that it gave excellent results in that original post).

Anyway, be aware that he also proposed an alternative way of compressing the HDR based on the bilateral filter.

EDIT: As for the noise, I think it's similar to what you get when you stack layers using mean or median. Try stacking those five layers in Gimp, for example, and you'll get a less noisy image.
EDIT2: Actually, I think the best stacking mode for noise reduction is median. Maybe the mean, or average, method works well if you dither your layers by a few pixels (and also here). The last links I provided are about astrophotography, but I suspect the technique could give good results for any kind of image (provided you're willing to spend thousands of $ on a telescope mount capable of dithering :cry: ).
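A quick numpy sketch of that mean-vs-median comparison (synthetic flat-gray layers, made-up noise level): for pure Gaussian noise the mean actually lowers the noise sigma a bit more than the median; the median's real advantage appears when a layer contains outliers.

```python
import numpy as np

rng = np.random.default_rng(2)

# Five noisy "layers" of the same flat gray patch (value 0.5, sigma 0.05).
layers = 0.5 + 0.05 * rng.normal(size=(5, 100_000))

print("single layer sigma:", round(layers[0].std(), 4))
print("mean stack sigma:  ", round(layers.mean(axis=0).std(), 4))
print("median stack sigma:", round(np.median(layers, axis=0).std(), 4))
```

Both stacks come out noticeably cleaner than a single layer, with the mean slightly ahead on this purely Gaussian example.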

No: since the five images come from the same HDR image, there is no statistical noise averaging. They are identical except for exposure.

Yes, that is the definition of tone mapping. Many gurus have designed methods to do that (Fattal, Mantiuk and many others).

I agree with Patdavid, but it doesn't apply here, as your five (or three) images have different exposures and come from the very same HDR image.
The mean is a very blunt method that is never used as-is in measurement processing; at the very least, extreme values should be discarded.
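The "discard extreme values" idea can be sketched as a trimmed mean (hypothetical helper, synthetic data): drop the lowest and highest value per pixel before averaging, and an outlier in one layer no longer drags the result.

```python
import numpy as np

def trimmed_mean(stack, trim=1):
    """Mean along the stack axis after discarding the `trim` lowest
    and `trim` highest values per pixel (a simple sigma-clip stand-in)."""
    s = np.sort(stack, axis=0)
    return s[trim:len(s) - trim].mean(axis=0)

# Five layers of a ~0.5 gray patch; one layer has a hot pixel in column 0.
layers = np.array([
    [0.50, 0.49],
    [0.51, 0.52],
    [0.48, 0.50],
    [1.90, 0.51],   # outlier in the first column
    [0.52, 0.48],
])

print("plain mean:  ", np.round(layers.mean(axis=0), 3))
print("trimmed mean:", np.round(trimmed_mean(layers), 3))
```

The plain mean of the first column is pulled up to ~0.78 by the single outlier, while the trimmed mean stays near 0.51.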


@gaaned92 I agree with all your comments above, so I don't know where the noise reduction in the enfuse method comes from…

While this is mostly true, I think that enfuse has the advantage of exposing many more tweakable parameters than the typical tone-mapping operator, including ways to deal with noise. Take this tutorial, for instance: Retinex has regularization but Mantiuk 2006 does not. Another implementation might have it, but that is still just one control.

You mean gains in noise reduction. :stuck_out_tongue: In this case, I would include the outputs of the raws (TIFFs) in addition to the HDRMerge DNG for enfuse to process. That way, enfuse can catch what HDRMerge missed. After all, HDRMerge is just one algorithm with its own limitations.

Anyway, as I cautioned @gadolf before, I might not know what I am talking about :blush:. You would have to test what I have said and correct me if need be. :slight_smile:

@gaaned92 @afre @yteaot
Let me try to explain my understanding of the different points discussed here…

  1. HDRMerge vs. single exposure with respect to noise
    Suppose you have three shots taken at -1EV, 0EV and +1EV. When you combine them with HDRMerge, the output image will take the dark pixels from the +1EV shot. If you then apply a +2EV exposure compensation to the HDRMerge output, the shadows will look identical to the +1EV shot (because by default the HDRMerge output is exposed like the darkest shot, -1EV in this case).
    What happens if you process the darkest shot directly instead? If you apply a +2EV exposure correction to the -1EV shot, the shadows will not look identical to the +1EV shot; instead, they will exhibit more noise.
    That's why we say that the HDRMerge output has a higher dynamic range than the individual shots: it preserves highlight detail while at the same time having clean shadows…
  2. Why save the HDRMerge output as 5 bracketed images
    After having looked quite in detail into the enfuse code, I concluded that the algorithm works better if the input pixels are provided at equally-spaced exposure values, at no more than 1EV spacing (this value is at least OK for the default settings).
    The enfuse algorithm works by assigning a gaussian weight to each pixel of each input image, the gaussian function being centered at 0.5 with a sigma of 0.2. Those weights are then used to compute the output pixels as a weighted average of the input ones.
    The point is, if none of the input pixels is close to 0.5 then all the weights are close to zero, and the averaging is not accurate. This happens if the images are too far apart in terms of exposure.
    From a practical point of view, saving several images at regular EV spacing is much easier when starting from a single RAW image that includes all the relevant information, instead of manually interpolating between several bracketed RAW files. That's where HDRMerge helps a lot…
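The weighting described in point 2 can be sketched in a few lines (a toy simplification: real enfuse also weighs contrast and saturation and blends via multiresolution pyramids, and these pixel values are made up):

```python
import numpy as np

def weight(v, mu=0.5, sigma=0.2):
    """Gaussian exposure weight as described above: centered at 0.5, sigma 0.2."""
    return np.exp(-0.5 * ((v - mu) / sigma) ** 2)

# One scene pixel seen in five brackets 1 EV apart (normalized 0..1).
vals = np.array([0.04, 0.08, 0.16, 0.32, 0.64])
w = weight(vals)

# The fused pixel is the weight-normalized average of the inputs;
# near-black and near-white values contribute almost nothing.
fused = (w * vals).sum() / w.sum()
print("weights:", np.round(w, 3))
print("fused value:", round(fused, 3))
```

With 1 EV spacing, at least one bracket lands near the 0.5 sweet spot, so the weights never all collapse toward zero.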

While #2 might be true in our case, the manual says that evenly spaced EVs are unnecessary. What matters is providing enough relevant data for enfuse to ingest. In fact, if I had trouble with the highlights, I could list the images containing the highlight data more than once to boost their weights. In combination with alpha masking, this tactic is very powerful indeed. And so, as I have said numerous times, enfuse has a lot of depth if one is willing to go beyond the defaults. I am just a beginner myself. :slight_smile:

At one point the Enfuse documentation specifically recommended otherwise. I tried to find that paragraph earlier this year in the latest version (4.2) and failed, so perhaps something has changed upstream? I suppose it's time to re-read the new PDF. It seems longer, too.

Could you point me to that paragraph in the latest documentation please?

Another two reasons:

  • HDRMerge lets you remove ghosts,
  • the final HDR DNG takes far less space.
    Here's a real example (Sony ILCE-7M2): 5 × 47 MB source files merged into a single 17.4 MB HDR DNG.

However, there are also pitfalls:

  • HDRMerge does not deal with chromatic aberration, dead pixels, hot pixels, etc. These defects may become harder, or even impossible, to remove from the merged HDR DNG compared to the individual source files.

Yes, that would be good to refer to. I was going by (my poor) memory. See: http://enblend.sourceforge.net/enfuse.doc/enfuse_4.2.xhtml/enfuse.html#sec%3Acommon-misconceptions.

I will post here since it is related to HDRMerge and this topic. I was able to get HDRMerge working. I am super rusty; I haven't used it since it first came out! The more raws I merge, the more noise seems to be introduced, and the more I push the HDR, the uglier it gets. E.g., compare CRW_1597.DNG with the merged CRW_1594-1597.dng (only 4 frames; 7 is worse) from the recent PlayRaw, both brightened to approximately the same mean.

To troubleshoot, start by using frames that don't need aligning so you can disable that option, then save HDR DNGs with a mask for 2 brackets, 3, 4, and so on, verifying at every step (by examining the mask) that all images were used and that the correct parts of each image were used. Then you can file a bug report once you have some concrete info.


Okay, I will probably make a new post later.