@gaaned92 @afre @yteaot
Let me try to explain my understanding of the different points discussed here…
- HDRMerge vs. single exposure with respect to noise
Suppose you have three shots taken at -1EV, 0EV and +1EV. When you combine them with HDRMerge, the output image takes its dark pixels from the +1EV shot. If you then apply a +2EV exposure compensation to the HDRMerge output, the shadows will look identical to the +1EV shot (because by default the HDRMerge output is exposed like the darkest shot, -1EV in this case).
What happens if you process the darkest shot directly instead? If you apply a +2EV exposure compensation to the -1EV shot, the shadows will not look identical to the +1EV shot; instead, they will exhibit more noise, because the -1EV shot collected four times fewer photons in those areas.
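To make the noise argument concrete, here is a minimal Python sketch using a toy photon-noise-only sensor model (the `full_well` value and the pure Poisson model are illustrative assumptions on my part; real sensors also add read noise, which widens the gap further). It compares a -1EV shot pushed by +2EV against the +1EV shot at the same apparent brightness:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sensor: photon (shot) noise only, with
# `full_well` electrons collected at clipping.
full_well = 10000.0

def capture(scene, ev):
    """Simulate a raw exposure of `scene` (0..1) shifted by `ev` stops."""
    electrons = np.clip(scene * (2.0 ** ev), 0.0, 1.0) * full_well
    return rng.poisson(electrons) / full_well

scene = np.full(100_000, 0.02)   # a deep-shadow patch

dark = capture(scene, -1)        # the -1EV shot
bright = capture(scene, +1)      # the +1EV shot

# Push the -1EV shot by +2EV (x4) so both have the same brightness,
# then compare signal-to-noise ratios (mean/std).
pushed = dark * 4.0
print("pushed -1EV shot SNR:", pushed.mean() / pushed.std())
print("+1EV shot        SNR:", bright.mean() / bright.std())
```

Scaling a pixel by a constant leaves its SNR unchanged, so the pushed -1EV shot keeps the poorer SNR it was captured with, while the HDRMerge output inherits the +1EV shot's cleaner shadows.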
That’s why we say that the HDRMerge output has a higher dynamic range than the individual shots: it preserves highlight detail while at the same time having clean shadows…
- Why save the HDRMerge output as 5 bracketed images
After looking at the enfuse code in some detail, I concluded that the algorithm works best if the input pixels are provided at equally-spaced exposure values, with no more than 1EV of spacing (that value is at least fine for the default settings).
The enfuse algorithm assigns a Gaussian weight to each pixel of each input image, with the Gaussian centered at 0.5 and a sigma of 0.2. Those weights are then used to compute each output pixel as a weighted average of the corresponding input pixels.
The point is, if none of the input pixels is close to 0.5, then all of the weights are close to zero and the weighted average becomes unreliable. This happens when the images are too far apart in terms of exposure.
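Here is a simplified sketch of that weighting scheme as a naive per-pixel weighted average (the real enfuse blends with multiresolution pyramids and combines several weight components, so this only illustrates the exposure weight and its failure mode; the 0.5 center and 0.2 sigma match the defaults mentioned above):

```python
import numpy as np

def exposure_weight(v, mu=0.5, sigma=0.2):
    """Gaussian 'well-exposedness' weight, centered at 0.5, sigma 0.2."""
    return np.exp(-((v - mu) ** 2) / (2.0 * sigma ** 2))

def fuse(images, eps=1e-12):
    """Naive per-pixel weighted average of the input exposures."""
    weights = np.stack([exposure_weight(im) for im in images])
    total = weights.sum(axis=0)
    fused = (weights * np.stack(images)).sum(axis=0) / (total + eps)
    return fused, total

# A pixel that is near 0.5 in at least one shot gets a solid weight:
good = [np.array([0.25]), np.array([0.5]), np.array([0.75])]
# With shots too far apart, every version of the pixel sits near
# 0 or 1, all weights are tiny, and the average is unreliable:
bad = [np.array([0.02]), np.array([0.98])]

for name, imgs in [("~1EV spacing", good), ("too far apart", bad)]:
    fused, total = fuse(imgs)
    print(f"{name}: fused={fused[0]:.3f}, total weight={total[0]:.4f}")
```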
From a practical point of view, saving several images at regular EV spacing is much easier when starting from a single RAW file that contains all the relevant information than when manually interpolating between several bracketed RAW files. That’s where HDRMerge helps a lot…
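As an illustration of that last step, here is a hypothetical sketch of synthesizing five 1EV-spaced frames from a single linear float rendition of the HDRMerge output (the file names and the use of the tifffile library are my assumptions, not part of any HDRMerge or enfuse workflow):

```python
import numpy as np
import tifffile  # illustrative choice of I/O library

# Assume 'merged.tif' is a linear 32-bit float rendition of the
# HDRMerge DNG, exposed like the darkest shot, with values in 0..1.
linear = tifffile.imread("merged.tif").astype(np.float32)

# Synthesize five "bracketed" frames at exactly 1EV spacing for
# enfuse; with a single merged source the spacing is guaranteed.
for ev in (-2, -1, 0, 1, 2):
    frame = np.clip(linear * (2.0 ** ev), 0.0, 1.0)
    tifffile.imwrite(f"bracket_{ev:+d}EV.tif", frame)
```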