I’m not sure if I’m using HDRMerge incorrectly or this is a bug, but when I merge 3 files (EV 0, EV -1, and EV +1), the resulting output appears to be just the EV -1 image. I’m on macOS, using a build I compiled from GitHub, which I believe is up to date at commit cb42a8b if I’m reading the log correctly.
I’m using RawTherapee’s Neutral profile to compare, and the histograms appear almost identical between the darkest image and the HDR result, while the brighter pictures clearly show the histogram shifted to the right. I think this is a valid way to compare the files.
My assumption of how this works is that the foreground would come from the brightest image, and then wherever the brightest image clips (or gets close to clipping), it would fall back to the next darker image. Is this correct?
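To make my expectation concrete, here’s a tiny sketch of the per-pixel selection logic I had in mind. This is purely my mental model, not HDRMerge’s actual code, and the clipping threshold is a number I made up:

```python
def pick_exposure(pixel_values, clip_threshold=0.95):
    """My assumed merge rule: use the brightest exposure that isn't
    clipped (or close to clipping).

    pixel_values: the same pixel's normalized value in each exposure,
    ordered brightest first (e.g. [+1 EV, 0 EV, -1 EV])."""
    for value in pixel_values:
        if value < clip_threshold:   # not clipped, so use this exposure
            return value
    return pixel_values[-1]          # everything clipped: darkest wins

# A sky pixel blown out in the +1 EV and 0 EV frames falls through
# to the -1 EV frame:
print(pick_exposure([1.0, 0.97, 0.62]))  # -> 0.62
```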
The mask does seem to show the brightest image used in the foreground though, and then in the clouds in my test image, it goes to the other two images in bright areas. This matches up with how I would expect it to work.
I’ll upload the result DNG file, but I don’t think I can upload more than one file yet so I can’t get the source files up (yet, anyway).
As a side note, I was very confused by the labelling of the EV values in the tooltip, but I understand it bases “0 EV” on the darkest image, is that right? I think I read that here.
I thought of one other thing too: I tried manually changing the mask in a solid “green” area (which I believe corresponds to the brightest image?), and the resulting file is no different from the one produced with just the automatic mask. I think this supports my idea that it’s not working correctly?
After A/B-ing them more closely, though, I do notice less noise in the HDR image than in the -1 EV file, especially in the grass and clouds. So my understanding of what it’s doing must be wrong. Is it the case that the “+1 EV mask” areas (labelled +2 EV in HDRMerge) are effectively pulled down 2 EV when exporting the merged HDR? And likewise, the areas the middle exposure contributes are effectively pulled down 1 EV in the merged image?
So in that case everything is “normalized”, in a way, to the darkest image?
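If that’s right, the arithmetic would look something like this. Again, this is just my guess at the normalization, with EV offsets matching my 3-shot bracket, not anything confirmed from HDRMerge’s source:

```python
def normalize_to_darkest(value, ev_above_darkest):
    """Scale a pixel from a brighter frame down to the darkest frame's
    exposure by dividing by 2**EV, so the merge keeps the darkest
    frame's brightness but gains the brighter frame's lower noise."""
    return value / (2 ** ev_above_darkest)

# A pixel reading 0.8 in the +1 EV frame (2 EV above the -1 EV frame)
# lands at the same level as 0.2 in the darkest frame:
print(normalize_to_darkest(0.8, 2))  # -> 0.2
# The middle (0 EV) frame is only pulled down 1 EV:
print(normalize_to_darkest(0.8, 1))  # -> 0.4
```

That would explain why the merged histogram sits on top of the -1 EV one even though the brighter frames are contributing.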
If that’s closer to what’s going on, then yeah, I was totally wrong about how the resulting image is made. I’ll have to learn how tone mapping works then! Sorry for all the faff about nothing haha