HDRMerge and RT

I was impressed that someone would create software that produces an HDR image without doing the tone mapping for you. I’m not saying LuminanceHDR is bad for doing that - or any of the closed-source software that everyone depends upon. I just think it’s neat that they would leave that up to other programs.

So I looked at the tone mapping page on RawPedia (which is where I learned about HDRMerge) and it just talks about the tone mapping section of RT. And I looked in the old forums in the tutorials topic and didn’t see an HDR tutorial. So, when I played with tone mapping an HDR image I didn’t necessarily get an awesome image. But when I played with all the normal sliders in RT as if I’d just taken a normal raw image, I was able to get something I liked.

Is this the HDR workflow with RT?

  1. Take images about 1 EV apart.
  2. Import them into HDRMerge and get out the DNG with 32-bit channels (see the sketch after this list for scripting this step).
  3. Open it in RT and turn on tone mapping.
  4. Adjust the image normally in RT, knowing that you have more detail in the highlights and shadows than you otherwise would - therefore less noise. YAY!
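
For step 2, if you want to script the merge rather than use the GUI, something like this works. It is a minimal sketch wrapping HDRMerge’s command-line mode; the `-o` and `-b` flags are written from memory, so check `hdrmerge --help` on your install, and the folder and file names are made up:

```python
import subprocess
from pathlib import Path

def merge_bracket(raw_files, out_dng="merged.dng", bps=16):
    """Fuse a bracketed set into one DNG via the HDRMerge CLI.

    NOTE: the -o (output) and -b (bits per sample) flags are assumptions
    from memory; verify them against `hdrmerge --help`.
    """
    cmd = ["hdrmerge", "-b", str(bps), "-o", out_dng] + [str(f) for f in raw_files]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Hypothetical folder holding three raw shots taken ~1-2 EV apart.
    shots = sorted(Path("bracket01").glob("*.CR2"))
    merge_bracket(shots, out_dng="bracket01.dng", bps=16)
```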

Am I missing anything?

Thanks!

  • Shoot your raw images to cover the whole dynamic range of the scene. Spacing them 2 EV apart is good enough. Spacing them 1 EV or less is a waste of time*.
  • Combine them into an HDR DNG file using HDRMerge.
  • Save with 16-bit floating-point precision*. I have yet to see a scene where this is not enough (see the quick check after this list).
  • Edit with RT. RT’s Tone Mapping tool is not well suited to dealing with images which have such a high dynamic range. Instead, Retinex and curves will get you better results.

* Prove me wrong.
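
On the 16-bit floating-point point, here is a quick way to convince yourself the format has headroom to spare. A minimal numpy check of the range alone (precision within a stop is a separate question, covered by the roughly 11-bit significand):

```python
import numpy as np

# How much range does a 16-bit float channel cover?
info = np.finfo(np.float16)
stops = np.log2(float(info.max)) - np.log2(float(info.tiny))
print(f"largest value: {float(info.max)}")            # 65504.0
print(f"smallest normal value: {float(info.tiny)}")   # ~6.1e-05
print(f"~{stops:.0f} stops between them")             # ~30 stops
```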

Generally true.

However, the worse the midtone/highlight SNR of your camera, the more tightly spaced you want the exposures to be, so that they average out more. The tighter the spacing, the cleaner the result will be, even outside of the shadows. This would be important for small-sensor point-and-shoot cameras if for some reason you want to attain DSLR quality from one.
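
A toy simulation of that averaging argument (the signal and noise numbers are made up, not taken from any real sensor):

```python
import numpy as np

# Tighter exposure spacing means more frames overlap at any given tone
# level, so their noise averages down. Simulate a flat patch with fake
# Gaussian noise and merge 1, 2, and 4 overlapping frames.
rng = np.random.default_rng(1)
signal, noise = 100.0, 5.0

for n_frames in (1, 2, 4):
    frames = signal + noise * rng.standard_normal((n_frames, 100_000))
    merged = frames.mean(axis=0)
    print(f"{n_frames} overlapping frame(s): std ≈ {merged.std():.2f}")
# Noise drops roughly as 1/sqrt(n_frames), which is why tight brackets
# clean up midtones and highlights, not just shadows.
```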

Once I did a 3-stop-interval HDR on my Canon 60D, merged with the command-line version of Filmulator (which works similarly to HDRMerge), and there were noticeable noise differences between places where one of the brighter photos was almost but not quite clipped and places where it actually was clipped and no longer used. I imagine that, hypothetically, any camera with 3 stops less midtone/highlight SNR would need 1-stop differences between exposures to match that quality.

On the other hand, I was overly conservative back then and made Filmulator discard a pixel if any of its channels were clipped, so it effectively limited the peak SNR of the camera.
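
To make the clipping point concrete, here is a toy numpy comparison of the two rejection rules. It is not Filmulator’s or HDRMerge’s actual code, just an illustration with fake data:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.2, size=(4, 4, 3))   # fake linear RGB; >= 1.0 means clipped
clip = 1.0

pixel_rejected = (img >= clip).any(axis=-1)   # "any channel clipped" rule
sample_rejected = (img >= clip)               # per-channel rule

print("pixels dropped (any-channel rule):", int(pixel_rejected.sum()), "of 16")
print("samples dropped (per-channel rule):", int(sample_rejected.sum()), "of 48")
# The any-channel rule also throws away the still-good channels of those
# pixels, which is how it caps the usable peak SNR of the brightest frame.
```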

First of all, thanks for the info from both of you.

I’ll have to study what that Retinex thing is. I made a test HDR image by turning on my master bathroom light - 8 bulbs with extremely high lumens (it glows when you look at it from the bedroom). Then I went out to the bedroom and shot, trying to get detail in the bedroom without blowing out the bathroom.

I did this the day before I watched a talk about realistic HDR in which someone explained how to use the histogram to figure out how many exposures you need and how many EVs apart. I think there would be cases where 2 EV would be fine (given the dynamic range of most SLRs) and cases where 1 EV would at the very least be conservative.

As I said in my OP, at first I was not impressed with the image when I just put it into RT and turned on tone mapping. Once I started playing with the curves just like I would with a normal image, it turned out much better. (Although if I were truly trying to make the best image, I would have then used GIMP or some other program to do a little blending with one of the shots that had the perfect tones in one section, or perhaps could have gotten away with the C(H) tool in LAB.)

So, based on Morgan’s response, that was the right thing to do. Tone Mapping on its own isn’t going to do much. Which makes sense: in a tutorial I saw for the HDR tool everyone in the closed-source world loves, once the guy had the HDR image created, he spent some time picking good sliders (like you would in Luminance or whatever the open-source tool is) and then even put it into Lightroom for a bit more tweaking.

Anyway, thanks again for the advice. Now I’m just waiting for a chance to go somewhere where I’d have a need for HDR. And I have to look up what Retinex is.


Tone mapping is a general name for a technique, while the actual way it is done is up to the implementation. These implementations are called tone mapping operators, TMOs. Each has its own benefits and drawbacks. Luminance HDR has a bunch of them (Durand, Reinhard, etc.; they are named after the person or lead researcher who came up with them), while RT’s tone mapping uses one which LHDR doesn’t have: edge-preserving decomposition. However, back when RT’s tone mapping was implemented, HDRMerge didn’t exist and we didn’t have raw images capable of containing such high dynamic ranges, so RT’s tone mapping is not well tuned for them; that’s why I suggest using other tools.

Tone mapping - Wikipedia
http://www.cs.huji.ac.il/~danix/epd/
Google-translate this: Retinex/fr - RawPedia
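
To make “TMO” a bit more concrete, here is a minimal sketch of the classic global Reinhard curve (scaled luminance mapped through L/(1+L)). Real operators, including RT’s edge-preserving one and LHDR’s local TMOs, are far more involved; this is just the simplest member of the family:

```python
import numpy as np

def reinhard_global(luminance, key=0.18):
    """Global Reinhard operator: map HDR luminance into [0, 1)."""
    lum = np.asarray(luminance, dtype=np.float64)
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))  # geometric mean luminance of the scene
    scaled = key * lum / log_avg                   # expose for a mid-grey of `key`
    return scaled / (1.0 + scaled)                 # highlights roll off, shadows stay ~linear

# Five fake luminances spanning ~13 stops all land inside [0, 1):
print(reinhard_global(np.array([0.01, 0.1, 1.0, 10.0, 100.0])))
```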

I think how you shoot has a lot to do with access to your subject. I shoot in a lot of places I’m only going to be in for a week at most. When I shoot, I set my camera to Av, set the aperture to anywhere between f/11 and f/22, and then I measure the brightest-looking area and the darkest-looking area with the camera. That gives me my “range” to shoot in, so to speak. You might find you only need two shots 2 EV apart, or you might find you need 11-13 shots at 1 EV. I usually shoot at 1 EV spacing from darkest to blowout because I don’t want to get home and find I missed something. You can mix and match your shots however you like later.
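
A back-of-envelope version of that metering approach, just to show the arithmetic. The example EV readings and the per-frame usable range are made-up assumptions; plug in your own spot-meter readings:

```python
import math

def frames_needed(bright_ev, dark_ev, usable_range_ev=10.0, spacing_ev=1.0):
    """How many frames at a given spacing cover the metered scene range?"""
    scene_range = bright_ev - dark_ev
    extra = max(0.0, scene_range - usable_range_ev)  # what one frame can't hold
    return 1 + math.ceil(extra / spacing_ev)

# Example: highlights meter at EV 17, shadows at EV 3 (a 14-stop scene):
print(frames_needed(17, 3, spacing_ev=1.0))  # -> 5 frames at 1 EV spacing
print(frames_needed(17, 3, spacing_ev=2.0))  # -> 3 frames at 2 EV spacing
```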

Sounds about right. I’m looking to try it out with a good subject this summer.