Optimal 16-bit HDR panorama workflow

Nice sample shot, I agree.

I sometimes find with my own panoramas that using my camera’s own raw development software has some advantages when it comes to colour and lens correction. The camera manufacturer knows how to pull the most benefit from its own raw image format, so I’ve started there to produce 16-bit TIFFs corrected only for those two variables. These I pass along to hugin for alignment, exposure blending (Enfuse), and stitching. Hugin’s own 16-bit TIFF output can then be passed to RawTherapee for tone mapping, noise reduction, and other enhancements.
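For anyone who prefers scripting that hugin stage, here is a rough sketch using hugin’s command-line tools (the file names are placeholders and the exact options depend on your hugin version):

# Stitch the 16-bit TIFF tiles into one panorama
pto_gen -o pano.pto tile_*.tif                  # create a project from the tiles
cpfind --multirow -o pano.pto pano.pto          # find control points
autooptimiser -a -m -l -s -o pano.pto pano.pto  # optimise geometry and photometrics
pano_modify --canvas=AUTO --crop=AUTO -o pano.pto pano.pto
nona -m TIFF_m -o pano_ pano.pto                # remap the tiles
enblend -o panorama.tif pano_*.tif              # blend into the final 16-bit TIFF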

Totally! But I was busy eating and this had to wait :sunglasses:

1. Don’t worry about 16-, 24- and 32-bit. I tried 5 shots bracketed 2 EV apart of a very high dynamic range scene and saw no difference between 16-bit and 32-bit, so I use 16-bit - it’s smaller.

2.a. If you still see ‘Auto Levels’, you need to update.
2.b. If you use “(Neutral)” then you shouldn’t need to set any of those other things you mention to zero - the profile does that for you.
2.c. The rest sounds good.
2.d. The question is whether you get the best results by (a) first tweaking your photos and then stitching, or (b) setting your photos to neutral, stitching, and then tweaking. I think it generally doesn’t matter (I think = I tested many things and arrived at this conclusion). What does matter is that you don’t lose any data before stitching. So for photos intended to be stitched into panoramas, I made a processing profile which applies things like chromatic aberration correction, a little sharpening, defringing, and other corrections particular to my lens, but when it comes to the exposure tools I leave everything as neutral as possible while preventing clipping in the shadows and highlights. Generally this means enabling “Highlight Reconstruction” (even if I leave it at 0) and sometimes setting the Black slider to -5000 or so. Once stitched, I then make it look aesthetically pleasing, knowing I lost no data up to that point. (A rough command-line sketch of this batch step follows right after this list.)
I need to stress that you should absolutely not use tools like Color Toning or Wavelets before stitching, only after, because they will create differences between the tiles (I call them tiles, I mean the individual ‘shots’ or ‘panels’ or ‘fields-of-view’ or ‘bracketed sets’ which you stitch).
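If you want to batch that neutral conversion so every tile gets identical treatment, a rough sketch using RawTherapee’s command-line interface (pano-neutral.pp3 and the raw file names are placeholders; check rawtherapee-cli --help for the exact flags on your version):

# Apply the same neutral profile to every tile and write 16-bit TIFFs
rawtherapee-cli -o tiffs/ -p pano-neutral.pp3 -b16 -t -Y -c *.NEF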

The rest sounds good.

Finally, once you’ve preserved all data up to the point of stitching, don’t forget to stop prioritizing that when you tweak the stitched panorama, because I-can-see-everything does not equate to what-a-nice-photo :slight_smile:


@tbransco I don’t agree that the camera knows best. For one, computer software has more processing power at its disposal. Second, a generic profile for your lens (regardless of whether it comes from the camera or lens manufacturer or elsewhere) is unlikely to be as good as a profile you made yourself, using your lens on your body. Third, distortion is complicated: your lens and body combination could suffer from several kinds of it, so the software correcting it should support that.


Just to clarify, I was suggesting the use of desktop software, not the use of the camera itself, to convert the raw files to TIFF. Fair points on the remainder, though; I’ve just not been keen enough to build such profiles for each lens/camera combo I might use. Those with higher standards, or worse lens/camera combos, are certainly welcome to go that route.


Yes, I read your earlier remarks on HDRMerge’s bit depths. I guess I got that comment wrong, because there is some byte twisting below:
https://github.com/jcelaya/hdrmerge/blob/master/DngFloatWriter.cpp#L378
So I’ll go for 16-bit (and perhaps try it on a PPC machine).
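For reference, a non-interactive call would look roughly like this (I’m assuming -b is the bits-per-sample switch, and the file names are placeholders):

# Merge the bracketed raws into a single 16-bit floating-point DNG
hdrmerge -b 16 -o merged.dng IMG_0001.CR2 IMG_0002.CR2 IMG_0003.CR2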

Well, I’m using the most recent distribution-supplied RT version, and although the distribution is Debian testing, this still happens to be an elderly 4.2. :wink: But sure, I can compile the latest Git version if I’m no longer content with the stock version’s features.

Okay, I’ll try your recommendations next time.

Sure. :slightly_smiling:

Thanks a lot!

Best,
Flössie

Thank you, Pat! This is, in fact, a crop from a much broader (7:1) panorama.


Some people are even using Blender to create HDR Panoramas:
http://adaptivesamples.com/2015/11/09/hdri-mpumalanga-veld/


[quote=“floessie, post:1, topic:717”]
What do you think? Am I on the right track? I’m a bit unsure whether I preserve all the details in the shadows with my RT preprocessing step. The preprocessed images look relatively dull, and the postprocessing is mostly about lifting the shadows via tone mapping.
[/quote]It looks great, so yes. :slightly_smiling: Your workflow sounds sane too.
Enfuse (the exposure blending in hugin) can sometimes also deliver great results out of the box so give that a try too.
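If you want to try that outside the hugin GUI, a minimal sketch (file names are placeholders and the weights are only a starting point):

# Align the bracketed frames, then exposure-fuse them
align_image_stack -a aligned_ -v bracket_*.tif
enfuse -o fused.tif --exposure-weight=1 --saturation-weight=0.2 aligned_*.tif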

Regarding the specific image, it looks a bit underexposed, at least in the shadows. I’d probably push it a bit more:

But that of course also really depends on the environment in which it is going to be displayed and the mood you want to convey.


Yes, I tried it once and was not overly impressed, but maybe I was too green or the shot didn’t fit.

That does indeed look better on this white background. And actually, I had it printed on Alu-Dibond to be placed on a light yellow wall, and it is a tad too dark. Next time I’d better ask for advice here beforehand. :grin:

Thanks for pointing that out.

Best,
Flössie

@floessie Switch RT’s preview background color to white before saving - it’s a good habit.

Do you mind sharing your settings for the CIECAM02 section? I’m new to HDRMerge and RT.
A little tutorial on what to do with the CIECAM02 tone mapping would be ideal! :slight_smile:
I like the edited image from @Jonas_Wagner, which is slightly brighter and more vivid.

I’ve used my own shell script to simulate the ZeroNoise technique and to process fake exposures (generated from the ZeroNoise output) with enfuse, but I never get a result I’m pleased with…
HDRMerge looks promising and better than my script because it works at the raw level.
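In case anyone wants to try the same idea, here is a crude sketch of how fake exposures can be generated from a single 16-bit TIFF and fused. It is only an illustration (not my actual script); it assumes roughly linear data and uses ImageMagick:

# Generate pseudo exposures from one 16-bit TIFF and fuse them
convert base.tif -evaluate Multiply 0.5 under.tif   # simulate roughly -1 EV
convert base.tif -evaluate Multiply 2.0 over.tif    # simulate roughly +1 EV
enfuse -o fused.tif under.tif base.tif over.tif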

Cheers,

The version of RT from the Open Build Service for Debian has been working very reliably for me for a long time and gets timely updates. The version in the Debian repo is ancient!

Hi there,
no problem. Here is the PP3. sthlm.tif.pp3 (5.8 KB)

It has been some time since I worked on it. Read the tooltip for tonemapping in RT carefully. It basically explains what is needed to get it working with CIECAM02.

HTH,
Flössie!

Mica, that’s a good hint. For others it could be helpful if you post the relevant sources.list lines here. As for me… Well, I’ve decided to spend a bit of my time on RT every now and then, so I’m building directly from git. :grin:

I’ve installed it following the instructions from http://rawtherapee.com/downloads

which are currently:

echo 'deb http://download.opensuse.org/repositories/home:/rawtherapee/Debian_8.0/ /' >> /etc/apt/sources.list.d/rawtherapee.list 
apt-get update && apt-get install rawtherapee

@paperdigits thanks a lot for the link! I was actually looking for something like this. I will install the updated version asap. Am also using regular Debian and I look forward to all the bug fixes :smiley:

You are welcome, but you should really thank the repo maintainer for doing the packaging work, the updates usually come within 2 days of me seeing a post here. :slight_smile:

Yeah, I’m looking forward to installing it later today :smiley:

Microsoft does it all:

- Support for input images with 8 or 16 bits per component
- Ability to read raw images using WIC codecs (I’ve tried this; you can read RAW files directly)
- Automatic exposure blending
- Wide range of output formats, including JPEG, JPEG XR, Photoshop, TIFF, BMP, PNG, and Silverlight Deep Zoom

The default tone mapping in RT doesn’t shift colours towards the usual tone-mapped look unless CIECAM02 is enabled, though it does brighten images markedly.

There are several approaches to tone mapping, both for a natural look and otherwise. There was a very capable photographer called Colin who used to hang around on cambridgeincolour helping people with post-processing. He was very much a Photoshop person. I think he would have suggested different exposures for the shot posted by the OP.

More than 3 shots can be taken, and if the exposures are right, shadow and highlight compression etc. may not be needed. Generally 6 clean stops can be expected even from a camera JPEG, and circa 10 from raw with some work; some would say more from raw. Some people try HDR using +/-1 EV, which on a decent camera is likely to be pointless, so it all comes down to what the plus and minus should be, whether they should even be equal, and whether more than 3 shots are needed. Then there is the question of how bright the dark parts need to be to get the overall effect that is wanted; things can go as far as making everything look evenly lit.
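As a rough back-of-the-envelope illustration (the numbers are only assumptions): if the scene spans about 14 stops and a single raw frame gives you about 10 usable stops, then 3 shots need to be spaced roughly (14 - 10) / (3 - 1) = 2 EV apart, i.e. -2 / 0 / +2, whereas a +/-1 bracket adds almost nothing that a single frame couldn’t capture.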

I’ve been more interested in HDR since I started shooting mirrorless, as the clipping can be seen. I sometimes take 2 shots of things to subsequently merge. :slight_smile: Might get a decent image worth finishing one day. I’ve also played around with the idea. I managed to lose my processed photos, so I can’t post any examples.

I think the shots needed at least another stop to bring up the shadows - :flushed: forgot to mention that.

Hope you don’t mind, but I’m going to post a bit of a rework of the shot here

As an example of how easy it is to use a particular feature most pp software has. :slight_smile: Not that we will get it.

John