This is my first attempt to fully process an image using FOSS, from shooting (CHDK) to developing (Hugin/Luminance HDR, PhotoFlow).
It was meant to be an HDR panorama, following the steps in this post, but since I shot hand-held, I had alignment issues.
So I gave up on the HDR workflow and took the 0 EV frames, aligned them with align_image_stack, then created two images from each, underexposed and overexposed by one stop.
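For reference, ±1-stop copies can be faked from a single frame on the command line; here is a minimal ImageMagick sketch (not necessarily what gadolf did — and note that one stop is exactly a factor of two only in linear data; gamma-encoded TIFFs would need a linearising round-trip first):

```shell
# Make a -1 EV and a +1 EV copy of the 0 EV frame by halving/doubling pixel values.
# Assumes frame0.tif holds linear data; file names are placeholders.
convert frame0.tif -evaluate Multiply 0.5 frame_minus1.tif
convert frame0.tif -evaluate Multiply 2.0 frame_plus1.tif
```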
Then I used PhotoFlow to merge them together (ground and sky).
This is the result:
@gadolf I am glad that you posted this. As I said in the other thread, it is a nice photo. Nice to look at and probably nice to play with. One thing to improve: drop the raws into the post itself instead of using filebin, which will expire — otherwise future visitors won’t be able to play. If you have trouble with that, please @ any of the mods.
I gave this pano a quick go, merging the brackets before even checking whether it was worth merging them. Since there was little difference between the brackets in each bracketed set, there was very little to gain, and the resulting merged file, after automatic alignment, suffered from some issues.
There were merging artifacts in the clouds:
Furthermore, each source raw file was riddled with dead pixels, so the merged file had as many dead pixels if not more - the extra ones would appear in transition zones where the source images blended one into another.
The lesson is that there is little to gain, and in fact time and quality to lose, by merging shots where the difference between the darkest and brightest image is only 1EV, as is the case here. As such, I would recommend uploading only the darkest shot of each bracketed stack: CRW_3488.DNG, CRW_3491.DNG and CRW_3494.DNG.
It was fun to see what could be squeezed out from raw files from such a tiny compact camera.
I did this in two steps.
The first step was to squash the dynamic range. I suppose I did this more out of habit than necessity. The first reason for squashing before stitching is that some programs don’t handle high-precision (floating-point TIFF or EXR) images well, so you can avoid those issues by stitching 16-bit TIFFs which were already compressed (tone-mapped, fused, manually blended, whatever). The second reason is that some tools require a lot of RAM, and when dealing with high-resolution panoramas (hundreds of megapixels) you might not be able to run certain tools on the whole stitched pano. Neither reason applies here, since this is a small panorama. Here is one of the compressed images before stitching:
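As an aside, those pre-compressed 16-bit TIFFs can be produced from a bracketed set entirely on the command line; a sketch using the tools that ship with Hugin (file names are placeholders, and flags should be checked against your versions):

```shell
# Align one bracketed set, then exposure-fuse it into a single 16-bit TIFF.
align_image_stack -a aligned_ -v bracket1_*.tif   # writes aligned_0000.tif, aligned_0001.tif, ...
enfuse -o fused_bracket1.tif aligned_*.tif        # fused, stitch-ready TIFF
```

Repeat per bracketed set, then stitch the fused TIFFs.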
Thanks for sharing!
Here’s my route: hdrmerge → RT → hugin
I’m not a huge fan of the “HDR look”, so I tried to keep this somewhat natural looking (whatever that means). There are some halos visible if you look at the image scaled down, but I find them acceptable if you look at the pic at full size (think about printing big).
@agriggio Wow, amazing! How did you manage to align them so well? I lost a week or so just trying to get them to fit, and gave up…
If you could post the hdrmerge command you used it would be great.
Also, I don’t quite understand the idea behind your workflow. Did you use hdrmerge only to align the images, then process them individually in RT and only stitch them at the end?
@gadolf I used auto-align and auto-crop in HDRMerge, didn’t manually touch the mask.
HDRMerge doesn’t just align; it’s all or nothing, meaning if you align, you must save the aligned and merged HDR DNG, not the individual frames. However! You could, if you wanted to, then export bracketed TIFFs from the HDR DNG using RawTherapee. So, for example, bracket -4EV, -2EV, 0EV, +2EV, +4EV, merge them using HDRMerge (with auto-align if needed), open the HDR DNG in RawTherapee, then set exposure compensation to -4EV (and enable chromatic aberration correction, dead pixel correction, etc.), save a TIFF, set EC to -2EV, save, 0EV, save, +2EV, save, +4EV, save. Now you have 5 aligned TIFFs which you could, for example, use in Luminance HDR.

Why would you do this, instead of just opening the HDR DNG in that program, or instead of exporting a single 32-bit TIFF or EXR from RT? Several reasons. Storing one losslessly compressed HDR DNG takes up far less disk space than storing five usually uncompressed or poorly compressed original raw files - people who stitch panoramas may find that significant. Another reason could be that some software supports neither HDR DNG files nor 32-bit TIFF or EXR files. And yet another is that some software doesn’t align well and doesn’t prevent ghosting well.
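The merge-then-re-export idea above can be sketched as commands. This is an assumption-laden sketch: the hdrmerge invocation and the pp3 `[Exposure]`/`Compensation` key are from my installs, so check `hdrmerge --help` and a .pp3 saved by your RT version; file names are placeholders.

```shell
# Merge the five brackets into one aligned HDR DNG (alignment/crop are on by default).
hdrmerge -o stack.dng shot1.raw shot2.raw shot3.raw shot4.raw shot5.raw

# Re-export five "virtual brackets" from the single DNG via exposure compensation.
for ev in -4 -2 0 +2 +4; do
    printf '[Exposure]\nCompensation=%s\n' "$ev" > "ec${ev}.pp3"
    # -d applies RT's default raw profile first, -p layers the partial pp3 on top,
    # -t writes TIFF; -c (input) must come last.
    rawtherapee-cli -t -d -p "ec${ev}.pp3" -o "bracket${ev}.tif" -c stack.dng
done
```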
Nope. Unless you consider a 16-bit TIFF to be insufficient to store the data and detail, but that’s not the case here.
@Morgan_Hardwood Actually I was referring to @agriggio’s alignment, since he seems to have used all the frames. When I tried that, I always ended up with alignment issues. That’s why I ended up using the frames from only one EV step.
Anyway, thanks for the elaboration on your HDR workflow. I’ll try that (not sure when) and get back with results.
As did I. Which was a mistake, since there was nothing to gain by doing that, but I didn’t check beforehand.
But aren’t you being too radical in saying that “there was nothing to gain”?
I mean, I agree that three bracketed images one stop apart won’t add much to the dynamic range, but don’t they still add something?
Do you usually shoot HDRs with that range (-4, -2, 0, +2, +4)?
I didn’t check either. I simply used (a recent git version of) hdrmerge with auto-align and auto-crop, as Morgan said. Nothing fancy, really. Then I developed the 3 frames in RT (or rather, I only worked on the central one and then copied the settings to the other two), exported, and quickly stitched with hugin.
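For anyone who wants to script that last step, hugin can also be driven non-interactively; a rough sketch with Hugin’s CLI tools (file names are placeholders, and leaving projection/crop on AUTO is just one choice):

```shell
# Stitch the three developed TIFFs into a panorama from the command line.
pto_gen -o pano.pto frame1.tif frame2.tif frame3.tif    # create a project
cpfind -o pano.pto pano.pto                             # find control points
autooptimiser -a -m -l -s -o pano.pto pano.pto          # optimise geometry + photometrics
pano_modify --canvas=AUTO --crop=AUTO -o pano.pto pano.pto
hugin_executor --stitching --prefix=pano pano.pto       # render (nona + enblend)
```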
I used @partha’s GIMP 2.9.8 Std’s panorama stitcher (well, actually it is the free panorama stitcher from Microsoft) after doing preliminary processing in RT 5.4. I tried to anchor the viewer’s attention on the yellow building at the bottom of the hill by making it prominent!