Östraby Mill - After Sunset

Just after sunset at Östraby Kvarn.

The difficult thing about shooting lovely sunsets is not the shooting but the showing. The extreme variation in the intensity of light between those brilliant golden clouds and the deep moody dark ones makes it difficult to show on a monitor a scene as mesmerizing as the one nature originally displayed. It’s further complicated when shooting a panorama, because while you’re developing any single raw photo before stitching you can’t easily predict what treatment it needs so that some other photo - which has to receive identical treatment for the stitched panorama to be seamless - doesn’t get blown into oblivion or pushed into the shadows. Ideally you would stitch first and ask questions later, and that’s what I did here.

Following on from my “unclipped” tests with 4-legs ( A dog on a rock ), this panorama is made from Sony α7 II raw files converted to flat, unclipped, 32-bit floating-point TIFFs using RawTherapee, stitched in Hugin, and then processed again in RawTherapee to put some dents in the flattitude. Three intermediate versions were saved - ground, field and sky - and those layers were merged in GIMP.
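For anyone who wants to try it, the batch part of that pipeline can be scripted roughly like this. It's only a sketch: the profile name, the file names and the exact CLI flags are assumptions on my part, so check them against your installed RawTherapee and Hugin before trusting them.

```python
#!/usr/bin/env python3
"""Rough sketch of the unclipped panorama pipeline described above.

Tool names exist (rawtherapee-cli, Hugin's CLI tools), but flags, file
names and the .pp3 profile are illustrative - verify against your versions.
"""
import glob
import os
import subprocess

RAWS = sorted(glob.glob("pano/*.ARW"))   # hypothetical source frames
FLAT_PROFILE = "flat-unclipped.pp3"      # hypothetical neutral RT profile
os.makedirs("tiff", exist_ok=True)

# 1. Develop each raw to a flat, unclipped 32-bit float TIFF.
subprocess.run(
    ["rawtherapee-cli", "-o", "tiff/", "-p", FLAT_PROFILE,
     "-t", "-b32", "-c", *RAWS],
    check=True,
)

tiffs = sorted(glob.glob("tiff/*.tif"))

# 2. Build and stitch the panorama with Hugin's command-line tools.
subprocess.run(["pto_gen", "-o", "pano.pto", *tiffs], check=True)
subprocess.run(["cpfind", "--multirow", "-o", "pano.pto", "pano.pto"], check=True)
subprocess.run(["autooptimiser", "-a", "-m", "-l", "-s",
                "-o", "pano.pto", "pano.pto"], check=True)
subprocess.run(["hugin_executor", "--stitching",
                "--prefix=stitched", "pano.pto"], check=True)

# 3. The stitched 32-bit float TIFF then goes back into RawTherapee for the
#    ground/field/sky versions, which are blended as layers in GIMP
#    (interactive steps, not scripted here).
```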

16 Likes

This is quite nice.

Did you bracket your exposures?

Yes, I usually bracket - generally 5 shots 2EV apart.

Regarding bracketing, its purpose is to improve signal quality and reduce noise. Do you need it to handle the high dynamic range? Let’s take a look. One of these is bracketed 5 shots 2EV apart, the other is from just one of those 5 shots. Can you tell which is which?
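(To put rough numbers on "improve signal quality": a back-of-the-envelope sketch, assuming shot-noise-limited shadows and that the 5 shots are spaced -4 to +4 EV - the spacing is my assumption, not stated above.)

```python
# Back-of-the-envelope: what a 5-shot, 2 EV bracket buys over one frame,
# assuming shot-noise-limited shadows and a -4..+4 EV spread (assumptions).
import math

bracket_ev = [-4, -2, 0, 2, 4]                 # exposure offsets of the 5 shots

extra_highlight_headroom = -min(bracket_ev)    # stops protected by the darkest frame
shadow_light_gain = 2 ** max(bracket_ev)       # 16x more photons in the +4 EV frame
shadow_snr_gain = math.sqrt(shadow_light_gain) # shot-noise SNR scales as sqrt(signal)

print(f"Highlight headroom gained: {extra_highlight_headroom} EV")
print(f"Shadow SNR improvement:    ~{shadow_snr_gain:.0f}x "
      f"(~{math.log2(shadow_snr_gain):.0f} EV cleaner shadows)")
```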

1 Like

I’m going to guess that 2.jpg is the bracketed shot and 1.jpg is the single. But there isn’t much difference, though there is some shadow noise when pixel peeping 1.jpg.

3 Likes

Exactly.

Thanks for sharing. Cheers me up.

Not all cameras have good DR. Mine is old and abysmal.

1 Like

Wow again but…

:hushed:

7 Likes

@s7habo

1.5. Each group of 5 bracketed Sony α7 II ARW files is converted to one HDR DNG file using HDRMerge.
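In case it helps anyone reproduce that step, the merging can also be done from the command line, something along these lines. The grouping logic and the flags are illustrative - check `hdrmerge --help` on your install.

```python
# Hypothetical sketch: each bracketed set of 5 ARW files becomes one HDR DNG.
# Binary name and flags are from memory - verify on your installation.
import glob
import subprocess

raws = sorted(glob.glob("brackets/*.ARW"))

# Group consecutive files into sets of 5 (assumes shooting order and
# complete 5-frame brackets).
groups = [raws[j:j + 5] for j in range(0, len(raws), 5)]

for i, group in enumerate(groups):
    subprocess.run(["hdrmerge", "-b", "32",
                    "-o", f"merged_{i:02d}.dng", *group],
                   check=True)
```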

1 Like

Learning panorama photography…

Me:

@Morgan_Hardwood:

7 Likes

That’s a beautiful image @Morgan_Hardwood.
Glad to see this unclipped processing. I tried a similar approach (maybe wrongly) by saving 16-bit TIFFs using a custom ICC output profile based on ACES-AP1 with linear gamma (made using the RT profile creator).

@sguyader that approach would still result in clipped shadows/highlights. I wrote some documentation, see:

You might succeed with this approach if you applied little or no tone manipulation to the raw images before saving as 16-bit TIFF, as your camera’s saturation point even at 14 bits still leaves 75% (linear) of the data range available to grow into before “hrair”[1]…

[1] Lapine language - Wikipedia
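To spell out that 75% figure (round numbers; the actual saturation point varies per camera and usually sits a bit below 2^14):

```python
# Why a 14-bit raw white point leaves ~75% headroom in a 16-bit integer TIFF.
raw_white = 2 ** 14          # ~16384, where a 14-bit sensor clips
container_max = 2 ** 16 - 1  # 65535, ceiling of a 16-bit integer TIFF

headroom = 1 - raw_white / container_max
print(f"Linear headroom above camera clipping: {headroom:.0%}")  # ~75%
```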

1 Like

Just wanted to mention that your images are always very nice!!

Indeed, I avoided any tone manipulation before exporting.

Yes, this is why I wrote that I probably did it “wrongly”. I’m glad to see the “unclipped” procedure exists, as I love creating panoramas and I often face scenes with a lot of contrast.

Your HDR image seems to have some alignment problems, probably caused by camera movement during the longest exposure of the stack:

This is correctable to some extent if you do your HDR stacking in Hugin.
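For example, something along these lines with align_image_stack, which ships with Hugin. File names and flags here are illustrative - check `align_image_stack -h` for your version.

```python
# Rough sketch: pre-align a bracketed stack of developed TIFFs with Hugin's
# align_image_stack before merging. File names are placeholders.
import subprocess

stack = ["ev-4.tif", "ev-2.tif", "ev+0.tif", "ev+2.tif", "ev+4.tif"]

# -a: write remapped, aligned TIFFs with the given prefix
# -C: crop to the area covered by all images
# -v: print progress
subprocess.run(["align_image_stack", "-v", "-C", "-a", "aligned_", *stack],
               check=True)

# The aligned_0000.tif ... files can then be fused or HDR-merged instead of
# the unaligned originals (e.g. with enfuse, also part of the Hugin suite).
```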

I have wondered about this before: is it possible for HDRMerge to have improved alignment capabilities?

Toyota! But in the case of HDRMerge, improving alignment is almost impossible, as the output also has to have the same CFA array as the input. That makes aligning anything other than horizontal or vertical shifts very hard…
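A toy illustration of why that constraint bites (my own example, nothing to do with HDRMerge's actual code): on an RGGB sensor, a 1-pixel shift changes which colour each photosite position carries, so only even-pixel translations keep the mosaic valid, and rotations are out entirely.

```python
# Shifting an RGGB mosaic by 1 pixel scrambles the colour pattern;
# shifting by 2 keeps it intact.
import numpy as np

pattern = np.array([["R", "G"],
                    ["G", "B"]])
cfa = np.tile(pattern, (3, 3))          # a 6x6 RGGB mosaic

shift1 = np.roll(cfa, 1, axis=1)        # 1-pixel horizontal shift
shift2 = np.roll(cfa, 2, axis=1)        # 2-pixel horizontal shift

print(np.array_equal(cfa, shift1))      # False: colours no longer line up
print(np.array_equal(cfa, shift2))      # True: pattern preserved
```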

Late to the thread, but I had the same observations as Paper Digits: if you pixel peep 1.jpg, the noise in the shadows is evident, but otherwise they are the same. Modern Sony sensors are incredible with their dynamic range.

Strange, I have never had to mess with unclipped 32-bit float when doing panoramas. I generally just compress the tonal range until no shadows or highlights are clipping, export a 16-bit TIFF, and then reintroduce the contrast and local tonal manipulation after stitching. I have found this workflow to be generally sufficient for preserving all the data. If I have already made sure that no highlights are clipping in the brightest shots, and detail is mostly preserved in the deepest shadows, who cares if a few dead pixels in the shadows clip?
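A quick sanity check of why that is usually enough - my arithmetic, assuming linear encoding and a 12-stop scene squeezed into the container without clipping; a gamma or log export spends even more of the range on the shadows than this worst case.

```python
# How many code values the darkest stop still gets in a linear 16-bit TIFF
# holding a 12-stop, unclipped scene (worst-case assumption).
white = 2 ** 16 - 1
stops = 12

darkest_stop_low = white / 2 ** stops          # ~16
darkest_stop_high = white / 2 ** (stops - 1)   # ~32
codes_in_darkest_stop = darkest_stop_high - darkest_stop_low

print(f"Code values left in the darkest stop: ~{codes_in_darkest_stop:.0f}")
```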