Aside from your request, I admit that due to my limited knowledge I don't see the advantage of two pairs of HDRs merged afterwards versus one HDR. Would you mind explaining the foundation of this technique in more detail, or could you link me to some introductory pages? I would appreciate it very much.
The image in the middle uses HDRMerge with SuperResolution; it shows the same detail and slightly less noise (smoother transitions) than pure Photoshop.
I believe that if HDRMerge adopts SuperResolution, the results can be even better.
Personal thoughts on the subject: I have followed the “superresolution” posts here and the conclusion I drew is that it leads to poor quality results and so is a waste of time. HDRMerge is a neat and small program which does one thing and it does it well. I would classify adding this sort of feature as bloat.
Median stacking is a separate and valid technique though with extremely narrow applicability.
Median stacking with pixel shifting (hand-held shots) - SuperResolution - gives great results in terms of detail. It's perfect for landscapes and shots without moving subjects.
Plus, there isn’t any app at the moment that takes several RAW files and upscales them to a RAW output.
That's why I believe that having the option in HDRMerge would be great.
I previously uploaded the result of a very simple edit, with great results.
Could you point out any examples on why you believe that SuperResolution leads to poor quality results?
I’ve tried superresolution and one big issue is trees. Trees have branch intersections that move dramatically when the branches themselves move very slightly, and that really screws up stacking. Plus they sway as a whole a surprising amount. You’ve gotta have a perfectly still day to make much of superresolution from anything with trees outdoors.
“SuperResolution” sounds like a thing, but it isn’t. There are many techniques for increasing resolution, very different to each other (1) (2) (3) etc. In a technical discussion, something which could mean anything means nothing.
You’re asking for two things: upscaling and subsequent stacking using a median or linear blend.
Those things require perfectly aligned images. Not easy with various raw formats.
Usability of such a feature is severely limited: the scene must be static, and if raw image alignment is not implemented then the shots would also need to be perfectly registered.
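To make the request concrete, here is a minimal numpy sketch of the two steps being asked for: upscale each frame, then stack with a per-pixel median. It assumes the frames are already perfectly aligned and demosaiced to plain arrays (exactly the hard part the alignment caveat above is about); the function names are my own, not anything in HDRMerge.

```python
import numpy as np

def upscale_nearest(img, factor=2):
    """Nearest-neighbour upscale by an integer factor (illustration only;
    a real implementation would use a proper interpolation kernel)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def median_stack(frames, factor=2):
    """Upscale each (already aligned) frame, then take a per-pixel median."""
    ups = [upscale_nearest(f, factor) for f in frames]
    return np.median(np.stack(ups, axis=0), axis=0)

# Synthetic demo: the same scene captured nine times with random noise.
rng = np.random.default_rng(0)
base = rng.uniform(0, 1, size=(8, 8))
frames = [base + rng.normal(0, 0.05, base.shape) for _ in range(9)]
result = median_stack(frames)
print(result.shape)  # (16, 16)
```

The median (rather than a mean) is what rejects outliers such as a bird crossing one frame, which is why it pairs naturally with burst shooting.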
Feature requests for blending and alignment have been open for ages; no one is working on them.
HDRMerge has two developers, both heavily involved in other things. I don’t count myself, as I just fixed one bug and will update its website.
Separate from the discussion of whether or not “SuperResolution” can/should be included in HDRMerge, here’s my Pentax K-3 II pixel shift test.
I used Pentax Digital Camera Utility (PDCU) 5.4. No lens correction or other changes were applied. For the non-pixel shift shot, I used PDCU’s standard sharpening mode (the fine sharpening mode wasn’t helping in the same way it does with pixel shift shots and seemed to be having an undesirable effect), and applied as much of this standard sharpening as I could before angled edges got very obviously pixelly/jaggy.
Whole-shot preview, just for context:
1:1 crops (you’ll need to click to zoom to show the full difference as these images are wider than the content column here in Discourse):
SuperResolution/pixel shift as I think of it doesn’t result in final images which have greater pixel dimensions, but images which are of the same pixel dimensions but with greater definition and less noise due to the combining of multiple shots. I’m not sure why upscaling would be necessary/desired? Is interpolating (substantially more) useful in that workflow in addition to merely combining and averaging shots at their original PPI?
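The noise-reduction part of combining shots at their original pixel dimensions is easy to demonstrate: averaging N frames of a static scene cuts random noise by roughly sqrt(N). A small numpy sketch with synthetic data (the scene and noise levels are made up for illustration):

```python
import numpy as np

# Averaging N noisy shots of the same static scene reduces random noise
# by roughly sqrt(N); the pixel dimensions stay the same.
rng = np.random.default_rng(1)
scene = rng.uniform(0, 1, size=(64, 64))            # "true" static scene
shots = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(16)]

single_noise = np.std(shots[0] - scene)
merged_noise = np.std(np.mean(shots, axis=0) - scene)
print(single_noise / merged_noise)  # roughly 4 for N = 16
```

Whether upscaling before the merge buys additional real detail on top of this is exactly the open question in this thread.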
Thank you for the excellent images to illustrate the results of pixel shifting and stacking.
From my experience/tests using upscaled images increases the definition even further.
@DavidOliver I love my camera, but damn, that's a nice feature to have! What are the downsides of enabling it? How fast does it do the pixel shifting, and how does it impact exposure time?
Even without a pixel-shift-enabled camera, we can get similar results with hand-held shots on a normal camera.
We just have to use a burst shot, align the stack and median blend to get the improved detail.
That’s why a SuperResolution workflow is something very interesting to bring to HDRMerge.
- four separate exposures, quadrupling overall exposure time (not good for moving subjects)
- currently, it complicates processing, as the RAW processor needs to be able to process the resultant single Pentax RAW file, which includes four separate images. darktable doesn't do this, so I would have to make a TIFF using either Pentax's RAW processor or a modded version of dcraw, both of which can help with artefacts caused by movement between the individual shots, for importing into darktable/GIMP/whatever.
Doing it more manually, as @Helder.Vicente describes, avoids the non-standard four-in-one RAW file problem, of course.