I’ve marked two areas in red for comparison of what we’re doing.
I used the align_image_stack script that comes bundled with Hugin, and ImageMagick’s mogrify (I also use a custom G’MIC script I got from @David_Tschumperle for mean averaging videos).
I posted the steps earlier, but here’s the general workflow for static-subject images:
Shoot a bunch of handheld images in burst mode (if available).
Develop raw files if that’s what you shot.
Scale images up to 4x resolution (200% in width and height). A straight nearest-neighbor upscale is fine.
In your directory of images, create a new sub-directory called resized.
In your directory of images, run mogrify -path ./resized -resize 200% *.jpg if you use JPEGs; otherwise change the extension as needed.
This will create a directory full of upscaled images.
Align the images using Hugin’s align_image_stack script.
In the resized directory, run /path/to/align_image_stack -a OUT file1.jpg file2.jpg file3.jpg ... fileX.jpg
The -a OUT option will prefix all your new images with OUT.
I move all of the OUT* files to a new sub-directory called aligned.
In the aligned directory, you now only need to mean average all of the images together.
Using ImageMagick: convert OUT*.tif -evaluate-sequence mean output.bmp
Using G’MIC: gmic video-avg.gmic -avg \"*.tif\" -o output.bmp
I’ll attach the .gmic file at the end.
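Putting the steps above together, the whole thing can be sketched as a single shell script. This is just an illustrative sketch, assuming JPEG input and that mogrify and align_image_stack are on your PATH; adjust paths and extensions to taste:

```shell
#!/bin/sh
# Sketch of the burst super-resolution workflow described above.
set -e

mkdir -p resized aligned

# 1. Upscale every image to 200% in each dimension (nearest-neighbor is fine).
mogrify -path ./resized -resize 200% ./*.jpg

# 2. Align the upscaled images; -a OUT prefixes the aligned outputs with "OUT".
cd resized
align_image_stack -a OUT ./*.jpg
mv OUT* ../aligned/

# 3. Mean-average the aligned frames into the final image.
cd ../aligned
convert OUT*.tif -evaluate-sequence mean output.bmp
```

Since this is just a chain of external commands, it needs ImageMagick and Hugin installed to actually run.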
I used 7 burst capture images from an iPhone 6+ (default resolution was 3264 x 2448).
Here are 100% crops of the results.
For the first area marked in red, the image looks like this with straight upscale first, result second:
The second marked area looks like this before and after:
I would say that if your scene is static, this is a perfect way to increase resolution significantly (with the added bonus of less noise, as @Jonas_Wagner already pointed out).
I’ve written a small script that automates the whole procedure using PhotoFlow’s batch processor. For the moment it is just a test, so it is not yet very user friendly (and not really tested in detail…).
Assuming you have photoflow installed and you are under Linux (I still have to figure out how to do the same under Windows and OSX), all you need to do is to unpack the tar file in some folder of your choice, and copy the RAW files of the images to be averaged into the input sub-folder.
Then you can type
./superresolution.sh
and wait for the
output/averaged.tif
file to appear.
The script will develop each input RAW file (using some default parameters like CAMERA WB and sRGB output) and upscale it by 200%.
Then it will use align_image_stack to align each image (this command is part of the Hugin suite), and will generate an output/averaged.pfi file to load each aligned image and blend it with the previous ones at decreasing opacity.
Finally, the output/averaged.pfi file is saved into output/averaged.tif
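On the "decreasing opacity" blending: layering the i-th image over the accumulated result at opacity 1/i is arithmetically equivalent to a plain mean of all the images. A tiny awk demo with made-up sample values (not real pixel data) illustrates this:

```shell
#!/bin/sh
# Blending each new value over the running result at opacity 1/i
# (acc = acc * (1 - 1/i) + x_i * 1/i) reproduces the plain mean.
echo "10 20 30 40" | awk '{
  acc = $1
  for (i = 2; i <= NF; i++) acc = acc * (1 - 1/i) + $i * (1/i)
  mean = 0
  for (i = 1; i <= NF; i++) mean += $i / NF
  printf "iterative=%g mean=%g\n", acc, mean
}'
# prints: iterative=25 mean=25
```

This is why the script can blend each aligned image onto the previous ones one at a time instead of holding all of them in memory at once.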
If you are interested, I can give you more details on how it works and how it can be tweaked (for example, how to change the RAW development settings).
You may want to consider using uncompressed TIFF as your intermediate format as it is lossless, it has good Exif/XMP/IPTC metadata support and is fast to read and write. Also consider adding --gpu to the align_image_stack command to make use of your GPU for remapping, if available, which results in the process being roughly twice as fast.
As for upscaling the source images, I would not use cubic or linear, not in 2015. Lanczos is great all-round, it does very well at preserving sharp but smooth edges without aliasing or stair-stepping. Jinc is worth trying too, it’s a bit smoother. Of course each has its own parameters to control it.
You can easily test all resizing algorithms from a console using:
while read -r filter; do convert test.tif -filter "${filter}" -resize 200% "test_${filter}.tif"; done < <(convert -list filter)
Shit in = shit out. It’s very important that you start off with appropriately corrected images: a good, smooth demosaicing algorithm, with corrected chromatic aberration, purple fringing, distortion, hot pixels, etc. Sharpen slightly, if at all.
Oh, cool! Had I known this script existed when I started fooling with super resolutions a few months back, it would have made life much easier, lol! Glad to know of it now, though!
IIRC your article was the reason I tried it. However, I have serious issues with Hugin and its (pano)tools… neither panoramas nor stacking ever worked reasonably for me. But it may be the input data and not Hugin’s fault.
If you’re not getting good results from the auto align, switch the user interface to expert mode then manually assign some points to see if it improves.
Thanks for the tip, but unfortunately I already did that; Hugin starts in expert mode for me (not for the moon shots, where I stuck to align_image_stack but played with the command-line parameters). The first thing I do when software offers some kind of advanced/expert mode is enable it. And it really could be the input data: some panos were shot handheld, and the moon photos I got from a friend, so I don’t know exactly what he did. But I don’t want to hijack another thread again, it could become a habit, so I’ll come back next time I have problems with it.
I’ve been trying to do some cylindrical panoramas with my 24mm Sigma lens, and I’m finding about the same as you with regard to Hugin; precision when capturing seems to be the name of the game.