What's your workflow for up-scaling images?

If you are taking the images yourself, you have one good option: taking multiple images and stitching them together. The open-source tool for this would be Hugin. This works, but it can be hard to pull off for non-static subjects. Fun fact: you can also lower noise or simulate a larger aperture that way.

Now as far as adding resolution to an existing image is concerned, it can’t really be done. In essence, algorithms are either blurry or make stuff up. If the image is suitably vector-like, automated or manual tracing (Inkscape would be the open-source tool) can work well.

If it’s not a vector, you can upscale the image (by a factor of 2 per side, if possible), sharpen it, and then add some noise/grain to give it some fake detail. It’s not quite as bad an idea as it sounds, but obviously far from good.
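
If you want to try that quickly, here’s a minimal sketch with ImageMagick; the exact sharpening and noise amounts are my own guesses, so tune them to taste:

```shell
# Hypothetical one-liner: 2x upscale per side, a mild unsharp mask,
# then a little gaussian grain to disguise the softness.
# The amounts (0x1.5, 0.25) are illustrative only.
convert input.jpg -resize 200% \
        -unsharp 0x1.5 \
        -attenuate 0.25 +noise Gaussian \
        output.jpg
```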

There are other techniques out there that try to synthesize detail, but in my experience they don’t work well.

There seems to be some interesting research going on into using artificial intelligence of sorts (deep convolutional neural networks). This article by Flipboard is relatively accessible. That said, I haven’t seen any production-ready implementation of this, and I suspect it will create really strange artifacts. But we’ll see.

This is the Brenizer method, right?

For those not familiar, there are a lot of resources about it with good examples.


I’m on mobile, so I’ll be back with more detail.

If your scene is static, you can take multiple handheld shots and manually create a super-resolution image, similar to how the sensor-shift tech works on Hasselblads and the new Olympus E-M5 II.

  1. Take a bunch of handheld images of your scene (burst mode is good for this).
  2. Scale images up (200% width and height).
  3. Align images (Hugin’s align_image_stack script).
  4. Mean blend images.
  5. Sharpen a bit.

I’ll try to shoot some samples to demo this idea. I originally read about someone doing this on a post here:


I’ve done this before. Stack them in Hugin, have it output at 2x resolution, and use RL Deconvolution in RawTherapee.

The more pixel-level sharpness you have, the less strong the RL deconvolution you need. My Ricoh GR needs only some; my 60D needs a lot.

I don’t usually do this though because it’s annoying to do.

The Brenizer method specifically refers to simulating a larger aperture, so yep.


Ok, here we go. I am not going to link my full-sized files. (Normally I would).
The reason is that they’re 32-megapixel TIFF images (~100MB each).

I used my iPhone 6+ to shoot a burst of test images. Here is one of them:

I’ve marked two areas in red for comparison of what we’re doing.

I used the align_image_stack script that comes bundled with Hugin, plus ImageMagick’s mogrify (I also used a custom G’MIC script I got from @David_Tschumperle for mean-averaging videos).

I posted the steps earlier, but here’s the general workflow for static-subject images:

  1. Shoot a bunch of handheld images in burst mode (if available).
  2. Develop raw files if that’s what you shot.
  3. Scale images up to 4x resolution (200% in width and height). Straight nearest-neighbor type of upscale is fine.
    • In your directory of images, create a new sub-directory called resized.
    • In your directory of images, run mogrify -path ./resized -resize 200% *.jpg if you use JPGs; otherwise change as needed.
      This will create a directory full of upscaled images.
  4. Align the images using Hugin’s align_image_stack script.
    • In the resized directory, run /path/to/align_image_stack -a OUT file1.jpg file2.jpg file3.jpg ... fileX.jpg
      The -a OUT option will prefix all your new images with OUT.
    • I move all of the OUT* files to a new sub-directory called aligned.
  5. In the aligned directory, you now only need to mean average all of the images together.
    • Using ImageMagick: convert OUT*.tif -evaluate-sequence mean output.bmp
    • Using G’MIC: gmic video-avg.gmic -avg \"*.tif\" -o output.bmp
      I’ll attach the .gmic file at the end.

I used 7 burst capture images from an iPhone 6+ (default resolution was 3264 x 2448).
Here are 100% crops of the results.
For the first area marked in red, the image looks like this with straight upscale first, result second:

The second marked area looks like this, before and after:

I would say that if your scene is static, this is a perfect way to increase the resolution significantly (with the added bonus of less noise, as @Jonas_Wagner already pointed out).

The .gmic script I used is here:

video-avg.gmic:

avg :
  -v -
  ({'"$1"'}) -autocrop[-1] 32 -replace[-1] 32,{','} files=@{-u\ @{-1,t}}
  -m "_avg : $""=_file _nb_files=$""#" -_avg $files -uncommand _avg -rm[-1]
  -repeat $_nb_files
    file=${_file{$>+1}}
    -v + -e[] "\r - Image "{1+$>}/$_nb_files" ["$file"]               " -v -
    -i $file -+
  -done -v +
  -/ $_nb_files

Just copy and save the code block above into a file called video-avg.gmic in the same directory where you’ll be mean-averaging your images.


Very nice result. I will try to put the steps into a Lua script for darktable in the next few days.

That is really an interesting technique!

I’ve written a small script that automates the whole procedure using PhotoFlow’s batch processor. For the moment it is just a test, so it is not yet very user-friendly (and not really tested in detail…).

Assuming you have PhotoFlow installed and you are under Linux (I still have to figure out how to do the same under Windows and OSX), all you need to do is unpack the tar file in a folder of your choice and copy the RAW files of the images to be averaged into the input sub-folder.

Then you can type

./superresolution.sh

and wait for the

output/averaged.tif

file to appear.

The script will develop each input RAW file (using some default parameters like CAMERA WB and sRGB output) and upscale it by 200%.

Then it will use align_image_stack (part of the Hugin suite) to align the images, and will generate an output/averaged.pfi file that loads each aligned image and blends it with the previous ones at decreasing opacity.

Finally, the output/averaged.pfi file is saved into output/averaged.tif
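
The decreasing-opacity blending works out to an ordinary mean: if the k-th image is blended over the accumulated result at opacity 1/k, the final result equals the average of all frames. A quick numeric check of that identity (my own illustration, not part of the script):

```shell
# Blend each new value at opacity 1/k over the running result;
# after the last frame this equals the plain mean of all values.
echo "10 20 30 40" | awk '{
  avg = 0
  for (k = 1; k <= NF; k++)
    avg = avg * (1 - 1/k) + $k / k   # frame k at opacity 1/k
  printf "%g\n", avg                 # prints 25, the mean of 10 20 30 40
}'
```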

If you are interested, I can give you more details on how it works and how it can be tweaked (for example, how to change the RAW development settings).


You may want to consider using uncompressed TIFF as your intermediate format: it is lossless, has good Exif/XMP/IPTC metadata support, and is fast to read and write. Also consider adding --gpu to the align_image_stack command to use your GPU for remapping, if available; that makes the process roughly twice as fast.
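
Combining both suggestions, the alignment step would look something like this (the input file names are placeholders):

```shell
# Align TIFF intermediates, remapping on the GPU when available.
# Aligned outputs are written as OUT0000.tif, OUT0001.tif, ...
align_image_stack --gpu -a OUT IMG_*.tif
```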

As for upscaling the source images, I would not use cubic or linear, not in 2015. Lanczos is a great all-rounder: it does very well at preserving sharp but smooth edges without aliasing or stair-stepping. Jinc is worth trying too; it’s a bit smoother. Of course, each has its own parameters to control it.

You can easily test all resizing algorithms from a console using:

while read -r filter; do convert test.tif -filter "${filter}" -resize 200% "test_${filter}.tif"; done < <(convert -list filter)

Shit in = shit out. It’s very important that you start off with appropriately corrected images: a good, smooth demosaicing algorithm, with corrected chromatic aberration, purple fringing, distortion, hot pixels, etc. Sharpen slightly, if at all.


[gmic] *** Error in ./avg/ (file ‘video-avg.gmic’, line #3) *** Item substitution ‘{-u @OUT0000.tif,OUT0001.tif,OUT0002.tif}’: Unrecognized item ‘u @OUT0000.tif,OUT0001.tif,OUT0002.tif’ in expression ‘-u @OUT0000.tif,OUT0001.tif,OUT0002.tif’.

:frowning:


@David_Tschumperle recently updated that script for me for more recent G’MIC. He may have even included the mean averaging in the core?

Yes, it is now included by default: http://gmic.eu/reference.shtml#average_video

You can use it like this:

$ gmic -average_files \"*.jpg\" -o avg.jpg

Oh, cool! Had I known this script existed when I started fooling with super-resolution a few months back, it would have made life much easier, lol! Glad to know of it now, though!

@Isaac

when I started fooling with super-resolution

Sounds interesting! Could you please tell us more (presumably in a thread of its own)?


Yes, please :grin:. I tried it myself recently with pictures of the moon, but never got a reasonable result; I always had problems with the alignment.

@Isaac, @Claes, @chris, have y’all seen the old post about this on our blog?


IIRC your article was the reason I tried it :smiley:. However, I have a serious dispute with Hugin and its (pano)tools: neither panoramas nor stacking ever worked reasonably for me. But it may be the input data and not Hugin’s fault.

If you’re not getting good results from the auto-align, switch the user interface to expert mode, then manually assign some control points and see if it improves.


Thanks for the tip, but unfortunately I already did that; Hugin starts in expert mode for me (not for the moon shots, where I stuck to align_image_stack but played with command-line parameters). Actually, the first thing I do when a piece of software offers some kind of advanced/expert mode is enable it :smiley:. It really could be the input data: some panos were shot handheld, and the moon photos I got from a friend, so I don’t know exactly what he did. But I don’t want to hijack another thread again (it could become a habit), so I’ll come back next time I have problems with it.

I’ve been trying to do some cylindrical panoramas with my 24mm Sigma lens, and I’m finding about the same as you with regard to Hugin; precision when capturing seems to be the name of the game.
