Softness and Superresolution

![](upload://aY60bpZT7UItEy5ntYJJ4FzgTn4.jpeg)


Experimenting and Clarifying

A small update on how things are progressing (hint: well!) and some neat things the community is playing with.

I have been quiet these past few weeks because, apparently, I decided I didn’t have enough to do and thought a rebuild/redesign of the GIMP website would be fun. Well, it _is_ fun, and something that couldn’t hurt to do, so I stepped up to help out.

A Question of Softness

There was a thread recently on a certain large social network in a group dedicated to off-camera flash. The thread was started by someone with the comment:

The most important thing you can do with your speed light is to put some rib [sic] stop sail cloth over the speed light to soften the light.

This just about gave me an aneurysm (those who know me and lighting can probably understand why). Despite some sound explanations of why this won’t “soften” the light, there was a bit of back and forth about it. To make matters worse, even after over 100 comments, nobody had bothered to just go out and shoot some sample images to see for themselves.

So I finally went out and shot some examples to illustrate the point, and figured they would be more fun if they were shared (I did actually post these on our forum).

I quickly set up a light stand with a YN560 on it, pointed at my garden statue. I then took a shot with bare flash, one with diffusion material pulled over the flash head, and one with a 20” DIY softbox attached.

Here’s what the setup looked like with the softbox in place:

![Soft Light Test - Softbox Setup|640x480](upload://dLjWUfOrUqZpdJWc40fbKJtVYGD.jpeg)
Simple light test setup (with a DIY softbox in place).

Remember, this was done to demonstrate that simply placing some diffusion fabric over the head of a speedlight does nothing to “soften” the resulting light:

![Softness test image bare flash|640x640](upload://sJaTFp8s0lgzcVXC47cq2N477OW.jpeg)
Bare flash result. Click to compare with diffusion material.

Comparing the bare-flash and diffusion-material shots shows clearly that fabric over the flash head does nothing to affect the “softness” of the resulting light.

For a comparison, here is the same shot with the softbox being used:

![Softness test image softbox|640x640](upload://jNkilETbwfDjHYAvldC3NgMjGra.jpeg)
Same image with the softbox in place. Click to compare with diffusion material.

I also created some crops to help illustrate the difference up close:

![Softness test crop #1|640x640](upload://ecW2NgkqSWlzo4zNBu3TII5C57w.jpeg)
Click to compare: Bare Flash / With Diffusion / With Softbox
![Softness test crop #2|640x640](upload://i4W6BdCvsHhv6sABggvYo4dv0s5.jpeg)
Click to compare: Bare Flash / With Diffusion / With Softbox

Hopefully this demonstration can help put to rest any notion of softening a light through close-set diffusion material (at not-close flash-to-subject distances). At the end of the day, the “softness” quality of a light is a function of the apparent size of the light source relative to the subject. (The sun is the biggest light source I know of, but it’s so far away that its quality is quite harsh.)
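To put rough numbers on “apparent size” (these are back-of-the-envelope figures with an assumed 1 m flash-to-subject distance, not measurements from the test above): the angular size of a source is roughly its diameter divided by its distance from the subject.

$$
\theta \approx \frac{d}{D}, \qquad
\theta_{\text{sun}} \approx \frac{1.39\times10^{6}\,\text{km}}{1.5\times10^{8}\,\text{km}} \approx 0.5^{\circ}, \qquad
\theta_{\text{softbox}} \approx \frac{0.5\,\text{m}}{1\,\text{m}} \approx 28^{\circ}
$$

A flash head of roughly 2″ works out to only about 3° at that same assumed distance, fabric or no fabric over it, which is the whole point: the sock doesn’t change the apparent size, so it doesn’t change the shadow quality.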

A Question of Scaling

On discuss, member Mica asked an awesome question about what our workflows are for adding resolution (upsizing) to an image. There were a bunch of great suggestions from the community.

One of them I wanted to talk about briefly, because I thought it was interesting from a technical perspective.

Not too long ago, both Hasselblad and Olympus announced cameras that can drastically increase the resolution of their images by using “sensor-shift” technology: the sensor is shifted by a pixel or so while shooting multiple frames, and the results are then combined into a much larger megapixel image (200MP in the case of Hasselblad, and 40MP for the Olympus).

It turns out we can do much the same thing manually by burst shooting a series of images while handholding the camera (the subtle movement of our hands while shooting provides the requisite “shift” of the sensor). Then we simply upscale the images, align them, and average the results to get a higher-resolution image.

The basic workflow uses Hugin’s align_image_stack, ImageMagick’s mogrify, and a G’MIC mean-blend script to achieve the results (the steps are tied together in a small script after the list below).

  1. Shoot a bunch of handheld images in burst mode (if available).
  2. Develop the raw files, if that’s what you shot.
  3. Scale the images up to 4x the resolution (200% in width and height). A straight nearest-neighbor type of upscale is fine.
    • In your directory of images, create a new sub-directory called `resized`.
    • In the same directory, run `mogrify -scale 200% -format tif -path ./resized *.jpg` if you’re using JPGs; otherwise adjust as needed. This will create a directory full of upscaled images.
  4. Align the images using Hugin’s align_image_stack tool.
    • In the `resized` directory, run `/path/to/align_image_stack -a OUT file1.tif file2.tif ... fileX.tif`. The `-a OUT` option will prefix all of the newly aligned images with OUT.
    • I move all of the `OUT*` files to a new sub-directory called `aligned`.
  5. In the `aligned` directory, you now only need to mean-average all of the images together.
    • Using ImageMagick: `convert OUT*.tif -evaluate-sequence mean output.bmp`
    • Using G’MIC: `gmic video-avg.gmic -avg \" *.tif \" -o output.bmp`
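
If it helps, here is a minimal shell sketch that just strings those steps together (using the ImageMagick variant of step 5). It assumes the burst is sitting in the current directory as JPGs, and that mogrify, align_image_stack, and convert are all on your PATH; the directory names and the OUT prefix are simply the ones from the list above, so adjust as needed:

```bash
#!/bin/bash
# Handheld super resolution: upscale, align, then mean-average a burst of frames.
set -e

mkdir -p resized aligned

# Step 3: upscale 200% in each dimension without interpolation (pixel replication only).
mogrify -scale 200% -format tif -path ./resized *.jpg

# Step 4: align the upscaled frames; align_image_stack writes OUT0000.tif, OUT0001.tif, ...
cd resized
align_image_stack -a OUT *.tif
mv OUT*.tif ../aligned/

# Step 5: mean-average the aligned frames into the final result.
cd ../aligned
convert OUT*.tif -evaluate-sequence mean output.bmp
```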

I used 7 burst capture images from an iPhone 6+ (default resolution 3264x2448). This is the test image:

![Superresolution test image|640x480](upload://gBEKoGaNjFraMu3OJC57dXsqzm4.jpeg)
Sample image, red boxes show 100% crop areas.

Here is a 100% crop of the first area:

![Superresolution crop #1 example|500x250](upload://xg6wTf3b096et2vxsW4OqYJcTdv.jpg)
100% crop of the base image, straight upscale.
![Superresolution crop #1 example result|500x250](upload://MviQ8vv8Ssi05zDkZ8BXFYUzjh.jpg)
100% crop, super resolution process result.

The second area crop:

![Superresolution crop #2 example|500x250](upload://adqN5LKhmMXAzR4yQgdreM7m3jG.jpg)
100% crop of the base image, straight upscale.
![Superresolution crop #2 example result|500x250](upload://8GA4WOY9laBb00vMqgDDnTCpd3h.jpg)
100% crop, super resolution process result.

Obviously this doesn’t replace the ability to have that many raw pixels available in a single exposure, but if the subject is relatively static this method can do quite well to help increase the resolution. As with any mean/median blending technique, a nice side-effect of the process is great noise reduction as well…
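
As a rough rule of thumb (assuming the noise in each frame is uncorrelated from shot to shot), averaging N frames knocks the noise standard deviation down by a factor of the square root of N:

$$
\sigma_{\text{mean}} = \frac{\sigma_{\text{frame}}}{\sqrt{N}}
$$

so the 7-frame iPhone stack above should come out roughly √7 ≈ 2.6× less noisy than a single frame.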

I’m not sure if this warrants a full article post, but I may consider it for later.


This is a companion discussion topic for the original entry at https://pixls.us/blog/2015/09/softness-and-superresolution/

Actually, there is more to it than that. It also depends on the material: a highly reflective material (think of the coating on street signs, for instance) will depend a lot more on the angle of the light than on its apparent size. A similar thing can be observed when using reflective vs. diffuse reflectors.

Also, the close diffusion material, as you call it, does often already make the light source both slightly bigger and more diffuse. By that I mean the light is less focused/directional. If it’s more diffuse, more light gets scattered around the surroundings, and if there is something to reflect it, that light could still end up bouncing onto the subject and into the camera. Try sticking a diffuser on a highly focused flashlight for an example of this.

So I don’t think the stick-on diffusers are total hogwash, but they are not very effective either.


The best way to think about this is to imagine using a perfect mirror as a bounce into the subject. That would have the same effect as simply pointing the flash at the subject from the appropriate distance (really a specular vs. diffuse conversation at that point), as compared to using a big white panel instead.

By diffusion material, I should point out that I simply used a rubber band to hold the material over the flash head. So slightly bigger = 0.1% larger, most likely (you get the idea). Less focused, yes, there is some scattering. I maintain that once you move that out to a usable distance, the end result is the same as using a bare flash. Any rays the subject sees from it are still coming from a small area (and with diffusion all you’re really doing is dropping power and throwing rays away into the environment). :smile:

I should get one of those lightsphere/omnibounces and do a test outside with it as well. Just for completeness…

I found a minor change that should be made: the command I originally gave in step 3 uses an interpolative upscale mode in ImageMagick. The better way to go is to upscale without interpolating:

`mogrify -scale 200% -format tif -path ./resized *.jpg`

This will upscale without creating new colored pixels, just larger blocks of the same color.

Then proceed as usual.
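
If you want to see the difference for yourself, the two enlargement modes can be compared side by side (the file names here are just placeholders):

```bash
# -resize pushes the image through a resampling filter, so the enlarged
# file contains new, interpolated pixel values.
convert input.jpg -resize 200% upscaled-interpolated.tif

# -scale magnifies by pixel replication: no new colors, just larger
# blocks of the original pixel values.
convert input.jpg -scale 200% upscaled-blocky.tif
```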

Hi Pat,
Have you tried median averaging, instead of mean? If you did, what are the pros/cons for both types of averaging?
-Sebastien

Median averaging can lead to strong posterization when you’re in low noise situations because it completely eliminates dithering, while the mean remains floating and you can add in the required dither later. But in noisy situations it quells noise better.
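
If anyone wants to compare the two on their own stack, the step 5 ImageMagick command only needs its operator swapped (same aligned files assumed):

```bash
# Median blend of the aligned frames, instead of the mean.
convert OUT*.tif -evaluate-sequence median output-median.bmp
```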

I wish there were some example images, to be able to check how this works.


I usually put on some Barry White, makes the light softer instantly…
…I heard alcohol also helps sometimes.


Nice article, @patdavid.


@bazza what do you mean? You’d like to have some example images? Also, for which thing - the apparent light softness or the super-resolution stuff?

Some images to try it with.