Averaged "long exposure" and luminosity-masked edits

Howdy folks! I’m new here, but have been using F/OSS graphics and photo-editing tools for several years now. My main experience with these tools comes from image analysis and manipulation of remotely sensed data (satellite imagery), but I’ve been an amateur photog for a long time too. In the past several months, I’ve been merging these two together by starting to really try to learn photographic post-processing techniques. This is mainly “for fun”, but who knows where I may go with it in the future!

Anyway, with the introductions aside, I’d like to share with you the results of my most recent post-processing experiment. I’ve been following the tutorials by @patdavid on image averaging (from his old blog), and have been fooling around with trying to get a “long exposure” on a really simple camera (aka, my phone). I do have an Olympus E-M10 and ND filters for real long exposures, but thought it would be fun to try to simulate this with a camera phone and post-processing. So yesterday I went down to the “waterfront” here in Tempe, AZ, and experimented shooting the water falling over the edge of the “infinity pool” that’s in front of the Tempe Center for the Arts. My phone is a Nexus 5x, and it has a decent little camera in it. I use the “ProShot” manual camera app because it has intervalometer shooting modes (timelapse), but I probably could have just kept hitting the shutter. Here’s an example of one of the photos:

I let the camera app decide on the shutter speed (it was pretty fast, at least 1/500 I think), but I set the ISO to 60. The camera has a fixed aperture of f/2.0. I took about 30 of these pics, balancing the camera as steadily as possible on my knee. I then used align_image_stack to align them, and blended them together with ImageMagick’s -evaluate-sequence mean method. Here’s that result:

Actually, I also upscaled all the images by 2x with convert’s -resize 200% option, so technically this is also a “super resolution” image (4x the megapixels!!). The whole command is:

# align the shots (outputs TIFFs prefixed "aligned")
align_image_stack -a "aligned" -C -v -s --use-given-order *.jpg
# upscale the aligned TIFFs 2x and average them into one image
convert *.tif -resize 200% -evaluate-sequence mean averaged.jpg

This was all done from JPEGs, as I didn’t feel the need to go to RAW for this experiment. Even so, having just read the very nice tutorial on Luminosity Masking in darktable (thanks @LightSweep!), I thought I could use some of those techniques to enhance the final averaged image. I was able to bring the sky down and the shadows up separately, and then I also applied separate color casts and sharpening. It worked really well! Here’s the final image:

Not too shabby for a quick experiment shot on my cell phone!

Anyway, I’m glad to have found this forum, and thanks to all the devs, admins, and contributors for all the great advice, tutorials, and software that is being made truly free for us all to use! I look forward to being an active member of the community here!


Just a note, if you link to the image file instead of the imgur page, it’ll embed in the forum and we won’t have to click links.

Like this:

Ahh! Thanks @CarVac! I will edit now. Was wondering why I couldn’t get the image markdown to work right…


Looks like a successful experiment to me! :slight_smile:

Welcome and hope we can be helpful!


Thanks @patdavid! Actually, I have a question about this averaging method, which you may be able to answer. When averaging across a sequence, can you use that to increase the bit depth too? What I mean is, say you start with a stack of thirty 8-bit JPEGs, which should have values of 0-255 for the red, green, and blue channels respectively. As you average across the thirty images, does ImageMagick automatically round to the nearest integer, or does it do floating point? I know that there is a method to set the final bit depth with -depth 16 (setting to 16-bit images), but it didn’t seem to have the effect I wanted, at least not with a JPEG output container. I tried using a TIFF output format, but it never rendered the averaged image properly regardless of what bit depth I tried. The file size did change, however (from about 8 MB to about 30 MB), which suggests that, although the final image was unusable, it did indeed have 16-bit pixels. Here’s what that TIFF looked like (re-rendered as a JPEG so I can host it on Imgur):

As you can see, it’s whited out, but it does look like a stack of images. If I can figure out why it looks like that (and correct it), then the increased bit depth and resolution one gets from this image-stacking method would be a pretty incredible way to squeeze very high quality images out of simple point-and-shoot, small-sensor cameras with no RAW output.

I figured it out! It had to do with the way ImageMagick handles alpha transparency with TIFF files. One has to flatten the output against a white background. Here’s the command that worked:

# flatten against white to remove the alpha channel that was washing out the output
convert *.tif -layers trim-bounds -resize 200% -evaluate-sequence mean -background white -flatten -trim -depth 16 averaged.tif

The -layers trim-bounds and -trim options ensure that any white space around the final image is cropped.

The output image looks great and identify averaged.tif yields the following:

averaged.tif TIFF 6192x3354 6192x3354+0+0 16-bit DirectClass 121.5MB 0.000u 0:00.000

A true 16-bit, 20.8-megapixel TIFF file from a stack of thirty 8-bit, roughly 6-megapixel JPEGs! Pretty cool! You can see that the file size is large, however, at around 120 MB.

As proof of concept, I think this is very, very cool.

EDIT: Just for comparison, here’s what identify says about one of the original JPEGs:

identify ProShot_20160420_181403.jpg 
ProShot_20160420_181403.jpg JPEG 3288x1850 3288x1850+0+0 8-bit DirectClass 1.087MB 0.000u 0:00.000

Instead of flattening and all that, which is slow, try replacing *.tif with *.tif[0]
It will use the first layer of each TIFF, which is the one you want.
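
So the earlier command might become something like this (an untested adaptation):

# [0] selects only the first layer of each TIFF, so no flattening is needed
convert *.tif[0] -resize 200% -evaluate-sequence mean -depth 16 averaged.tif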

No. You increase the bit depth manually by using a higher-bit-depth container. For ImageMagick’s convert, you do that using -depth 16. What you put into that container is a different story. You could put an image with 16 bits of meaningful data per pixel per channel into it, you could put data from an 8-bit JPEG, or a 2-bit black-and-white fax scan.

Now there is nothing wrong with using a 16-bit format as your intermediate format so that you don’t lose any data when pumping your image from one (16-bit-capable) program to another. In fact I strongly recommend doing so - use 16-bit uncompressed TIFFs, they have great support for image data and metadata and are fast to read and write. Don’t use JPEG as your intermediate format. But if your intention is to somehow recreate the data you lost when converting your raw photos to 8-bit JPEGs, then you’re out of luck. You should have stayed in (at least) a 16-bit workflow from the start.
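
To illustrate the container-versus-content distinction, a minimal example (shot.jpg is just a stand-in filename):

# the TIFF container is now 16-bit, but the pixel data still has
# only 256 distinct levels per channel -- no information was gained
convert shot.jpg -depth 16 shot16.tif
identify -format "%z-bit %m\n" shot16.tif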


@Morgan_Hardwood

Ah! Thanks for that shortcut! Very helpful!

I believe I’m following your argument about the proper way to use 16-bit throughout a workflow that starts with a high-bit-depth image, and it makes sense to me. But I am curious whether it is possible to reconstruct the information that would be missing from a single 8-bit image by averaging across a large stack of different 8-bit images. The “theory” behind this approach is that each of the, say, thirty different images would have slightly different pixel values, due to slight shifts in lighting and sensor/lens position during the capture of the sequence. I’m talking about taking several images over the course of a few minutes here, not with the camera absolutely fixed and unmovable (a small amount of shake between shots). This technique clearly can result in increased image resolution and sharpness, with decreased noise.

It seems to me that it ought to also allow for increased bit depth, since the “average” of an 8-bit pixel with a value of, say, 155 in the first image, and, say, 156 in the second image, should be 155.5. If you have many more such images in the stack, then the number of significant digits that you could resolve would increase, yes? And so it would be theoretically possible to “create” at least a single-precision floating-point number from a series of 8-bit integers, yes? If these suppositions are correct, then this method stands a chance of working. Of course, one thing your post also brings up is that I’m now not at all sure whether ImageMagick is, in fact, producing such floating-point values in the intermediate image-averaging stage, or whether it is just rounding to the nearest 8-bit integer, which I’m then stuffing into a 16-bit container…

8 shots, one pixel:

RGB=[012, 122, 219]
RGB=[011, 125, 221]
RGB=[015, 129, 220]
RGB=[009, 128, 221]
RGB=[012, 126, 225]
RGB=[010, 128, 223]
RGB=[013, 129, 218]
RGB=[017, 127, 220]

RGB=[12.375, 126.75, 220.875] Mean average
RGB=[12, 127.5, 220.5] Median average
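
One way to sanity-check this with ImageMagick itself is a one-pixel test (a hypothetical sketch; on a Q16 build the txt: output reports values scaled to the 0-65535 range, so the fractional mean survives at 16-bit precision):

# average eight 1x1 images holding the pixel values from the example above
convert xc:'rgb(12,122,219)' xc:'rgb(11,125,221)' xc:'rgb(15,129,220)' \
        xc:'rgb(9,128,221)' xc:'rgb(12,126,225)' xc:'rgb(10,128,223)' \
        xc:'rgb(13,129,218)' xc:'rgb(17,127,220)' \
        -evaluate-sequence mean txt:-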

That gets you more colors, if you are using a 16-bit container, though I doubt these extra colors would be of much use in practice, and the time spent and quality lost while merging shots make it of mainly artistic value.

Image stacking can reduce noise very well, but if one is willing to go to the lengths of taking many crappy shots and performing this technique to make them better, one is usually better off taking more care “upstream”, and usually that is possible. When shooting real estate, one uses a tripod and the lowest real ISO. When shooting concerts, one doesn’t have much choice other than good lenses and a good sensor. When stealing shots of a painting in an art gallery, one is better off just googling a high-quality scan of the painting instead of taking 30 cellphone pics. Practice shows that astrophotography is the only field where average stacking is practically useful.

Focus stacking and exposure bracketing are entirely different, I’m not talking about those.

ImageMagick can perform calculations in double-precision floating point, though I don’t know whether the “standard build” can do that; it might be limited to 16-bit “only”. See the “Quantum” parameter (Q).

In the end, it’s just fun playing around with this.

Cool! I will read up on that for sure.

Yes, fully agree with this statement. Also agree that it is just fun to play around with this! But since my background in this realm is in analysis of remotely sensed imagery and photogrammetry (I’m an archaeologist and a landscape geomorphologist), this kind of thing is also potentially useful in my “day job” too! :wink: One application would be to use relatively low-cost photographic equipment (a basic DSLR or mirrorless camera) to obtain very high resolution images of sediment layers or artifacts, with a very wide range of color values. These could potentially then be analyzed in more sophisticated ways usually reserved for imagery taken with equipment costing many tens of thousands of dollars.

I think you are probably right that this is highly impractical for many photography situations, but it is perhaps useful for those where you have a non-moving subject, and especially when you want to capture great detail and color across a relatively large range of luminances. In addition to the astrophotography that you mention, I could see how this could also be interesting or potentially useful for general landscape photography. Probably not “worth” it unless one is going to print at a very large size, but it could be a way to get great, high-quality landscapes without the greatest equipment! :slight_smile:

Actually, now that I think about it, one of the other cool things that happens with this technique is that most transient objects are removed. For example, in the above image, there was a woman who was walking her dog down that path:

As you can see, she was in the shot most of the time (except the first couple of frames), but she and her dog and all the other people were removed by the averaging process. From a digital heritage point of view, this technique could offer a way of getting very high quality images of popular heritage sites like Pompeii or Petra, etc., with the added benefit of not having to close the site down! It would just magic away all the tourists! Lol :smile:
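
(As an aside, if your ImageMagick build is new enough to support it, a median stack is often even better than the mean at removing transient objects, since a walker present in only some frames drops out entirely rather than leaving a faint ghost. A hypothetical variant of the earlier command:)

# median of the stack instead of the mean
convert *.tif -evaluate-sequence median averaged_median.tif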

Actually, in the OP I should have made and posted an animated gif of all the original shots. Here’s what that looks like:
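
(For anyone wanting to make a similar GIF, something along these lines should work; the delay and resize values are just examples:)

# ~5 fps animated GIF from the original shots, downscaled to keep the file small
convert -delay 20 -loop 0 *.jpg -resize 25% animation.gif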

Another quick follow-up. Looks like the standard Ubuntu package for ImageMagick is built with a quantum depth of 16 bits:
convert -version
Version: ImageMagick 6.7.7-10 2014-03-06 Q16 http://www.imagemagick.org

The compilation notes suggest that 16 bits is the default too, so other builds will likely be 16-bit as well:
--with-quantum-depth    number of bits in a pixel quantum (default 16)

See here for more info about that.

It seems that I need to set -define quantum:format=floating-point somewhere at the start of the command.

But reading further about HDRI support in ImageMagick suggests that I need a build compiled with HDRI to get the most out of this.
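
(For reference, the configure flags look something like this; an untested sketch based on the ImageMagick build documentation:)

# build ImageMagick with 16-bit quanta and HDRI (floating-point) support
./configure --with-quantum-depth=16 --enable-hdri
make && sudo make install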

The best way of increasing resolution with meaningful data is stitching. Averaging photos in this way or in other ways can remove moving objects and reduce noise, but I disagree with increased resolution claims. The end result is smooth, yes, but the details are nuked. You would get far better results through stitching.

I wrote “in other ways” above. My preferred method for reducing noise through stacking is by applying this formula to each layer:
layer opacity = 100 * (1 / (number of layers below + 1))
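
(Worked out for a four-shot stack, bottom to top, that gives opacities of 100%, 50%, 33%, and 25%, so every shot ends up contributing equally. A rough shell sketch of the same idea using ImageMagick’s blend compose; integer rounding and repeated 8-bit requantization make this less exact than -evaluate-sequence mean:)

# blend each new shot over the running result at opacity 100/i,
# which weights every shot equally in the final image
out=shot1.jpg
i=2
for f in shot2.jpg shot3.jpg shot4.jpg; do
    convert "$out" "$f" -compose blend -define compose:args=$((100 / i)) -composite result.png
    out=result.png
    i=$((i + 1))
done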


There is a Q16 build of ImageMagick (I meant to post this yesterday and got distracted).

You shouldn’t need to do anything special to tell ImageMagick to use 16-bit depth (provided you are using a Q16 build in the first place).

I actually did a few projects with the ONR using early development of photogrammetry (through a company called Vexcel, eventually purchased by Microsoft for their long-range sensing hardware/expertise). We used it primarily as an inexpensive means of capturing ship-check data and construction validation in the shipyards… Good times! :smiley:

The biggest problem with using this for landscapes is that there is almost always the inevitable breeze and tree/bush/leaf/grass movement. :frowning:

I do think that this is a superior method for long-exposure images of static scenes. Basically, almost anywhere you might use an ND filter and long exposure times. This avoids any color cast from the filter, significantly reduces noise, and usually yields identical or better results.

Keep experimenting and reporting back! :smiley:

So, instead of computing an average for each pixel, this would sum the opacities, correct? You essentially set each layer to an opacity that is relative to the number of layers below it? Interesting. It seems like it would function similarly to the -poly operator in convert, which allows for a weighted average to be computed…
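
(For reference, in ImageMagick builds that have it, -poly takes a weight,exponent pair per input image, so an equal-weight average of three shots might look like this hypothetical sketch:)

# weighted polynomial combination: one "weight,exponent" pair per image
convert shot1.jpg shot2.jpg shot3.jpg -poly "0.333,1 0.333,1 0.334,1" weighted.jpg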

It’s interesting that you say this, as it is something I’ve also worked with a bit in the past when taking landscape shots and shots of sedimentary strata “for work”. I just get closer to the outcrop, or at least zoom in, and then take several photos and stitch them (with Hugin). I’ve found that for landscapes, you need “rotational” panning to keep the perspective intact. The ideal is to keep the focal plane in one spot (or as close to that as possible), and to only change the angle of view. For outcrops and other planar things (walls, structures, etc.), it is better to walk a linear and parallel transect, shooting perpendicularly back at the subject every couple of meters (depending on the distance between the subject and the track being walked). This allows for a final stitched image with minimal warping. I’ve spent a lot of time working on a good F/OSS “Structure from Motion” workflow, and the same technique seems to apply there as well.
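
(For anyone curious, a minimal command-line version of that Hugin stitching workflow might look like the following; a sketch assuming a reasonably recent Hugin, with defaults everywhere:)

pto_gen -o project.pto img_*.jpg                        # create a project from the shots
cpfind --multirow -o project.pto project.pto            # find control points
autooptimiser -a -m -l -s -o project.pto project.pto    # optimise geometry and photometrics
hugin_executor --stitch --prefix=stitched project.pto   # render the panorama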

Fair enough! I guess this will be true unless the subject is absolutely still?

Thanks! Yes, I figured out yesterday that I have the Q16 build, so that is definitely good to know!

I have a good friend who is there now. He’s a GIS specialist (as am I), who has been working on GPU-accelerated least-cost-path modeling. In fact, he edited a book on that subject to which I contributed a chapter: Least Cost Analysis of Social Landscapes.

Bought out by Microsoft, eh?! Is that why you are now so dedicated to F/OSS?!!! :smile:

I absolutely will!!


Some recent news from the Mars Global Surveyor project that is very apropos of this thread: image stacking to make “superresolution” orthophotos of Mars’ terrain! So cool!! :smile:


I just came back to this for reference… awesome still!
