Outlier blending?

I saw this photo on Reddit from this thread: https://www.reddit.com/r/spacex/comments/lirayf/the_entirety_of_starship_sn9s_test_flight/

The photographer manually masked two hundred photos together.

Is there any good way to automate this?

I was thinking that the algorithm could be simple but absurdly I/O-heavy: for each pixel, find the biggest outlier across the stack; if its z-score is over a certain threshold, use it, and if it’s under, use the median (or mean).

Any thoughts? Block-wise processing, for sure, but the more shots there are, the smaller the blocks would have to be to fit in memory.
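Something like this, as a rough numpy sketch. I’m assuming aligned, same-size RGB frames on disk; `imread` from imageio, the strip height, and the threshold of 4 are arbitrary choices:

```python
import numpy as np
from imageio.v3 import imread  # any image reader works; imageio is an assumption

def outlier_blend(stack, z_thresh=4.0):
    """Per pixel: keep the biggest outlier if its z-score beats z_thresh,
    otherwise fall back to the median. stack: float32 array (N, H, W, C)."""
    median = np.median(stack, axis=0)
    std = stack.std(axis=0) + 1e-6                    # avoid divide-by-zero in flat sky
    z = np.abs(stack - median) / std                  # z-score of every pixel in every frame
    idx = z.argmax(axis=0)[None]                      # frame index of the biggest outlier
    outlier = np.take_along_axis(stack, idx, axis=0)[0]
    zmax = np.take_along_axis(z, idx, axis=0)[0]
    return np.where(zmax > z_thresh, outlier, median)

def blend_in_strips(paths, rows=64, z_thresh=4.0):
    """Process horizontal strips so the whole stack never sits in RAM."""
    h, w, c = imread(paths[0]).shape
    out = np.empty((h, w, c), np.uint8)
    for y in range(0, h, rows):
        strip = np.stack([imread(p)[y:y + rows].astype(np.float32) for p in paths])
        out[y:y + rows] = outlier_blend(strip, z_thresh).astype(np.uint8)
    return out
```

Re-decoding every file once per strip is what makes it absurdly I/O-heavy; dumping the stack to one big memory-mapped array first would trade disk space for a single decode pass.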

I want to experiment with some shots like this with multi-strobing, so it’s somewhat relevant to me.

Without seeing the image in more detail, it’s hard to be sure, but I would probably do this. I assume the camera is on a tripod, the 200 images are the same size, with the same exposure, etc.

  1. Make a frame that has no rocket, the “blank” frame. This might be a photo before takeoff, or the median of all 200 photos, or the mean of all 200 photos, or the mean of “before” and “after” images.

  2. Make a transparent image of the same size. This is the “cumulative” image.

  3. For each of the 200 photos: Identify the pixels that differ significantly from the blank frame. Copy those pixels to the cumulative image.

  4. Composite the cumulative image over the blank image.

We might optimize step 3 to examine just a portion of each image.
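Here’s a rough numpy sketch of those four steps, assuming the aligned uint8 frames are already loaded as one array; the difference threshold is invented:

```python
import numpy as np

def composite_trail(frames, diff_thresh=30):
    """frames: uint8 array (N, H, W, 3) of aligned shots; diff_thresh is a guess."""
    stack = frames.astype(np.float32)
    blank = np.median(stack, axis=0)              # step 1: the "blank" frame
    result = blank.copy()                         # steps 2 and 4 folded together:
    for frame in stack:                           #   accumulate straight onto the blank
        diff = np.abs(frame - blank).max(axis=-1) # step 3: biggest per-channel difference
        mask = diff > diff_thresh                 # pixels that differ "significantly"
        result[mask] = frame[mask]                # copy them into the cumulative image
    return result.astype(np.uint8)
```

Accumulating straight onto the blank plate is equivalent to keeping a separate transparent layer and compositing it over the blank at the end, just with one less buffer.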

I made the movies at https://www.youtube.com/channel/UCQSlRAKpgJOMJVPgCjvhSiA with variations on that technique.

EDIT: Ah, but what about the smoke, or exhaust, or whatever it’s called? I suppose we get it in the photos and don’t want it in the final image. A simple thresholded difference probably won’t work; we might need a “smoke detector” to mask it out.

You basically described what is done for image-difference keying in compositing. There, the clean background is called a clean plate, and masks are derived from the distance of a new pixel from the clean plate. Semi-transparency is hard to distinguish from spill light, or from colors of the new object that just happen to be close to the background. So, to make it work better, the clean-plate background is chosen to be a green screen or blue screen, again limiting the background to a smaller subset of “original” pixel colors. Still, the manual work one has to put in is differentiating between the binary mask for the core of an object and edge masks for potential semi-transparencies, in which the mask contains values other than 0 and 1.
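For a rough sense of it, a distance-from-clean-plate matte might look like this; `t_bg` and `t_core` are invented thresholds, and the ramp between them is where the semi-transparent edge values (between 0 and 1) live:

```python
import numpy as np

def difference_matte(frame, clean_plate, t_bg=10.0, t_core=60.0):
    """Soft matte from per-pixel color distance to the clean plate:
    0 = background, 1 = core foreground, in between = semi-transparent edge."""
    dist = np.linalg.norm(frame.astype(np.float32) - clean_plate.astype(np.float32), axis=-1)
    matte = (dist - t_bg) / (t_core - t_bg)       # linear ramp between the thresholds
    return np.clip(matte, 0.0, 1.0)
```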

Thanks, @PhotoPhysicsGuy.

Thinking about the smoke, two things might help identify it:

  1. Perhaps the smoke has softer edges than the rocket.

  2. Perhaps the smoke in each photo is always lower than the rocket.

If either or both are true, we can classify each difference pixel as rocket (which we want to copy) or smoke (which we don’t).
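For instance, heuristic 2 might look like this; it assumes the frame is oriented with the rocket above its own smoke, and `soft_frac` is an invented knob (heuristic 1 could similarly threshold the local gradient of the difference mask):

```python
import numpy as np

def rocket_only_mask(diff_mask, soft_frac=0.25):
    """Heuristic 2: keep only the top part of the changed region, on the
    assumption that the rocket sits above its own smoke in every frame.
    diff_mask: boolean (H, W) mask of pixels that differ from the blank."""
    rows = np.flatnonzero(diff_mask.any(axis=1))    # image rows containing any change
    if rows.size == 0:
        return diff_mask
    top, bottom = rows[0], rows[-1]
    cutoff = top + int(soft_frac * (bottom - top))  # everything below counts as smoke
    keep = diff_mask.copy()
    keep[cutoff:] = False
    return keep
```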

But I’m just guessing.


If you can assume no overlap between the object in different images (you’d remove a photo if it overlapped), you could perhaps calculate the outlier image and then run image segmentation on it to make a mask that captures the smoke.
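One way that might look, with an invented large-diffuse-blob-means-smoke rule standing in for real segmentation; all thresholds here are made up:

```python
import numpy as np
from scipy import ndimage  # simple connected-component labelling

def smoke_mask(frames, z_thresh=4.0, min_area=500):
    """Build the outlier image, then segment it: large blobs are called
    'smoke', everything else 'rocket'. frames: uint8 array (N, H, W, C)."""
    stack = frames.astype(np.float32)
    med = np.median(stack, axis=0)
    std = stack.std(axis=0) + 1e-6
    zmax = (np.abs(stack - med) / std).max(axis=0).mean(axis=-1)  # strongest z per pixel
    labels, n = ndimage.label(zmax > z_thresh)                    # connected regions
    areas = ndimage.sum(np.ones(labels.shape), labels, index=np.arange(1, n + 1))
    smoke = [i + 1 for i, a in enumerate(areas) if a >= min_area]
    return np.isin(labels, smoke)                                 # True where smoke
```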


Now that I think about it more, at least in this particular context of an object flying across an unchanging sky and casting no shadows, if you go by the standard deviation of the non-outlier images, even the smoke will have a large z-score, and you won’t have to worry about the transparency.
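A quick sketch of that z-score, using median/MAD as a stand-in for the mean/std of just the non-outlier frames, so the outliers themselves don’t inflate the spread estimate:

```python
import numpy as np

def robust_z(stack):
    """z-scores against median/MAD instead of mean/std, so a few bright
    rocket frames per pixel don't inflate the spread estimate.
    stack: float array (N, H, W) of one channel across frames."""
    med = np.median(stack, axis=0)
    mad = np.median(np.abs(stack - med), axis=0) * 1.4826 + 1e-6  # MAD scaled to ~sigma
    return np.abs(stack - med) / mad
```

Against a near-constant sky the per-pixel MAD is tiny, so even faintly smoke-tinted pixels should score far above any reasonable threshold.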