Median Blending: tripod in daylight

In this comment on A Look at Reducing Noise in Photographs Using Median Blending I found:

if shooting in daylight, the tripod is now your enemy - hand-held in burst mode gives you a brace of images with random offsets required, quickly and easily

Why is the tripod my enemy here when shooting in daylight?

If you’d like to learn more about the various types of noise present in a digital image, so you can understand what averaging (whether by median, mean or mode blending) can and can’t help with, and whether moving the sensor would help to mitigate a specific kind of noise during blending, read this paper by Prof. Emil Martinec (who contributed to RawTherapee):
http://rawtherapee.com/mirror/noise/index.html

The person you quoted claims that when median blending demosaiced (i.e. not raw) images, “random offsets” are required. Are they? In Pixel Shift shooting, you want the same point of a scene to expose all four photosites on the Bayer sensor, and the Pixel Shift demosaicing algorithm then handles merging these individual photosite signals in the raw file. In that median blending article they’re dealing with demosaiced images, so the same does not apply. Whether there is any advantage to having each image in a median-stack series offset relative to each other, and by how much, and in what pattern, remains to be discovered.
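To make the Pixel Shift mechanics concrete, here is a minimal sketch in plain NumPy. The 8×8 scene and the RGGB layout are invented for illustration (this is not any camera’s actual merging algorithm); it only shows why a four-position, one-photosite-step pattern gives every scene point a real R, G and B sample with no demosaicing interpolation:

```python
import numpy as np

rng = np.random.default_rng(3)
rgb = rng.random((8, 8, 3))          # hypothetical full-colour scene

# RGGB Bayer layout: which colour channel each photosite records.
bayer = np.empty((8, 8), dtype=int)
bayer[0::2, 0::2] = 0                # R
bayer[0::2, 1::2] = 1                # G
bayer[1::2, 0::2] = 1                # G
bayer[1::2, 1::2] = 2                # B

def expose(dy, dx):
    """One raw frame with the sensor nudged by (dy, dx) photosites, so each
    scene point is recorded through a different colour filter per position."""
    cfa = np.roll(bayer, (dy, dx), axis=(0, 1))  # filter seen by each scene point
    y, x = np.indices((8, 8))
    return rgb[y, x, cfa], cfa

# The classic four-position pattern: one photosite step at a time.
acc = np.zeros_like(rgb)
cnt = np.zeros_like(rgb)
for dy, dx in [(0, 0), (0, 1), (1, 1), (1, 0)]:
    raw, cfa = expose(dy, dx)
    for c in range(3):
        mask = cfa == c
        acc[..., c][mask] += raw[mask]
        cnt[..., c][mask] += 1

merged = acc / cnt                   # each pixel got one R, two G, one B sample
print(np.allclose(merged, rgb))      # True: full colour with no interpolation
```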

I think it has to do with dithering, a concept that is important in astrophotography, but one we could just as well apply outside astrophotography, at least conceptually.
I think it is especially useful against defects that sit at a fixed position on the sensor (hot pixels): without dithering they land on the same spot of the scene in every frame, so no amount of stacking removes them. Hot pixels are far more noticeable in astrophotography because of the long exposures (as far as I know, the longer you expose, the noisier the image gets, due to sensor heating).
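To make that concrete, here is a minimal sketch (toy scene and a single simulated defective photosite, both invented for illustration) comparing a median stack shot with identical framing against a dithered-and-realigned stack. The realignment cheats by reusing the known offsets; real software would have to estimate them:

```python
import numpy as np

rng = np.random.default_rng(7)
scene = rng.random((200, 200)) * 0.5   # hypothetical noise-free scene, below clipping
hot = (50, 80)                         # one defective photosite, fixed on the sensor

def shoot(dy, dx):
    """One exposure framed with a (dy, dx) offset; the hot pixel clips
    to white at the same sensor position in every frame."""
    frame = np.roll(scene, (dy, dx), axis=(0, 1)).copy()
    frame[hot] = 1.0
    return frame

def median_stack(offsets):
    """Shoot one frame per offset, realign by the known offset, take the median."""
    aligned = [np.roll(shoot(dy, dx), (-dy, -dx), axis=(0, 1))
               for dy, dx in offsets]
    return np.median(aligned, axis=0)

tripod = median_stack([(0, 0)] * 9)    # nine frames, identical framing
dither = median_stack([tuple(rng.integers(-5, 6, 2)) for _ in range(9)])

worst = lambda img: np.abs(img - scene).max()
print(f"tripod median: {worst(tripod):.3f}")   # hot pixel survives (>= 0.5)
print(f"dither median: {worst(dither):.3f}")   # hot pixel voted out (~0)
```

The tripod stack is perfectly aligned, so the defect wins every per-pixel vote; the dithered stack puts it somewhere different in each realigned frame, and the median throws it away.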

UPDATE: @kuerbis Here’s another article on super resolution, recommending hand-held shooting:

The real trick is that we’ll shoot this set of exposures completely hand held. The subtle motion of our hand will actually act just like a sensor shift mechanism and allow different pixels to capture different parts of the scenes. It sounds simple but it actually works.

Thanks for the answers and the links.


As others may have noted on discuss and elsewhere, I am not confident in this handheld technique. First, it is a pain to do; second, it is haphazard; and third, we have built-in tech like pixel shift in more and more cameras now. I’d rather spend my time taking photos and sharing them.

PS This might be a hint: if you had a flimsy tripod, there would still be movement, but it would be more predictable than your hands’.

So I clicked through to read your reference and had to laugh at the by-line…


All of this sounds like Joe Random had a brilliant idea and a blog to write about it, but no evidence and no theoretical background to support it.

Comparing sensor-shift and hand-held motion blur is the definition of stupidity.

It’s not your enemy if the goal is noise reduction.

It could be an enemy in some super-resolution techniques, where pixel shift is required to recover detail from aliased information; see the sketch after the quote below.

Given a sufficient number of observation images, and if the set of observations vary in their phase (i.e. if the images of the scene are shifted by a sub-pixel amount), then the phase information can be used to separate the aliased high-frequency content from the true low-frequency content, and the full-resolution image can be accurately reconstructed

https://en.m.wikipedia.org/wiki/Super-resolution_imaging#Aliasing
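Here is a minimal simulation of that claim, using an invented toy grating and naive shift-and-add in plain NumPy (real pipelines also need registration and deconvolution, so treat it as a sketch, not a recipe). Sixteen low-resolution frames, each offset by a known sub-pixel amount, recover a fine grating that any single frame must alias:

```python
import numpy as np

F, N = 4, 64                # downsampling factor and high-res width
# A fine vertical grating: 10 cycles across N pixels. A single low-res frame
# (N/F = 16 samples per row, Nyquist = 8 cycles) has to alias this detail.
hi = np.tile(np.sin(2 * np.pi * 10 * np.arange(N) / N), (N, 1))

def observe(scene, dy, dx):
    """One frame: shift the scene by whole high-res pixels (a sub-pixel
    amount at low resolution), then box-average down by a factor of F."""
    s = np.roll(scene, (dy, dx), axis=(0, 1))
    return s.reshape(N // F, F, N // F, F).mean(axis=(1, 3))

shifts = [(dy, dx) for dy in range(F) for dx in range(F)]
frames = [observe(hi, dy, dx) for dy, dx in shifts]

# Naive shift-and-add: upsample each frame, undo its known shift, average.
recon = np.zeros_like(hi)
for (dy, dx), frame in zip(shifts, frames):
    up = np.kron(frame, np.ones((F, F)))          # nearest-neighbour upsample
    recon += np.roll(up, (-dy, -dx), axis=(0, 1))
recon /= len(frames)

def peak_cycles(row):
    spec = np.abs(np.fft.rfft(row))
    spec[0] = 0.0                                 # ignore the DC term
    return int(np.argmax(spec))

print("true grating :", peak_cycles(hi[0]))         # 10 cycles
print("single frame :", peak_cycles(frames[0][0]))  # 6 cycles: aliased
print("shift-and-add:", peak_cycles(recon[0]))      # 10 cycles: recovered
```

The recovered grating is attenuated (the box averaging still blurs), but it sits at the true frequency; that attenuation is what deconvolution would then undo.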

In that comment, handheld comes out better because the author mixes up noise reduction and super resolution.

It’s not hand-held motion blur, it’s hand-held motion shift. Small shifts.

Though the thread started with median blending, the article @gadolf referenced on 24 Aug is interesting, and those decrying the handheld aspect might take a look. Evidence is provided, like this bit showing the improvement in sharpness and reduction in noise:
[image: super-res-crop, showing improved sharpness and reduced noise]
It uses averaging, and that’s a well-known way to reduce noise, of course. And it seems right to me in principle (though no proof, ok) that having a given piece of fine detail land on different colour sensor sites across a number of photos will average better than it landing in the same place each time. (I say “in principle” because if there is handheld blur, surely the whole thing will be less effective?) And sure enough, there’s an example showing elimination of moiré. With the camera solidly fixed, every shot would have the same moiré and therefore so would the average. But with handheld random movement, it’s cancelled out.


I totally did… :laughing:


Well, that’s nice to see, because now we have sorted this out and there can’t be any other definition of stupidity anymore :wink:


Indeed, but both are related (motion causes blur).

The point remains. Sensor-shifting makes the sensor shift (no joke!) by a known amount over the sensor plane, so you can correlate all the images on your computer just by shifting pixel coordinates. No biggie.

Now, with hand-held “shift”, you have no idea of the amount of shifting you got. Well, we have autocorrelation methods, using 2D Fourier transforms and cross-correlating the images to get the phase shift, so that could be easy. Unfortunately, your hand moves in 3D, that is, the shifting is not done over a plane but over a moving sphere, so you would have to correct three translations and three rotations, and possibly even a slight defocus, before you are able to correlate the pictures. And, unless you have an accelerometer and a gyroscope to record the motion in-camera, you have to guess the motion direction from the pictures. So you stack approximations on top of guesses and try to compute a correction with all of that, before you can actually stack and average anything. Hugin knows how to do that, but with some errors, and the result is far from perfect and not compatible with a “super resolution” purpose.
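For reference, here is the 2D Fourier trick mentioned above in its minimal form: phase correlation in plain NumPy on a synthetic image. Note it only recovers a pure in-plane translation, which is exactly the limitation being described:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer (dy, dx) shift of image `a` relative to image `b`
    from the peak of the normalised cross-power spectrum."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    h, w = corr.shape
    # Peaks past the midpoint wrap around to negative shifts.
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return int(dy), int(dx)

rng = np.random.default_rng(1)
img = rng.random((128, 128))
print(phase_correlate(np.roll(img, (5, -3), axis=(0, 1)), img))  # (5, -3)
```

With real handheld frames, rotation, perspective and defocus break the pure-translation assumption, which is the point being made.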

Also, the pixel-averaging/pixel-median thing is as old as digital cameras: Noise Reduction By Image Averaging (see the Windows 98 screenshots?). Noise is high-frequency, details are high-frequency, and filtering one will filter the other too; but noise is random and details are constant, so average shots and you will dilute the randomness into the constantness… That’s statistics 101.

Photographers are like dogs who rediscover every day that they have a tail, so they keep chasing it. Pixel averaging works great, costs nothing, and you can do it in any software that works with layers. If your noise is roughly Gaussian and you have enough shots, median ≈ average (again, stats), so you could just stack the layers and blend them at low opacity. But for God’s sake, do yourself a favour and use a tripod.
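A quick sanity check of that statistics-101 claim, with made-up numbers; for Gaussian noise the median tracks the mean but comes out roughly 25% noisier:

```python
import numpy as np

rng = np.random.default_rng(42)
scene = rng.random((100, 100))       # hypothetical noise-free image
sigma, n = 0.10, 16                  # invented noise level and burst size

# A burst of identical, perfectly aligned shots, fresh Gaussian noise each time.
stack = scene + rng.normal(0.0, sigma, (n, *scene.shape))

rms = lambda img: np.sqrt(((img - scene) ** 2).mean())
print(f"single frame : {rms(stack[0]):.4f}")                  # ~ sigma
print(f"mean of 16   : {rms(stack.mean(axis=0)):.4f}")        # ~ sigma / sqrt(16)
print(f"median of 16 : {rms(np.median(stack, axis=0)):.4f}")  # ~ 1.25x the mean's
```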

Fstoppers and Petapixels are full of photo geeks who pose as engineers and speak of technical subjects as if they understood them: optics, image processing, electronics, computers… you name it. Well, there are two kinds of people on the Internet: those who can solve a convolution integral, and those who had better keep taking pictures and stop trying to educate others, because they have no knowledge to share.


This is really reductionist and unkind. I mean, why stop at convolution integrals? (I swear, if I never see another Laplacian in my life I’ll be happy - signal processing class about killed me - Dirichlet conditions for the win!) Why not say that anyone on the internet who does not have a Fields Medal shouldn’t contribute, because they have no knowledge to share?

It’s an elitist and asinine worldview that belittles contributions and attempted contributions from many people. I realize this may be old hat to you experts, but consider that someone talking about things like this may be one of the lucky 10,000.

The smartest folks in the world are useless to society and others if they cannot communicate. They might as well be the smartest person in their own cave, never seeing anyone else. On the internet, the fastest way to get something written correctly is to write it incorrectly first… :wink:

Do > Say
Help > Disparage

Using freedom of speech to say inaccurate things on subjects we neither understand nor master is selfish and stupid. It only serves to dilute proper information and valid knowledge in noise, making them more difficult to find and more painful to sort out. Overall, it decreases the quality of the internet and wastes bandwidth and energy hosting and retrieving garbage that brings no value to anybody, except a little bit of fame for the fake expert who wrote it.

As a kid, I would binge this kind of website to “learn”. It took me years of engineering and maths classes to unlearn all that crap and discover I had been fooled by people passing off click-bait as information. So much time lost…

It’s not a matter of Fields Medals or degrees. It’s a matter of “have you actually applied the concepts you mess with before giving your opinion on how they are used?”.

Being elitist is making access to the elite really hard in order to prevent people from joining it. Asking for quality and a bit of self-censorship is merely having standards.

Indeed this is something that has vexed me as well for a long, long time. I can’t agree more.

I’m just worried that absolutism like the initial statement is also harmful. I have learned many great things from people who aren’t necessarily experts in those fields, and my example was only meant to illustrate the absurdity of setting an arbitrary bar for when someone should be able to speak about a subject.

I think we’re on the same page, just that we might hope for someone to do some due diligence before presenting material. The best way to combat this, by the way, is to write correct articles to use as references for others when questions arise… hint… hint…
