Any FOSS able to remove motion blur?

Is there any FOSS able to de-shake images? Despite all the stabilization we have in modern cameras, this is sometimes useful… (my macro lens has no IS…). I’m not talking about CSI-style removal of a large motion blur, but about sharpening a picture where the blur is due to some residual motion.

I’m not 100% sure, but Vladimir Yuzhikov seems to have done some work on a de-blurring algorithm:

http://yuzhikov.com/projects.html

It’s a blind deconvolution algorithm that we might be able to poke @David_Tschumperle about implementing in G’MIC, maybe? (pretty please?!) :slight_smile:

His github repo: GitHub - Y-Vladimir/SmartDeblur: Restoration of defocused and blurred photos/images

If you somehow can, use a tripod, a faster shutter speed or a flash to get rid of the motion blur.

De-shaking, as you call it, is a very messy process, especially with an image about which little is known, a ‘shake’ about which little is known, and in the presence of noise (if noise weren’t an issue you would probably have just bumped the ISO a bit higher). In essence it’s like asking the computer for numbers that sum to 42: it will give you back some numbers that sum to 42, but they are likely not the ones you wanted.

SmartDeblur 1.27 doesn’t do blind deconvolution. You can manually change the parameters for out-of-focus, Gaussian and motion blur, but you can’t combine those three (un)blurs. The commercial version of SmartDeblur can do a kind of blind deconvolution.

I would also be very happy with a FOSS implementation.

Manual deblurring options would still be great. Being able to combine them would be amazing, but deconvolving with a PSF would be even better. @David_Tschumperle wrote in this post:

So deconvolving with a PSF is already possible in G’MIC, but there is no proper G’MIC filter for this in GIMP. Maybe a G’MIC filter for GIMP could be made that uses two layers: one with the blurred image and one with the PSF kernel.
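To make concrete what “deconvolving with a known PSF” means, here is a minimal numpy sketch of a Wiener filter (generic toy code with my own variable names, not the actual G’MIC implementation; it assumes circular boundaries):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Toy Wiener deconvolution with a known PSF (circular boundaries).
    k is a noise-to-signal regularization constant."""
    psf_pad = np.zeros_like(blurred)
    kh, kw = psf.shape
    psf_pad[:kh, :kw] = psf
    # Center the kernel on the origin so its phase is (roughly) zero.
    psf_pad = np.roll(psf_pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    Y = np.fft.fft2(blurred)
    # Classic Wiener estimate: conj(H) / (|H|^2 + k) * Y
    X = np.conj(H) / (np.abs(H) ** 2 + k) * Y
    return np.real(np.fft.ifft2(X))
```

With the true PSF and little noise this works surprisingly well; with a wrong PSF or a larger k it rings or stays soft, which is exactly why estimating the PSF is the hard part.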

You still have to find the PSF, and I don’t think G’MIC can do that. I don’t know of any FOSS that can estimate a PSF for blind deconvolution, but I hope such software will appear.

  • Tripod => weight
  • Faster shutter speed => bigger lens => weight
  • Faster shutter speed => better sensor => bigger camera => weight
  • Flash => weight

:sunglasses:

(and I have all of these…)

Deconvolution, e.g. Richardson-Lucy or Wiener, can reverse motion blur if you know the direction and properties of the blur.
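Since Richardson-Lucy keeps coming up, here is a minimal numpy sketch of the non-blind variant (my own toy code, not any particular tool’s implementation; it assumes circular boundaries and a known PSF):

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=30, eps=1e-7):
    """Toy non-blind Richardson-Lucy deconvolution (circular boundaries)."""
    # Pad the PSF to image size and center it on the origin for FFT use.
    psf_pad = np.zeros_like(blurred)
    kh, kw = psf.shape
    psf_pad[:kh, :kw] = psf
    psf_pad = np.roll(psf_pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    otf = np.fft.fft2(psf_pad)

    est = np.full_like(blurred, blurred.mean())  # flat initial estimate
    for _ in range(iters):
        conv = np.real(np.fft.ifft2(np.fft.fft2(est) * otf))
        ratio = blurred / np.maximum(conv, eps)
        # Multiplicative update: correlate the ratio with the PSF.
        est = est * np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
    return est
```

On noise-free synthetic data this recovers a lot of detail; on real shots it amplifies noise quickly, which is exactly the messiness described earlier in the thread.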

If you mean video images, ffmpeg has deshake, transcode has stabilize, and Kdenlive supports two deshaking methods which IIRC are the ones I just listed but with a friendly UI. The ffmpeg method works best.


There’s also 2D stabilization in Blender that does a great job if you don’t mind manually setting up tracking points…

Just found a three year old (!!!) test I did with 2D stabilization in Blender on a DigitalRev video:

This is conjecture, but I guess you’ll get better results bumping the ISO and denoising than from trying to perform blind deconvolution on the shaky photograph. I’d love to be proven wrong, though. :slight_smile:

[quote=“Ofnuts, post:6, topic:1141”]
Flash => weight
[/quote]While not true for all cameras, every camera I own has a built-in flash. I’ve actually used it for some macro shots, with a piece of paper as a reflector to bounce the light off. Ideal, no? But then, you did explicitly mention macro. :slight_smile:

I agree that it is better to bump up the ISO and denoise than to deal with blur. And I absolutely agree about using flash when shooting macro, it makes a world of difference.

There are indeed some deconvolution algorithms in G’MIC where the convolution kernel (PSF) can be passed as a parameter, including Richardson-Lucy. I think the filter Testing / Jéjé / Deconvolution allows you to specify the PSF as an additional layer. I’ve never used it though, so I can’t say much about how, or whether, it works.

Estimating the PSF is another story; as far as I know there is nothing in G’MIC to do that yet.
Maybe one day. :slight_smile:

@Ofnuts, are you using a tripod when shooting? One hardware tweak that reduces blur in that situation is to switch off in-camera stabilisation while the camera is on the tripod.
http://digital-photography-school.com/image-stabilization-on-tripods/

Tripod, remote trigger, mirror up: I do them all when I can. But when I chase bugs I’m hand-held or on a monopod, and even at high ISOs there is often not enough light for a very fast shutter speed, since in macro photography you favor small apertures to increase the DoF.

Well, sometimes you explicitly want to shoot bugs and flowers in natural light; at least that is what I do most of the time. It is an “artistic” choice, to give the shot a more natural feeling… then you are only left with higher ISO settings.

In macro photography the flash usually makes for a very dark background…

Hope I can add my question here: I have some motion blur in an image due to a long exposure. A tripod was used, but it stood on some sort of metal structure that maybe wasn’t 100% shake-proof. Or maybe the 2-second delay wasn’t long enough. Unfortunately, I only saw the final result back home.

Here is an example (a crop from a 115 MB TIFF):

[image: _SDI4404 - Copy]

I tried SmartDeblur and GIMP/G’MIC, but it seems to have no effect. I just wonder whether there is a way to fix this type of motion blur.

Have you tried the Deconvolve filter in the G’MIC plugin? I haven’t tried it myself; it could be crunchy, as many simple algorithms are.

[image]

Could probably generate a kernel from this squiggly line.

[image]
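If one did trace such a trail, turning it into a kernel is the easy part. A hypothetical helper for the simplest case, a straight-line motion PSF (the function name and parameters are my own invention, just to illustrate):

```python
import numpy as np

def motion_psf(length, angle_deg, size=15):
    """Hypothetical helper: normalized straight-line motion-blur PSF."""
    psf = np.zeros((size, size))
    c = size // 2
    a = np.deg2rad(angle_deg)
    # Rasterize a centered line segment of the given length and angle.
    for t in np.linspace(-length / 2, length / 2, int(4 * length)):
        x = int(round(c + t * np.cos(a)))
        y = int(round(c + t * np.sin(a)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()
```

A squiggly trail would be rasterized point by point along the traced path instead, but the idea is the same: ink along the motion path, then normalize so the kernel sums to 1.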

In general, most of the world has given up on trying to undo motion blur like this in postprocessing, and has moved to image stacking instead.

@Morgan_Hardwood stated earlier in this thread that bumping up the ISO and increasing shutter speed, followed by denoising, was almost always easier than trying to undo motion blur.

This concept is taken further in mobile phones: long exposures are split into multiple short shots, which are then stacked. The same technique can be used with more capable cameras, almost all of which have continuous drive modes - see HDR+ Pipeline as an example. I’ve used a modified variant of Tim’s pipeline that outputs a TIFF prior to demosaicing, with very good results - GitHub - Entropy512/hdr-plus at savedng . At some point I hope to make it a little more user-friendly, but I haven’t gotten around to it. (For now you must manually tag the TIFF with the appropriate DNG metadata.)
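For anyone wanting to experiment without the full HDR+ pipeline, the core align-and-merge idea can be sketched in a few lines of numpy. This is a toy global-shift version using phase correlation (real burst pipelines align per tile and handle rotation, rolling shutter, moving subjects, etc.):

```python
import numpy as np

def phase_align(ref, img):
    """Estimate the integer (dy, dx) roll that maps img onto ref, via phase correlation."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.real(np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Map the wrapped peak position to a signed shift.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def stack(frames):
    """Align every frame to the first one and average."""
    ref = frames[0].astype(float)
    acc = ref.copy()
    for f in frames[1:]:
        dy, dx = phase_align(ref, f)
        acc += np.roll(f, (dy, dx), axis=(0, 1))
    return acc / len(frames)
```

Averaging N aligned short exposures cuts random noise roughly by a factor of sqrt(N), which is why stacking beats a single long shaky exposure.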

Do you have the whole image I could try on?