creating non-uniform blur in post-processing

I have been experimenting with adding blur in post-processing that can credibly mimic an out-of-focus look, possibly for a larger aperture than I have access to.

I know how to create the actual blur (diffuse or sharpen is neat), and also how to mask my image accordingly.

But the practical problem I am running into is that areas further from the focus plane (in practice, for my photography, more distant objects) need more blur, depending on the distance. Simply using a mask with a soft edge to combine the original and the heavily blurred version is not convincing.

A workaround I came up with is 3–4 instances of the same module, each adding progressively more blur to more distant areas (increasing the radius as we progress through the pipeline works best for me). A rough sketch of the idea is below.
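
For anyone who wants to prototype this outside darktable, here is a minimal sketch of that stacked approach in Python with PIL and NumPy. The file name, the radii, and the linear gradient standing in for distance are all invented for illustration:

```python
from PIL import Image, ImageFilter
import numpy as np

# Load the image; "input.jpg" is a placeholder name.
img = np.asarray(Image.open("input.jpg").convert("RGB"), dtype=np.float32)
h, w = img.shape[:2]

# Stand-in "distance" mask: 0 = focus plane, 1 = farthest away.
# In practice this would be a drawn or parametric mask, not a gradient.
depth = np.tile(np.linspace(0.0, 1.0, h)[:, None], (1, w))

out = img.copy()
for i, radius in enumerate([2, 4, 8, 16], start=1):
    # Blur the current result, so the passes compound like stacked
    # module instances in a pipeline.
    blurred = np.asarray(
        Image.fromarray(out.astype(np.uint8))
             .filter(ImageFilter.GaussianBlur(radius)),
        dtype=np.float32)
    # Each pass only affects pixels beyond its distance threshold,
    # with a soft ramp to keep the transitions smooth.
    t = np.clip((depth - i / 5.0) * 5.0, 0.0, 1.0)[..., None]
    out = out * (1.0 - t) + blurred * t

Image.fromarray(out.astype(np.uint8)).save("stacked_blur.jpg")
```

Because each pass blurs the previous result, the most distant areas accumulate the most blur, which is what made the increasing-radius ordering work for me.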

I wonder how people solve this (other than: get a lens with a larger aperture :wink:).

I'm looking for something similar as well! When the mask opacity is feathered, at 50% you basically get an equal mix of the strong blur and the sharp base.

I think you are also looking for something where the mask opacity would determine the blur radius at that location, instead of fading the blur via actual opacity :thinking:
This would work especially well for lens blurs.
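
A rough sketch of that mapping, assuming the mask value translates linearly to blur radius: precompute a few blur levels and interpolate per pixel between the two levels that bracket the requested radius (the function name and radii are invented):

```python
from PIL import Image, ImageFilter
import numpy as np

def radius_mapped_blur(pil_img, mask, radii=(0, 2, 4, 8, 16)):
    """pil_img: PIL RGB image; mask: (h, w) array of floats in 0..1,
    where the mask value selects the blur radius, not the opacity."""
    # Precompute one blur level per radius (radius 0 = the sharp image).
    levels = np.stack([
        np.asarray(pil_img if r == 0
                   else pil_img.filter(ImageFilter.GaussianBlur(r)),
                   dtype=np.float32)
        for r in radii
    ])
    # Fractional level index per pixel, then linear interpolation
    # between the two precomputed levels that bracket it.
    idx = np.clip(mask, 0.0, 1.0) * (len(radii) - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(radii) - 1)
    frac = (idx - lo)[..., None]
    rows, cols = np.indices(mask.shape)
    out = (1 - frac) * levels[lo, rows, cols] + frac * levels[hi, rows, cols]
    return Image.fromarray(out.astype(np.uint8))
```

Interpolating between a handful of fixed levels only approximates a continuously varying radius, but it sidesteps the 50%-opacity problem described above.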

I think the blurs module is designed to simulate bokeh, so it might be better than diffuse or sharpen. However, to get what you want (more blur with more distance) you basically need a depth map for the image, and no "normal" camera can produce one.

perhaps indeed a depth map would be useful:

birdhouse:

depth map:

masked layer sharpened:

merged:

Depth maps are weird; I really don't know how they work:
< AI reference deleted as requested by Admin>

I got this one from GIMP > Filters > G'MIC plugin.

To compare, open 'birdhouse' and 'merged' in separate tabs and click back and forth …

I could also have applied the inverse map to another layer and blurred that layer.
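
In case anyone wants to script that merge instead of doing it in GIMP, here is a minimal sketch, assuming the depth map is a grayscale file where white means far (file names are placeholders for the images shown above):

```python
from PIL import Image, ImageFilter
import numpy as np

# Sharp base plus one strongly blurred copy.
base = Image.open("birdhouse.jpg").convert("RGB")
sharp = np.asarray(base, dtype=np.float32)
blurred = np.asarray(base.filter(ImageFilter.GaussianBlur(12)),
                     dtype=np.float32)

# Depth map as a per-pixel mix factor: white = far = fully blurred.
# Use (1 - depth) instead if the map follows the opposite convention.
depth = np.asarray(Image.open("depth_map.png").convert("L")
                   .resize(base.size), dtype=np.float32)[..., None] / 255.0

merged = sharp * (1.0 - depth) + blurred * depth
Image.fromarray(merged.astype(np.uint8)).save("merged.jpg")
```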

I could instead have edited the whole image with wavelet decomposition, but I don't know if dt has that.

It's a new feature being added to ON1… they are claiming it is part of their "AI"-ness rather than something derived from camera metadata or settings, I believe…

Link provided just as an example, not to advocate for ON1…

Both iOS and Android can embed depth maps in the file.

https://medium.com/through-the-looking-glass/extracting-depth-map-photos-for-the-looking-glass-29c480d52c2a

https://www.summet.com/blog/2021/05/24/how-to-extract-depth-images-from-pixel-4-google-camera-android-phone-portrait-images/

Being able to create a raster mask based on that would be pretty neat. I know Lightroom can use the iOS depth map for some features.
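
If you want to poke at one of those files yourself, the trick in the second link boils down to splitting the container on JPEG start-of-image markers, since some phones (e.g. Pixel portrait shots) just concatenate the extra images, depth map included, after the main one. A crude sketch, with no guarantees across phone models or firmware (it may also catch embedded thumbnails):

```python
# Split a phone "portrait" JPEG into its embedded images by scanning
# for JPEG start-of-image markers (FF D8 FF). One of the extracted
# parts is often the depth map; which one varies by device/firmware.
data = open("portrait.jpg", "rb").read()

starts, i = [], 0
while (i := data.find(b"\xff\xd8\xff", i)) != -1:
    starts.append(i)
    i += 3

for n, start in enumerate(starts):
    end = starts[n + 1] if n + 1 < len(starts) else len(data)
    with open(f"part_{n}.jpg", "wb") as out:
        out.write(data[start:end])
print(f"wrote {len(starts)} parts")
```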

I have done something similar in GIMP, where I duplicated the layer, added a Gaussian blur, and then used a gradient mask to blend the sharp and blurred layers in a way that made the distant landscape more blurred than the closer landscape. This technique would not be suitable for all images.

I'll mention that ImageMagick has non-uniform blur facilities; I show some examples at Selective blur. However, the blurs are Gaussian, rather than attempting to replicate photographic out-of-focus rendering.

Under the hood, IM creates a set of kernels. The blur at each pixel uses the appropriate kernel, as defined by the blur mask.
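
Out of curiosity, a toy version of that per-pixel kernel selection in Python: a handful of precomputed Gaussian levels with a nearest-level pick per pixel. This is only a coarse stand-in for IM's actual kernel machinery; the function name and radii are invented:

```python
from PIL import Image, ImageFilter
import numpy as np

def selective_blur(pil_img, mask, radii=(0, 2, 4, 8, 16)):
    """Pick, per pixel, the precomputed blur level the mask asks for.
    mask: (h, w) array of floats in 0..1."""
    levels = np.stack([
        np.asarray(pil_img if r == 0
                   else pil_img.filter(ImageFilter.GaussianBlur(r)),
                   dtype=np.float32)
        for r in radii
    ])
    # Nearest-level pick; IM's real kernel set is much finer-grained.
    pick = np.rint(np.clip(mask, 0, 1) * (len(radii) - 1)).astype(int)
    rows, cols = np.indices(mask.shape)
    return Image.fromarray(levels[pick, rows, cols].astype(np.uint8))
```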

Interesting link, although I don't use IM much personally. It occurs to me that a useful mask might be an MFT magnitude plot, or the inverse, from G'MIC.

GAS is always the answer! :rofl:

They don't work: see in your example how the birdhouse is placed at the same distance as some trees (above the birdhouse) that are far behind it. That's part of the AI mess seen on smartphones, where the simulated depth of field is wrong in many cases.

I think some of the smartphone stuff works because there are two cameras, so they actually can produce a realistic depth map. Obviously the AI stuff that works from a flat image is going to be just a best guess.

Just to clarify the OP: these are plain vanilla raw files one would get from a digital camera. They do not have a depth map; I need to mimic one using masks. Also, I am interested in other effects where a depth map would be useless, e.g. imitating a tilt-shift look with a non-perpendicular focal plane.

My interest in this started when I was looking at images by a famous photographer (whom I will not name here) with "bokeh" that was excessive and weird-looking even for medium-format large-aperture lenses, and had artifacts (e.g. a rail was blurred behind the focal plane, but not in front of it). Then I learned that this look comes from Lightroom tutorials, which use a single mask.

Now I am wondering how I can do better in darktable. Note that I am not aiming to replicate something crazy, like an f/0.95 lens or whatever; I just want a bit more tasteful, realistic-looking blur.

That depth map predates the days of 'AI-embedded-in-everything', hence its obvious errors, which you have kindly pointed out.

I'm wondering if there's some Fourier involvement in earlier depth-measurement functions …

It's actually possible to do it with just one lens, as Google explains here:

And I believe newer iPhones use their LiDAR sensor to build the depth map as well.

Wow, using the autofocus dual pixels to work out a depth map is cool. Really should watch those Marc Levoy lectures again.

I'm wondering if the phase-detection approach could be applied to modern SLR/mirrorless systems to store a depth map as metadata. This might have the potential to improve parametric masking even further.
Of course, this thought also raises questions about potential performance degradation and file-size increases…

Perhaps that data is already stored? I think I've seen some info about at least the active AF sensors, but I didn't check in depth (not really useful info in most cases).

There remains the limited resolution you'd get from those (relatively few) pixels.

The latest iPhones use LiDAR to capture depth and use it for AR or portrait blur. I'm no expert, but IMO this could be one of the ways to further improve big-camera photography. Maybe there could be a LiDAR hot-shoe attachment to work out the depth, with some sort of depth-map file to access later…

The cost and effectiveness are unknown to me, but perhaps this could work. You would certainly need to know the position of the LiDAR relative to the camera sensor, but beyond that, I don't know.

I found this on HN, seems relevant:
