I have been experimenting with adding blur in post-processing that can credibly mimic an out-of-focus look, possibly for a larger aperture than I have access to.
I know how to create the actual blur (diffuse & sharpen is neat), and also how to mask my image accordingly.
But the practical problem I am running into is that areas further from the focus plane (practically, for my photography, more distant objects) need more blur, depending on the distance. Simply using a mask with a soft edge to combine the original and the large blur is not convincing.
A workaround I came up with is 3–4 instances of the same module, each adding more and more blur in more distant areas (radius increasing as we progress through the pipeline works best for me).
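For illustration, here is a rough Python sketch of that stacked approach (the masks, radii and scipy's gaussian_filter are just stand-ins for the hand-drawn masks and darktable's blur module):

```python
# Sketch of the "stacked blur passes" workaround: each pass blurs the image
# a bit more and is blended in only where its mask covers more distant areas.
import numpy as np
from scipy.ndimage import gaussian_filter

def stacked_blur(img, masks, sigmas):
    """img: float (H, W, 3); masks: list of float (H, W) maps in [0, 1],
    covering progressively more distant regions; sigmas: increasing radii."""
    out = img.copy()
    for mask, sigma in zip(masks, sigmas):
        blurred = np.stack(
            [gaussian_filter(out[..., c], sigma) for c in range(3)], axis=-1
        )
        # blend the blurred pass in only where the mask is active
        out = out * (1.0 - mask[..., None]) + blurred * mask[..., None]
    return out
```

Since each pass blurs the already-blurred result, the most distant areas accumulate blur from every pass, which is roughly the behaviour I am after.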
I wonder how people solve this (other than: get a lens with a larger aperture).
I'm looking for something similar as well! When the mask opacity is feathered, at 50% it is basically an equal mix of the strong blur and the sharp base.
I think you are also looking for something where the mask opacity would determine the blur radius at that location, instead of fading the blur using actual opacity.
This would work especially well for lens blurs.
I think the blurs module is designed to simulate bokeh, so it might be better than diffuse|sharpen. However, to get what you want (more blur with more distance) you basically need a depth map for the image, and no "normal" camera can do that.
It's a new feature being added to ON1… they are claiming it as part of their "AI"-ness rather than something derived from the camera metadata or settings, I believe…
Link just provided as an example, not advocating for ON1…
I have done something similar in GIMP, where I duplicated the layer, added a Gaussian blur, and then used a gradient mask to blend the sharp and blurred layers so that the distant landscape came out more blurred than the closer landscape. This technique would not be suitable for all images.
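In case it is useful, here is roughly the same idea as a small numpy sketch (the top-to-bottom gradient is just a placeholder for whatever mask fits the scene):

```python
# Sketch of the duplicate-layer + gradient-mask blend: one Gaussian blur,
# faded in with a vertical gradient (far = top of frame, near = bottom).
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_blur(img, sigma=8.0):
    h = img.shape[0]
    mask = np.linspace(1.0, 0.0, h)[:, None, None]   # 1 at top, 0 at bottom
    blurred = np.stack(
        [gaussian_filter(img[..., c], sigma) for c in range(3)], axis=-1
    )
    return img * (1.0 - mask) + blurred * mask
```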
I'll mention that ImageMagick has non-uniform blur facilities. I show some examples at Selective blur. However, the blurs are Gaussian, rather than attempting to replicate photographic out-of-focus rendering.
Under the hood, IM creates a set of kernels. The blur at each pixel uses the appropriate kernel, as defined by the blur mask.
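To give a feel for the idea (a simplified sketch, not ImageMagick's actual implementation): precompute a handful of blur levels and interpolate between them per pixel, with the mask acting as a blur-radius map.

```python
# Simplified mask-driven variable blur: the mask value at each pixel selects
# (and interpolates between) precomputed blur levels of increasing sigma.
import numpy as np
from scipy.ndimage import gaussian_filter

def variable_blur(img, radius_mask, max_sigma=16.0, levels=6):
    """radius_mask in [0, 1]: 0 keeps the pixel sharp, 1 applies max_sigma."""
    sigmas = np.linspace(0.0, max_sigma, levels)
    stack = [img] + [
        np.stack([gaussian_filter(img[..., c], s) for c in range(3)], axis=-1)
        for s in sigmas[1:]
    ]
    stack = np.stack(stack)                    # (levels, H, W, 3)
    pos = radius_mask * (levels - 1)           # fractional blur level per pixel
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, levels - 1)
    t = (pos - lo)[..., None]
    yy, xx = np.indices(radius_mask.shape)
    return (1.0 - t) * stack[lo, yy, xx] + t * stack[hi, yy, xx]
```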
Interesting link, although I don't use IM much personally. It occurs to me that a useful mask might be an MFT magnitude plot or the inverse from G'MIC.
They don't work: see in your example how the birdhouse is placed at the same distance as some trees (above the birdhouse) that are far behind it. That's part of the AI mess seen on smartphones, where the simulated depth of field is wrong in many cases.
I think some of the smartphone stuff works because there are two cameras, so they actually can produce a realistic depth map. Obviously the AI stuff that uses a flat image is going to be just a best guess.
Just to clarify the OP: these are plain vanilla RAW files one would get from a digital camera. They do not have a depth map; I need to mimic one using masks. Also, I am interested in other effects where a depth map would be useless, e.g. imitating a tilt-shift look with a non-perpendicular focal plane.
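For the tilt-shift case, something like this toy mask (all numbers made up) could drive the blur amount instead of a plain opacity fade: blur grows with distance from a tilted focus line across the frame.

```python
# Toy "tilted focal plane" mask: 0 on the focus line, rising to 1 away from it.
import numpy as np

def tilted_plane_mask(h, w, angle_deg=15.0, offset=0.5, falloff=0.4):
    yy, xx = np.mgrid[0:h, 0:w]
    a = np.deg2rad(angle_deg)
    # signed distance from a line passing through row offset*h at angle_deg
    dist = (yy - offset * h) * np.cos(a) - (xx - w / 2) * np.sin(a)
    return np.clip(np.abs(dist) / (falloff * h), 0.0, 1.0)
```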
My interest in this started when I was looking at images by a famous photographer (whom I will not name here) with some "bokeh" that was excessive and weird-looking even for medium-format large-aperture lenses, and it had artifacts (e.g. a rail was blurred behind the focal plane, but not in front of it). Then I learned that this look comes from Lightroom tutorials, which use a single mask.
Now I am wondering how I can do better in darktable. Note that I am not aiming to replicate something crazy like an f/0.95 lens; I just want a bit more tasteful, realistic-looking blur.
I'm wondering if the phase-detection approach could be applied to modern SLR/mirrorless systems to store a depth map as metadata. This might have the potential to improve parametric masking even further.
Of course, this thought also raises questions about potential performance degradation and file-size increases…
Perhaps that data is already stored? I think I've seen some info about at least the active AF sensors, but I didn't check in depth (it's not really useful info in most cases).
There remains the problem of the limited resolution you'd get from those (relatively few) pixels.
The latest iPhones use LiDAR to capture depth and use it for AR or portrait blur. I'm no expert, but IMO this could be one of the ways to further improve big-camera photography. Maybe there could be a LiDAR hot-shoe attachment to work out the depth, and later you could access some sort of depth-map file…
The costs and effectiveness are unknown to me, but perhaps this could work. You would certainly need to know the position of the LiDAR relative to the camera sensor, but I don't know what else.