Supposedly, this could work:
- create a mask
- blurred_image_mask = blur(image * mask)
- blurred_mask = blur(mask)
- blurred_image = blurred_image_mask / blurred_mask (use an arbitrary value where blurred_mask = 0)
- apply blurred_image to base image via mask.
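The steps above can be sketched in a few lines of NumPy. This is a sketch under assumptions, not a definitive implementation: the function name `masked_blur`, the `eps` guard, and the choice of SciPy's `gaussian_filter` as the blur are mine; any blur kernel works, since the same kernel is applied to both the masked image and the mask.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def masked_blur(image, mask, sigma=5.0, eps=1e-6):
    """Blur the background (mask == 1) without the subject (mask == 0)
    bleeding darkness into it.

    image: float array (H, W), values in [0, 1]
    mask:  float array (H, W); 1 = background to blur, 0 = subject to keep
    """
    # Blur the image with the subject zeroed out; black from the 'hole'
    # bleeds into nearby background pixels, darkening them.
    blurred_image_mask = gaussian_filter(image * mask, sigma)
    # Blur the mask itself; near the hole it drops below 1 by exactly
    # the same proportion as the darkening above.
    blurred_mask = gaussian_filter(mask, sigma)
    # Normalize. Deep inside the hole blurred_mask ~ 0, so the value
    # there is arbitrary -- the final composite masks it out anyway.
    blurred_image = blurred_image_mask / np.maximum(blurred_mask, eps)
    # Composite: blurred background where mask = 1, original subject where 0.
    return mask * blurred_image + (1 - mask) * image
```

With a constant background, the output background is unchanged by the normalization (the darkening and the division cancel exactly), and subject pixels pass through untouched.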
The idea: take a pixel in the background, far from the subject you want to preserve. The mask is 1 there, so those areas of image * mask keep their original values, while there is a ‘black hole’ where the subject sits. After the blur, areas far from the ‘hole’ are blurred as usual; areas close to the hole are blurred too, but darkened by black from the ‘hole’ bleeding in.
Now, when you blur the mask itself, areas far from the hole remain white, the middle of the ‘hole’ stays black, and the areas around the hole’s edge turn grey, somewhere between 0 and 1.
Dividing the blurred masked image by the blurred mask, then:
- where the mask is white, far from the hole/subject → mask is white → no change, keep the blurred pixel
- where the mask is grey, close to the edge → the pixel is darkened by black bleeding in from the hole, but the blurred mask < 1 by the same factor → darkened pixel divided by e.g. 0.5 restores its brightness
- where the mask is black: we don’t care
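The grey-edge case can be checked numerically. A 1-D toy example (my own, with a flat background of brightness 0.8 next to the hole, and a 3-tap box blur standing in for any kernel): at the edge pixel the blur mixes in one black sample, so the blurred value is 2b/3 while the blurred mask is 2/3, and the division recovers b exactly.

```python
import numpy as np

b = 0.8
image = np.full(10, b)
mask = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=float)

# 3-tap box blur as a stand-in for any blur kernel.
kernel = np.ones(3) / 3
blurred_image_mask = np.convolve(image * mask, kernel, mode="same")
blurred_mask = np.convolve(mask, kernel, mode="same")

# Edge pixel (index 5): one black sample mixed in by the blur.
# blurred value = 2b/3, blurred mask = 2/3 -- the division cancels both.
edge = 5
ratio = blurred_image_mask[edge] / blurred_mask[edge]
```

Here `ratio` comes out equal to the original background brightness b, even though the raw blurred pixel was visibly darkened.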
Finally, composite the result of the division with the original image according to the mask.
There’s also something here, which may be useful: https://www.researchgate.net/publication/3557083_Normalized_and_differential_convolution