Computational DOF and Bokeh

This may be a controversial topic, but let’s face it.
Computational depth of field and bokeh produce amazing results on smartphones, to the point that most people can’t tell the difference between a DSLR shot and a smartphone shot. People are ditching event photographers left and right in favor of someone just capturing images with their phone.

Now while we spend thousands of dollars on lenses, most people get better results on their phone, in terms of bokeh and background separation, via software.

Now my question is: how do I do computational DoF and bokeh in Darktable the way an iPhone does?
It seems to me that if I could do that, I could have high-resolution images that also have great background separation and bokeh, without needing a $2k lens.

And the result would be better than on an iPhone because I’d have greater control when masking out the subject.
And the result would be better than with a $2k lens because it wouldn’t suffer from foreground blur when that’s unwanted.

Essentially, what I’m wondering is: when will pro FOSS software get that ability?

This is a good thread about it:

Also, I know that this bokeh and background separation trend is overhyped and not artistic, but it sells photos. It sells them A LOT!

Only when the photographer has made the mistake of choosing the wrong aperture.

If you close the aperture, you lose background separation too, but sure, I agree with you.

The point is, computational DoF saves time and money. It gets you professional-looking results with completely amateur gear.
I can see a world where people import their images to their phones for editing if desktop software can’t keep up.
And I don’t want to edit my images on a phone, lol.

If someone is happy with computational DOF, then be happy. But I don’t see the point of investing time and money into software functions that simulate something you can already control on your camera by opening or closing the aperture.

And I’ve never heard anybody complain about “annoying foreground blur”. All I see in the screenshots (not even at original size) is this:

  • The A7 image has much more detail in the jacket texture and in the red hat.
  • The computational bokeh (iPhone) has some flaws in the steps on the left side: the second step from the bottom is blurrier at the left image border than next to the jacket.

And I have a conspiracy theory: the “annoying foreground blur” looks very unnatural to me, as if it had been added afterwards with some kind of Gaussian blur (I did not see the original-size image).

I wouldn’t waste my time on these “Camera vs Smartphone” videos on YouTube.


You didn’t understand me.

The point is that you simply can’t get that bokeh on a DSLR without a full-frame camera and an f/1.2 lens.
On the other hand, when using a phone you simply can’t get high enough resolution and good enough masking.

There is a solution though:
With computational DoF in, for example, Darktable, you could use a crop-sensor DSLR with any lens at f/8, where sharpness is on average at its best for any lens, and still get background separation and great-looking bokeh in post.
It’s a win-win.
You get everything except the satisfaction of having captured that shot in camera, without editing.
But who cares, it would be an awesome professional-looking result with the cheapest gear possible.
It would really be disruptive too: suddenly, achieving good background separation and bokeh wouldn’t be an issue or a hardware limitation.

Of course it’s a full-frame DSLR. That’s not even the point; I’m not advocating for phone photography here but for combining computational DoF with crop DSLRs and average lenses.

Crop-sensor cameras produce sharp results too, but good luck achieving this level of background separation (or better) on them without computational DoF.

Of course, but if the masking were done manually or partially manually, it wouldn’t be an issue. That’s why I’d like computational DoF to be available in software like Lightroom and Darktable.

I have been thinking about that for a while. Achieving a realistic lens blur is easy; the challenge is to get a depth map from the picture that lets you dial the effect up or down depending on the area, and to blend it nicely to keep the transitions realistic.

I have several reference papers and ideas to test. It’s low on my to-do list though.
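To make the first half concrete, here is a minimal sketch (my own illustration, not darktable code) of depth-driven blur: pre-blur the image at a few radii and blend between the two nearest levels per pixel. It assumes `image` is a float RGB array in [0, 1] and `depth` a normalised depth map; real lens blur is disc-shaped rather than Gaussian, but the structure is the same.

```python
# Sketch only: blend a stack of pre-blurred copies of the image
# according to a per-pixel depth map (0 = in focus, 1 = maximum blur).
import numpy as np
import cv2

def fake_dof(image, depth, max_sigma=25.0, levels=6):
    sigmas = np.linspace(0.0, max_sigma, levels)
    # Pre-blurred copies of the whole frame, one per blur level.
    stack = [image if s == 0 else cv2.GaussianBlur(image, (0, 0), sigmaX=s)
             for s in sigmas]
    # Fractional blur level per pixel, then blend the two nearest levels.
    level = np.clip(depth, 0.0, 1.0) * (levels - 1)
    lo = np.floor(level).astype(int)
    hi = np.minimum(lo + 1, levels - 1)
    t = (level - lo)[..., None]
    out = np.zeros_like(image)
    for i in range(levels):
        out += np.where((lo == i)[..., None], (1.0 - t) * stack[i], 0.0)
        out += np.where((hi == i)[..., None], t * stack[i], 0.0)
    return out
```

The hard part, as said above, is getting a usable `depth` in the first place.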


I really don’t know if this is even relevant, but AFAIK depth maps are required for the 3D Ken Burns effect from a single image in video, and four people from Adobe recently published a paper about exactly that. Adobe even let them open-source the implementation.
Is this of any use to what we’re talking about?

Here’s the paper:

Here’s the GitHub repo:

Also there were some allegations about plagiarism or something and that’s how I’ve found out about this paper lol:
https://richardt.name/publications/photopopup/
This guy even implemented DoF bluring.

Note that I don’t really understand any of it, but I still read it and follow it.

To complement what @anon41087856 just wrote, AFAIK phones have hardware support for computing a reasonable depth map (most often just a second lens). Without that, I think it would be hard to produce something convincing. Of course, I would be happy to be shown otherwise :slight_smile:

Parallax could be used to build a depth map, but that paper uses a deep learning network, which is… not for tomorrow in darktable.
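For reference, a bare-bones version of the parallax route with plain OpenCV block matching might look like this sketch. It assumes an already rectified pair (the file names are placeholders), and phone pipelines refine this a lot further.

```python
# Minimal stereo-to-depth-map sketch: block matching on a rectified pair.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Classic (non-learned) block matcher; disparities come back as fixed point (x16).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0

# Larger disparity = closer subject; rescale just to save it as an 8-bit map.
depth_map = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("depth.png", depth_map.astype("uint8"))
```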


I think this would be a perfect task for a dedicated neural net.
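Single-image depth networks already exist off the shelf; as a rough sketch (using MiDaS via torch.hub as an example of such a network, not the model from the paper above), running one looks roughly like this:

```python
# Sketch: estimate a relative depth map from a single photo with MiDaS.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
input_batch = transforms.small_transform(img)

with torch.no_grad():
    prediction = midas(input_batch)
    # Resize the network output back to the photo's resolution.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

depth = prediction.cpu().numpy()  # relative (inverse) depth, not metric
```

Whether something like that could ever run inside darktable is, as said above, another question entirely.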

Sorry… I replied before reading this one :slight_smile:

@KristijanZic

While we wait for neural networks, deep learning, or even quantum computing to arrive, you may wish to try the Brenizer method.

It gives you most of what you want: taking pictures with a not-so-expensive lens (though a decent-quality lens is preferable), getting high quality and a very shallow depth of field, and keeping the raw processing program you like the most. The only extras are a few additional pictures of the scene and an application to create panoramas.
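To put rough numbers on what the stitching buys (my example figures, not measurements): the stitched panorama keeps the physical aperture diameter of the long lens while widening the field of view, so the equivalent f-number drops by the same factor as the equivalent focal length.

```python
# Example only: 85 mm f/1.8 frames stitched into a panorama whose field of
# view matches a 28 mm lens on the same sensor.
shot_focal_mm = 85.0
shot_fnumber = 1.8
pano_equiv_focal_mm = 28.0  # focal length matching the stitched field of view

crop = shot_focal_mm / pano_equiv_focal_mm   # how much wider the panorama is
equiv_fnumber = shot_fnumber / crop          # same aperture diameter, wider view
print(f"Renders like {pano_equiv_focal_mm:.0f} mm at f/{equiv_fnumber:.2f}")
# -> "Renders like 28 mm at f/0.59"
```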

Look at the comment I wrote just before yours. I listed implementations that require nothing but a single image. :smile:
I’m convinced there should be simpler solutions out there too.

The Brenizer method sucks. You need a static subject, a static background, and some time to shoot your array of frames. Then you need to stitch those pictures. Then you lose a lot of pixels and storage space.

The results are sometimes nice, but it’s an awful workflow and you really need to be lucky with the right conditions. It’s really hit or miss.

Don’t get me wrong, it’s a clever hack, but I won’t advertise it as a sensible workflow, nor bend the software to speed it up.


Yeah, I know about that and it’s great for static subjects but not so much for live events.

The thing that prompted this post is that I’ve seen our country’s presidential candidates tweeting photos from their events, captured by some third person (a campaign leader), of them giving speeches and such.

And it pains me to admit it, but with all the flaws, it looked awesome (or good enough).
So if it’s good enough for our current president, why wouldn’t it be good enough for me to use some sort of fake DoF?

The thing is, those are all moving subjects :confused:


I’ll check the paper, thanks! Until then, I remain skeptical.

I’ve thought about this too, and at the moment I see several different (theoretical) ways to get depth maps:

  1. Create a lens with a prism and split the image into 9 sub-images.
    Pro:
    A prototype is ready for tests: https://www.k-lens.de/
    @houz works for K-Lens and the software they use looks very familiar :wink: : https://www.k-lens.de/en/produkt-02#&gid=1125898543&pid=3
    Con:
    The image size is reduced (K-Lens expects to keep ~2/3 of the original image size).
    You need this extra lens.
  2. Use the pixel shift that some cameras offer.
    Pro: The hardware is ready.
    Con: I have no clue whether it works for creating depth maps.
  3. Add a LiDAR to the camera, like a flash.
    Pro: LiDARs are getting much cheaper at the moment because they are needed for autonomous driving (https://www.blickfeld.com/, https://www.livoxtech.com, or the heise Autos article “Bosch will Laserradar entwickeln” [Bosch wants to develop laser radar]).
    Con: I haven’t heard of anyone even trying to use this for photography.
    It needs synchronisation with the camera.
  4. A camera array (options 1 and 4 rely on the same triangulation geometry; see the sketch below the list).
    Pro: High image resolution.
    Con: Needs some kind of synchronisation.
    Expensive.
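The maths options 1 and 4 rely on is plain triangulation: depth = focal length × baseline / disparity. A toy illustration with made-up numbers:

```python
# Toy triangulation: convert a measured pixel disparity between two views
# into depth. The focal length (in pixels) and baseline are example values.
def depth_from_disparity(disparity_px, focal_px=2800.0, baseline_m=0.10):
    if disparity_px <= 0:
        return float("inf")  # no measurable parallax
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(40.0))  # -> 7.0 m for a 10 cm baseline
```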

I played a little bit with the demo images from K-Lens and posted the result here:
(The quality of the depth map was not perfect, but it was one of the first test images.)

If you are interested in getting the depth map out of a Samsung Galaxy S10, I blogged the exiftool command to extract it and import it into GIMP here:

If your camera has a focal length similar to your phone’s and you shot the same scene with both, could you map the depth from one onto the other?
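One hedged way to try that idea: align the phone shot to the camera frame with feature matching and a homography, then warp the phone’s depth map with the same transform. File names below are placeholders, and a homography ignores the parallax between the two viewpoints, so it is only an approximation.

```python
# Sketch: reuse a phone's depth map on a camera image of the same scene.
import cv2
import numpy as np

cam = cv2.imread("camera.jpg", cv2.IMREAD_GRAYSCALE)
phone = cv2.imread("phone.jpg", cv2.IMREAD_GRAYSCALE)
phone_depth = cv2.imread("phone_depth.png", cv2.IMREAD_GRAYSCALE)

# Match features between the two photos.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(phone, None)
kp2, des2 = orb.detectAndCompute(cam, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:500]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the phone's depth map into the camera image's geometry.
aligned_depth = cv2.warpPerspective(phone_depth, H, (cam.shape[1], cam.shape[0]))
cv2.imwrite("aligned_depth.png", aligned_depth)
```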

Also, LiDAR is well within financial reach: https://www.amazon.com/MakerFocus-Single-Point-Ranging-Pixhawk-Compatible/dp/B075V5TZRY/ref=mp_s_a_1_3?keywords=lidar+sensor&qid=1578772604&sr=8-3

I don’t think a hardware approach should be required, but the LiDAR is definitely interesting.

Now, can LiDAR damage a camera?

There are some suggestions that, depending on the wavelength, LiDAR can damage eyes. You can always buy a new camera, but…

A depth map should be obtainable from Canon’s dual-pixel cameras. Canon has a patent for quad-pixel autofocus, which would make it even better.

Combined with machine learning, I think high-quality depth maps and artificial bokeh will be possible.