Computational DOF and Bokeh

Ok, I never thought about the fact that most cameras already have pretty accurate depth sensors for AF. Dual pixel improves the image coverage by a mile, but that data is still not available to the user.

The k-lens approach is essentially light field processing, which can do refocusing/artificial bokeh without actually computing a depth map. Of course 3D (ok, 2.5D) imaging is also possible with such a lens.

@Tobias I’m not sure pixel shift would work; the baseline is very small, but that’s just a gut feeling.

In theory it should also be possible to calculate a depth map by depth-from-defocus, which requires blur estimation. The current focus detection mode is based on edge detection if I’m not mistaken, but its gradient classification might be a basis for blur estimation. I think our local hero @anon41087856 has also fought with deconvolution sharpening at some point; that might be another way in.
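
Just to illustrate what I mean by blur estimation (this is not how the focus detection mode works, just a crude sketch of the idea): you can use the local variance of the Laplacian as a sharpness proxy, which drops where the image is defocused. The names and the window size below are my own assumptions, and a single image only gives a relative, uncalibrated depth cue:

```python
import numpy as np
from scipy import ndimage

def local_sharpness(image, window=31):
    """Crude per-pixel sharpness proxy: local variance of the Laplacian.

    Low values roughly correspond to stronger defocus, so the inverse can
    serve as a relative (uncalibrated) depth-from-defocus cue.
    """
    lap = ndimage.laplace(image.astype(np.float64))
    mean = ndimage.uniform_filter(lap, size=window)
    mean_sq = ndimage.uniform_filter(lap * lap, size=window)
    return mean_sq - mean * mean  # local variance of the Laplacian

# Relative blur map: large where the image is smooth/defocused.
# blur_map = 1.0 / (local_sharpness(gray) + 1e-6)
```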

Does anyone know if it is possible to get the AF information with magic lantern? (I have no Canon.)

If you want computational bokeh you need to do it in the camera, just as the phones use multiple exposures or even multiple lenses to help calculate the depth map. I think doing it after the fact from a single photo might not get you results as good as the phones get.

Apparently it’s already contained in the raw file:
"During Dual Pixel RAW shooting, a single RAW file saves two images into the file. One image consists of the A+B combined image data and the other only the A image data. This means the Dual Pixel RAW files contains both the normal image and also any parallax information, which can be measured and subject distance information extrapolated. As Dual Pixel RAW images contain two images they are therefore double the file size of normal RAW images. "

Source: Dual Pixel RAW

At least in the past it was a problem that one raw frame is image A while the next raw frame is image A + B, both encoded with the same bit depth, which means that A + B is often clipped and B cannot be recovered completely…

As with anything else, it’s just a matter of getting the maths right and respecting light transport models.

I’ve just stumbled across a video which illustrates how Dual Pixel AF works.

Basically it takes two slightly shifted/parallaxed images and then calculates the depth of the subject.
Now, I don’t have an EOS R or 5dmk4 but I presume that Dual Pixel RAW saves both images in a raw file.
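
To make that concrete, here is roughly what “calculating the depth” from the two sub-images could look like if we had them as plain arrays: naive 1-D block matching along the dual-pixel baseline. The real AF hardware and the phone pipelines are of course far more sophisticated, and the function and variable names here are just my assumptions:

```python
import numpy as np

def dual_pixel_disparity(img_a, img_b, block=16, max_shift=4):
    """Very naive per-block disparity between the two dual-pixel sub-images.

    The dual-pixel baseline is tiny, so the expected shifts are only a few
    pixels and purely horizontal. Returns a coarse disparity map that is
    monotonically related to defocus (and thus to relative depth).
    """
    h, w = img_a.shape
    disp = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = img_a[y:y + block, x:x + block].astype(np.float64)
            best, best_err = 0, np.inf
            for s in range(-max_shift, max_shift + 1):
                x0 = x + s
                if x0 < 0 or x0 + block > w:
                    continue
                cand = img_b[y:y + block, x0:x0 + block].astype(np.float64)
                err = np.sum((ref - cand) ** 2)
                if err < best_err:
                    best, best_err = s, err
            disp[by, bx] = best
    return disp
```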

This could be the vector the industry needs to attack the computational DoF problem.

EDIT: except Canon states that the two images A and B are not saved as they were shot. They say the first image is the A and B image data combined, and the second image is just the B image data. The thing that worries me is that they combine both images’ data in the first frame.

Now, does this mean everything falls apart, or can a depth map still be generated from the A+B and B image data?

If you know B, and A+B, then you can calculate A = (A+B) - B.

But the A+B image might be lossy-compressed, or something. If you can link to such a raw file, we can take a look.
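
If someone does get sensible arrays out of such a file, the recovery itself is a one-liner; the only subtlety is flagging pixels where the combined frame has hit the white level, because there the subtraction is no longer valid. A minimal sketch (the 14-bit white level is just a placeholder):

```python
import numpy as np

def recover_a(a_plus_b, b, white_level=16383):
    """Recover the A sub-image from the stored A+B and B frames.

    Where A+B is clipped at the white level, the subtraction no longer
    holds, so those pixels are masked out rather than trusted.
    """
    a = a_plus_b.astype(np.int32) - b.astype(np.int32)
    valid = a_plus_b < white_level  # clipped pixels cannot be recovered
    return np.where(valid, a, 0), valid
```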

I’d say it’s not the fault of the reduced depth of field caused by the wide lens aperture, but of the angle of the focal plane (the plane perpendicular to the lens axis). It is aligned with her spine, which is not useful for this seated pose. That is not really annoying foreground blur; it is lens-subject misalignment.

I don’t own a camera that can produce Dual Pixel RAW images, but a guy I follow on Twitter has an EOS R, so I’ve DMed him asking whether he would mind sending us one here. I’m hopeful that we can get a sample :slight_smile:

That’s the flaw in Canon’s dual-pixel approach: if A is not clipped and B is not clipped, but A+B is clipped, this does not work.

It’s silly, because they could have made Dual Pixel RAW contain the A and B images and done the A+B combination in Canon EOS Utility, or in post in general. Why they decided on this approach blows my mind.
Then again, maybe it’s not an issue at all; we don’t have a sample file yet.

You can download some from here

Those are CR2 files. Is it the same thing in CR3?

Nonetheless, @snibgo, my guy @FilmKitNet (FilmKit) from Twitter has sent us a Dual Pixel RAW .cr3 under copyleft here: https://drive.google.com/open?id=1ngo8A5sVySdjBoS0_KHaYcVf_uxlorre

Thanks @FilmKitNet ! This is the second time I’ve bugged him for EOS R files :smiley:

Yes, same thing. You can switch between the two raw frames (A+B and B) in RT 5.8 if you want to inspect them.

A bit of a side note to the current discussion: if the composition allows it, you can keep the same background blur but get more depth of field by switching to a longer lens. I guess in the cases shown, just front-focusing a bit could also help. Maybe Eye AF is not always the optimal choice. :slight_smile:
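
For anyone who wants to play with the numbers, the usual thin-lens approximations show the trade-off: at the same framing, 100 mm at f/4 gives roughly the same background blur disc as 50 mm at f/2 but about twice the depth of field. This is only a back-of-the-envelope sketch, and the distances and the 0.03 mm circle of confusion are made-up example values:

```python
def dof_and_background_blur(f_mm, N, subject_dist_mm, coc_mm=0.03):
    """Thin-lens approximations: total depth of field and the blur disc
    (on the sensor) of a background at infinity, both in mm."""
    dof = 2.0 * N * coc_mm * subject_dist_mm**2 / f_mm**2  # valid for s >> f
    magnification = f_mm / (subject_dist_mm - f_mm)
    background_blur = (f_mm / N) * magnification  # aperture diameter * magnification
    return dof, background_blur

# Same framing: double the focal length and distance, stop down two stops.
print(dof_and_background_blur(50, 2.0, 2000))   # ~ (192 mm DOF, 0.64 mm blur disc)
print(dof_and_background_blur(100, 4.0, 4000))  # ~ (384 mm DOF, 0.64 mm blur disc)
```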

Regarding the actual topic of simulating bokeh: I’m not sure I’d call it easy, but it depends on what the expectations are, of course. In real-time computer graphics (games) it’s quite common to just draw discs at known light sources or at the locations of bright pixels, combined with depth peeling.
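
A toy version of that scatter/splat approach, ignoring depth peeling, occlusion and the restriction to bright pixels; `image` and `depth` are assumed to be single-channel arrays of the same size, and the CoC scaling is arbitrary:

```python
import numpy as np

def scatter_bokeh(image, depth, focus_depth, coc_scale=20.0, max_radius=15):
    """Toy scatter ('splat') bokeh: every pixel is drawn as a disc whose
    radius grows with its depth distance from the focal plane.

    Real-time engines only splat the brightest pixels and resolve
    occlusion with depth peeling; this just accumulates everything.
    """
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float64)
    weight = np.zeros_like(out)
    yy, xx = np.mgrid[-max_radius:max_radius + 1, -max_radius:max_radius + 1]
    for y in range(h):
        for x in range(w):
            r = min(max_radius, coc_scale * abs(depth[y, x] - focus_depth))
            disc = (xx**2 + yy**2) <= max(r, 0.5)**2  # disc-shaped kernel
            y0, y1 = max(0, y - max_radius), min(h, y + max_radius + 1)
            x0, x1 = max(0, x - max_radius), min(w, x + max_radius + 1)
            k = disc[(y0 - y + max_radius):(y1 - y + max_radius),
                     (x0 - x + max_radius):(x1 - x + max_radius)]
            out[y0:y1, x0:x1] += image[y, x] * k
            weight[y0:y1, x0:x1] += k
    return out / np.maximum(weight, 1e-6)
```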

I think for a really high quality computational approach a synthetic aperture built from many shots could be fun. It might even be possible to generate most of the exposures needed by computing the optical flow on a few and interpolating. It could also be fun with a drone, to simulate a truly gigantic lens for a miniature effect. :slight_smile:
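
A minimal shift-and-add sketch of that synthetic-aperture idea, assuming the frames are already captured (or interpolated via optical flow) and that we know the per-frame offsets which register the desired focus plane; the integer-shift simplification and all names are my assumptions:

```python
import numpy as np

def synthetic_aperture(frames, offsets):
    """Shift-and-add refocusing: each frame is translated so the chosen
    focus plane aligns across frames, then all frames are averaged.

    frames  : list of 2-D arrays taken from slightly different positions
    offsets : per-frame (dy, dx) integer shifts that register the focus
              plane (e.g. from optical flow); everything off that plane
              stays misaligned and therefore blurs out.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame, (dy, dx) in zip(frames, offsets):
        acc += np.roll(frame, shift=(dy, dx), axis=(0, 1))
    return acc / len(frames)
```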