Ok, I never considered that most cameras already have fairly accurate depth sensing for AF. Dual pixel improves the image coverage by a mile, but that data still isn't available to the user.
The k-lens approach is essentially light field processing, which can do refocusing/artificial bokeh without ever computing an explicit depth map. Of course 3D (well, 2.5D) imaging is also possible with such a lens.
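To illustrate what I mean by refocusing without a depth map: the classic shift-and-add trick over the sub-aperture views of a light field. This is just a minimal NumPy sketch (integer shifts, no interpolation), not how any real light field pipeline is implemented:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4D light field.

    lightfield: array of shape (U, V, H, W) -- grid of sub-aperture views
    alpha: refocus parameter; each view is shifted in proportion to its
           offset from the central view, then all views are averaged.
           Scene points at the depth matching alpha line up and come out
           sharp; everything else gets averaged into blur (bokeh).
    """
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = alpha * (u - cu)
            dx = alpha * (v - cv)
            # integer shift for simplicity; real code would interpolate
            shifted = np.roll(lightfield[u, v],
                              (round(dy), round(dx)), axis=(0, 1))
            out += shifted
    return out / (U * V)
```

Sweeping `alpha` moves the synthetic focal plane through the scene; no depth is ever estimated.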
@Tobias I’m not sure pixel shift would work; the baseline is very small. But that’s just gut feeling.
In theory it should also be possible to calculate a depth map by depth-from-defocus, which requires estimating the local blur. If I’m not mistaken, the current focus detection mode is based on edge detection, and its gradient classification might serve as a base for blur estimation. I think our local hero @aurelienpierre has also fought with deconvolution sharpening at some point; that might be another way in.
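As a toy version of that blur-estimation step: a per-tile sharpness score from the variance of a Laplacian response. In-focus regions keep their high frequencies and score high, defocused ones score low; a real depth-from-defocus method would fit an actual blur radius per region rather than this crude proxy. The function name and tile size are my own invention, not anything from darktable:

```python
import numpy as np

def local_sharpness(img, tile=16):
    """Crude per-tile blur estimate: variance of a 3x3 Laplacian.

    Returns an (H // tile, W // tile) map; higher values mean sharper
    (less defocused) content. Relative blur across the frame is the raw
    material a depth-from-defocus method would turn into relative depth.
    """
    # 3x3 Laplacian via finite differences (np.roll wraps at the edges,
    # which is fine for a sketch)
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    H, W = img.shape
    h, w = H // tile, W // tile
    # variance of the Laplacian within each tile
    tiles = lap[:h * tile, :w * tile].reshape(h, tile, w, tile)
    return tiles.var(axis=(1, 3))
```

The gradient classification from the focus detection code could plug into the same slot: any per-region blur/sharpness score gives you an ordering, which is most of the way to a (coarse) depth map.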