Quick question on RT Richardson–Lucy implementation

It measures the orthogonality error, not the angle. Apparently, the stochastic argument relies on the assumption that independent high-dimensional vectors are nearly orthogonal (my understanding ends here). So they measure the orthogonality between the deblurred image X(t) at iteration t and the difference X_e(t) = X(t) - X(t-1) between two iterations, and track when it hits a minimum to stop the iterations. Once the two become independent, the RL deconvolution is only adding noise, not sharpness.
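A minimal sketch of that stopping rule as I read it (not RT's actual code; the flat initialization, the cosine measure, and the threshold handling are my assumptions):

```python
# Sketch of Richardson-Lucy with the orthogonality-based stopping rule:
# stop when |cos(angle)| between X(t) and X_e(t) = X(t) - X(t-1)
# passes its minimum, i.e. when the update is ~independent of the image.
import numpy as np
from scipy.signal import fftconvolve

def rl_deconvolve(observed, psf, max_iters=50):
    x = np.full_like(observed, observed.mean())  # flat initial estimate
    psf_flip = psf[::-1, ::-1]
    prev_err = np.inf
    for t in range(max_iters):
        x_prev = x
        # Standard RL multiplicative update.
        blurred = fftconvolve(x, psf, mode='same')
        ratio = observed / np.maximum(blurred, 1e-12)
        x = x * fftconvolve(ratio, psf_flip, mode='same')
        # Orthogonality error between X(t) and X_e(t) = X(t) - X(t-1).
        xe = (x - x_prev).ravel()
        cos = abs(np.dot(x.ravel(), xe)) / \
              (np.linalg.norm(x) * np.linalg.norm(xe) + 1e-12)
        if cos > prev_err:       # error passed its minimum: further
            return x_prev, t     # iterations only add noise, so stop
        prev_err = cos
    return x, max_iters
```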


The argument that independent random unit vectors in high dimensions are typically close to orthogonal is pretty nice.

Generate independent unit vectors A = X/|X| and B = Y/|Y|, where X and Y are standard Gaussian vectors in some large Euclidean space of dimension N. That is, each coordinate is an independent standard Gaussian. (This works because such vectors are isotropic.) By the law of large numbers, it is permissible for large N to replace |X| and |Y| by sqrt(N). So we now consider X·Y/N, where · denotes the inner product. This is a sum of mean-zero independent random variables, so again for large N it is approximately zero, with fluctuations about zero of order 1/sqrt(N), as given by the central limit theorem.
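A quick numerical check of this (a hypothetical demo, not from any paper): the cosine of the angle between two independent Gaussian vectors concentrates around zero with spread of about 1/sqrt(N).

```python
# Independent Gaussian unit vectors are nearly orthogonal in high
# dimension, with fluctuations of order 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 100, 10_000):
    trials = 1000
    x = rng.standard_normal((trials, n))
    y = rng.standard_normal((trials, n))
    # Cosine of the angle between A = X/|X| and B = Y/|Y|.
    cos = np.sum(x * y, axis=1) / \
          (np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1))
    print(f"N={n:>6}: std of cos = {cos.std():.4f}, 1/sqrt(N) = {n**-0.5:.4f}")
```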

For information: I’m currently working on the auto-calculation of the capture sharpening radius value.

For Bayer sensors it already works well in my tests (I tested with several hundred Bayer files from different cameras to make the calculation more robust). Of course there will be cases where it does not work well, namely 1) high-ISO files and 2) files with unfixed hot (stuck) pixels (dead pixels are no problem in this case).

In general it gives quite a good capture sharpening radius value.

Of course it adds a bit of processing time to capture sharpening if auto-calculation of the radius is enabled, but it’s really minor (on my 8-core from 2013, calculating the radius takes ~40 ms for a 100 MP PhaseOne file).

Now I have to do the same for xtrans before I push the changes to the capture sharpening branch…


Thanks for your work, Ingo; as always I’m hoping for a solution for xtrans sensors.
As a side note, any idea whether automatic hot/dead pixel filtering could be developed one day for xtrans? (Same for automatic raw CA correction, but I don’t want to hijack this thread even more.)


Now that RT 5.7 is released, I merged the current state of capture sharpening into dev. Auto-calculation of the RL radius is still WIP, though it is already present in ART…


Auto-calculation of the radius for Bayer and xtrans is now also in dev.
@Carmelo_DrRaw You can remove the capture sharpening branch from nightly builds


Thanks! I will check it out in a few weeks.

Some news: today I implemented a tiled version of capture sharpening, which gives a ~2x speedup. I still have to fix some small border issues and the progress bar, but I think it will be ready by Sunday.

Now the interesting part: the tiled processing would allow (with some additional code and one additional adjuster in the UI) using different radius values depending on the distance of a tile from the center tile, without any impact on the processing time of capture sharpening.
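As a hypothetical illustration of what such a per-tile radius could look like (the linear ramp and the `corner_boost` parameter are my inventions, not the actual adjuster):

```python
# Hypothetical per-tile radius ramp: the center tile keeps the base
# radius, the corner tiles get base_radius * (1 + corner_boost).
import math

def tile_radius(tx, ty, n_tiles_x, n_tiles_y, base_radius, corner_boost=0.5):
    cx, cy = (n_tiles_x - 1) / 2.0, (n_tiles_y - 1) / 2.0
    dist = math.hypot(tx - cx, ty - cy)   # tile distance to the center tile
    max_dist = math.hypot(cx, cy)         # distance of a corner tile
    return base_radius * (1.0 + corner_boost * dist / max_dist)

# Example: 30 x 20 tiles, base radius 0.75
print(tile_radius(0, 0, 30, 20, 0.75))    # corner tile  -> 1.125
print(tile_radius(14, 10, 30, 20, 0.75))  # near-center  -> ~0.77
```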

I will try that when the speedup is completed and pushed to dev…


What would be the benefit of this?

Currently it’s just a brainfart aiming to sharpen the outer (more blurred) regions of an image with a different (larger) deconvolution radius.

Clever idea!

How much cleverness for the adaptive deconvolution of various lenses? :stuck_out_tongue:

That would be extremely difficult without a priori depth data, unless your primary goal is to deconvolve diffraction.

In theory, if you had:

  1. Measured OOF PSF data for the lens (I posted a link to one of Prof Hank Dietz’s slide decks on the subject earlier in this thread)
  2. Two separate shots taken at different focus settings that could be analyzed

You could potentially determine depth (I believe this is how Panasonic’s DFD autofocus works) and then deconvolve the OOF PSF using that data. TBD how to handle occlusion.

Going the other way (applying an OOF PSF to an image that contains depth data) is much easier IF you have the depth data (such as from Canon’s dual pixel RAW). In fact, this is what the portrait mode of many newer phone cameras does in order to simulate a much wider aperture than the original optical system.
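To make that direction concrete, here is a hypothetical sketch of depth-dependent blur (a stack of Gaussian blurs selected per pixel by the depth map, which is roughly what a portrait mode does; a real OOF PSF would be aperture-shaped rather than Gaussian, and occlusion at depth edges is ignored):

```python
# Hypothetical depth-dependent blur: blur the image at several
# strengths, then pick per pixel from the stack based on a depth map.
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_portrait(image, depth, focus_depth, max_sigma=8.0, levels=8):
    """image, depth: 2-D float arrays of the same shape."""
    # Blur amount grows with distance from the focal plane.
    defocus = np.abs(depth - focus_depth)
    sigma_map = defocus / max(defocus.max(), 1e-9) * max_sigma
    # Pre-blur at a few discrete strengths (sigma 0 = sharp original).
    stack = np.stack([gaussian_filter(image, s)
                      for s in np.linspace(0.0, max_sigma, levels)])
    # Pick the nearest blur level for each pixel.
    idx = np.clip(np.rint(sigma_map / max_sigma * (levels - 1)).astype(int),
                  0, levels - 1)
    rows, cols = np.indices(image.shape)
    return stack[idx, rows, cols]
```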

Yes, that was my intention. Though I don’t have good example files to test with…

It’s in dev now.


Reverse portrait mode :laughing:
While smartphones aim to blur the background, RT aims to bring it into focus.

Would it be possible to set this as an option? It seems worthwhile for landscape-type photos, but not for portraits.

Can anyone explain why the diffraction would be greater radially outward from the center?
Also, I think that increasing the capture sharpening radius further from the center requires a proper CA correction first. Otherwise deconvolution can get messy…

You are right. It’s not diffraction. But usually the blur is not uniform over the image area. Why not try to use different radius values in different regions?

Yes, for Bayer-sensor images one should turn on raw CA correction.

@heckflosse Using the same PSF to counter different types of blurring seems like a very tricky thing to do…
Maybe it looks good and works well, but maybe we end up where the original discussion started: having a theoretically wrong approach to tackle a certain optical phenomenon.
So the tiled approach is really cool and may speed up the module as a whole, but I have my reservations otherwise.

The tiling could be a basis for using a different PSF per tile. For example, a 24 MP file currently has 30 × 20 tiles, each of which could use a different PSF. By taking an image of a chart with 600 small points (one point per tile), one could calculate (roughly) the PSF for each tile. For shooting scenarios with constant conditions (e.g. microscopy) that could work. But that’s pure theory right now…
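A hypothetical sketch of that calibration step (the grid layout and crop size are my assumptions): crop a small window around the brightest point in each tile and normalize it to get a rough per-tile PSF estimate.

```python
# Hypothetical per-tile PSF calibration from a chart shot: one small
# bright point per tile; the normalized crop around each point is a
# rough estimate of that tile's PSF.
import numpy as np

def per_tile_psfs(chart, n_tiles_x=30, n_tiles_y=20, half=7):
    h, w = chart.shape
    th, tw = h // n_tiles_y, w // n_tiles_x
    psfs = {}
    for ty in range(n_tiles_y):
        for tx in range(n_tiles_x):
            tile = chart[ty*th:(ty+1)*th, tx*tw:(tx+1)*tw]
            py, px = np.unravel_index(tile.argmax(), tile.shape)  # the point
            y0, x0 = ty*th + py, tx*tw + px
            crop = chart[max(0, y0-half):y0+half+1,
                         max(0, x0-half):x0+half+1]
            psfs[(tx, ty)] = crop / crop.sum()  # normalize to unit energy
    return psfs
```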