Quick question on RT Richardson–Lucy implementation

Have you considered clamping the RL output to the local min and max of the original image to prevent halos?

[screenshot: the formula from the paper, with its notation defined below]

It's written just below the formula that <Q, P> is the scalar product of the Q and P vectors, i.e. <Q, P> = P_0 Q_0 + P_1 Q_1 + … + P_n Q_n. |Q| should be the L2 vector norm, such that |Q| = \sqrt{|Q_0|^2 + |Q_1|^2 + … + |Q_n|^2}. So, writing that in C, where P and Q are two vectors both of size m * n, you get:

#include <math.h> // for fabsf, asinf, sqrtf

float scalar_prod = 0.0f;           // < Q, P >
float norm_P = 0.0f, norm_Q = 0.0f; // squared norms |P|^2 and |Q|^2
for(int i = 0; i < m; ++i)
  for(int j = 0; j < n; ++j)
  {
    scalar_prod += Q[i * n + j] * P[i * n + j];
    norm_Q += Q[i * n + j] * Q[i * n + j];
    norm_P += P[i * n + j] * P[i * n + j];
  }

float ADE = fabsf(asinf(scalar_prod / (sqrtf(norm_P) * sqrtf(norm_Q))));

Same in Python with NumPy arrays:

import numpy

P = numpy.array(…)
Q = numpy.array(…)
ADE = numpy.abs(numpy.arcsin(
    numpy.sum(P * Q) / (numpy.sqrt(numpy.sum(P * P)) * numpy.sqrt(numpy.sum(Q * Q)))))
Notice you can skip the abs for performance if your picture is properly scaled in [0; 1]: with non-negative pixel values the scalar product is non-negative, so the arcsin is already non-negative.

You don't need to do that on the whole image; you can extract just the center area as a control sample.
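For instance, a minimal NumPy sketch of grabbing such a control sample (the helper name and the 25% crop fraction are arbitrary choices, not from any actual code):

import numpy as np

def center_crop(img, frac=0.25):
    # Return the central (frac*h) x (frac*w) portion of a 2-D image.
    h, w = img.shape
    ch, cw = max(1, int(h * frac)), max(1, int(w * frac))
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top + ch, left:left + cw]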


The domain of the arcsin function is the image of the sin function, which is the interval [-1, 1]. Therefore, I think that |Q| must be the square root of <Q, Q>, not the formula above. (The Cauchy–Schwarz inequality tells us that the ratio then has modulus at most one.)
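Spelled out, with \|Q\|_2 = \sqrt{\langle Q, Q \rangle}, the Cauchy–Schwarz inequality guarantees the argument of the arcsin stays in its domain:

|\langle P, Q \rangle| \le \|P\|_2 \, \|Q\|_2 \quad\Longrightarrow\quad \langle P, Q \rangle / (\|P\|_2 \, \|Q\|_2) \in [-1, 1]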

Right, I didn't do bounds checks and assumed Euclidean (L2) norms, which are usually written ||x|| or |x|_2. You are right.

Yet another proof that explaining the notation in papers is never a luxury.

There is something fishy with the formula, since the way to compute the angle between two vectors uses \arccos, not \arcsin: File:Inner-product-angle.svg - Wikipedia

It measures the orthogonality error, not the angle. Since \arcsin(x) = \pi/2 - \arccos(x), the \arcsin of the normalized scalar product is zero exactly when the vectors are orthogonal, so it tracks the deviation from 90° rather than the angle itself. Apparently, the stochastic stuff relies on the assumption that independent vectors are orthogonal (my understanding ends here). So they measure the orthogonality between the deblurred image X(t) at iteration t and the difference X_e(t) = X(t) - X(t-1) between two iterations, and track when it hits a minimum to stop the iterations. When they become independent, it means the RL deconvolution only adds noise, not sharpness, anymore.
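A rough Python sketch of that stopping rule, assuming a hypothetical one-step helper rl_step, an initial estimate x0, and an iteration cap max_iters (none of these names come from the paper or from RT):

import numpy as np

def ade(p, q):
    # Orthogonality error |arcsin(<p, q> / (|p| * |q|))| between two images.
    dot = np.sum(p * q)
    return np.abs(np.arcsin(dot / (np.sqrt(np.sum(p * p)) * np.sqrt(np.sum(q * q)))))

x_prev = x0.copy()
best = np.inf
for t in range(max_iters):
    x = rl_step(x_prev)       # one RL iteration (hypothetical helper)
    err = ade(x, x - x_prev)  # orthogonality error vs. the last increment
    if err > best:            # past the minimum: RL is only adding noise now
        break
    best, x_prev = err, x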


The argument that independent random unit vectors in high dimensions are typically close to orthogonal is pretty nice.

Generate independent unit vectors A = X/|X| and B = Y/|Y|, where X and Y are standard Gaussian vectors in some large Euclidean space of dimension N; that is, each coordinate is an independent standard Gaussian. (This works because such vectors are isotropic.) By the law of large numbers, for large N it is permissible to replace |X| and |Y| by sqrt(N). So we now consider X*Y/N, where * denotes the inner product. This is a sum of mean-zero independent random variables, so again for large N it is approximately zero, with fluctuations about zero of order 1/sqrt(N), as given by the central limit theorem.
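A quick NumPy illustration of that 1/sqrt(N) scaling (the dimensions and seed are arbitrary):

import numpy as np

rng = np.random.default_rng(42)
for n in (10, 1_000, 100_000):
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)
    # Inner product of the normalized vectors; shrinks roughly like 1/sqrt(n).
    print(n, (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y)))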

For information: I'm currently working on the auto-calculation of the capture sharpening radius value.

For Bayer sensors it already works fine in my tests (I tested with several hundred Bayer files from different cameras to make the calculation more robust). Of course there will be cases where it does not work well, namely 1) high-ISO files and 2) files with unfixed hot (stuck) pixels (dead pixels are no problem in this case).

In general it will give a quite good capture sharpening radius value.

Of course it adds a bit of processing time to capture sharpening if auto-calculation of the radius is enabled, but it's really minor (on my 8-core from 2013, calculating the radius takes ~40 ms for a 100 MP PhaseOne file).

Now I have to do that for X-Trans as well before I push the stuff to the capture sharpening branch…


Thanks for your work Ingo; as always, I'm hoping for a solution for X-Trans sensors.
As a side note, any idea whether automatic hot/dead pixel filtering could be developed one day for X-Trans? (Same for auto raw CA correction, but I don't want to hijack this thread even more.)


As RT 5.7 is released now, I have merged the current state of capture sharpening into 5.7 dev. Auto-calculation of the RL radius is still WIP, though it is already present in ART…


Now the auto-calculation of the radius for Bayer and X-Trans is in dev as well.
@Carmelo_DrRaw You can remove the capture sharpening branch from the nightly builds


Thanks! I will check it out in a few weeks.

Some news: today I implemented a tiled version of capture sharpening, which leads to a ~2x speedup. I still have to fix some small border issues and the progress bar, but I think it will be ready on Sunday.

Now the interesting part: the tiled processing would allow (with some additional code and one additional adjuster in the UI) using different radius values depending on the distance of a tile from the center tile, without any impact on the processing time of capture sharpening.
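For illustration, one way such a distance-dependent radius could be interpolated; this is only a sketch with a linear ramp, not the actual RT implementation, and all names are made up:

import numpy as np

def tile_radius(tile_cx, tile_cy, img_w, img_h, r_center, r_corner):
    # Normalized distance of the tile center from the image center:
    # 0 at the center, 1 at the corners.
    dx = (tile_cx - img_w / 2) / (img_w / 2)
    dy = (tile_cy - img_h / 2) / (img_h / 2)
    d = min(np.hypot(dx, dy) / np.sqrt(2.0), 1.0)
    # Linear ramp from the center radius to the corner radius.
    return r_center + d * (r_corner - r_center)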

I will try that when the speedup is completed and pushed to dev…


What would be the benefit of this?

Currently it’s just a brainfart aiming to sharpen the outer (more blurred) regions of an image with a different (larger) deconvolution radius.

Clever idea!

How much cleverness for the adaptive de-convolution of various lenses? :stuck_out_tongue:

That would be extremely difficult without a priori depth data, unless your primary goal is to deconvolute diffraction.

In theory, if you had:

  1. Measured OOF PSF data for the lens (I posted a link to one of Prof Hank Dietz’s slide decks on the subject earlier in this thread)
  2. Two separate shots taken at different focus settings that could be analyzed

You could potentially determine depth (I believe this is how Panasonic’s DFD autofocus works) and then deconvolute out the OOF PSF using that data. TBD how to handle occlusion.

Going the other way (applying an OOF PSF to an image that contains depth data) is much easier IF you have the depth data (such as from Canon's dual pixel RAW); in fact, this is what the portrait mode of many newer phone cameras does in order to simulate a much wider aperture than the original optical system.
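As a toy sketch of that idea (not any camera's actual pipeline): quantize the defocus into a few levels and blur each level more strongly, with a box filter standing in for the real out-of-focus disc PSF. img, depth, and focus_depth are assumed inputs, and occlusion is ignored:

import numpy as np
from scipy.ndimage import uniform_filter

def depth_blur(img, depth, focus_depth, max_radius=8, levels=4):
    # Quantize |depth - focus_depth| into a few defocus levels.
    defocus = np.abs(depth - focus_depth)
    bins = np.minimum((defocus / (defocus.max() + 1e-9) * levels).astype(int),
                      levels - 1)
    out = np.empty_like(img)
    for b in range(levels):
        # Box-blur radius grows with defocus (a crude stand-in for the disc PSF).
        r = round(b / (levels - 1) * max_radius)
        out[bins == b] = uniform_filter(img, size=2 * r + 1)[bins == b]
    return out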

Yes, that was my intention. Though I don’t have good example files to test with…

It’s in dev now.
