Quick question on RT Richardson–Lucy implementation

With some extreme pixel peeping on multiple images, I found that the capture sharpening seems to work a bit worse with AMaZE than for example VNG4. See for yourself. I don’t like how the sharpening seems to enhance some non-existent diagonals. Also the green background seems to have a more nervous texture with AMaZE and sharpening.

The capture sharpening settings are: contrast threshold = 0, gamma = 1.0, radius = 0.60, iterations = 15.

Well, I found a great counter-example too :stuck_out_tongue:
Here AMaZE performs a lot better.

I’m currently testing capture-sharpening on pixelshift files :wink:

You might want a stopping criterion in your algo to avoid degeneration:

https://core.ac.uk/download/pdf/20327456.pdf

https://www.researchgate.net/publication/280221976_A_stopping_criterion_to_halt_iterations_at_the_Richardson-Lucy_deconvolution_of_radiographic_images

Could you break this down for me?

[image: screenshot of the formula from the paper]

Looks like a strange type of bra-ket notation: Bra–ket notation - Wikipedia

I tested a couple of images and it produced some very nice results when the images were sharp to begin with.
But one image, trees with an overexposed sky behind the branches, turned out badly: the branches got a black halo against the white (blown-out) sky. A special case perhaps, but still…

As you all know, math isn’t my forte. Looks related to

Yes, that’s still an issue. Though I don’t know whether we can solve it, as in the neighbourhood of clipped pixels we work on non-linear data.

I see, that would disqualify the function for quite a lot of images though. Almost all images containing water and bright sunlight will be affected, as every sparkle of light in the waves will be surrounded by a black circle.

Well, at gamma = 1.0 that’s true. Try with gamma 3.0 (which is the same as in RL-deconvolution in Details tab). In my tests this gives the same (low) dark halos near clipped regions as in RL-deconvolution in Details tab, but still reduces the bright halos.

So the gist of the article is: calculate the histogram difference between two iterations, apply fuzz, and then stop whenever there is ‘no’ change. Seems like quite an expensive calculation tbh. I wonder if that is worth the processing time over simply doing a default of 15 or 30 iterations.
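
For what it’s worth, a minimal NumPy sketch of that idea as I read it (the bin count and the fuzz tolerance are numbers I made up, not values from the paper):

import numpy

def histogram_stop(prev_img, curr_img, bins=256, fuzz=1e-4):
    # Histogram of each iterate over the same value range.
    h_prev, _ = numpy.histogram(prev_img, bins=bins, range=(0.0, 1.0))
    h_curr, _ = numpy.histogram(curr_img, bins=bins, range=(0.0, 1.0))
    # Mean absolute change of the histogram between the two iterations.
    diff = numpy.abs(h_curr - h_prev).sum() / curr_img.size
    # Stop once the change drops below the fuzz threshold.
    return diff < fuzz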

I’ll try to make a comparison showing the impact of the number of iterations.

Have you considered clamping the RL output to the local min and max of the original image to prevent halos?
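
Something along those lines, as a rough sketch of the idea (the 3×3 window and the SciPy filters are my choice for illustration, not anything RT actually does):

import numpy
from scipy.ndimage import minimum_filter, maximum_filter

def clamp_to_local_range(rl_output, original, size=3):
    # Local min and max of the *original* image in a small window.
    lo = minimum_filter(original, size=size)
    hi = maximum_filter(original, size=size)
    # Clamp the deconvolved result so it cannot over- or undershoot locally,
    # which is what shows up as bright/dark halos.
    return numpy.clip(rl_output, lo, hi)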

[image: the formula from the paper, quoted]

It’s written just below the formula that < Q, P > is the scalar product of the Q and P vectors, i.e. < Q, P > = P_0 Q_0 + P_1 Q_1 + … + P_n Q_n. |Q| should be the L2 vector norm, such that |Q| = \sqrt{|Q_0|^2 + |Q_1|^2 + … + |Q_n|^2}. So, writing that in C code, where P and Q are two vectors both of size m * n, you get:

#include <math.h> // for asinf, sqrtf, fabsf

float scalar_prod = 0.0f;            // < Q, P >
float norm_P = 0.0f, norm_Q = 0.0f;  // squared norms |P|^2 and |Q|^2
for(int i = 0; i < m; ++i)
  for(int j = 0; j < n; ++j)
  {
    scalar_prod += Q[i * n + j] * P[i * n + j];
    norm_Q += Q[i * n + j] * Q[i * n + j];
    norm_P += P[i * n + j] * P[i * n + j];
  }

float ADE = fabsf(asinf(scalar_prod / (sqrtf(norm_P) * sqrtf(norm_Q))));

Same in Python with Numpy arrays:

import numpy

P = numpy.array(…)
Q = numpy.array(…)
ADE = numpy.abs(numpy.arcsin(
    numpy.sum(P * Q) / (numpy.sum(P * P) * numpy.sum(Q * Q))
))

Notice you can skip the abs for performance if your picture is properly scaled in [0; 1].

You don’t need to do that on the whole image; you can extract just the center area as a control sample.
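
For example (a trivial sketch; the crop fraction is arbitrary):

def center_crop(img, frac=0.25):
    # img: 2-D NumPy array; use a window around the image centre as the control sample.
    h, w = img.shape[:2]
    ch, cw = max(1, int(h * frac)), max(1, int(w * frac))
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top + ch, left:left + cw]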


The domain of the arcsin function is the image of the sin function, which is the interval [-1, 1]. Therefore, I think that |Q| must be the square root of <Q, Q>, not the formula above. (The Cauchy–Schwarz inequality tells us that the ratio then has modulus at most one.)
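
Spelled out: with |Q| = \sqrt{<Q, Q>}, the Cauchy–Schwarz inequality gives

|<P, Q>| \le |P| \, |Q|,

so the ratio <P, Q> / (|P| |Q|) always lies in [-1, 1], i.e. inside the domain of \arcsin.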

Right, I didn’t do bounds checks and assumed Euclidean (L2) norms, which are usually written ||x|| or |x|_2. You are right.

Yet another proof that explaining the notation used in papers is never a luxury.

There is something fishy with the formula, since the way to compute the angle between two vectors is using \arccos, not \arcsin: File:Inner-product-angle.svg - Wikipedia

It measures the orthogonality error, not the angle. Apparently, the stochastic argument relies on the assumption that independent vectors are orthogonal (my understanding stops there). So they measure the orthogonality between the deblurred image X(t) at iteration t and the difference X_e(t) = X(t) - X(t-1) between two iterations, and track when it hits a minimum to stop the iterations. When the two become independent, it means the RL deconvolution is only adding noise, not sharpness anymore.
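
As a rough sketch of how that stopping rule could plug into the loop (rl_step here is just a placeholder for one Richardson–Lucy update, not RT’s actual code, and the denominator uses the corrected norms with the square roots):

import numpy

def deconvolve_with_ade_stop(blurred, psf, rl_step, max_iters=50):
    x_prev = blurred.copy()
    best_x, best_ade = x_prev, None
    for _ in range(max_iters):
        x = rl_step(x_prev, blurred, psf)   # one RL iteration (placeholder)
        diff = x - x_prev                   # X_e(t) = X(t) - X(t-1)
        denom = numpy.sqrt(numpy.sum(x * x)) * numpy.sqrt(numpy.sum(diff * diff))
        if denom == 0.0:                    # exact convergence, nothing left to do
            return x
        ratio = numpy.clip(numpy.sum(x * diff) / denom, -1.0, 1.0)
        ade = numpy.abs(numpy.arcsin(ratio))
        if best_ade is None or ade < best_ade:
            best_x, best_ade = x, ade       # ADE still falling: keep iterating
        else:
            break                           # ADE passed its minimum: stop here
        x_prev = x
    return best_x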


The argument that independent random unit vectors in high dimensions are typically close to orthogonal is pretty nice.

Generate independent unit vectors A = X/|X| and B = Y/|Y|, where X and Y are standard Gaussian vectors in some large Euclidean space of dimension N; that is, each coordinate is an independent standard Gaussian. (This works because such vectors are isotropic.) By the law of large numbers, it is permissible for large N to replace |X| and |Y| by sqrt(N). So we now consider X*Y/N, where * denotes the inner product. This is a sum of mean-zero independent random variables, so again for large N it is approximately equal to zero, with fluctuations about zero of order 1/sqrt(N), as given by the central limit theorem.
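
A quick NumPy check of that claim (N and the number of trials are arbitrary):

import numpy

rng = numpy.random.default_rng(0)
N = 10_000
cosines = []
for _ in range(1000):
    x = rng.standard_normal(N)
    y = rng.standard_normal(N)
    # Cosine of the angle between the unit vectors x/|x| and y/|y|.
    c = numpy.dot(x, y) / (numpy.linalg.norm(x) * numpy.linalg.norm(y))
    cosines.append(c)
print(numpy.std(cosines))   # ~ 1/sqrt(N) = 0.01, i.e. the vectors are nearly orthogonal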

For information: I’m just working on the auto-calculation of the capture sharpening radius value.

For Bayer sensors it already works fine in my tests (I tested with several hundred Bayer files from different cameras to make the calculation more robust). Of course there will be cases where it does not work well: 1) high-ISO files, 2) files with unfixed hot (stuck) pixels (dead pixels are no problem in this case).

In general it gives quite a good capture sharpening radius value.

Of course it adds a bit of processing time to capture sharpening if the auto calculation of the radius is enabled, but it’s really minor (on my 8-core from 2013 the calculation of the radius takes ~40 ms for a 100 MP PhaseOne file)

Now I have to do that for X-Trans as well before I push the stuff to the capture sharpening branch…
