With some extreme pixel peeping on multiple images, I found that capture sharpening seems to work a bit worse with AMaZE than with, for example, VNG4. See for yourself. I don’t like how the sharpening seems to enhance some non-existent diagonals. Also the green background seems to have a more nervous texture with AMaZE and sharpening.
I tested a couple of images and it produced some very nice results when the images were sharp to begin with.
But one image, trees with overexposed sky behind the branches, turned out badly. The branches got a black halo against the white (blown-out) sky. A special case perhaps, but still…
I see; that would disqualify the function from quite a lot of images, though. Almost all images containing water and bright sunlight will be affected, as every light sparkle in the waves will be surrounded by a black circle.
Well, at gamma = 1.0 that’s true. Try gamma 3.0 (the same as in the RL deconvolution in the Details tab). In my tests this gives the same (low) dark halos near clipped regions as the RL deconvolution in the Details tab, but still reduces the bright halos.
So the gist of the article is: calculate the histogram difference between two iterations, apply fuzz, and then stop whenever there is ‘no’ change. Seems like quite an expensive calculation tbh. I wonder if that is worth the processing time over simply doing a default of 15 or 30 iterations.
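If I understand the article right, the per-iteration test boils down to something like this (the names and the per-bin fuzz tolerance are my guesses, not the article’s code):

```c
#include <stdlib.h>

#define NBINS 256

/* Returns 1 if the histograms of two consecutive iterations differ by
   at most `fuzz` counts in every bin, i.e. there is 'no' change and
   the iterations can stop. */
int histograms_match(const int *prev, const int *cur, int fuzz)
{
    for (int b = 0; b < NBINS; ++b) {
        if (abs(prev[b] - cur[b]) > fuzz)
            return 0;
    }
    return 1;
}
```

The comparison itself is cheap; the expensive part would be rebuilding the histogram of the deconvolved image every iteration.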
I’ll try to make a comparison showing the impact of the number of iterations.
It’s written just below the formula that < Q, P > is the scalar product of the Q and P vectors, i.e. < Q, P > = P_0 Q_0 + P_1 Q_1 + … + P_n Q_n. |Q| should be the L2 vector norm, |Q| = \sqrt{|Q_0|^2 + |Q_1|^2 + … + |Q_n|^2}. So, writing that in C, where P and Q are two vectors both of size m * n, you get:
float scalar_prod = 0.0f; // < Q, P >
float norm_P = 0.0f, norm_Q = 0.0f; // |P|^2 and |Q|^2 before the square roots
for (int i = 0; i < m; ++i)
    for (int j = 0; j < n; ++j)
    {
        scalar_prod += Q[i * n + j] * P[i * n + j];
        norm_Q += Q[i * n + j] * Q[i * n + j];
        norm_P += P[i * n + j] * P[i * n + j];
    }
float ADE = fabsf(asinf(scalar_prod / (sqrtf(norm_P) * sqrtf(norm_Q))));
The domain of the arcsin function is the image of the sin function, which is the interval [-1, 1]. Therefore, I think that |Q| must be the square root of < Q, Q >, not the formula above. (The Cauchy-Schwarz inequality then tells us that the ratio has modulus at most one.)
There is something fishy with the formula, since the usual way to compute the angle between two vectors uses \arccos, not \arcsin: File:Inner-product-angle.svg - Wikipedia
It measures the orthogonality error, not the angle. Apparently, the stochastic argument relies on the assumption that independent vectors are orthogonal (my understanding ends here). So they measure the orthogonality between the deblurred image X(t) at iteration t and the difference X_e(t) = X(t) - X(t-1) between two iterations, and track when it hits a minimum to stop the iterations. When the two become independent, it means the RL deconvolution is only adding noise, not sharpness anymore.
The argument that independent random unit vectors in high dimensions are typically close to orthogonal is pretty nice.
Generate independent unit vectors A = X/|X| and B = Y/|Y|, where X and Y are standard Gaussian vectors in some large Euclidean space of dimension N. That is, each coordinate is an independent standard Gaussian. (This works because such vectors are isotropic.) By the law of large numbers, it is permissible for large N to replace |X| and |Y| by sqrt(N). So we now consider X*Y/N, where * denotes the inner product. This is a sum of mean-zero independent random variables, so again for large N it is approximately equal to zero, with fluctuations about zero of order 1/sqrt(N) as given by the central limit theorem.
For information: I’m just working on the auto-calculation of the capture sharpening radius value.
For bayer sensors it already works fine in my tests (I tested with several hundred bayer files from different cameras to make the calculation more robust). Of course there will be cases where it does not work well, namely 1) high-ISO files and 2) files with unfixed hot (stuck) pixels (dead pixels are no problem in this case).
In general it will give a quite good capture sharpening radius value.
Of course it adds a bit of processing time to capture sharpening when auto-calculation of the radius is enabled, but it’s really minor (on my 8-core from 2013, calculating the radius takes ~40 ms for a 100 MP PhaseOne file).
Now I have to do the same for xtrans before I push the stuff to the capture sharpening branch…