Isn’t there also a typo here in the WB normalization after interpolation - shouldn’t it be RGB[k] / wb[k]? At least it does something different from the corresponding OpenCL line.
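For reference, a toy sketch of the normalization the question expects, assuming the intent is simply to divide each interpolated channel by its white-balance coefficient (the function name and data layout here are illustrative; the real code is C/OpenCL):

```python
def normalize_wb(rgb, wb):
    """Divide each interpolated channel by its white-balance
    coefficient, i.e. RGB[k] / wb[k] for k in {R, G, B}."""
    return [c / w for c, w in zip(rgb, wb)]
```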
Another thing I noticed: 5x5 box blur is applied on the binary clipping mask. Doesn’t it cause some of the clipped pixels to get less than full opacity? Would it be good to first dilate the mask by a few pixels and then blur to get 100% opacity on all clipped pixels and a smooth roll-off into the non-clipped surroundings?
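A minimal sketch of the dilate-then-blur idea, using NumPy/SciPy on a boolean mask (the function name is made up, and the real pipeline is C/OpenCL): dilating with a square structuring element at least as large as the blur kernel guarantees that every originally clipped pixel still averages to full opacity after the box blur.

```python
import numpy as np
from scipy.ndimage import binary_dilation, uniform_filter

def feather_clip_mask(mask, blur_size=5):
    """Dilate a binary clipping mask by the blur radius, then
    box-blur it: clipped pixels keep opacity 1.0 and the
    roll-off happens only in the non-clipped surroundings."""
    r = blur_size // 2
    square = np.ones((2 * r + 1, 2 * r + 1), dtype=bool)
    # grow the mask so the blur cannot eat into clipped pixels
    grown = binary_dilation(mask, structure=square)
    # 5x5 box blur (uniform filter) on the dilated mask
    return uniform_filter(grown.astype(np.float32), size=blur_size)
```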
(Also, is there a chance of the divisor being zero here?)
I think it might also be beneficial to disregard the center sample when computing the regression. Its data in one channel is clipped anyway, so that data point can be treated as an outlier.
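The suggestion amounts to fitting the local regression with the center sample left out. A minimal sketch under assumed names (a simple target ≈ a·guide + b least-squares fit over a flattened window; the real regression in the module may use different variables):

```python
import numpy as np

def fit_without_center(guide, target):
    """Least-squares fit target ~ a * guide + b over a flat window,
    skipping the center sample: its channel is clipped, so it is
    treated as an outlier. Purely an illustrative sketch."""
    g, t = guide.ravel(), target.ravel()
    keep = np.ones(g.size, dtype=bool)
    keep[g.size // 2] = False  # drop the clipped center sample
    X = np.stack([g[keep], np.ones(int(keep.sum()))], axis=1)
    (a, b), *_ = np.linalg.lstsq(X, t[keep], rcond=None)
    return a, b
```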
This seems risky, because we are working with Laplacians in a quasi-Fourier space here, not with the actual image, so there is no definition of dark or bright, only oscillations around the average value. Dark or bright only appear after we collapse (sum back) all the wavelet scales.
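To make the collapse step concrete: a wavelet decomposition stores zero-mean, Laplacian-like detail coefficients per scale plus a coarse residual, and absolute pixel values only reappear once everything is summed back. A minimal sketch with made-up variable names:

```python
import numpy as np

def collapse_scales(details, residual):
    """Sum the per-scale detail coefficients back onto the coarse
    residual; only this collapsed result has a meaningful notion
    of 'dark' or 'bright'."""
    out = residual.copy()
    for d in details:
        out += d  # each scale only holds oscillations around zero
    return out
```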
After some tests, it seems to move the problem elsewhere but does not yield a smooth reconstruction. Basically, the current dark fringes may appear when the radius of reconstruction is too large, so the darker sky gets inpainted into the sun disc.
Right, that makes sense. Perhaps one could instead try to avoid darkening the image when forming the reconstructed image at the last wavelet scale (I don’t remember whether the original image is still available at that point).
The original image is not available at that point (I could make it available, but that comes with a RAM penalty). However, I’m exploring a scale coefficient that would soften the correction as we reconstruct farther.
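Purely as a sketch of what such a coefficient could look like (the formula below is hypothetical, not what the author implemented): attenuate the correction linearly with the scale index, so the coarser we reconstruct, the weaker the correction.

```python
def scale_softening(scale, num_scales):
    """Hypothetical coefficient in [0, 1]: full correction at the
    finest scale (scale 0), fading out toward the coarsest."""
    return max(0.0, 1.0 - scale / num_scales)
```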
Nice! One thing I also had in mind: the color diffusion on norms doesn’t necessarily preserve the norm as one - should you renormalize the ratios before reconstructing from the saved norm?
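What the renormalization suggestion amounts to, as a minimal sketch (assuming the “norm” is the per-pixel channel mean; the actual norm used in the code may be a different metric):

```python
import numpy as np

def renormalize_ratios(ratios, eps=1e-9):
    """Rescale diffused per-channel ratios so their per-pixel mean
    is 1 again, so that multiplying by the stored norm yields a
    consistent pixel. `ratios` has shape (..., 3)."""
    m = ratios.mean(axis=-1, keepdims=True)
    return ratios / np.maximum(m, eps)  # eps guards a zero divisor
```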
No, I tried it and it’s worse. I think that’s because the original norm is already wrong relative to the neighbourhood, so we use it only as a “high-frequency” metric that lets us blur the “low-frequency” ratios without damaging details.
BTW, you were right about renormalizing the ratios. When I first tried it, the wavelet processing happened bottom to top and I renormalized scale by scale, which was bad. This is what I get by renormalizing only once on the collapsed wavelets (at the end instead of in between):
Is it me or something else? Today I checked out git version a4b3314-dirty, including Aurelien’s bug fixes for guided laplacians.
I applied guided laplacians to the photo and got a large black box when I exported the image to JPEG. Am I missing something?
BTW: “Fortschritt” means ‘progress’.