@cuniek, that’s interesting, thanks.
You posted some examples yesterday, I was wondering what camera/lens was used for the first, the bad moire. Did the sensor have an AA filter?
It is a Nikon D750 with a Sigma 35 mm ART.
The Nikon has “only” 24 MP on a full-frame sensor, and a very weak AA filter that blurs only vertically.
Well done, very clever! I take my hat off to you.
A question please - what was the colour of the subject? Did it have some green in it, like the Adobe sample, or was it more neutral, like Mine1?
That’s interesting, I didn’t know some filters work in only one direction. Presumably that’s the main reason for the extreme/unusual moire. It doesn’t seem fair to expect a demosaicing algorithm to cope with input like this. To me, the cameras with no filter at all are ignoring the maths and physics, and have to accept the consequences!
I believe the subject was neutral. It had black and white stripes.
I still wonder why you stop down your lenses to f/11 or f/16. That does not give maximum resolution, imho.
I would understand doing that for macro shots, and even for landscape shots in case you need a very large depth of field. But your samples don’t need very large depth of field, and using f/11 or f/16 in that case limits detail resolution because of diffraction.
It is not the demosaicing algorithm’s fault. I don’t think anyone here has tried to solely blame the demosaicing for aliasing issues. That is exactly why camera manufacturers have been putting anti-aliasing filters on image sensors. A well designed AA filter will mitigate the most offensive aliasing issues, and deconvolution sharpening can restore the signal to almost the same level as a non-AA filtered image, with still less aliasing artifacts. I think that on the high resolution sensors, it is wrong thinking for Canon, Nikon and Sony to be removing the AA filter. I’d rather have an AA filter on my A7R II.
Aspiring for ‘as much detail as possible’ does not equal wanting theoretical max resolution. I am not interested in absolute perfection in everything - an impossible goal. As I mentioned above, I am willing to trade off max sharpness to have an AA-filtered image that is softer but with fewer artifacts. Who cares if the edges are perfectly crisp but full of mazing, zipper and ringing artifacts? I’d much rather have a slightly softer image that is artifact-free.
There were various reasons to stop down that far. The depth of field was actually needed, and the lens suffers from some curvature of field, so stopping down helped with the focus compromise. Also, the corner resolution gains outweighed the tiny loss due to diffraction. And the whole thing sharpens up quite nicely with well-controlled deconvolution sharpening. f/11 or f/16 is perfectly fine! I’m not obsessed with acutance! I’m not sure where this misunderstanding is coming from. Slightly soft edges are fine; digital artifacts I am not as fine with. The point is to avoid any artifacts that give away the fact that it is a digital image.
Maybe you can get away with less sharpening, and hence fewer artifacts, when you start with crisper input?
Just a thought
Of course! When the situation allows for it. More often than not one needs to stop down to f/11, f/13 or f/16. It is also possible to take many more frames at smaller aperture values and try to focus blend, if the subject allows.
There are surely about a million different compromises one can make for the final result. But don’t forget that the size of the artifacts remains the same for the same amount of enlargement, assuming the input resolution remains constant. One argument against that would be to have way more megapixels. I’m now routinely getting 80-100+ megapixels stitching with my Sony A7R II. It’s helping a lot, and I’m much happier than when I was shooting with my Canon 5D II. But that’s just barely enough for a 40x60 inch print. For many kinds of meaningful subject matter, multi-row stitching is just too slow and not practical, so routinely achieving 400+ megapixels is not easy.
So it is better to just have fewer processing artifacts in the end. Same reasoning: why use Lightroom’s demosaicing when we can use RawTherapee’s AMaZE?
By suggesting not to stop down so far, I didn’t assume that the input resolution remains constant.
But I still agree that avoiding demosaic artifacts is also key to better results. That is the reason we are working on it.
I know. That’s the point: there are so many ways to improve the output.
I cannot thank everyone enough.
Saw this in one of my images. Is this an example of the artifacts discussed? AMaZE left, VNG right. I see a twisty, stripey effect on the handrail to the left; it is much smoother to the right. The only change between the images is the demosaicing.
Yes, that’s a good example! How does it look with the recently improved (by @cuniek) DCB demosaic in RT?
I pulled dev today, so the build should be recent enough? This is AMaZE vs DCB, with DCB looking much better.
Anyone who wants to have a play can download the DNG at https://filebin.net/opcd1a3khomrmukm
Sorry for joining the discussion late. I’m the author of one of the pixel interpolation methods AMaZE is using, referred to in the code as “Adaptive Ratios”.
As Emil said, AMaZE performs several pixel value estimations and then tries to select the best suited result for each of the four cardinal directions. If I recall correctly, Emil’s implementation uses the Adaptive Ratios and the widely used Hamilton-Adams interpolation methods. Both methods produce this kind of artifact when the intensities of the blue and the red channels happen to be too far apart. When this occurs, the under- or overestimation is quite noticeable, since the degree of color correction in alternate rows is very different.
I believe the implementation of the Adaptive Ratios in AMaZE is from 2009. Since then, in my own demosaicing algorithms I use a refined version of the pixel estimation method which produces virtually no artifacts even in the hardest parts of the image. And, unlike AMaZE, there is no need to perform a separate diagonal refinement at all, so it should run faster.
In the example image you are using, this is the straight result of the demosaicing:
If someone is kind enough to reupload the rest of the sample images used for the crops (most of their links expired) I would post more examples.
My code is part of a full demosaicing routine (RCD) written à la dcraw: pure ANSI C with neither parallelization nor optimization. It just works, but it’s probably not ready for production. If it is of any interest, I would gladly release it.
The goal of the RCD algorithm is to minimize any kind of artifact even with artificial lighting or strong chromatic aberrations. This way, the resulting image can be safely sharpened for final printing. Intentionally it does not deal with moiré, though, so it’s not suited for all images.
The algorithm can be included in RawTherapee, as happened with IGV, or the standalone pixel estimation method can be used to update AMaZE. Given this last option, I would also suggest updating the full red and blue channel interpolation in AMaZE, whose quality is just average. I believe integrating the one I use in RCD will be pretty straightforward and yield better results every time.
@LuisSanz Welcome to the forum and thanks for chiming in! Is your code readily available somewhere? I am not a dev but like to read about and try new things.
This sounds great!
Here is the raw file for the mountain picture featured above:
Just so everyone knows, you can upload raw files directly to the forum here. Just drag and drop onto your post compositor. It will help avoid dead links in the future (future @LuisSanz will thank you).
I will give it a try for sure