As far as the routine in the library goes, I think you need to expose all the parameters needed to shape its behavior for the envisioned use cases, then offer parameter sets in the documentation that application programmers can use to build UI presets however they choose. In rawproc, I’d expose the parameters directly to the user, document the use cases, and provide configuration properties that allow presets to be constructed. RT/ART would probably be better served by radio-button presets like you describe. But your focus would be the algorithm and the parameters required to shape its behavior…
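To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical parameter and preset names, not any real API) of the “expose the parameters, document preset parameter sets” idea:

```python
from dataclasses import dataclass

# Hypothetical parameter set for a moiré/false-colour filter;
# the names and defaults are illustrative only.
@dataclass
class MoireFilterParams:
    detection_threshold: float = 0.05   # sensitivity of the moiré mask
    chroma_smooth_radius: int = 5       # radius of colour-difference smoothing
    iterations: int = 1                 # number of passes
    protect_edges: bool = True          # skip pixels that look like real edges

# Documented presets an application could surface as radio buttons,
# while still letting advanced users edit the raw parameters.
PRESETS = {
    "off":    MoireFilterParams(iterations=0),
    "mild":   MoireFilterParams(detection_threshold=0.08, chroma_smooth_radius=3),
    "strong": MoireFilterParams(detection_threshold=0.03, chroma_smooth_radius=9,
                                iterations=2),
}
```

An RT/ART-style UI could map the named presets to radio buttons, while a rawproc-style UI could expose the fields directly.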
I always thought that was what moiré meant in regard to RAW files: an artefact that happens at the capture stage due to an inadequate anti-aliasing filter.
If we want to fight moiré, we should distinguish between cameras with and without AA filters. I can post a link to a Pentax K-1 (no AA filter) pixelshift file, where the pixelshift combine does not show moiré but demosaicing does show moiré (false colors). Any interest?
Edit: Having a good pixelshift file would also be great for testing the new RCD implementation, because the pixelshift-combined output is closer to the ground truth.
I’m personally curious whether this sort of large-scale moiré is indeed less present on X-Trans or not. Surely it happens occasionally, if not in the same situations (diagonal patterns?). Perhaps an X-Trans image can be synthesized from a pixel-shift image.
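A rough sketch of that synthesis idea, assuming the pixel-shift combine yields a full-RGB image and using one commonly quoted 6×6 X-Trans layout (the exact phase/orientation differs between camera models, so treat the matrix as an assumption):

```python
import numpy as np

# One commonly quoted X-Trans 6x6 arrangement (0 = R, 1 = G, 2 = B);
# the phase/orientation varies by camera model.
XTRANS_CFA = np.array([
    [1, 1, 0, 1, 1, 2],
    [1, 1, 2, 1, 1, 0],
    [2, 0, 1, 0, 2, 1],
    [1, 1, 2, 1, 1, 0],
    [1, 1, 0, 1, 1, 2],
    [0, 2, 1, 2, 0, 1],
])

def synthesize_xtrans(rgb):
    """rgb: float array of shape (H, W, 3), e.g. a pixel-shift combine.
    Returns (mosaic, cfa_index) so an X-Trans demosaicer can be tested
    against known ground truth."""
    h, w, _ = rgb.shape
    cfa = XTRANS_CFA[np.arange(h)[:, None] % 6, np.arange(w)[None, :] % 6]
    mosaic = np.take_along_axis(rgb, cfa[..., None], axis=2)[..., 0]
    return mosaic, cfa
```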
By finding the correct interpolation directions, most of the false colors are removed. But some color aliasing remains, since it was already captured by the sensor and is not a product of the demosaicing itself.
Yes, it happens. If the photos Iain kindly shared had been captured with an X-Trans camera, I’m pretty sure there would have been color aliasing in the raw data.
That’s why I think a filter to deal with large-scale color aliasing should be separated from the demosaicing, so that any image, regardless of the chosen demosaicing method or the CFA it uses, benefits from it.
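As a purely illustrative sketch of what such a demosaic-independent filter could look like (this is not Iain’s filter, just one common approach): leave green alone and median-filter the color differences, which is where the large-scale color aliasing lives.

```python
import numpy as np
from scipy.ndimage import median_filter

def suppress_color_aliasing(rgb, size=5):
    """Minimal sketch of a post-demosaic false-color filter:
    keep green untouched and median-filter the color differences
    R - G and B - G. `size` is an assumed window size."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cr = median_filter(r - g, size=size)   # smoothed color differences
    cb = median_filter(b - g, size=size)
    return np.stack([g + cr, g, g + cb], axis=-1)
```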
Do the open source raw developers have any tool to fight it? The filter @Iain wrote seems to do a good job.
That image is a simple example, but sometimes both interpolation directions give different moiré patterns.
I think this is because the underlying pattern is a checkerboard rather than lines.
If you look at the example image from my other thread, you can see that the demosaic has largely recovered that pattern. As a result of having correct green pixels, the moiré is removed when interpolating the colour differences.
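For anyone following along, here is a rough illustration (not the actual demosaic code) of the colour-difference step: red is rebuilt by interpolating R − G rather than R itself, so once the green plane is right the chroma follows it.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def red_via_colour_diff(mosaic, green_full, red_mask, size=5):
    """Illustrative only: reconstruct red from a Bayer mosaic by
    interpolating the colour difference R - G, given an
    already-interpolated green plane.
    mosaic:     raw CFA values, shape (H, W)
    green_full: interpolated green plane, shape (H, W)
    red_mask:   boolean mask of red sites in the CFA"""
    diff = np.where(red_mask, mosaic - green_full, 0.0)
    # normalized-convolution fill-in of the sparse differences,
    # used here purely for illustration
    w = uniform_filter(red_mask.astype(float), size=size)
    d = uniform_filter(diff, size=size)
    diff_full = np.divide(d, w, out=np.zeros_like(d), where=w > 0)
    return green_full + diff_full
```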
I am just being cautious because, while it is my photo, I can’t ask the subject about it because I don’t know who they are. It’s just a guy at a party I photographed 10 years ago.
I have been kicking around some ideas and thought I would share.
Moiré detection:
On the green Bayer channel, detect horizontal edges by convolving with
1,0,1,0
0,1,0,1
-1,0,-1,0
0,-1,0,-1
and vertical edges with
1,0,-1,0
0,1,0,-1
1,0,-1,0
0,1,0,-1
Add the results
Detect high frequency components on both diagonals
North-west
1,0
0,-1
North-east
0,1
-1,0
Add the results of the diagonal convolutions
Get the absolute values, subtract the edge-detection results from the high-frequency results, and choose a sensible threshold (sketched in code below).
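Putting the steps above together, a minimal sketch in Python (the abs/sum ordering and the threshold value are my reading of the description, not a fixed recipe):

```python
import numpy as np
from scipy.ndimage import convolve

# Kernels as listed above
H_EDGE = np.array([[ 1,  0,  1,  0],
                   [ 0,  1,  0,  1],
                   [-1,  0, -1,  0],
                   [ 0, -1,  0, -1]], float)
V_EDGE = np.array([[ 1,  0, -1,  0],
                   [ 0,  1,  0, -1],
                   [ 1,  0, -1,  0],
                   [ 0,  1,  0, -1]], float)
NW_HF = np.array([[1,  0],
                  [0, -1]], float)
NE_HF = np.array([[0,  1],
                  [-1, 0]], float)

def moire_mask(green, threshold=0.05):
    """green: the green Bayer channel (non-green sites zeroed), float in [0, 1]."""
    edges = np.abs(convolve(green, H_EDGE) + convolve(green, V_EDGE))
    highf = np.abs(convolve(green, NW_HF) + convolve(green, NE_HF))
    # flag pixels with strong high-frequency content not explained by a real edge
    return (highf - edges) > threshold
```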
Picking the interpolation direction:
I have been playing around with using hue as a factor. If you interpolate the red and blue channels independently, that is, without using the differences from the green channel, you always get moiré. If you compare the hue of that independently interpolated image to the hue of the images interpolated in each direction with colour differences, the one with the biggest difference in hue should be the right direction.
That might not be the best method, but I thought you might find it interesting anyway.
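To make the idea concrete, here is a minimal sketch of the hue comparison, assuming the three candidate images (independent, horizontal, vertical) are already available:

```python
import numpy as np

def hue(rgb, eps=1e-6):
    """Cheap HSV-style hue estimate in [0, 1)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    c = mx - mn + eps
    h = np.where(mx == r, (g - b) / c,
        np.where(mx == g, 2.0 + (b - r) / c, 4.0 + (r - g) / c))
    return (h / 6.0) % 1.0

def pick_direction(indep, horiz, vert):
    """indep: R/B interpolated independently of green (always shows moiré);
    horiz, vert: candidates interpolated along each direction using colour
    differences. Pick, per pixel, the direction whose hue differs MOST from
    the independent interpolation. Returns True where horizontal wins."""
    h0 = hue(indep)
    dh_h = np.abs(hue(horiz) - h0)
    dh_v = np.abs(hue(vert) - h0)
    # hue is circular, so take the shorter arc
    dh_h = np.minimum(dh_h, 1.0 - dh_h)
    dh_v = np.minimum(dh_v, 1.0 - dh_v)
    return dh_h > dh_v
```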
Exported from PhotoFlow. The comparison is from G’MIC’s display, which tends to exaggerate colour and individual pixels. It becomes more apparent in weaker or darker edges as I process the exported linear image. Pinging @Carmelo_DrRaw.