How to determine AA vs No AA?

(warning: sciency speak)

I have experience in a very different field in which this apparently “bad practice” of sacrificing resolution is the norm: Fourier-transform infrared spectroscopy (FTIR).

In an FTIR spectrometer, a signal is measured as a function of a position x that has a hard maximum limit. From the point of view of this function S(x), this is like dropping it to zero after a specific point x_max. The measured spectrum is the Fourier transform of this signal, but because of that drop to zero, the resulting data is formally equivalent to the real spectrum convolved with the Fourier transform of a boxcar function, which is a sinc function. The sinc has strong oscillations, which mess up spectral peaks that are near the resolution limit, producing wings and extra peaks that are not really there. The usual solution is to take S(x) and soften the hard step near x_max, using one of several window shapes that trade fewer oscillations for lower resolution. This is called apodization, and every time I show it to somebody new to the field they get that look on their face of “why would you do that? You lose resolution!”

The difference from the AA filter in a camera is that apodization is just a mathematical step: it can be reverted if the original signal S(x) is saved instead of the final spectrum. But 90% of the time people care more about not seeing ghosts in the data (“moire”) than about squeezing out the ultimate resolution of the equipment, so an apodization step is actually the default in any FTIR software.


The frequency of the pattern has to be near the sensor’s resolution limit, so it is not as much of a problem as in the days of 6 Mp sensors. But you can still get moire on high-res sensors, e.g. on bird feathers.

It is not about the demosaicing algorithm. The signal is lost and nothing can reconstruct it; you can only mitigate the artifacts to a certain extent, e.g. by blurring the affected area.
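A tiny sketch of why the information is genuinely gone (illustrative numbers): once a frequency crosses Nyquist, its samples are identical to those of a lower “real” frequency, so no algorithm can decide afterwards which one was actually in the scene.

```python
import numpy as np

# Sampling at fs = 100 (Nyquist = 50): detail at 70 aliases to 30,
# and the two produce exactly the same samples.
fs = 100
n = np.arange(32)                          # sample (pixel) positions
fine = np.cos(2 * np.pi * 70 * n / fs)     # detail above Nyquist
coarse = np.cos(2 * np.pi * 30 * n / fs)   # real detail at 100 - 70 = 30
print(np.allclose(fine, coarse))           # True: indistinguishable
```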

Yep. Welcome to post-2000 optical design, driven by people peeping pixels and eyeballing MTF charts instead of taking photographs.


A layman question, but couldn’t some post-processing filter out moire? I don’t believe it’s some conspiracy that Nikon removes the AA filter from their high-resolution cameras, trading accuracy for sharpness, but keeps it in the lower-res ones.

The only explanation I can see is that in the high-res ones, since video is down-sampled, they can control for this, which they cannot in the lower-res ones.

Maybe the algorithms to deal with it in video are not there yet and create shimmering and other problems, whereas with photography it’s a good compromise since it’s a single frame?

No, the signal is lost. Only AI-based reconstructions or similar can fix it, not traditional signal processing.

It is just a practical decision in response to market demand. With high-res sensors, you get moire only in some corner cases, and then you can always stop down and diffraction will help.

E.g. consider the OM1ii, with its 20 Mp sensor, made up of approx 3 μm pixels. At f/4, the Airy disk already has a diameter above 5 μm, so diffraction counteracts moire quite a bit. Lens imperfections add a further softening effect.

E.g. Nikon’s Z9 has larger pixels (a bit above 4 μm). Yes, at wide apertures with a very good lens you may get moire, but stop down a bit and you are good.
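A quick back-of-the-envelope check of those numbers, assuming green light (λ ≈ 550 nm) and the standard Airy-disk diameter formula d = 2.44 · λ · N for f-number N:

```python
# Airy disk diameter vs pixel pitch, assuming green light (550 nm).
wavelength_um = 0.55

def airy_diameter_um(f_number):
    """Diameter of the Airy disk (first null to first null), in microns."""
    return 2.44 * wavelength_um * f_number

# ~3 um pixels (OM-1-class sensor): at f/4 the Airy disk already exceeds them
print(round(airy_diameter_um(4), 2))   # 5.37 um > 3 um pixel

# Z9-class ~4.35 um pixels (35.9 mm / 8256 px): stop down a bit more
print(round(airy_diameter_um(8), 2))   # 10.74 um at f/8
```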

AFAIK videographers like mist filters to avoid moire; I haven’t tried that myself.


Thanks, that makes sense.


Moire is just the most obvious distortion. Even when there’s no moire, sharp edges are just as distorted; people mistake the distortion for detail and think it’s a benefit.

Hi,

It’s camera specific. My OMD 1 doesn’t have an AA filter; my Olympus PenF does.
