The current logic works only with cameras that use the X-Trans III sensor. The X-T3 uses the X-Trans 4 sensor.
The X-Trans 4 sensor has an autofocus area that covers the majority of the sensor. There might still be an advantage to applying this logic to that sensor, and it should be possible to see the pattern in a stacked dark frame. Since the pattern covers the entire sensor, however, the normal astrophotography practice of dithering the mount between exposures might be just as effective at eliminating the pattern noise. For that reason, handling these sensors wasn't a priority when making the logic work.
The X-Trans III sensor has an autofocus area that covers only part of the sensor, which is why a fix for that sensor was so important: the partial coverage creates a problem that cannot be fixed by dithering alone.
The logic for this proved to be a bit more complicated than what's in the original code I shared. X-Trans III sensors can have different starting points for the RGB pattern. This might be related to how the color filter array is added to the wafer before the individual sensors are cut; we're unsure. In addition to the shifting RGB pattern, the sensors can also have different autofocus pixel locations. That means two sensors with the exact same RGB pattern can still place their autofocus pixels differently, so locating the pixels to use for a correction becomes a problem.
To solve this, the logic looks at the RGB pattern and computes a correction for each of the four possible locations of the autofocus pixels within that pattern. The candidate set of pixel locations whose correction shows the largest difference is the one that gets used.
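Roughly speaking, the selection step works like this (a minimal Python sketch, not the actual Siril code; `detect_af_offset` and the candidate-mask representation are hypothetical names I'm using for illustration):

```python
import numpy as np

def detect_af_offset(frame, candidate_masks):
    """Pick the most likely autofocus-pixel layout for a sensor.

    frame: 2D array, e.g. a stacked master dark.
    candidate_masks: list of boolean masks, one per possible
    autofocus-pixel location within the RGB pattern.
    Returns the winning mask and its correction magnitude.
    """
    best_mask, best_delta = None, -np.inf
    for mask in candidate_masks:
        af_mean = frame[mask].mean()        # mean of suspected AF pixels
        normal_mean = frame[~mask].mean()   # mean of the remaining pixels
        delta = abs(af_mean - normal_mean)  # size of the would-be correction
        if delta > best_delta:
            best_mask, best_delta = mask, delta
    return best_mask, best_delta
```

The idea is simply that the true autofocus pixels stand out from their neighbors in a dark frame, so the candidate layout with the largest offset from the surrounding pixels is the one that actually matches the sensor.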
When writing the logic that "detects" the location of the autofocus pixels, we also made it look only at green pixels when computing the average of non-autofocus pixels. Normally, you would only need to correct the master dark and master bias frames to remove this artifact, and for those corrections the color of the pixels used in the computation does not matter. I tested an OIII filter with my Fujifilm X-T20, however, and found that images with a very heavy color cast could make the pattern visible in light frames. If the logic is applied to light frames, the color of the pixels used by the logic suddenly matters. It probably isn't normal to use a narrowband filter with a color camera, so this falls outside the expected use and hasn't been heavily tested, but the underlying logic is set up to be used on light frames if necessary.
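To illustrate the green-only averaging, here is a hedged sketch. The 6×6 tile below is one published orientation of the X-Trans color filter array; as noted above, a real sensor's pattern may start at a different phase, and `green_reference_mean` is just an illustrative name:

```python
import numpy as np

# One orientation of the 6x6 X-Trans CFA tile (20 G, 8 R, 8 B sites).
XTRANS_TILE = np.array([list(row) for row in
    ["GBGGRG", "RGRBGB", "GBGGRG", "GRGGBG", "BGBRGR", "GRGGBG"]])

def green_reference_mean(frame, af_mask):
    """Average only green, non-autofocus pixels of a raw frame.

    Restricting the reference to green sites keeps a strong color
    cast (e.g. from an OIII filter) from biasing the correction.
    """
    h, w = frame.shape
    # Tile the CFA pattern across the whole frame, then crop to size.
    cfa = np.tile(XTRANS_TILE, (h // 6 + 1, w // 6 + 1))[:h, :w]
    green = (cfa == "G") & ~af_mask   # green sites that are not AF pixels
    return frame[green].mean()
```

For a master dark or bias this restriction is harmless (all colors see the same signal), so using it unconditionally keeps one code path that also behaves sensibly on light frames.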
Sorry for the long post. I never explained all of this on this thread. It was fun working with @lock042 on this code.