Handling Fujifilm X-Trans Auto Focus Pixels

I have a Fujifilm X-T20. I am using Siril. I noticed that there was a weird square in the middle of my dark and bias stacks that sometimes affected my final images. After a little searching, I learned this is related to the phase detection auto focus system. The photosites used for auto focus get a little less light than the surrounding photosites. The camera compensates for this somehow and increases the values from these specific photosites, so the pattern is not visible in any image where the signal from light sufficiently exceeds the sensor noise. A light frame or a flat frame won’t show the pattern at all (at least not normally).

Darks and biases, however, are not based on signal from light at all. So when the pixel values are compensated by the camera, they become just a little bit brighter than the surrounding pixels in a dark frame or a bias frame. If these pixels are left alone, they will affect the final image. This is a stretched dark stack from a Fujifilm X-T20. The lighter grey square in the middle is from the auto focus system.

There is a post about this on the PixInsight forums and a workaround was developed that used a sample of pixels in the auto focus area. The difference between the average values for the auto focus pixels compared to the average values of non-auto focus pixels in the same area was used to adjust the pixels involved.

I didn’t know how to do this using Siril alone. The value needed to fudge the auto focus pixels has to be computed each time; it does not appear to be static. So… after spending far too much time learning how to write my first CFITSIO program, I came up with the following solution. :slight_smile:

The attached text file is really a C program using CFITSIO to read and write FITS files. (Rename it to fix_xtrans.c before compiling.)

fix_xtrans.txt (7.8 KB)

I built this on a Linux box with CFITSIO installed. I used the following command to compile it. If you can compile the example processes included with CFITSIO, this code should compile the same way.

gcc fix_xtrans.c -o fix_xtrans -lcfitsio -lm

The program takes two parameters. The first is the input FITS file; the second is the name of the output file.

fix_xtrans dark_stacked.fit dark_corrected.fit

It will then compute the fudge amount and apply it to the correct pixels.
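For anyone curious, here is a minimal sketch of that core idea, separate from the attached program and with the CFITSIO I/O left out. The `af_mask` array (1 for an auto focus pixel, 0 otherwise) is an assumption for illustration, not something from the real code:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the workaround from the PixInsight forums: compare the mean of
 * the AF-affected pixels to the mean of the normal pixels in the same sample
 * area, then pull the AF pixels back down by that difference ("fudge").
 * The af_mask array (1 = AF pixel, 0 = normal) is a hypothetical input. */
static double compute_fudge(const double *pix, const unsigned char *af_mask,
                            size_t n)
{
    double af_sum = 0.0, other_sum = 0.0;
    size_t af_n = 0, other_n = 0;
    for (size_t i = 0; i < n; i++) {
        if (af_mask[i]) { af_sum += pix[i]; af_n++; }
        else            { other_sum += pix[i]; other_n++; }
    }
    if (af_n == 0 || other_n == 0)
        return 0.0;
    return af_sum / af_n - other_sum / other_n;
}

static void apply_fudge(double *pix, const unsigned char *af_mask,
                        size_t n, double fudge)
{
    for (size_t i = 0; i < n; i++)
        if (af_mask[i])
            pix[i] -= fudge; /* bring AF pixels back to the local level */
}
```

Because the fudge is measured from the frame itself, the same sketch works whether the camera's compensation changes between sessions or not.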

The intent is that this would run against the stacked master dark and master bias files before those files are used against lights and flats.

The code currently only supports Fujifilm X-T20, but could easily be modified for other cameras that have this same issue. I only have the X-T20. Finding the auto focus pixels seemed to be easiest when using a minimum stack of dark frames. That’s not normally how you would stack darks, but it was easier to see that way.

I now await multiple people chiming in to tell me how I could have done this more easily. :wink: I’d love to hear from anybody who uses this with success.


Hello, thank you for your post.
First, you need to know that this issue does not appear when you preprocess your images without cosmetic correction.
Second, thanks for your code; I will take a look. But my first question would be how to make your code compatible with all Fuji cameras. I think the size and position of this square are not the same for all models, and this is a pain in the ass.

So, of course we want to try to fix this issue, but I would like to find a generic way to do it.

Thanks again.


I assume the square might appear in the final image any time darks or biases are applied in the processing flow. If you just stack light frames, it shouldn’t appear; in that case, the camera’s compensated pixel values are correct. It’s only when we try to image the sensor noise, without signal coming in as light, that the values are off.

I apologize for my code. I haven’t programmed in C for about 20 years. :slight_smile:

I think every sensor is going to be different, but there aren’t too many X-Trans sensors. I have an is_af_pixel function in the code that could hold multiple sensor definitions. That would allow the code to work for multiple cameras. It is certainly a pain in the ass, though.
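To illustrate what a multi-sensor is_af_pixel could look like, here is a table-driven sketch. The bounding box and step values below are made-up placeholders, not the real X-T20 AF layout:

```c
#include <assert.h>

/* Sketch of a per-sensor AF-pixel lookup table. The rectangle bounds and
 * the row/column spacing below are ILLUSTRATIVE ONLY, not the real X-T20
 * geometry; each supported camera would get its own entry. */
typedef struct {
    const char *model;   /* camera model string from the FITS header */
    long x0, y0, x1, y1; /* bounding box of the AF area */
    int  x_step, y_step; /* spacing of AF photosites inside the box */
} af_layout;

static const af_layout layouts[] = {
    { "X-T20", 1000, 800, 5000, 3200, 6, 12 }, /* placeholder values */
};

static int is_af_pixel(const af_layout *s, long x, long y)
{
    if (x < s->x0 || x > s->x1 || y < s->y0 || y > s->y1)
        return 0;
    return (x - s->x0) % s->x_step == 0 && (y - s->y0) % s->y_step == 0;
}
```

Adding another camera would then just mean adding a row to the table, assuming its AF photosites fall on a regular grid like this.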

One thing that could be a parameter is the sample size that is used to compute the fudge amount. My code is currently sampling a 2048x2048 square. This is smaller than the total size of the auto focus area on my camera. I thought the amp glow might raise the average to the point where I end up running too many pixels negative, so I was a little conservative. The AF area seems to be well within the glow on my sensor, but the AF area could be larger on other cameras.
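If the sample size became a parameter, centering the sample window on the sensor is straightforward; a small sketch (the helper name and everything except the 2048 figure are hypothetical):

```c
#include <assert.h>

/* Sketch: compute a sample window centered on a width x height sensor,
 * e.g. 2048x2048, so the averages stay away from the amp glow at the
 * edges. Returns inclusive pixel coordinates of the window corners. */
static void centered_window(long width, long height, long sample,
                            long *x0, long *y0, long *x1, long *y1)
{
    *x0 = (width  - sample) / 2;
    *y0 = (height - sample) / 2;
    *x1 = *x0 + sample - 1;
    *y1 = *y0 + sample - 1;
}
```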

I don’t know if this is something that would actually be included in Siril or not. It would certainly be awesome if it was, but it’s a very specific use case. One thing that might make it more generic is to create a bitmap image of all of the auto focus pixels and write logic driven by the bitmap instead of hardcoded values in the code. Adding support for sensors could be as simple as creating a new bitmap of the affected pixels… something like a hot pixel map. Existing Siril functions could then be used to multiply the bitmap by the fudge amount and subtract it from the master image.
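As a sketch of that bitmap idea (assuming a hypothetical mask array with 1 marking AF pixels and 0 elsewhere), the whole correction collapses into one multiply-and-subtract over the master frame:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the bitmap approach: given a per-camera mask (1 where a pixel
 * belongs to the AF system, 0 elsewhere), correcting the master dark/bias
 * reduces to master[i] -= fudge * mask[i], which generic pixel-math tools
 * could already do. The mask layout itself would come from a file, like a
 * hot pixel map. */
static void subtract_masked(double *master, const unsigned char *mask,
                            size_t n, double fudge)
{
    for (size_t i = 0; i < n; i++)
        master[i] -= fudge * mask[i];
}
```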

The switch from DSLR to mirrorless cameras might bring more issues like this. Auto focus systems are forced to move to the sensor on those cameras.

Wouldn’t this be a good discussion to have with the folks of other raw photo editors here, or is it only important within the scope of Siril?

That’s a fair question. My code was specific to the master dark and master bias FITS files generated by Siril… although FITS files generated by other applications should work the same.

I absolutely love RawTherapee and use it for post processing astrophotography images all the time. I have also used it as part of a non-linear stacking flow, but it’s not ideal in that we can’t easily control selecting multiple darks and multiple flats for calibration frames. So this specific issue might be outside of the normal use case.

That being said, since raw conversion tools like RawTherapee normally support dark frame subtraction in some form, it might be worth handling these AF pixels more gracefully if they aren’t already handled. I’m really not sure though. The extreme stretches where these values become visible seem to be specific to astrophotography stacking.

If this correction is applied by the camera for all pictures (dark, bias, light), doesn’t it compensate during the normal process of darkframe/bias subtraction? I don’t understand how this is different from the normal case in which some pixels have randomly more gain than others.

Right, this is confusing, but it is very much something that manifests itself on the dark and bias frames only. It doesn’t seem to affect lights or flats directly. The reason has everything to do with where the signal is coming from.

I’m making some big assumptions about how this works. I might be wrong, but this is how I understand it…

For lights and flats, the signal we are wanting to capture is the light that comes in through the lens and hits the sensor. This light is going through all the fun stuff at the sensor. The camera is compensating somehow (unsure exactly how) for the auto focus pixels because a little less light hits those photosites. This compensation results in a regular image that doesn’t have the artifact.

For darks and biases, however, we don’t capture any light that came in through the optical path. We’re only capturing various types of noise at the sensor. Any compensation done by the camera incorrectly amplifies this noise as if it were signal to be corrected. This creates the artifact.

The light frame needs the camera adjusted values to look right. If we subtract those away using a dark frame, those values are suddenly lower than they need to be and the artifact appears in the main image. We want the light frame to be corrected by the camera, and we do not want to subtract that correction when we apply the dark frame.

The other part of this to think about is that the amount of correction applied by the camera does not appear to be a static value. This makes it difficult to create an easy fix without needing to sample pixel values each time. I’d love to know what the camera was actually doing. Does anybody know anyone at Fujifilm? :slight_smile:

OK, now I understand. Probably what’s happening is that the camera is correcting all files using a multiplicative factor, without taking into account the non-zero floor (it’s not using dark and bias frames as it should! :wink: ).

However, isn’t the math basically (Signal − Bias − Dark)? If it were only a multiplicative correction, it shouldn’t affect the end result. But maybe there is, as you say, a more complicated (non-linear) algorithm involved.

On the other hand, if it is applying the correction to the light frames as well, the question arises of how to correct those. The effective noise floor there will also be affected by the camera correction, and it should show in a mostly-black night image.

Hello. I don’t have an X-Trans sensor, but users told me that this square is only visible when you enable cosmetic correction. Disable it and you won’t see anything.

I will have to do some more experiments, but I have my doubts about that being correct. Cosmetic correction isn’t even implemented yet for X-Trans.

The log even says this…

Cannot apply cosmetic correction on XTRANS sensor yet

I can try stacking with it enabled/disabled, but that is applied when pre-processing the light frames. The master dark will already show the box at that point.

This is on the dev branch; I disabled it there because of this issue.

@guille2306, A night sky is never black… if it is, the exposure was taken incorrectly. This is why we try to expose with the peak of the histogram not being all the way over to the left on the chart. The peak largely represents the sky background… and still has much higher values than a dark or bias frame.

I had the same thought about correcting the light frames though. The noise for those pixels is amplified along with the signal… so the dark and bias frames are not perfectly compensating. I guess the answer to that is to dither aggressively. :slight_smile:

And I really think it is correct. I know of users who disable cosmetic correction to avoid issues.

I am using the dev branch from a couple days ago. I did some tests using a short stack of M13. I used the DSLR_preprocessing.ssf script. I also ran it once with the -cfa and -equalize_cfa flags removed. In both cases, the resulting image showed a square.

Here’s a screen grab using a negative view and the histogram display mode on the green channel. I have applied a background extraction to remove a gradient. This makes it slightly easier to see, but the square was certainly visible before doing that. The images in this stack were dithered when acquired, so the edges of the square are a bit soft. I hope it shows up in this post.

I then used the same data and steps again. The only difference was I used my code to correct the master bias and master dark before those were applied to the flats and the lights. The result is as follows. Notice the square is gone.

If there’s something else I should try, please let me know. I did experiment with processing without darks and just using flats and bias frames, but the amp glow was then visible on the edges of the final image. The darks are needed to remove that.

This square can be very difficult to see depending on the subject matter and how well the processing takes care of the rest of the noise. Images like this with consistent backgrounds show it best. Dithering (during acquisition) actually makes it easier to see as it smooths out some of the other noise. I learned that with some of my tests.

Thanks for looking at all of this. I really do like Siril. :slight_smile:


@setnes : please could you send me some files ? For example your dark frames, bias frames and flat frames and 2 or 3 lights?
I think I have an idea on how to implement it in the Siril workflow.


My eye fell on this thread, and we had this https://github.com/Beep6581/RawTherapee/issues/5824 posted last week. There seems to be some non-uniformity in the sensor that can not be attributed to the lens (the shots are completely overexposed). Could this be related do you think?

In any case, if raw values are being modified depending on their location, it might be something to consider in software like RawTherapee as well.

I would be interested in a series of shots taken with the lens cap on and with different shutter speeds (from relatively short like 1/10 s to relatively long like 5 s, or extremely long in the case of astrophotography). Could somebody make these and share them?


I’ll try to make them later today. Should LENR be on or off?


I would keep LENR off for now, thanks Daniel!


@setnes : I’ve coded something, so I’m waiting for your frames.
If someone else has another camera model with the same issue, please send me a dark frame. I need to build a library of cameras.

@Thanatomanic @lock042
Presently, I have access to an X-T1, an X-T3, and an X-Pro2.
I do not know if they are affected as well – but in case you
need some shots from any of them, just tell me (with clear
instructions, please).

Have fun!
Claes in Lund, Sweden