I’m using an OSC camera (ASI534MC) and I’m used to seeing “green” images straight from the camera and in the first steps of my processing workflow. I usually choose the Autostretch visualization to help me identify gradients and defects, but before color balance this option is of limited use. Usually what I get is:
But today I was trying ASIStudio and I saw that the same FITS looks like this in their FITS viewer:
As you can see, there is no green tint and the visualization is much more useful. PixInsight’s autostretch also removes the green dominance.
To investigate what Siril is doing, I split the original image into its R, G and B components, autostretched each component separately (with the histogram tool) and recomposed an RGB image:
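The manual workaround above amounts to stretching each channel against its own histogram instead of a single combined one. Here is a minimal sketch of the idea in Python/NumPy, assuming a simple percentile-based stretch (Siril’s real autostretch uses an MTF-based transfer function, so this is only an illustration of the per-channel principle):

```python
import numpy as np

def autostretch_channel(ch, low_pct=0.1, high_pct=99.9):
    """Rescale ONE channel to [0, 1] using its own percentiles.

    low_pct/high_pct are illustrative clipping points, not Siril's
    actual autostretch parameters.
    """
    lo, hi = np.percentile(ch, [low_pct, high_pct])
    return np.clip((ch - lo) / (hi - lo), 0.0, 1.0)

def per_channel_autostretch(rgb):
    """Stretch R, G and B independently, then stack them back.

    Because each channel is normalized against its own statistics,
    a globally stronger green channel no longer dominates the preview.
    """
    return np.dstack([autostretch_channel(rgb[..., i]) for i in range(3)])
```

With a single global stretch, the green channel’s stronger signal drags the whole preview green; stretching per channel equalizes the three histograms, which is exactly what the recomposed image shows.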
Now this is the kind of autostretch preview that would be very useful. For example, I can quickly see that I have a gradient to remove and a green tint that SCNR can fix.
So the question is: why is the current preview scheme applied? Could it be for performance reasons?