After some frustrating attempts to use PixInsight, I became disenchanted with the lack of documentation and the multiple ways to do what seem awfully much like the same thing, so I decided to give Siril another look. The tutorials looked interesting, and they actually seem to explain concepts instead of offering a bunch of do this/do that recipes. But I've run into a strange problem with background extraction that I don't know what to do with.
The tutorial tells me, under "Use Case":
The first reflex to have, and which allows you to properly see the gradient present in the image, is to switch to Histogram visualization mode. Thus, the image is visually very stretched and all the defects become visible.
Makes sense, but the examples given in the tutorial are monochrome.
Here is a stack of 66 five-minute exposures of the Whale Galaxy, NGC 4631, taken with my ZWO ASI533 MC Pro camera. The night was moonless and light pollution very low. Stacking was done with a Siril script.
Not sure why it's so blue, but background extraction WILL get rid of most of that.
Now here is the Histogram Visualization mode:
Well, that sure looks odd. Where are all those crazy colors coming from? And how does this mode guide me in doing background extraction? All I can do is put in a bunch of samples just about everywhere except on the stars, the main galaxy, and the other galaxy below it.
The best I can get with Background Extraction is something like this (autostretch):

Not too bad, but there's still more color than I'd like, and it's more vivid on my screen than it appears here. And when I tried to do color calibration, it gave me an error message saying my background extraction was wrong.
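In case it helps, here is roughly the script equivalent of what I'm doing. I'm assuming Siril 1.2's `subsky` command here, and the filename and the sample/tolerance values are just placeholders for my settings, not recommendations:

```
# Siril script sketch -- filenames and parameter values are illustrative only
requires 1.2.0
# load the stacked result (name depends on your stacking script)
load result
# RBF background extraction; -samples and -tolerance mirror the GUI dialog
subsky -rbf -samples=20 -tolerance=1.0 -smooth=0.5
save result_bge
```

The GUI Background Extraction tool with the RBF method should be doing essentially the same thing; I placed the samples manually instead of letting it generate them.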
What am I not understanding?
Thanks.