Edges of stack showing, dark spots in stacked star field image, what's wrong here?

Fair enough, I will try to be constructive at all times.


It's not clear to anybody.

What kind of normalization did you use?

additive with scaling
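For what it's worth, my understanding of -norm=addscale is that each frame gets shifted and scaled so that its background level and dispersion match the reference frame's before the pixels are combined. A minimal numpy sketch of that idea (my own illustration, using median/MAD as the estimators; Siril's actual estimators may differ):

```python
import numpy as np

def normalize_addscale(frame, ref_bg, ref_disp, eps=1e-12):
    """Additive + scaling normalization (sketch, not Siril's code):
    shift and scale a frame so its background level and dispersion
    match those of the reference frame."""
    bg = np.median(frame)
    disp = np.median(np.abs(frame - bg))  # MAD as a robust dispersion
    scale = ref_disp / max(disp, eps)     # match the dispersion
    return (frame - bg) * scale + ref_bg  # match the background level
```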

Can you show the script you are using? Otherwise, what's the rejection method you used? And which version of Siril, on what OS?

Here is a sample of how siril is being invoked:

stack r_pp_IMG rej 5 5 -norm=addscale -output_norm

Where the r_pp_ images have already been pre-processed and registered by Siril in previous stages.

Here is a portion of the log for the stacking portion:

log: Rejection stacking complete. 99 images have been stacked.
log: Integration of 99 images:
log: Pixel combination … average
log: Normalization … additive + scaling
log: Pixel rejection … Winsorized sigma clipping
log: Rejection parameters … low=5.000 high=5.000
log: Background noise value (channel: #0): 153.785 (2.347e-03)
log: Background noise value (channel: #1): 123.203 (1.880e-03)
log: Background noise value (channel: #2): 101.252 (1.545e-03)
log: Saving FITS: file Final_Stacked_IMG.fits, 3 layer(s), 6072x4044 pixels

I have also tried other rejection settings.

If you would like me to try some specific combinations of settings, I am happy to try them.

I could also upload the files, but they are huge. I could perhaps make lower-resolution versions by binning them prior to registration.
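To be concrete, this is the kind of 2x2 software binning I have in mind (a minimal numpy sketch; the function name is just illustrative):

```python
import numpy as np

def bin2x2(img):
    """2x2 software binning (illustrative sketch): average each 2x2
    block of a mono frame, halving both dimensions. Odd rows and
    columns are trimmed first."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```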

This is all on Windows 10, Siril version 0.99.6.

Somewhere in the log, not far from that, you should have a rejection report, something like "Pixel rejection in channel ". Can you copy those lines?

Maybe you could make one image from each session available, from the r_pp_ sequence, so we can see the differences in level and how the normalization copes with them?
Thanks

OK, here is an experiment to show the problem clearly.

I took 120 sub-exposures which were very well aligned during capture, so they should stack without much in the way of messy edges.

I made three stacks:

  1. All 120 subs.

  2. 100 subs - I just omitted 20 subs.

  3. 100 subs + 20 cropped square. In this case I took the same 20 subs that were omitted from #2 and cropped them to a square in the middle of the frame (a sketch of the crop follows this list).
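For clarity, here is a sketch of the crop in #3 (my own numpy illustration of the geometry, not how I actually did it):

```python
import numpy as np

def center_square_crop(img):
    """Cut the centered square out of a landscape frame,
    e.g. 8216x5482 -> 5482x5482 (illustrative sketch)."""
    h, w = img.shape[:2]
    side = min(h, w)
    y0, x0 = (h - side) // 2, (w - side) // 2
    return img[y0:y0 + side, x0:x0 + side]
```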

Here they are after an identical stretch.

First, all 120 images

Second, 100 images

Note that visually, this looks identical to the 120-image stack. That's because the average of 100 samples and the average of 120 samples are bound to be very similar: the background noise only differs by a factor of √(120/100) ≈ 1.1.

Here is the stack of 120 images, with 20 of them cropped to square and 100 not cropped. The 100 that are not cropped are the same images as in the 100-image stack. The 20 which are cropped are the same ones as in the 120-image stack.

So, the square center of this image ought to be identical to the center of the 120-image stack, and the left and right edges of this image ought to be identical to the 100-image stack.

But the visual effect is quite different.

This is with additive + scaling for normalization, and linear fit clipping for rejection (low and high of 5 as per default).

This behavior is consistent with my theory in a post above: the wrong normalization number is being used in the areas which are exposed by the cropping and covered by only 100 subs, versus the center that has 120. But I don't know the Siril internals, so perhaps it is caused some other way.
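To make the theory concrete, here is a toy numpy model of what I suspect is happening (entirely my own simplification, not Siril's code): if the per-pixel sum is divided by the global frame count instead of by the number of frames that actually cover each pixel, the 100-deep edges come out darker than the 120-deep center.

```python
import numpy as np

# Toy model (not Siril's code): 120 flat frames of value 1000; 20 of
# them only cover the central half of the width, like the cropped subs.
h, w, n = 8, 16, 120
frames = np.full((n, h, w), 1000.0)
covered = np.zeros((n, h, w), dtype=bool)
covered[:100] = True                         # 100 full frames cover all
covered[100:, :, w // 4:3 * w // 4] = True   # 20 cropped: center only

sums = np.where(covered, frames, 0.0).sum(axis=0)
depth = covered.sum(axis=0)                  # 120 in center, 100 at edges

good = sums / depth   # per-pixel divisor: seamless
bad = sums / n        # global divisor: step at the crop edge

print(good[0, 0], good[0, w // 2])   # 1000.0 1000.0
print(bad[0, 0], bad[0, w // 2])     # 833.3... 1000.0
```

With the per-pixel divisor the two regions agree exactly; with the global divisor there is a ~17% brightness step, which after a stretch would look just like the hard edges above.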

Here is the log description of rejection and stacking:

19:17:45: Starting stacking…
19:18:57: Pixel rejection in channel #0: 0.096% - 37.173%
19:18:57: Pixel rejection in channel #1: 0.152% - 42.457%
19:18:57: Pixel rejection in channel #2: 0.054% - 34.100%
19:18:58: Rejection stacking complete. 120 images have been stacked.
19:18:58: Integration of 120 images:
19:18:58: Pixel combination … average
19:18:58: Normalization … additive + scaling
19:18:58: Pixel rejection … linear fit clipping
19:18:58: Rejection parameters … low=5.000 high=5.000
19:18:58: Background noise value (channel: #0): 55.160 (8.417e-04)
19:18:58: Background noise value (channel: #1): 47.334 (7.223e-04)
19:18:58: Background noise value (channel: #2): 50.414 (7.693e-04)

That's an interesting test, thank you. When you say you cropped the images, does it mean you replaced the sides with black or white pixels, or did you really change their dimensions?

I literally cut off the sides. The cropped shots are 5482 x 5482 pixels. The non-cropped versions are 8216 x 5482 pixels.

Here is the basic idea. The problem I faced in my original post is that I had sub-frames taken of the same area, but with somewhat different centers. When they are overlapped by Siril, the edges show - and show much more than I think they should.

The cropping is a way to simulate that in a controlled fashion. When the cropped images are registered with the uncropped images, they cannot completely overlap the non-cropped frames. So the final integrated image will have some areas with a stack of 100 frames and some with a stack of 120 frames.

But because I made the partial frames by cropping, we also know exactly what the result would look like if I had not cropped - i.e. the stack of all 120 original frames. And by omitting the 20 frames, we can see what the areas covered by only 100 frames should look like.

Here is my conclusion: there is some bug in the way Siril handles the case where pixels in the final stack are N images deep in some places and M images deep in others. Integration ought to be done on a per-pixel basis, so it shouldn't matter, but that's not what appears to be happening.

That is why we see the edges so clearly. You shouldn't be able to see the edges between the 100-image and 120-image areas without very careful examination.

It wasn't found previously because it either looked like an artifact, or if somebody asked they were told to just crop away the edges - as I was told in earlier posts in this thread. The difference in brightness could be ascribed to differences between frames. But this experiment shows that is clearly not the case.

If my theory is correct then the reason we canā€™t see the edges in APP or DSS is not because of some special feature they have, but rather because they are doing the pixel math correctly.

Again, I don't know the internals of Siril, so maybe I am wrong, but that is what this looks like to me.


Just curious as to whether you need more, or a different, demonstration.

Or, if you don't need more, whether I am wrong about my theory.

No, we don't need more; we can see the problem, thank you. It's not as easy to fix as changing the division, so it has been postponed until the release after 0.99.8.


Hi,

I think I am also affected by this problem.
I'm wondering what the progress is on the fix?
Is there a development build available, or maybe a git repo/branch to build the app from?

I would be happy to test it.

Sure, but no improvement so far.
As @vinvin said here:

But it's not a very important feature, in fact: most of the time people tend to get their images relatively well framed, because otherwise you lose the benefit of the increased signal-to-noise ratio on the largest part of the image.

This has been fixed in the dev version, for the next release.
