Edges of stack showing, dark spots in stacked star field image, what's wrong here?

The DSS screenshot is from the manual page that I linked to.

Standard mode takes a reference frame (outlined in green) and clips the result to its borders. That is the mode that correctly stacks my frames in the posts above.

Mosaic mode would give you all of the frames shown.

Intersection mode gives you only the part where they all intersect.

Please let’s chill out.

In case somebody from the SIRIL development team is reading this thread and is interested in the technical aspect, here is a suggestion as to what is going on.

During integration, one averages, for each location in the image, the pixel values from all of the sub-frames. That average might be weighted, and there is also the option of doing outlier rejection.

So if we have M sub-frames covering a given area, you take the M pixel values at a location and do outlier rejection to get N values, where N <= M. Those N pixel values are then summed, possibly after multiplying each by a weighting factor (quality factor, SNR factor, there are several approaches), and divided by N to form the average.
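
Here is a minimal sketch of that per-pixel computation, assuming a simple sigma-clip for the rejection step. This is my reading of the scheme described above, not Siril's actual code, and the function name and defaults are purely illustrative:

    import numpy as np

    def stack_pixel(samples, weights=None, k_low=5.0, k_high=5.0):
        # samples: the M values found at one (x, y) location, one per sub-frame
        samples = np.asarray(samples, dtype=float)
        med, sigma = np.median(samples), np.std(samples)
        # outlier rejection: keep the N <= M values inside the clip window
        keep = (samples >= med - k_low * sigma) & (samples <= med + k_high * sigma)
        kept = samples[keep]
        if weights is None:
            return kept.mean()  # sum the N values and divide by N
        w = np.asarray(weights, dtype=float)[keep]
        return np.sum(w * kept) / np.sum(w)  # weighted average over the survivors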

The result that SIRIL is putting out suggests to me that it MIGHT be a bug where the number M is a global value for the stack. So if we have M images to stack, you base M, or the quality factors, on the maximum number of frames rather than on the number of frames that actually overlap at that place in the image.

That’s just a guess on my part, and it could be that the problem is due to something else.

However, this would explain why there are such pronounced edges: the areas covered by fewer frames are not actually an average. There should be very little difference between the averages. But if the maximum number across the whole stack is Mglobal, then the areas of the stack covered by a lower number Mlocal would effectively be suppressed by a factor Mlocal/Mglobal.

Or as an alternative, something similar could occur in the process of image normalization. In that case each pixel of each image might effectively be multiplied by something like (Mlocal/Mglobal).
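
To put hypothetical numbers on that suppression factor (the sky level of 1000 and both counts are made up purely for illustration):

    # Hypothetical numbers to illustrate the suspected effect.
    Mlocal, Mglobal, sky = 100, 120, 1000.0
    true_average = (Mlocal * sky) / Mlocal   # 1000.0: divide by the local count
    suspected    = (Mlocal * sky) / Mglobal  # ~833.3: divide by the global count
    print(true_average, suspected, Mlocal / Mglobal)

A background drop of roughly 17% would be easily visible after a stretch, which would match the pronounced edges.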

Isn’t telling a software developer that they are doing a bad job also arrogant? We reap what we sow.

The actual quote was "a bad job of stacking", not an overall bad job. There are many good aspects of SIRIL.

However, I think that if you look at the images I posted, SIRIL does a bad job of stacking my frames, while DSS and APP do vastly better.

If you think that the SIRIL result is good compared to DSS, then I really don’t know what to say.

Also, note that in my original post I suggested that the problem was my mistake and/or ignorance. It was only when I got the aggressive assertion that this is the way SIRIL is supposed to be, that the others are low quality, etc., that I was goaded into calling out the obvious bad result.

As a moderator, I did not even think about comparing Siril to DSS or whatever. My intention was just to encourage politeness and constructive rather than destructive criticism.

Fair enough, I will try to be constructive at all times.

It’s not clear to anybody.

What kind of normalization did you use?

additive with scaling

Can you show the script you are using? Otherwise, what rejection method did you use? What version of siril, on what OS?

Here is a sample of how siril is being invoked:

stack r_pp_IMG rej 5 5 -norm=addscale -output_norm

where the r_pp_ images have already been pre-processed and registered by siril in previous stages.

Here is a portion of the log for the stacking portion:

log: Rejection stacking complete. 99 images have been stacked.
log: Integration of 99 images:
log: Pixel combination … average
log: Normalization … additive + scaling
log: Pixel rejection … Winsorized sigma clipping
log: Rejection parameters … low=5.000 high=5.000
log: Background noise value (channel: #0): 153.785 (2.347e-03)
log: Background noise value (channel: #1): 123.203 (1.880e-03)
log: Background noise value (channel: #2): 101.252 (1.545e-03)
log: Saving FITS: file Final_Stacked_IMG.fits, 3 layer(s), 6072x4044 pixels

I have also tried other rejection settings.

If you would like me to try some specific combinations of settings, I am happy to try them.

I could also upload the files but they are huge. I could perhaps make a lower resolution version by binning them prior to registration.

This is all on Windows 10, siril version 0.99.6

Somewhere in the log, not far from here, you should have a rejection report, something like "Pixel rejection in channel "; can you copy those lines?

Maybe you could make available an image from each of the sessions, from the r_pp_ sequence, so we can see the differences in level and how the normalization manages to cope with them?
Thanks

OK, here is an experiment to show the problem clearly.

I took 120 sub-exposures which were very well aligned during capture so they should stack without much in the way of messy edges.

I made three stacks:

  1. All 120 subs.

  2. 100 subs - I just omitted 20 subs.

  3. 100 subs + 20 cropped square. In this case I took the same 20 subs that were omitted from #2, and I cropped them to a square in the middle of the frame.

Here they are after an identical stretch.

First, all 120 images

Second, 100 images

Note that visually, this looks identical to the 120-image stack. That’s because the average of 100 samples and the average of 120 samples are bound to be very similar.

Here is the stack of 120 images, with 20 of them cropped to square and 100 not cropped. The 100 that are not cropped are the same images as in the 100-image stack. The 20 that are cropped are the same ones that appear uncropped in the 120-image stack.

So, the square center of this image ought to be identical to the center of the 120-image stack. The left and right edges of this image ought to be identical to the 100-image stack.

But the visual effect is quite different.

This is with additive + scaling for normalization, and linear fit clipping for rejection (low and high of 5 as per default).

This behavior is consistent with my theory in a post above that the wrong normalization number is being used in the areas which are exposed by the cropping and only have 100 subs, versus the center that has 120. But I don’t know siril internals, so perhaps it could be caused some other way.
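
For what it’s worth, the suspected arithmetic is easy to reproduce outside Siril. Here is a small NumPy sketch of the same experiment (synthetic flat frames, no rejection or weighting; the sizes and sky level are made up), comparing a per-pixel divisor against a global frame count:

    import numpy as np

    rng = np.random.default_rng(0)
    H, W, M = 100, 150, 120
    frames = np.full((M, H, W), np.nan)  # NaN marks "no coverage"

    for i in range(M):
        sub = 1000 + rng.normal(0, 10, (H, W))  # flat sky + noise
        if i < 20:                              # the 20 "cropped" frames
            frames[i, :, 25:125] = sub[:, 25:125]
        else:
            frames[i] = sub

    coverage = np.sum(~np.isnan(frames), axis=0)  # 100 or 120 per pixel

    buggy   = np.nansum(frames, axis=0) / M         # global divisor
    correct = np.nansum(frames, axis=0) / coverage  # per-pixel divisor

    print(buggy[50, 10], buggy[50, 75])      # ~833 vs ~1000: a sharp step
    print(correct[50, 10], correct[50, 75])  # both ~1000: no visible edge

The global divisor produces a brightness step of exactly Mlocal/Mglobal at the crop boundary; the per-pixel divisor produces none.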

Here is the log description of rejection and stacking:

19:17:45: Starting stacking…
19:18:57: Pixel rejection in channel #0: 0.096% - 37.173%
19:18:57: Pixel rejection in channel #1: 0.152% - 42.457%
19:18:57: Pixel rejection in channel #2: 0.054% - 34.100%
19:18:58: Rejection stacking complete. 120 images have been stacked.
19:18:58: Integration of 120 images:
19:18:58: Pixel combination … average
19:18:58: Normalization … additive + scaling
19:18:58: Pixel rejection … linear fit clipping
19:18:58: Rejection parameters … low=5.000 high=5.000
19:18:58: Background noise value (channel: #0): 55.160 (8.417e-04)
19:18:58: Background noise value (channel: #1): 47.334 (7.223e-04)
19:18:58: Background noise value (channel: #2): 50.414 (7.693e-04)

That’s an interesting test, thank you. When you say you cropped the images, does it mean you replaced the sides with black or white pixels, or did you really change their dimensions?

I literally cut off the sides. The cropped shots are 5482 x 5482 pixels. The non-cropped versions are 8216 x 5482 pixels.

Here is the basic idea. The problem I faced with my original post is that I had sub-frames taken in the same area, but with somewhat different centers. When they are overlapped by Siril the edges show - and show much more than I think they should.

The cropping is a way to simulate that in a controlled fashion. When the cropped images are registered with the uncropped images, they cannot completely cover the non-cropped frames. So the final integrated image will have some areas with a stack of 100 frames and some with a stack of 120 frames.

But because I made the partial frames by cropping, we can also get the full answer of what they would look like if I had not cropped, i.e. the stack of all 120 original frames. Or, by omitting the 20 frames, we can see what the areas covered by only 100 frames should look like.

Here is my conclusion: there is some bug in the way that Siril is handling a case where pixels in the final stack may be N images deep in some places and M images deep in other places. Integration ought to be done on a per-pixel basis so it shouldn’t matter, but that’s not what appears to be happening.

That is why we see the edges so clearly. You shouldn’t be able to see edges between 100- and 120-image stacks without very careful examination.

It wasn’t found previously because it either looked like an artifact, or, if somebody asked, they were told they should just crop away the edges, as I was told in earlier posts in this thread. The difference in brightness could be ascribed to differences between frames. But this experiment shows that is clearly not the case.

If my theory is correct then the reason we can’t see the edges in APP or DSS is not because of some special feature they have, but rather because they are doing the pixel math correctly.

Again, I don’t know the internals of Siril, so maybe I am wrong, but that is what this looks like to me.

Just curious as to whether you need more, or a different, demonstration.

Or, if you don’t need more, whether I am wrong about my theory.

No, we don’t need more; we can see the problem, thank you. It’s not as easy to fix as changing the division, so it has been postponed to the release after 0.99.8.

Hi,

I think I was also affected by this problem.
What is the progress on the fix?
Is there any development build available, or maybe a git repo/branch to build the app from?

I would be happy to test it.

Sure, but there is no improvement so far.
As @vinvin said here:

But it’s not a very important feature, in fact: most of the time people tend to get their images relatively well framed; otherwise you lose the benefit of increased signal/noise over the largest part of the image.

This has been fixed in the dev version, so it will be in the next release.
