Edges of stack showing, dark spots in stacked star field image, what's wrong here?

I am stacking a bunch of sub-exposures, some new, some old. All were processed with Siril with full calibration (i.e. lights, darks, biases, flats, etc.). Then I register and stack them using background normalization and averaging with linear fit clipping.

Because I have some older data, my subs do not totally line up; some may only cover half of a frame or less.

I get results like this - here I have applied an extreme stretch to show the problem.

First, I don’t understand the black polka dots. Second, I don’t understand why I should see bright edges where the subs don’t completely overlap. That should only occur if there was a huge difference in the background, which doesn’t appear to be the case.

If instead, I use Deep Sky Stacker for stacking, using the same pre-processed files from Siril, I get results like this

Which is what I had expected from Siril. Perhaps there are settings that I am not using correctly?

I once had a similar problem, and it was due to some bias files.
Have you tried without the biases?


Did you use a script to do it? Do you have thermal control on the camera, and if not, did you use the same darks for different sessions with significant temperature variation? Did you use normalization in the stacking step? Can you check at various steps of the processing, like at registration, if the images look like what they should look like and if star detection works fine?

You can load the intermediate sequences manually or look at the log for clues.

I did use a script to run Siril from the command line. The camera is a QHY astro camera that is cooled; it does a very good job of maintaining temperature. I was probably using it at -10 °C.

I will go back and look further for problems with the sub-exposures. However, the thing that gets me is that Deep Sky Stacker is using the IDENTICAL sub-exposures.

The workflow is pre-process the frames with Siril - with flats, darks, biases etc.

Then I register and stack with Siril and get this (after an extreme stretch to show the problem)

meanwhile I take the same pp_ files from Siril and register and stack with Deep Sky Stacker and get this

The Siril version shows the edges of where different frames overlapped. The Deep Sky Stacker one does not.

Here is an even more extreme stretch of the DSS version:

It shows some of the frame edges, very very faintly. You can see this along the top of the frame, starting about 1/3 of the frame length from the left side. The amount of "edge effect" that I see from DSS is about what I would expect - i.e. almost none.

There are 175 subs averaged together to make these images, so the areas with the most overlap have 175 subs. I don't know the least overlap, but I would guess it is 150, or maybe even as low as 125.

With decent subs, averaging 175 samples of background versus averaging 150 or 125 samples shouldn't cause a big difference in tone.

If there was a sub that was crazy bright, that might cause this, but there aren't any like that.

So, I don’t think that the problem lies with the pp_ files, I think the problem is in registration and integration.

As a further experiment I took the pp_ files from SIRIL and stacked them in Astro Pixel Processor. This came out much like the Deep Sky Stacker version - virtually no hard edges.

Siril does not stack in mosaic mode, which would imply a varying signal-to-noise ratio across the image. So it is perfectly normal to see the edges. You need to crop them.

but DSS and APP do?

Yes, they can, but with poor quality at the edges.

No, not with poor quality - it depends on the number of images. If you have 100 images stacked in some parts of the frame, and 120 images elsewhere, the part you say has "low quality" still has 100 images stacked.

The SNR in the parts with 120 will be higher, but only by about 9.5% (SNR scales as √N, and √(120/100) ≈ 1.095). That's a tiny difference in SNR; it shouldn't be visible. The portions with 100 images stacked can still be very high quality by any absolute standard.
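For the record, the 9.5% figure follows from the usual √N scaling of SNR when averaging N subs of similar noise. A quick sanity check:

```python
import math

# SNR of an average of N equally noisy subs scales as sqrt(N),
# so stacking 120 subs instead of 100 improves SNR by only:
gain = (math.sqrt(120 / 100) - 1) * 100
print(f"{gain:.1f}%")  # about 9.5%
```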

This ought to have nothing to do with mosaic mode. I did NOT use mosaic mode in either DSS or APP. I am stacking images on a reference frame. The result is clipped to that reference frame.

How is that a mosaic? There is only a single panel.

It’s true that my subs do not all line up perfectly, but that does not make it a mosaic.

SIRIL made a design decision not to do mosaics. Mosaics involve correcting for lens distortion and more complicated image geometry.

But that’s not relevant here. Apparently you are interpreting no mosaics as an excuse to do a bad job on stacking a single frame.

What is the origin of the edges? Doing the math on averaging suggests that there should not be a big difference between an average of 100 and, say, an average of 120.

Obviously, if there were a big difference in background brightness in the 20 shots that differ between the two regions, that would be one thing, but that is not the case for my shots.

And, even if there is some difference, SIRIL has a normalization option, which I used, which is supposed to (as much as possible) get rid of variations in background brightness.

So it's not clear to me why the edges are so visible in SIRIL.

But regardless of why it's there, it's a bug. It just means SIRIL is useless for stacking unless there is perfect alignment.

Which is unfortunate: it makes SIRIL very limited in applicability for people who want to incorporate subs from previous sessions. If you set out to shoot a specific shot, you can align things, but if you have relevant frames shot with a different alignment in the past, then you can't.

I believe this mode is called mosaic in DSS, even if it is just an overlap.

Hey please, feel free to contribute if you think you can do better.


> I believe this mode is called mosaic in DSS, even if it is just an overlap.

No, it is called normal mode. There is an example showing overlapping frames in normal mode in the DSS manual http://deepskystacker.free.fr/english/index.html

> Hey please, feel free to contribute if you think you can do better.

Actually, I just did contribute. I pointed out a deficiency in the specifications.

In general, for a software developer to tell a user "contribute if you think you can do better" is a super arrogant, defensive, and dismissive way to shut down criticism - as if the only people entitled to comment are those who can be developers.

I hope that attitude isn’t typical of SIRIL, because it is very hard to make something great if there is nothing but aggressive defense of the current limitations. Why would a good developer try to push things ahead with that kind of attitude?


Um, lock042, can you see that the shot you demonstrated is showing exactly this case - and that the software is in Standard mode, NOT mosaic mode?

You are so defensive that you didn’t even look at your own example.

This kind of comment doesn’t help me to be kind. But that’s probably me.

The screenshot from DSS is from the manual page that I linked to.

Standard mode takes a reference frame - outlined in green, and clips the result to its borders. That is the mode that does the correct stacking of my frames in posts above.

Mosaic mode would give you all of the frames shown.

Intersection mode gives you only the part where they all intersect.

Please let’s chill out.


In the event that there is somebody from the SIRIL development team reading this thread who is actually interested in the technical aspect, here is a suggestion of what is going on.

During integration, one does an average of all of the pixels for a given location in the image. That average might be a weighted average, and there is also the option of doing outlier rejection.

So if we have M sub-frames in a given area, you find the M pixels at a location and do outlier rejection to get N pixels, where N ≤ M. Those N pixel values are then added together, possibly after multiplying by a weighting factor (quality factor, SNR factor; there are several approaches). And then you do the average: sum the N pixel values and divide by N.
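A minimal sketch of that per-pixel averaging step, with a simple sigma clip standing in for linear fit clipping (this is illustrative only, not Siril's actual code):

```python
from statistics import fmean, pstdev

def stack_pixel(samples, sigma=3.0):
    """Average the samples at one pixel location after a simple
    sigma clip. samples holds the M values available here; the
    function returns the mean of the N <= M survivors, dividing
    by N (the LOCAL count of surviving samples), never by M.
    """
    mu, sd = fmean(samples), pstdev(samples)
    if sd == 0:
        return mu
    # Reject outliers more than sigma standard deviations from the mean.
    kept = [v for v in samples if abs(v - mu) <= sigma * sd]
    return sum(kept) / len(kept)  # divide by the LOCAL count N
```

The key point is the last line: the divisor is the number of samples that actually survived at this location, so regions covered by fewer subs still produce a true average.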

The result that SIRIL is putting out suggests to me that there MIGHT be a bug where the number M is a global value for the stack. So if we have M images to stack, you base M, or the quality factors, on the maximum number of frames rather than the number actually overlapping at that place in the image.

That’s just a guess on my part, and it could be that the problem is due to something else.

However, this would explain why there are such pronounced edges: the areas with fewer frames are not actually a true average, even though there should be very little difference between true averages. If the maximum number across the whole stack is Mglobal, then areas of the stack with a lower number Mlocal would effectively be suppressed by a factor Mlocal/Mglobal.

Or, as an alternative, something similar could occur in the process of image normalization. In that case, each pixel of each image might effectively be multiplied by something like (Mlocal/Mglobal).
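To make the guess concrete, here is what dividing by a global M instead of the local count would do to a flat background (purely hypothetical numbers and variable names; this is my illustration of the suspected behavior, not Siril's code):

```python
# Suppose a uniform background of 100 ADU, covered by 175 subs over
# most of the frame but only 125 subs in a low-overlap region.
background = 100.0
m_global = 175   # maximum overlap anywhere in the stack
m_local = 125    # overlap in the low-coverage region

correct = background * m_local / m_local  # divide by the local count
buggy = background * m_local / m_global   # divide by the global count

print(correct)  # 100.0 -> no visible edge
print(buggy)    # darker by the factor m_local/m_global -> hard edge
```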

Isn’t telling a software developer doing a bad job also arrogant? We reap what we sow.


The actual quote was "a bad job on stacking" - not an overall bad job. There are many good aspects of SIRIL.

However, I think that if you look at the images I posted, SIRIL does a bad job of stacking my frames, while DSS and APP do vastly better.

If you think that the SIRIL result is good compared to DSS, then I really don’t know what to say.

Also, note that in my original post I suggested that the problem was my mistake and/or ignorance. It was only when I got the aggressive assertion that this is the way SIRIL is supposed to be, that the others are low quality, etc., that I was goaded into calling out the obvious bad result.


As a moderator, I did not even think about comparing Siril to DSS or whatever. My intention was just about being polite and wording constructive vs. destructive criticism.