Why does stacking increase brightness?

I am fairly new to astro imaging, but I enjoy it immensely. I hope you don’t mind this newbie question, but why does stacking in Siril increase brightness? I thought stacking would just improve the signal-to-noise ratio.

As you can see in the attached image, the more frames I stack over a given region, the brighter that region becomes. This results in highly visible edges when the source frames are not perfectly aligned.

Why is this the case? Does stacking not only improve the signal-to-noise ratio but also increase brightness?

Your visualization sliders are not set to the min and max values. Put the high slider to the maximum.

Thanks for the quick reply, but I don’t quite understand: Min/Max is selected. The difference in brightness is present in the picture after the stack; it is not an effect of the auto-stretch. The same effect appears in, e.g., logarithmic mode, and it is also present in linear mode.

My source images are nearly identical: same night, same camera, same settings. I just did not perfectly realign the camera after changing the battery.

I am talking about the sliders under the preview, not the Min/Max button.

Ah, but that doesn’t change anything, either…

OK, because now you are in log mode.
Stacking is just a mean operation; it does not change brightness.

And at the edges you can get artifacts due to the number of images used for normalization.

Still present, also in linear mode…

Sorry, but maybe my understanding of stacking is wrong…

But I thought that for each pixel x you do:
x = (1/N) * (sum of all x_n from the stack),
where N is the number of pictures that contain pixel x.

To me, it looks like the 1/N is missing: the more frames I stack onto a pixel, the brighter it gets?!
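For example, with N = 2 frames that both have the value 100 at some pixel, the mean gives (100 + 100)/2 = 100, the same brightness as a single frame. Without the per-pixel 1/N, the overlap would read 200 while a region covered by only one frame reads 100; that is exactly the kind of brightness step I see at the edges.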

At the edges this is a known situation: Handling out-of-frame values in mean stacking (#647) · Issues · FA / Siril · GitLab

You need to crop.

Sorry, maybe I am using it wrong, but the more frames I have covering the same region, the brighter that region becomes. Not just with the data from Deneb last night, but also with Orion, etc…

Thanks for the pointer to GitLab. From reading that, I take it that changing the behavior was in the works but has been shelved for now? So compiling the latest master will not help?

So far we are not working on it, because there is no easy solution for now.
In our current implementation we have no way to know whether a pixel is outside the image, since Siril pixels can take any real value.

Oh… that’s really unfortunate… But wouldn’t it suffice to make the divisor 1/N a per-pixel value? Each time a pixel gets added to, increment N locally for that one pixel. One would just need to store one extra value per pixel, counting how often that pixel has been added to.
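Something like this toy NumPy sketch is what I have in mind (my own simplification with integer offsets for frame placement, certainly not Siril’s actual code):

```python
import numpy as np

def mean_stack_with_counts(frames, offsets, out_shape):
    """Toy mean stacking with a per-pixel counter (pure translations only)."""
    acc = np.zeros(out_shape)            # running sum per pixel
    n = np.zeros(out_shape, dtype=int)   # local N: how many frames hit each pixel

    for frame, (r, c) in zip(frames, offsets):
        h, w = frame.shape
        acc[r:r + h, c:c + w] += frame   # add this frame's pixels...
        n[r:r + h, c:c + w] += 1         # ...and increment N only there

    # divide by the local N; pixels never covered stay at 0
    return np.divide(acc, n, out=np.zeros_like(acc), where=n > 0)

# Two identical frames, shifted: the overlap and the single-coverage
# strips come out at the same level, so no brightness step at the edge
f = np.full((4, 4), 100.0)
print(mean_stack_with_counts([f, f], [(0, 0), (0, 2)], (4, 6)))
```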

Yes, it should be that simple, except in the case of global registration where images are rotated: during stacking, Siril does not know whether a pixel holds the value X because that is the actual value in the image or because X is the fill value for out-of-image pixels.
We handle two pixel formats: 16-bit unsigned integers and 32-bit floating point values. For the former we use 0 as the undefined-pixel value, so the confusion is quite severe; for the latter we could indeed use minus infinity or some other arbitrary sentinel value to keep track of out-of-image pixels.
We just have not decided how to handle it yet, and there may be more obvious solutions for the case where rotation is not used in registration, since then images are not rotated and the image-shift information is available at stacking time.
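As a rough illustration of the sentinel idea for the float case, here is a NumPy analogue that uses NaN instead of minus infinity, since NumPy already ships a nanmean; this is not what Siril currently does:

```python
import numpy as np

# Three registered float frames on a common canvas (one row of pixels);
# NaN marks pixels that fell outside the original image after rotation
a = np.array([0.2, 0.3, np.nan], dtype=np.float32)
b = np.array([0.2, np.nan, np.nan], dtype=np.float32)
c = np.array([0.2, 0.3, 0.4], dtype=np.float32)

# nanmean averages only the defined samples at each pixel,
# i.e. it divides by the per-pixel N automatically
print(np.nanmean(np.stack([a, b, c]), axis=0))  # [0.2 0.3 0.4]
```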


Thanks a lot! That clarifies everything!

So the problem comes, in a sense, from starting with the reference picture and then traversing the other pictures (the stack).

One could instead traverse the registered images (the stack), average over the complete area/domain they cover, and crop afterwards. I guess that would make selecting a reference image unnecessary…

So, if a_i is the lower-left and b_i the upper-right coordinate of registered image i, one could average over the region from min_i a_i to max_i b_i from the perspective of each registered stack image. That way, one would never need the value of a non-existent pixel.

This is what I mean: first traverse all registered images, e.g. P1 and P2 here, to determine the domain they cover (black). This background domain starts with N = 0. Then add each picture P_i into the background domain and increment N locally.

This would also free one from the need to select a reference image, which I imagine is quite beneficial (see the sketch below the figure).

[image: stackdemo]
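In code, the bounding-box part could look roughly like this toy Python helper for pure translations, using top-left corners as in image coordinates (rotated frames would need their warped footprint instead of a simple rectangle; all names are mine):

```python
def union_domain(offsets, shapes):
    """Bounding box covering all registered frames (no reference needed).

    offsets: (row, col) of each frame's top-left corner in global coords
    shapes:  (height, width) of each frame
    """
    r0 = min(r for r, _ in offsets)                             # min_i a_i
    c0 = min(c for _, c in offsets)
    r1 = max(r + h for (r, _), (h, _) in zip(offsets, shapes))  # max_i b_i
    c1 = max(c + w for (_, c), (_, w) in zip(offsets, shapes))
    return (r0, c0), (r1, c1)

# Two frames like P1 and P2 in the figure: 100x100 each, shifted diagonally
print(union_domain([(0, 0), (30, 40)], [(100, 100), (100, 100)]))
# -> ((0, 0), (130, 140)): the black background domain where N starts at 0
```

That box would then serve as the canvas for the sum-and-count loop from my earlier sketch.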

But that would hurt speed, on the other hand.
And when you have 200,000 images, that becomes very slow, I guess.

I guess only if they are vastly misaligned. If they match perfectly, then everything stays the same, except that N becomes a local number, and that should be doable.

Then again, if the user has vastly misaligned data, it’s kinda the user’s problem…

You could also ask the question in reverse, from the perspective of a registered image:

“Is my pixel x going to be in the final image?” This is easy, as the final image is rectangular.

That is, rather than starting from the reference image and asking: “Where does this pixel come from?”
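As a tiny sketch of that reverse test (transform stands for whatever mapping registration produced for that image; the names are mine):

```python
def lands_in_final(x, y, transform, final_w, final_h):
    """Does pixel (x, y) of a registered image end up in the final image?"""
    # transform maps image coordinates into final-image coordinates;
    # the final image is an axis-aligned rectangle, so the test is trivial
    u, v = transform(x, y)
    return 0 <= u < final_w and 0 <= v < final_h
```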

The problem of finding the smallest image that contains all images is not the same as the problem of finding how many images take part in each stacked pixel; they can be solved independently. But indeed you can solve them together. It is just a different way of managing images, which, by the way, we could handle, in the sense that we would have all the required information, if rotation were not used. We just need to implement it.
But it is not a very important feature, in fact: most of the time people tend to get their images relatively well framed; otherwise you lose the benefit of increasing the signal-to-noise ratio over the largest part of the image.
