Stacking sets of images within a sequence in Siril

The trouble is that I have ~900 images in a sequence. I want to sum stack every set of 5 images within that sequence, in order. For example: I want to sum stack images 1-5, 6-10, 11-15, etc… How do I go about doing this? Would I need some kind of script?

Yes. You can find an example on our tutorial pages, especially the one about comets.
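If you prefer to prepare the batches yourself before handing them to Siril, here is a rough sketch in Python. It only shows the grouping logic and writes one placeholder script file per batch of 5 frames; the actual Siril commands to put in each script (and the `siril-cli -s script.ssf` invocation to run them) depend on your Siril version, so treat everything beyond the chunking as an assumption to adapt.

```python
# Sketch: split an ordered set of FITS frames into batches of 5 and write
# one script file per batch, to be filled with your Siril stacking commands.
# ASSUMPTIONS: frames are individual ".fit" files sorted by filename; the
# per-batch script body is a placeholder you must adapt to your Siril version.
import os
import glob

def chunk(items, size):
    """Split a list into consecutive groups of `size` (last may be shorter)."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def write_batch_scripts(frame_dir, out_dir, batch_size=5):
    frames = sorted(glob.glob(os.path.join(frame_dir, "*.fit")))
    os.makedirs(out_dir, exist_ok=True)
    scripts = []
    for n, batch in enumerate(chunk(frames, batch_size), start=1):
        path = os.path.join(out_dir, f"batch_{n:03d}.ssf")
        with open(path, "w") as f:
            # Placeholder body: list the frames belonging to this batch.
            # Replace with real Siril commands (convert/stack) for your setup.
            f.write("# frames: " + ", ".join(os.path.basename(b) for b in batch) + "\n")
        scripts.append(path)
    return scripts

# 900 frames in groups of 5 gives 180 batches:
print(len(chunk(list(range(900)), 5)))  # 180
```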


Hello,

I had the same objective, and after taking a look at this tutorial (very nice!) I used the “superstack.sh” shell script to do it. The problem I have is that the output images are always between 0 and 1. I tried ‘sum’ and ‘mean’ (I want to perform photometry on the stacked images) and every combination of the “stack” command’s normalization parameters (-nonorm, etc.), but none of them works for me. The maximum value in my images is around 37000 counts, and I would like to obtain a stacked image with a similar dynamic range. How can I do it?

Thanks a lot in advance.

Xavier

You have values in the range [0, 1] because by default Siril uses the 32-bit float format. However, I don’t see a problem with using this format for photometry.

I would prefer to keep the original ADU values because that allows me to convert them to electrons (which in turn allows me to perform some statistical analysis). Seeing ADU values is also helpful for getting an idea of what is going on. Even using a float format, it would be good to keep the original dynamic range (what matters is the precision, i.e. the number of significant digits).

Using the [0, 1] normalized images, I have problems with error estimation in AstroImageJ (the calculated error bars are HUGE). If I perform a rescaling, the error bars go back to normal. Hence, I guess AstroImageJ has some problem with [0, 1] normalized images.

You can set your preferences to stay in 16 bits then. But you will lose some precision.

Yes, I would lose precision. Numerical processing is better performed in float or double.

Please, could you include an option to generate output images with the original dynamic range? I guess you normalize according to the bit depth of the FITS file, so it should just be a matter of multiplying the normalized image by 2^n_bits.

In addition, it would be useful to let the user choose how the normalization is performed. I have seen that mean and sum do it differently: one of them (I don’t remember which) rescales the maximum value of the image to 1, and the other rescales 2^n_bits to 1.

Best,

Xavier

Hi,

sum and mean are not really meant for the same use case, and they do not work the same way internally (I mean in how the computations are parallelized):

  • sum is mainly intended for stacking very large sequences for lucky imaging. There is no input renormalization, and the output renormalization is forced, to make the most of the dynamic range present (stacking is done in 32-bit and then converted back to the user’s preferred bit depth; if that is 16-bit, without this final “renorm stretch” we would lose detail when going back to the original range).
  • mean gives more flexibility. You can normalize the images before stacking and renormalize the output stack. You can also use no rejection (which should produce exactly the same image as sum if you add -output_norm to the command). However, if you need to preserve the original range, just don’t specify -output_norm and it should behave as intended.
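Putting that together, a mean stack that preserves the input range might look like the lines below. This is only a sketch based on the options mentioned in this thread (-nonorm, no rejection, no -output_norm); the sequence name is a placeholder, and the exact positional syntax of the rejection argument can vary between Siril versions, so check the stack command reference for your install.

```text
# Hypothetical Siril script lines -- adapt to your sequence name and version.
# mean stack, no rejection, no input normalization, no -output_norm,
# so the output keeps the input's relative range:
stack r_pp_light mean -nonorm -out=result_mean

# sum stack for comparison (output is renormalized to fill the full range):
stack r_pp_light sum -out=result_sum
```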

There are still some slight inconsistencies if there are negative values in the input images (this should not happen if you work in 16-bit); they will be corrected in the next release.

Cheers,

Cecile

Hi Cecile,

Thanks for your clarification. Yes, “sum” renormalizes the output image from 0 to 1 (where the value 1 corresponds to the pixel with the highest value).

My intended use of stacking is photometry. That is, I want to stack several images in order to obtain an image of “longer” exposure. It is important for me to have the ADU values (or the equivalent number of electrons) in order to analyse how they change over time between images (not to generate a light curve, but for other statistical analysis). For this purpose “sum” is very inconvenient: although I can obtain the correct light curve (relative values between stars in the same field are preserved), I cannot compare different images with each other (nor do many other analyses, like SNR evolution, number of counts, errors, etc.). For this reason I used “mean”.

When I use “mean” with -nonorm, with “none” (to avoid any rejection) and without -output_norm, the output image lies between 0 and a value lower than 1. At least it preserves the relative dynamic range of the original image (that is, if a particular pixel has the value 32768 in a 16-bit image, that pixel’s value in the output of “mean” is 0.5). I can live with that, but the output images have no physical units (no ADUs, no electrons, nothing). At least I can multiply the values by 2^16 to recover ADUs, but it is an extra step.
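That extra step is a one-liner. A sketch in Python, assuming (as in the 32768 → 0.5 example above) that the normalization divisor is 2^bitdepth; if your version of Siril divides by 2^16 − 1 instead, the factor would differ very slightly, so verify against a known pixel first:

```python
# Convert Siril's normalized 32-bit float pixel values back to 16-bit ADU.
# ASSUMPTION: the normalization divisor is 2^bitdepth, consistent with the
# example above (0.5 -> 32768 ADU).
def normalized_to_adu(value, bitdepth=16):
    return value * (2 ** bitdepth)

print(normalized_to_adu(0.5))  # 32768.0
```

With astropy.io.fits, the same factor can be applied to the image array of the stacked FITS before loading it into AstroImageJ.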

For this last reason, I asked whether you could introduce an option so that the user can choose to generate output images that keep the same physical units as the originals (e.g. ADUs). I think that is more intuitive than working between 0 and 1.

Thanks for your help,

Xavier