Siril output normalisation?

Hi there, I’m very much a newcomer to AP in general and learning the basics with Siril, which I’m really enjoying :slight_smile: But I have a question: in the stacking tab there’s a little tick box for output normalisation, but I can’t find any information about what it does apart from the mouse-over warning not to use it for “master stacking”.

When would it be beneficial to use this, and why? Does it improve or degrade the data? And when it says “master stacking”, does that mean master darks and biases (not flats)?
Thanks,
Mark

That normalizes the output to the max value. In other words, it stretches the image a bit.
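To illustrate the arithmetic being described, here is a minimal sketch in plain Python with hypothetical pixel values (this is not Siril’s actual code, just the idea): every pixel is divided by the brightest one, so the peak lands at exactly 1.0 and the rest of the histogram is rescaled upward with it.

```python
# Hypothetical pixel values from a stacked image (illustration only).
pixels = [0.02, 0.10, 0.35, 0.80]

# Normalising to the maximum: divide every pixel by the brightest one,
# so the peak of the image becomes exactly 1.0.
peak = max(pixels)
normalised = [p / peak for p in pixels]

print(normalised)  # brightest pixel is now 1.0
```

This is why it reads as a slight stretch: the relative pixel values are unchanged, but everything is scaled up until the maximum hits full range.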

Hi Lock042, thanks for that, but when would you and when wouldn’t you want that? Am I right in thinking that the photometric colour calibration should automatically adjust levels to get the correct balance between layers? And if I’ve used the same exposures for all colours, would normalising their outputs make the image less accurate? Maybe accuracy isn’t really important in AP if all you want is a good-looking image? Sorry, this is all pretty new to me and a lot to understand.
Mark

Sorry for resurrecting this thread. I have the exact same question. The tooltip says not to use normalisation for master stacking, yet all the .ssf scripts shipped with Siril pass the -output_norm parameter when generating the master light (though not for the calibration masters).
Additionally, the tooltip says that the data is normalized to the [0, 1] range, not against the maximum.

Could you please clarify?

m

When we talk about masters, we mean master biases, flats and darks. Generally we do not speak of master lights.

The output will be given in the range [0, 1] when you work in 32-bit format. That means the maximum will be 1.0.
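For anyone following along, here is a sketch of the usual convention for mapping integer data into that range (an illustration, not Siril’s actual source): 16-bit ADU values are divided by the 16-bit maximum, 65535, so the full integer scale lands in [0, 1].

```python
# Hypothetical 16-bit integer pixel values (ADU).
adu_values = [0, 32768, 65535]

# Conventional 16-bit -> float mapping: divide by the 16-bit maximum,
# so 0 maps to 0.0 and 65535 maps to 1.0.
as_float = [v / 65535 for v in adu_values]

print(as_float)  # spans 0.0 to 1.0
```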

Thanks a lot for the clarification.
I may take the risk and ask something that is probably off-topic: if the data is normalized to the range [0, 1], how can a third-party tool know that the maximum value a pixel can take is 1.0 rather than the maximum value a float32 is able to represent? Sorry for going off-topic!

Siril’s 32-bit pixel representation is ALWAYS floating point, never integer. So as long as we agree that this representation is in the range [0, 1], there’s no confusion.
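A small demonstration of why this has to be a convention rather than a property of the file format (plain Python, just for illustration): float32 itself can represent values far beyond 1.0, so nothing in the number format forces pixels into [0, 1]; the software on both sides simply agrees on that range.

```python
import struct

# Round-trip the largest finite float32 through a real 32-bit encoding.
# Its magnitude (~3.4e38) shows that float32 imposes no [0, 1] limit;
# the [0, 1] range is an agreement between the programs, not the format.
max_float32 = struct.unpack('f', struct.pack('f', 3.4028235e38))[0]

print(max_float32 > 1.0)  # True: float32 pixels could exceed 1.0
```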

thanks for the reply @lock042.
Sorry, I think I’m not explaining this correctly, probably due to a lack of knowledge or a misinterpretation on my part. When the FITS file is saved by Siril, it is normalized to the range [0, 1] if the -output_norm parameter is given. When the file is opened with another piece of software (yes, I’m talking about PI), the statistics in normalized form are displayed correctly. How does that software know the data is already normalized, so that it avoids normalizing it again when presenting the statistics in normalized form?

Sorry again for the off-topic question.

PS: if you consider it appropriate, I can open a new thread, or even close this one, since I know I’m bringing in software not related to Siril.