When creating a new sequence with 32-bit FITS files, Siril automatically normalises the input data to the interval [0, 1]. When we look at the statistics of each converted image of the sequence, the real values displayed are the normalized [0, 1] values multiplied by 65535!

This is not correct. As a consequence, the pre-processing of flats is wrong and leads to incorrect master flats containing negative values!

Take the example of a sequence created from biases with the following input statistics: Mean = 264, Min = 200, Max = 333. The normalized values computed by Siril are: Mean = 0.48, Min = 0, Max = 1. This is correct, but the values now displayed as real numbers are: Mean = 31456, Min = 0, Max = 65535.
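The per-image min-max normalisation described above can be reproduced with a short Python sketch (the `normalize` function is my own naming for illustration, not Siril code):

```python
# Sketch of the per-image min-max normalisation Siril appears to apply
# when converting a sequence to 32-bit (illustration only, not Siril code).

def normalize(value, lo, hi):
    """Map a raw ADU value into [0, 1] using the image min and max."""
    return (value - lo) / (hi - lo)

# Bias statistics from the report: mean = 264, min = 200, max = 333
lo, hi = 200.0, 333.0
norm_mean = normalize(264.0, lo, hi)
print(round(norm_mean, 2))       # ~0.48, as displayed by Siril
# Rescaled for display, the mean lands near 31500 instead of the
# true 264 ADU, which is the confusing behaviour reported here:
print(round(norm_mean * 65535))
```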

This is an issue, as the true mean value of the bias is 264, not 31456.

Now consider the sequence of flats with the following input data: Mean = 47907, Min = 251, Max = 57103. The normalisation computed during sequence conversion leads to the normalized values: Mean = 0.83, Min = 0, Max = 1.

After pre-processing of the flats (i.e. the Flat - Bias operation), we obtain the following values for a pp_flat: Mean = 0.36, Min = -0.55, Max = 0.593!

In fact, it seems that Siril subtracted the normalized values (0.83 - 0.48), multiplied by 65535, instead of the true initial real values (47907 - 264). So all the pp_flats are totally wrong, and so are the master flats after stacking.
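The mismatch can be demonstrated numerically with a toy sketch, assuming (my reading of the behaviour above) that each sequence is normalised independently by its own min/max before the subtraction:

```python
# Toy demonstration of why subtracting independently-normalised frames
# differs from subtracting the raw ADU values (illustration only).

flat_mean, bias_mean = 47907.0, 264.0

# Correct calibration works on the raw ADU values:
correct = flat_mean - bias_mean        # 47643.0 ADU

# But each sequence was normalised with its OWN min/max:
flat_norm = (flat_mean - 251.0) / (57103.0 - 251.0)  # ~0.84
bias_norm = (bias_mean - 200.0) / (333.0 - 200.0)    # ~0.48
wrong = (flat_norm - bias_norm) * 65535.0            # ~23400 "ADU"

# The two results disagree by more than 20000 ADU:
print(correct, wrong)
```

Because the bias frame spans only 200-333 ADU while the flat spans 251-57103 ADU, the two [0, 1] scales are completely incommensurate, which is why the pp_flats even contain negative pixels.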

This curious behavior seems to be new in 1.2.0-rc1; pre-processing results are fully correct with 1.0.6.