Thanks a lot for the clarification.
I'll take the risk and ask something that is probably off-topic. If the data is normalized to the range [0, 1], how can a 3rd-party tool know that the maximum value a pixel can take is 1.0, rather than the maximum value a float32 is able to represent? Sorry for such an off-topic question!
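To make what I mean concrete, here is a minimal NumPy sketch (the array names are just for illustration): the float32 dtype itself carries no range information, so nothing in the buffer distinguishes data normalized to [0, 1] from data using the full float32 range.

```python
import numpy as np

# An 8-bit image has an implied range of [0, 255] baked into its dtype.
img_uint8 = np.array([[0, 128, 255]], dtype=np.uint8)

# After normalization the values live in [0, 1] ...
img_float = img_uint8.astype(np.float32) / 255.0

# ... but the dtype only says "float32", not "maximum is 1.0".
print(img_float.dtype, img_float.max())   # float32 1.0
print(np.finfo(np.float32).max)           # ~3.4e38, the type's actual limit
```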