What’s the effective range for integers and floats here without precision loss? I was looking for an easy way to interpret a double as a float, which of course then gets seen as a double again in the math parser. I guess I could use a 1,1,1,1 image to send my double value in, then copy the value back from that image, but I would rather not do that.
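Something like this is what I mean by the image workaround (a sketch, the command and variable names are made up):

double_as_float :
  my_double=3.14159265358979
  1,1,1,1,$my_double   # Image buffers are 32-bit floats, so the stored value is truncated.
  my_float={i}         # Read the float-truncated value back.
  rm.
  echo "as float: "$my_float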
EDIT:
I guess this will do?
const float_max = 1<<24;
to_float(double_value) = round(double_value*float_max)/float_max;
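And to compare it against what a real float image buffer stores (a sketch; the command name test_to_float and the macro name to_float are mine):

test_to_float :
  1   # 1x1 image: its buffer is an actual 32-bit float.
  eval "
    const float_max = 1<<24;
    to_float(x) = round(x*float_max)/float_max;
    i[0] = pi;                  # Stored value gets truncated to float.
    print(to_float(pi), i[0]);  # Emulated conversion vs. actual float storage.
  "
  rm.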
In theory, yes: a 32-bit IEEE 754 float can store exact values of unsigned integers up to 1<<24 = 16,777,216 (included).
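You can check that limit directly on a float image buffer, e.g. with this throwaway test (command name foo_check is mine):

foo_check :
  1,1,1,1,16777216   # 1<<24 is stored exactly in the float buffer.
  f. "i+1"           # 16,777,217 has no exact float representation...
  echo {i}           # ...so this prints 16777216 again.
  rm.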
In practice, I’ve noticed that with aggressive optimization flags enabled (such as -ffast-math), this is not true anymore, so the function ui2f() encodes an unsigned integer as a negative float as soon as it reaches 1<<19 = 524,288 (for smaller integers, it actually does nothing).
Now, to find the limit of the ui2f() representation, here’s a simple piece of code that computes it:
foo :
  1   # Dummy 1x1 image, used as the float storage to test against.
  eval "
    val = 1;
    do (
      i[0] = ui2f(val);   # Encode the integer and store it as a float.
      val2 = f2ui(i[0]);  # Read it back and decode.
      val2!=val?(         # Round-trip failed: print the pair and stop.
        print(val,val2);
        break();
      );
      val+=1;
    )"
It’s a bit long to run, since it starts from 1, but at the end it displays (for me):
$ gmic foo
[gmic]./ Start G'MIC interpreter (v.3.5.4).
[gmic_math_parser] val = (uninitialized) (mem[35]: scalar)
[gmic_math_parser] val2 = (uninitialized) (mem[38]: scalar)
[gmic_math_parser] val = 1065353217
[gmic_math_parser] val2 = 1069547521
[gmic]./ Display image [0] = '[unnamed]'.
which means the biggest integer that survives the ui2f()/f2ui() round-trip is 1,065,353,216: the printed val = 1,065,353,217 is the first value for which decoding fails. That limit is almost 1<<30 = 1,073,741,824.
This is definitely larger than 16,777,216!
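For the record, once you know the limit, you can re-check just the two values around it (same kind of throwaway command, reusing ui2f()/f2ui()):

foo_limit :
  1
  eval "
    i[0] = ui2f(1065353216); print(f2ui(i[0]));  # Last value that survives.
    i[0] = ui2f(1065353217); print(f2ui(i[0]));  # First value that does not.
  "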