Description
Casting from an integer type to a floating point type leaves the rounding behavior explicitly unspecified. Is there (or are there plans for) some way of deterministically turning an int into a float regardless of the target architecture? I really don't want `9007199254740993u64 as f64` to be converted to `9007199254740992.0` on one machine and `9007199254740994.0` on another (the example u64 is 2^53 + 1, the smallest natural number that is not exactly representable as an f64).
Options could be to either use the floating point environment's rounding mode (interpret the integer as a real number and round it to a representable float according to that mode, just like the semantics of all IEEE 754 float operations), or to support explicit rounding modes.
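For reference, a conversion with one fixed rounding rule can already be written in user code. Below is a minimal sketch of a u64 → f64 conversion that always rounds to nearest, ties to even, regardless of what `as` does on the target; the helper name `u64_to_f64_rne` is made up for illustration and is not an existing std API.

```rust
/// Sketch: convert a u64 to f64 with round-to-nearest-even,
/// independent of whatever `as` does on the target.
/// (`u64_to_f64_rne` is a hypothetical name, not an existing API.)
fn u64_to_f64_rne(x: u64) -> f64 {
    if x == 0 {
        return 0.0;
    }
    let n = 63 - x.leading_zeros(); // index of the most significant set bit
    if n <= 52 {
        // The value fits in the 53-bit significand, so the cast is exact
        // and no rounding mode can change the result.
        return x as f64;
    }
    let shift = n - 52;                   // number of low bits that do not fit
    let mantissa = x >> shift;            // truncated 53-bit significand
    let rest = x & ((1u64 << shift) - 1); // discarded bits
    let halfway = 1u64 << (shift - 1);
    let rounded = if rest > halfway || (rest == halfway && (mantissa & 1) == 1) {
        mantissa + 1 // round up (also covers the ties-to-even case)
    } else {
        mantissa
    };
    // 2^shift built exactly from its bit pattern (biased exponent, zero mantissa).
    let scale = f64::from_bits((1023 + shift as u64) << 52);
    // Both factors are exactly representable, so the product is exact;
    // a carry up to 2^53 in `rounded` is also handled exactly.
    (rounded as f64) * scale
}

fn main() {
    assert_eq!(u64_to_f64_rne(9_007_199_254_740_993), 9_007_199_254_740_992.0);
    assert_eq!(u64_to_f64_rne(9_007_199_254_740_995), 9_007_199_254_740_996.0);
}
```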
Apologies if such a feature already exists and I just didn't find it.