Why is the precision of a float only about 6 digits after the decimal point, while the precision of a double is about 15 digits? Can anyone explain why?
I know that the integer format differs between big-endian and little-endian machines. Is the same true for the floating-point format (IEEE 754)?