Does a single floating-point operation calculated at higher precision and immediately truncated always produce an identical result?

Does a single floating-point operation (like a+b, a-b, a*b or a/b) calculated at higher precision (80 bits) and immediately truncated (to 32 bits) always produce a result identical to the same calculation performed at the original type's precision (32 bits)?

Or could the least significant bit of the result differ? Why?

EDIT: Part of an example from this blog post:

float tmp;  // 32 bit precision temporary variable
push a;     // converts 32 to 64 bit
push b;     // converts 32 to 64 bit
multiply;   // 64 bit computation
pop tmp;    // converts result to 32 bits

The author of this example explains the pseudocode as follows:

Even though the multiply and add instructions are using 64 bit internal precision, the results are immediately converted back to 32 bit format, so this does not affect the result.
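
In C# terms, that sequence corresponds roughly to the following sketch (an illustration of mine, not code from the blog post):

// Rough C# equivalent of the pseudocode above:
float a = 1.1f;
float b = 2.2f;
double wide = (double)a * (double)b; // both operands widened, multiply done at 64-bit precision
float tmp = (float)wide;             // result immediately narrowed back to 32 bits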

So what I am asking is: is this always true? Will a single operation like this always produce a result that is identical down to the last bit, no matter the platform?

I am programming in C#, where we have no control over the precision at which floating-point operations are performed.

From C# specification:

Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an “extended” or “long double” floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type.

And I need to know whether single operations on floating-point values (like the C# example below) are deterministic.

double a = 2.5d;
double b = 0.1d;
myClassInstance.someDoubleField = a*b; // value should be converted out of extended precision 

So is this someDoubleField value going to be identical on all platforms?
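
One thing worth noting: an explicit cast to float or double is commonly understood to force the value to be narrowed to exactly that precision, since the compiler emits an explicit conversion for it. Assuming that behavior, a sketch like this pins the stored value down even on hardware that computes in an extended format:

double a = 2.5d;
double b = 0.1d;
// The explicit cast forces the product down to true 64-bit precision
// before it is used, even if the multiply itself ran in an extended format:
double product = (double)(a * b);
Console.WriteLine(product); // prints 0.25 (the exact product happens to round to exactly 0.25)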



Solution 1:[1]

Yes, it's established in this paper:

Samuel A. Figueroa, "When is double rounding innocuous?", ACM SIGNUM Newsletter, Volume 30, Issue 3, July 1995. doi:10.1145/221332.221334

The main result is that if the input type has a p-bit significand, and the significand of the computation type is at least 2p+2 bits wide, then the elementary operations +, -, *, / and sqrt will all be correctly rounded when truncated back to the input type.

An IEEE 754 binary32 number (i.e. the typical C float type) has a 24-bit significand, so it is in fact sufficient to use binary64 (i.e. the typical C double), whose 53-bit significand satisfies 53 ≥ 2×24+2 = 50. (The 80-bit x87 extended format, with its 64-bit significand, satisfies the condition for binary32 inputs as well.) In fact, this is a pretty common trick used by JavaScript compilers to make use of binary32 operations when the language itself only has a binary64 type.
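
As an illustration of that trick (a minimal sketch of mine, not code from the paper), a correctly rounded binary32 addition can be emulated using only binary64 arithmetic; the double rounding below is harmless precisely because 53 ≥ 2×24+2:

// Emulates a correctly rounded binary32 (float) addition using only
// binary64 (double) arithmetic. The exact sum can need far more than 53 bits,
// so two roundings happen: first to double's 53-bit significand, then to
// float's 24-bit significand. Since 53 >= 2*24 + 2, Figueroa's result
// guarantees this equals a single correctly rounded binary32 addition.
float AddBinary32(float a, float b)
{
    double wide = (double)a + (double)b; // first rounding (to 53 bits)
    return (float)wide;                  // second rounding (to 24 bits)
}

Multiplication is even simpler: the exact product of two 24-bit significands fits in 48 bits, so (float)((double)a * (double)b) involves only one rounding in the first place.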

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

[1] Solution 1 by Simon Byrne, Stack Overflow.