What is the difference between using INTXX_C macros and performing a type cast on literals?

For example, this code is broken (I've just fixed it in the actual code):

uint64_t a = 1 << 60;  /* broken: 1 has type int, so the shift is done in (typically 32-bit) int */

It can be fixed as:

uint64_t a = (uint64_t)1 << 60;

but then this crossed my mind:

uint64_t a = UINT64_C(1) << 60;

I know that UINT64_C(1) is a macro that usually expands to 1ul on 64-bit systems, but then what makes it different from just doing a type cast?



Solution 1:[1]

(uint64_t)1 is formally an int value 1 cast to uint64_t, whereas 1ul is a constant 1 of type unsigned long, which is probably the same as uint64_t on a 64-bit system. Since you are dealing with constants, all calculations are done by the compiler and the result is the same.
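A quick way to convince yourself that both spellings fold to the same 64-bit constant at compile time is a static assertion (C11; a minimal check written for this answer):

#include <stdint.h>
#include <assert.h>

/* Both forms are evaluated by the compiler and yield the same value. */
static_assert(((uint64_t)1 << 60) == (UINT64_C(1) << 60),
              "cast and macro produce the same constant");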

The macro is a portable way to attach the correct suffix to a constant (literal) of type uint64_t. The suffix appended by the macro (ul here, but system specific) can be used on literal constants only.

The cast (uint64_t) can be used for both constant and variable values. With a constant, it will have the same effect as the suffix or suffix-adding macro, whereas with a variable of a different type it may perform a truncation or extension of the value (e.g., fill the higher bits with 0 when changing from 32 bits to 64 bits).
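For example, a minimal sketch (function and variable names invented here): the suffix exists only for literals, so once the operand is a variable, the cast is the only option:

#include <stdint.h>

void demo(void)
{
    uint32_t x = 5;                  /* some 32-bit variable */
    uint64_t a = (uint64_t)x << 32;  /* the cast zero-extends x to 64 bits
                                        before the shift */
    (void)a;
    /* There is no suffix equivalent for x: suffixes attach to literals only. */
}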

Whether to use UINT64_C(1) or (uint64_t)1 is a matter of taste. The macro makes it a bit more clear that you are dealing with a constant.

As mentioned in a comment, 1ul is a 32-bit constant on Windows systems, not a uint64_t, because unsigned long is only 32 bits wide there. I expect the macro UINT64_C to append the platform-specific suffix corresponding to uint64_t, so it might append uLL in this case. See also https://stackoverflow.com/a/52490273/10622916.
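For illustration, on such a platform the definitions could plausibly look like this (the exact spelling is implementation specific; this is a hypothetical sketch, not any particular vendor's header):

/* Hypothetical <stdint.h> excerpt for an LLP64 system (e.g. Windows),
   where unsigned long is only 32 bits, so the 64-bit macro needs ull: */
#define UINT32_C(x)  (x ## u)
#define UINT64_C(x)  (x ## ull)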

Solution 2:[2]

There is no obvious difference or advantage; these macros are largely redundant. There are some minor, subtle differences between the cast and the macro:

  • (uintn_t)1 might be cumbersome to use for preprocessor purposes, whereas UINTN_C(1) expands into a single pp token.

  • The resulting type of UINTN_C(...) is actually uint_leastn_t, not uintn_t, so it is not necessarily the type you expected (a probe for this is sketched after the list).

  • Static analysers for coding standards like MISRA-C might moan if you type 1 rather than 1u in your code, since shifting signed integers isn't a brilliant idea regardless of their size.
    (uint64_t)1u is MISRA compliant, UINT64_C(1) might not be, or at least the analyser won't be able to tell, since it can't expand pp tokens the way a compiler can. And UINT64_C(1u) will likely not work, since the macro implementation probably looks something like this:

    #define UINT64_C(n) ((uint_least64_t) n ## ull)
    // BAD: UINT64_C(1u) pastes 1u ## ull into the invalid token 1uull
    
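Regarding the second point above, here is a quick probe of the actual type (a C11 _Generic sketch written for this answer; on most platforms uint_least64_t and uint64_t are the same type, so this typically reports a match, but the standard does not guarantee it):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The standard only promises that UINT64_C yields uint_least64_t. */
    const char *t = _Generic(UINT64_C(1),
                             uint64_t: "uint64_t",
                             default:  "some other unsigned type");
    printf("UINT64_C(1) has type: %s\n", t);
    return 0;
}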

In general, I would recommend using an explicit cast, or better yet, wrapping all of this inside a named constant:

#define MY_BIT ( (uint64_t)1u << 60 )
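A hypothetical usage (assuming <stdint.h> and the macro above); because MY_BIT already has type uint64_t, even ~MY_BIT keeps all 64 bits:

void toggle_demo(void)
{
    uint64_t flags = 0;
    flags |= MY_BIT;     /* set bit 60 */
    flags &= ~MY_BIT;    /* clear it again; ~ operates on the full 64-bit value */
    (void)flags;
}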

Solution 3:[3]

UINT64_C(1) produces a single token via token pasting, whereas ((uint64_t)1) is a constant expression with the same value.

They can be used interchangeably in the sample code posted, but not in preprocessor directives such as #if expressions.

XXX_C macros should be used to define constants that can be used in #if expressions. They are only needed if the constant must have a specific type; otherwise just spelling the constant in decimal or hexadecimal without a suffix is sufficient.
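For example, a sketch (macro name and values invented for this illustration):

#include <stdint.h>

#define MAX_OFFSET UINT64_C(0x100000000)   /* 2^32: does not fit in 32 bits */

#if MAX_OFFSET > 0xFFFFFFFF   /* valid: the macro expands to a plain pp token */
/* wide-offset code path */
#endif

/* A cast version such as ((uint64_t)0x100000000) could not be used here,
   because casts are not allowed in #if expressions. */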

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

[1] Solution 1
[2] Solution 2
[3] Solution 3: chqrlie