How to store a *signed* 24 bit int into another variable?

I need to encode a 24 bit integer into the end of a 32 bit int.

(the first byte contains other data; the other three bytes are free for the 24 bit int)

I already have a SET_BYTE macro, and I can successfully do the following for unsigned 24 bit values:

SET_BYTE(DEST, START_BYTE_INDEX,   (uint8_t)(VALUE)); 
SET_BYTE(DEST, START_BYTE_INDEX+1, (uint8_t)(VALUE >> 8)); 
SET_BYTE(DEST, START_BYTE_INDEX+2, (uint8_t)(VALUE >> 16)); 

What I'm stuck on is how to modify this approach to work for signed 24 bit integers.


If I attempt to store the value -22 with the above (modified to use int8_t, obviously), for example, I get the following byte values:

-22
-1
-1

which read back as these (if I do <<0, <<8 and <<16 when reading):

-22
-256
-65536
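
I think this happens because each byte, read back through int8_t, sign-extends to a full negative int before it is shifted. For example (my own minimal check):

#include <cstdint>
#include <cstdio>

int main() {
    uint8_t stored = 0xFF;            // how byte 1 of -22 actually sits in memory
    int as_signed = (int8_t)stored;   // sign-extends to -1
    // -1 scaled by 256 is -256, matching the << 8 read-back above
    printf("%d %d\n", as_signed, as_signed * 256);  // prints: -1 -256
    return 0;
}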

I'm assuming I just need to avoid these shifts entirely for signed values, but I'm not sure what the correct approach is.



Solution 1:[1]

If you need a 32-bit structure (as you said, the first byte holds other data), use struct bit-fields:

struct Packed
{
    uint8_t byte;
    int32_t smaller : 24;
};
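
For example, something like this should round-trip a negative value (a quick sketch; the exact bit layout is implementation-defined, but mainstream compilers treat the int32_t : 24 bit-field as signed and print -22):

#include <cstdint>
#include <cstdio>

struct Packed            // same definition as above
{
    uint8_t byte;
    int32_t smaller : 24;
};

int main() {
    Packed p{};
    p.byte = 0xAB;       // the "other data" byte
    p.smaller = -22;     // the compiler handles the 24-bit sign for you
    printf("%02X %d\n", (unsigned)p.byte, (int)p.smaller);  // prints: AB -22
    return 0;
}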

Alternatively, you can abuse std::bitset:

#include <bitset>
#include <cassert>

struct fake_int24_t
{
    static_assert(sizeof(int) <= sizeof(unsigned long long), "");

    std::bitset<24> data;  // holds the value offset by 2^23 (excess-2^23 encoding)

    operator int() const {
        return static_cast<int>(data.to_ulong()) - (1 << 23);
    }

    fake_int24_t& operator=(int val) {
        assert(val >= -(1 << 23)); // valid int24 range: [-2^23, 2^23)
        assert(val < (1 << 23));

        data = std::bitset<24>(val + (1 << 23));

        return *this;
    }
};


sizeof(fake_int24_t) is not guaranteed to be 3, though (std::bitset<24> typically occupies at least 4 bytes), so you might as well just use a normal integer there.
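
A quick round trip with the wrapper, assuming the fake_int24_t definition above (including its headers) is in scope:

#include <cstdio>

int main() {
    fake_int24_t v;
    v = -22;                 // stored internally as -22 + 2^23
    printf("%d\n", (int)v);  // prints: -22
    return 0;
}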

Solution 2:[2]

Quoting the question: "to encode a 24 bit integer into the end of a 32 bit int."

OP's existing macros imply that the 24-bit unsigned type is stored little-endian - see below.
Let us assume the signed 24-bit type uses the same endianness and the common 2's complement encoding.

SET_BYTE(DEST, START_BYTE_INDEX,   (uint8_t)(VALUE)); 
SET_BYTE(DEST, START_BYTE_INDEX+1, (uint8_t)(VALUE >> 8)); 
...

Note: we do not know the endianness of the 32-bit int, nor its encoding (2's complement, 1's complement, or sign-magnitude). As it turns out, we do not need that information.

#include <stdint.h>

struct signed_24_bit {
  uint8_t other_data;
  uint8_t use_by_the_24_bit_int[3];
};

int decode_24_bit_integer(struct signed_24_bit x) {
  int32_t y = x.use_by_the_24_bit_int[0] // least significant byte of the int24
      + (x.use_by_the_24_bit_int[1] * 0x100)
      + (x.use_by_the_24_bit_int[2] * 0x10000);

  // If y > INT24_MAX, the value is negative: wrap it back into the negative range
  if (y > 0x7FFFFF) y -= 0x1000000;
  return y;
}

// Assume `x` is in range of `int24_t`
struct signed_24_bit encode_24_bit_integer(int x) {
  if (x < 0) x += 0x1000000; // map negatives to their 24-bit 2's complement pattern
  struct signed_24_bit y = { 0, { (uint8_t)x, (uint8_t)(x/0x100), (uint8_t)(x/0x10000) } };
  return y;
}
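
For example, round-tripping the question's -22 (assuming the two functions above are in the same translation unit):

#include <stdio.h>

int main(void) {
    struct signed_24_bit s = encode_24_bit_integer(-22);
    printf("bytes: %02X %02X %02X, decoded: %d\n",
        (unsigned)s.use_by_the_24_bit_int[0],
        (unsigned)s.use_by_the_24_bit_int[1],
        (unsigned)s.use_by_the_24_bit_int[2],
        decode_24_bit_integer(s));  // bytes: EA FF FF, decoded: -22
    return 0;
}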

Solution 3:[3]

I'd go with a sign-magnitude representation in the 24-bit part:

int value = -22;
unsigned long target = std::abs(value) & ((1 << 23) - 1);
if (value < 0)
    target |= 1 << 23;

To extract the value, just reverse the process:

int result = target & ((1 << 23) - 1);
if (target & (1 << 23))
    result = -result;

(caution: not tested)
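
Wrapped into two small helpers and checked against the question's -22 (the helper names are mine, a sketch of the same snippets rather than anything from the original answer):

#include <cstdio>
#include <cstdlib>

// sign-magnitude pack/unpack of a 24-bit value, per the snippets above
unsigned long pack24(int value) {
    unsigned long target = std::abs(value) & ((1 << 23) - 1);
    if (value < 0)
        target |= 1 << 23;
    return target;
}

int unpack24(unsigned long target) {
    int result = target & ((1 << 23) - 1);
    if (target & (1 << 23))
        result = -result;
    return result;
}

int main() {
    unsigned long packed = pack24(-22);
    printf("%06lX -> %d\n", packed, unpack24(packed));  // prints: 800016 -> -22
    return 0;
}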

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: (no author listed)
Solution 2: chux - Reinstate Monica
Solution 3: Pete Becker