What is Biased Notation?

I have read: "Like an unsigned int, but offset by −(2^(n−1) − 1), where n is the number of bits in the numeral. Aside: Technically we could choose any bias we please, but the choice presented here is extraordinarily common." - http://inst.eecs.berkeley.edu/~cs61c/sp14/disc/00/Disc0.pdf

However, I don't get what the point is. Can someone explain this to me with examples? Also, when should I use it, given other options like one's complement, sign and magnitude, and two's complement?

Solution 1:[1]

A "representation" is a way of encoding information so that it easy to extract details or inferences from the encoded information.

Most modern CPUs "represent" numbers using "two's complement notation". They do this because it is easy to design digital circuits that perform arithmetic on these values quickly (add, subtract, multiply, divide, ...). Two's complement also has the nice property that one can interpret the most significant bit either as a power of two (giving "unsigned numbers") or as a sign bit (giving signed numbers) without changing essentially any of the hardware used to implement the arithmetic.
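As a quick illustration of that dual reading, here is a minimal C sketch (the names are purely illustrative): the same bit pattern can be read as unsigned or signed, and the same adder serves both.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t bits = 0xFF;                      /* bit pattern 1111 1111 */

        /* Same bits, two readings: MSB as +2^7 gives unsigned 255;
           MSB as a sign bit gives two's-complement -1. */
        printf("unsigned: %d\n", bits);           /* 255 */
        printf("signed:   %d\n", (int8_t)bits);   /* -1, on a two's-complement machine */

        /* The adder hardware doesn't care which reading you use:
           255 + 1 wraps to 0 exactly as -1 + 1 does. */
        printf("sum:      %d\n", (uint8_t)(bits + 1));   /* 0 */
        return 0;
    }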

Older machines used other bases; e.g., quite common in the 60s were machines that represented numbers as sets of binary-coded-decimal digits stored in 4-bit addressable nibbles (the IBM 1620 and 1401 are examples of this). So, you can represent the same concept or value in different ways.

A bias just means that whatever representation you choose (for numbers), you have added a constant bias to that value. Presumably that is done to enable something to be done more effectively. I can't speak to −(2^(n−1) − 1) being "an extraordinarily common (bias)"; I do lots of assembly and C coding and pretty much never find a need to "bias" values.

However, there is a common example. Modern CPUs largely implement IEEE floating point, which stores a floating point number as sign, exponent, and mantissa fields. The exponent field encodes a power of two, roughly symmetric around zero, but stored with a bias of 2^(N−1) − 1 for an N-bit exponent (127 for the 8-bit exponent of a single-precision float).
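To make that concrete, here is a small C sketch that pulls the exponent field out of a single-precision float; the biased_exponent helper is mine, not part of any library:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Extract the 8-bit biased exponent field of an IEEE-754 single. */
    static int biased_exponent(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);   /* well-defined way to grab the bits */
        return (bits >> 23) & 0xFF;       /* bits 23..30 hold the exponent */
    }

    int main(void) {
        /* 1.0 = 1.0 * 2^0, so the stored field is 0 + 127 = 127. */
        printf("%d\n", biased_exponent(1.0f));   /* 127 */
        /* 8.0 = 1.0 * 2^3, so the stored field is 3 + 127 = 130. */
        printf("%d\n", biased_exponent(8.0f));   /* 130 */
        return 0;
    }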

This bias allows floating point values with the same sign to be compared for equal/less/greater using the machine's standard two's-complement integer instructions rather than special floating point instructions, which means that actual floating point compares can sometimes be avoided. (See http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm for the dark-corner details.) [Thanks to @Potatoswatter for noting the inaccuracy of my initial answer here, and making me go dig this out.]
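Here is a sketch of that trick for positive floats (float_bits is an illustrative helper, not a standard function). Because the biased exponent sits above the mantissa and grows monotonically with magnitude, the raw bit patterns of two positive, non-NaN floats order the same way the values do:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t float_bits(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        return bits;
    }

    int main(void) {
        float a = 1.5f, b = 2.25f;
        /* An ordinary integer compare orders the two positive floats. */
        printf("%d\n", float_bits(a) < float_bits(b));   /* 1 (true) */
        return 0;
    }

Note this only holds for same-sign, non-NaN values; negative floats order backwards as raw integers, which is one of the dark corners the linked article covers.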

Solution 2:[2]

Biased notation is a way of storing a range of values that doesn't start with zero.

Put simply, you take an existing representation that goes from zero to N, and then add a bias B to each number so it now goes from B to N+B.
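For example, here is a minimal C sketch of that arithmetic, with a 4-bit field and the common bias 2^(4−1) − 1 = 7 chosen purely for illustration:

    #include <stdio.h>

    #define BIAS 7   /* 2^(4-1) - 1 for a 4-bit field: stored 0..15 means -7..+8 */

    static unsigned encode(int value)    { return (unsigned)(value + BIAS); }
    static int      decode(unsigned raw) { return (int)raw - BIAS; }

    int main(void) {
        for (int v = -3; v <= 3; v++)
            printf("value %+d -> stored %u\n", v, encode(v));
        printf("stored 0 decodes to %d\n", decode(0));   /* -7 */
        return 0;
    }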

  • Floating-point exponents are stored with a bias to keep the dynamic range of the type "centered" on 1.
  • Excess-three encoding is a technique for simplifying decimal arithmetic using a bias of three.
  • Two's complement notation could be considered as biased notation with a bias of INT_MIN and the most-significant bit flipped.
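A short C sketch of that last point for 8-bit values, where the bias magnitude is 128 (the 8-bit analogue of |INT_MIN|): flipping the top bit of a two's-complement byte yields the excess-128 encoding of the same value.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        for (int v = -2; v <= 2; v++) {
            uint8_t twos   = (uint8_t)v;    /* two's-complement bit pattern */
            uint8_t biased = twos ^ 0x80;   /* flip the sign bit */
            /* biased == v + 128 for every in-range v */
            printf("v = %+d  two's = 0x%02X  biased = %3u\n",
                   v, (unsigned)twos, (unsigned)biased);
        }
        return 0;
    }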

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution sources:

  • Solution 1
  • Solution 2: Potatoswatter