Worst-case time complexity for adding two binary numbers using bit manipulation
I'm looking over the solution to adding two binary numbers using the bit manipulation approach. The inputs are two binary-number strings, and the expected output is the string for the binary addition result.
Here is Java code similar to the solution:
import java.math.BigInteger;

class BinaryAddition {
    public String addBinary(String a, String b) {
        BigInteger x = new BigInteger(a, 2);
        BigInteger y = new BigInteger(b, 2);
        BigInteger zero = new BigInteger("0", 2);
        BigInteger carry, answer;
        // Loop until no carry bits remain to propagate.
        while (y.compareTo(zero) != 0) {
            answer = x.xor(y);               // bitwise sum without the carry
            carry = x.and(y).shiftLeft(1);   // carry bits, shifted left one place
            x = answer;
            y = carry;
        }
        return x.toString(2);
    }
}
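For concreteness, here is a minimal, hypothetical driver for the class above (the Main class is my own, not part of the Leetcode solution); it exercises the 1111 + 1001 example discussed below:

public class Main {
    public static void main(String[] args) {
        BinaryAddition adder = new BinaryAddition();
        // 1111 (15) + 1001 (9) = 11000 (24)
        System.out.println(adder.addBinary("1111", "1001"));  // prints 11000
    }
}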
While the algorithm makes a lot of sense, I'm somehow arriving at a different worst-case time complexity than the O(M + N) stated in the Leetcode solution, where M and N refer to the lengths of the input strings.
In my opinion, it should be O(max(M, N)^2), since the while loop can run up to max(M, N) times, and each iteration can take max(M, N) operations (each BigInteger xor/and/shift touches every bit). For example, adding 1111 and 1001 takes 4 iterations of the while loop.
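To check the iteration count empirically, here is a rough instrumented sketch of the same loop (the IterationCount class and its iterations helper are my own names, not part of the original solution):

import java.math.BigInteger;

class IterationCount {
    // Instrumented copy of the carry-propagation loop: returns how many
    // times the while loop runs for the given binary-string inputs.
    static int iterations(String a, String b) {
        BigInteger x = new BigInteger(a, 2);
        BigInteger y = new BigInteger(b, 2);
        int count = 0;
        while (y.signum() != 0) {
            BigInteger answer = x.xor(y);
            BigInteger carry = x.and(y).shiftLeft(1);
            x = answer;
            y = carry;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(iterations("1111", "1001"));  // prints 4
    }
}

Running it on the 1111 + 1001 case prints 4, matching the count above.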
I'd appreciate your advice on this, or a pointer to where I might have gone wrong. Thanks.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow