The "Counting Bits" problem asks you to write a function that, given a non-negative integer `n`, returns an array `ans` of length `n + 1` such that for each `i` (where `0 ≤ i ≤ n`), `ans[i]` is the number of 1-bits (also known as the Hamming weight or population count) in the binary representation of `i`.
Constraints:

- `0 ≤ n ≤ 10^5`
- All numbers from `0` to `n` should be processed efficiently.
- The output array must have length `n + 1` with the correct counts for all numbers in range.
At first glance, a straightforward way to solve this problem is to individually count the number of 1-bits in the binary representation of every number from `0` to `n`. This could be done by repeatedly dividing by 2 and checking the remainder, or by using bitwise operations. However, this approach can be slow for large `n`, since each number might require up to `O(log n)` operations.
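A minimal sketch of this per-number counting (the function names here are illustrative, not part of the required solution):

```python
def count_ones(x: int) -> int:
    """Count 1-bits by repeatedly checking and removing the lowest bit."""
    count = 0
    while x:
        count += x & 1  # add 1 if the least significant bit is set
        x >>= 1         # discard the least significant bit
    return count

def count_bits_brute_force(n: int) -> list[int]:
    """Apply the per-number count to every value from 0 to n."""
    return [count_ones(i) for i in range(n + 1)]

print(count_bits_brute_force(5))  # [0, 1, 1, 2, 1, 2]
```

Each call to `count_ones` loops once per bit, which is where the `O(log n)` factor per number comes from.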
To optimize, we look for patterns or relationships between the bit counts of consecutive numbers. For example, is there a way to compute the bit count of `i` using the bit count of a smaller number? If so, we could build up the answer array efficiently using previously computed results, a technique known as dynamic programming.
The key insight is that the number of 1-bits in `i` is related to the number of 1-bits in `i // 2` (or `i >> 1`), since right-shifting removes the least significant bit. If `i` is even, its bit count is the same as that of `i // 2`; if `i` is odd, it is one more.
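This relationship can be spot-checked directly (a quick illustrative check, not part of the final solution):

```python
# For any i, the popcount of i equals the popcount of i >> 1
# plus the least significant bit (i & 1).
for i in [6, 7, 12, 13]:
    full = bin(i).count("1")        # 1-bits in i
    half = bin(i >> 1).count("1")   # 1-bits in i with the last bit dropped
    lsb = i & 1                     # the dropped bit itself
    assert full == half + lsb
    print(i, full, half + lsb)      # the last two columns always match
```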
We can use dynamic programming to solve this problem efficiently. The idea is:

1. Create an array `ans` of length `n + 1`, where `ans[0] = 0` (since 0 has no 1-bits).
2. For each `i` from 1 to `n`, the number of 1-bits in `i` is equal to the number of 1-bits in `i >> 1` (which is `i` divided by 2, discarding the least significant bit), plus 1 if the least significant bit of `i` is 1 (i.e., if `i % 2 == 1`). In code: `ans[i] = ans[i >> 1] + (i & 1)`.
3. Each `ans[i]` is computed in constant time, using a previously computed value.
This method takes advantage of the relationship between a number and its half in binary, and uses previously calculated results to avoid redundant computation.
Let's walk through the process for `n = 5`:

- Initialize `ans = [0, 0, 0, 0, 0, 0]`.
- For `i` from 1 to 5:
  - `i = 1`: `ans[1] = ans[0] + 1 = 0 + 1 = 1`
  - `i = 2`: `ans[2] = ans[1] + 0 = 1 + 0 = 1`
  - `i = 3`: `ans[3] = ans[1] + 1 = 1 + 1 = 2`
  - `i = 4`: `ans[4] = ans[2] + 0 = 1 + 0 = 1`
  - `i = 5`: `ans[5] = ans[2] + 1 = 1 + 1 = 2`
- The final answer is `[0, 1, 1, 2, 1, 2]`.
This matches the number of 1-bits in the binary representation of each number: `0` (0 ones), `1` (1), `10` (1), `11` (2), `100` (1), `101` (2).
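The recurrence can be cross-checked against an independent count using Python's built-in binary formatting (a sanity check, not required by the problem):

```python
def count_bits(n: int) -> list[int]:
    """DP solution: each entry reuses the entry for i >> 1."""
    ans = [0] * (n + 1)
    for i in range(1, n + 1):
        ans[i] = ans[i >> 1] + (i & 1)
    return ans

# Compare against an independent per-number count for 0..100.
expected = [bin(i).count("1") for i in range(101)]
assert count_bits(100) == expected
print(count_bits(5))  # [0, 1, 1, 2, 1, 2]
```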
Brute-force approach: for each number from 0 to `n`, count the 1-bits by checking each bit. This takes up to `O(log n)` operations per number, for `O(n log n)` time overall, plus `O(n)` space for the output array.

Optimized approach: each `ans[i]` is computed in `O(1)` time using a previously computed value, for `O(n)` time overall, plus `O(n)` space for the output array.

The optimized approach is much faster and suitable for large `n`.
```python
class Solution:
    def countBits(self, n: int) -> list[int]:
        ans = [0] * (n + 1)
        for i in range(1, n + 1):
            ans[i] = ans[i >> 1] + (i & 1)
        return ans
```
```cpp
#include <vector>
using namespace std;

class Solution {
public:
    vector<int> countBits(int n) {
        vector<int> ans(n + 1, 0);
        for (int i = 1; i <= n; ++i) {
            ans[i] = ans[i >> 1] + (i & 1);
        }
        return ans;
    }
};
```
```java
class Solution {
    public int[] countBits(int n) {
        int[] ans = new int[n + 1];
        for (int i = 1; i <= n; i++) {
            ans[i] = ans[i >> 1] + (i & 1);
        }
        return ans;
    }
}
```
```javascript
var countBits = function(n) {
    const ans = new Array(n + 1).fill(0);
    for (let i = 1; i <= n; i++) {
        ans[i] = ans[i >> 1] + (i & 1);
    }
    return ans;
};
```
The "Counting Bits" problem is a classic example of how recognizing patterns and relationships in data can lead to efficient solutions. By leveraging the connection between a number and its half in binary, we use dynamic programming to compute the bit counts for all numbers up to n
in linear time. This not only optimizes performance but also demonstrates the power of building solutions incrementally using previously solved subproblems.