Python:

class Solution:
    def subsetsWithDup(self, nums):
        nums.sort()
        res = []
        subset = []

        def backtrack(start):
            res.append(subset[:])
            for i in range(start, len(nums)):
                # Skip duplicates at the same recursion level
                if i > start and nums[i] == nums[i-1]:
                    continue
                subset.append(nums[i])
                backtrack(i + 1)
                subset.pop()

        backtrack(0)
        return res
C++:

class Solution {
public:
    vector<vector<int>> subsetsWithDup(vector<int>& nums) {
        sort(nums.begin(), nums.end());
        vector<vector<int>> res;
        vector<int> subset;
        backtrack(0, nums, subset, res);
        return res;
    }

private:
    void backtrack(int start, vector<int>& nums, vector<int>& subset, vector<vector<int>>& res) {
        res.push_back(subset);
        for (int i = start; i < (int)nums.size(); ++i) {
            // Skip duplicates at the same recursion level
            if (i > start && nums[i] == nums[i-1]) continue;
            subset.push_back(nums[i]);
            backtrack(i + 1, nums, subset, res);
            subset.pop_back();
        }
    }
};
Java:

class Solution {
    public List<List<Integer>> subsetsWithDup(int[] nums) {
        Arrays.sort(nums);
        List<List<Integer>> res = new ArrayList<>();
        backtrack(0, nums, new ArrayList<>(), res);
        return res;
    }

    private void backtrack(int start, int[] nums, List<Integer> subset, List<List<Integer>> res) {
        res.add(new ArrayList<>(subset));
        for (int i = start; i < nums.length; i++) {
            // Skip duplicates at the same recursion level
            if (i > start && nums[i] == nums[i-1]) continue;
            subset.add(nums[i]);
            backtrack(i + 1, nums, subset, res);
            subset.remove(subset.size() - 1);
        }
    }
}
JavaScript:

var subsetsWithDup = function(nums) {
    nums.sort((a, b) => a - b);
    const res = [];
    const subset = [];

    function backtrack(start) {
        res.push([...subset]);
        for (let i = start; i < nums.length; i++) {
            // Skip duplicates at the same recursion level
            if (i > start && nums[i] === nums[i-1]) continue;
            subset.push(nums[i]);
            backtrack(i + 1);
            subset.pop();
        }
    }

    backtrack(0);
    return res;
};
Given an integer array nums that may contain duplicates, return all possible subsets (the power set). The solution set must not contain duplicate subsets; the subsets themselves may be returned in any order.
To solve this problem, we first think about how to generate all subsets (the power set) of a list. For lists without duplicates, we can use backtracking or iterative methods to explore every possibility. However, when duplicates are present, naive approaches generate repeated subsets. For example, if nums = [1,2,2], a brute-force approach produces the subsets [2] and [1,2] twice each.
Our main challenge is to avoid these duplicates. One intuitive way is to use a set to filter out duplicates after generating all subsets, but this is inefficient. Instead, we can leverage the fact that duplicates are adjacent when the input is sorted. By sorting the array first, we can skip over duplicate numbers during our subset generation, ensuring that each unique subset is only created once.
Sorting nums puts duplicates next to each other, making it easy to skip repeated elements during subset generation: at each recursion level, we skip index i whenever i > start and nums[i] == nums[i-1]. This ensures that each unique subset is generated exactly once, efficiently handling duplicates.
Let's walk through an example with nums = [1, 2, 2].

- Sort nums: [1, 2, 2]
- Start with the empty subset: []
- Add 1: [1]
- Add 2: [1, 2]
- Add 2: [1, 2, 2]
- Backtrack to [1]; skip the duplicate 2 at the same recursion level.
- Backtrack to [], add 2: [2]
- Add 2: [2, 2]
- Back at [], skip the duplicate 2 at the same recursion level.

The final result is: [[], [1], [1,2], [1,2,2], [2], [2,2]]

Note how we never generate duplicate subsets, thanks to skipping repeated elements at the same recursion depth.
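The walkthrough above can be confirmed by running the backtracking directly; this standalone function mirrors the Python solution shown earlier:

```python
def subsets_with_dup(nums):
    # Sort so duplicates sit next to each other.
    nums = sorted(nums)
    res, subset = [], []

    def backtrack(start):
        res.append(subset[:])
        for i in range(start, len(nums)):
            # Skip a duplicate value at the same recursion depth.
            if i > start and nums[i] == nums[i - 1]:
                continue
            subset.append(nums[i])
            backtrack(i + 1)
            subset.pop()

    backtrack(0)
    return res

print(subsets_with_dup([1, 2, 2]))
# [[], [1], [1, 2], [1, 2, 2], [2], [2, 2]]
```

The six subsets come out in exactly the order the walkthrough visits them.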
A naive approach generates all 2^n possible subsets and then filters out duplicates, leading to O(2^n * n) time plus extra space for storing the set of seen subsets. The backtracking approach still generates up to 2^n subsets in the worst case, but skips duplicates as it goes. The key improvement is avoiding duplicate subsets entirely, so we don't waste time or space on redundant work.
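For contrast, the filter-after-generation idea can be sketched as follows (illustrative only; `brute_force_unique_subsets` is a hypothetical helper, not the recommended approach):

```python
from itertools import combinations

def brute_force_unique_subsets(nums):
    # Enumerate every combination of every size, then filter duplicates
    # with a set of tuples: O(2^n * n) time plus the extra set storage.
    nums = sorted(nums)
    seen = set()
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            seen.add(combo)
    return [list(t) for t in sorted(seen)]

print(brute_force_unique_subsets([1, 2, 2]))
# [[], [1], [1, 2], [1, 2, 2], [2], [2, 2]]
```

It reaches the same six subsets, but only after materializing all 2^n candidates; the backtracking version never builds the duplicates in the first place.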
The Subsets II problem requires generating all unique subsets from an array that may contain duplicates. By sorting the array first and carefully skipping duplicate elements during backtracking, we ensure that each subset is generated only once. This approach is both efficient and elegant, leveraging the properties of sorted data and recursive exploration to avoid unnecessary computations. The result is a clean and robust solution that works for any input size or duplicate pattern.