One approach uses a hash map to look up the third number for every pair:

1. Build a hash map h to map each number to its index: h[num] = i for each index i and value num in nums.
2. Create a set s to store unique triplets.
3. Iterate over index pairs i and j to select every unique pair of elements from nums.
4. For each pair (nums[i], nums[j]), compute the third number desired = -nums[i] - nums[j].
5. Check whether desired exists in the hash map and its stored index is not the same as i or j; if so, add the triplet to s.
6. Return s containing all unique triplets that sum to 0.
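A minimal Python sketch of this hash-map outline might look as follows (the function name three_sum_hashmap and the tuple-based deduplication are illustrative choices, not prescribed by the steps above):

```python
def three_sum_hashmap(nums):
    # Sketch of the hash-map outline above; names are illustrative.
    # h[num] = i keeps the index of the last occurrence of each value.
    h = {num: i for i, num in enumerate(nums)}
    s = set()  # stores unique triplets as sorted tuples

    # Examine every unique pair (i, j) and look up the third value.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            desired = -nums[i] - nums[j]
            k = h.get(desired)
            # The third element must sit at an index other than i or j.
            if k is not None and k != i and k != j:
                s.add(tuple(sorted((nums[i], nums[j], desired))))

    return [list(t) for t in s]
```

Even with the hash map, examining every pair keeps this at O(n²) time and O(n) extra space.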
A second approach sorts the array and narrows in on triplets with two pointers:

1. Sort the array: nums.sort().
2. Iterate i from 0 to n - 1, using nums[i] as the first element of the triplet.
3. If nums[i] > 0, break the loop, since the array is sorted and no remaining triplet can sum to 0.
4. If nums[i] is the same as nums[i - 1], skip it to avoid duplicate triplets.
5. Set lo = i + 1 and hi = n - 1.
6. While lo < hi, compute the sum of nums[i] + nums[lo] + nums[hi].
7. If the sum is 0, record the triplet and move both the lo and hi pointers inward.
8. If the sum is less than 0, increment lo.
9. If the sum is greater than 0, decrement hi.
10. Return the answer list containing all unique triplets.
The “3Sum” problem presents a common computational challenge: finding all unique triplets in an array of integers such that their sum equals zero. Given an input array nums, the goal is to identify all sets of three elements (a, b, c) where a + b + c = 0, with each triplet returned in non-descending order and without duplication.
Consider the input [-1, 0, 1, 2, -1, -4]. A valid output would be [[-1, -1, 2], [-1, 0, 1]]. While there are several possible combinations of three numbers, only these two sum to zero and are unique in terms of value composition. Duplicates and permutations, such as [0, -1, 1], are not allowed in the final result.
This problem blends combinatorics with efficiency constraints and is best solved with a well-thought-out algorithm. The naive brute-force approach is intuitive but inefficient, making this a great opportunity to practice optimizing solutions through sorting, early pruning, and the two-pointer technique.
3Sum is often considered a rite of passage for software engineers preparing for technical interviews. While the problem itself may appear simple at first glance, crafting an optimal solution requires balancing correctness with efficiency. It demands a deep understanding of array traversal, pointer manipulation, and duplicate avoidance — all under time and space constraints that make brute-force methods infeasible in large-scale scenarios.
Beyond the realm of interviews, 3Sum teaches foundational strategies used in many other algorithmic problems, such as 4Sum, kSum, and variants of subset or combination sums. It’s also a subtle introduction to constraint satisfaction — a key concept in more advanced computational fields like dynamic programming, backtracking, and search algorithms.
The most direct way to solve the 3Sum problem is by using three nested loops to consider every possible combination of three distinct elements. For each such triplet, we check whether the sum of the numbers is zero. If so, we store it in a result set, ensuring that duplicates are filtered out either via sorting or using a hash-based set.
While this method is conceptually simple, it performs poorly on large inputs. With three levels of iteration, the time complexity is O(n³), making it impractical for arrays with more than a few hundred elements. Additionally, the overhead required to eliminate duplicate triplets further degrades performance and complicates implementation. Thus, we seek a more efficient strategy.
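For reference, a brute-force sketch in Python might look like the following (the function name three_sum_brute_force and the use of itertools.combinations are illustrative choices):

```python
from itertools import combinations

def three_sum_brute_force(nums):
    # Try every combination of three distinct positions: O(n^3) work overall.
    result = set()
    for a, b, c in combinations(nums, 3):
        if a + b + c == 0:
            # Sort each triplet so value-equal triplets collapse into one entry.
            result.add(tuple(sorted((a, b, c))))
    return [list(t) for t in result]
```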
To optimize the solution, we first observe that sorting the array allows us to use a two-pointer approach for the inner search, reducing time complexity dramatically. The overall strategy is to fix one number and find the other two such that their sum equals the negative of the fixed number.
By sorting the array, we gain the ability to efficiently skip duplicates, ensure non-descending order, and perform binary-like traversal through pointer narrowing. This transformation turns a triple nested loop into a single outer loop with a linear inner loop — a common and powerful optimization pattern.
The refined algorithm proceeds as follows:

1. Sort nums in ascending order.
2. Iterate i from 0 to nums.length - 3.
3. For each nums[i], skip it if it's the same as the previous number to avoid duplicates.
4. Set left = i + 1 and right = nums.length - 1.
5. While left < right:
   - Compute total = nums[i] + nums[left] + nums[right].
   - If total == 0, record the triplet, then increment left and decrement right, skipping duplicates.
   - If total < 0, increment left to increase the sum.
   - If total > 0, decrement right to decrease the sum.

This method ensures that each triplet is checked only once and that no duplicates enter the final output. Since each element is touched at most once per outer iteration, the time complexity is reduced to O(n²).
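Putting these steps together, one possible Python implementation might look like this (a sketch following the outline above; the function name three_sum is an assumption, not fixed by the text):

```python
def three_sum(nums):
    # Sketch of the sorted two-pointer approach described above.
    nums.sort()  # sorting enables the two-pointer scan and duplicate skipping
    n = len(nums)
    answer = []

    for i in range(n - 2):
        if nums[i] > 0:
            break  # remaining values are all positive; no triplet can sum to 0
        if i > 0 and nums[i] == nums[i - 1]:
            continue  # skip duplicate first elements

        left, right = i + 1, n - 1
        while left < right:
            total = nums[i] + nums[left] + nums[right]
            if total == 0:
                answer.append([nums[i], nums[left], nums[right]])
                left += 1
                right -= 1
                # Skip duplicate second and third elements.
                while left < right and nums[left] == nums[left - 1]:
                    left += 1
                while left < right and nums[right] == nums[right + 1]:
                    right -= 1
            elif total < 0:
                left += 1   # sum too small: move the left pointer up
            else:
                right -= 1  # sum too large: move the right pointer down

    return answer
```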
Let's walk through the example input [-1, 0, 1, 2, -1, -4]. First, we sort the array: [-4, -1, -1, 0, 1, 2].
Starting at i = 0 with nums[i] = -4, we set left = 1 and right = 5. The sum is -4 + (-1) + 2 = -3, which is too low, so we move left forward. We continue this process, but no zero-sum triplet exists for this index.
At i = 1 with nums[i] = -1, left = 2, right = 5. We find that -1 + (-1) + 2 = 0, which is valid, so we store [-1, -1, 2]. After skipping duplicates, we then find [-1, 0, 1] as the next valid triplet. We continue until all positions have been scanned.
The final result is [[-1, -1, 2], [-1, 0, 1]].
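Assuming the three_sum sketch above, this walkthrough can be reproduced directly:

```python
nums = [-1, 0, 1, 2, -1, -4]
print(three_sum(nums))
# After sorting to [-4, -1, -1, 0, 1, 2], the scan yields:
# [[-1, -1, 2], [-1, 0, 1]]
```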
The dominant factor in this algorithm is the nested loop formed by the outer index i and the two-pointer scan inside. Since the array is scanned once and the pointers traverse it linearly per iteration, the overall time complexity is O(n²).
The space complexity is O(1) if we disregard the space used to store the output triplets. We only use a few variables for indexing and no additional data structures beyond the result list.
One important edge case: an input like [0, 0, 0, 0] must return only one triplet, [0, 0, 0], even though the array contains many duplicate zeros.
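With the sketch above, the duplicate-skipping logic handles this case:

```python
print(three_sum([0, 0, 0, 0]))  # -> [[0, 0, 0]]
```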
The 3Sum problem elegantly demonstrates the evolution of problem-solving approaches, from naive brute force to an optimized two-pointer strategy. It is a classic example of how sorting and constraint-based iteration can simplify a seemingly combinatorial problem.
By mastering this problem, developers gain confidence in dealing with variations like 4Sum and kSum, and deepen their understanding of how space and time complexity shape the choice of algorithm. Efficient use of sorting and pointer narrowing, paired with careful handling of edge cases, is the essence of algorithmic problem-solving.