1836. Remove Duplicates From an Unsorted Linked List - Leetcode Solution

Problem Description

You are given the head of an unsorted singly linked list. Your task is to remove all duplicate nodes from the list so that each value appears only once, keeping the remaining nodes in the order of their first occurrence.

Constraints:

  • Each node in the linked list contains an integer value.
  • Return the head of the modified linked list with duplicates removed.
  • Do not create new nodes; modify the list in-place.
  • There may be multiple duplicates, but only the first occurrence of each value should be kept.

Example:
Input: 1 -> 2 -> 3 -> 2 -> 4 -> 3
Output: 1 -> 2 -> 3 -> 4

Thought Process

When approaching this problem, the first idea that comes to mind is to compare each node with every other node to find duplicates. However, this brute-force approach is inefficient, especially for long lists.
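
To make the cost of that idea concrete, here is a minimal brute-force sketch in Python (the function name removeDuplicatesBruteForce is just for illustration, and it assumes the same ListNode class used in the Code Implementation section below):

def removeDuplicatesBruteForce(head):
    # For each node we keep, scan the rest of the list and unlink
    # any later node with the same value: O(n^2) time, O(1) extra space.
    keeper = head
    while keeper:
        runner = keeper
        while runner.next:
            if runner.next.val == keeper.val:
                runner.next = runner.next.next  # drop the duplicate
            else:
                runner = runner.next
        keeper = keeper.next
    return head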

To optimize, we can use extra memory to keep track of values we've already seen. This way, as we traverse the list, we can quickly check if a value has already appeared and remove the node if it has. This is similar to how you might keep a checklist to avoid repeating items when packing for a trip.

The key is to balance time efficiency against the amount of extra space used.

Solution Approach

To efficiently remove duplicates from an unsorted linked list, we can use a hash set to record which values we've already encountered. Here's how the algorithm works step-by-step:

  1. Initialize a Set: Create an empty hash set (a Python set, Java HashSet, or JavaScript Set) to store the values we've seen so far.
  2. Traverse the List: Use two pointers:
    • prev: Points to the last node we kept (None before the first node is processed).
    • curr: Points to the current node being examined.
  3. Check for Duplicates: For each node:
    • If curr.val is not in the set, add it and move both pointers forward.
    • If curr.val is already in the set, remove curr by setting prev.next to curr.next, then move curr forward (prev stays at the last kept node).
  4. Continue Until End: Repeat until curr is None.
  5. Return the Head: The list is now free of duplicates, so return the original head.

This approach relies on the set's O(1) average-time lookups, so the whole list is handled in a single pass.
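
If you prefer not to track a separate prev pointer starting at None, a common variation is to place a sentinel (dummy) node in front of head so that prev always refers to a real node. This is only a sketch of that variant (removeDuplicatesWithDummy is an illustrative name, not part of the solutions below); the full implementations in the Code Implementation section use the prev/curr form described above.

def removeDuplicatesWithDummy(head):
    # Sentinel node keeps prev valid even before the first kept node.
    dummy = ListNode(0)
    dummy.next = head
    seen = set()
    prev, curr = dummy, head
    while curr:
        if curr.val in seen:
            prev.next = curr.next      # unlink the duplicate
        else:
            seen.add(curr.val)         # first occurrence: keep it
            prev = curr                # prev advances only past kept nodes
        curr = curr.next
    return dummy.next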

Example Walkthrough

Let's walk through the input 1 -> 2 -> 3 -> 2 -> 4 -> 3 step by step:

  1. Start: Seen set is empty.
    Current node: 1
    Not in set, add 1. List remains 1 -> 2 -> 3 -> 2 -> 4 -> 3
  2. Current node: 2
    Not in set, add 2. List remains unchanged.
  3. Current node: 3
    Not in set, add 3. List remains unchanged.
  4. Current node: 2
    Already in set! Remove this node by linking previous node (3) to next node (4). List becomes 1 -> 2 -> 3 -> 4 -> 3
  5. Current node: 4
    Not in set, add 4. List remains unchanged.
  6. Current node: 3
    Already in set! Remove this node by linking previous node (4) to next node (None). List becomes 1 -> 2 -> 3 -> 4

Final result: 1 -> 2 -> 3 -> 4

Time and Space Complexity

Brute-force approach:
For each node, check every other node for duplicates. This leads to O(n²) time complexity and O(1) extra space.

Optimized approach (using a set):

  • Time Complexity: O(n), because we traverse the list once and set operations (add/check) are O(1) on average.
  • Space Complexity: O(n), since in the worst case all nodes have unique values and we store each in the set.

The optimized approach is much faster for large lists, trading some extra memory for speed.

Code Implementation

Python:

class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def removeDuplicates(head):
    seen = set()              # values encountered so far
    curr = head
    prev = None               # last node that was kept
    while curr:
        if curr.val in seen:
            prev.next = curr.next   # duplicate: unlink curr from the list
        else:
            seen.add(curr.val)      # first occurrence: record it
            prev = curr             # advance prev only past kept nodes
        curr = curr.next
    return head
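
A quick way to confirm the walkthrough above is to build the example list, run removeDuplicates, and print the result. The build_list and to_string helpers here are small utilities written just for this check, not part of the original solution:

def build_list(values):
    # Build a linked list from a Python list and return its head.
    head = tail = None
    for v in values:
        node = ListNode(v)
        if head is None:
            head = node
        else:
            tail.next = node
        tail = node
    return head

def to_string(head):
    # Render the list as "1 -> 2 -> 3 -> 4".
    vals = []
    while head:
        vals.append(str(head.val))
        head = head.next
    return " -> ".join(vals)

head = build_list([1, 2, 3, 2, 4, 3])
head = removeDuplicates(head)
print(to_string(head))  # prints: 1 -> 2 -> 3 -> 4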
      
C++:

/**
 * Definition for singly-linked list.
 * struct ListNode {
 *     int val;
 *     ListNode *next;
 *     ListNode(int x) : val(x), next(nullptr) {}
 * };
 */
#include <unordered_set>
ListNode* removeDuplicates(ListNode* head) {
    std::unordered_set<int> seen;
    ListNode* curr = head;
    ListNode* prev = nullptr;
    while (curr) {
        if (seen.count(curr->val)) {
            prev->next = curr->next;
        } else {
            seen.insert(curr->val);
            prev = curr;
        }
        curr = curr->next;
    }
    return head;
}
      
Java:

/**
 * Definition for singly-linked list.
 * class ListNode {
 *     int val;
 *     ListNode next;
 *     ListNode(int x) { val = x; }
 * }
 */
import java.util.HashSet;
class Solution {
    public ListNode removeDuplicates(ListNode head) {
        HashSet<Integer> seen = new HashSet<>();
        ListNode curr = head;
        ListNode prev = null;
        while (curr != null) {
            if (seen.contains(curr.val)) {
                prev.next = curr.next;
            } else {
                seen.add(curr.val);
                prev = curr;
            }
            curr = curr.next;
        }
        return head;
    }
}
      
JavaScript:

/**
 * Definition for singly-linked list.
 * function ListNode(val, next) {
 *     this.val = (val===undefined ? 0 : val)
 *     this.next = (next===undefined ? null : next)
 * }
 */
var removeDuplicates = function(head) {
    let seen = new Set();
    let curr = head;
    let prev = null;
    while (curr) {
        if (seen.has(curr.val)) {
            prev.next = curr.next;
        } else {
            seen.add(curr.val);
            prev = curr;
        }
        curr = curr.next;
    }
    return head;
};
      

Summary

In summary, removing duplicates from an unsorted linked list can be done efficiently by using a hash set to keep track of seen values. This approach allows us to process each node in a single pass, removing duplicates in-place and preserving the order of first occurrences. The main insight is to trade a small amount of extra memory for a significant speedup, making this solution both practical and elegant for real-world data.