from collections import defaultdict

class Solution:
    def findLadders(self, beginWord, endWord, wordList):
        wordSet = set(wordList)
        if endWord not in wordSet:
            return []
        # BFS to build the tree of shortest paths: layer maps each word
        # reached at the current level to every shortest path ending there.
        layer = {beginWord: [[beginWord]]}
        while layer:
            new_layer = defaultdict(list)
            for word in layer:
                if word == endWord:
                    return layer[word]  # BFS guarantees these are shortest
                for i in range(len(word)):
                    for c in 'abcdefghijklmnopqrstuvwxyz':
                        new_word = word[:i] + c + word[i + 1:]
                        if new_word in wordSet:
                            # Extend every path that ends at word with new_word.
                            new_layer[new_word] += [path + [new_word] for path in layer[word]]
            # Words reached at this level are banned from deeper levels.
            wordSet -= set(new_layer.keys())
            layer = new_layer
        return []
#include <vector>
#include <string>
#include <unordered_set>
#include <unordered_map>
#include <utility>
using namespace std;

class Solution {
public:
    vector<vector<string>> findLadders(string beginWord, string endWord, vector<string>& wordList) {
        unordered_set<string> wordSet(wordList.begin(), wordList.end());
        if (wordSet.find(endWord) == wordSet.end()) return {};
        // layer maps each word reached at the current BFS level to every
        // shortest path that ends at that word.
        unordered_map<string, vector<vector<string>>> layer;
        layer[beginWord] = {{beginWord}};
        while (!layer.empty()) {
            unordered_map<string, vector<vector<string>>> new_layer;
            for (auto& item : layer) {
                const string& word = item.first;
                if (word == endWord) return item.second;  // shortest paths found
                for (size_t i = 0; i < word.size(); ++i) {
                    string newWord = word;
                    for (char c = 'a'; c <= 'z'; ++c) {
                        newWord[i] = c;
                        if (wordSet.count(newWord)) {
                            // Extend every path ending at word (seq is copied on purpose).
                            for (auto seq : item.second) {
                                seq.push_back(newWord);
                                new_layer[newWord].push_back(move(seq));
                            }
                        }
                    }
                }
            }
            // Words reached at this level are banned from deeper levels.
            for (auto& item : new_layer)
                wordSet.erase(item.first);
            layer = move(new_layer);
        }
        return {};
    }
};
import java.util.*;

class Solution {
    public List<List<String>> findLadders(String beginWord, String endWord, List<String> wordList) {
        Set<String> wordSet = new HashSet<>(wordList);
        List<List<String>> res = new ArrayList<>();
        if (!wordSet.contains(endWord)) return res;
        // layer maps each word reached at the current BFS level to every
        // shortest path that ends at that word.
        Map<String, List<List<String>>> layer = new HashMap<>();
        layer.put(beginWord, new ArrayList<>(Arrays.asList(Arrays.asList(beginWord))));
        while (!layer.isEmpty()) {
            Map<String, List<List<String>>> newLayer = new HashMap<>();
            for (String word : layer.keySet()) {
                if (word.equals(endWord)) return layer.get(word); // shortest paths found
                for (int i = 0; i < word.length(); i++) {
                    char[] chars = word.toCharArray();
                    for (char c = 'a'; c <= 'z'; c++) {
                        chars[i] = c;
                        String newWord = new String(chars);
                        if (wordSet.contains(newWord)) {
                            // Extend every path that ends at word with newWord.
                            for (List<String> seq : layer.get(word)) {
                                List<String> next = new ArrayList<>(seq);
                                next.add(newWord);
                                newLayer.computeIfAbsent(newWord, k -> new ArrayList<>()).add(next);
                            }
                        }
                    }
                }
            }
            // Words reached at this level are banned from deeper levels.
            for (String key : newLayer.keySet())
                wordSet.remove(key);
            layer = newLayer;
        }
        return res;
    }
}
var findLadders = function(beginWord, endWord, wordList) {
    const wordSet = new Set(wordList);
    if (!wordSet.has(endWord)) return [];
    // layer maps each word reached at the current BFS level to every
    // shortest path that ends at that word.
    let layer = {};
    layer[beginWord] = [[beginWord]];
    while (Object.keys(layer).length > 0) {
        const newLayer = {};
        for (const word in layer) {
            if (word === endWord) return layer[word]; // shortest paths found
            for (let i = 0; i < word.length; i++) {
                for (let c = 97; c <= 122; c++) { // char codes for 'a'..'z'
                    const newWord = word.slice(0, i) + String.fromCharCode(c) + word.slice(i + 1);
                    if (wordSet.has(newWord)) {
                        if (!newLayer[newWord]) newLayer[newWord] = [];
                        // Extend every path that ends at word with newWord.
                        for (const seq of layer[word]) {
                            newLayer[newWord].push([...seq, newWord]);
                        }
                    }
                }
            }
        }
        // Words reached at this level are banned from deeper levels.
        for (const key in newLayer) wordSet.delete(key);
        layer = newLayer;
    }
    return [];
};
The Word Ladder II problem asks you to find all of the shortest transformation sequences from a starting word beginWord to a target word endWord, given a list of valid words wordList. Each transformation changes exactly one letter, and every transformed word must exist in wordList. Each sequence must start with beginWord and end with endWord.
At first glance, this problem looks like finding a path in a graph, where each word is a node and an edge connects two words that differ by exactly one letter. A brute-force approach would try every possible path from beginWord to endWord, but this is highly inefficient because the number of possible paths grows exponentially.
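To make the graph model concrete, here is a minimal sketch of the edge test (the helper name differs_by_one is ours, for illustration only; none of the solutions above build the graph explicitly):

def differs_by_one(a, b):
    # An edge exists between two words of equal length that
    # differ in exactly one position.
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

print(differs_by_one("hit", "hot"))  # True: 'i' -> 'o'
print(differs_by_one("hit", "cog"))  # False: all three positions differ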
The key insight is to recognize that the problem asks for all shortest paths. This suggests a breadth-first search (BFS) approach, which explores the graph level by level and naturally finds the shortest paths first. By keeping track of all possible paths at each level, we can collect all valid shortest transformations.
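Concretely, all four solutions above keep the per-level state as a map from each word reached at the current level to every shortest path that ends at that word. For the example input used later, that state looks like this:

# After the second level: one word reached, one path so far.
layer = {"hot": [["hit", "hot"]]}
# At the final level: two distinct shortest paths end at "cog".
layer = {"cog": [["hit", "hot", "dot", "dog", "cog"],
                 ["hit", "hot", "lot", "log", "cog"]]}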
To optimize, we ensure that once a word is visited at the current level, it is not revisited in future levels, preventing cycles and unnecessary exploration.
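As a small illustration of this pruning, here is the set-difference step in isolation (the same idea as the wordSet -= ... line in the Python solution above, with made-up level contents):

word_set = {"hot", "dot", "dog", "lot", "log", "cog"}
current_level = {"dot", "lot"}   # words reached at this BFS level
word_set -= current_level        # never revisit them at deeper levels
print(sorted(word_set))          # ['cog', 'dog', 'hot', 'log']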
Here is a step-by-step breakdown of the strategy:
1. Start BFS from beginWord. At each level, generate all valid one-letter transformations that exist in wordList (see the neighbor-generation sketch after this list).
2. For each word at the current level, find its neighbors (one-letter transformations present in wordList). For each neighbor, append it to every path that reached the current word, and add the extended paths to the next level.
3. Remove all words reached at the current level from wordList (or a set version of it), so they are not revisited in subsequent levels.
4. Once endWord is found at a level, collect all paths leading to it and return them, since BFS guarantees these are the shortest.
We use hash maps (dictionaries) to efficiently track the paths ending at each word, and sets for O(1) word lookups.
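Here is the neighbor-generation sketch referenced in step 1 (the helper name one_letter_neighbors is ours, for illustration; the solutions above inline this logic):

from string import ascii_lowercase

def one_letter_neighbors(word, word_set):
    # Yield every word in word_set reachable by changing one letter.
    for i in range(len(word)):
        for c in ascii_lowercase:
            candidate = word[:i] + c + word[i + 1:]
            if candidate != word and candidate in word_set:
                yield candidate

print(list(one_letter_neighbors("hot", {"hot", "dot", "dog", "lot"})))
# ['dot', 'lot']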
Let's walk through an example:
Input:
beginWord = "hit"
endWord = "cog"
wordList = ["hot","dot","dog","lot","log","cog"]
Step-by-step:
["hit"]
["hot"]
(from "hit" by changing 'h' to 'h', 'i' to 'o', 't' to 't')
["dot","lot"]
(from "hot" change 'h' to 'd' or 'l')
["dog","log"]
(from "dot" to "dog", "lot" to "log")
["cog"]
(from "dog" or "log")
The two shortest transformation sequences are:
["hit","hot","dot","dog","cog"]
["hit","hot","lot","log","cog"]
The BFS ensures we only return the shortest paths, and by tracking all possible sequences at each level, we don't miss any valid transformation.
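To verify the walkthrough, the Python solution above can be run directly on this input; a quick driver might look like this:

solver = Solution()
for path in solver.findLadders("hit", "cog",
                               ["hot", "dot", "dog", "lot", "log", "cog"]):
    print(path)
# ['hit', 'hot', 'dot', 'dog', 'cog']
# ['hit', 'hot', 'lot', 'log', 'cog']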
Brute-force approach: enumerating all possible paths could take O(N!) time, where N is the number of words, since a path may visit the candidate words in any order.
Optimized BFS approach: each word is expanded at most once. Expanding a word of length L tries 26 letters at each of its L positions, and building and hashing each candidate string costs O(L), so the search itself runs in roughly O(N * 26 * L^2) time, plus the cost of copying paths, which is proportional to the total size of the output.
Note: If there are many shortest paths, the output itself can be exponentially large, and path copying then dominates the running time and memory.
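As a rough worked count for the sample input (our own back-of-the-envelope numbers, not part of the original analysis):

N, L = 6, 3                    # words in the list, letters per word
per_word = 26 * L              # candidate substitutions tried per word
print(per_word, N * per_word)  # 78 per word, at most 468 per BFS level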
The Word Ladder II problem is elegantly solved by modeling the problem as a graph and using BFS to find all shortest transformation sequences. The key insights are to use level-order traversal to guarantee shortest paths, track entire paths for reconstruction, and prevent revisiting words to avoid cycles. This approach balances efficiency and completeness, ensuring all valid solutions are found without redundant computation.