Try every possible split: at position i, check if s[0..i] is in the dictionary and recursively check s[i..n]. If any split works, return true.
Why it works
Exhaustively tries all partitions of the string into dictionary words. If any partition succeeds, the string is breakable.
Why we can do better
Problem: Overlapping subproblems — canBreak(i) is called from multiple split positions. Memoization or bottom-up DP solves each suffix/prefix exactly once, reducing to O(n² · m) where m = max word length.
Java — Recursive
import java.util.*;
class Solution {
    public boolean wordBreak(String s, List<String> wordDict) {
        Set<String> dict = new HashSet<>(wordDict);
        return canBreak(s, dict, 0);
    }

    private boolean canBreak(String s, Set<String> d, int start) {
        if (start == s.length()) return true;
        for (int end = start + 1; end <= s.length(); end++)
            if (d.contains(s.substring(start, end)) && canBreak(s, d, end))
                return true;
        return false;
    }
}
Python — Recursive
class Solution:
    def wordBreak(self, s: str, wordDict: list[str]) -> bool:
        words = set(wordDict)

        def can_break(start: int) -> bool:
            if start == len(s):
                return True
            for end in range(start + 1, len(s) + 1):
                if s[start:end] in words and can_break(end):
                    return True
            return False

        return can_break(0)
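The memoization mentioned earlier can be bolted onto this exact recursion. A minimal sketch using Python's `functools.lru_cache` (the decorator-based caching is my addition, not from the source):

```python
from functools import lru_cache

class Solution:
    def wordBreak(self, s: str, wordDict: list[str]) -> bool:
        words = set(wordDict)

        @lru_cache(maxsize=None)
        def can_break(start: int) -> bool:
            # The cache ensures each suffix index is solved at most once,
            # collapsing the exponential tree into O(n) distinct states.
            if start == len(s):
                return True
            return any(
                s[start:end] in words and can_break(end)
                for end in range(start + 1, len(s) + 1)
            )

        return can_break(0)
```

Because `start` is the only argument, the cache holds at most n + 1 entries, which is what brings the time down to the O(n² · m) claimed above.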
Time: O(2ⁿ)
Space: O(n) call stack
Section Three · Approach 2
Bottom-Up DP — O(n² · m)
dp[i] = true if s[0..i-1] can be segmented. For each i, check all j < i: if dp[j] is true and s[j..i] is in the dictionary, set dp[i] = true.
💡 Mental model: You're cutting a long ribbon with letters into labeled pieces. You slide a knife along it from left to right: "Can I cut here such that the piece to the left of my cut matches a word in my catalog, and the portion before that piece was already confirmed cuttable?" You mark each valid cut position with a green pin. If the very end gets a pin, the entire ribbon is cuttable.
Algorithm
dp[0] = true — empty string is trivially breakable.
For i from 1 to n: for j from 0 to i-1: if dp[j] and s[j..i] in dict → dp[i] = true, break inner loop.
Return dp[n].
🎯 When to recognize this pattern:
"Can a string be segmented into valid parts?" The signal: partition a sequence into pieces where each piece satisfies a condition, and you need to check feasibility or count ways.
Also used in LC 140 (Word Break II — list all segmentations), LC 472 (Concatenated Words).
Optimization — limit j range:
Instead of checking all j from 0 to i-1, only check j ≥ i - maxWordLen.
No dictionary word is longer than maxWordLen, so substrings longer than that can't match.
This prunes the inner loop significantly for small dictionaries with short words.
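A sketch of the bottom-up DP with this pruning applied (the `max_len` variable name is mine):

```python
class Solution:
    def wordBreak(self, s: str, wordDict: list[str]) -> bool:
        words = set(wordDict)
        # No dictionary word is longer than max_len, so s[j:i] can only
        # match when i - j <= max_len.
        max_len = max(map(len, words), default=0)
        n = len(s)
        dp = [False] * (n + 1)
        dp[0] = True  # empty prefix is trivially breakable
        for i in range(1, n + 1):
            # Start j no further back than max_len characters before i.
            for j in range(max(0, i - max_len), i):
                if dp[j] and s[j:i] in words:
                    dp[i] = True
                    break
        return dp[n]
```

With short words this shrinks the inner loop from O(n) to O(maxWordLen) iterations, for O(n · maxWordLen · m) time overall.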
Section Four · Trace
Visual Walkthrough
Trace for s = "leetcode", dict = {"leet", "code"}.
Word Break — DP (s="leetcode")
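The pinned positions from this walkthrough can be reproduced in code. A small sketch (same DP logic as Section Five; the helper name and print format are mine) that reports each index that earns a "green pin":

```python
def trace_word_break(s: str, word_dict: list[str]) -> list[int]:
    """Return the pinned indices i where the prefix s[:i] is breakable."""
    words = set(word_dict)
    n = len(s)
    dp = [False] * (n + 1)
    dp[0] = True  # pin at 0: the empty prefix
    for i in range(1, n + 1):
        for j in range(i):
            if dp[j] and s[j:i] in words:
                print(f"dp[{i}] = True via word {s[j:i]!r} at split {j}")
                dp[i] = True
                break
    return [i for i, pinned in enumerate(dp) if pinned]

trace_word_break("leetcode", ["leet", "code"])
# pins land at 0, 4 (via "leet"), and 8 (via "code"); pin at 8 = breakable
```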
Section Five · Implementation
Code — Java & Python
Java — Bottom-up DP
import java.util.*;
class Solution {
    public boolean wordBreak(String s, List<String> wordDict) {
        Set<String> dict = new HashSet<>(wordDict);
        int n = s.length();
        boolean[] dp = new boolean[n + 1];
        dp[0] = true; // empty prefix is breakable
        for (int i = 1; i <= n; i++)
            for (int j = 0; j < i; j++)
                if (dp[j] && dict.contains(s.substring(j, i))) {
                    dp[i] = true;
                    break; // one valid split suffices
                }
        return dp[n];
    }
}
Python — Bottom-up DP
class Solution:
    def wordBreak(self, s: str, wordDict: list[str]) -> bool:
        words = set(wordDict)
        n = len(s)
        dp = [False] * (n + 1)
        dp[0] = True  # empty prefix is breakable
        for i in range(1, n + 1):
            for j in range(i):
                if dp[j] and s[j:i] in words:
                    dp[i] = True
                    break  # one valid split suffices
        return dp[n]
Section Six · Analysis
Complexity Analysis
Approach | Time | Space | Trade-off
Naive recursion | O(2ⁿ) | O(n) | Exponential due to overlapping subproblems
Memoized recursion | O(n² · m) | O(n) | Top-down; same complexity as bottom-up
Bottom-up DP ← optimal | O(n² · m) | O(n) | Clean iterative; n = string length, m = avg word length for hashing
Trie optimization: For very large dictionaries, building a Trie from the word list lets each confirmed position j extend forward character by character, discovering every matching word that starts at j in a single pass without allocating substring objects for each (j, i) pair. Asymptotic time remains O(n² · m), but with better constants. For interviews, a HashSet is sufficient given the constraints.
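A minimal sketch of that Trie variant, assuming a simple dict-of-children node layout with a `"$"` end-of-word marker (both are my conventions, not from the source):

```python
class Solution:
    def wordBreak(self, s: str, wordDict: list[str]) -> bool:
        # Build the Trie: each node is a dict of children; "$" marks a word end.
        root: dict = {}
        for w in wordDict:
            node = root
            for ch in w:
                node = node.setdefault(ch, {})
            node["$"] = True

        n = len(s)
        dp = [False] * (n + 1)
        dp[0] = True
        for j in range(n):
            if not dp[j]:
                continue
            # Walk the Trie from position j; every "$" hit marks a
            # dictionary word s[j:i+1], so prefix s[:i+1] is breakable.
            node = root
            for i in range(j, n):
                node = node.get(s[i])
                if node is None:
                    break  # no dictionary word continues along this path
                if "$" in node:
                    dp[i + 1] = True
        return dp[n]
```

The walk from each j stops as soon as the Trie has no continuation, which is the constant-factor win over hashing every candidate substring.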
Note that the split points must be searched, not chosen greedily. For example, with s = "abcd" and dict = {"abc", "ab", "cd"}, greedily taking the longest matching prefix "abc" strands "d", while the split "ab" + "cd" succeeds.
⚠ Common Mistake: Not converting the dictionary to a HashSet, leaving an O(k) linear scan per lookup (k = dict size). With n = 300 the DP performs ~n² = 90,000 substring checks; against a 1,000-word list that is ~90 million word comparisons, each costing up to O(m) character comparisons. Always convert the list to a set for O(1) average-case lookup.