Algorithms

Dynamic Programming Fundamentals

Break a problem into overlapping subproblems, solve each once, cache the result — turning exponential brute force into polynomial time. The most feared interview topic, but once you see the pattern, every DP problem is the same three steps: state, recurrence, base case.

01 · Section One · Foundation

What is Dynamic Programming?

[Diagram: BRUTE FORCE — O(2ⁿ): recursion tree for f(5), where f(3) is computed 2× and f(2) is computed 3×. DP CACHE — O(n): values 0, 1, 1, 2, 3, 5 filled for f(0)..f(5), each computed exactly once then looked up in O(1). Same answer, O(2ⁿ) → O(n), exponentially faster.]

Dynamic programming is an optimisation technique that applies when a problem has two properties: overlapping subproblems (the same subproblem is solved multiple times in a naive recursive solution) and optimal substructure (the optimal solution to the problem contains optimal solutions to its subproblems).

The technique works by solving each subproblem once and caching the result — subsequent calls look up the cached value instead of recomputing. This turns the exponential branching of naive recursion into polynomial time. There are two approaches: top-down (memoisation) — write the natural recursion and add a cache — and bottom-up (tabulation) — fill a table iteratively from the smallest subproblems upward. Both produce the same result; bottom-up is typically faster (no recursion overhead) and easier to optimise for space.

Every DP problem reduces to three decisions:
  • STATE — the parameters that uniquely identify a subproblem.
  • RECURRENCE — how the current state relates to smaller states.
  • BASE CASE — the smallest state with a known answer.

Analogy: Computing directions in a city. If someone asks you how to get from A to E, you figure out A→B→C→D→E. Then someone asks A to D — you already know the A→B→C→D part. Without caching, you recompute the entire route. With DP, you store each intermediate result: once you know the best route from A to C, you never recompute it — you just look it up. The "table" is your collection of solved sub-routes.
02 · Section Two · Mental Model

How It Thinks

Naive recursion solves the same subproblem multiple times (overlapping subproblems)
▶
Exponential time — fib(n) has O(2ⁿ) calls because fib(k) is recomputed at every node that needs it
Cache (memo[n]) stores each subproblem's result after its first computation
▶
Every subproblem is solved at most once — n states × O(1) lookup = O(n) total. Exponential becomes polynomial.
Top-down (memoisation): write the recursion naturally, add a cache before returning
▶
Easiest to write — the recursive structure mirrors the recurrence directly. Only computes states that are actually needed (lazy evaluation).
Bottom-up (tabulation): fill a table from base cases upward using the recurrence
▶
No recursion overhead, no stack depth concern. Iterative loop over states in dependency order. Often allows space optimisation (only keep last 1–2 rows).
State = the minimum set of parameters that uniquely identify a subproblem
▶
1D state (one index i) → 1D DP array. 2D state (two indices i,j or index + capacity) → 2D DP table. Choosing the right state is the hardest part.
Space optimisation: if dp[i] only depends on dp[i-1] (or dp[i-1] and dp[i-2])
▶
Replace the entire array with 1–2 variables — O(n) space becomes O(1). For 2D, keep only the previous row — O(n×m) becomes O(m).
03 · Section Three · Top-Down

Top-Down — Memoisation

Start from the original problem, recurse into subproblems, cache results. The code reads like the mathematical recurrence. Easiest to write — start with brute-force recursion, then add a memo array.

Fibonacci — Memoised
Pseudocode
memo = {}

function fib(n):
    if n <= 1: return n                  // base case
    if n in memo: return memo[n]         // cached? return it
    memo[n] = fib(n - 1) + fib(n - 2)    // compute + cache
    return memo[n]                       // O(n) time, O(n) space
The 3-Step Recipe for Top-Down DP
Step Question Fibonacci Example
1. State What parameters define a subproblem? n — the index
2. Recurrence How does current state relate to smaller states? fib(n) = fib(n-1) + fib(n-2)
3. Base case Smallest state with a known answer? fib(0)=0, fib(1)=1
Top-down is the "lazy" approach:
  • It only computes subproblems that are actually needed.
  • If the subproblem graph is sparse (many states exist but few are reachable), top-down avoids computing unnecessary states.
  • Bottom-up computes ALL states in the table whether needed or not.
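The memoised pseudocode above maps directly onto Java. A minimal sketch using a HashMap as the cache (class name illustrative; an int[] memo filled with -1 works equally well for dense integer states):

```java
import java.util.HashMap;
import java.util.Map;

public class FibMemo {
    private static final Map<Integer, Long> memo = new HashMap<>();

    // Top-down: the natural recursion, with a cache check before computing.
    static long fib(int n) {
        if (n <= 1) return n;                        // base case
        if (memo.containsKey(n)) return memo.get(n); // cached? return it
        long result = fib(n - 1) + fib(n - 2);       // recurrence
        memo.put(n, result);                         // cache before returning
        return result;
    }

    public static void main(String[] args) {
        // Infeasible with naive O(2^n) recursion, instant with the cache.
        System.out.println(fib(50)); // prints 12586269025
    }
}
```

Note the long return type: fib(50) already overflows int, a detail the pseudocode glosses over.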
04 · Section Four · Bottom-Up

Bottom-Up — Tabulation

Fill a table iteratively from the base cases upward. No recursion, no stack overflow risk. The iteration order must respect dependencies — each state should only depend on states already computed. Bottom-up is typically preferred in interviews for its simplicity and speed, and it enables space optimisation.

Fibonacci — Tabulated
Pseudocode
function fib(n):
    dp = new int[n + 1]
    dp[0] = 0, dp[1] = 1               // base cases
    for i = 2 to n:
        dp[i] = dp[i - 1] + dp[i - 2]  // recurrence
    return dp[n]                       // O(n) time, O(n) space

Space-Optimised (O(1) space)
function fib(n):
    if n <= 1: return n
    prev2 = 0, prev1 = 1
    for i = 2 to n:
        curr = prev1 + prev2
        prev2 = prev1
        prev1 = curr
    return prev1                       // O(n) time, O(1) space
Bottom-up fills the table left to right — each cell depends only on previous cells
[Diagram: dp[0..5] = 0, 1, 1, 2, 3, 5 ★ — dp[i] = dp[i-1] + dp[i-2], each cell depending on the previous two. Space opt: only keep prev1, prev2. Filling left → right means all dependencies are already computed.]
Top-Down (Memoisation)

Recursive + Cache

  • Write natural recursion first
  • Add memo array/map
  • Only computes reachable states
  • Risk: stack overflow on deep recursion
  • Easier to reason about
Bottom-Up (Tabulation)

Iterative + Table

  • Fill table from base cases upward
  • No recursion — no stack overflow
  • Computes ALL states in the table
  • Enables space optimisation
  • Preferred in interviews
05 · Section Five · 1D Problems

1D Dynamic Programming

The state is a single index or value — dp[i] represents the answer for the problem considering only the first i elements (or reaching position i). Most beginner DP problems are 1D.

Climbing Stairs (LC 70)
You can climb 1 or 2 stairs at a time. How many distinct ways to reach step n?
Analysis
// State: dp[i] = number of ways to reach step i
// Recurrence: dp[i] = dp[i-1] + dp[i-2]
//   (arrive from step i-1 with 1 step, or from i-2 with 2 steps)
// Base: dp[0] = 1, dp[1] = 1
// Answer: dp[n]
// Time: O(n), Space: O(1) with optimisation
House Robber (LC 198)
Rob houses along a street — can't rob two adjacent houses. Maximise total loot.
Analysis
// State: dp[i] = max loot from houses 0..i
// Recurrence: dp[i] = max(dp[i-1], dp[i-2] + nums[i])
//   (skip house i, or rob it + best from 2 houses back)
// Base: dp[0] = nums[0], dp[1] = max(nums[0], nums[1])
// Answer: dp[n-1]
// Time: O(n), Space: O(1) with optimisation
The "take or skip" recurrence is the most common 1D DP pattern:
  • At each element, you either take it (and add its value to the best achievable from a non-conflicting position) or skip it (and carry forward the best achievable so far).
  • Climbing Stairs and House Robber follow this structure directly; Coin Change generalises it to "try every option and keep the best."
Coin Change (LC 322)
Given coin denominations and a target amount, find the minimum number of coins needed.
Analysis
// State: dp[amount] = min coins to make 'amount'
// Recurrence: dp[a] = min(dp[a - coin] + 1) for each coin
//   (try using each coin — take the one that gives fewest total)
// Base: dp[0] = 0 (zero coins for zero amount)
// Init: dp[1..target] = ∞ (impossible until proven otherwise)
// Answer: dp[target] (or -1 if still ∞)
// Time: O(amount × coins), Space: O(amount)
06 · Section Six · 2D Problems

2D Dynamic Programming

The state has two dimensions — dp[i][j] might represent "the answer considering the first i items with capacity j" (knapsack) or "the answer for substring s[i..j]" (interval DP) or "the answer at grid position (i,j)" (path problems). 2D DP is common in medium/hard interview problems.

Unique Paths (LC 62)
Count paths from top-left to bottom-right of an m×n grid, moving only right or down.
Analysis
// State: dp[i][j] = number of paths to reach cell (i, j)
// Recurrence: dp[i][j] = dp[i-1][j] + dp[i][j-1]
//   (arrive from above or from the left)
// Base: dp[0][j] = 1, dp[i][0] = 1 (only one way along edges)
// Answer: dp[m-1][n-1]
// Time: O(m×n), Space: O(n) with row optimisation
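The row optimisation mentioned in the analysis can be sketched in Java (class name illustrative): since dp[i][j] needs only the cell above (the previous row) and the cell to the left (the current row), a single 1D array suffices.

```java
public class UniquePaths {
    // dp[j] holds path counts for the current row. Before the update,
    // dp[j] is the value from the row above and dp[j-1] is the cell to the left.
    static int uniquePaths(int m, int n) {
        int[] dp = new int[n];
        java.util.Arrays.fill(dp, 1);    // first row: one way to reach each cell
        for (int i = 1; i < m; i++)
            for (int j = 1; j < n; j++)
                dp[j] += dp[j - 1];      // from above (old dp[j]) + from left
        return dp[n - 1];
    }

    public static void main(String[] args) {
        System.out.println(uniquePaths(3, 7)); // prints 28
    }
}
```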
Longest Common Subsequence (LC 1143)
Find the length of the longest subsequence common to two strings.
Analysis
// State: dp[i][j] = LCS length of s1[0..i-1] and s2[0..j-1]
// Recurrence:
//   if s1[i-1] == s2[j-1]: dp[i][j] = dp[i-1][j-1] + 1
//   else: dp[i][j] = max(dp[i-1][j], dp[i][j-1])
// Base: dp[0][j] = 0, dp[i][0] = 0 (empty string → LCS = 0)
// Answer: dp[m][n]
// Time: O(m×n), Space: O(n) with row optimisation
LCS table for "ABCDE" and "ACE" — dp[5][3] = 3

        ""  A  C  E
    ""   0  0  0  0
    A    0  1  1  1
    B    0  1  1  1
    C    0  1  2  2
    D    0  1  2  2
    E    0  1  2  3 ★

Match: diagonal + 1
  A=A → dp[1][1] = dp[0][0]+1 = 1
  C=C → dp[3][2] = dp[2][1]+1 = 2
  E=E → dp[5][3] = dp[4][2]+1 = 3
No match: max(up, left)
  B≠A → dp[2][1] = max(dp[1][1], dp[2][0]) = 1
LCS = "ACE", length = 3
Space optimisation for 2D DP:
  • If dp[i][j] only depends on row i-1 and row i, you only need to keep two rows (or even one row, filling right-to-left).
  • This reduces O(m×n) space to O(n).
  • For LCS: int[] prev = new int[n+1], curr = new int[n+1] — swap after each row.
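The two-row swap idiom from the bullets above, sketched for LCS in Java (class name illustrative):

```java
public class LCS {
    // Two-row tabulation: prev is row i-1, curr is row i being filled.
    static int lcs(String s1, String s2) {
        int m = s1.length(), n = s2.length();
        int[] prev = new int[n + 1], curr = new int[n + 1];
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                if (s1.charAt(i - 1) == s2.charAt(j - 1))
                    curr[j] = prev[j - 1] + 1;                 // match: diagonal + 1
                else
                    curr[j] = Math.max(prev[j], curr[j - 1]);  // no match: max(up, left)
            }
            int[] tmp = prev; prev = curr; curr = tmp;         // swap rows
        }
        return prev[n];  // after the final swap, prev holds the last row
    }

    public static void main(String[] args) {
        System.out.println(lcs("ABCDE", "ACE")); // prints 3
    }
}
```

No reset of curr is needed after the swap: every cell curr[1..n] is overwritten on the next row, and curr[0] is always 0.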
07 · Section Seven · Classic Patterns

DP Pattern Families

Pattern State Recurrence Shape Examples
Linear (1D) dp[i] = answer for first i elements dp[i] = f(dp[i-1], dp[i-2], ...) Fibonacci, Stairs, House Robber
Knapsack dp[i][w] = best value with items 0..i, capacity w dp[i][w] = max(skip, take + dp[i-1][w-wt]) 0/1 Knapsack, Coin Change, Partition Equal Subset
Grid paths dp[i][j] = answer at cell (i,j) dp[i][j] = f(dp[i-1][j], dp[i][j-1]) Unique Paths, Min Path Sum
String matching dp[i][j] = answer for s1[0..i], s2[0..j] match → diagonal + 1; no match → max(up, left) LCS, Edit Distance, Interleaving
Interval dp[i][j] = answer for subarray i..j dp[i][j] = f(dp[i][k], dp[k+1][j]) for k in i..j Burst Balloons, Matrix Chain
Subsequence dp[i] = longest ending at index i dp[i] = max(dp[j]+1) for j < i where arr[j] < arr[i] LIS, Max Sum Increasing Subseq
Common Mistake: Confusing "subarray" with "subsequence." A subarray is contiguous (sliding window or Kadane's algorithm handles this). A subsequence is not contiguous (classic table-based DP handles this). "Maximum Subarray" (LC 53) is solved by Kadane's linear scan — itself a one-variable DP, but not the table-filling kind. "Longest Increasing Subsequence" (LC 300) needs a full DP table because elements can be non-contiguous.
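The subsequence row in the table above, sketched in Java for LIS (the O(n²) recurrence; the O(n log n) binary-search variant is a separate technique; class name illustrative):

```java
public class LIS {
    // dp[i] = length of the longest increasing subsequence ENDING at index i.
    static int lengthOfLIS(int[] nums) {
        int n = nums.length, best = 0;
        int[] dp = new int[n];
        for (int i = 0; i < n; i++) {
            dp[i] = 1;                              // the element on its own
            for (int j = 0; j < i; j++)
                if (nums[j] < nums[i])              // can extend a smaller predecessor
                    dp[i] = Math.max(dp[i], dp[j] + 1);
            best = Math.max(best, dp[i]);           // the answer may end anywhere
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(lengthOfLIS(new int[]{10, 9, 2, 5, 3, 7, 101, 18})); // prints 4
    }
}
```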
08 · Section Eight · Implementation

Build It Once

Build each classic DP pattern once. The hardest part is identifying the state and recurrence — once you have those, the code is mechanical.

Java — DP core patterns

// 1. Climbing Stairs — O(n) time, O(1) space
int climbStairs(int n) {
    if (n <= 2) return n;
    int a = 1, b = 2;
    for (int i = 3; i <= n; i++) {
        int c = a + b;
        a = b;
        b = c;
    }
    return b;
}

// 2. House Robber — O(n) time, O(1) space
int rob(int[] nums) {
    int prev2 = 0, prev1 = 0;
    for (int num : nums) {
        int curr = Math.max(prev1, prev2 + num);
        prev2 = prev1;
        prev1 = curr;
    }
    return prev1;
}

// 3. Coin Change — O(amount × coins) time
int coinChange(int[] coins, int amount) {
    int[] dp = new int[amount + 1];
    Arrays.fill(dp, amount + 1);   // amount + 1 acts as the "impossible" sentinel
    dp[0] = 0;
    for (int a = 1; a <= amount; a++)
        for (int c : coins)
            if (c <= a) dp[a] = Math.min(dp[a], dp[a - c] + 1);
    return dp[amount] > amount ? -1 : dp[amount];
}
09 · Section Nine · Java Reference

Use It In Java

IN JAVA — DP Idioms & Patterns
1D table int[] dp = new int[n + 1]; — most common, 0-indexed or 1-indexed
2D table int[][] dp = new int[m + 1][n + 1]; — string/grid problems
Memo map Map<String, Integer> memo = new HashMap<>(); — multi-key states
Idiom Use case Note
Arrays.fill(dp, value) Initialise DP array Use Integer.MAX_VALUE for min problems, 0 for count problems
dp[0] = base_value Base case dp[0] = 0 for coin change, dp[0] = 1 for path counting
Math.max / Math.min Recurrence Maximise or minimise at each state
int prev2, prev1, curr Space optimisation Replace dp array when only 2 previous values needed
for (w = W; w >= wt; w--) 0/1 Knapsack (1D) Iterate right-to-left to avoid using same item twice
int[] prev, curr; swap 2D → 1D space opt Keep only the previous row for 2D problems
⚠ Gotcha: In 0/1 Knapsack with a 1D array, iterate the capacity loop RIGHT to LEFT (for w = W down to wt). This ensures each item is used at most once. If you iterate left to right, the same item can be used multiple times — that's the unbounded knapsack variant, not 0/1.
⚠ Gotcha: When using Integer.MAX_VALUE as the "impossible" sentinel in min problems (like Coin Change), adding 1 to it causes integer overflow → it wraps to Integer.MIN_VALUE. Guard with: if (dp[a-c] != Integer.MAX_VALUE) dp[a] = min(dp[a], dp[a-c]+1). Or use amount + 1 as the sentinel instead.
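The first gotcha, shown in a minimal 1D 0/1 knapsack sketch (the weights, values, and capacity are illustrative; class name too):

```java
public class Knapsack01 {
    // dp[w] = best value achievable with capacity w using the items seen so far.
    static int knapsack(int[] wt, int[] val, int W) {
        int[] dp = new int[W + 1];
        for (int i = 0; i < wt.length; i++)
            for (int w = W; w >= wt[i]; w--)   // RIGHT to LEFT: each item used at most once
                dp[w] = Math.max(dp[w],                    // skip item i
                                 dp[w - wt[i]] + val[i]);  // take item i
        return dp[W];
    }

    public static void main(String[] args) {
        // weights {1, 3, 4}, values {15, 20, 30}, capacity 4:
        // best is items 0 and 1 (weight 4, value 35), not the single item of value 30.
        System.out.println(knapsack(new int[]{1, 3, 4}, new int[]{15, 20, 30}, 4)); // prints 35
    }
}
```

Flipping the inner loop to left-to-right turns this same code into unbounded knapsack, exactly as the gotcha warns.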
10 · Section Ten · Practice

Problems To Solve

DP interview problems are pattern-based — once you've solved one problem in a family, others in the same family are variations. Start with 1D problems (Climbing Stairs, House Robber), then move to Knapsack variants (Coin Change, Partition Equal Subset), then string DP (LCS, Edit Distance). The hardest step is always defining the state — once you have the state and recurrence, the code is mechanical.

Difficulty Pattern Problem Key Insight
Easy 1D linear Climbing Stairs — LC 70 dp[i] = dp[i-1] + dp[i-2]. Identical to Fibonacci. Space-optimise to O(1) with two variables.
Medium 1D take/skip House Robber — LC 198 dp[i] = max(dp[i-1], dp[i-2] + nums[i]). Take or skip each house. Space-optimise to O(1).
Medium knapsack Coin Change — LC 322 dp[a] = min(dp[a-coin]+1) for each coin. Unbounded knapsack variant. Sentinel: amount+1 for impossible.
Medium subsequence Longest Increasing Subsequence — LC 300 dp[i] = max(dp[j]+1) for j < i where nums[j] < nums[i]. O(n²). Binary search variant achieves O(n log n).
Medium string Longest Common Subsequence — LC 1143 dp[i][j] = match → diagonal+1, else max(up, left). Space-optimise with two rows.
Medium string Edit Distance — LC 72 dp[i][j] = min cost to convert s1[0..i] to s2[0..j]. Three operations: insert, delete, replace. Match → free (diagonal).
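A minimal tabulated sketch of Edit Distance, following the recurrence summarised in the last row of the table (class name illustrative):

```java
public class EditDistance {
    // dp[i][j] = min operations to convert the first i chars of s1
    // into the first j chars of s2.
    static int minDistance(String s1, String s2) {
        int m = s1.length(), n = s2.length();
        int[][] dp = new int[m + 1][n + 1];
        for (int i = 0; i <= m; i++) dp[i][0] = i;   // delete all i chars of s1
        for (int j = 0; j <= n; j++) dp[0][j] = j;   // insert all j chars of s2
        for (int i = 1; i <= m; i++)
            for (int j = 1; j <= n; j++)
                if (s1.charAt(i - 1) == s2.charAt(j - 1))
                    dp[i][j] = dp[i - 1][j - 1];     // match: free, take diagonal
                else
                    dp[i][j] = 1 + Math.min(dp[i - 1][j - 1],            // replace
                                   Math.min(dp[i - 1][j], dp[i][j - 1])); // delete / insert
        return dp[m][n];
    }

    public static void main(String[] args) {
        System.out.println(minDistance("horse", "ros")); // prints 3
    }
}
```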
Interview Pattern:
  • When you see "count the number of ways," "minimum/maximum cost," "can you reach / partition / make amount X" — think DP.
  • The three steps: (1) Define the STATE — what parameters uniquely identify a subproblem. (2) Write the RECURRENCE — how dp[i] depends on smaller states. (3) Set the BASE CASE — the smallest state with a known answer.
  • Then code it bottom-up with a for loop. Optimise space if dp[i] only needs the previous 1–2 values.

→ See Dynamic Programming — 1D problems with full solutions
→ See Dynamic Programming — 2D / Advanced problems with full solutions