Algorithms

Dynamic Programming Memoization

Dynamic Programming (DP) transforms exponential recursive solutions into polynomial-time algorithms by storing and reusing the results of overlapping subproblems. It applies when a problem has optimal substructure (an optimal solution is built from optimal sub-solutions) and overlapping subproblems (the same subproblem recurs many times). DP is arguably the most important algorithm paradigm in coding interviews.

Section One · Foundation

What is Dynamic Programming?

[Diagram: Fibonacci call tree f(5) → f(4), f(3), … with repeated calls marked ✓ cached = compute once, reuse]

Dynamic programming breaks a problem into smaller overlapping subproblems, solves each once, stores the result in a table (memoization cache or DP array), and reuses stored results to avoid redundant computation. The classic naive recursive Fibonacci takes O(2ⁿ) time because it recomputes f(k) along every path; DP reduces this to O(n) by computing each f(k) exactly once. Two implementation styles exist: top-down memoization (recursive with a cache) and bottom-up tabulation (iterative, filling a table from base cases up).

Analogy: Imagine computing the total cost of all routes through a city. Instead of recalculating the distance from node B to the destination every time you arrive at B by a different route, you write it down the first time. The next time you reach B, you look up the stored answer in O(1). This is exactly what the DP table does — a lookup table of pre-solved subproblem answers.
Prerequisites for DP Applicability
  • Optimal Substructure — The optimal solution to the full problem can be constructed from optimal solutions to sub-problems (e.g. shortest path, minimum cost).
  • Overlapping Subproblems — The same sub-problems recur — unlike divide-and-conquer (Merge Sort), which creates independent, non-overlapping sub-problems.
  • Define the DP State — dp[i] or dp[i][j] must fully describe the sub-problem. The state definition is the hardest and most important part of DP.
  • Recurrence Relation — Express dp[i] as a function of smaller states (e.g. dp[i] = dp[i-1] + dp[i-2] for Fibonacci). Write this before coding.
Top-Down Memoization
  • Recursive + cache (HashMap or array)
  • Only solves needed subproblems
  • Easier to write from the recursive formulation
  • Stack overflow risk on deep recursion
Bottom-Up Tabulation
  • Iterative + DP table (array)
  • Solves all subproblems in topological order
  • Better cache performance, no recursion overhead
  • Can often optimise space (rolling array)
Section Two · Core Patterns

DP Patterns & Problem Families

1D DP — Linear Sequence (Fibonacci, Climbing Stairs, House Robber)
State depends on a constant number of previous states. dp[i] = f(dp[i-1], dp[i-2], …). Space can usually be compressed to O(1).
Step | Action (Fibonacci) | State After
1 | Define dp[i] = i-th Fibonacci number; base cases dp[0]=0, dp[1]=1 | dp[0..1] initialized
2 | Fill dp[i] = dp[i-1] + dp[i-2] for i from 2 to n | dp[0..n] fully filled
3 | Return dp[n]; optimise to O(1) space using two variables prev, curr | Answer in O(n) time, O(1) space
Pattern: Whenever you see "number of ways to reach step n" or "max profit with no two adjacent elements selected", it's 1D DP. Recognise the recurrence first, then implement.
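A minimal sketch of this 1D pattern: House Robber with the two-variable space optimisation (class and method names are illustrative):

```java
// House Robber — 1D DP compressed to O(1) space.
// State: best[i] = max loot from houses 0..i.
// Recurrence: best[i] = max(best[i-1], best[i-2] + nums[i]).
class HouseRobber {
    static int rob(int[] nums) {
        int prev2 = 0; // best up to house i-2
        int prev1 = 0; // best up to house i-1
        for (int num : nums) {
            int curr = Math.max(prev1, prev2 + num); // skip house i, or rob it
            prev2 = prev1;
            prev1 = curr;
        }
        return prev1;
    }
}
```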
2D DP — Grid / Two-Sequence (LCS, Edit Distance, Knapsack, LPS)
State depends on two variables (indices into two strings/sequences, or item+capacity). dp[i][j] = optimal answer for first i elements of A and first j elements of B.
Step | Action (LCS — Longest Common Subsequence) | State After
1 | dp[i][j] = LCS length of A[0..i-1] and B[0..j-1]; base: dp[0][j]=dp[i][0]=0 | First row/column zeroed
2 | If A[i-1]==B[j-1]: dp[i][j] = dp[i-1][j-1]+1; else dp[i][j] = max(dp[i-1][j], dp[i][j-1]) | Table filled row by row
3 | Answer is dp[m][n]; trace back for the actual subsequence if needed | O(mn) time, O(mn) space (or O(n) with rolling row)
Pattern: Two-pointer strings → 2D DP. Edit Distance, LCS, and Longest Palindromic Subsequence all use this exact structure. For 0/1 Knapsack, dimensions are items × capacity.
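The LCS steps above translate almost line-for-line into code; a minimal sketch (class name illustrative):

```java
// Longest Common Subsequence — classic 2D DP.
// dp[i][j] = LCS length of A's first i chars and B's first j chars.
class Lcs {
    static int lcs(String a, String b) {
        int m = a.length(), n = b.length();
        int[][] dp = new int[m + 1][n + 1]; // row 0 and column 0 stay 0 (empty prefix)
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                if (a.charAt(i - 1) == b.charAt(j - 1)) {
                    dp[i][j] = dp[i - 1][j - 1] + 1;                  // extend the match
                } else {
                    dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]); // drop one char
                }
            }
        }
        return dp[m][n];
    }
}
```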
Interval DP — Sub-arrays / Partitioning (Burst Balloons, Matrix Chain, Palindrome Partition)
dp[i][j] = optimal answer for the sub-array/interval [i, j]. Fill by increasing interval length, trying every split point k.
Step | Action (Matrix Chain Multiplication) | State After
1 | dp[i][j] = min multiplications for chain from matrix i to j; base: dp[i][i]=0 | Diagonal zeroed
2 | For each length L=2..n, for each i, j=i+L-1: try all split points k ∈ [i, j-1] | dp[i][j] = min(dp[i][k] + dp[k+1][j] + cost)
3 | Answer at dp[0][n-1] | O(n³) time, O(n²) space
Pattern: Interval DP is identified by "minimum/maximum cost to reduce an interval" — burst balloons, stone merging, palindrome partitioning. The template is always: for length, for start, for split point.
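A sketch of the length/start/split template for Matrix Chain Multiplication, assuming matrix i has dimensions dims[i] × dims[i+1] (names illustrative):

```java
// Matrix Chain Multiplication — interval DP.
// dims has length n+1 for n matrices; matrix i is dims[i] x dims[i+1].
// dp[i][j] = min scalar multiplications to compute the product of matrices i..j.
class MatrixChain {
    static int minMultiplications(int[] dims) {
        int n = dims.length - 1;
        int[][] dp = new int[n][n];                   // dp[i][i] = 0: single matrix costs nothing
        for (int len = 2; len <= n; len++) {          // interval length
            for (int i = 0; i + len - 1 < n; i++) {   // interval start
                int j = i + len - 1;
                dp[i][j] = Integer.MAX_VALUE;
                for (int k = i; k < j; k++) {         // split point
                    int cost = dp[i][k] + dp[k + 1][j]
                             + dims[i] * dims[k + 1] * dims[j + 1]; // cost of final multiply
                    dp[i][j] = Math.min(dp[i][j], cost);
                }
            }
        }
        return dp[0][n - 1];
    }
}
```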
Space Optimisation — O(n²) → O(n) → O(1)
Many DP tables only look back one or two rows; earlier rows can be discarded (rolling array). 1D DP often reduces to two variables.
Step | Action | State After
1 | Identify which previous states dp[i][j] depends on (row i-1 only? column j-1?) | Dependency graph clear
2 | Replace the 2D table with a 1D array; for 0/1 Knapsack iterate capacity backwards to prevent overwrites | Space: O(capacity) instead of O(n × capacity)
3 | For Fibonacci/Climbing Stairs: two variables prev, curr suffice — O(1) space | Maximum compression achieved
Tip: For 0/1 Knapsack space optimisation, iterate the capacity dimension backwards (from high to low) to ensure each item is counted at most once. Iterating forwards would allow an item to be picked multiple times (that variant is Unbounded Knapsack).
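The backwards capacity loop can be sketched as follows (names illustrative):

```java
// 0/1 Knapsack — space-optimised to a single 1D array.
// dp[w] = best value achievable with capacity w using the items seen so far.
// Capacity runs BACKWARDS so dp[w - weight] still holds the previous item's
// row, counting each item at most once.
class Knapsack {
    static int knapsack(int[] weights, int[] values, int capacity) {
        int[] dp = new int[capacity + 1];
        for (int i = 0; i < weights.length; i++) {
            for (int w = capacity; w >= weights[i]; w--) { // backwards!
                dp[w] = Math.max(dp[w], dp[w - weights[i]] + values[i]);
            }
        }
        return dp[capacity];
    }
}
```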
Common Mistakes:
  • Wrong state definition: The state must fully capture the sub-problem. Missing a dimension (e.g. index, remaining capacity) leads to incorrect recurrences.
  • Off-by-one in base cases: dp array size should be n+1 for 1-indexed problems. Initialising dp[0] incorrectly breaks the entire table.
  • Greedy when DP is needed: If the choice at step i affects what's available at step i+k, greedy fails. Use DP when future choices depend on past decisions.
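To make the greedy pitfall concrete, a small sketch: with coins {1, 3, 4} and amount 6, largest-first greedy spends 3 coins (4+1+1) while DP finds 2 (3+3). The helper below is purely illustrative:

```java
// Why greedy fails where DP succeeds: coins {1, 3, 4}, amount 6.
// Greedy (largest coin first) picks 4 + 1 + 1 = 3 coins; the DP optimum is 3 + 3 = 2.
class GreedyCoins {
    // coinsDesc must be sorted in descending order; returns -1 if greedy gets stuck.
    static int greedyCoins(int[] coinsDesc, int amount) {
        int count = 0;
        for (int c : coinsDesc) {
            while (amount >= c) { amount -= c; count++; }
        }
        return amount == 0 ? count : -1;
    }
}
```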
Section Three · Visuals

Diagrams & Walkthroughs

Fibonacci: Naive Recursion vs Memoization
[Diagram] Naive recursion is O(2ⁿ): the f(5) call tree recomputes f(3) and f(2) on separate branches (marked DUP). Memoization is O(n): the memo table f(0)=0, f(1)=1, f(2)=1, f(3)=2, f(4)=3, f(5)=5 is filled once, and cached results (✓) eliminate the redundant calls.
Coin Change — Bottom-Up DP Table
Coins = [1, 3, 4] — minimum coins to make amount 6
amt: 0  1  2  3  4  5  6
dp:  0  1  2  1  1  2  2
dp[6] = min(dp[5]+1, dp[3]+1, dp[2]+1) = min(3, 2, 3) = 2
Answer: 2 coins (3 + 3)
Recurrence: dp[a] = 1 + min(dp[a-c] for c in coins if a >= c)
Base case: dp[0] = 0; dp[a] = ∞ initially
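The walkthrough above corresponds to a short bottom-up implementation; a sketch (class name illustrative, using amount + 1 as the ∞ sentinel):

```java
// Coin Change — bottom-up, unbounded-knapsack style.
// dp[a] = minimum coins to make amount a, or the INF sentinel if unreachable.
class CoinChange {
    static int minCoins(int[] coins, int amount) {
        int INF = amount + 1; // no answer ever needs more than `amount` coins
        int[] dp = new int[amount + 1];
        java.util.Arrays.fill(dp, INF);
        dp[0] = 0; // base case: zero coins make amount 0
        for (int a = 1; a <= amount; a++) {
            for (int c : coins) {
                if (c <= a) dp[a] = Math.min(dp[a], dp[a - c] + 1);
            }
        }
        return dp[amount] >= INF ? -1 : dp[amount];
    }
}
```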
LCS (Longest Common Subsequence) — 2D DP Table
A = "ABCBDAB", B = "BDCABA" — LCS length = 4 ("BCBA" or "BDAB")
      ""  B  D  C  A  B  A
  ""   0  0  0  0  0  0  0
  A    0  0  0  0  1  1  1
  B    0  1  1  1  1  2  2
  C    0  1  1  2  2  2  2
  B    0  1  1  2  2  3  3
  D    0  1  2  2  2  3  3
  A    0  1  2  2  3  3  4
  B    0  1  2  2  3  4  4   ← LCS = 4
Section Four · Code

Implementations

Quick reference — Fibonacci top-down vs bottom-up. Coin Change, LCS, Knapsack, Edit Distance, and LIS follow the same state-and-recurrence recipe.

// Fibonacci — Top-Down (Memoization) vs Bottom-Up

// Top-Down: O(n) time, O(n) space
Map<Integer, Long> memo = new HashMap<>();

long fib(int n) {
    if (n <= 1) return n;
    if (memo.containsKey(n)) return memo.get(n);
    long res = fib(n - 1) + fib(n - 2);
    memo.put(n, res);
    return res;
}

// Bottom-Up: O(n) time, O(1) space
long fibDP(int n) {
    if (n <= 1) return n;
    long prev = 0, curr = 1;
    for (int i = 2; i <= n; i++) {
        long t = prev + curr;
        prev = curr;
        curr = t;
    }
    return curr;
}
Section Five · Complexity

Time & Space Complexity

Problem | Time | Space (full) | Space (optimised) | Pattern
Fibonacci / Climbing Stairs | O(n) | O(n) | O(1) — two variables | 1D linear
House Robber / Max Subarray | O(n) | O(n) | O(1) | 1D linear
Coin Change (min coins) | O(n × amount) | O(amount) | O(amount) | Unbounded Knapsack
0/1 Knapsack | O(n × W) | O(n × W) | O(W) — 1D reverse | 0/1 Subset
Longest Common Subsequence | O(mn) | O(mn) | O(min(m, n)) | 2D sequence
Edit Distance | O(mn) | O(mn) | O(n) | 2D sequence
LIS (O(n²) DP) | O(n²) | O(n) | — | 1D dependent
LIS (patience sort) | O(n log n) | O(n) | — | BS + tails array
Burst Balloons / Matrix Chain | O(n³) | O(n²) | — | Interval DP
Time = States × Transition cost. For 1D DP: n states × O(1) transition = O(n). For 2D: mn states × O(1) = O(mn). For Interval DP: O(n²) intervals × O(n) split points = O(n³). Space can always be compressed if the recurrence only looks back a fixed number of rows.
Space Complexity Note

The key insight for space optimisation: identify which "previous" states dp[i][j] depends on. If only the previous row (dp[i-1][*]) is needed, replace the 2D array with a rolling 1D array. For 0/1 Knapsack, iterate the capacity dimension backwards so updates don't overwrite values still needed in the same pass.
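A sketch of the rolling-row idea applied to LCS (names illustrative): two 1D rows replace the full O(mn) table.

```java
// LCS with a rolling row — O(min(m, n)) space instead of O(mn).
// Each dp[i][*] only reads dp[i-1][*], so two 1D rows suffice.
class LcsRolling {
    static int lcs(String a, String b) {
        if (b.length() > a.length()) { String t = a; a = b; b = t; } // keep b the shorter string
        int n = b.length();
        int[] prev = new int[n + 1]; // dp[i-1][*]
        int[] curr = new int[n + 1]; // dp[i][*]
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= n; j++) {
                curr[j] = a.charAt(i - 1) == b.charAt(j - 1)
                        ? prev[j - 1] + 1
                        : Math.max(prev[j], curr[j - 1]);
            }
            int[] t = prev; prev = curr; curr = t; // swap rows instead of reallocating
        }
        return prev[n];
    }
}
```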

Section Six · Decision Guide

When to Use Dynamic Programming

Use DP When
  • Problem has optimal substructure (optimal solution ← optimal sub-solutions)
  • Problem has overlapping subproblems (same sub-problem recurs)
  • Problem asks min/max/count/exists for a combinatorial structure
  • Brute force is exponential but the number of distinct states is polynomial
Avoid DP / Use Alternatives When
  • Subproblems are independent → use Divide & Conquer (Merge Sort)
  • A locally greedy choice is always globally optimal → use Greedy (activity selection)
  • Problem requires tracking the full path/configuration, not just the value → Backtracking
  • Input is a graph without DAG structure → graph algorithms (BFS/DFS/Dijkstra)
DP Pattern โ†’ Problem Mapping
Pattern | State | Signature Problems
1D Sequential | dp[i] | Climbing Stairs, House Robber, Max Subarray (Kadane)
Unbounded Knapsack | dp[amount] | Coin Change, Perfect Squares, Combination Sum IV
0/1 Knapsack | dp[i][w] | Target Sum, Partition Equal Subset, Last Stone Weight II
Two-Sequence 2D | dp[i][j] | LCS, Edit Distance, Distinct Subsequences, Regex Match
Interval DP | dp[i][j] | Burst Balloons, Palindrome Partition, Stone Merge
State Machine | dp[i][state] | Best Time to Buy/Sell Stock (k transactions, cooldown)
DP on Trees | dp[node][…] | House Robber III, Diameter of Tree, Max Path Sum
✓ DP Thinking Process: (1) Define state clearly. (2) Write the recurrence. (3) Identify base cases. (4) Determine fill order (bottom-up). (5) Optimise space if needed. Never skip step 1.
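As an example of the state-machine pattern, a sketch of stock trading with cooldown (LeetCode #309; names illustrative), tracking three states per day:

```java
// State-machine DP — Best Time to Buy/Sell Stock with Cooldown.
// States per day: hold (own a share), sold (sold today), rest (no share, no sale today).
// Transitions: sold = hold + price; hold = max(hold, rest - price); rest = max(rest, prevSold).
class StockCooldown {
    static int maxProfit(int[] prices) {
        int hold = Integer.MIN_VALUE; // impossible before any buy
        int sold = 0;
        int rest = 0;
        for (int p : prices) {
            int prevSold = sold;
            sold = hold + p;                 // sell the share we held
            hold = Math.max(hold, rest - p); // keep holding, or buy after a rest day
            rest = Math.max(rest, prevSold); // cooldown: the day after a sale is a rest day
        }
        return Math.max(sold, rest); // best profit ends with no share held
    }
}
```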
Section Seven · Interview Practice

LeetCode Problems

DP problems test your ability to identify the optimal substructure, define a clean DP state, and write a recurrence. Group problems by pattern to build intuition โ€” not by difficulty.

  • Easy #70 — Climbing Stairs: 1D DP, dp[i] = dp[i-1] + dp[i-2]. Identical to Fibonacci.
  • Easy #198 — House Robber: 1D DP, dp[i] = max(dp[i-1], dp[i-2] + nums[i]). Non-adjacent selection.
  • Easy #53 — Maximum Subarray: Kadane's, dp[i] = max(dp[i-1] + nums[i], nums[i]). O(1) space.
  • Medium #322 — Coin Change: unbounded knapsack; BFS also works as a shortest path to the amount.
  • Medium #300 — Longest Increasing Subsequence: O(n²) DP or O(n log n) patience sorting with binary search.
  • Medium #1143 — Longest Common Subsequence: classic 2D DP; foundation for Edit Distance and diff tools.
  • Medium #416 — Partition Equal Subset Sum: 0/1 Knapsack; can we partition into two equal-sum subsets?
  • Medium #152 — Maximum Product Subarray: track both max and min dp values (negatives flip sign).
  • Hard #72 — Edit Distance: 2D DP; three transitions: insert, delete, replace. Tracing back gives the actual edit sequence.
  • Hard #312 — Burst Balloons: interval DP; think "last balloon to burst" instead of first, for clean subproblem boundaries.
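The O(n log n) LIS mentioned for #300 can be sketched with a tails array and binary search (class name illustrative):

```java
// LIS in O(n log n) — patience sorting with a tails array.
// tails[k] = smallest possible tail value of an increasing subsequence of length k+1.
class Lis {
    static int lengthOfLIS(int[] nums) {
        int[] tails = new int[nums.length];
        int size = 0;
        for (int x : nums) {
            // binary search for the first tail >= x
            int lo = 0, hi = size;
            while (lo < hi) {
                int mid = (lo + hi) / 2;
                if (tails[mid] < x) lo = mid + 1; else hi = mid;
            }
            tails[lo] = x;          // extend a subsequence, or improve its tail
            if (lo == size) size++; // x extended the longest subsequence found so far
        }
        return size;
    }
}
```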
Interview Pattern: If you see "minimum cost", "maximum value", "number of ways", or "can it be done?" on a sequence or grid โ€” DP is almost certainly the intended approach. Define the state first on paper, write the recurrence, then code. Rushing to code before defining the state is the #1 source of DP bugs in interviews.