
Array Fundamentals

Contiguous memory, O(1) index access, O(n) insert and delete. The structure behind ArrayList and half of Java's collections.

Section One Β· Foundation

What is an Array?

Diagram: six elements (14, 27, 8, 35, 19, 42) in slots [0] through [5] of one contiguous memory block; any slot is reachable with O(1) access.

Contiguous memory is the defining property — every element sits at a fixed offset from the base address, and the CPU can compute the location of any slot with a single formula: address = base + index × element_size. That formula is why index access costs O(1) regardless of array length. The trade-off: because no gaps exist between elements, inserting or deleting in the middle forces every element after the target to shift one position — an O(n) operation. A raw Java array (int[]) has a fixed capacity set at allocation time; ArrayList wraps such an array and grows it geometrically when full (OpenJDK grows by roughly 1.5×; the classic textbook scheme, used in Section Four below, doubles), giving amortized O(1) appends at the cost of occasional O(n) resize copies. You encounter this structure everywhere in Java — ArrayList and HashMap's internal bucket table are backed by contiguous arrays, and the call stack itself is a contiguous region of memory.
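A minimal sketch of both flavors in Java; the values and sizes are illustrative:

import java.util.ArrayList;
import java.util.List;

public class ArrayBasics {
    public static void main(String[] args) {
        // Raw array: capacity fixed at allocation, O(1) indexed access.
        int[] raw = {14, 27, 8, 35, 19, 42};
        System.out.println(raw[3]); // 35: address = base + 3 * 4 bytes, no traversal

        // ArrayList: wraps an internal array and grows it when full.
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < 100; i++) list.add(i); // amortized O(1) per add
        System.out.println(list.get(50)); // 50: still O(1) indexed access
    }
}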

Analogy: A row of numbered parking spaces in a lot. Every space is the same size, numbered from 0, and the attendant can drive straight to space 47 without checking any other space first β€” that is O(1) access. But if a VIP needs space 10 and it is taken, every car from space 10 onward must move one space to the right to make room β€” that is the O(n) insertion cost you pay for the privilege of instant lookup.
Section Two Β· Mental Model

How It Thinks

Elements stored at contiguous memory addresses ▶ index access is O(1) — the address formula needs no traversal.
No gaps allowed between elements ▶ insert or delete in the middle shifts every element after it — O(n).
Size fixed at allocation time (raw array) ▶ ArrayList grows its capacity geometrically when full — amortized O(1) add, O(n) resize.
Elements laid out sequentially in memory ▶ the CPU cache loads neighbouring elements automatically — iteration is fast.
All elements are the same type and size ▶ random access via the formula: address = base + index × element_size.
Section Three Β· Operations

Core Operations

Access by Index
Retrieves the element at a given position by computing its memory address directly β€” no traversal required.
Pseudocode
function get(array, index):
    if index < 0 or index >= array.length:
        throw IndexOutOfBounds       // O(1) — bounds check
    return array[index]              // O(1) — direct address
Access by Index β€” direct jump to index 3
Diagram: seeking index 3 in [14, 27, 8, 35, 19, 42] returns 35; address = base + 3 × 4 bytes.
Why O(1) is guaranteed:
  • The CPU calculates the exact memory address using base + index Γ— element_size and jumps there directly β€” it does not matter whether the array has 10 elements or 10 million.
  • The JVM bounds-checks the index first, which is also O(1).
Linear Search
Scans elements one by one from the start until it finds the target or exhausts the array.
Pseudocode
function search(array, target):
    for i from 0 to array.length - 1:
        if array[i] == target:
            return i                 // found at index i
    return -1                        // O(n) — worst case full scan
When linear search beats binary search: binary search requires a sorted array, and for small arrays (roughly under 20 elements) a linear scan is often faster in practice, because its sequential access pattern suits the CPU cache and its single loop branch is trivially predictable.
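Both searches in standard Java. Note that Arrays.binarySearch is only valid on a sorted array; the values are illustrative:

import java.util.Arrays;

public class SearchDemo {
    public static void main(String[] args) {
        int[] sorted = {8, 14, 19, 27, 35, 42};

        // Linear scan: O(n), works on any array, sorted or not.
        int linearIndex = -1;
        for (int i = 0; i < sorted.length; i++) {
            if (sorted[i] == 27) { linearIndex = i; break; }
        }
        System.out.println(linearIndex); // 3

        // Binary search: O(log n), but the input MUST already be sorted.
        System.out.println(Arrays.binarySearch(sorted, 27)); // 3
    }
}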
Insert at End
Appends a new element after the last element β€” O(1) when capacity remains, O(n) when the internal array must be resized.
Pseudocode
function addLast(array, value):
    if size == capacity:
        resize()                     // O(n) — copy to new array (rare)
    array[size] = value              // O(1) — write at next open slot
    size = size + 1                  // amortized O(1)
Amortized O(1), not worst-case O(1):
  • Each individual append is O(1) except when a resize triggers — then it is O(n).
  • But resizes happen so rarely (each doubles capacity) that the average cost per append over any sequence of n appends is O(1).
  • That averaged guarantee is what amortized means. And if the final size is known up front, you can skip resizes entirely, as the sketch below shows.
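A practical corollary: pre-sizing the list does one allocation up front and skips every resize. A minimal sketch; the element count is illustrative:

import java.util.ArrayList;
import java.util.List;

public class PreSizing {
    public static void main(String[] args) {
        int n = 1_000_000; // known final size (illustrative)

        // Default constructor: starts small and resizes repeatedly as it grows.
        List<Integer> grown = new ArrayList<>();
        for (int i = 0; i < n; i++) grown.add(i);

        // Pre-sized constructor: one allocation, zero resize copies during the adds.
        List<Integer> preSized = new ArrayList<>(n);
        for (int i = 0; i < n; i++) preSized.add(i);

        System.out.println(grown.size() == preSized.size()); // true: same contents either way
    }
}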
Insert at Middle
Places a new element at a given index by shifting every subsequent element one position to the right to create a gap.
Pseudocode
function addAt(array, index, value):
    if size == capacity:
        resize()
    for i from size down to index + 1:
        array[i] = array[i - 1]      // shift right
    array[index] = value
    size = size + 1                  // O(n) — dominated by the shift
Insert at Middle β€” insert 25 at index 2
Diagram: BEFORE [10, 20, 30, 40, 50], inserting 25 at index 2. ① Shift 30, 40, 50 one slot right, ② write 25 into the gap. AFTER [10, 20, 25, 30, 40, 50].
Always shift right-to-left:
  • The loop must run from the end of the array backward to the insertion index β€” not forward.
  • Shifting forward overwrites elements before copying them, corrupting the array.
  • Every insert-at-index implementation that runs the loop the wrong way loses data silently; the sketch below shows the failure mode.
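A quick demonstration of the failure mode, as a standalone sketch (not ArrayList's actual code); each array carries one spare slot of capacity:

import java.util.Arrays;

public class ShiftDirection {
    public static void main(String[] args) {
        // Correct: walk backward, so each element is copied before it is overwritten.
        int[] good = {10, 20, 30, 40, 50, 0}; // last slot is spare capacity
        for (int i = 5; i > 2; i--) good[i] = good[i - 1];
        good[2] = 25;
        System.out.println(Arrays.toString(good)); // [10, 20, 25, 30, 40, 50]

        // Wrong: walking forward overwrites 40 and 50 with copies of 30.
        int[] bad = {10, 20, 30, 40, 50, 0};
        for (int i = 2; i < 5; i++) bad[i + 1] = bad[i];
        bad[2] = 25;
        System.out.println(Arrays.toString(bad)); // [10, 20, 25, 30, 30, 30]
    }
}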
Delete at Middle
Removes the element at a given index by shifting every subsequent element one position to the left to close the gap.
Pseudocode
function removeAt(array, index):
    for i from index to size - 2:
        array[i] = array[i + 1]      // shift left
    size = size - 1
    array[size] = null               // clear last slot (GC hygiene)
                                     // O(n) — dominated by the shift
Delete at Middle β€” remove index 2 (value 30)
Diagram: BEFORE [10, 20, 30, 40, 50], deleting index 2 (value 30). ① Shift 40 and 50 one slot left, ② clear the last slot. AFTER [10, 20, 40, 50, —].
Null the last slot after deletion:
  • After shifting left, the last slot still holds a reference to the old object.
  • If you do not null it out, the array holds a stale reference the GC cannot collect β€” a memory leak that grows with every deletion.
  • ArrayList handles this for you; manual implementations often miss it. The object-array sketch below shows the fix.
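The stale-reference problem only bites with object arrays, since an int[] holds no references (which is why the Section Four code just writes 0). A minimal sketch using a hypothetical String[] backing array:

public class NullOutDemo {
    private String[] data = {"a", "b", "c", "d"};
    private int size = 4;

    // Remove the element at index and close the gap; null the vacated slot
    // so the shifted-out reference does not linger past the logical end.
    public String removeAt(int index) {
        String removed = data[index];
        for (int i = index; i < size - 1; i++) data[i] = data[i + 1];
        size--;
        data[size] = null; // without this, data[size] still points at the old object
        return removed;
    }
}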
Resize
When the internal array is full, allocates a new array of double the capacity and copies all existing elements into it.
Pseudocode
function resize(array, oldCapacity):
    newCapacity = oldCapacity * 2
    newArray = new Array[newCapacity]    // allocate doubled array
    for i from 0 to oldCapacity - 1:
        newArray[i] = array[i]           // copy each element
    return newArray                      // O(n) — copy dominates
Resize β€” capacity 4 β†’ 8 (doubling)
RESIZE β€” capacity 4 β†’ 8 old array (full) 10 20 30 40 capacity: 4 | size: 4 copy all β†’ O(n) new array (doubled) 10 20 30 40 ← free slots capacity: 8 | size: 4
Doubling is the key β€” not adding a fixed amount:
  • If ArrayList grew by adding 10 slots each time, a list that reaches 10,000 elements would trigger 1,000 resizes β€” O(nΒ²) total work.
  • Doubling means the number of resizes is O(log n), and the total copy work across all resizes is O(n).
  • That is why append is amortized O(1). The sketch below counts the copies under both strategies.
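A back-of-the-envelope check in Java. It simulates the textbook doubling and add-10 schemes from the bullets above, not ArrayList's exact internals:

public class GrowthCost {
    // Total elements copied while growing from capacity 10 to at least n.
    static long totalCopies(int n, boolean doubling) {
        long copies = 0;
        int capacity = 10;
        while (capacity < n) {
            copies += capacity; // every resize copies the whole current array
            capacity = doubling ? capacity * 2 : capacity + 10;
        }
        return copies;
    }

    public static void main(String[] args) {
        int n = 10_000;
        System.out.println(totalCopies(n, true));  // roughly 10,230 copies: O(n)
        System.out.println(totalCopies(n, false)); // roughly 5,000,000 copies: O(n^2)
    }
}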
Common Mistake: Using an ArrayList when an int[] would do β€” and paying boxing cost on every operation. List<Integer> stores objects, not primitives. Every add() boxes the int into an Integer object on the heap; every get() unboxes it back. For a loop over a million numbers, this overhead is measurable. When the size is known and the type is primitive, int[] is always faster.
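A sketch of the two choices side by side, computing the same sum both ways; exact timings vary by JVM, so treat the difference as directional:

import java.util.ArrayList;
import java.util.List;

public class BoxingCost {
    public static void main(String[] args) {
        int n = 1_000_000;

        // Primitive array: values stored inline, no per-element objects.
        int[] primitives = new int[n];
        for (int i = 0; i < n; i++) primitives[i] = i;
        long sum = 0;
        for (int v : primitives) sum += v; // reads ints straight from the array

        // Boxed list: every element is a separate Integer object on the heap.
        List<Integer> boxed = new ArrayList<>(n);
        for (int i = 0; i < n; i++) boxed.add(i); // boxes each int
        long boxedSum = 0;
        for (int v : boxed) boxedSum += v;        // unboxes each element

        System.out.println(sum == boxedSum); // true: same result, different cost
    }
}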
Section Four Β· Implementation

Build It Once

Build this from scratch once β€” it makes the mechanics concrete. In any real project or interview, reach for the built-in (Section 05).

Java β€” Array core mechanics
// Build once to understand the mechanics — use ArrayList in practice.
public class DynamicArray {
    private int[] data;
    private int size;

    public DynamicArray(int capacity) {
        data = new int[capacity];
        size = 0;
    }

    // O(1) — direct address calculation
    public int get(int index) {
        if (index < 0 || index >= size) throw new IndexOutOfBoundsException();
        return data[index]; // no traversal — direct memory offset
    }

    // O(n) — must shift every element after index to the right
    public void addAt(int index, int value) {
        if (size == data.length) resize();
        // shift right-to-left to avoid overwriting
        for (int i = size; i > index; i--) data[i] = data[i - 1];
        data[index] = value;
        size++;
    }

    // O(n) — must shift every element after index to the left
    public int removeAt(int index) {
        int removed = data[index];
        // shift left-to-right to close the gap
        for (int i = index; i < size - 1; i++) data[i] = data[i + 1];
        size--;
        data[size] = 0; // clear stale slot
        return removed;
    }

    // O(n) — allocate a doubled array and copy every element across
    private void resize() {
        int[] bigger = new int[Math.max(1, data.length * 2)];
        for (int i = 0; i < size; i++) bigger[i] = data[i];
        data = bigger;
    }
}
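A quick usage pass over the class above; the values are illustrative:

public class DynamicArrayDemo {
    public static void main(String[] args) {
        DynamicArray arr = new DynamicArray(2);
        arr.addAt(0, 10);                    // [10]
        arr.addAt(1, 30);                    // [10, 30]: full, so the next insert resizes
        arr.addAt(1, 20);                    // [10, 20, 30]
        System.out.println(arr.get(1));      // 20
        System.out.println(arr.removeAt(0)); // 10, and the array is now [20, 30]
    }
}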
Section Five Β· Java Built-in

Use It In Java

Use ArrayList<T> β€” or int[] / T[] for fixed-size cases
Import: import java.util.ArrayList;
Method | Complexity | Note
add(value) | O(1) amortized | Appends; triggers a resize when full
add(index, value) | O(n) | Shifts elements right from index
get(index) | O(1) | Direct address — no traversal
set(index, value) | O(1) | Overwrites in place
remove(index) | O(n) | Shifts elements left, nulls last slot
contains(value) | O(n) | Linear scan from index 0
indexOf(value) | O(n) | Returns -1 if not found
size() | O(1) | Stored as a field — not counted
⚠ Gotcha: ArrayList<int> is a compile error β€” Java generics do not accept primitives. Use ArrayList<Integer> instead, which boxes every int into an Integer object on the heap. For large numeric lists this boxing cost is real and measurable β€” prefer int[] when the size is known and performance matters.
Choose: Use ArrayList when the collection size changes at runtime or when you need contains(), indexOf(), or Collections.sort(). Use int[] / String[] when the size is fixed at creation time — raw arrays are faster, use less memory, and avoid boxing entirely.
Section Six Β· Decision Guide

Pick The Right Structure

Decision Flowchart
Need fast access by index?
├─ Yes → Size fixed at compile time?
│   ├─ Yes → int[] / T[] — zero overhead, no boxing
│   └─ No → ArrayList<T> — resizable, O(1) access
└─ No → Need O(1) operations at head & tail?
    ├─ Yes → ArrayDeque — stack or queue ops
    └─ No → LinkedList — frequent middle insert
Comparison Table
Structure | Access | Insert (middle) | Delete (middle) | Best For
int[] / ArrayList (this page) | O(1) | O(n) | O(n) | Random access, iteration
LinkedList | O(n) | O(1)* | O(1)* | Frequent head/tail insert/delete
ArrayDeque | O(n) | O(1) at head/tail | O(1) at head/tail | Stack, queue, BFS/DFS
HashMap | O(1) avg | O(1) avg | O(1) avg | Key→value lookup, frequency count
* O(1) only once you already hold a reference to the node; reaching an arbitrary position is O(n).
Section Seven Β· Practice

Problems To Solve

Two patterns unlock most array interview problems: two-pointer (works inward from both ends or uses a fast/slow pointer to track two positions simultaneously) and sliding window (maintains a subarray of variable or fixed size by moving both ends forward). Look for phrases like "pair", "subarray", "contiguous", "maximum window", or "minimum window" β€” they signal one of these two patterns.
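A minimal sketch of both patterns (pairSum and maxWindow are illustrative names, not library methods):

public class ArrayPatterns {
    // Two-pointer: on a SORTED array, is there a pair summing to target?
    static boolean pairSum(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo < hi) {
            int sum = sorted[lo] + sorted[hi];
            if (sum == target) return true;
            if (sum < target) lo++; // need a bigger sum: advance the left pointer
            else hi--;              // need a smaller sum: retreat the right pointer
        }
        return false;
    }

    // Sliding window: maximum sum over any contiguous subarray of length k.
    static int maxWindow(int[] a, int k) {
        int sum = 0;
        for (int i = 0; i < k; i++) sum += a[i]; // first window
        int best = sum;
        for (int i = k; i < a.length; i++) {
            sum += a[i] - a[i - k]; // slide: add the new element, drop the old
            best = Math.max(best, sum);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(pairSum(new int[]{8, 14, 19, 27, 35, 42}, 46)); // true (19 + 27)
        System.out.println(maxWindow(new int[]{2, 1, 5, 1, 3, 2}, 3));     // 9 (5 + 1 + 3)
    }
}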

Difficulty | Pattern | Problem | Key Insight
Easy | hash-map | Two Sum | A HashMap lets you check in O(1) whether the complement of each element has already been seen — one pass, no nested loop.
Easy | hash-set | Contains Duplicate | A HashSet detects a duplicate the moment you try to add an element that already exists — O(1) per check.
Easy | one-pass scan | Best Time to Buy and Sell Stock | Track the minimum price seen so far and the maximum profit achievable from it — one pass, two variables.
Medium | prefix-suffix | Product of Array Except Self | Two passes — left-to-right builds prefix products, right-to-left builds suffix products — no division needed.
Medium | Kadane (DP) | Maximum Subarray | Kadane's: at each index decide whether to extend the current subarray or start fresh — whichever sum is larger (sketch below).
Medium | reversal | Rotate Array | Three in-place reversals — whole array, first k elements, then the remainder — achieve rotation in O(1) extra space.
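Kadane's algorithm from the Maximum Subarray row, as a compact sketch:

public class Kadane {
    // Maximum sum over all contiguous subarrays: O(n) time, O(1) space.
    static int maxSubarray(int[] a) {
        int best = a[0];
        int current = a[0];
        for (int i = 1; i < a.length; i++) {
            // Extend the running subarray or start fresh at a[i], whichever is larger.
            current = Math.max(a[i], current + a[i]);
            best = Math.max(best, current);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(maxSubarray(new int[]{-2, 1, -3, 4, -1, 2, 1, -5, 4})); // 6
    }
}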
Interview Pattern:
  • Two-pointer and sliding window solve roughly 70% of array interview problems.
  • Two-pointer works when the answer involves a pair or requires comparing elements from two positions.
  • Sliding window works when the problem asks for a contiguous subarray β€” look for the words "maximum", "minimum", "longest", or "shortest" combined with "subarray" or "window".

β†’ See all Array problems with full solutions