
Array Fundamentals

An array is the most fundamental data structure in computer science: a contiguous block of memory that stores elements sequentially, enabling instant access to any element by its index. Every other data structure either builds on arrays or exists to compensate for their limitations.

01
Section One · Foundation

What is an Array?

index:  [0]  [1]  [2]  [3]  [4]  [5]
value:   5   12    8   42    3   17

An array stores elements in a single contiguous block of memory. Every element occupies the same fixed number of bytes: no gaps, no pointers in between. Because the start address and element size are both known, the CPU can jump to any element in one step using address = base + (index × element_size). This is why array access is O(1): it doesn't matter if the array has 10 elements or 10 million.
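The address formula can be checked with plain arithmetic. A minimal sketch; the base address 0x1000 and the 4-byte element size are illustrative assumptions, not real allocator output:

```java
public class AddressDemo {
    // Mirrors the formula above: address = base + index * element_size.
    static long address(long base, int index, int elementSize) {
        return base + (long) index * elementSize;
    }

    public static void main(String[] args) {
        long base = 0x1000;   // hypothetical start of the array in memory
        int elemSize = 4;     // e.g. a 32-bit int
        // One multiplication and one addition, regardless of the index.
        System.out.println(Long.toHexString(address(base, 3, elemSize))); // prints "100c"
    }
}
```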

Arrays come in two flavours:

Static Array

Fixed Size

Size is set at creation and cannot change. Memory is allocated once on the stack or heap.

  • Fast: no resize overhead
  • Predictable memory footprint
  • Typical in C, Java int[], C# int[]
Dynamic Array

Auto-Resize

Grows automatically when full by allocating a larger backing array and copying elements over.

  • Append is O(1) amortized
  • May over-allocate 1.5×–2× capacity
  • Python list, Java ArrayList, JS Array
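The grow-and-copy strategy behind a dynamic array can be sketched in a few lines. `IntVector` is a hypothetical name for illustration; real implementations such as `ArrayList` add more bookkeeping:

```java
import java.util.Arrays;

// Minimal dynamic array of ints that doubles capacity when full.
public class IntVector {
    private int[] data = new int[2];
    private int size = 0;

    public void add(int value) {
        if (size == data.length) {
            // Resize: allocate double the capacity and copy, O(n) for this call.
            data = Arrays.copyOf(data, data.length * 2);
        }
        data[size++] = value;   // the common case is a plain O(1) write
    }

    public int get(int index) {
        if (index < 0 || index >= size) throw new IndexOutOfBoundsException();
        return data[index];
    }

    public int size() { return size; }
    public int capacity() { return data.length; }
}
```

Appending five elements triggers two resizes (2 → 4 → 8), after which the backing array has spare room for three more O(1) appends.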
Analogy: A row of numbered post office boxes. Every box is the same size, they sit side by side in a straight line, and each has a number painted on it. If you know the box number you want, you walk directly to it; you never check the other boxes first. Adding a box in the middle means physically shifting every box after it to make room.
Key Characteristics
  • Contiguous memory (no gaps, no pointers) → cache-friendly & fast iteration
  • Index-based access via address formula → O(1) read & write
  • All elements same type & size → address arithmetic works in one step
  • Insert / delete in middle shifts elements → O(n) worst case
  • Dynamic arrays double capacity on resize → O(1) amortized append
02
Section Two · Operations

Core Operations

Random Access
Reads or writes any element in O(1) time by computing its exact memory address from the base pointer and index.
Pseudocode
function access(arr, i):
    if i < 0 or i >= arr.length:
        throw "IndexOutOfBounds"
    address = arr.base + i × element_size   // O(1) arithmetic
    return memory[address]
O(1) holds for any index. It doesn't matter if you access index 0 or index 999,999: the address arithmetic is one multiplication and one addition. This is the defining advantage of an array over every linked structure.
Insert at Index
Places a new element at a given position by shifting all subsequent elements one slot to the right to create space.
Pseudocode
function insert(arr, index, value):
    if arr.size == arr.capacity:
        resize(arr)                  // double capacity, copy elements
    // shift elements right to make room
    for i = arr.size down to index + 1:
        arr[i] = arr[i - 1]          // O(n) shifts
    arr[index] = value
    arr.size += 1
Append is O(1) amortized. Only insertions at the end avoid the shift. Dynamic arrays double their capacity on resize so the average cost per append is O(1) amortized, even though an individual resize takes O(n).
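In Java, the right-shift in the pseudocode maps onto a single bulk copy. A sketch over a plain `int[]`; the spare slot at the end of the array is an assumption of the example:

```java
public class InsertDemo {
    // Inserts value at index in arr[0..size), assuming arr.length > size.
    static void insert(int[] arr, int size, int index, int value) {
        // Shift elements [index..size) one slot right in one bulk copy: O(n) in the number moved.
        System.arraycopy(arr, index, arr, index + 1, size - index);
        arr[index] = value;
    }

    public static void main(String[] args) {
        int[] a = {10, 20, 30, 40, 50, 0};  // last slot is spare capacity
        insert(a, 5, 2, 99);
        System.out.println(java.util.Arrays.toString(a)); // [10, 20, 99, 30, 40, 50]
    }
}
```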
Delete at Index
Removes an element at a given position by shifting all subsequent elements one slot to the left to close the gap.
Pseudocode
function delete(arr, index):
    removed = arr[index]
    // shift elements left to close gap
    for i = index to arr.size - 2:
        arr[i] = arr[i + 1]          // O(n) shifts
    arr.size -= 1
    return removed
Delete from the end is O(1). If order doesn't matter, swap the target element with the last element and delete from the tail: a common O(1) trick used in heaps and game object pools.
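The swap-with-last trick mentioned above takes one assignment. A sketch with illustrative names; note that it deliberately destroys element order:

```java
public class SwapRemove {
    // Removes arr[index] in O(1) by overwriting it with the last live element.
    // Returns the new logical size. Element order is NOT preserved.
    static int swapRemove(int[] arr, int size, int index) {
        arr[index] = arr[size - 1];
        return size - 1;
    }

    public static void main(String[] args) {
        int[] a = {10, 20, 30, 40, 50};
        int n = swapRemove(a, 5, 1);   // removes 20
        // live prefix is now 10, 50, 30, 40 and n == 4
        System.out.println(n);
    }
}
```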
Linear Search
Scans each element from index 0 until the target is found or the array is exhausted.
Pseudocode
function search(arr, target):
    for i = 0 to arr.size - 1:
        if arr[i] == target:
            return i    // found: best case O(1)
    return -1           // not found: O(n)
Sort first if you search repeatedly. A single O(n log n) sort unlocks O(log n) binary search for every subsequent query. If you search more than log n times, sorting pays for itself.
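The sort-then-search trade-off with the standard library looks like this; the input values are illustrative:

```java
import java.util.Arrays;

public class SearchDemo {
    public static void main(String[] args) {
        int[] arr = {42, 3, 17, 8, 12, 5};
        Arrays.sort(arr);                        // O(n log n), paid once
        // sorted: 3 5 8 12 17 42
        int idx = Arrays.binarySearch(arr, 17);  // O(log n) per query
        System.out.println(idx);                 // prints 4
    }
}
```

`Arrays.binarySearch` returns a negative value when the target is absent, so check the sign before using the result as an index.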
Common Mistake: Off-by-one errors. Array indices run from 0 to length - 1. Accessing arr[length] is out of bounds and causes a runtime crash. Always use i < arr.length (not i <= arr.length) in loop conditions.
03
Section Three · Visuals

Diagrams & Memory Layout

Memory Layout
Array in Memory: contiguous cells with address formula

addr(i) = 0x1000 + i × 4 bytes

address:  0x1000  0x1004  0x1008  0x100C  0x1010  0x1014
value:      10      20      30      40      50      60
index:     [0]     [1]     [2]     [3]     [4]     [5]

6 elements × 4 bytes each = 24 bytes total
Operation Walkthrough: Insert at Index 2
Inserting 99 at index 2; elements shift right to make room.

BEFORE:  10  20  30  40  50
        [0] [1] [2] [3] [4]

AFTER:   10  20  99  30  40  50
        [0] [1] [2] [3] [4] [5]
                 ↑ new

3 elements shifted → O(n) worst case
Complexity Comparison
Array Operation Complexities (bar chart)

  • Access by index: O(1) (fast)
  • Append (amortized): O(1)*
  • Linear Search: O(n) (linear)
  • Insert / Delete mid: O(n) (slow, shifts)
04
Section Four · Code

Java Quick Reference

The most common array operations using Java's built-in ArrayList, the dynamic array you'll use in real code and interviews.

Java: ArrayList Core Operations

import java.util.*;

List<Integer> arr = new ArrayList<>(Arrays.asList(10, 20, 30, 40, 50));

// O(1): access & update by index
int val = arr.get(2);          // 30
arr.set(2, 99);                // [10, 20, 99, 40, 50]

// O(1) amortized: append to end
arr.add(60);                   // [10, 20, 99, 40, 50, 60]

// O(n): insert at index (shifts right)
arr.add(1, 15);                // [10, 15, 20, 99, 40, 50, 60]

// O(n): delete at index (shifts left)
arr.remove(3);                 // removes 99 → [10, 15, 20, 40, 50, 60]

// O(n): linear search
int idx = arr.indexOf(40);     // 3
boolean has = arr.contains(40); // true
05
Section Five · Complexity

Time & Space Complexity

Operation                 | Best Case   | Average Case   | Worst Case     | Space
Access / Update by index  | O(1)        | O(1)           | O(1)           | O(1)
Append (dynamic array)    | O(1)        | O(1) amortized | O(n) on resize | O(1)
Insert / Delete at index  | O(1) at end | O(n)           | O(n)           | O(1)
Linear Search             | O(1)        | O(n)           | O(n)           | O(1)
Why these complexities? Access is O(1) because the memory address is computed arithmetically, not traversed. Insert and delete are O(n) because shifting elements is proportional to how many sit after the target index; inserting at index 0 shifts everything. Dynamic array resize is O(n) per event, but because capacity doubles each time, the amortized cost per append across n operations is O(1).
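The doubling argument can be checked empirically. This sketch (names are illustrative) counts how many element copies the resizes perform across n appends; the total stays below 2n, which is exactly what O(1) amortized means:

```java
public class AmortizedDemo {
    // Simulates n appends into a doubling array and returns the total number
    // of element copies performed by resizes (1 + 2 + 4 + ... < 2n).
    static long copiesForAppends(int n) {
        int capacity = 1;
        int size = 0;
        long copies = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                copies += size;   // a resize copies every existing element
                capacity *= 2;
            }
            size++;
        }
        return copies;
    }

    public static void main(String[] args) {
        System.out.println(copiesForAppends(1000)); // prints 1023, well under 2000
    }
}
```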
Space Complexity Note

An array of n elements uses exactly O(n) space for the data itself. There is zero per-element overhead: no pointers, no metadata per node. Dynamic arrays over-allocate by a constant factor (typically 1.5–2×) to amortize resize cost, so actual memory may be up to twice the logical size. Auxiliary space for all operations is O(1) since shifts happen in-place.

06
Section Six · Decision Guide

When to Use Array

✓

Use This When

  • Fast reads by position: the dominant operation is index lookup; O(1) access beats every linked structure.
  • Data size is known or stable: no frequent mid-sequence inserts; the cache-friendly layout wins.
  • Building another structure: stacks, queues, heaps, and hash tables all use arrays as their backing store.
✗

Avoid When

  • Frequent mid-list inserts/deletes: every operation shifts O(n) elements; use a linked list or deque instead.
  • Size is highly unpredictable: large add/remove bursts cause repeated resize overhead; a linked list never wastes capacity.
Comparison vs Alternatives
Structure         | Access   | Insert          | Delete          | Best For
Array ← this one  | O(1)     | O(n)            | O(n)            | Random access, cache performance
Linked List       | O(n)     | O(1) w/ pointer | O(1) w/ pointer | Frequent mid insertions/deletions
Hash Map          | O(1) avg | O(1) avg        | O(1) avg        | Key-value lookup, no ordering needed
07
Section Seven · Interview Practice

LeetCode Problems

Array problems in interviews almost always test one of three patterns: two pointers (converging from both ends), sliding window (a moving subarray of fixed or variable size), or prefix sums (precomputed running totals for O(1) range queries). Recognising which pattern applies is more valuable than memorising solutions.

  • Two Sum (Easy): use a HashMap to store each element's index as you scan; this turns the O(n²) brute force into a single O(n) pass.
  • Best Time to Buy and Sell Stock (Easy): track the minimum price seen so far and the maximum profit at each index; a classic one-pass O(n) greedy.
  • Maximum Subarray (Medium): Kadane's algorithm; maintain a running sum and reset it to 0 when it goes negative, so local decisions yield a global optimum.
  • Product of Array Except Self (Medium): build a left-products prefix array and a right-products suffix array, then multiply them; achieves O(n) with no division.
  • Trapping Rain Water (Hard): two-pointer approach; water at each position is bounded by the shorter of the tallest bars to its left and right, so scan from both ends.
Interview Pattern: When you see a subarray or contiguous-range problem, immediately ask: "Is a sliding window applicable?" and "Can a prefix sum make range queries O(1)?" These two techniques solve the majority of medium-hard array problems at FAANG-level interviews.
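The prefix-sum half of that question can be sketched in a few lines; class and method names here are illustrative:

```java
public class PrefixSum {
    // prefix[i] holds the sum of nums[0..i), so the inclusive range sum
    // of nums[l..r] is prefix[r + 1] - prefix[l].
    static long[] build(int[] nums) {
        long[] prefix = new long[nums.length + 1];
        for (int i = 0; i < nums.length; i++) {
            prefix[i + 1] = prefix[i] + nums[i];  // O(n) build, paid once
        }
        return prefix;
    }

    // O(1) per range query after the build.
    static long rangeSum(long[] prefix, int l, int r) {
        return prefix[r + 1] - prefix[l];
    }

    public static void main(String[] args) {
        long[] p = build(new int[]{1, 2, 3, 4, 5});
        System.out.println(rangeSum(p, 1, 3)); // prints 9 (2 + 3 + 4)
    }
}
```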