An array stores elements in contiguous memory with O(1) index access - the foundation of ArrayList, HashMap buckets, and most data structures in Java.
Array Fundamentals
Contiguous memory, O(1) index access, O(n) insert and delete. The structure behind ArrayList and half of Java's collections.
Contiguous memory is the defining property: every element sits at a fixed offset from the base address, so the CPU can compute the location of any slot with a single formula: address = base + index × element_size. That formula is why index access costs O(1) regardless of array length. The trade-off: because no gaps may exist between elements, inserting or deleting in the middle forces every element after the target to shift one position - an O(n) operation. A raw Java array (int[]) has a fixed capacity set at allocation time; ArrayList wraps such an array and doubles its capacity when full, giving amortized O(1) appends at the cost of occasional O(n) resize copies. You encounter this structure everywhere in Java: ArrayList, HashMap's internal bucket array, and even each thread's call stack is a contiguous block of memory.
Analogy: a row of numbered parking spaces in a lot. Every space is the same size, numbered from 0, and the attendant can drive straight to space 47 without checking any other space first - that is O(1) access. But if a VIP needs space 10 and it is taken, every car from space 10 onward must move one space to the right to make room - that is the O(n) insertion cost you pay for the privilege of instant lookup.
Section Two · Mental Model
How It Thinks
Elements stored at contiguous memory addresses
→ Index access is O(1) - the address formula needs no traversal
No gaps allowed between elements
→ Insert or delete in the middle shifts every element after it - O(n)
Size fixed at allocation time (raw array)
→ ArrayList doubles capacity when full - amortized O(1) add, O(n) resize
Elements laid out sequentially in memory
→ CPU cache loads neighbouring elements automatically - iteration is fast
All elements are the same type and size
→ Random access via formula: address = base + index × element_size
Section Three · Operations
Core Operations
Access by Index
Retrieves the element at a given position by computing its memory address directly - no traversal required.
Pseudocode
function get(array, index):
    if index < 0 or index >= array.length:
        throw IndexOutOfBounds   // O(1) - bounds check
    return array[index]          // O(1) - direct address
Access by Index - direct jump to index 3
Why O(1) is guaranteed:
The CPU calculates the exact memory address using base + index × element_size and jumps there directly - it does not matter whether the array has 10 elements or 10 million.
The JVM bounds-checks the index first, which is also O(1).
Linear Search
Scans elements one by one from the start until it finds the target or exhausts the array.
Pseudocode
function search(array, target):
    for i from 0 to array.length - 1:
        if array[i] == target:
            return i     // found at index i
    return -1            // O(n) - worst case full scan
When linear search beats binary search: binary search needs a sorted array, and for small arrays (roughly under 20 elements) linear search is often faster in practice - its branches are perfectly predictable and it streams through the CPU cache, while binary search jumps around in memory and mispredicts.
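A minimal sketch of the two searches side by side (the class name SearchDemo is illustrative, not from the original):

```java
public class SearchDemo {
    // O(n) - but branch-predictable and cache-friendly for tiny arrays
    public static int linearSearch(int[] a, int target) {
        for (int i = 0; i < a.length; i++)
            if (a[i] == target) return i;
        return -1;
    }

    // O(log n) - requires a sorted array
    public static int binarySearch(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi)
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] sorted = {10, 20, 30, 40, 50};
        System.out.println(linearSearch(sorted, 30)); // 2
        System.out.println(binarySearch(sorted, 30)); // 2
        System.out.println(binarySearch(sorted, 35)); // -1
    }
}
```

Both return the same index on a sorted array; the difference only shows up at scale, which is why the crossover point matters.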
Insert at End
Appends a new element after the last element - O(1) when capacity remains, O(n) when the internal array must be resized.
Pseudocode
function addLast(array, value):
    if size == capacity:
        resize()             // O(n) - copy to new array (rare)
    array[size] = value      // O(1) - write at next open slot
    size = size + 1          // amortized O(1)
Amortized O(1) is not the same as O(1):
Each individual append is O(1) except when a resize triggers - then it is O(n).
But resizes happen so rarely (each one doubles capacity) that the average cost per append over any sequence of n appends is O(1).
That is what amortized means.
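The arithmetic can be checked directly. This sketch (the class CopyCounter is hypothetical, not part of ArrayList) counts how many element copies n appends cause when the backing array starts at capacity 1 and doubles when full - the total stays below 2n:

```java
public class CopyCounter {
    // Returns the total number of element copies performed while appending
    // n values to an array that starts at capacity 1 and doubles when full.
    public static long copiesForAppends(int n) {
        long copies = 0;
        int capacity = 1, size = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {   // resize: copy every existing element
                copies += size;
                capacity *= 2;
            }
            size++;
        }
        return copies;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        long copies = copiesForAppends(n);
        System.out.println(copies);           // 1 + 2 + 4 + ... = 1048575, under 2n
        System.out.println(copies < 2L * n);  // true
    }
}
```

Dividing under-2n total copy work across n appends is exactly the amortized O(1) claim.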
Insert at Middle
Places a new element at a given index by shifting every subsequent element one position to the right to create a gap.
Pseudocode
function addAt(array, index, value):
    if size == capacity:
        resize()
    for i from size down to index + 1:
        array[i] = array[i - 1]   // shift right
    array[index] = value
    size = size + 1               // O(n) - dominated by the shift
Insert at Middle - insert 25 at index 2
Always shift right-to-left:
The loop must run from the end of the array backward to the insertion index, not forward.
Shifting forward overwrites elements before copying them, corrupting the array.
Every insert-at-index implementation that runs the loop the wrong way loses data silently.
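A minimal demonstration of why the direction matters (ShiftDemo and its helpers are illustrative, assuming a plain int[] with a logical size):

```java
import java.util.Arrays;

public class ShiftDemo {
    // Correct: walk backward so each element is copied before it is overwritten.
    public static void insertAt(int[] a, int size, int index, int value) {
        for (int i = size; i > index; i--)
            a[i] = a[i - 1];
        a[index] = value;
    }

    // Broken: walking forward overwrites a[i + 1] before it has been copied,
    // so one value gets smeared across the rest of the array.
    public static void insertAtBroken(int[] a, int size, int index, int value) {
        for (int i = index; i < size; i++)
            a[i + 1] = a[i];
        a[index] = value;
    }

    public static void main(String[] args) {
        int[] good = {10, 20, 30, 40, 0};
        insertAt(good, 4, 1, 15);
        System.out.println(Arrays.toString(good)); // [10, 15, 20, 30, 40]

        int[] bad = {10, 20, 30, 40, 0};
        insertAtBroken(bad, 4, 1, 15);
        System.out.println(Arrays.toString(bad));  // [10, 15, 20, 20, 20]
    }
}
```

The broken version compiles, runs, and throws nothing - the corruption is completely silent, which is what makes this bug dangerous.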
Delete at Middle
Removes the element at a given index by shifting every subsequent element one position to the left to close the gap.
Pseudocode
function removeAt(array, index):
    for i from index to size - 2:
        array[i] = array[i + 1]   // shift left
    size = size - 1
    array[size] = null            // clear last slot (GC hygiene)
// overall O(n) - dominated by the shift
Delete at Middle - remove index 2 (value 30)
Null the last slot after deletion:
After shifting left, the last slot still holds a reference to the removed object.
If you do not null it out, the array keeps that object reachable, so the GC cannot reclaim it - a memory leak that grows with every deletion.
ArrayList handles this for you; manual implementations often miss it.
Resize
When the internal array is full, allocates a new array of double the capacity and copies all existing elements into it.
Doubling is the key - not adding a fixed amount:
If ArrayList grew by adding 10 slots each time, a list that reaches 10,000 elements would trigger about 1,000 resizes - O(n²) total copy work.
Doubling means the number of resizes is O(log n), and the total copy work across all resizes is O(n).
That is why append is amortized O(1).
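The two growth policies can be compared by simulation. This sketch (GrowthDemo is illustrative, not ArrayList internals; starting capacity 10 is an assumption) tallies resizes and total copies for 10,000 appends:

```java
import java.util.function.IntUnaryOperator;

public class GrowthDemo {
    // Simulates n appends starting from capacity 10; grow(capacity) returns
    // the next capacity. Returns {resizeCount, totalElementsCopied}.
    public static long[] simulate(int n, IntUnaryOperator grow) {
        long resizes = 0, copied = 0;
        int capacity = 10, size = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                resizes++;
                copied += size;   // a resize copies every existing element
                capacity = grow.applyAsInt(capacity);
            }
            size++;
        }
        return new long[]{resizes, copied};
    }

    public static void main(String[] args) {
        int n = 10_000;
        long[] fixed = simulate(n, c -> c + 10);   // grow by 10 slots each time
        long[] doubling = simulate(n, c -> c * 2); // double each time
        System.out.println(fixed[0] + " resizes, " + fixed[1] + " copies");
        // 999 resizes, 4995000 copies - quadratic total work
        System.out.println(doubling[0] + " resizes, " + doubling[1] + " copies");
        // 10 resizes, 10230 copies - linear total work
    }
}
```

Same 10,000 appends, roughly 500× less copy work - that gap is the entire argument for doubling.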
Common Mistake: using an ArrayList when an int[] would do - and paying boxing cost on every operation. List<Integer> stores objects, not primitives. Every add() boxes the int into an Integer object on the heap; every get() unboxes it back. For a loop over a million numbers, this overhead is measurable. When the size is known and the type is primitive, int[] is almost always faster.
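A sketch of the two representations side by side (BoxingDemo is an illustrative name; no benchmark numbers are claimed here, only where the boxing happens):

```java
import java.util.ArrayList;
import java.util.List;

public class BoxingDemo {
    public static long sumBoxed(List<Integer> list) {
        long sum = 0;
        for (int v : list)   // each iteration unboxes an Integer back to int
            sum += v;
        return sum;
    }

    public static long sumPrimitive(int[] a) {
        long sum = 0;
        for (int v : a)      // reads ints straight out of contiguous memory
            sum += v;
        return sum;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        List<Integer> boxed = new ArrayList<>(n);
        int[] primitive = new int[n];
        for (int i = 0; i < n; i++) {
            boxed.add(i);      // boxes i into an Integer object on the heap
            primitive[i] = i;  // stores the raw 4-byte int in place
        }
        // Same answer either way - the difference is a million heap objects
        // and the pointer chasing needed to reach them.
        System.out.println(sumBoxed(boxed) == sumPrimitive(primitive)); // true
    }
}
```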
Section Four · Implementation
Build It Once
Build this from scratch once - it makes the mechanics concrete.
In any real project or interview, reach for the built-in (Section 05).
Java - Array core mechanics
// Build once to understand the mechanics - use ArrayList in practice.
import java.util.Arrays;

public class DynamicArray {
    private int[] data;
    private int size;

    public DynamicArray(int capacity) {
        data = new int[capacity];
        size = 0;
    }

    // O(1) - direct address calculation
    public int get(int index) {
        if (index < 0 || index >= size)
            throw new IndexOutOfBoundsException();
        return data[index]; // no traversal - direct memory offset
    }

    // O(n) - must shift every element after index to the right
    public void addAt(int index, int value) {
        if (size == data.length) resize();
        // shift right-to-left to avoid overwriting
        for (int i = size; i > index; i--)
            data[i] = data[i - 1];
        data[index] = value;
        size++;
    }

    // O(n) - must shift every element after index to the left
    public int removeAt(int index) {
        int removed = data[index];
        // shift left-to-right to close the gap
        for (int i = index; i < size - 1; i++)
            data[i] = data[i + 1];
        size--;
        data[size] = 0; // clear stale slot
        return removed;
    }

    // O(n) - double capacity and copy all elements
    private void resize() {
        data = Arrays.copyOf(data, data.length * 2);
    }
}
Array - Full Implementation
Java - Complete DynamicArray implementation
import java.util.Arrays;

public class DynamicArray<T> {
    private Object[] data;
    private int size;
    private static final int DEFAULT_CAPACITY = 4;

    public DynamicArray() { data = new Object[DEFAULT_CAPACITY]; size = 0; }
    public DynamicArray(int capacity) { data = new Object[capacity]; size = 0; }

    // O(1) - direct index access
    @SuppressWarnings("unchecked")
    public T get(int index) {
        if (index < 0 || index >= size) throw new IndexOutOfBoundsException();
        return (T) data[index];
    }

    // O(1) - overwrite at known position
    public void set(int index, T value) {
        if (index < 0 || index >= size) throw new IndexOutOfBoundsException();
        data[index] = value;
    }

    // O(1) amortized - append at end, resize when full
    public void addLast(T value) {
        if (size == data.length) resize();
        data[size++] = value;
    }

    // O(n) - shift right from index onward
    public void addAt(int index, T value) {
        if (index < 0 || index > size) throw new IndexOutOfBoundsException();
        if (size == data.length) resize();
        for (int i = size; i > index; i--)
            data[i] = data[i - 1];
        data[index] = value;
        size++;
    }

    // O(n) - shift left to close the gap
    @SuppressWarnings("unchecked")
    public T removeAt(int index) {
        if (index < 0 || index >= size) throw new IndexOutOfBoundsException();
        T removed = (T) data[index];
        for (int i = index; i < size - 1; i++)
            data[i] = data[i + 1];
        data[--size] = null; // GC hygiene - prevent memory leak
        return removed;
    }

    // O(n) - linear scan
    public boolean contains(T value) {
        return indexOf(value) != -1;
    }

    // O(n) - linear scan from index 0
    public int indexOf(T value) {
        for (int i = 0; i < size; i++)
            if (data[i] != null && data[i].equals(value)) return i;
        return -1;
    }

    // O(1)
    public int size() { return size; }
    public boolean isEmpty() { return size == 0; }

    // O(n) - allocate double, copy all elements
    private void resize() {
        data = Arrays.copyOf(data, data.length * 2);
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < size; i++) {
            sb.append(data[i]);
            if (i < size - 1) sb.append(", ");
        }
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        DynamicArray<Integer> arr = new DynamicArray<>(4);
        // 1. Add elements
        arr.addLast(10); arr.addLast(20); arr.addLast(30);
        System.out.println(arr); // [10, 20, 30]
        // 2. Insert at middle
        arr.addAt(1, 15);
        System.out.println(arr); // [10, 15, 20, 30]
        // 3. Remove at index
        arr.removeAt(2);
        System.out.println(arr); // [10, 15, 30]
        // 4. Contains check
        System.out.println(arr.contains(15)); // true
        System.out.println(arr.contains(99)); // false
        // 5. Trigger resize - capacity was 4, adding a 5th element forces doubling
        arr.addLast(40); arr.addLast(50);
        System.out.println(arr); // [10, 15, 30, 40, 50]
    }
}
Python - Built-in list + manual DynamicArray
# Python's list IS a dynamic array - these are the key operations:
nums = [10, 20, 30]
nums.append(40)     # O(1) amortized - add at end
nums.insert(1, 15)  # O(n) - shifts elements right
nums.pop(2)         # O(n) - shifts elements left
nums[0]             # O(1) - direct index access
15 in nums          # O(n) - linear scan

# --- Manual implementation for understanding ---
class DynamicArray:
    def __init__(self, capacity=4):
        self._data = [None] * capacity
        self._size = 0

    def __getitem__(self, index):       # O(1)
        if index < 0 or index >= self._size:
            raise IndexError("index out of range")
        return self._data[index]

    def append(self, value):            # O(1) amortized
        if self._size == len(self._data):
            self._resize()
        self._data[self._size] = value
        self._size += 1

    def insert_at(self, index, value):  # O(n)
        if self._size == len(self._data):
            self._resize()
        for i in range(self._size, index, -1):
            self._data[i] = self._data[i - 1]
        self._data[index] = value
        self._size += 1

    def remove_at(self, index):         # O(n)
        removed = self._data[index]
        for i in range(index, self._size - 1):
            self._data[i] = self._data[i + 1]
        self._size -= 1
        self._data[self._size] = None
        return removed

    def __len__(self):                  # O(1)
        return self._size

    def _resize(self):                  # O(n) - double capacity
        self._data = self._data + [None] * len(self._data)

    def __str__(self):
        return str(self._data[:self._size])
Section Five · Java Built-in
Use It In Java
Use ArrayList<T> - or int[] / T[] for fixed-size cases
Import: import java.util.ArrayList;
| Method | Complexity | Note |
|---|---|---|
| add(value) | O(1) amortized | Appends; triggers resize when full |
| add(index, value) | O(n) | Shifts elements right from index |
| get(index) | O(1) | Direct address - no traversal |
| set(index, value) | O(1) | Overwrites in place |
| remove(index) | O(n) | Shifts elements left, nulls last slot |
| contains(value) | O(n) | Linear scan from index 0 |
| indexOf(value) | O(n) | Returns -1 if not found |
| size() | O(1) | Stored as a field - not counted |
Gotcha: ArrayList<int> is a compile error - Java generics do not accept primitives. Use ArrayList<Integer> instead, which boxes every int into an Integer object on the heap. For large numeric lists this boxing cost is real and measurable - prefer int[] when the size is known and performance matters.
Choose: use ArrayList when the collection size changes at runtime or when you need contains(), indexOf(), or Collections.sort(). Use int[] / String[] when the size is fixed at creation time - raw arrays are faster, use less memory, and avoid boxing entirely.
Section Six · Decision Guide
Pick The Right Structure
Comparison Table
| Structure | Access | Insert (middle) | Delete (middle) | Best For |
|---|---|---|---|---|
| int[] / ArrayList (this) | O(1) | O(n) | O(n) | Random access, iteration |
| LinkedList | O(n) | O(1)* | O(1)* | Frequent head/tail insert/delete |
| ArrayDeque | O(n) | O(1) head/tail | O(1) head/tail | Stack, queue, BFS/DFS |
| HashMap | O(1) avg | O(1) avg | O(1) avg | Key-value lookup, frequency count |

* O(1) once the node is reached; finding the position first costs O(n).
Section Seven · Practice
Problems To Solve
Two patterns unlock most array interview problems: two-pointer (works inward from both ends, or uses a fast/slow pair to track two positions simultaneously) and sliding window (maintains a contiguous subarray of fixed or variable size by moving both ends forward). Look for phrases like "pair", "subarray", "contiguous", "maximum window", or "minimum window" - they signal one of these two patterns.
Three in-place reversals - the whole array, then the first k elements, then the remainder - achieve rotation by k in O(1) extra space.
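The three-reversal trick, sketched in Java (RotateDemo is an illustrative name; this variant rotates right by k):

```java
import java.util.Arrays;

public class RotateDemo {
    // Rotate right by k using three in-place reversals - O(n) time, O(1) space.
    public static void rotate(int[] a, int k) {
        int n = a.length;
        if (n == 0) return;
        k %= n;
        reverse(a, 0, n - 1); // 1. reverse the whole array
        reverse(a, 0, k - 1); // 2. reverse the first k elements
        reverse(a, k, n - 1); // 3. reverse the remainder
    }

    private static void reverse(int[] a, int lo, int hi) {
        while (lo < hi) {
            int tmp = a[lo];
            a[lo++] = a[hi];
            a[hi--] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5};
        rotate(a, 2);
        System.out.println(Arrays.toString(a)); // [4, 5, 1, 2, 3]
    }
}
```

Tracing the example: reversing the whole array gives [5, 4, 3, 2, 1], reversing the first 2 gives [4, 5, 3, 2, 1], and reversing the rest gives [4, 5, 1, 2, 3].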
Interview Pattern:
Two-pointer and sliding window solve roughly 70% of array interview problems.
Two-pointer works when the answer involves a pair or requires comparing elements from two positions.
Sliding window works when the problem asks for a contiguous subarray - look for the words "maximum", "minimum", "longest", or "shortest" combined with "subarray" or "window".
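One minimal sketch of each pattern (PatternDemo and both problem choices - pair-sum on a sorted array, fixed-size maximum window sum - are illustrative examples, not from the original):

```java
import java.util.Arrays;

public class PatternDemo {
    // Two-pointer: indices of a pair summing to target in a SORTED array - O(n).
    public static int[] pairSum(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo < hi) {
            int sum = sorted[lo] + sorted[hi];
            if (sum == target) return new int[]{lo, hi};
            if (sum < target) lo++; // need a bigger sum: move left pointer right
            else hi--;              // need a smaller sum: move right pointer left
        }
        return new int[]{-1, -1};
    }

    // Sliding window: maximum sum of any contiguous subarray of length k - O(n).
    public static int maxWindowSum(int[] a, int k) {
        int sum = 0;
        for (int i = 0; i < k; i++) sum += a[i]; // sum of the first window
        int best = sum;
        for (int i = k; i < a.length; i++) {
            sum += a[i] - a[i - k];              // slide: add new, drop old
            best = Math.max(best, sum);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(pairSum(new int[]{1, 3, 5, 8, 11}, 13))); // [2, 3]
        System.out.println(maxWindowSum(new int[]{2, 1, 5, 1, 3, 2}, 3));            // 9
    }
}
```

Both run in a single pass; neither re-scans elements, which is the whole point of the patterns.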