Queue Fundamentals
A first-in first-out (FIFO) collection – elements enter at the rear and leave from the front, both in O(1). The backbone of BFS graph traversal, task scheduling, and every system where arrival order must be preserved.
Every queue enforces a strict contract: elements enter at one end (the rear) and leave from the other end (the front), with no access to anything in between – a linear collection governed entirely by arrival order. The first element added is guaranteed to be the first element processed, which is the natural model for any system where order of arrival must be respected: print jobs, network packets, customer service lines. There is no random access, no searching, no insertion anywhere except the rear; the constraint is the contract. ArrayDeque uses a resizable circular array internally with head and tail indices – enqueue advances the tail, dequeue advances the head, both in O(1) with no element shifting. BFS graph traversal is the canonical queue algorithm; every level-order tree traversal, shortest-path search on unweighted graphs, and multi-source spread problem uses exactly one queue.
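A minimal Java sketch of this contract, declaring against the Queue interface with ArrayDeque as the implementation (the print-job names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class QueueBasics {
    public static void main(String[] args) {
        // Declare against the Queue interface; ArrayDeque is the implementation.
        Queue<String> line = new ArrayDeque<>();

        line.offer("print-job-1"); // enqueue at the rear, O(1)
        line.offer("print-job-2");
        line.offer("print-job-3");

        // Arrival order is preserved: first in, first out.
        System.out.println(line.poll()); // print-job-1
        System.out.println(line.poll()); // print-job-2
        System.out.println(line.peek()); // print-job-3 (still in the queue)
    }
}
```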
Analogy: The checkout line at a supermarket. Customers join at the back and are served from the front – nobody cuts in, nobody is served out of order. The cashier (the algorithm) always processes the customer who has been waiting longest. That guarantee of arrival-order processing is exactly what makes a queue the right structure for BFS: you process nearby nodes before far ones, level by level.
Section Two · Mental Model
How It Thinks
Elements enter at rear, leave from front → arrival order is guaranteed – first in, first out, always.
Head and tail indices on a circular array → enqueue and dequeue both O(1) – no shifting, no copying.
No access to middle elements → the constraint enforces fair ordering – no element can jump the queue.
Queue processes nodes level by level → BFS explores the nearest nodes first – shortest path on unweighted graphs.
Empty queue has no front element → always check isEmpty() before dequeuing – remove() and element() throw on empty, while poll() and peek() return null.
Section Three · Operations
Core Operations
Enqueue
Adds a new element at the rear of the queue, increasing size by one in O(1) time.
On ArrayDeque used as a queue: use offer() or addLast() to enqueue – both add to the tail.
Never use push(), which adds to the head (stack semantics).
Declare as Queue<T> rather than ArrayDeque<T> to prevent calling the wrong methods at compile time.
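A short sketch of why the interface choice matters: Queue<T> exposes only FIFO methods, so the stack-style push() simply does not compile, while a Deque<T> reference allows both ends.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Queue;

public class EnqueueDemo {
    public static void main(String[] args) {
        Queue<Integer> q = new ArrayDeque<>();
        q.offer(1);   // adds at the tail – correct queue semantics
        q.offer(2);
        // q.push(3); // would not compile: Queue<T> has no push() –
        //            // the interface blocks stack-style misuse at compile time

        Deque<Integer> d = new ArrayDeque<>();
        d.push(3);    // addFirst – stack semantics, goes to the head
        d.addLast(4); // queue-style enqueue at the tail
        System.out.println(d); // [3, 4]
    }
}
```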
Dequeue
Removes and returns the front element of the queue, decreasing size by one in O(1) time.
Pseudocode
function dequeue(queue):
    if queue is empty:
        throw NoSuchElementException  // guard – always check first
    return queue.removeFirst()  // O(1) – advances head index
Dequeue – front element departs
Dequeue throws – guard with isEmpty():
removeFirst() on an empty ArrayDeque throws NoSuchElementException. poll() is the null-returning alternative – use it when an empty queue is a valid state your code handles.
For BFS, the standard pattern is while (!queue.isEmpty()) dequeue.
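The two dequeue variants can be contrasted in a few lines – poll() returns null on empty, while remove() (removeFirst on ArrayDeque) fails loudly:

```java
import java.util.ArrayDeque;
import java.util.NoSuchElementException;
import java.util.Queue;

public class DequeueDemo {
    public static void main(String[] args) {
        Queue<Integer> q = new ArrayDeque<>();
        q.offer(10);

        System.out.println(q.poll()); // 10 – removes and returns the front element

        // poll() on an empty queue returns null instead of throwing.
        System.out.println(q.poll()); // null

        // remove() throws on empty – the fail-loud variant.
        try {
            q.remove();
        } catch (NoSuchElementException e) {
            System.out.println("empty queue: " + e.getClass().getSimpleName());
        }
    }
}
```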
Peek
Returns the front element without removing it, allowing inspection of the next element to be processed in O(1).
Pseudocode
function peek(queue):
    if queue is empty:
        return null  // or throw – depends on method
    return queue.peekFirst()  // O(1) – reads head without moving it
peek() vs element() vs peekFirst():
On ArrayDeque: peekFirst() returns null if empty, element() throws if empty.
For a queue declared as Queue<T>, peek() maps to peekFirst().
Use peekFirst() when an empty queue is expected; use element() to fail loudly on a bug.
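The peek family side by side – peekFirst() is null-safe on empty, element() throws, and neither removes anything:

```java
import java.util.ArrayDeque;
import java.util.NoSuchElementException;

public class PeekDemo {
    public static void main(String[] args) {
        ArrayDeque<String> dq = new ArrayDeque<>();

        System.out.println(dq.peekFirst()); // null – safe on an empty queue
        try {
            dq.element();                   // throws on empty – fail loudly
        } catch (NoSuchElementException e) {
            System.out.println("element() threw on empty");
        }

        dq.offer("front");
        dq.offer("back");
        System.out.println(dq.peek()); // front – reads the head, does not remove
        System.out.println(dq.size()); // 2 – peek left the queue intact
    }
}
```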
isEmpty
Returns true if the queue contains no elements, preventing dequeue and peek from throwing on an empty queue.
Pseudocode
function isEmpty(queue):
    return queue.size() == 0  // O(1) – size is a stored field
The BFS loop pattern:
The canonical BFS structure is while (!queue.isEmpty()) { process(queue.poll()); enqueue neighbours; }.
The isEmpty() check is the loop condition β never omit it.
Enqueue neighbours inside the loop, not before it, to avoid processing nodes before their level is established.
BFS Using a Queue
Explores a graph or tree level by level by processing each node and enqueuing its unvisited neighbours before moving on.
Pseudocode
function bfs(graph, start):
    queue = new Queue
    visited = new Set
    enqueue(queue, start)
    visited.add(start)
    while queue is not empty:
        node = dequeue(queue)
        process(node)
        for each neighbour of node:
            if neighbour not in visited:
                visited.add(neighbour)  // mark on enqueue, not dequeue
                enqueue(queue, neighbour)
    // O(V + E) total
BFS – level-by-level traversal using a queue
Mark visited on enqueue – not on dequeue:
If you mark a node visited when you dequeue it, the same node can be enqueued multiple times before it is processed, causing redundant work or infinite loops on cyclic graphs.
Mark visited immediately when you enqueue – before the node is processed, not after.
Common Mistake: Using a Stack instead of a Queue for BFS and getting DFS by accident. Both iterate until empty and both visit every node – but a stack processes the most recently added node next (LIFO), producing depth-first order. A queue processes the earliest-added node next (FIFO), producing breadth-first order. The data structure determines the traversal order, not the loop logic.
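The pseudocode above translates directly to Java. A runnable sketch over an adjacency-list graph (the graph literal is an illustrative example); note that visited is marked on enqueue, exactly as the warning requires:

```java
import java.util.*;

public class BfsDemo {
    // BFS from start; returns nodes in the order they are processed.
    static List<Integer> bfs(Map<Integer, List<Integer>> graph, int start) {
        List<Integer> order = new ArrayList<>();
        Queue<Integer> queue = new ArrayDeque<>();
        Set<Integer> visited = new HashSet<>();

        queue.offer(start);
        visited.add(start);            // mark on enqueue, not on dequeue

        while (!queue.isEmpty()) {
            int node = queue.poll();
            order.add(node);           // "process" the node
            for (int nb : graph.getOrDefault(node, List.of())) {
                if (visited.add(nb)) { // add() returns false if already present
                    queue.offer(nb);
                }
            }
        }
        return order;                  // O(V + E) total
    }

    public static void main(String[] args) {
        // 0 – 1
        // |   |
        // 2 – 3 – 4
        Map<Integer, List<Integer>> g = Map.of(
            0, List.of(1, 2),
            1, List.of(0, 3),
            2, List.of(0, 3),
            3, List.of(1, 2, 4),
            4, List.of(3));
        System.out.println(bfs(g, 0)); // [0, 1, 2, 3, 4] – level by level
    }
}
```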
A circular queue (ring buffer) is a fixed-capacity queue implemented on a plain array where the tail wraps around to index 0 when it reaches the end – avoiding the O(n) shift that a naive array queue requires. Both head and tail are integer indices that advance modulo capacity: enqueue moves tail = (tail + 1) % capacity, dequeue moves head = (head + 1) % capacity, both in O(1). The trade-off is fixed capacity – a ring buffer cannot grow; overflow must be handled explicitly, either by rejecting new elements or by expanding (which breaks the O(1) space guarantee). Ring buffers are the implementation of choice for bounded queues in operating systems, audio streaming, and network packet buffers where predictable memory usage matters more than unlimited capacity.
Simple Queue (array)
Naive array implementation
Enqueue: O(1) – write to tail index
Dequeue: O(n) – shift all elements left
Or: O(1) dequeue + O(n) periodic compaction
Wasted space at the front accumulates
Circular Queue (ring buffer)
Wrap-around implementation
Enqueue: O(1) – tail = (tail+1) % capacity
Dequeue: O(1) – head = (head+1) % capacity
Fixed capacity – must handle overflow
Zero wasted space – indices wrap around
Ring buffer – 8 slots, head and tail pointers
Wraparound – tail wraps from slot 7 to slot 0
Full vs empty – the size counter solves it:
When head == tail, the ring buffer is either full or empty – you cannot tell from the indices alone.
The cleanest solution is a separate size counter: enqueue increments it, dequeue decrements it, full means size == capacity, empty means size == 0.
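A minimal ring-buffer sketch using exactly that size counter (class and method names are illustrative, not a standard-library API); enqueue rejects new elements when full, preserving the O(1) space guarantee:

```java
// Fixed-capacity ring buffer with a size counter to distinguish full from empty.
public class RingBuffer {
    private final int[] slots;
    private int head = 0, tail = 0, size = 0;

    public RingBuffer(int capacity) { slots = new int[capacity]; }

    public boolean enqueue(int value) {
        if (size == slots.length) return false; // full – reject explicitly
        slots[tail] = value;
        tail = (tail + 1) % slots.length;       // wrap around past the end
        size++;
        return true;
    }

    public Integer dequeue() {
        if (size == 0) return null;             // empty
        int value = slots[head];
        head = (head + 1) % slots.length;       // advance head, O(1), no shifting
        size--;
        return value;
    }

    public boolean isFull()  { return size == slots.length; }
    public boolean isEmpty() { return size == 0; }
}
```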
A deque (pronounced "deck") generalises both queue and stack – O(1) add/remove at both front and rear. A queue restricts you to add-at-back, remove-from-front. A deque opens both ends for both operations. Java's ArrayDeque is the recommended implementation for all queue, stack, and deque use cases – faster than LinkedList (cache-friendly) and faster than the legacy Stack class (no synchronisation overhead).
Restrict to… | Behaviour | Replaces
addFirst / removeFirst only | LIFO – Stack | Legacy Stack class
addLast / removeFirst only | FIFO – Queue | LinkedList as queue
All four operations | True Deque | Sliding window max, monotonic deque, 0-1 BFS
Deque has its own dedicated page:
covering the circular buffer internals, ArrayDeque vs LinkedList benchmarks, the monotonic deque pattern (Sliding Window Maximum), and full Java/Python implementations.
GOTCHA
ArrayDeque does not permit null elements. If you call offer(null) or add(null), it throws NullPointerException immediately. This matters when processing data that may contain nulls – use LinkedList instead, which permits null elements, or filter nulls before enqueuing. This is the only case where LinkedList is preferable to ArrayDeque as a queue.
CHOOSE
Declare as Queue<T> when you only need FIFO operations – the interface prevents accidentally calling addFirst() or other deque methods. Declare as Deque<T> when you need both-ends access. Use ArrayDeque in both cases – it is faster than LinkedList, uses less memory, and has better cache behaviour.
Section Eight · Decision Guide
Pick The Right Structure
Decision flowchart – choosing the right queue variant
Structure | Enqueue | Dequeue | Ordering | Best For
Queue (ArrayDeque) – this page | O(1) | O(1) | FIFO | BFS, task queues, level-order traversal
Deque | O(1) both ends | O(1) both ends | FIFO or LIFO | Sliding windows, monotonic deque, 0-1 BFS
Stack | O(1) | O(1) | LIFO | Undo, parsing, DFS, backtracking
PriorityQueue | O(log n) | O(log n) | Priority first | Dijkstra, scheduling by priority, top-k
Circular Queue | O(1) | O(1) | FIFO, fixed capacity | Bounded buffers, embedded systems, audio streams
Need LIFO? Yes → ArrayDeque as a stack (push/pop) – replaces the legacy Stack class.
Queue interview problems almost always involve one of two patterns: BFS (use a queue to explore a graph or tree level by level – any problem involving shortest path on unweighted graphs, level-order traversal, or multi-source spreading is a BFS candidate) or monotonic deque (maintain a deque in sorted order to answer range maximum or minimum queries in O(1) as a sliding window moves). Look for the words “level”, “shortest”, “minimum steps”, “spreading”, or “sliding window maximum” as signals that one of these two patterns applies.
Count task frequencies, always schedule the most frequent remaining task – a max-heap (PriorityQueue) on frequencies gives the greedy choice in O(log k) per slot.
A monotonic decreasing deque of indices – remove from the front when the window slides past, remove from the back when a larger element enters.
Interview Pattern:
BFS solves roughly 70% of queue interview problems – whenever you see “shortest path”, “minimum steps”, or “level by level”, reach for a queue and BFS immediately.
The second pattern is monotonic deque: if the problem involves a sliding window and asks for the maximum or minimum in the current window, a deque that evicts smaller (or larger) elements from the rear gives O(1) window queries.
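The monotonic deque pattern can be sketched in Java for the classic sliding-window-maximum problem: the deque holds indices whose values are strictly decreasing, so the front is always the current window's maximum.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

public class SlidingWindowMax {
    // Monotonic decreasing deque of indices – O(n) overall, O(1) per window query.
    static int[] maxSlidingWindow(int[] nums, int k) {
        int[] result = new int[nums.length - k + 1];
        Deque<Integer> deque = new ArrayDeque<>(); // indices; values are decreasing

        for (int i = 0; i < nums.length; i++) {
            // Evict the front index once it slides out of the window.
            if (!deque.isEmpty() && deque.peekFirst() <= i - k) {
                deque.pollFirst();
            }
            // Evict smaller elements from the rear – they can never be the max.
            while (!deque.isEmpty() && nums[deque.peekLast()] < nums[i]) {
                deque.pollLast();
            }
            deque.offerLast(i);
            if (i >= k - 1) {
                result[i - k + 1] = nums[deque.peekFirst()]; // front = window max
            }
        }
        return result;
    }

    public static void main(String[] args) {
        int[] nums = {1, 3, -1, -3, 5, 3, 6, 7};
        System.out.println(Arrays.toString(maxSlidingWindow(nums, 3)));
        // [3, 3, 5, 5, 6, 7]
    }
}
```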