
Queue Fundamentals

A queue is a linear data structure that follows the First-In-First-Out (FIFO) principle — the element that has been waiting the longest is always the first one removed. Enqueue adds to the rear, dequeue removes from the front, both in O(1). Queues power BFS traversal, task schedulers, message brokers, and any system where fairness matters.

01
Section One · Foundation

What Is a Queue?

Diagram: out ◀ front ▼ [ 10 | 20 | 30 | 40 ] ▲ rear ◀ in

A queue has two active ends: the front (where elements leave) and the rear (where elements arrive). You can only add at the rear (enqueue) and remove from the front (dequeue). There is no access to elements in the middle. This ensures strict fairness — the element that arrived first is always served first, just like a real-world queue.

Queues come in several flavours:

Standard Queue (FIFO)

Elements enter at the rear and leave at the front. The simplest and most common variant. Backed by a circular array or linked list.

  • Java ArrayDeque, LinkedList
  • Python collections.deque
  • BFS, task scheduling, buffering

Priority Queue (Min/Max Heap)

Elements leave in priority order (smallest or largest first), not arrival order. Backed by a binary heap — O(log n) enqueue/dequeue.

  • Java PriorityQueue
  • Dijkstra's algorithm, top-K problems
  • Not a true FIFO queue
Analogy: A checkout line at a grocery store. The first person who joins the line is the first one served. New customers join at the back. Nobody cuts in line — the order is strictly first-come, first-served. The cashier always serves from the front.
Key Characteristics

  • Two active ends: front and rear. Enqueue at rear, dequeue at front, both O(1).
  • FIFO ordering preserved: the oldest element is always processed first.
  • No random access to middle elements: you must dequeue everything ahead to reach deeper items.
  • BFS explores level by level: the queue naturally processes nodes in discovery order.
  • Circular array avoids wasted space: front and rear wrap around, so no shifting is needed.
02
Section Two · Operations

Core Operations

Enqueue (Offer)
Adds a new element to the rear of the queue, increasing its size by one.
Pseudocode
function enqueue(queue, value):
    if queue.size == queue.capacity:
        resize(queue)                                  // array-backed: double capacity
    queue[queue.rear] = value                          // place at rear
    queue.rear = (queue.rear + 1) % queue.capacity     // wrap around
    queue.size += 1                                    // O(1)
Circular indexing eliminates shifting. A naïve array queue would shift all elements left on every dequeue — O(n). By using (index + 1) % capacity, both front and rear wrap around the array. No element ever moves. This is how ArrayDeque and collections.deque achieve O(1) for both ends.
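The circular-indexing scheme described above can be sketched as a minimal array-backed queue. The CircularQueue class, its starting capacity, and its doubling policy are illustrative, not a standard-library type:

```java
import java.util.NoSuchElementException;

// Minimal circular-array queue: front and rear wrap with modulo,
// so no element is ever shifted. Capacity doubles when full.
class CircularQueue {
    private int[] buf = new int[4];
    private int front = 0, size = 0;

    void enqueue(int value) {
        if (size == buf.length) resize();          // O(n) only on growth
        buf[(front + size) % buf.length] = value;  // place at rear, wrapping
        size++;                                    // amortized O(1)
    }

    int dequeue() {
        if (size == 0) throw new NoSuchElementException("QueueEmpty");
        int value = buf[front];                    // grab front element
        front = (front + 1) % buf.length;          // advance front, wrapping
        size--;
        return value;                              // O(1)
    }

    int size() { return size; }

    private void resize() {
        int[] bigger = new int[buf.length * 2];
        for (int i = 0; i < size; i++)             // copy out in FIFO order
            bigger[i] = buf[(front + i) % buf.length];
        buf = bigger;
        front = 0;
    }
}
```

Note that the copy inside resize() is the only place elements ever move; enqueue and dequeue just advance indices.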
Dequeue (Poll)
Removes and returns the element at the front of the queue — the one that has been waiting the longest.
Pseudocode
function dequeue(queue):
    if queue.isEmpty():
        throw "QueueEmpty"
    value = queue[queue.front]                         // grab front element
    queue.front = (queue.front + 1) % queue.capacity   // advance front
    queue.size -= 1
    return value                                       // O(1)
Always check isEmpty() before dequeuing. Dequeuing from an empty queue is the most common queue-related crash. In Java poll() returns null while remove() throws an exception. In Python deque.popleft() throws IndexError. Always guard with an emptiness check or use the safe variant.
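A small sketch of the safe-variant-vs-exception behaviour, using the real ArrayDeque API (poll() returns null on empty, remove() throws NoSuchElementException); the DequeueSafety wrapper class is illustrative:

```java
import java.util.ArrayDeque;
import java.util.NoSuchElementException;
import java.util.Queue;

class DequeueSafety {
    // poll() is the safe variant: returns null instead of throwing.
    static Integer safeDequeue(Queue<Integer> q) {
        return q.poll();                // null when the queue is empty
    }

    // remove() throws NoSuchElementException on an empty queue.
    static boolean removeThrowsOnEmpty() {
        Queue<Integer> q = new ArrayDeque<>();
        try {
            q.remove();
            return false;
        } catch (NoSuchElementException e) {
            return true;                // the expected empty-queue crash
        }
    }
}
```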
Peek (Front)
Returns the element at the front of the queue without removing it — a non-destructive look at the next element to be processed.
Pseudocode
function peek(queue):
    if queue.isEmpty():
        throw "QueueEmpty"
    return queue[queue.front]                          // O(1) — no mutation
Peek is critical for BFS level processing. In BFS, you frequently need to know the queue size at the start of each level, then process exactly that many elements. Peek lets you inspect the front without advancing, useful for conditional logic before committing to dequeue.
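The size-snapshot pattern mentioned here can be sketched as a level-order traversal; the Node class and the levels helper are illustrative, not from any library:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

class LevelOrder {
    // Minimal binary-tree node for illustration.
    static class Node {
        int val; Node left, right;
        Node(int v) { val = v; }
        Node(int v, Node l, Node r) { val = v; left = l; right = r; }
    }

    // Snapshot queue.size() at the start of each level, then dequeue
    // exactly that many nodes; whatever remains afterwards is the next level.
    static List<List<Integer>> levels(Node root) {
        List<List<Integer>> out = new ArrayList<>();
        if (root == null) return out;
        Queue<Node> q = new ArrayDeque<>();
        q.offer(root);
        while (!q.isEmpty()) {
            int levelSize = q.size();            // nodes at this depth
            List<Integer> level = new ArrayList<>();
            for (int i = 0; i < levelSize; i++) {
                Node n = q.poll();
                level.add(n.val);
                if (n.left != null) q.offer(n.left);
                if (n.right != null) q.offer(n.right);
            }
            out.add(level);
        }
        return out;
    }
}
```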
BFS Pattern — Level-Order Traversal
Uses a queue to explore all nodes at distance d before any node at distance d+1, guaranteeing shortest-path discovery in unweighted graphs.
Pseudocode — BFS on a Graph
function bfs(start):
    queue = [start]
    visited = {start}
    while queue is not empty:
        node = queue.dequeue()
        process(node)
        for neighbour in node.neighbours:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.enqueue(neighbour)
    // O(V + E) — each vertex/edge processed once
BFS with a queue guarantees shortest path in unweighted graphs. Because the queue processes nodes in discovery order (FIFO), the first time you reach a node is always via the shortest path. This is why BFS is the go-to algorithm for "minimum steps" problems — Dijkstra is just BFS with a priority queue for weighted edges.
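A sketch of the shortest-path guarantee, assuming the graph is given as an adjacency list of vertex indices; the ShortestPath class and the -1-means-unreachable convention are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

class ShortestPath {
    // BFS distance from `start` to every vertex of an unweighted graph.
    // First discovery is the shortest path, because the FIFO queue hands
    // out vertices in non-decreasing distance order.
    static int[] distances(int[][] adj, int start) {
        int[] dist = new int[adj.length];
        Arrays.fill(dist, -1);                 // -1 = not yet reached
        dist[start] = 0;
        Queue<Integer> q = new ArrayDeque<>();
        q.offer(start);
        while (!q.isEmpty()) {
            int node = q.poll();
            for (int nb : adj[node]) {
                if (dist[nb] == -1) {          // first (hence shortest) visit
                    dist[nb] = dist[node] + 1;
                    q.offer(nb);
                }
            }
        }
        return dist;
    }
}
```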
Common Mistake: Using a LinkedList as a queue without the Queue interface. LinkedList exposes get(i), add(i, val), and other random-access methods that break FIFO semantics. Always declare as Queue<T> q = new LinkedList<>() or better yet, use ArrayDeque which is faster and doesn't permit index access.
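A small illustration of the interface-discipline point above; the QueueDiscipline class and method names are hypothetical:

```java
import java.util.ArrayDeque;
import java.util.LinkedList;
import java.util.Queue;

class QueueDiscipline {
    // Declaring the variable as Queue<T> hides LinkedList's index-based
    // methods, so only FIFO operations (offer/poll/peek) are reachable.
    static int frontViaLinkedList() {
        Queue<Integer> q = new LinkedList<>(); // Queue view: no get(i)/add(i, v)
        q.offer(1);
        q.offer(2);
        return q.peek();                       // returns 1: FIFO preserved
    }

    // ArrayDeque: same Queue interface, faster, and no index access at all.
    static int frontViaArrayDeque() {
        Queue<Integer> q = new ArrayDeque<>();
        q.offer(1);
        q.offer(2);
        return q.peek();
    }
}
```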
03
Section Three · Visuals

Diagrams & Memory Layout

Queue Structure — Linear vs Circular
Linear queue wastes space — circular queue reuses it
Diagram — linear (wasteful): [ _ | _ | 30 | 40 | 50 ] leaves dead slots behind the front after dequeues. Circular (efficient): with front=2, the rear wraps to index [0], e.g. [ 60 | _ | 30 | 40 | 50 ], reusing dequeued slots so no space is wasted.
Enqueue & Dequeue Walkthrough
Enqueue 50 at rear, then dequeue 10 from front
Diagram — before: [ 10 | 20 | 30 ]. Enqueue 50 at the rear: [ 10 | 20 | 30 | 50 ]. Dequeue 10 from the front: [ 20 | 30 | 50 ]. Enqueue at rear O(1), dequeue at front O(1).
BFS — Level-Order Traversal
Queue drives BFS: process level by level, shortest path guaranteed
Diagram — tree 1 with children 2, 3 and grandchildren 4, 5. Level 0: dequeue 1. Level 1: dequeue 2, 3. Level 2: dequeue 4, 5. Queue order: [1] → [2,3] → [4,5] → [] done.
04
Section Four · Code

Java Quick Reference

The most common queue operations using Java's ArrayDeque — the recommended Queue implementation (faster and cleaner than LinkedList).

Java — ArrayDeque as Queue
import java.util.*;

Queue<Integer> q = new ArrayDeque<>();

// O(1) — enqueue at rear
q.offer(10);                  // [10]
q.offer(20);                  // [10, 20]
q.offer(30);                  // [10, 20, 30]

// O(1) — peek at front without removing
int front = q.peek();         // 10

// O(1) — dequeue from front
int val = q.poll();           // 10 → queue is [20, 30]

// O(1) — check emptiness
boolean empty = q.isEmpty();  // false

// O(1) — size
int sz = q.size();            // 2
05
Section Five · Complexity

Time & Space Complexity

Operation         | Best Case | Average Case   | Worst Case     | Space
------------------|-----------|----------------|----------------|------
Enqueue           | O(1)      | O(1) amortized | O(n) on resize | O(1)
Dequeue           | O(1)      | O(1)           | O(1)           | O(1)
Peek              | O(1)      | O(1)           | O(1)           | O(1)
Search (contains) | O(1)      | O(n)           | O(n)           | O(1)
Why these complexities? Enqueue and dequeue operate on opposite ends of the structure — one pointer advances the rear, the other advances the front. With circular indexing, neither operation requires shifting elements. The only O(n) event is array resize on enqueue (capacity doubles, elements are copied), but amortized across n operations this is O(1). Searching is O(n) because there is no index-based access — you must scan from front to rear.
Space Complexity Note

A queue of n elements uses O(n) space. Array-backed circular queues may over-allocate by up to 2× capacity. Linked-list-backed queues use exactly n nodes but add one pointer per node. Auxiliary space for all operations is O(1). BFS over a graph with V vertices uses O(V) queue space in the worst case (e.g., a star graph where all neighbors are enqueued at once).

06
Section Six · Decision Guide

When to Use a Queue

Use This When

  • You need FIFO ordering — BFS traversal, task scheduling, print job queues, and message buffers all require processing the oldest item first.
  • Level-order processing — tree level traversal, shortest path in unweighted graphs, and multi-source BFS (e.g., rotting oranges) all rely on queues.
  • Producer-consumer patterns — buffering between a fast producer and slow consumer; blocking queues in concurrent systems.
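The producer-consumer bullet can be sketched with java.util.concurrent's ArrayBlockingQueue; the tiny capacity and item count here are arbitrary choices for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class ProducerConsumer {
    // A bounded BlockingQueue buffers between a fast producer and a slow
    // consumer: put() blocks when the buffer is full, take() when empty.
    static List<Integer> run(int items) throws InterruptedException {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(2); // small on purpose
        List<Integer> consumed = new ArrayList<>();

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++)
                    buffer.put(i);            // blocks while buffer is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        for (int i = 0; i < items; i++)
            consumed.add(buffer.take());      // blocks while buffer is empty
        producer.join();
        return consumed;                      // FIFO order preserved
    }
}
```

The bounded capacity is what provides back-pressure: the producer cannot run arbitrarily far ahead of the consumer.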

Avoid When

  • You need LIFO ordering — if the most recent item should be processed first (DFS, undo, parsing), use a Stack instead.
  • Priority matters more than arrival order — if elements should leave by priority (smallest/largest first), use a PriorityQueue / Heap.
Comparison vs Alternatives
Structure           | Access          | Insert       | Delete       | Best For
--------------------|-----------------|--------------|--------------|----------------------------------
Queue (← this one)  | O(1) front only | O(1) enqueue | O(1) dequeue | FIFO processing, BFS, scheduling
Stack               | O(1) top only   | O(1) push    | O(1) pop     | LIFO processing, DFS, undo, parsing
Priority Queue      | O(1) min/max    | O(log n)     | O(log n)     | Dijkstra, top-K, event simulation
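To make the Queue-vs-PriorityQueue rows concrete, a small sketch draining both after identical insertions (the OrderingDemo class is illustrative; ArrayDeque and PriorityQueue are the real JDK types):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;
import java.util.Queue;

class OrderingDemo {
    // Drains a queue into a list, making its removal order visible.
    static List<Integer> drain(Queue<Integer> q) {
        List<Integer> out = new ArrayList<>();
        while (!q.isEmpty()) out.add(q.poll());
        return out;
    }

    static List<Integer> fifoOrder() {
        Queue<Integer> q = new ArrayDeque<>();     // FIFO: arrival order
        q.offer(30); q.offer(10); q.offer(20);
        return drain(q);
    }

    static List<Integer> priorityOrder() {
        Queue<Integer> q = new PriorityQueue<>();  // min-heap: smallest first
        q.offer(30); q.offer(10); q.offer(20);
        return drain(q);
    }
}
```

Same interface, same insertions, different removal order: that difference is the entire table row.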
07
Section Seven · Interview Practice

LeetCode Problems

Queue interview problems almost always involve one of three patterns: BFS traversal (shortest path, level-order, multi-source flood fill), sliding window with deque (maximum/minimum in a window), or simulation (task scheduling, recent counter). If the problem says "minimum steps", "shortest path in unweighted graph", or "level by level" — reach for a queue.

  • Medium Number of Islands — BFS from each unvisited land cell; enqueue all adjacent land cells and mark visited — O(m×n) grid traversal.
  • Medium Rotting Oranges — Multi-source BFS: enqueue all rotten oranges at once, then spread rot level by level — the number of BFS levels is the answer.
  • Medium Binary Tree Level Order Traversal — Classic BFS: process queue.size() nodes per level, enqueue children — builds the level list naturally.
  • Medium Open the Lock — BFS on a state graph: each lock state has 8 neighbours (4 dials × 2 directions); queue explores states by distance — first to reach target is shortest.
  • Hard Sliding Window Maximum — Monotonic deque: maintain a decreasing deque of indices; the front is always the max in the current window — O(n) total.
Interview Pattern: When a problem asks for "shortest path", "minimum moves", "level-by-level", or "spread/flood" in an unweighted graph or grid, BFS with a queue is almost always the right approach. For "sliding window min/max", combine a deque with the monotonic property — the front of the deque gives you the answer in O(1).
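The monotonic-deque pattern described above can be sketched as follows (the SlidingWindowMax class name is illustrative; the algorithm is the standard one for this problem):

```java
import java.util.ArrayDeque;
import java.util.Deque;

class SlidingWindowMax {
    // Monotonic deque of indices, values decreasing from front to rear.
    // The front index always holds the max of the current window; each
    // index is pushed and popped at most once, so total work is O(n).
    static int[] maxPerWindow(int[] nums, int k) {
        int n = nums.length;
        int[] out = new int[n - k + 1];
        Deque<Integer> dq = new ArrayDeque<>();
        for (int i = 0; i < n; i++) {
            if (!dq.isEmpty() && dq.peekFirst() <= i - k)
                dq.pollFirst();                      // front fell out of the window
            while (!dq.isEmpty() && nums[dq.peekLast()] <= nums[i])
                dq.pollLast();                       // a smaller tail can never be max
            dq.offerLast(i);
            if (i >= k - 1)
                out[i - k + 1] = nums[dq.peekFirst()]; // front is the window max
        }
        return out;
    }
}
```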