First-In-First-Out (FIFO) structure where elements are added at the rear and removed from the front, enabling O(1) enqueue and dequeue.
Linear
Queue Fundamentals
A queue is a linear data structure that follows the First-In-First-Out (FIFO) principle — the element that has been waiting the longest is always the first one removed. Enqueue adds to the rear, dequeue removes from the front, both in O(1). Queues power BFS traversal, task schedulers, message brokers, and any system where fairness matters.
01
Section One · Foundation
What is a Queue?
A queue has two active ends: the front (where elements leave) and the rear (where elements arrive). You can only add at the rear (enqueue) and remove from the front (dequeue). There is no access to elements in the middle. This ensures strict fairness — the element that arrived first is always served first, just like a real-world queue.
Queues come in several flavours:
Standard Queue
FIFO
Elements enter at the rear and leave at the front. The simplest and most common variant. Backed by a circular array or linked list.
Java ArrayDeque, LinkedList
Python collections.deque
BFS, task scheduling, buffering
Priority Queue
Min/Max Heap
Elements leave in priority order (smallest or largest first), not arrival order. Backed by a binary heap — O(log n) enqueue/dequeue.
Java PriorityQueue
Dijkstra's algorithm, top-K problems
Not a true FIFO queue
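The difference in removal order between the two flavours is easy to see in code. A minimal Java sketch (the `drain` helper and class name are my own):

```java
import java.util.*;

class QueueFlavours {
    // Drain a queue into a list to observe its removal order.
    static List<Integer> drain(Queue<Integer> q) {
        List<Integer> out = new ArrayList<>();
        while (!q.isEmpty()) out.add(q.poll());
        return out;
    }

    public static void main(String[] args) {
        // FIFO queue: elements leave in arrival order
        System.out.println(drain(new ArrayDeque<>(List.of(30, 10, 20))));    // [30, 10, 20]
        // Priority queue: elements leave smallest-first, arrival order ignored
        System.out.println(drain(new PriorityQueue<>(List.of(30, 10, 20)))); // [10, 20, 30]
    }
}
```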
Analogy: A checkout line at a grocery store. The first person who joins the line is the first one served. New customers join at the back. Nobody cuts in line — the order is strictly first-come, first-served. The cashier always serves from the front.
Key Characteristics
Two active ends: front and rear
▶
Enqueue at rear, dequeue at front — both O(1)
FIFO ordering preserved
▶
Oldest element always processed first
No random access to middle elements
▶
Must dequeue everything ahead to reach deeper items
BFS explores level by level
▶
Queue naturally processes nodes in discovery order
Circular array avoids wasted space
▶
front/rear wrap around — no shifting needed
02
Section Two · Operations
Core Operations
Enqueue (Offer)
Adds a new element to the rear of the queue, increasing its size by one.
Pseudocode
function enqueue(queue, value):
    if queue.size == queue.capacity:
        resize(queue)                                  // array-backed: double capacity
    queue[queue.rear] = value                          // place at rear
    queue.rear = (queue.rear + 1) % queue.capacity     // wrap around
    queue.size += 1                                    // O(1)
Circular indexing eliminates shifting. A naïve array queue would shift all elements left on every dequeue — O(n). By using (index + 1) % capacity, both front and rear wrap around the array. No element ever moves. This is how ArrayDeque and collections.deque achieve O(1) for both ends.
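The wraparound described above can be made concrete with a fixed-capacity Java sketch (resizing is omitted for brevity; the class and method names are my own):

```java
import java.util.NoSuchElementException;

// Minimal circular-array queue: (index + 1) % capacity lets front and rear
// wrap around the array, so no element is ever shifted.
class CircularQueue {
    private final int[] data;
    private int front = 0, rear = 0, size = 0;

    CircularQueue(int capacity) { data = new int[capacity]; }

    boolean enqueue(int value) {
        if (size == data.length) return false;   // full (a resizing impl would grow here)
        data[rear] = value;                      // place at rear
        rear = (rear + 1) % data.length;         // wrap around
        size++;
        return true;                             // O(1)
    }

    int dequeue() {
        if (size == 0) throw new NoSuchElementException("QueueEmpty");
        int value = data[front];                 // grab front element
        front = (front + 1) % data.length;       // advance front — nothing shifts
        size--;
        return value;                            // O(1)
    }

    int size() { return size; }
}
```

After a dequeue, a subsequent enqueue reuses the freed slot at the start of the array — the behaviour a linear (non-circular) array queue cannot offer without shifting.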
Dequeue (Poll)
Removes and returns the element at the front of the queue — the one that has been waiting the longest.
Pseudocode
function dequeue(queue):
    if queue.isEmpty():
        throw "QueueEmpty"
    value = queue[queue.front]                         // grab front element
    queue.front = (queue.front + 1) % queue.capacity   // advance front
    queue.size -= 1                                    // O(1)
    return value
Always check isEmpty() before dequeuing. Dequeuing from an empty queue is the most common queue-related crash. In Java poll() returns null while remove() throws an exception. In Python deque.popleft() throws IndexError. Always guard with an emptiness check or use the safe variant.
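A small Java sketch of the two failure modes on an empty queue (class name is illustrative):

```java
import java.util.*;

class EmptyQueueDemo {
    public static void main(String[] args) {
        Queue<Integer> q = new ArrayDeque<>();

        // poll() is the safe variant: returns null on an empty queue
        Integer polled = q.poll();
        System.out.println(polled);                      // null

        // remove() throws instead of returning a sentinel
        try {
            q.remove();
        } catch (NoSuchElementException e) {
            System.out.println("remove() threw");        // this branch runs
        }

        // the usual guard before a destructive dequeue
        if (!q.isEmpty()) {
            System.out.println(q.remove());              // never reached here
        }
    }
}
```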
Peek (Front)
Returns the element at the front of the queue without removing it — a non-destructive look at the next element to be processed.
Pseudocode
function peek(queue):
    if queue.isEmpty():
        throw "QueueEmpty"
    return queue[queue.front]   // O(1) — no mutation
Peek is critical for BFS level processing. In BFS, you frequently need to know the queue size at the start of each level, then process exactly that many elements. Peek lets you inspect the front without advancing, useful for conditional logic before committing to dequeue.
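As an illustration of inspecting before committing, here is a Java sketch in the spirit of a sliding-time-window counter (the 3000 ms window and class name are assumptions for the example):

```java
import java.util.*;

// Keep a queue of event timestamps and evict entries older than a 3000 ms
// window. peek() lets us look at the oldest timestamp before deciding
// whether to dequeue it.
class RecentCounter {
    private final Queue<Integer> times = new ArrayDeque<>();

    // Record an event at time t (ms, non-decreasing) and return how many
    // events fall inside the window [t - 3000, t].
    int ping(int t) {
        times.offer(t);
        while (times.peek() < t - 3000) {   // peek: look, don't commit
            times.poll();                    // stale — now dequeue it
        }
        return times.size();
    }
}
```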
BFS Pattern — Level-Order Traversal
Uses a queue to explore all nodes at distance d before any node at distance d+1, guaranteeing shortest-path discovery in unweighted graphs.
Pseudocode — BFS on a Graph
function bfs(start):
    queue = [start]
    visited = {start}
    while queue is not empty:
        node = queue.dequeue()
        process(node)
        for neighbour in node.neighbours:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.enqueue(neighbour)
// O(V + E) — each vertex/edge processed once
BFS with a queue guarantees shortest path in unweighted graphs. Because the queue processes nodes in discovery order (FIFO), the first time you reach a node is always via the shortest path. This is why BFS is the go-to algorithm for "minimum steps" problems — Dijkstra is just BFS with a priority queue for weighted edges.
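The shortest-path guarantee can be seen in a minimal Java BFS that records distances (graph represented as an adjacency list; names are my own):

```java
import java.util.*;

class ShortestPathBfs {
    // Minimum number of edges from src to dst in an unweighted graph,
    // or -1 if dst is unreachable. FIFO order guarantees the first visit
    // to any node happens at its minimum distance.
    static int minSteps(List<List<Integer>> adj, int src, int dst) {
        int[] dist = new int[adj.size()];
        Arrays.fill(dist, -1);                   // -1 = not yet visited
        Queue<Integer> queue = new ArrayDeque<>();
        dist[src] = 0;
        queue.offer(src);
        while (!queue.isEmpty()) {
            int node = queue.poll();
            if (node == dst) return dist[node];
            for (int next : adj.get(node)) {
                if (dist[next] == -1) {          // first visit = shortest path
                    dist[next] = dist[node] + 1;
                    queue.offer(next);
                }
            }
        }
        return -1;
    }
}
```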
Common Mistake: Using a LinkedList as a queue without the Queue interface. LinkedList exposes get(i), add(i, val), and other random-access methods that break FIFO semantics. Always declare it as Queue<T> q = new LinkedList<>() — or, better yet, use ArrayDeque, which is faster and doesn't permit index access.
03
Section Three · Visuals
Diagrams & Memory Layout
Queue Structure — Linear vs Circular
Linear queue wastes space — circular queue reuses it
Enqueue & Dequeue Walkthrough
Enqueue 50 at rear, then dequeue 10 from front
BFS — Level-Order Traversal
Queue drives BFS: process level by level, shortest path guaranteed
04
Section Four · Code
Java Quick Reference
The most common queue operations using Java's ArrayDeque — the recommended Queue implementation (faster and cleaner than LinkedList).
Java — ArrayDeque as Queue
import java.util.*;
Queue<Integer> q = new ArrayDeque<>();
// O(1) — enqueue at rear
q.offer(10); // [10]
q.offer(20); // [10, 20]
q.offer(30); // [10, 20, 30]

// O(1) — peek at front without removing
int front = q.peek(); // 10

// O(1) — dequeue from front
int val = q.poll(); // 10 → queue is [20, 30]

// O(1) — check emptiness
boolean empty = q.isEmpty(); // false

// O(1) — size
int sz = q.size(); // 2
C# Quick Reference
The same operations using C#'s built-in Queue<T>, plus a BFS level-order traversal.
C# — Queue<int> as Queue
using System;
using System.Collections.Generic;

var q = new Queue<int>();
q.Enqueue(10); // O(1)
q.Enqueue(20);
q.Enqueue(30);

int front = q.Peek(); // 10 — O(1)
int val = q.Dequeue(); // 10 — O(1)
bool empty = q.Count == 0; // false

// BFS level-order
static IList<IList<int>> LevelOrder(TreeNode root) {
    var result = new List<IList<int>>();
    if (root == null) return result;
    var q = new Queue<TreeNode>();
    q.Enqueue(root);
    while (q.Count > 0) {
        int sz = q.Count;
        var level = new List<int>();
        for (int i = 0; i < sz; i++) {
            var node = q.Dequeue();
            level.Add(node.val);
            if (node.left != null) q.Enqueue(node.left);
            if (node.right != null) q.Enqueue(node.right);
        }
        result.Add(level);
    }
    return result;
}
05
Section Five · Complexity
Time & Space Complexity
Operation
Best Case
Average Case
Worst Case
Space
Enqueue
O(1)
O(1) amortized
O(n) on resize
O(1)
Dequeue
O(1)
O(1)
O(1)
O(1)
Peek
O(1)
O(1)
O(1)
O(1)
Search (contains)
O(1)
O(n)
O(n)
O(1)
Why these complexities? Enqueue and dequeue operate on opposite ends of the structure — one pointer advances the rear, the other advances the front. With circular indexing, neither operation requires shifting elements. The only O(n) event is array resize on enqueue (capacity doubles, elements are copied), but amortized across n operations this is O(1). Searching is O(n) because there is no index-based access — you must scan from front to rear.
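The amortized claim is easy to verify mechanically. This sketch (my own, not part of any library) simulates capacity doubling and counts how many element copies resizing performs over n enqueues — the total stays under 2n, i.e. O(1) per enqueue on average:

```java
// Simulate a capacity-doubling array queue and count element copies
// caused by resizing across n enqueues.
class AmortizedResize {
    static long copiesForEnqueues(int n) {
        long copies = 0;
        int capacity = 1, size = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {
                copies += size;      // resize copies every existing element
                capacity *= 2;       // double the backing array
            }
            size++;                  // the enqueue itself is O(1)
        }
        return copies;               // 1 + 2 + 4 + ... < 2n
    }

    public static void main(String[] args) {
        System.out.println(copiesForEnqueues(1_000_000)); // well under 2,000,000
    }
}
```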
Space Complexity Note
A queue of n elements uses O(n) space. Array-backed circular queues may over-allocate by up to 2× capacity. Linked-list-backed queues use exactly n nodes but add one pointer per node. Auxiliary space for all operations is O(1). BFS over a graph with V vertices uses O(V) queue space in the worst case (e.g., a star graph where all neighbors are enqueued at once).
06
Section Six · Decision Guide
When to Use Queue
✓
Use This When
You need FIFO ordering — BFS traversal, task scheduling, print job queues, and message buffers all require processing the oldest item first.
Level-order processing — tree level traversal, shortest path in unweighted graphs, and multi-source BFS (e.g., rotting oranges) all rely on queues.
Producer-consumer patterns — buffering between a fast producer and slow consumer; blocking queues in concurrent systems.
✗
Avoid When
You need LIFO ordering — if the most recent item should be processed first (DFS, undo, parsing), use a Stack instead.
Priority matters more than arrival order — if elements should leave by priority (smallest/largest first), use a PriorityQueue / Heap.
Comparison vs Alternatives
Structure
Access
Insert
Delete
Best For
Queue ← this one
O(1) front only
O(1) enqueue
O(1) dequeue
FIFO processing, BFS, scheduling
Stack
O(1) top only
O(1) push
O(1) pop
LIFO processing, DFS, undo, parsing
Priority Queue
O(1) min/max
O(log n)
O(log n)
Dijkstra, top-K, event simulation
07
Section Seven · Interview Practice
LeetCode Problems
Queue interview problems almost always involve one of three patterns: BFS traversal (shortest path, level-order, multi-source flood fill), sliding window with deque (maximum/minimum in a window), or simulation (task scheduling, recent counter). If the problem says "minimum steps", "shortest path in unweighted graph", or "level by level" — reach for a queue.
Medium · Number of Islands —
BFS from each unvisited land cell; enqueue all adjacent land cells and mark visited — O(m×n) grid traversal.
Medium · Rotting Oranges —
Multi-source BFS: enqueue all rotten oranges at once, then spread rot level by level — the number of BFS levels is the answer.
Medium · Binary Tree Level Order Traversal —
Classic BFS: process queue.size() nodes per level, enqueue children — builds the level list naturally.
Medium · Open the Lock —
BFS on a state graph: each lock state has 8 neighbours (4 dials × 2 directions); queue explores states by distance — first to reach target is shortest.
Hard · Sliding Window Maximum —
Monotonic deque: maintain a decreasing deque of indices; the front is always the max in the current window — O(n) total.
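The monotonic-deque pattern from the last problem can be sketched in Java (class and method names are my own):

```java
import java.util.*;

class SlidingWindowMax {
    // Maximum of each length-k window. The deque holds indices whose values
    // are kept in decreasing order, so the front is always the window max.
    static int[] maxSlidingWindow(int[] nums, int k) {
        int[] out = new int[nums.length - k + 1];
        Deque<Integer> deque = new ArrayDeque<>();   // holds indices
        for (int i = 0; i < nums.length; i++) {
            // drop the front index if it slid out of the window
            if (!deque.isEmpty() && deque.peekFirst() <= i - k) deque.pollFirst();
            // drop smaller values from the back — they can never be a max again
            while (!deque.isEmpty() && nums[deque.peekLast()] <= nums[i]) deque.pollLast();
            deque.offerLast(i);
            if (i >= k - 1) out[i - k + 1] = nums[deque.peekFirst()];
        }
        return out;   // O(n) total: each index enters and leaves the deque once
    }
}
```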
Interview Pattern: When a problem asks for "shortest path", "minimum moves", "level-by-level", or "spread/flood" in an unweighted graph or grid, BFS with a queue is almost always the right approach. For "sliding window min/max", combine a deque with the monotonic property — the front of the deque gives you the answer in O(1).