# Sorting Algorithm Complexity

Compare Big-O time and space complexity of sorting algorithms. Covers bubble, merge, quick, heap, and radix sort with best, average, and worst case analysis.

## Sorting Complexity Comparison

### Time Complexity Table
| Algorithm | Best Case | Average Case | Worst Case | Space |
|---|---|---|---|---|
| Bubble Sort | O(n) | O(n²) | O(n²) | O(1) |
| Selection Sort | O(n²) | O(n²) | O(n²) | O(1) |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) |
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) |
| Counting Sort | O(n + k) | O(n + k) | O(n + k) | O(k) |
| Radix Sort | O(d(n + k)) | O(d(n + k)) | O(d(n + k)) | O(n + k) |
### Key Concepts

| Property | Meaning |
|---|---|
| Stable | Equal elements keep their original relative order |
| In-place | Uses O(1) extra memory (sorts within the array) |
| Comparison-based | Sorts by comparing pairs of elements |
| k | Range of input values (counting sort) or digit alphabet size (radix sort) |
| d | Number of digit positions processed (radix sort) |
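Stability is easiest to see with concrete data. A minimal sketch in Python (whose built-in `sorted()` happens to be stable), using made-up records for illustration:

```python
# Hypothetical records: (name, score). Two pairs tie on score.
records = [("alice", 2), ("bob", 1), ("carol", 2), ("dave", 1)]

# A stable sort keeps tied elements in their original input order:
# bob before dave (both score 1), alice before carol (both score 2).
by_score = sorted(records, key=lambda r: r[1])
print(by_score)
# → [('bob', 1), ('dave', 1), ('alice', 2), ('carol', 2)]
```

An unstable sort (e.g. a plain heap sort or quicksort) is free to emit the tied pairs in either order.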
### Example 1

You need to sort 1 million records and memory is limited. Which algorithm is best?

Heap sort: O(n log n) time with O(1) space. It sorts in place without needing extra arrays, making it the right choice when memory is constrained but a guaranteed O(n log n) bound is still required.
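To make the in-place, O(1)-extra-space claim concrete, here is a minimal heap sort sketch in Python (illustrative only; the function name is ours, and a production codebase would normally call the standard library's sort instead):

```python
def heap_sort(a):
    """In-place heap sort: O(n log n) time, O(1) extra space."""
    n = len(a)

    def sift_down(root, end):
        # Push a[root] down until the max-heap property holds in a[:end].
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and a[child] < a[child + 1]:
                child += 1                      # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    # Build a max-heap bottom-up: O(n) total.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n)
    # Repeatedly move the current maximum to the end, then restore the heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a

print(heap_sort([5, 3, 8, 1, 9, 2]))  # → [1, 2, 3, 5, 8, 9]
```

Note that the swaps reuse the input array throughout; the only extra storage is a handful of index variables.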
### Example 2

You have a nearly sorted array of 10,000 elements with only a few out of place. Which algorithm is fastest?

Insertion sort: its best case is O(n) on nearly sorted data, because each out-of-place element needs only a few shifts to reach its position. Its running time approaches O(n) here, making it the best choice for nearly sorted input.
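A minimal insertion sort sketch (function name ours): the inner `while` loop runs only as many times as there are inversions, which is why nearly sorted input finishes in close to linear time.

```python
def insertion_sort(a):
    """O(n^2) worst case, but roughly O(n + inversions):
    on nearly sorted data the inner loop rarely fires."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements right until key's slot is found.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([1, 2, 4, 3, 5]))  # one inversion → almost no work
# → [1, 2, 3, 4, 5]
```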
## When to Use It
Use this reference when choosing a sorting algorithm:
- Selecting the right sort for your data size and constraints
- Understanding tradeoffs between speed, memory, and stability
- Optimizing code performance in software engineering
- Preparing for technical interviews and algorithm courses
## Key Notes
- Comparison sort lower bound: Ω(n log n): Any algorithm that sorts by pairwise comparisons must make at least log₂(n!) comparisons in the worst case. Proved by the decision-tree argument: a binary decision tree with n! leaves requires height ≥ log₂(n!), which by Stirling's approximation is ≈ n log₂ n, i.e. Θ(n log n).
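The Stirling step above can be checked numerically. A small sketch comparing the exact decision-tree bound log₂(n!) with the approximation n log₂ n − n log₂ e (the variable names are ours):

```python
import math

n = 1000
# Exact lower bound on worst-case comparisons: height of a binary
# decision tree with n! leaves.
exact = math.log2(math.factorial(n))
# Stirling's approximation: log2(n!) ≈ n log2 n - n log2 e.
stirling = n * math.log2(n) - n / math.log(2)

print(round(exact, 1), round(stirling, 1))
# The two agree to well under 1% at n = 1000, and both sit below
# the cruder n log2 n figure often quoted.
```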
- Optimal comparison sorts: Merge sort: O(n log n) worst/average, O(n) extra space. Heap sort: O(n log n) worst, O(1) extra space. Timsort (Python, Java): O(n log n) worst/average with an O(n) best case on already-ordered runs — highly optimized for real data patterns and used in most modern standard libraries.
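Of these, merge sort is the simplest to write down. A minimal top-down sketch (function name ours, not a library API), with the `<=` comparison that keeps it stable:

```python
def merge_sort(a):
    """Stable merge sort: O(n log n) in every case, O(n) extra space."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:        # <= preserves stability on ties
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([6, 2, 9, 1, 5]))  # → [1, 2, 5, 6, 9]
```

The O(n) space cost is visible in the slicing and the `merged` buffer; heap sort trades that away at the price of stability.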
- Linear-time sorts (non-comparison): Counting sort: O(n + k) where k is the value range — fast when k is small relative to n. Radix sort: O(d(n + k)) where d is digits/characters — effective for fixed-width integers and strings. Bucket sort: O(n + k) average for uniformly distributed data. None work for arbitrary comparison-based ordering.
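Counting sort shows most directly how the comparison lower bound is sidestepped: it never compares elements, only indexes by value. A minimal sketch (function name ours), assuming non-negative integers below a known bound k:

```python
def counting_sort(a, k):
    """Counting sort for integers in [0, k): O(n + k) time, O(k) space.
    No comparisons between elements — it indexes by value instead."""
    counts = [0] * k
    for x in a:
        counts[x] += 1                 # tally each value: O(n)
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)    # emit values in order: O(n + k)
    return out

print(counting_sort([3, 1, 0, 3, 2], k=4))  # → [0, 1, 2, 3, 3]
```

Radix sort applies the same idea digit by digit, which is why its cost picks up the extra factor d.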
- Quicksort — fast in practice despite O(n²) worst case: Average case O(n log n) with small constants; in-place, with O(log n) expected stack depth. Worst case (e.g. already-sorted or reverse-sorted input with a naive first-element pivot) is O(n²) — mitigated by random pivot selection or median-of-three. Fastest in practice for most real-world datasets.
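The random-pivot mitigation can be sketched directly. A minimal in-place quicksort with Hoare-style partitioning (function name ours; illustrative, not an optimized library sort):

```python
import random

def quick_sort(a, lo=0, hi=None):
    """In-place quicksort, random pivot: O(n log n) expected.
    Randomizing the pivot makes the O(n^2) worst case depend on
    the random draws, not on any fixed adversarial input."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[random.randint(lo, hi)]
    i, j = lo, hi
    while i <= j:                       # Hoare-style partition
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    quick_sort(a, lo, j)                # recurse into both partitions
    quick_sort(a, i, hi)
    return a

print(quick_sort([5, 1, 4, 2, 3]))  # → [1, 2, 3, 4, 5]
```

With a fixed first-element pivot, feeding this an already-sorted array would degrade every partition to size n − 1; the `random.randint` call is the one-line fix.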
- Applications: Sorting algorithm selection affects database query performance (ORDER BY), spreadsheet operations, search index construction, genome sequence alignment, and any pipeline where data must be ordered. Understanding time-space trade-offs guides correct algorithm choice for each use case.