Big O Notation Reference
Understand Big O notation for algorithm complexity.
Covers O(1), O(log n), O(n), O(n log n), O(n²), and O(2ⁿ) with examples.
What is Big O?
Big O notation describes an upper bound on an algorithm's growth rate, most commonly its worst-case performance. It tells you how the running time or memory usage grows as the input gets larger.
Common Complexities (Fastest to Slowest)
| Big O | Name | Example |
|---|---|---|
| O(1) | Constant | Array index lookup, hash table access |
| O(log n) | Logarithmic | Binary search |
| O(n) | Linear | Scanning through a list |
| O(n log n) | Linearithmic | Merge sort, quick sort (average) |
| O(n²) | Quadratic | Bubble sort, nested loops |
| O(n³) | Cubic | Matrix multiplication (naive) |
| O(2ⁿ) | Exponential | Recursive Fibonacci, power set |
| O(n!) | Factorial | Traveling salesman (brute force) |
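The first few rows of the table can be illustrated with small Python functions (the function names below are our own, chosen for illustration):

```python
def constant_lookup(arr, i):
    # O(1): direct index access takes the same time regardless of len(arr).
    return arr[i]

def linear_scan(arr, target):
    # O(n): in the worst case, every element is examined once.
    for x in arr:
        if x == target:
            return True
    return False

def has_duplicate(arr):
    # O(n^2): nested loops compare every pair of elements.
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):
            if arr[i] == arr[j]:
                return True
    return False
```

Note that `has_duplicate` could be rewritten with a hash set to run in O(n) average time, a common pattern of trading space for time.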
Growth Comparison
| n | O(log n) | O(n) | O(n log n) | O(n²) | O(2ⁿ) |
|---|---|---|---|---|---|
| 10 | 3 | 10 | 33 | 100 | 1,024 |
| 100 | 7 | 100 | 664 | 10,000 | 1.27 × 10³⁰ |
| 1,000 | 10 | 1,000 | 9,966 | 1,000,000 | Huge |
| 10,000 | 13 | 10,000 | 132,877 | 100,000,000 | Huge |
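The values in the comparison table can be reproduced with a short sketch (rounding to the nearest integer, as the table does):

```python
import math

def growth(n):
    # Rounded operation counts for each complexity class at input size n.
    return {
        "log n": round(math.log2(n)),
        "n": n,
        "n log n": round(n * math.log2(n)),
        "n^2": n * n,
    }

for n in (10, 100, 1000, 10000):
    print(n, growth(n))
```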
Example 1
What is the time complexity of searching for a value in a sorted array using binary search?
- Binary search halves the search space each step
- After k steps, the remaining space is n / 2ᵏ
- It stops when n / 2ᵏ = 1, so k = log₂(n)
- Time complexity: O(log n)
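A minimal binary search sketch that follows these steps (assumes the input list is sorted):

```python
def binary_search(arr, target):
    # Each iteration halves the search space, so the loop runs O(log n) times.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid          # found: return the index
        elif arr[mid] < target:
            lo = mid + 1        # discard the lower half
        else:
            hi = mid - 1        # discard the upper half
    return -1                   # not found
```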
Example 2
What is the time complexity of checking all pairs in an array of n elements?
- The outer loop runs n times
- The inner loop runs up to n times for each outer iteration
- Total operations: n × n = n²
- Time complexity: O(n²)
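As a sketch of the nested-loop pattern, this helper collects every unordered pair. It examines n(n−1)/2 pairs, and since constants are dropped, that is still O(n²):

```python
def all_pairs(arr):
    # Outer loop: n iterations; inner loop: up to n-1 each time -> O(n^2).
    pairs = []
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            pairs.append((arr[i], arr[j]))
    return pairs
```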
When to Use It
Use Big O notation to evaluate and compare algorithms:
- Choosing the right data structure for a given problem
- Predicting performance as data grows from hundreds to millions of records
- Identifying bottlenecks in code during optimization
- Communicating algorithmic efficiency in technical discussions
Key Notes
- Big-O notation describes the upper bound on growth rate: O(f(n)) means the algorithm's running time grows no faster than a constant multiple of f(n) as n → ∞. Constants and lower-order terms are dropped: 3n² + 5n + 2 = O(n²).
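The "drop constants and lower-order terms" rule can be checked numerically: the ratio of 3n² + 5n + 2 to n² approaches the leading constant 3 as n grows, which is why only the n² term matters asymptotically.

```python
def f(n):
    # Exact operation count: 3n^2 + 5n + 2.
    return 3 * n * n + 5 * n + 2

for n in (10, 1000, 100000):
    # The ratio f(n) / n^2 converges to the leading coefficient, 3.
    print(n, f(n) / (n * n))
```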
- Complexity hierarchy (from best to worst): O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ) < O(n!). Examples: array access O(1); binary search O(log n); linear search O(n); merge sort O(n log n); bubble sort O(n²); generating all subsets O(2ⁿ); traveling salesman brute force O(n!).
- Big-O vs Big-Theta vs Big-Omega: O is an upper bound (worst case); Ω is a lower bound (best case); Θ is a tight bound (exact order). In practice, "Big-O" is often used loosely to mean Θ — when someone says "merge sort is O(n log n)" they mean exactly n log n growth, not just an upper bound.
- Space complexity matters too: An algorithm's memory usage also has a Big-O complexity. Merge sort is O(n log n) time but O(n) space; heapsort is O(n log n) time and O(1) space (in-place). Time-space trade-offs are common in algorithm design.
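The merge sort trade-off mentioned above can be seen in a minimal sketch: the recursion and merging run in O(n log n) time, but each merge builds a new list, so the auxiliary space is O(n).

```python
def merge_sort(arr):
    # O(n log n) time: log n levels of recursion, O(n) merging work per level.
    # O(n) auxiliary space: the merged lists are allocated separately.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # append any leftovers from either half
    merged.extend(right[j:])
    return merged
```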
- Applications: Big-O analysis guides algorithm and data structure selection for scalable software: database query optimization (why indexes matter), choosing between sorting algorithms (n log n vs n²), evaluating whether a machine learning model will scale to large datasets, and cryptographic security analysis.