Big O Notation Reference

Understand Big O notation for algorithm complexity.
Covers O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ), and O(n!) with examples.

What is Big O?

O(f(n)) describes the upper bound of an algorithm's growth rate as the input size n increases.

Big O notation describes the worst-case performance of an algorithm. It tells you how the running time or memory usage grows as the input gets larger.
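As a concrete illustration of how growth rate differs by data structure, here is a minimal sketch (the function names and data are hypothetical) contrasting an O(n) list scan with an O(1) average-case hash lookup for the same membership test:

```python
def contains_linear(items, target):
    """O(n): may compare against every element in the worst case."""
    for item in items:
        if item == target:
            return True
    return False

def contains_constant(items_set, target):
    """O(1) average: a hash table lookup, independent of collection size."""
    return target in items_set

data = list(range(1_000_000))
data_set = set(data)
print(contains_linear(data, 999_999))       # worst case: 1,000,000 comparisons
print(contains_constant(data_set, 999_999)) # roughly one hash lookup
```

Both calls return the same answer; only the cost as the input grows differs.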

Common Complexities (Fastest to Slowest)

Big O       Name          Example
O(1)        Constant      Array index lookup, hash table access
O(log n)    Logarithmic   Binary search
O(n)        Linear        Scanning through a list
O(n log n)  Linearithmic  Merge sort, quick sort (average)
O(n²)       Quadratic     Bubble sort, nested loops
O(n³)       Cubic         Matrix multiplication (naive)
O(2ⁿ)       Exponential   Recursive Fibonacci, power set
O(n!)       Factorial     Traveling salesman (brute force)
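To make the exponential row tangible, here is an illustrative sketch (the call counter is added for demonstration) of the naive recursive Fibonacci from the table, which does O(2ⁿ) work because each call spawns two more:

```python
calls = 0

def fib(n):
    """Naive recursive Fibonacci: O(2ⁿ) calls, since fib(n-1) and fib(n-2)
    recompute overlapping subproblems."""
    global calls
    calls += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(20), calls)  # 6765 — but tens of thousands of calls to get there
```

Memoizing the results collapses this to O(n), which is why the same recurrence can have wildly different complexities depending on implementation.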

Growth Comparison

n       O(log n)  O(n)     O(n log n)  O(n²)         O(2ⁿ)
10      3         10       33          100           1,024
100     7         100      664         10,000        1.27 × 10³⁰
1,000   10        1,000    9,966       1,000,000     Huge
10,000  13        10,000   132,877     100,000,000   Huge
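The rows above can be reproduced directly; this sketch recomputes each column (2ⁿ is omitted for the larger n, where it exceeds anything printable in a table):

```python
import math

# Recompute the growth-comparison table rows.
for n in (10, 100, 1_000, 10_000):
    log_n = math.log2(n)
    print(n, round(log_n), n, round(n * log_n), n * n)

# For the last column: 2**100 ≈ 1.27 × 10³⁰, hence "Huge" beyond n = 100.
```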

Example 1

What is the time complexity of searching for a value in a sorted array using binary search?

Binary search halves the search space each step

After k steps, the remaining space is n / 2ᵏ

It stops when n / 2ᵏ = 1, so k = log₂(n)

Time complexity: O(log n)
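The derivation above can be sketched as code; this is a minimal binary search (a standard implementation, not tied to any particular library):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent. O(log n)."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2   # halve the search space each step
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # → 3
```

Each iteration discards half of the remaining range, so at most ⌈log₂(n)⌉ + 1 iterations run.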

Example 2

What is the time complexity of checking all pairs in an array of n elements?

The outer loop runs n times

The inner loop runs up to n times for each outer iteration

Total operations: n × n = n²

Time complexity: O(n²)
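The nested-loop counting above looks like this in code (a hypothetical pair-listing helper for illustration):

```python
def all_pairs(items):
    """Collect every unordered pair of elements. O(n²) time and space."""
    pairs = []
    for i in range(len(items)):             # outer loop: n iterations
        for j in range(i + 1, len(items)):  # inner loop: up to n iterations
            pairs.append((items[i], items[j]))
    return pairs

print(len(all_pairs([1, 2, 3, 4])))  # n(n-1)/2 = 6 pairs
```

Note the inner loop starts at i + 1 to avoid duplicates, giving n(n−1)/2 pairs; the constant factor ½ is dropped, so the complexity is still O(n²).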

When to Use It

Use Big O notation to evaluate and compare algorithms:

  • Choosing the right data structure for a given problem
  • Predicting performance as data grows from hundreds to millions of records
  • Identifying bottlenecks in code during optimization
  • Communicating algorithmic efficiency in technical discussions

Key Notes

  • Big-O notation describes the upper bound on growth rate: O(f(n)) means the algorithm's running time grows no faster than a constant multiple of f(n) as n → ∞. Constants and lower-order terms are dropped: 3n² + 5n + 2 = O(n²).
  • Complexity hierarchy (from best to worst): O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!). Examples: array access O(1); binary search O(log n); linear search O(n); merge sort O(n log n); bubble sort O(n²); naive matrix multiplication O(n³); generating all subsets O(2ⁿ); traveling salesman brute force O(n!).
  • Big-O vs Big-Theta vs Big-Omega: O is an upper bound (worst case); Ω is a lower bound (best case); Θ is a tight bound (exact order). In practice, "Big-O" is often used loosely to mean Θ — when someone says "merge sort is O(n log n)" they mean exactly n log n growth, not just an upper bound.
  • Space complexity matters too: An algorithm's memory usage also has a Big-O complexity. Merge sort is O(n log n) time but O(n) space; heapsort is O(n log n) time and O(1) space (in-place). Time-space trade-offs are common in algorithm design.
  • Applications: Big-O analysis guides algorithm and data structure selection for scalable software: database query optimization (why indexes matter), choosing between sorting algorithms (n log n vs n²), evaluating whether a machine learning model will scale to large datasets, and cryptographic security analysis.
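The first note's claim that constants and lower-order terms are dropped can be checked numerically; this sketch (the polynomial is the one from the note) shows the ratio to n² settling toward the leading constant:

```python
def f(n):
    """The example polynomial from the notes: 3n² + 5n + 2 = O(n²)."""
    return 3 * n * n + 5 * n + 2

for n in (10, 1_000, 100_000):
    print(n, f(n) / (n * n))  # ratio approaches 3 as n grows

# The lower-order terms 5n + 2 become negligible, so only n² matters
# asymptotically — and Big O also discards the constant 3.
```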
