Big O Notation
Understand algorithm time complexity with Big O notation.
Compare how algorithms scale as input size grows.
The Formula
Big O notation expresses how an algorithm's runtime or space usage scales with input size. It focuses on the dominant term and ignores constants and lower-order terms.
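To make "dominant term" concrete, here is a minimal sketch (the routine and its name are illustrative, not from the source) that counts basic operations for code with one nested loop plus a single pass. The exact count is n² + n + 1, but as n grows the n² term dominates, so the routine is O(n²):

```python
def count_ops(n):
    """Count basic operations for a hypothetical routine:
    one setup step, one single pass, and one nested loop."""
    ops = 1  # one-time setup (constant term)
    for i in range(n):
        ops += 1  # single pass over the input (linear term)
        for j in range(n):
            ops += 1  # nested loop over the input (quadratic term)
    return ops  # exactly n*n + n + 1

# The ratio count_ops(n) / n² approaches 1 as n grows,
# which is why the lower-order terms and constants are dropped.
for n in (10, 100, 1000):
    print(n, count_ops(n), n * n)
```

For n = 1000, the exact count is 1,001,001 against a dominant term of 1,000,000 — the difference the notation ignores is about 0.1%.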
Variables
| Symbol | Meaning |
|---|---|
| n | Size of the input |
| O(1) | Constant time — does not depend on input size |
| O(log n) | Logarithmic time — halves the problem each step (e.g., binary search) |
| O(n) | Linear time — examines each element once |
| O(n log n) | Linearithmic time — efficient sorting algorithms |
| O(n²) | Quadratic time — nested loops over the input |
| O(2ⁿ) | Exponential time — doubles with each additional input element |
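A few of the classes in the table can be illustrated with short functions (these are generic textbook examples, assumed for illustration):

```python
def first_item(items):
    """O(1): one index operation, regardless of list size."""
    return items[0]

def linear_search(items, target):
    """O(n): may examine every element once in the worst case."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def has_duplicate(items):
    """O(n²): nested loops compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```

Note that `has_duplicate` could be rewritten with a set in O(n) time — a common case of choosing a data structure to improve the complexity class.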
Example 1
Comparing O(n) vs O(n²) for n = 1,000
O(n): approximately 1,000 operations
O(n²): approximately 1,000,000 operations
The quadratic algorithm performs 1,000 times as many operations
Example 2
Binary search on a sorted array of 1,000,000 elements
Complexity: O(log n)
log₂(1,000,000) ≈ 20
At most about 20 comparisons to find any element
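One way to see the ~20-comparison bound is to instrument a standard binary search with a step counter (a sketch; the counter tracks loop iterations, each of which halves the remaining range):

```python
def binary_search(items, target):
    """Binary search on a sorted list.
    Returns (index, steps); index is -1 if target is absent.
    Each step halves the search range, so steps <= log2(n) + 1."""
    lo, hi = 0, len(items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        if items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1, steps

data = list(range(1_000_000))
idx, steps = binary_search(data, 765_432)
print(idx, steps)  # steps stays at or below about 20
```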
When to Use It
Use Big O notation when:
- Comparing the efficiency of different algorithms
- Predicting how an algorithm will perform on larger inputs
- Choosing the right data structure for a problem
- Identifying performance bottlenecks in code