
Big-O Algorithm Complexity Comparison

Compare operation counts for O(1), O(log n), O(n), O(n log n), and O(n²) at any input size.
See exactly how fast each complexity class grows.


Big-O notation describes how an algorithm’s runtime scales with input size n. It answers: if the input doubles, how many more operations does the algorithm need?

The five most common complexity classes, from fastest to slowest:

O(1) — Constant time. The algorithm takes the same number of steps regardless of input size. Accessing an array element by index is O(1). Whether n is 10 or 10 million, you need exactly 1 operation.
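A minimal sketch of that idea in Python (the helper name `get_at` is illustrative, not from the calculator):

```python
def get_at(items, index):
    # Direct indexing is a single operation whether items holds 10 or
    # 10 million elements — the list is never scanned.
    return items[index]

assert get_at([10, 20, 30], 1) == 20
assert get_at(list(range(1_000_000)), 999_999) == 999_999
```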

O(log n) — Logarithmic time. Operations grow proportional to the log of n. Binary search is the classic example. Doubling the input only adds one more comparison. At n = 1,000,000 you need about 20 operations.
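A binary search sketch that also counts its comparisons, so the ~20-operations figure at n = 1,000,000 can be checked directly (the comparison counter is added here for illustration):

```python
def binary_search(sorted_items, target):
    # Each iteration halves the remaining range, so the loop runs
    # at most about log2(n) + 1 times — O(log n).
    lo, hi = 0, len(sorted_items) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_items[mid] == target:
            return mid, comparisons
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

# Worst case on a million elements: the target is absent.
index, comparisons = binary_search(list(range(1_000_000)), -1)
assert index == -1 and comparisons <= 20
```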

O(n) — Linear time. Each element is visited once. Scanning through an unsorted list is O(n). At n = 1,000,000 you need 1,000,000 operations in the worst case.
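A linear scan over an unsorted list, as a minimal sketch:

```python
def linear_search(items, target):
    # Worst case (target absent, or in the last slot) visits every
    # element exactly once — O(n) comparisons.
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

assert linear_search([4, 8, 15], 15) == 2
assert linear_search([4, 8, 15], 99) == -1
```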

O(n log n) — Linearithmic time. The most efficient general-purpose sorting algorithms — merge sort, heapsort, and average-case quicksort — all run here. At n = 1,000,000 that is roughly 20 million operations.
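A merge sort sketch showing where the n log n cost comes from: about log n levels of splitting, with an O(n) merge at each level.

```python
def merge_sort(items):
    # Halving the input gives about log2(n) levels of recursion;
    # each level merges n elements in total — O(n log n) overall.
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

assert merge_sort([5, 2, 9, 1]) == [1, 2, 5, 9]
```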

O(n²) — Quadratic time. Nested loops over the input. Bubble sort and insertion sort both run in O(n²). At n = 1,000 you already need 1,000,000 operations. At n = 10,000 it becomes 100,000,000.
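A bubble sort sketch with a comparison counter added for illustration; the inner comparison runs exactly n(n−1)/2 times, which is where the quadratic count comes from:

```python
def bubble_sort(items):
    # Nested loops over the input: the inner comparison executes
    # n*(n-1)/2 times in total — O(n^2).
    items = list(items)
    comparisons = 0
    for i in range(len(items) - 1):
        for j in range(len(items) - 1 - i):
            comparisons += 1
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items, comparisons

# 100 elements => 100 * 99 / 2 = 4,950 comparisons.
result, comparisons = bubble_sort(list(range(100, 0, -1)))
assert result == list(range(1, 101)) and comparisons == 4950
```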

The gap is why algorithm choice matters so much in practice. Sorting 1 million items with bubble sort takes on the order of a trillion (10¹²) operations; with merge sort it takes about 20 million. That is roughly a 50,000x difference on the same hardware.
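The comparison above can be reproduced with a small helper (the function name and the choice of log base 2 are assumptions of this sketch; the calculator's exact counting may differ):

```python
import math

def operations(n):
    # Approximate operation counts for each complexity class at input size n.
    return {
        "O(1)": 1,
        "O(log n)": math.log2(n),
        "O(n)": n,
        "O(n log n)": n * math.log2(n),
        "O(n^2)": n ** 2,
    }

ops = operations(10**6)
assert round(ops["O(log n)"]) == 20           # ~20 operations at n = 1,000,000
assert ops["O(n^2)"] == 10**12                # a trillion operations
assert 40_000 < ops["O(n^2)"] / ops["O(n log n)"] < 60_000
```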

O(2^n) and O(n!) grow so fast that they are unusable beyond tiny inputs: 2^n passes a quadrillion around n = 50, and n! exceeds the estimated number of atoms in the observable universe (about 10^80) near n = 60. No practical algorithm runs in these classes for large n.
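To make the blow-up concrete, a quick Python check at a modest n = 30:

```python
import math

# At n = 30, the quadratic count is tiny next to the exponential one,
# and the factorial count dwarfs both.
n = 30
polynomial = n ** 2            # 900
exponential = 2 ** n           # 1,073,741,824 — already past a billion
factorial = math.factorial(n)  # about 2.65e32

assert polynomial == 900
assert exponential == 1_073_741_824
assert factorial > 10**32
```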


