Quick Guide: Big-O Notation
Big-O notation is our shorthand for how an algorithm’s runtime (or memory use) grows as input size increases. Think of it as a way to talk about "the worst that could happen" without getting bogged down in machine-specific details.
O(1) – Constant time: Your algorithm takes the same amount of time no matter how big the input is. Example: Accessing array[5].
O(log n) – Logarithmic time: Cutting the problem in half (or any fixed fraction) each step. Example: Binary search on a sorted list.
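A minimal sketch of both ideas in Python (the function name binary_search is illustrative, not from the guide): indexing into a list is O(1), while the search below halves the remaining range on every iteration, so it takes about log₂(n) steps.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2              # split the remaining range in half
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1                  # discard the lower half
        else:
            hi = mid - 1                  # discard the upper half
    return -1

data = [2, 3, 5, 7, 11, 13]
print(data[5])                   # O(1): direct index access -> 13
print(binary_search(data, 11))   # O(log n): a handful of comparisons -> 4
```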
O(n) – Linear time: You do one unit of work per element. Example: Scanning an array to find its maximum.
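A sketch of that linear scan (find_max is an illustrative name): one comparison per element, so the work grows in direct proportion to n.

```python
def find_max(values):
    """Scan every element once: O(n) comparisons for n elements."""
    largest = values[0]
    for v in values[1:]:      # one pass over the input
        if v > largest:
            largest = v
    return largest

print(find_max([4, 17, 9, 3]))  # 17
```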
O(n log n) – Linearithmic time: Common for efficient sorts (like quicksort or mergesort). Example: Sorting then scanning.
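One way to see the "sort then scan" pattern, as a sketch: Python's built-in sorted() is an O(n log n) comparison sort, and the follow-up pass over adjacent pairs is O(n), so the sort dominates. The task (smallest gap between any two values) is a hypothetical example, not something the guide specifies.

```python
def closest_pair_gap(values):
    """Sort (O(n log n)), then scan adjacent pairs (O(n)): O(n log n) overall."""
    ordered = sorted(values)                                  # O(n log n)
    return min(b - a for a, b in zip(ordered, ordered[1:]))   # O(n) scan

print(closest_pair_gap([10, 3, 8, 1]))  # 2 (the gap between 8 and 10)
```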
O(n²), O(n³), … – Polynomial time: Nested loops; each loop adds another factor of n. Example: Checking all pairs is O(n²).
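A sketch of the all-pairs check (has_duplicate_pair is an illustrative name): two nested loops over the same input make roughly n·(n−1)/2 comparisons in the worst case, i.e. O(n²).

```python
def has_duplicate_pair(values):
    """Compare every pair of elements: O(n^2) in the worst case."""
    for i in range(len(values)):
        for j in range(i + 1, len(values)):   # inner loop adds another factor of n
            if values[i] == values[j]:
                return True
    return False

print(has_duplicate_pair([5, 1, 9, 1]))  # True
```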
O(2ⁿ), O(n!) – Exponential or factorial time: These grow so fast that they become impractical for all but the smallest inputs. Example: Brute-forcing all subsets.
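A brute-force sketch of why this blows up: enumerating every subset of n items produces 2ⁿ subsets, so even n around 30 is already over a billion. (The helper all_subsets is illustrative.)

```python
from itertools import combinations

def all_subsets(items):
    """Generate every subset of items: 2^n subsets for n items."""
    return [list(c)
            for r in range(len(items) + 1)
            for c in combinations(items, r)]

print(len(all_subsets([1, 2, 3, 4])))  # 16 == 2**4
```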