# Quick Guide: Big-O Notation

Big-O notation is our shorthand for how an algorithm’s runtime (or memory use) grows as its input size increases. Think of it as a way to talk about "the worst that could happen" without getting bogged down in machine-specific details.

* **O(1) – Constant time**\
  Your algorithm takes the same amount of time no matter how big the input is.\
  *Example:* Accessing `array[5]`.
* **O(log n) – Logarithmic time**\
  Cutting the problem in half (or any fixed fraction) each step.\
  *Example:* Binary search on a sorted list.
* **O(n) – Linear time**\
  You do one unit of work per element.\
  *Example:* Scanning an array to find its maximum.
* **O(n log n) – Linearithmic time**\
  Common for efficient sorts (like quicksort or mergesort).\
  *Example:* Sorting a list, then scanning it.
* **O(n²), O(n³), … – Polynomial time**\
  Nested loops—each adds another factor of n.\
  *Example:* Checking all pairs = O(n²).
* **O(2ⁿ), O(n!) – Exponential or factorial time**\
  These grow super-fast and become impractical very quickly.\
  *Example:* Brute-forcing all subsets.
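The classes above can be sketched as tiny Python functions. This is a rough illustration, not a benchmark; the function names are our own, chosen to match the examples in the list:

```python
def first_element(items):
    """O(1): one indexed access, regardless of list size."""
    return items[0]

def find_max(items):
    """O(n): one comparison per element."""
    best = items[0]
    for x in items[1:]:
        if x > best:
            best = x
    return best

def binary_search(sorted_items, target):
    """O(log n): halve the search range each step."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

def all_pairs(items):
    """O(n²): nested loops visit every pair once."""
    return [(a, b) for i, a in enumerate(items) for b in items[i + 1:]]

def all_subsets(items):
    """O(2ⁿ): each element is either in a subset or out of it."""
    subsets = [[]]
    for x in items:
        subsets += [s + [x] for s in subsets]
    return subsets
```

Notice that the structure of each function mirrors its cost: no loop for O(1), one loop for O(n), a halving loop for O(log n), nested loops for O(n²), and a doubling accumulator for O(2ⁿ).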
