Big O Notation

Big O notation is a mathematical notation used to describe the upper bound of an algorithm’s time or space complexity. It helps us understand how an algorithm’s performance scales as the input size grows.

Growth Rate Comparison

Comparing the common complexity classes side by side shows how quickly their operation counts diverge. At an input size of n = 100:

Complexity    Operations
O(1)          1
O(log n)      7
O(n)          100
O(n log n)    664
O(n²)         10,000

Common Operations Reference

Operation                    Structure      Complexity
Array Access                 Array          O(1)
Array Search                 Array          O(n)
Array Insert (end)           Array          O(1)*
Array Insert (beginning)     Array          O(n)
Hash Table Lookup            Hash Table     O(1)*
Hash Table Insert            Hash Table     O(1)*
BST Search                   BST            O(log n)*
BST Insert                   BST            O(log n)*
Linked List Search           Linked List    O(n)
Linked List Insert (head)    Linked List    O(1)
Heap Insert                  Heap           O(log n)
Heap Extract Min/Max         Heap           O(log n)
Stack Push/Pop               Stack          O(1)
Queue Enqueue/Dequeue        Queue          O(1)
Quick Sort                   Sorting        O(n log n)*
Merge Sort                   Sorting        O(n log n)
Binary Search                Search         O(log n)
BFS/DFS                      Graph          O(V + E)

* Amortized or average case: dynamic-array appends are O(1) amortized, hash table operations are O(1) on average, BST operations are O(log n) only when the tree is balanced, and quick sort is O(n log n) on average but O(n²) in the worst case.

Why Big O Matters

Consider two algorithms that both sort a list:

  • Algorithm A takes 0.001 seconds for 100 items
  • Algorithm B takes 0.01 seconds for 100 items

Which is better? It depends! Algorithm A might be O(n²) while Algorithm B is O(n log n). As the list grows:

Items      Algorithm A (O(n²))    Algorithm B (O(n log n))
100        0.001s                 0.01s
1,000      0.1s                   0.1s
10,000     10s                    1.3s
100,000    1000s (~17 min)        17s

Algorithm B wins at scale despite being slower initially!
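The crossover is easy to verify by counting abstract operations rather than seconds. This minimal sketch (the function names are illustrative, not from any library) compares the raw work each complexity class implies:

```python
import math

def quadratic_ops(n):
    """Operations performed by an O(n^2) algorithm."""
    return n * n

def linearithmic_ops(n):
    """Operations performed by an O(n log n) algorithm."""
    return int(n * math.log2(n))

# Even with a 10x constant-factor head start, n^2 loses at scale.
for n in [100, 1_000, 10_000, 100_000]:
    print(f"n={n:>7}: n^2 = {quadratic_ops(n):>14,}   n log n = {linearithmic_ops(n):>9,}")
```

At n = 100 the quadratic algorithm does 10,000 operations to the other's 664; by n = 100,000 the gap is 10 billion versus about 1.7 million, which is exactly why constant factors stop mattering.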

Common Complexities

From fastest to slowest:

Notation      Name           Example
O(1)          Constant       Array access by index
O(log n)      Logarithmic    Binary search
O(n)          Linear         Loop through array
O(n log n)    Linearithmic   Merge sort
O(n²)         Quadratic      Nested loops
O(2ⁿ)         Exponential    Recursive Fibonacci
O(n!)         Factorial      Permutations
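The two slowest classes in the table don't get their own sections below, so here is a brief sketch of each: the naive recursive Fibonacci is O(2ⁿ) because every call branches into two more calls, and generating all orderings of n items is O(n!) because there are n! of them.

```python
from itertools import permutations

def fib(n):
    """O(2^n) - each call spawns two more recursive calls"""
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

def all_permutations(items):
    """O(n!) - there are n! orderings of n items"""
    return list(permutations(items))

print(fib(10))                           # 55
print(len(all_permutations([1, 2, 3])))  # 6 = 3!
```

Try `fib(35)` and notice the pause; exponential growth makes even modest inputs expensive.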

O(1) - Constant Time

The algorithm takes the same time regardless of input size.

def get_first_element(arr):
    """O(1) - always one operation"""
    return arr[0]

def get_element_at_index(arr, i):
    """O(1) - direct memory access"""
    return arr[i]

# Examples
numbers = [10, 20, 30, 40, 50]
print(get_first_element(numbers))        # 10
print(get_element_at_index(numbers, 3))  # 40

O(log n) - Logarithmic Time

The algorithm halves the problem space with each step.

def binary_search(arr, target):
    """O(log n) - halves the search space each iteration"""
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1

# Example
sorted_arr = [1, 3, 5, 7, 9, 11, 13, 15]
print(binary_search(sorted_arr, 7))  # 3
print(binary_search(sorted_arr, 6))  # -1

O(n) - Linear Time

The algorithm examines each element once.

def find_max(arr):
    """O(n) - must check every element"""
    max_val = arr[0]
    for num in arr:
        if num > max_val:
            max_val = num
    return max_val

def linear_search(arr, target):
    """O(n) - worst case checks all elements"""
    for i, num in enumerate(arr):
        if num == target:
            return i
    return -1

# Examples
numbers = [3, 1, 4, 1, 5, 9, 2, 6]
print(find_max(numbers))           # 9
print(linear_search(numbers, 5))   # 4

O(n²) - Quadratic Time

Usually involves nested loops over the data.

def bubble_sort(arr):
    """O(n²) - nested loops over the data"""
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

def find_pairs(arr):
    """O(n²) - check all pairs"""
    pairs = []
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            pairs.append((arr[i], arr[j]))
    return pairs

# Examples
numbers = [64, 34, 25, 12, 22]
print(bubble_sort(numbers.copy()))  # [12, 22, 25, 34, 64]
small = [1, 2, 3]
print(find_pairs(small))  # [(1, 2), (1, 3), (2, 3)]

Rules for Calculating Big O

1. Drop Constants

O(2n) → O(n)
O(500) → O(1)

2. Drop Lower Order Terms

O(n² + n) → O(n²)
O(n + log n) → O(n)
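A minimal sketch of dropping a lower-order term (the function name here is illustrative): a linear pass followed by a nested loop is O(n + n²), and only the dominant n² term survives the simplification.

```python
def sum_and_pairs(arr):
    """O(n + n^2) -> O(n^2): the linear pass is a lower-order term."""
    total = 0
    for x in arr:                   # O(n) - dropped when simplifying
        total += x

    count = 0
    for i in range(len(arr)):       # O(n^2) - the dominant term
        for j in range(len(arr)):
            count += 1
    return total, count

print(sum_and_pairs([1, 2, 3]))  # (6, 9)
```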

3. Different Inputs = Different Variables

def print_all_pairs(arr1, arr2):
    # O(n * m), not O(n²), since the arrays can differ in size
    for a in arr1:      # O(n)
        for b in arr2:  # O(m)
            print(a, b)

4. Sequential Steps Add

def do_stuff(arr):
    for x in arr:  # O(n)
        print(x)
    for x in arr:  # O(n)
        print(x)
# Total: O(n) + O(n) = O(2n) = O(n)

5. Nested Steps Multiply

def do_stuff(arr):
    for x in arr:      # O(n)
        for y in arr:  # O(n)
            print(x, y)
# Total: O(n) * O(n) = O(n²)

Practice: Analyze These Functions

Try to determine the Big O before revealing the answer.

Function 1
def mystery1(n):
    result = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                result += 1
    return result

Answer: O(n³) - Three nested loops, each running n times.

Function 2
def mystery2(n):
    result = 0
    i = n
    while i > 0:
        result += i
        i = i // 2
    return result

Answer: O(log n) - The value halves each iteration.

Function 3
def mystery3(arr):
    n = len(arr)
    for i in range(n):
        for j in range(i, n):
            print(arr[i], arr[j])

Answer: O(n²) - Even though the inner loop starts at i, the function is still quadratic: the total number of operations is n + (n-1) + (n-2) + … + 1 = n(n+1)/2 = O(n²).

Key Takeaways

  1. Big O describes worst-case scaling behavior
  2. Constants don’t matter at scale - O(2n) = O(n)
  3. Focus on the dominant term - O(n² + n) = O(n²)
  4. Know the common patterns:
    • Loop = O(n)
    • Nested loops = O(n²)
    • Halving = O(log n)
    • Recursion - analyze carefully
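The last takeaway is worth a sketch: the same recursive structure can land in very different complexity classes depending on whether subproblems are reused. Here, a naive Fibonacci is O(2ⁿ), while caching results with `functools.lru_cache` makes each value computed exactly once, dropping it to O(n).

```python
from functools import lru_cache

def fib_naive(n):
    """O(2^n) - recomputes the same subproblems over and over"""
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """O(n) - each value of n is computed exactly once, then cached"""
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025 - instant; fib_naive(50) would run for hours
```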

Next Steps