Let’s Talk About Big O Notation
📢 Note: This article kicks off a deep dive into algorithms and data structures. Keep an eye out for more insights and examples as we explore Big O and beyond! If you’re keen to learn more, check out The Last Algorithms Course You’ll Need on Frontend Masters. For the rest of this series, see the first post: Working Through an Algorithms Course
Introduction
Understanding Big O notation is key to grasping how algorithms perform. It’s a way to measure and compare the efficiency of algorithms, predicting how they’ll handle larger inputs. Big O focuses on how algorithms scale with input size, looking at both space (memory usage) and time (execution time) complexities. Knowing these complexities helps you choose the right algorithm for the job, especially when dealing with large datasets where performance matters.
Breaking Down Big O Notation
Big O notation is written as O(f(n)), where n is the size of the input and f(n) describes how the algorithm’s run time or space requirements grow. Here’s a rundown of the most common Big O notations:
- O(1): Constant time. The algorithm’s execution time doesn’t change with the input size.
- O(log n): Logarithmic time. The run time grows logarithmically with input size: doubling the input adds only a constant amount of extra work, as in binary search.
- O(n): Linear time. The run time increases proportionally with the input size.
- O(n log n): Linearithmic time. Found in efficient sorting algorithms like mergesort and heapsort.
- O(n²): Quadratic time. Run time grows proportionally to the square of the input size. Common in simple sorting algorithms like bubble sort.
- O(2ⁿ): Exponential time. The run time doubles with each additional input element. Seen in algorithms that generate all subsets of a set.
- O(n!): Factorial time. Run time grows factorially with input size. Seen in algorithms that generate all permutations of a set.
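To ground a few of these classes in code, here’s a minimal TypeScript sketch of constant, linear, and quadratic time. The function names are just for illustration:

```typescript
// O(1): one array access, no matter how long the array is.
function first(items: number[]): number | undefined {
  return items[0];
}

// O(n): a single pass over the input, so work grows linearly with its length.
function sum(items: number[]): number {
  let total = 0;
  for (const item of items) {
    total += item;
  }
  return total;
}

// O(n²): compares every element with every other element, so in the worst
// case the work grows with the square of the input length.
function hasDuplicate(items: number[]): boolean {
  for (let i = 0; i < items.length; i++) {
    for (let j = i + 1; j < items.length; j++) {
      if (items[i] === items[j]) return true;
    }
  }
  return false;
}
```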
In Big O notation, we ignore constants and lower-order terms. So O(2n) is considered O(n), and O(3n²) is considered O(n²). This is because we’re interested in how the algorithm scales with large inputs, not the exact run time.
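For instance, a hypothetical function that makes two sequential passes over its input does roughly 2n units of work, but we still call it O(n), because doubling the input still doubles the total work:

```typescript
// Two sequential passes: about 2n operations, but still O(n).
function minMax(items: number[]): [number, number] {
  let min = Infinity;
  let max = -Infinity;
  for (const item of items) {
    if (item < min) min = item; // first pass: find the minimum
  }
  for (const item of items) {
    if (item > max) max = item; // second pass: find the maximum
  }
  return [min, max];
}
```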
Feeling a bit overwhelmed? Don’t worry, more examples are on the way!
Imagine you’re manufacturing widgets:
- O(1): It takes the same amount of time to make 10 widgets as it does to make 1,000. The production time is constant, no matter how many you make.
- O(n): Production time increases linearly with the number of widgets. If it takes 1 hour to make 1 widget, it’ll take 2 hours for 2 widgets, 3 hours for 3 widgets, and so on.
- O(n²): Time increases quadratically. If it takes 1 hour to make 1 widget, it’ll take 4 hours for 2 widgets, 9 hours for 3 widgets, and so forth.
You can see why aiming for O(1) is preferable over O(n), which in turn beats O(n²). Even reducing from O(n²) to O(n log n) makes a real difference with large inputs: for n = 1,000,000, that’s roughly 10¹² operations versus about 2 × 10⁷. If you’re dealing with O(2ⁿ) or O(n!), it’s time to rethink your approach.
Big O Examples
- O(1) - Constant Time Complexity
  - Accessing an element in an array by index
  - Retrieving a value from a hash map by key
- O(log n) - Logarithmic Time Complexity
  - Binary search in a sorted array (sketched after this list)
  - Searching in a balanced binary search tree
- O(n) - Linear Time Complexity
  - Iterating over all elements in an array
  - Finding the minimum or maximum value in an unsorted list
- O(n log n) - Linearithmic Time Complexity
  - Sorting algorithms like mergesort, or quicksort on average
  - Certain divide and conquer algorithms
- O(n²) - Quadratic Time Complexity
  - Nested loops over the same dataset
  - Simple sorting algorithms like bubble sort or selection sort (sketched after this list)
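Two of these are worth a quick sketch right now. First, binary search: because each comparison discards half of the remaining range, a sorted array of n elements takes at most about log₂ n steps, which is exactly what O(log n) describes. A minimal version, not a production implementation, might look like this:

```typescript
// O(log n): each iteration halves the search range.
function binarySearch(sorted: number[], target: number): number {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;   // found it
    if (sorted[mid] < target) low = mid + 1;  // discard the left half
    else high = mid - 1;                      // discard the right half
  }
  return -1; // not found
}
```

And bubble sort, whose nested loops over the same array are the classic O(n²) shape:

```typescript
// O(n²): nested loops over the same array.
function bubbleSort(items: number[]): number[] {
  const result = [...items]; // sort a copy rather than mutating the input
  for (let i = 0; i < result.length; i++) {
    // After each outer pass, the largest remaining value has "bubbled"
    // to the end, so the inner loop can stop one element earlier.
    for (let j = 0; j < result.length - i - 1; j++) {
      if (result[j] > result[j + 1]) {
        [result[j], result[j + 1]] = [result[j + 1], result[j]];
      }
    }
  }
  return result;
}
```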
We’ll dive into detailed implementations and analyses of these examples, along with others, in upcoming posts.