Merge Sort
A stable, divide-and-conquer sorting algorithm with guaranteed O(n log n) performance.
Visualization
Interactive step-by-step visualization (🔵 blue: elements being merged, 🟢 green: sorted elements).
Implementation
function mergeSort(arr: number[]): number[] {
  if (arr.length <= 1) return arr; // base case: 0 or 1 elements are already sorted

  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));

  return merge(left, right);
}

function merge(left: number[], right: number[]): number[] {
  const result: number[] = [];
  let i = 0, j = 0;

  // <= keeps the sort stable: on ties, the left element goes first
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) result.push(left[i++]);
    else result.push(right[j++]);
  }

  // append whichever half still has elements remaining
  return result.concat(left.slice(i)).concat(right.slice(j));
}
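A quick usage check of the function above (expected output shown as comments):

const data = [38, 27, 43, 3, 9, 82, 10];
console.log(mergeSort(data)); // [3, 9, 10, 27, 38, 43, 82]
console.log(data);            // unchanged: mergeSort works on slices, returning a new array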
Deep Dive
Theoretical Foundation
Merge Sort is a divide-and-conquer algorithm: it divides the input array into two halves, recursively sorts each half, and then merges the two sorted halves. The merge operation is the key step, taking two sorted arrays and producing a single sorted array in O(n) time.

Merge Sort's main advantage is its guaranteed O(n log n) time complexity in all cases (best, average, and worst), which makes its performance highly predictable. Unlike Quick Sort, it is stable (it preserves the relative order of equal elements), and it works well for linked lists. The algorithm was invented by John von Neumann in 1945.

The divide phase recursively splits the array until it reaches single elements (the base case); the conquer phase then merges these pieces back together in sorted order. The recurrence relation is T(n) = 2T(n/2) + O(n), which solves to O(n log n) by the Master Theorem.
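To see where the bound comes from without invoking the Master Theorem, expand the recurrence level by level, writing the O(n) merge cost as cn:

T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn
     ...
     = 2^k T(n/2^k) + k·cn

The recursion bottoms out when n/2^k = 1, i.e. k = log2 n, giving n·T(1) + cn·log2 n = O(n log n): there are log2 n levels, and each level does O(n) total merge work.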
Complexity
Time
Best: O(n log n)
Average: O(n log n)
Worst: O(n log n)
Space
O(n) auxiliary
Applications
Industry Use
External sorting for massive datasets that don't fit in RAM
Python's sorted() and list.sort() use TimSort, a hybrid of Merge Sort and Insertion Sort
Java's Collections.sort() for objects, where stability is required (it also uses TimSort)
Android SDK sorting where stable ordering is required
Counting inversions and "smaller elements to the right" problems (see the sketch after this list)
Union and intersection of sorted arrays
Sorting linked lists, where O(1) auxiliary space is possible with a bottom-up variant (see the sketch after this list)
Parallel processing and distributed systems
Database query optimization for ORDER BY clauses
Version control systems for merging file changes
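As referenced above, inversion counting is a standard adaptation of Merge Sort: during each merge, count how many left-half elements are still pending when a right-half element is placed. A minimal sketch (countInversions and sortAndCount are illustrative names, not part of the implementation above):

// Counts pairs (i, j) with i < j and arr[i] > arr[j] in O(n log n)
function countInversions(arr: number[]): number {
  const aux = arr.slice(); // work on a copy so the input is untouched

  function sortAndCount(a: number[], lo: number, hi: number): number {
    if (hi - lo <= 1) return 0;
    const mid = Math.floor((lo + hi) / 2);
    let count = sortAndCount(a, lo, mid) + sortAndCount(a, mid, hi);

    const merged: number[] = [];
    let i = lo, j = mid;
    while (i < mid && j < hi) {
      if (a[i] <= a[j]) {
        merged.push(a[i++]);
      } else {
        // a[j] jumps ahead of every remaining left element:
        // each such pair is one inversion
        count += mid - i;
        merged.push(a[j++]);
      }
    }
    while (i < mid) merged.push(a[i++]);
    while (j < hi) merged.push(a[j++]);
    for (let k = lo; k < hi; k++) a[k] = merged[k - lo];
    return count;
  }

  return sortAndCount(aux, 0, aux.length);
}

console.log(countInversions([2, 4, 1, 3, 5])); // 3: (2,1), (4,1), (4,3)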
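For the linked-list use case, here is a minimal sketch of top-down Merge Sort on a singly linked list (the ListNode type is assumed for illustration). The recursive version shown uses O(log n) stack; a bottom-up iterative variant achieves O(1) auxiliary space, since merging relinks existing nodes rather than copying them:

interface ListNode {
  value: number;
  next: ListNode | null;
}

function mergeSortList(head: ListNode | null): ListNode | null {
  if (head === null || head.next === null) return head;

  // find the midpoint with slow/fast pointers
  let slow = head, fast = head.next;
  while (fast !== null && fast.next !== null) {
    slow = slow.next!;
    fast = fast.next.next;
  }
  const second = slow.next;
  slow.next = null; // cut the list in two

  return mergeLists(mergeSortList(head), mergeSortList(second));
}

function mergeLists(a: ListNode | null, b: ListNode | null): ListNode | null {
  const dummy: ListNode = { value: 0, next: null };
  let tail = dummy;
  while (a !== null && b !== null) {
    if (a.value <= b.value) { tail.next = a; a = a.next; }
    else { tail.next = b; b = b.next; }
    tail = tail.next;
  }
  tail.next = a !== null ? a : b; // relink the leftover tail without copying
  return dummy.next;
}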
Related Algorithms
Quicksort
A highly efficient, in-place sorting algorithm that uses a divide-and-conquer strategy. Invented by Tony Hoare in 1959, it remains one of the most widely used sorting algorithms due to its excellent average-case performance and cache efficiency.
Bubble Sort
Bubble Sort is one of the simplest sorting algorithms: it works by repeatedly swapping adjacent elements that are in the wrong order, and repeats this pass until the entire array is sorted. It is named for the way larger elements 'bubble' to the top (end) of the array.
Insertion Sort
Insertion Sort is a simple, intuitive sorting algorithm that builds the final sorted array one element at a time. It works much like sorting playing cards in your hand: pick up each card and insert it into its correct position among the cards already sorted. Despite its O(n²) time complexity, Insertion Sort is efficient for small datasets and nearly sorted arrays, making it practical for real-world applications.
Selection Sort
Selection Sort is a simple comparison-based sorting algorithm that divides the array into sorted and unsorted regions. It repeatedly selects the smallest (or largest) element from the unsorted region and moves it to the end of the sorted region. It performs at most n-1 swaps, which makes it useful when memory writes are expensive operations.