Merge Sort is a classic divide-and-conquer algorithm that splits an array into halves, recursively sorts each half, and merges them back together. It's one of the most reliable sorting algorithms: consistent, stable, and predictable.
## How it works
The algorithm follows a simple recursive pattern. At each level, the problem is cut in half until you reach arrays of size 0 or 1 — which are trivially sorted. Then the merge step combines them back in order.
1. Divide. Split the array at the midpoint into left and right halves.
2. Recurse. Call `sortArray` on the left half, then the right half.
3. Base case. If the array has length ≤ 1, return it: it's already sorted.
4. Merge. Compare elements from both halves one by one, building a sorted result.
5. Return. The merged array bubbles back up the call stack.
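The steps above can be sketched in Python. The function name `sortArray` follows the text; the body is a minimal illustrative implementation, not necessarily the author's exact code:

```python
def sortArray(nums):
    """Sort a list with merge sort; returns a new sorted list."""
    # Base case: arrays of length 0 or 1 are already sorted.
    if len(nums) <= 1:
        return nums

    # Divide: split at the midpoint, then recurse on each half.
    mid = len(nums) // 2
    left = sortArray(nums[:mid])
    right = sortArray(nums[mid:])

    # Merge: repeatedly take the smaller front element of the two halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= (not <) is what keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One half may still have leftovers; they are already sorted.
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The merged array returned at each level is exactly what the parent call receives back as its sorted half.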
## Time & Space Complexity
| Case | Time | Space |
| --- | --- | --- |
| Best | O(n log n) | O(n) |
| Average | O(n log n) | O(n) |
| Worst | O(n log n) | O(n) |
Space is O(n) because the merge step needs a temporary array. Unlike Quick Sort, there is no worst-case degradation — it's always O(n log n).
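The guaranteed bound follows from the standard divide-and-conquer recurrence: each call spawns two half-size subproblems plus a linear-time merge.

$$
T(n) = 2\,T\!\left(\frac{n}{2}\right) + \Theta(n) \quad\Longrightarrow\quad T(n) = \Theta(n \log n)
$$

There are about log₂ n levels of recursion, and the merges at each level touch all n elements, so the total work is n per level times log n levels.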
## Key Properties
- Stable: equal elements keep their original order, which matters when sorting objects by one field.
- Not in-place: requires O(n) extra memory, so it's a poor fit for memory-constrained environments.
- Consistent: always O(n log n); no pathological inputs can degrade it the way they can Quick Sort.
- Parallelizable: the left and right halves are independent, which suits multi-threaded or distributed sorting.
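Stability is easiest to see with records sorted by a single field. The `merge_sort_by` helper and the sample records below are illustrative, not from the original text; the key point is the `<=` in the merge, which prefers the left half on ties:

```python
def merge_sort_by(items, key):
    """Stable merge sort: elements with equal keys keep their input order."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort_by(items[:mid], key)
    right = merge_sort_by(items[mid:], key)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if key(left[i]) <= key(right[j]):  # <= on ties takes the left element first
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

records = [("alice", 2), ("bob", 1), ("carol", 2), ("dave", 1)]
print(merge_sort_by(records, key=lambda r: r[1]))
# bob stays before dave, and alice before carol, because their keys tie.
```

With a strict `<` instead of `<=`, the sort would still produce correct keys but could swap tied records, losing stability.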
## When to use it
Use Merge Sort when you need a guaranteed, stable O(n log n) sort and can afford the extra memory. It's the go-to for sorting linked lists, for large external datasets, and anywhere predictability matters more than memory.
- Sorting linked lists: works without random access.
- External sorting: large files that don't fit in RAM.
- When stability is required, e.g. sorting database records.
- When worst-case guarantees matter more than average-case speed.
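The linked-list case can be sketched as follows. The names `Node`, `sort_list`, and the conversion helpers are hypothetical, for illustration only: the split uses a slow/fast pointer instead of an index, and the merge relinks nodes rather than copying into a buffer.

```python
class Node:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def sort_list(head):
    """Merge sort on a singly linked list; no random access required."""
    if head is None or head.next is None:
        return head
    # Split: slow advances one node, fast advances two; slow ends at the middle.
    slow, fast = head, head.next
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    mid = slow.next
    slow.next = None  # cut the list into two halves
    left, right = sort_list(head), sort_list(mid)
    # Merge by relinking existing nodes.
    dummy = tail = Node(0)
    while left and right:
        if left.val <= right.val:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next

def from_list(vals):
    head = None
    for v in reversed(vals):
        head = Node(v, head)
    return head

def to_list(head):
    out = []
    while head:
        out.append(head.val)
        head = head.next
    return out
```

Because the merge only rewires `next` pointers, the list variant avoids the O(n) temporary array that the array version needs.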