Documentation

Overview

VisualSorting is an interactive tool designed to help you understand how different sorting algorithms work. Watch as arrays are sorted in real-time, with visual feedback showing comparisons (yellow), swaps (red), and sorted elements (green).

Quick Start

  1. Select an algorithm from the dropdown
  2. Adjust the array size and speed using the sliders
  3. Click "New Array" to generate a fresh dataset
  4. Click "Sort!" to begin the visualization

Features

Advanced Options

Expand the Advanced Options panel to access additional settings:

Array Distribution

Choose how the initial array is generated:

  • Random: Completely random values
  • Nearly Sorted: ~90% sorted with some random elements
  • Reversed: Sorted in descending order (worst case)
  • Few Unique: Only 5 distinct values
  • Already Sorted: Perfectly sorted (best case)

Sound Effects

Enable audio feedback during sorting. The pitch corresponds to the value being processed—higher values produce higher tones.

Show Bar Values

Display the numeric value above each bar for better understanding of what's happening during the sort.

Benchmarking

The Benchmark tab lets you compare algorithm performance objectively. It uses Web Workers to run algorithms in a background thread, so the UI stays responsive even during heavy computation. Unlike the visualizer, which adds delays for visibility, the benchmark runs each algorithm at full speed.
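
To illustrate the Web Worker pattern (the file name and message shape below are hypothetical, not the app's actual code):

// benchmark-worker.js (hypothetical file name)
self.onmessage = (e) => {
    const data = Float64Array.from(e.data.array);   // copy so each run starts fresh
    const t0 = performance.now();
    data.sort();                                    // stand-in for the algorithm under test
    self.postMessage({ ms: performance.now() - t0 });
};

// main thread
const worker = new Worker('benchmark-worker.js');
worker.onmessage = (e) => console.log(`run took ${e.data.ms.toFixed(2)} ms`);
worker.postMessage({ array: [5, 3, 8, 1] });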

How It Works

  1. Set the array size (larger = more meaningful results)
  2. Set the number of test runs (more runs = more accurate averages)
  3. Select which algorithms to include
  4. Click "Run Benchmark"

Scoring System

Each algorithm receives a score from 0-100 based on weighted metrics:

Metric        Weight   Description
Time          50%      Execution time in milliseconds
Comparisons   25%      Number of element comparisons
Swaps         25%      Number of element swaps/writes
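
The exact normalization is not specified here, but conceptually the weights combine like this (a hypothetical sketch assuming min-max normalization, where the best result in the field scores 1 and the worst 0):

// Hypothetical: 'results' holds one entry per algorithm with time/comparisons/swaps.
function score(result, results) {
    const normalize = (key) => {
        const values = results.map(r => r[key]);
        const best = Math.min(...values), worst = Math.max(...values);
        return worst === best ? 1 : (worst - result[key]) / (worst - best);
    };
    // Documented weights: time 50%, comparisons 25%, swaps 25%
    return Math.round(100 * (0.5 * normalize('time')
                           + 0.25 * normalize('comparisons')
                           + 0.25 * normalize('swaps')));
}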

Bubble Sort

Time: O(n²) · Space: O(1)

Bubble Sort repeatedly steps through the list, compares adjacent elements, and swaps them if they're in the wrong order. The algorithm gets its name because smaller elements "bubble" to the top of the list.

How It Works

  1. Compare adjacent elements starting from the beginning
  2. Swap if the left element is greater than the right
  3. After each pass, the largest unsorted element is in its final position
  4. Repeat until no swaps are needed
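
A minimal standalone JavaScript sketch of these steps (illustrative; the visualizer's own implementation may differ):

function bubbleSort(a) {
    for (let i = 0; i < a.length - 1; i++) {
        let swapped = false;
        for (let j = 0; j < a.length - i - 1; j++) {
            if (a[j] > a[j + 1]) {
                [a[j], a[j + 1]] = [a[j + 1], a[j]];  // swap the adjacent pair
                swapped = true;
            }
        }
        if (!swapped) break;  // no swaps in a full pass: the array is sorted
    }
    return a;
}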

✓ Pros

  • Simple to understand and implement
  • Stable sort (maintains relative order)
  • Adaptive (fast on nearly-sorted data)

✗ Cons

  • Very slow for large datasets
  • O(n²) even in the average case

Selection Sort

Time: O(n²) · Space: O(1)

Selection Sort divides the array into sorted and unsorted regions. It repeatedly finds the minimum element from the unsorted region and places it at the end of the sorted region.

How It Works

  1. Find the minimum element in the unsorted portion
  2. Swap it with the first unsorted element
  3. Move the boundary between sorted/unsorted one position right
  4. Repeat until the entire array is sorted
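
A compact sketch of the same steps:

function selectionSort(a) {
    for (let i = 0; i < a.length - 1; i++) {
        let min = i;
        for (let j = i + 1; j < a.length; j++) {
            if (a[j] < a[min]) min = j;                  // index of smallest unsorted element
        }
        if (min !== i) [a[i], a[min]] = [a[min], a[i]];  // one swap per pass
    }
    return a;
}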

✓ Pros

  • Simple implementation
  • Minimal memory usage
  • Very few swaps: at most n − 1 (O(n))

✗ Cons

  • Always O(n²), even for sorted arrays
  • Not stable

Insertion Sort

Time: O(n²) · Space: O(1)

Insertion Sort builds the final sorted array one element at a time. It's similar to how you might sort playing cards in your hand.

How It Works

  1. Start with the second element
  2. Compare it with elements to its left
  3. Shift larger elements right to make space
  4. Insert the element in its correct position
  5. Repeat for all remaining elements
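
In code, the shift-and-insert loop looks like this:

function insertionSort(a) {
    for (let i = 1; i < a.length; i++) {
        const key = a[i];                  // element to insert
        let j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];               // shift larger elements right
            j--;
        }
        a[j + 1] = key;                    // drop the element into the gap
    }
    return a;
}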

✓ Pros

  • Excellent for small datasets
  • Very fast on nearly-sorted data
  • Stable and in-place
  • Online (can sort as data arrives)

✗ Cons

  • Slow for large datasets
  • Many shifts for reversed arrays

Merge Sort

Time: O(n log n) · Space: O(n)

Merge Sort is a divide-and-conquer algorithm that recursively splits the array in half, sorts each half, and then merges them back together.

How It Works

  1. Divide: Split the array into two halves
  2. Conquer: Recursively sort each half
  3. Combine: Merge the sorted halves
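
A sketch of the divide/conquer/combine structure (this version allocates new arrays for clarity):

function mergeSort(a) {
    if (a.length <= 1) return a;                       // base case
    const mid = Math.floor(a.length / 2);
    const left = mergeSort(a.slice(0, mid));           // conquer each half
    const right = mergeSort(a.slice(mid));
    const out = [];
    let i = 0, j = 0;
    while (i < left.length && j < right.length) {
        out.push(left[i] <= right[j] ? left[i++] : right[j++]);  // take smaller head (<= keeps it stable)
    }
    return out.concat(left.slice(i), right.slice(j));  // append whatever remains
}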

✓ Pros

  • Guaranteed O(n log n) performance
  • Stable sort
  • Parallelizable
  • Good for linked lists

✗ Cons

  • Requires O(n) extra space
  • Not in-place
  • Overkill for small arrays

Quick Sort

Time: O(n log n)* · Space: O(log n)

Quick Sort picks a "pivot" element and partitions the array around it—elements smaller than the pivot go left, larger go right. It then recursively sorts the partitions.

How It Works

  1. Choose a pivot element
  2. Partition: rearrange so all smaller elements come before pivot
  3. Recursively apply to left and right partitions

*Note: Worst case is O(n²) for already-sorted or reverse-sorted arrays with poor pivot selection. Our implementation uses the last element as pivot.
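
A sketch of this exact scheme (Lomuto partition with the last element as pivot):

function quickSort(a, lo = 0, hi = a.length - 1) {
    if (lo >= hi) return a;
    const pivot = a[hi];                      // last element as pivot
    let i = lo;
    for (let j = lo; j < hi; j++) {
        if (a[j] < pivot) {
            [a[i], a[j]] = [a[j], a[i]];      // move smaller elements left of the boundary
            i++;
        }
    }
    [a[i], a[hi]] = [a[hi], a[i]];            // pivot lands in its final position
    quickSort(a, lo, i - 1);
    quickSort(a, i + 1, hi);
    return a;
}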

✓ Pros

  • Very fast in practice
  • In-place (low memory)
  • Cache-friendly

✗ Cons

  • O(n²) worst case
  • Not stable
  • Recursive (stack overflow risk)

Dual-Pivot Quick Sort

Time: O(n log n) · Space: O(log n)

Dual-Pivot Quick Sort (Yaroslavskiy's algorithm) uses two pivots to partition the array into three parts. It's the default sorting algorithm in Java since JDK 7.

How It Works

  1. Choose two pivots P1 and P2 where P1 ≤ P2
  2. Partition into three regions: elements < P1, elements between P1 and P2, elements > P2
  3. Place pivots in their final positions
  4. Recursively sort the three partitions

Industry Standard: This is Java's Arrays.sort() for primitive types since 2011.
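
A simplified sketch of the three-way partition (production versions such as Java's add pivot sampling and other safeguards):

function dualPivotQuickSort(a, lo = 0, hi = a.length - 1) {
    if (lo >= hi) return a;
    if (a[lo] > a[hi]) [a[lo], a[hi]] = [a[hi], a[lo]];  // ensure P1 <= P2
    const p1 = a[lo], p2 = a[hi];
    let lt = lo + 1, gt = hi - 1, k = lo + 1;
    while (k <= gt) {
        if (a[k] < p1) {                      // belongs left of P1
            [a[k], a[lt]] = [a[lt], a[k]];
            lt++; k++;
        } else if (a[k] > p2) {               // belongs right of P2
            [a[k], a[gt]] = [a[gt], a[k]];
            gt--;                             // re-examine the element swapped in
        } else {
            k++;                              // middle region: between the pivots
        }
    }
    lt--; gt++;
    [a[lo], a[lt]] = [a[lt], a[lo]];          // P1 into its final position
    [a[hi], a[gt]] = [a[gt], a[hi]];          // P2 into its final position
    dualPivotQuickSort(a, lo, lt - 1);
    dualPivotQuickSort(a, lt + 1, gt - 1);
    dualPivotQuickSort(a, gt + 1, hi);
    return a;
}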

✓ Pros

  • ~10% faster than single-pivot Quick Sort
  • Better cache utilization
  • Fewer comparisons on average

✗ Cons

  • Still O(n²) worst case
  • Not stable
  • More complex implementation

IntroSort

Time: O(n log n) · Space: O(log n)

IntroSort (Introspective Sort) is a hybrid algorithm that starts as Quick Sort but switches to Heap Sort when the recursion depth exceeds a limit. It's the algorithm behind most implementations of C++'s std::sort.

How It Works

  1. Set depth limit = 2 × log₂(n)
  2. Use Quick Sort partitioning
  3. If recursion exceeds depth limit, switch to Heap Sort
  4. Use Insertion Sort for small subarrays (≤16 elements)

Best of Both Worlds: Quick Sort's speed with Heap Sort's guaranteed O(n log n) worst case.
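
A compact sketch wiring the three phases together (library versions add median-of-three pivot selection and further tuning):

function introSort(a) {
    const maxDepth = 2 * Math.floor(Math.log2(a.length || 1));  // depth limit from step 1
    sortRange(a, 0, a.length - 1, maxDepth);
    return a;
}

function sortRange(a, lo, hi, depth) {
    const size = hi - lo + 1;
    if (size <= 16) { insertionRange(a, lo, hi); return; }  // small: insertion sort
    if (depth === 0) { heapRange(a, lo, hi); return; }      // too deep: heap sort
    const p = partition(a, lo, hi);                         // otherwise: quicksort step
    sortRange(a, lo, p - 1, depth - 1);
    sortRange(a, p + 1, hi, depth - 1);
}

function partition(a, lo, hi) {                // Lomuto partition, last element as pivot
    const pivot = a[hi];
    let i = lo;
    for (let j = lo; j < hi; j++) {
        if (a[j] < pivot) { [a[i], a[j]] = [a[j], a[i]]; i++; }
    }
    [a[i], a[hi]] = [a[hi], a[i]];
    return i;
}

function insertionRange(a, lo, hi) {           // insertion sort on a[lo..hi]
    for (let i = lo + 1; i <= hi; i++) {
        const key = a[i];
        let j = i - 1;
        while (j >= lo && a[j] > key) { a[j + 1] = a[j]; j--; }
        a[j + 1] = key;
    }
}

function heapRange(a, lo, hi) {                // heap sort on a[lo..hi]
    const n = hi - lo + 1;
    const sift = (i, size) => {
        while (true) {
            let largest = i;
            const l = 2 * i + 1, r = 2 * i + 2;
            if (l < size && a[lo + l] > a[lo + largest]) largest = l;
            if (r < size && a[lo + r] > a[lo + largest]) largest = r;
            if (largest === i) return;
            [a[lo + i], a[lo + largest]] = [a[lo + largest], a[lo + i]];
            i = largest;
        }
    };
    for (let i = Math.floor(n / 2) - 1; i >= 0; i--) sift(i, n);  // build max-heap
    for (let end = n - 1; end > 0; end--) {
        [a[lo], a[lo + end]] = [a[lo + end], a[lo]];              // extract max
        sift(0, end);
    }
}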

✓ Pros

  • Guaranteed O(n log n) worst case
  • Fast Quick Sort average case
  • Used in C++ standard library

✗ Cons

  • Not stable
  • Complex implementation (3 algorithms)
  • Visible behavior change during sort

Heap Sort

Time: O(n log n) · Space: O(1)

Heap Sort uses a binary heap data structure. It first builds a max-heap from the array, then repeatedly extracts the maximum element.

How It Works

  1. Build a max-heap from the array
  2. Swap the root (maximum) with the last element
  3. Reduce heap size by one
  4. Heapify the root to restore max-heap property
  5. Repeat until heap size is 1
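
The same steps in code:

function heapSort(a) {
    const n = a.length;
    const siftDown = (i, size) => {           // restore the max-heap property below i
        while (true) {
            let largest = i;
            const l = 2 * i + 1, r = 2 * i + 2;
            if (l < size && a[l] > a[largest]) largest = l;
            if (r < size && a[r] > a[largest]) largest = r;
            if (largest === i) return;
            [a[i], a[largest]] = [a[largest], a[i]];
            i = largest;
        }
    };
    for (let i = Math.floor(n / 2) - 1; i >= 0; i--) siftDown(i, n);  // 1. build max-heap
    for (let end = n - 1; end > 0; end--) {
        [a[0], a[end]] = [a[end], a[0]];      // 2. swap root (max) with last element
        siftDown(0, end);                     // 3-4. shrink heap and re-heapify
    }
    return a;
}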

✓ Pros

  • Guaranteed O(n log n)
  • In-place
  • No pathological worst-case inputs

✗ Cons

  • Not stable
  • Poor cache performance
  • Slower than Quick Sort in practice

Shell Sort

Time: O(n log² n) · Space: O(1)

Shell Sort is an optimization of Insertion Sort that allows swapping elements that are far apart. It uses a decreasing sequence of gaps.

How It Works

  1. Start with a large gap (n/2)
  2. Sort elements that are 'gap' positions apart using insertion sort
  3. Reduce the gap by half
  4. Repeat until gap is 1 (final insertion sort)
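
A sketch using the halving gap sequence described above:

function shellSort(a) {
    for (let gap = Math.floor(a.length / 2); gap > 0; gap = Math.floor(gap / 2)) {
        // gapped insertion sort: elements 'gap' apart form interleaved sublists
        for (let i = gap; i < a.length; i++) {
            const key = a[i];
            let j = i;
            while (j >= gap && a[j - gap] > key) {
                a[j] = a[j - gap];            // shift within the gapped sublist
                j -= gap;
            }
            a[j] = key;
        }
    }
    return a;
}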

✓ Pros

  • Much faster than Insertion Sort
  • In-place with O(1) space
  • Good for medium-sized arrays

✗ Cons

  • Not stable
  • Complexity depends on gap sequence
  • Slower than O(n log n) algorithms

Tim Sort

Time: O(n log n) · Space: O(n)

Tim Sort is a hybrid algorithm derived from Merge Sort and Insertion Sort. It's the default sorting algorithm in Python and Java. It exploits natural runs (pre-sorted sequences) in the data.

How It Works

  1. Divide the array into small runs (32-64 elements)
  2. Sort each run using Insertion Sort
  3. Merge runs using a modified Merge Sort
  4. Use a stack to track runs and merge efficiently
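
Real TimSort detects natural runs and uses galloping merges and a run stack; the simplified sketch below keeps only the sort-runs-then-merge structure:

function timSortSimplified(a) {
    const RUN = 32;                                        // fixed run size for this sketch
    for (let lo = 0; lo < a.length; lo += RUN) {           // 1-2. insertion-sort each run
        const hi = Math.min(lo + RUN - 1, a.length - 1);
        for (let i = lo + 1; i <= hi; i++) {
            const key = a[i];
            let j = i - 1;
            while (j >= lo && a[j] > key) { a[j + 1] = a[j]; j--; }
            a[j + 1] = key;
        }
    }
    for (let width = RUN; width < a.length; width *= 2) {  // 3. merge runs, doubling width
        for (let lo = 0; lo < a.length; lo += 2 * width) {
            const mid = Math.min(lo + width, a.length);
            const hi = Math.min(lo + 2 * width, a.length);
            if (mid >= hi) continue;                       // no right half to merge
            const merged = [];
            let i = lo, j = mid;
            while (i < mid && j < hi) merged.push(a[i] <= a[j] ? a[i++] : a[j++]);
            while (i < mid) merged.push(a[i++]);
            while (j < hi) merged.push(a[j++]);
            for (let k = 0; k < merged.length; k++) a[lo + k] = merged[k];
        }
    }
    return a;
}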

✓ Pros

  • Extremely fast on real-world data
  • Stable sort
  • Adaptive (O(n) for sorted data)

✗ Cons

  • Complex implementation
  • O(n) extra space

Radix Sort (LSD)

Time: O(d × n) · Space: O(n + k)

Radix Sort sorts numbers digit by digit, from least significant to most significant (LSD variant). It uses Counting Sort as a subroutine for each digit.

How It Works

  1. Find the maximum number to know the number of digits
  2. For each digit position (starting from ones place):
  3. Use Counting Sort to sort by that digit
  4. Repeat for all digit positions

Note: d = number of digits in max element, k = range of each digit (10 for decimal)
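
A sketch for non-negative integers, using per-digit buckets as the stable counting step:

function radixSortLSD(a) {
    const max = Math.max(...a, 0);
    for (let exp = 1; Math.floor(max / exp) > 0; exp *= 10) {   // one pass per digit
        const buckets = Array.from({ length: 10 }, () => []);
        for (const x of a) {
            buckets[Math.floor(x / exp) % 10].push(x);          // stable scatter by digit
        }
        let k = 0;
        for (const b of buckets) for (const x of b) a[k++] = x; // gather in digit order
    }
    return a;
}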

✓ Pros

  • Linear time for fixed digit counts
  • Stable sort
  • Excellent for integers with known range

✗ Cons

  • Only works for integers
  • Requires extra space
  • Slow if max value has many digits

Radix Sort (MSD)

Time: O(d × n) · Space: O(n + k)

MSD (Most Significant Digit) Radix Sort processes digits from left to right, recursively sorting buckets. Unlike LSD, the sorted order emerges visually from left to right.

How It Works

  1. Sort by the most significant digit first
  2. Recursively sort each bucket by the next digit
  3. Continue until all digits are processed

Visual Pattern: Unlike LSD (chaotic until the end), MSD shows sorted order emerging progressively from the largest values down.
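
A recursive sketch for non-negative integers:

function radixSortMSD(a, digit = null) {
    if (a.length <= 1) return a;
    if (digit === null) {
        digit = String(Math.max(...a)).length - 1;  // position of the most significant digit
    }
    if (digit < 0) return a;                        // all digits processed
    const divisor = 10 ** digit;
    const buckets = Array.from({ length: 10 }, () => []);
    for (const x of a) {
        buckets[Math.floor(x / divisor) % 10].push(x);           // scatter by current digit
    }
    let k = 0;
    for (const b of buckets) {
        for (const x of radixSortMSD(b, digit - 1)) a[k++] = x;  // recurse on each bucket
    }
    return a;
}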

✓ Pros

  • Can terminate early for buckets
  • Works well for variable-length strings
  • Satisfying left-to-right visualization

✗ Cons

  • More complex than LSD
  • Recursive overhead
  • Extra space for buckets

Bucket Sort

Time: O(n + k) · Space: O(n)

Bucket Sort is a distribution sort that scatters elements into "buckets" based on their value ranges, sorts each bucket, then gathers them back together.

How It Works

  1. Create empty buckets covering the value range
  2. Scatter: place each element in its corresponding bucket
  3. Sort each bucket (using Insertion Sort or similar)
  4. Gather: concatenate all buckets in order

Visualization: Elements are color-coded by bucket assignment, creating a rainbow effect before the final gather phase.
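
A sketch of the scatter-sort-gather cycle (each bucket is sorted with the built-in sort here; insertion sort works too):

function bucketSort(a, bucketCount = 10) {
    if (a.length === 0) return a;
    const min = Math.min(...a), max = Math.max(...a);
    const span = (max - min) / bucketCount || 1;        // value width of each bucket
    const buckets = Array.from({ length: bucketCount }, () => []);
    for (const x of a) {
        const idx = Math.min(bucketCount - 1, Math.floor((x - min) / span));
        buckets[idx].push(x);                           // scatter
    }
    let k = 0;
    for (const b of buckets) {
        b.sort((p, q) => p - q);                        // sort each bucket
        for (const x of b) a[k++] = x;                  // gather in order
    }
    return a;
}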

✓ Pros

  • O(n) for uniformly distributed data
  • Stable if bucket sort is stable
  • Great scatter-gather visualization

✗ Cons

  • O(n²) if all elements in one bucket
  • Requires knowledge of data distribution
  • Extra space for buckets

Counting Sort

Time: O(n + k) · Space: O(k)

Counting Sort is a non-comparison sorting algorithm that counts the occurrences of each unique value and uses arithmetic to determine positions. It achieves linear time for small integer ranges.

How It Works

  1. Find the range (min to max) of values
  2. Create a count array of size (max - min + 1)
  3. Count occurrences of each value
  4. Reconstruct the sorted array from counts

Speed Demon: With small integer ranges, this achieves O(n) linear time, beating the Ω(n log n) lower bound that applies to every comparison-based sort!
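
A short sketch (integer inputs assumed):

function countingSort(a) {
    if (a.length === 0) return a;
    const min = Math.min(...a), max = Math.max(...a);
    const counts = new Array(max - min + 1).fill(0);    // one slot per possible value
    for (const x of a) counts[x - min]++;               // tally occurrences
    let k = 0;
    for (let v = 0; v < counts.length; v++) {
        while (counts[v]-- > 0) a[k++] = v + min;       // rebuild the array in order
    }
    return a;
}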

✓ Pros

  • O(n) linear time complexity
  • Beats comparison-based sorts when the value range is small
  • Simple implementation

✗ Cons

  • Only works for integers
  • Memory grows with value range
  • Impractical for large value ranges

Gravity (Bead) Sort

Time: O(n × max) · Space: O(n × max)

Gravity Sort simulates beads falling on vertical poles. Imagine an abacus turned on its side—beads fall due to gravity, naturally sorting by count.

How It Works

  1. Represent each number as a row of beads
  2. Let beads "fall" due to gravity
  3. Count beads in each column to get sorted values
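
A sketch for non-negative integers; poles[j] counts the beads that land on pole j, and reading rows after "gravity" recovers the sorted values:

function beadSort(a) {
    const n = a.length;
    const max = Math.max(...a, 0);
    const poles = new Array(max).fill(0);
    for (const x of a) {
        for (let j = 0; j < x; j++) poles[j]++;   // drop x beads onto the first x poles
    }
    // row i from the bottom ends up holding the i-th largest value:
    // its bead count is the number of poles carrying more than i beads
    for (let i = 0; i < n; i++) {
        let count = 0;
        while (count < max && poles[count] > i) count++;
        a[n - 1 - i] = count;
    }
    return a;
}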

✓ Pros

  • Intuitive physical analogy
  • O(n) with hardware implementation
  • Educational and fun to visualize

✗ Cons

  • Huge memory usage (O(n × max))
  • Only works for positive integers
  • Impractical in software

Bitonic Sort

Time: O(n log² n) · Space: O(1)

Bitonic Sort first creates a "bitonic sequence" (ascending then descending) and then merges it. It's designed for parallel processing.

How It Works

  1. Recursively build bitonic sequences
  2. Merge bitonic sequences in alternating directions
  3. Compare-and-swap pairs at decreasing distances

Note: Works best when array size is a power of 2. Our implementation pads the array if needed.
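
A sketch assuming the array length is already a power of 2 (per the note, the visualizer pads otherwise):

function bitonicSort(a) {
    sortPart(a, 0, a.length, true);
    return a;
}

function sortPart(a, lo, n, ascending) {
    if (n <= 1) return;
    const half = n / 2;
    sortPart(a, lo, half, true);            // ascending half
    sortPart(a, lo + half, half, false);    // descending half: together, a bitonic sequence
    bitonicMerge(a, lo, n, ascending);
}

function bitonicMerge(a, lo, n, ascending) {
    if (n <= 1) return;
    const half = n / 2;
    for (let i = lo; i < lo + half; i++) {
        // compare-and-swap pairs 'half' apart in the requested direction
        if ((a[i] > a[i + half]) === ascending) {
            [a[i], a[i + half]] = [a[i + half], a[i]];
        }
    }
    bitonicMerge(a, lo, half, ascending);
    bitonicMerge(a, lo + half, half, ascending);
}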

✓ Pros

  • Highly parallelizable
  • Consistent performance
  • Great for GPU/hardware sorting

✗ Cons

  • Not stable
  • Requires power-of-2 sized arrays
  • More swaps than necessary

Comb Sort

Time: O(n²) worst case · Space: O(1)

Comb Sort improves on Bubble Sort by comparing elements separated by a gap that shrinks over time. The shrink factor is typically 1.3.

How It Works

  1. Start with gap = n/1.3
  2. Compare and swap elements 'gap' apart
  3. Shrink gap by dividing by 1.3
  4. Continue until gap = 1 and no swaps occur
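
In code:

function combSort(a) {
    const SHRINK = 1.3;
    let gap = a.length;
    let swapped = true;
    while (gap > 1 || swapped) {
        gap = Math.max(1, Math.floor(gap / SHRINK));   // shrink the gap each pass
        swapped = false;
        for (let i = 0; i + gap < a.length; i++) {
            if (a[i] > a[i + gap]) {
                [a[i], a[i + gap]] = [a[i + gap], a[i]];
                swapped = true;
            }
        }
    }
    return a;
}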

✓ Pros

  • Much faster than Bubble Sort
  • Simple implementation
  • In-place with O(1) space

✗ Cons

  • Not stable
  • Worst case still O(n²)
  • Slower than O(n log n) sorts

Gnome Sort

Time: O(n²) · Space: O(1)

Gnome Sort is similar to Insertion Sort but simpler. It's called "Gnome Sort" because it works like a garden gnome sorting flower pots.

How It Works

  1. If current element ≥ previous, move forward
  2. If current element < previous, swap and move back
  3. When position 0 is reached, move forward
  4. Continue until end of array is reached
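
The whole algorithm fits in a few lines:

function gnomeSort(a) {
    let pos = 0;
    while (pos < a.length) {
        if (pos === 0 || a[pos] >= a[pos - 1]) {
            pos++;                                        // in order: step forward
        } else {
            [a[pos], a[pos - 1]] = [a[pos - 1], a[pos]];  // out of order: swap, step back
            pos--;
        }
    }
    return a;
}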

✓ Pros

  • Extremely simple code
  • Stable sort
  • Adaptive for nearly sorted data

✗ Cons

  • O(n²) time complexity
  • Slow for large arrays
  • No practical advantage over Insertion Sort

Cocktail Shaker Sort

Time: O(n²) · Space: O(1)

Cocktail Shaker Sort (bidirectional Bubble Sort) alternates between scanning left-to-right and right-to-left, addressing the "turtle" problem in regular Bubble Sort.

How It Works

  1. Bubble forward (left to right)
  2. Bubble backward (right to left)
  3. Repeat, shrinking the unsorted range from both ends
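
A sketch of the bidirectional passes:

function cocktailShakerSort(a) {
    let lo = 0, hi = a.length - 1, swapped = true;
    while (swapped) {
        swapped = false;
        for (let i = lo; i < hi; i++) {       // forward pass: bubble the max right
            if (a[i] > a[i + 1]) { [a[i], a[i + 1]] = [a[i + 1], a[i]]; swapped = true; }
        }
        hi--;                                 // largest element is now in place
        for (let i = hi; i > lo; i--) {       // backward pass: bubble the min left
            if (a[i - 1] > a[i]) { [a[i - 1], a[i]] = [a[i], a[i - 1]]; swapped = true; }
        }
        lo++;                                 // smallest element is now in place
    }
    return a;
}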

✓ Pros

  • Faster than Bubble Sort
  • Handles "turtles" (small values at end)
  • Stable sort

✗ Cons

  • Still O(n²) complexity
  • Slow for large arrays
  • Outclassed by efficient algorithms

Odd-Even Sort

Time: O(n²) · Space: O(1)

Odd-Even Sort alternates between comparing/swapping odd-indexed pairs and even-indexed pairs. It's designed for parallel processing.

How It Works

  1. Compare all (odd, odd+1) pairs, swap if needed
  2. Compare all (even, even+1) pairs, swap if needed
  3. Repeat until no swaps occur in a full pass
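
A sequential sketch (on parallel hardware, each pass's comparisons can run simultaneously):

function oddEvenSort(a) {
    let sorted = false;
    while (!sorted) {
        sorted = true;
        for (let i = 1; i + 1 < a.length; i += 2) {   // odd-indexed pairs
            if (a[i] > a[i + 1]) { [a[i], a[i + 1]] = [a[i + 1], a[i]]; sorted = false; }
        }
        for (let i = 0; i + 1 < a.length; i += 2) {   // even-indexed pairs
            if (a[i] > a[i + 1]) { [a[i], a[i + 1]] = [a[i + 1], a[i]]; sorted = false; }
        }
    }
    return a;
}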

✓ Pros

  • Easily parallelizable
  • Simple implementation
  • Stable sort

✗ Cons

  • O(n²) time complexity
  • Slow on single processor
  • No advantage for sequential execution

Cycle Sort

Time: O(n²) · Space: O(1)

Cycle Sort minimizes the number of writes to memory. It places each element in its correct position by counting smaller elements.

How It Works

  1. For each element, count how many elements are smaller
  2. Place the element at that position
  3. Take the displaced element and repeat
  4. Continue until the cycle returns to the start

Use Case: Optimal for flash memory or SSDs where write cycles are limited.
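
A sketch of the cycle-rotation logic:

function cycleSort(a) {
    for (let start = 0; start < a.length - 1; start++) {
        let item = a[start];
        let pos = start;
        for (let i = start + 1; i < a.length; i++) if (a[i] < item) pos++;  // final position
        if (pos === start) continue;              // already in place
        while (item === a[pos]) pos++;            // skip past duplicates
        [a[pos], item] = [item, a[pos]];          // write item, pick up the displaced value
        while (pos !== start) {                   // rotate the rest of the cycle
            pos = start;
            for (let i = start + 1; i < a.length; i++) if (a[i] < item) pos++;
            while (item === a[pos]) pos++;
            [a[pos], item] = [item, a[pos]];
        }
    }
    return a;
}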

✓ Pros

  • Minimum possible writes
  • Optimal for flash/EEPROM
  • In-place with O(1) space

✗ Cons

  • O(n²) time complexity
  • Very slow comparisons
  • Not stable

Pancake Sort

Time: O(n²) · Space: O(1)

Pancake Sort uses only one operation: flip the first k elements. It's like sorting a stack of pancakes using a spatula.

How It Works

  1. Find the largest unsorted pancake
  2. Flip it to the top
  3. Flip it to its correct position
  4. Repeat for the remaining unsorted pancakes
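
A sketch where flip(k) reverses the first k + 1 elements:

function pancakeSort(a) {
    const flip = (k) => {                        // reverse a[0..k], like flipping a stack
        for (let i = 0, j = k; i < j; i++, j--) [a[i], a[j]] = [a[j], a[i]];
    };
    for (let end = a.length - 1; end > 0; end--) {
        let maxIdx = 0;                          // find the largest unsorted pancake
        for (let i = 1; i <= end; i++) if (a[i] > a[maxIdx]) maxIdx = i;
        if (maxIdx === end) continue;            // already in its final slot
        if (maxIdx > 0) flip(maxIdx);            // flip it to the top
        flip(end);                               // flip it down into position
    }
    return a;
}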

✓ Pros

  • Only uses prefix reversals
  • Fun theoretical problem
  • In-place with O(1) space

✗ Cons

  • O(n²) time complexity
  • Not practical for real sorting
  • Not stable

Strand Sort

Time: O(n²) · Space: O(n)

Strand Sort repeatedly extracts sorted "strands" (ascending subsequences) from the input and merges them into the output. It's optimized for data with existing sorted runs.

How It Works

  1. Extract a "strand": take first element, then add any following elements that are larger
  2. Merge the strand into the output (like Merge Sort's merge step)
  3. Repeat until all elements are extracted

Visual Pattern: Looks like untangling a knot—sorted strands are pulled out and woven together.
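
An array-based sketch (classic descriptions use linked lists to make strand removal cheap):

function strandSort(input) {
    let rest = [...input];
    let output = [];
    while (rest.length > 0) {
        const strand = [rest[0]];                  // start a new ascending strand
        const leftover = [];
        for (let i = 1; i < rest.length; i++) {
            if (rest[i] >= strand[strand.length - 1]) strand.push(rest[i]);  // extend
            else leftover.push(rest[i]);           // save for a later strand
        }
        const merged = [];                         // merge the strand into the output
        let i = 0, j = 0;
        while (i < output.length && j < strand.length) {
            merged.push(output[i] <= strand[j] ? output[i++] : strand[j++]);
        }
        output = merged.concat(output.slice(i), strand.slice(j));
        rest = leftover;
    }
    return output;
}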

✓ Pros

  • O(n) for already sorted data
  • Stable sort
  • Efficient for data with natural runs

✗ Cons

  • O(n²) worst case
  • Requires extra space
  • Linked list manipulation overhead

Stooge Sort

Time: O(n^2.7) · Space: O(log n)

Stooge Sort is a notoriously inefficient recursive algorithm. It's slower than Bubble Sort and is mainly of theoretical interest.

How It Works

  1. If first element > last element, swap them
  2. If there are 3+ elements:
  3. Stooge sort the first 2/3
  4. Stooge sort the last 2/3
  5. Stooge sort the first 2/3 again

Warning: Extremely slow! Use small arrays only.
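
The recursion is short enough to show whole:

function stoogeSort(a, lo = 0, hi = a.length - 1) {
    if (a[lo] > a[hi]) [a[lo], a[hi]] = [a[hi], a[lo]];  // fix the endpoints
    if (hi - lo + 1 >= 3) {
        const third = Math.floor((hi - lo + 1) / 3);
        stoogeSort(a, lo, hi - third);       // first 2/3
        stoogeSort(a, lo + third, hi);       // last 2/3
        stoogeSort(a, lo, hi - third);       // first 2/3 again
    }
    return a;
}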

✓ Pros

  • Interesting recursive structure
  • Educational value
  • Always terminates (unlike Bogo)

✗ Cons

  • O(n^2.7) time complexity
  • Slower than Bubble Sort
  • No practical use whatsoever

Bogo Sort

Time: O((n+1)!) · Space: O(1)

Bogo Sort (also called "stupid sort" or "slow sort") randomly shuffles the array until it happens to be sorted. It's the ultimate example of what NOT to do.

How It Works

  1. Check if the array is sorted
  2. If not, randomly shuffle all elements
  3. Repeat until sorted (could be never!)

Warning: Expected time grows factorially. A 10-element array averages 3.6 million shuffles!
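
A sketch (do not run this on anything bigger than a handful of elements):

function bogoSort(a) {
    const isSorted = () => a.every((x, i) => i === 0 || a[i - 1] <= x);
    while (!isSorted()) {
        for (let i = a.length - 1; i > 0; i--) {       // Fisher-Yates shuffle
            const j = Math.floor(Math.random() * (i + 1));
            [a[i], a[j]] = [a[j], a[i]];
        }
    }
    return a;
}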

✓ Pros

  • Extremely simple concept
  • O(1) space complexity
  • Great for teaching what NOT to do

✗ Cons

  • O((n+1)!) average time
  • May never complete
  • The worst sorting algorithm ever

Circle Sort

Time: O(n log n) · Space: O(log n)

Circle Sort compares pairs of elements equidistant from the center, recursively splitting and comparing until sorted.

How It Works

  1. Compare and swap pairs: first with last, second with second-to-last, etc.
  2. Recursively apply to left and right halves
  3. Repeat until no swaps occur in a full pass
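
A sketch of the mirrored-pair passes:

function circleSort(a) {
    function pass(lo, hi) {                    // returns true if any swap happened
        if (lo >= hi) return false;
        let swapped = false;
        let i = lo, j = hi;
        while (i < j) {
            if (a[i] > a[j]) {                 // compare the mirrored pair
                [a[i], a[j]] = [a[j], a[i]];
                swapped = true;
            }
            i++; j--;
        }
        if (i === j && j + 1 <= hi && a[i] > a[j + 1]) {  // odd-length middle pair
            [a[i], a[j + 1]] = [a[j + 1], a[i]];
            swapped = true;
        }
        const mid = lo + Math.floor((hi - lo) / 2);
        const left = pass(lo, mid);            // recurse into both halves
        const right = pass(mid + 1, hi);
        return swapped || left || right;
    }
    while (pass(0, a.length - 1)) { /* repeat until a pass makes no swaps */ }
    return a;
}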

✓ Pros

  • O(n log n) average performance
  • In-place sorting
  • Interesting recursive pattern

✗ Cons

  • Not stable
  • May need multiple passes
  • Less predictable than Merge Sort

Algorithm Race

The Compare tab allows you to race multiple algorithms against each other on the same array data. This provides a direct visual comparison of their behavior and efficiency.

Features

  • Up to 25 Racers: Race as many as 25 algorithms at once (the grid layout adjusts automatically).
  • Identical Data: All algorithms sort a copy of the exact same initial array.
  • Real-time Stats: Watch the comparison and swap counters tick up live.
  • Leaderboard: See who finished first once the race is complete.

Custom Algorithms

The Custom tab gives you the power to implement your own sorting algorithms using JavaScript! Your code runs in a sandboxed environment and controls the visualizer directly.

API Reference

You have access to the following variables and functions:

  • arr: The array of numbers to sort (0-100). Valid indices: 0 to arr.length - 1.
  • await compare(i, j): Highlights the bars at indices i and j (yellow). Returns nothing.
  • await swap(i, j): Swaps elements at indices i and j and highlights them (red). Updates the visualizer.
  • await sleep(ms): Pauses execution for ms milliseconds.
  • stats.comp: Counter for comparisons (increments automatically by compare()).
  • stats.swap: Counter for swaps (increments automatically by swap()).

Example: Bubble Sort

let n = arr.length;
for (let i = 0; i < n - 1; i++) {
    for (let j = 0; j < n - i - 1; j++) {
        // Visualize comparison
        await compare(j, j + 1);
        
        if (arr[j] > arr[j + 1]) {
            // Swap and visualize
            await swap(j, j + 1);
        }
    }
}
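
Example: Gnome Sort

A second sketch against the same sandbox API, assuming compare() and swap() behave exactly as documented above:

let pos = 1;
while (pos < arr.length) {
    // Highlight the pair being examined
    await compare(pos - 1, pos);
    if (arr[pos - 1] <= arr[pos]) {
        pos++;                        // in order: step forward
    } else {
        await swap(pos - 1, pos);     // out of order: swap and step back
        pos = Math.max(1, pos - 1);
    }
}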