Consider sorting \(n\) numbers stored in array \(A\) by first finding the smallest element of \(A\) and exchanging it with the element in \(A[1]\). Then find the second smallest element of \(A\), and exchange it with \(A[2]\). Continue in this manner for the first \(n-1\) elements of \(A\). Write pseudocode for this algorithm, which is known as selection sort. What loop invariant does this algorithm maintain? Why does it need to run for only the first \(n-1\) elements, rather than for all \(n\) elements? Give the best-case and worst-case running times of selection sort in \(\Theta\)-notation.
SELECTION-SORT(A)
1  for i = 1 to A.length - 1
2      smallest_val = A[i]
3      smallest_pos = i
4      for j = i + 1 to A.length
5          if A[j] < smallest_val
6              smallest_val = A[j]
7              smallest_pos = j
8      A[smallest_pos] = A[i]
9      A[i] = smallest_val
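As a sanity check, here is a direct translation of the pseudocode above into runnable Python, with the 1-based pseudocode indices shifted to Python's 0-based indexing:

```python
def selection_sort(a):
    """In-place selection sort, mirroring the pseudocode above."""
    n = len(a)
    for i in range(n - 1):            # line 1: only the first n-1 positions
        smallest_val = a[i]           # lines 2-3: current minimum and its index
        smallest_pos = i
        for j in range(i + 1, n):     # lines 4-7: scan the unsorted suffix
            if a[j] < smallest_val:
                smallest_val = a[j]
                smallest_pos = j
        a[smallest_pos] = a[i]        # lines 8-9: swap the minimum into A[i]
        a[i] = smallest_val
    return a


selection_sort([5, 2, 4, 6, 1, 3])  # [1, 2, 3, 4, 5, 6]
```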
Loop Invariant At the start of each iteration of the for loop of lines 1-9, the subarray \(A[1 \ldots i-1]\) contains the \(i-1\) smallest elements of \(A\), in sorted order. At the start of each iteration of the for loop of lines 4-7, smallest_val contains the smallest element of the subarray \(A[i \ldots j-1]\), and smallest_pos contains that element's index in the array.
The algorithm needs to run for only the first \(n-1\) elements because, by the loop invariant, after the first \(n-1\) iterations the subarray \(A[1 \ldots n-1]\) contains the \(n-1\) smallest elements of \(A\) in sorted order. The \(n\)th element is therefore the largest, has nothing left to be compared against, and is already in its correct position.
Both the best-case and worst-case running times are \(\Theta(n^2)\): regardless of the input order, the inner loop of lines 4-7 executes the comparison on line 5 exactly \(\sum_{i=1}^{n-1}(n-i) = n(n-1)/2\) times.
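This input-independence can be checked empirically. The sketch below (an instrumented variant, not part of the original solution) counts the line-5 comparisons and shows the count is \(n(n-1)/2\) for both a sorted and a reverse-sorted input:

```python
def count_comparisons(a):
    """Run selection sort on a copy of `a` and count element comparisons.

    The count depends only on len(a), never on the input order.
    """
    a = list(a)
    comparisons = 0
    for i in range(len(a) - 1):
        smallest = i
        for j in range(i + 1, len(a)):
            comparisons += 1          # one comparison per inner-loop pass
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return comparisons


n = 8
best = count_comparisons(range(n))          # already sorted (best case)
worst = count_comparisons(range(n, 0, -1))  # reverse sorted (worst case)
# both equal n * (n - 1) // 2 = 28
```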