Exercise 5.1-3

Suppose that you want to output 0 with probability \(1/2\) and 1 with probability \(1/2\). At your disposal is a procedure BIASED-RANDOM, which outputs either 0 or 1. It outputs 1 with some probability \(p\) and 0 with probability \(1-p\), where \(0 < p < 1\), but you do not know what \(p\) is. Give an algorithm that uses BIASED-RANDOM as a subroutine, and returns an unbiased answer, returning 0 with probability \(1/2\) and 1 with probability \(1/2\). What is the expected running time of your algorithm as a function of \(p\)?

The trick is to call BIASED-RANDOM twice. Since the two calls are independent, there are four possible outcomes with the following probabilities:

\[\begin{array} {|r|r|} \hline \text{Outcome} & \text{Probability} \\ \hline (0, 0) & (1-p)^2 \\ \hline (1, 1) & p^2 \\ \hline (1, 0) & p(1-p) \\ \hline (0, 1) & (1-p)p \\ \hline \end{array}\]

We can take advantage of the fact that the last two possibilities have an equal probability of occurring by repeatedly calling BIASED-RANDOM twice until we get one of those two outcomes. For convenience, we simply take the first value of the pair as our result.

    UNBIASED-RANDOM():
        x = BIASED-RANDOM()
        y = BIASED-RANDOM()
        if x == y:
            return UNBIASED-RANDOM()
        else:
            return x
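To see the idea in action, here is a minimal Python sketch of the same procedure; `biased_random` (and its bias of 0.3) is just a stand-in for BIASED-RANDOM used for simulation, not part of the exercise.

```python
import random

def biased_random(p=0.3):
    # Stand-in for BIASED-RANDOM: returns 1 with probability p, 0 otherwise.
    # The bias 0.3 is arbitrary; unbiased_random never needs to know it.
    return 1 if random.random() < p else 0

def unbiased_random():
    # Repeat pairs of biased calls until they differ; (0, 1) and (1, 0)
    # are equally likely, so returning the first value is unbiased.
    while True:
        x = biased_random()
        y = biased_random()
        if x != y:
            return x

if __name__ == "__main__":
    trials = 100_000
    ones = sum(unbiased_random() for _ in range(trials))
    print(f"fraction of 1s: {ones / trials:.3f}")  # should be close to 0.500
```

The loop here plays the role of the recursive call in the pseudocode; either form works, since the recursion is a tail call.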

Calculating the expected running time of UNBIASED-RANDOM is best done by treating the procedure as a sequence of Bernoulli trials (described in appendix C.4). The success case is when the two calls to BIASED-RANDOM differ, which happens with probability \(2p(1-p)\). The number of trials until the first success is therefore geometrically distributed, and by equation (C.32) its expectation is \(\frac{1}{2p(1-p)}\). Since each trial takes \(\Theta(1)\) time, the expected running time is \(\Theta\!\left(\frac{1}{p(1-p)}\right)\).
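As a concrete illustration (the value \(p = 0.1\) is an arbitrary choice, not part of the exercise):

\[2p(1-p) = 2(0.1)(0.9) = 0.18, \qquad \frac{1}{2p(1-p)} = \frac{1}{0.18} \approx 5.6,\]

so on average the algorithm makes about 5.6 pairs of calls to BIASED-RANDOM before terminating. As \(p\) approaches 0 or 1, the expected running time grows without bound.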