One of the central problems in the Theory of (feasible) Computation is to ascertain to what extent, if any, randomization makes algorithms more powerful. This broad question is the motivation behind several very challenging open problems, such as whether randomization expands the class of problems solvable in polynomial time, or whether randomization helps to compute problems fast in parallel in the PRAM model. While these problems appear today to be as unfathomable as they were when first formulated a couple of decades ago, the question of the power of randomization can be settled in the more restricted setting of distributed computation.
We saw in the last lecture that no 1-resilient deterministic protocol for consensus exists. If we allow randomization, however, consensus can be solved. In fact, we shall give an f-resilient protocol for any f < n/2. This is an instance where randomization makes a computation model strictly more powerful. As usual, we shall consider only crash failures.
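To give a feel for how randomization breaks the deterministic impossibility, here is a minimal simulation sketch in the style of Ben-Or's randomized consensus protocol. It is an illustration only, not necessarily the protocol presented later: it assumes a synchronous, failure-free execution (every process hears all n messages in each phase), binary inputs, and an externally supplied random source; the function name and parameters are our own.

```python
import random

def ben_or(prefs, f, rng, max_rounds=1000):
    """Failure-free, synchronous simulation of a Ben-Or style execution.

    prefs: initial binary preferences, one per process.
    f:     resilience bound, assumed f < n/2.
    rng:   source of the local coin flips.
    Returns the list of decided values (agreement: all equal).
    """
    n = len(prefs)
    prefs = list(prefs)
    decided = [None] * n
    for _ in range(max_rounds):
        # Phase 1: every process broadcasts its current preference.
        # In this failure-free run, everyone receives all n reports.
        ones = sum(prefs)
        if ones > n // 2:
            proposal = 1          # strict majority for 1
        elif (n - ones) > n // 2:
            proposal = 0          # strict majority for 0
        else:
            proposal = None       # no strict majority: propose "?"
        proposals = [proposal] * n
        # Phase 2: decide on w if at least f+1 processes proposed w;
        # otherwise adopt a proposed value, or flip a coin if none exists.
        for i in range(n):
            if proposal is not None and proposals.count(proposal) >= f + 1:
                decided[i] = proposal
                prefs[i] = proposal
            elif proposal is not None:
                prefs[i] = proposal
            else:
                prefs[i] = rng.randint(0, 1)  # local fair coin
        if all(d is not None for d in decided):
            return decided
    return decided
```

Note the role of the coin: when the preferences are evenly split, no deterministic rule can force progress, but independent fair coins eventually land all processes on the same value, after which a majority exists and a decision follows in the next round. This is exactly the leverage randomization provides.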