In this lecture we will continue our study of the
maximal (as opposed to the *maximum*)
independent set problem (henceforth MIS).
We will still consider the synchronous, message-passing model but
the underlying topology will be that of
any graph *G*=(*V*, *E*), not just the ring.

It is worth pointing out that the problem is absolutely trivial in a sequential setting, as the following linear-time greedy algorithm shows: pick a vertex, put it into the independent set, and remove its neighbours; pick another vertex, put it into the independent set, and remove its neighbours; and so on.
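For concreteness, the greedy procedure can be sketched in a few lines of Python (the adjacency-dictionary representation and the 5-cycle example are our own illustration, not part of the lecture):

```python
def greedy_mis(adj):
    """Sequential greedy MIS: pick a vertex, add it to the independent
    set, discard its neighbours, and repeat until no vertex remains."""
    remaining = set(adj)
    mis = set()
    while remaining:
        u = remaining.pop()      # pick any surviving vertex
        mis.add(u)
        remaining -= adj[u]      # its neighbours can never be chosen
    return mis

# Example: the 5-cycle 0-1-2-3-4-0.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
mis = greedy_mis(adj)
assert all(adj[u].isdisjoint(mis) for u in mis)    # independent
assert all(adj[v] & mis for v in set(adj) - mis)   # maximal
```

Each vertex is touched a constant number of times, so the running time is linear in the size of the graph.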

In some sense we would like to parallelize this simple algorithm.
Perhaps surprisingly, this turns out to be extremely hard to do.
In fact, this is hard to do even in the PRAM model, where this
problem was originally studied. This seemingly simple
problem is so challenging that, in order to solve it, it was necessary
to develop a new and very interesting method
called *derandomization*, to which we will return.
The solution we will develop makes use of randomization;
whether there exist deterministic algorithms for MIS running
in a poly-logarithmic number of rounds is a challenging open problem.

The algorithm we discuss is known as Luby's algorithm, after its inventor [26]. Essentially the same algorithm and analysis, however, can be found in [1]. Here we follow the crisp presentation of [22].

Although Luby's algorithm was originally developed in the PRAM model, it is also a bona fide distributed algorithm. We now describe it.

The algorithm runs in stages. During each stage the algorithm
computes an independent set *I* and removes the set
$I \cup N(I)$ from the graph, where $N(I)$ denotes the set of
neighbours of vertices in *I*. If the leftover graph is non-empty it becomes
the input to the next stage, otherwise the algorithm stops.
The output of the algorithm is the union of all sets *I*.
It is easy to see that this gives a MIS: each set *I* is independent, sets produced in different stages cannot be adjacent (the neighbours of each *I* are removed with it), and every vertex eventually either joins some *I* or has a neighbour that does.

A stage of the algorithm is as follows.

1. Each vertex *u*, in parallel, enters a set *S* of candidates with probability 1/(2*d*(*u*)).
2. If both *u* and one of its neighbours are in *S*, discard the lower degree vertex (ties are resolved arbitrarily). The resulting set is *I*.
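The stage, and the outer loop over stages, can be sketched in Python as follows. This is a sequential simulation, not a distributed implementation; the graph representation, the tie-breaking by vertex id, and the special handling of isolated vertices (which may simply join the MIS) are our own choices, since the lecture leaves ties arbitrary:

```python
import random

def luby_stage(adj, rng):
    """One stage: each vertex of positive degree enters the candidate
    set S with probability 1/(2 d(u)); for every edge inside S the
    lower-degree endpoint is discarded (ties broken by vertex id)."""
    S = {u for u in adj if adj[u] and rng.random() < 1 / (2 * len(adj[u]))}
    I = set(S)
    for u in S:
        for v in adj[u] & S:
            # both endpoints are candidates: drop the lower-degree one
            loser = u if (len(adj[u]), u) < (len(adj[v]), v) else v
            I.discard(loser)
    return I

def luby_mis(adj, seed=0):
    rng = random.Random(seed)
    adj = {u: set(nbrs) for u, nbrs in adj.items()}   # local copy
    mis = set()
    while adj:
        isolated = {u for u in adj if not adj[u]}     # join I for free
        I = luby_stage(adj, rng) | isolated
        mis |= I
        removed = I | {v for u in I for v in adj[u]}  # remove I and N(I)
        adj = {u: adj[u] - removed for u in adj if u not in removed}
    return mis

# Example: the 8-cycle.
adj = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}
mis = luby_mis(adj)
assert all(adj[u].isdisjoint(mis) for u in mis)    # independent
assert all(adj[v] & mis for v in set(adj) - mis)   # maximal
```

Whatever the random choices, the output is always a maximal independent set; randomness only affects how many stages are needed.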

The basic problem is to show that during each stage enough progress is made. In particular, we want to show that the expected number of stages is $O(\log n)$. To do this, we will show that each stage removes an expected constant fraction of all the edges of the current graph. We define

$$L(u) := \{\, v \in N(u) : d(v) \le d(u) \,\}$$

and call it the set of low degree neighbours of *u*.
**Definition 13080**

A vertex *u* is *good* if $|L(u)| \ge d(u)/3$.

Intuitively, a good vertex has a lot of low degree neighbours and hence has a good chance of making it into $I \cup N(I)$, i.e., of being removed.

**Fact 13084**

If *u* is good, then $\sum_{v \in L(u)} \frac{1}{2d(v)} \ge \frac{1}{6}$.

**Proof. **Every $v \in L(u)$ satisfies $d(v) \le d(u)$, and hence $\frac{1}{2d(v)} \ge \frac{1}{2d(u)}$. Since *u* is good, $|L(u)| \ge d(u)/3$, and therefore

$$\sum_{v \in L(u)} \frac{1}{2d(v)} \;\ge\; \frac{|L(u)|}{2d(u)} \;\ge\; \frac{d(u)}{3}\cdot\frac{1}{2d(u)} \;=\; \frac{1}{6}.$$

**Definition 13088**

An edge *e* = (*u*,*v*) is good if at least one of its endpoints is good.
A bad edge is an edge which is not good.

With these definitions at hand we make our plan more specific.
First, we show that in any graph, at least half of the edges are good.
Second, we show that the probability that a good edge is removed
during a stage is bounded below by a positive constant. This means that the expected
number of good edges which are removed at each stage is a constant
fraction of the total, and this implies that
the expected number of stages is $O(\log n)$, where *n* is the number
of vertices of the input graph.
A more sophisticated analysis actually shows that the running time
is $O(\log n)$ with high probability [13,20].

**Lemma 13095**

Let *E*_{G} and *E*_{B} denote the sets of good
and bad edges respectively. Then

$$|E_G| \;\ge\; \frac{|E|}{2}.$$

**Proof. **We define a mapping $f : E_B \to E \times E$ such that each $f(e)$ is a pair of distinct edges and

$$f(e_1) \cap f(e_2) = \emptyset \qquad \text{for all distinct } e_1, e_2 \in E_B.$$

Such a mapping charges two distinct edges of the graph to each bad edge, so $2|E_B| \le |E|$, which is equivalent to the claim.
The mapping is defined as follows.
First, direct each edge of *G* toward its higher degree endpoint,
resolving ties arbitrarily.
Focus now on a bad vertex. This vertex will have some
edges pointing inward and some outward. Inward edges can come only from
neighbours of lower or equal degree and, by definition of bad
vertex, fewer than one third of the neighbours have such degree; equivalently, more than two
thirds of the neighbours have strictly higher degree.
Therefore, there are at least
twice as many edges pointing outward as edges pointing inward.
This enables us to associate a pair of outward edges to every inward edge,
each time using two brand new outward edges.
Notice now that every bad edge points into a bad vertex: its head is its
higher degree endpoint, and both endpoints of a bad edge are bad.
Hence, this defines a mapping $f : E_B \to E \times E$.

Does *f* satisfy the properties we want?
By construction each *f*(*e*) is a pair of distinct edges.
To see that $f(e_1) \cap f(e_2) = \emptyset$ for any two distinct bad edges $e_1, e_2$, we consider two cases. If $e_1$ and $e_2$ point into the same bad vertex, the two pairs are disjoint because a brand new pair of outward edges was used for each inward edge. If they point into different vertices, the pairs are disjoint because every edge points out of exactly one vertex.
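The lemma is easy to check numerically. The following sketch (our own illustration) verifies on random graphs that good edges always make up at least half of all edges:

```python
import random

def count_good_edges(adj):
    """Return (number of good edges, total number of edges), where a
    vertex u is good if at least d(u)/3 of its neighbours have degree
    at most d(u), and an edge is good if either endpoint is good."""
    deg = {u: len(adj[u]) for u in adj}
    good = {u for u in adj
            if 3 * sum(1 for v in adj[u] if deg[v] <= deg[u]) >= deg[u]}
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    return sum(1 for u, v in edges if u in good or v in good), len(edges)

rng = random.Random(1)
for trial in range(20):
    # random graph on 30 vertices, each edge present with probability 0.1
    adj = {u: set() for u in range(30)}
    for u in range(30):
        for v in range(u + 1, 30):
            if rng.random() < 0.1:
                adj[u].add(v)
                adj[v].add(u)
    g, m = count_good_edges(adj)
    assert 2 * g >= m    # Lemma: at least half of the edges are good
```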

What remains to be shown is that, in expectation, a constant fraction of good edges is removed at each stage. To prove this we will make use of the following two facts.

**Fact 13122**

For every vertex *u*, $\Pr[\,u \in I \mid u \in S\,] \ge \frac{1}{2}$.

**Proof. **We upper bound the probability of the complementary event. A vertex of *S* does not make it into *I* only if
a neighbour whose degree is higher or equal to that of *u* enters *S*.
Therefore, by the union bound and the independence of the coin flips,

$$\Pr[\,u \notin I \mid u \in S\,] \;\le\; \sum_{\substack{v \in N(u)\\ d(v) \ge d(u)}} \Pr[v \in S] \;=\; \sum_{\substack{v \in N(u)\\ d(v) \ge d(u)}} \frac{1}{2d(v)} \;\le\; d(u)\cdot\frac{1}{2d(u)} \;=\; \frac{1}{2}. \tag{11.1}$$

It is convenient at this point to make a very important remark.

**Remark:** Equation (11.1) still holds if the assumption of
the independence of the coin flips is replaced with the
much weaker one of *pairwise independence*.
A collection of events $A_1, A_2, \ldots$ is pairwise independent if, for all
distinct $i$ and $j$, we have $\Pr[A_i \cap A_j] = \Pr[A_i]\,\Pr[A_j]$.
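This remark is what makes derandomization possible: pairwise independent coin flips can be manufactured from very few truly random ones. The standard construction, shown here for fair bits for simplicity (Luby's biased coins need a variant), stretches *k* independent bits into $2^k - 1$ pairwise independent ones by taking parities of subsets. The following check is our own illustration:

```python
from itertools import combinations, product

# Stretch k = 3 independent fair bits into 2**k - 1 pairwise
# independent bits: one bit per nonempty subset of the seed bits,
# equal to the parity (XOR) of the seed bits in that subset.
k = 3
subsets = range(1, 2 ** k)          # bitmasks of the nonempty subsets

def derived(seed):
    return [sum(seed[i] for i in range(k) if s >> i & 1) % 2
            for s in subsets]

samples = [derived(seed) for seed in product([0, 1], repeat=k)]
n = len(samples)                    # the 2**k equally likely seeds

# Each derived bit is uniform: Pr[bit = 1] = 1/2 ...
for j in range(len(subsets)):
    assert 2 * sum(x[j] for x in samples) == n

# ... and any two derived bits are independent: Pr[both = 1] = 1/4.
for j, l in combinations(range(len(subsets)), 2):
    assert 4 * sum(x[j] & x[l] for x in samples) == n
```

Seven pairwise independent bits from only three truly random ones: since there are only $2^k$ seeds, an algorithm whose analysis relies only on pairwise independence can be made deterministic by trying every seed.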

**Fact 13134**

For every vertex *u*, $\Pr[u \in I] \ge \frac{1}{4d(u)}$.

**Proof. **Using Fact 11.2,

$$\Pr[u \in I] \;=\; \Pr[\,u \in I \mid u \in S\,]\cdot\Pr[u \in S] \;\ge\; \frac{1}{2}\cdot\frac{1}{2d(u)} \;=\; \frac{1}{4d(u)}. \tag{11.2}$$

**Lemma 13139**

Let *e*=(*u*,*v*) be a good edge. Then, $\Pr[\,e \text{ is deleted during a stage}\,] \ge \frac{1}{36}$.

**Proof. **Assume without loss of generality that *u* is good. If some vertex of $L(u)$ enters *I*, then $u \in N(I)$ and *e* is deleted. Hence $\Pr[e \text{ is deleted}]$
is lower bounded by $\Pr[\,\exists w \in L(u) : w \in I\,]$,
which we now estimate.
There are two cases to consider.
The first case is: some $w \in L(u)$ has $\frac{1}{2d(w)} \ge \frac{1}{6}$, i.e., $d(w) \le 3$. By Fact 11.3,

$$\Pr[e \text{ is deleted}] \;\ge\; \Pr[w \in I] \;\ge\; \frac{1}{4d(w)} \;\ge\; \frac{1}{12} \;\ge\; \frac{1}{36}.$$

The second case is: every $w \in L(u)$ has $\frac{1}{2d(w)} < \frac{1}{6}$. By Fact 11.1, $\sum_{w \in L(u)} \frac{1}{2d(w)} \ge \frac{1}{6}$, so we can pick a minimal subset $M \subseteq L(u)$ whose sum is at least $\frac{1}{6}$; by minimality and the case assumption,

$$\frac{1}{6} \;\le\; \sum_{w \in M} \frac{1}{2d(w)} \;\le\; \frac{1}{3}. \tag{11.3}$$

By inclusion–exclusion, Fact 11.3, and the inclusion $I \subseteq S$,

$$\Pr[\,\exists w \in M : w \in I\,] \;\ge\; \sum_{w \in M} \Pr[w \in I] - \sum_{\{w,w'\} \subseteq M} \Pr[\,w \in S \wedge w' \in S\,] \;\ge\; \frac{1}{2}\sum_{w \in M} \frac{1}{2d(w)} - \sum_{\{w,w'\} \subseteq M} \frac{1}{2d(w)}\cdot\frac{1}{2d(w')}, \tag{11.4}$$

where the last step uses the independence of the coin flips. By (11.3), the first term is at least $\frac{1}{12}$ and the second is at most $\frac{1}{2}\big(\sum_{w \in M} \frac{1}{2d(w)}\big)^2 \le \frac{1}{18}$, so

$$\Pr[e \text{ is deleted}] \;\ge\; \frac{1}{12} - \frac{1}{18} \;=\; \frac{1}{36}.$$

**Remark:**
Once again, notice that the proof of Lemma 11.2
hinges only on pairwise independence. Specifically, this is used in
Equation (11.4).

Combining the last two lemmata shows that the expected fraction of edges which are deleted per stage is at least 1/72, which concludes the proof.