A critical section of code is framed by an entry section at the beginning and an exit section at the end; these sections acquire and release the ``lock'' protecting the critical section. To simplify the model, we will assume that there is only one critical section, rather than multiple sections of code that must be mutually exclusive. It is easy to see that this simplification does not weaken the model.
When discussing distributed and concurrent algorithms, there are two kinds of properties with which we must be concerned. A safety property is one that guarantees that a bad thing will never happen. Safety properties are fairly easy to prove because only the static state of the system at any particular time need be taken into account. The other kind is a progress property, which guarantees that a good thing will eventually happen. These are much more difficult to prove, since time must be taken into account. Proving the correctness of programs is essential with concurrent algorithms because our intuitive notion of program flow is violated: the value of a shared variable may not be the same from one statement to the next, even if the process we are tracing does not modify it. Only through formal proof techniques can we be guaranteed that the above properties are satisfied.
With the problem of mutual exclusion, one safety property is that of mutual exclusion itself; that is, no more than one process should have its program counter (PC) in the critical code at the same time. To prove this, it suffices to show that one process's PC cannot leave the entry section while another process's PC is between the entry section and the exit section. Another desirable safety property is freedom from deadlock. This property guarantees that it cannot happen that all processes are stuck in the entry section, each waiting for another process to enter the critical section.
There are two progress properties that are desirable with mutual-exclusion algorithms. The first is that of freedom from livelock. This property can be phrased as, ``If some process wants to enter the critical section, then some process will eventually enter the critical section.'' The other progress property, freedom from starvation, is stronger than the first and is phrased as, ``If some process wants to enter the critical section, then that process will eventually enter the critical section.'' The latter property implies the former, but is more difficult to guarantee. Notice also that freedom from livelock implies freedom from deadlock.
The terms process and processor are used interchangeably in this paper, since we assume that only one kernel-level process is running on each processor. Multiple processes on a single processor can be handled in a traditional way (e.g., by disabling interrupts).
Lastly, throughout most of this paper, we assume that no processor will halt (i.e., crash) while inside the critical section. This restriction is necessary because it is generally not possible for a process waiting on a lock to know whether the process that holds the lock has crashed or is simply slow in its use of the shared resource. Halting in the critical section, then, would cause all other processes to block indefinitely in the entry section. The last section of this paper investigates algorithms that are resilient to arbitrary processor crashes.