Operating Systems: Three Easy Pieces [Concurrency Ch.7: Deadlock]


Common Concurrency Problems

Concurrency bugs can be classified into two types:

1) Non-Deadlock Bugs

2) Deadlock bugs

Non-Deadlock Bugs

Non-deadlock bugs fall into two major categories: atomicity-violation bugs and order-violation bugs.

Atomicity-Violation Bugs

The first type of problem encountered is referred to as an atomicity violation. Here is a simple example; before reading the explanation, try to figure out what the bug is.

 Thread 1::
   if (thd->proc_info) {
     fputs(thd->proc_info, ...);
   }

 Thread 2::
   thd->proc_info = NULL;

In the example, two different threads access the field proc_info in the structure thd. The first thread checks if the value is non-NULL and then prints its value; the second thread sets it to NULL. Clearly, if the first thread performs the check but then is interrupted before the call to fputs, the second thread could run in-between, thus setting the pointer to NULL; when the first thread resumes, it will crash, as a NULL pointer will be dereferenced by fputs.

More formally, an atomicity violation is: “The desired serializability among multiple memory accesses is violated (i.e., a code region is intended to be atomic, but the atomicity is not enforced during execution).”

Finding a fix for this type of problem is often (but not always) straightforward. Can you think of how to fix this code?

 pthread_mutex_t proc_info_lock = PTHREAD_MUTEX_INITIALIZER;

 Thread 1::
   pthread_mutex_lock(&proc_info_lock);
   if (thd->proc_info) {
     fputs(thd->proc_info, ...);
   }
   pthread_mutex_unlock(&proc_info_lock);

 Thread 2::
   pthread_mutex_lock(&proc_info_lock);
   thd->proc_info = NULL;
   pthread_mutex_unlock(&proc_info_lock);

In this solution, we simply add locks around the shared variable references, ensuring that when either thread accesses the proc_info field, it has a lock held (proc_info_lock). Of course, any other code that accesses the structure should also acquire this lock before doing so.

Order-Violation Bugs

The second major type of non-deadlock bug is the order violation. Here is another simple example; again, see if you can figure out why the code has a bug in it.

 Thread 1::
   void init() {
     mThread = PR_CreateThread(mMain, ...);
   }

 Thread 2::
   void mMain(...) {
     mState = mThread->State;
   }

As you probably figured out, the code in Thread 2 assumes that the variable mThread has already been initialized (and is not NULL); however, if Thread 2 runs immediately after being created, the value of mThread may not yet be set when it is accessed within mMain(), and the code will likely crash with a NULL-pointer dereference.

The more formal definition of an order violation is the following: “The desired order between two (groups of) memory accesses is flipped (i.e., A should always be executed before B, but the order is not enforced during execution).”

The fix to this type of bug is generally to enforce ordering. As discussed previously, using condition variables is an easy and robust way to add this style of synchronization into modern code bases. In the example above, we could thus rewrite the code as follows:

 pthread_mutex_t mtLock = PTHREAD_MUTEX_INITIALIZER;
 pthread_cond_t mtCond = PTHREAD_COND_INITIALIZER;
 int mtInit = 0;

 Thread 1::
   void init() {
     ...
     mThread = PR_CreateThread(mMain, ...);

     // signal that the thread has been created...
     pthread_mutex_lock(&mtLock);
     mtInit = 1;
     pthread_cond_signal(&mtCond);
     pthread_mutex_unlock(&mtLock);
     ...
   }

 Thread 2::
   void mMain(...) {
     ...
     // wait for the thread to be initialized...
     pthread_mutex_lock(&mtLock);
     while (mtInit == 0)
       pthread_cond_wait(&mtCond, &mtLock);
     pthread_mutex_unlock(&mtLock);

     mState = mThread->State;
     ...
   }

Deadlock Bugs

Deadlock occurs, for example, when a thread (say Thread 1) is holding a lock (L1) and waiting for another one (L2); unfortunately, the thread (Thread 2) that holds lock L2 is waiting for L1 to be released.

Thread 1:               Thread 2:
pthread_mutex_lock(L1); pthread_mutex_lock(L2);
pthread_mutex_lock(L2); pthread_mutex_lock(L1);

Note that if this code runs, deadlock does not necessarily occur; rather, it may occur, if, for example, Thread 1 grabs lock L1 and then a context switch occurs to Thread 2. At that point, Thread 2 grabs L2, and tries to acquire L1. Thus we have a deadlock, as each thread is waiting for the other and neither can run.
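To see this concretely, here is a minimal, self-contained sketch of the above using pthreads (the thread bodies and main() are scaffolding added for illustration, not from the text); run it enough times and it may hang forever:

 #include <pthread.h>
 #include <stdio.h>

 pthread_mutex_t L1 = PTHREAD_MUTEX_INITIALIZER;
 pthread_mutex_t L2 = PTHREAD_MUTEX_INITIALIZER;

 void *t1(void *arg) {
   pthread_mutex_lock(&L1);
   pthread_mutex_lock(&L2); // blocks forever if t2 holds L2 and wants L1
   pthread_mutex_unlock(&L2);
   pthread_mutex_unlock(&L1);
   return NULL;
 }

 void *t2(void *arg) {
   pthread_mutex_lock(&L2); // opposite order: the circular-wait ingredient
   pthread_mutex_lock(&L1);
   pthread_mutex_unlock(&L1);
   pthread_mutex_unlock(&L2);
   return NULL;
 }

 int main(void) {
   pthread_t a, b;
   pthread_create(&a, NULL, t1, NULL);
   pthread_create(&b, NULL, t2, NULL);
   pthread_join(a, NULL);
   pthread_join(b, NULL);
   printf("lucky: no deadlock this run\n");
   return 0;
 }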

Why Do Deadlocks Occur?

One reason is that in large code bases, complex dependencies arise between components. Take the operating system, for example. The virtual memory system might need to access the file system in order to page in a block from disk; the file system might subsequently require a page of memory to read the block into and thus contact the virtual memory system. Such naturally-arising circular dependencies must be handled carefully to avoid deadlock.

Another reason stems from the nature of encapsulation: software developers are taught to hide the details of implementations. Unfortunately, such modularity does not mesh well with locking.

For example, take the Java Vector class and the method AddAll(). This routine would be called as follows:

Vector v1, v2;
v1.AddAll(v2);

Internally, because the method needs to be multi-thread safe, locks for both the vector being added to (v1) and the parameter (v2) need to be acquired. The routine acquires said locks in some arbitrary order (say v1 then v2) in order to add the contents of v2 to v1. If some other thread calls v2.AddAll(v1) at nearly the same time, we have the potential for deadlock, all in a way that is quite hidden from the calling application by encapsulation.

Conditions for Deadlock

Four conditions need to hold for a deadlock to occur:

• Mutual exclusion: Threads claim exclusive control of resources that they require (e.g., a thread grabs a lock).

• Hold-and-wait: Threads hold resources allocated to them (e.g., locks that they have already acquired) while waiting for additional resources (e.g., locks that they wish to acquire).

• No preemption: Resources (e.g., locks) cannot be forcibly removed from threads that are holding them.

• Circular wait: There exists a circular chain of threads such that each thread holds one or more resources (e.g., locks) that are being requested by the next thread in the chain (just as in the dining philosophers problem, where each philosopher holds one fork while waiting for the next).

If any of these four conditions are not met, deadlock cannot occur.

We first explore techniques to prevent deadlock; each of these strategies seeks to prevent one of the above conditions from arising, and thus each is one approach to handling the deadlock problem.

Circular Wait

Probably the most practical prevention technique (and certainly one that is frequently employed) is to write your locking code such that you never induce a circular wait. The most straightforward way to do that is to provide a total ordering on lock acquisition. For example, if there are only two locks in the system (L1 and L2), you can prevent deadlock by always acquiring L1 before L2. Such strict ordering ensures that no cyclical wait arises; hence, no deadlock.
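When locks are passed around (e.g., into a routine that must grab two of them), a simple way to realize such an ordering is to acquire locks by address. Here is a minimal sketch, assuming pthread mutexes and m1 != m2 (the routine name is illustrative):

 void do_something(pthread_mutex_t *m1, pthread_mutex_t *m2) {
   // Grab the lock at the lower address first; the acquisition order is
   // then fixed no matter which argument the caller passes first, so no
   // circular wait can arise between concurrent callers of this routine.
   if (m1 < m2) {
     pthread_mutex_lock(m1);
     pthread_mutex_lock(m2);
   } else {
     pthread_mutex_lock(m2);
     pthread_mutex_lock(m1);
   }
   // ... critical section ...
   pthread_mutex_unlock(m1);
   pthread_mutex_unlock(m2);
 }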

Finally, lock ordering requires a deep understanding of the code base, and how various routines are called; just one mistake could result in the “D” word (Deadlock)

Hold-and-wait

The hold-and-wait requirement for deadlock can be avoided by acquiring all locks at once, atomically. In practice, this could be achieved as follows:

pthread_mutex_lock(prevention); // begin acquisition
pthread_mutex_lock(L1);
pthread_mutex_lock(L2);
...
pthread_mutex_unlock(prevention); // end

By first grabbing the lock prevention, this code guarantees that no untimely thread switch can occur in the midst of lock acquisition and thus deadlock can once again be avoided.

This approach requires that any time any thread grabs a lock, it first acquires the global prevention lock. For example, if another thread was trying to grab locks L1 and L2 in a different order, it would be OK, because it would be holding the prevention lock while doing so.

Note that the solution is problematic for a number of reasons. As before, encapsulation works against us: when calling a routine, this approach requires us to know exactly which locks must be held and to acquire them ahead of time.

No Preemption

Because we generally view locks as held until unlock is called, multiple lock acquisition often gets us into trouble because when waiting for one lock we are holding another.

Many thread libraries provide a more flexible set of interfaces to help avoid this situation. Specifically, the routine pthread_mutex_trylock() either grabs the lock (if it is available) and returns success or returns an error code indicating the lock is held; in the latter case, you can try again later if you want to grab that lock.

Such an interface could be used as follows to build a deadlock-free, ordering-robust lock acquisition protocol:

 top:
   pthread_mutex_lock(L1);
   if (pthread_mutex_trylock(L2) != 0) {
     pthread_mutex_unlock(L1);
     goto top;
   }

Note that another thread could follow the same protocol but grab the locks in the other order (L2 then L1) and the program would still be deadlock free. One new problem does arise, however: livelock. Two threads could both be repeatedly attempting this sequence and repeatedly failing to acquire both locks; in this case, both threads are running through this code sequence over and over again without making progress. One solution is to add a random delay before looping back to try again, thus decreasing the odds of repeated interference among competing threads.

One final point about this solution: it again runs afoul of encapsulation. If one of these locks is buried in some routine that is getting called, the jump back to the beginning becomes more complex to implement. And if the code had acquired some resources (other than L1) along the way, it must make sure to carefully release them as well; for example, if after acquiring L1 the code had allocated some memory, it would have to release that memory upon failure to acquire L2, before jumping back to the top to try the entire sequence again.
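To make this concrete, here is a sketch of such a retry loop; the buffer, its SIZE, and the random back-off (which addresses the livelock concern above) are illustrative additions, not from the text:

 top:
   pthread_mutex_lock(L1);
   char *buf = malloc(SIZE);    // some resource acquired along the way
   if (pthread_mutex_trylock(L2) != 0) {
     free(buf);                 // carefully release side resources first
     pthread_mutex_unlock(L1);
     usleep(rand() % 1000);     // random back-off reduces livelock odds
     goto top;
   }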

Mutual Exclusion

The final prevention technique is to avoid the need for mutual exclusion at all: the idea is that one can design various data structures without locks entirely.

The idea behind these lock-free (and related wait-free) approaches is simple: using powerful hardware instructions, you can build data structures in a manner that does not require explicit locking.

As a simple example, let us assume we have a compare-and-swap instruction, which as you may recall is an atomic instruction provided by the hardware that does the following:

 int CompareAndSwap(int *address, int expected, int new) {
   if (*address == expected) {
     *address = new;
     return 1; // success
   }
   return 0; // failure
 }

Imagine we now wanted to atomically increment a value by a certain amount, using compare-and-swap. We could do so with the following simple function:

 void AtomicIncrement(int *value, int amount) {
   int old; // declared outside the loop so the while condition can see it
   do {
     old = *value; // snapshot the current value
   } while (CompareAndSwap(value, old, old + amount) == 0);
 }

Instead of acquiring a lock, doing the update, and then releasing it, we have instead built an approach that repeatedly tries to update the value to the new amount and uses the compare-and-swap to do so.

No lock is acquired, and no deadlock can arise (though livelock is still a possibility).

Let us consider a slightly more complex example: list insertion. Here is code that inserts at the head of a list:

 void insert(int value) {
   node_t *n = malloc(sizeof(node_t));
   assert(n != NULL);
   n->value = value;
   n->next = head;
   head = n;
 }

This code performs a simple insertion, but if called by multiple threads at the “same time,” it has a race condition. Of course, we could solve this by surrounding the code with a lock acquire and release:

void insert(int value) {
  node_t *n = malloc(sizeof(node_t));
  assert(n != NULL);
  n->value = value;
  pthread_mutex_lock(listlock);   // begin critical section
  n->next = head;
  head = n;
  pthread_mutex_unlock(listlock); // end critical section
}

In this solution, we are using locks in the traditional manner. Instead, let us try to perform this insertion in a lock-free manner, simply using the compare-and-swap instruction. Here is one possible approach:

 void insert(int value) {
   node_t *n = malloc(sizeof(node_t));
   assert(n != NULL);
   n->value = value;
   do {
     n->next = head;
   } while (CompareAndSwap(&head, n->next, n) == 0);
 }

The code here updates the next pointer to point to the current head, and then tries to swap the newly created node into position as the new head of the list. However, the swap will fail if some other thread swapped in a new head in the meanwhile, causing this thread to retry with the new head.
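One subtlety: the CompareAndSwap() sketched earlier operates on an int, while head is a pointer, so real C code needs a pointer-width CAS. Here is one way the same insert might look with C11 atomics (this rendering is my own sketch, not from the text):

 #include <stdatomic.h>
 #include <stdlib.h>
 #include <assert.h>

 typedef struct node { int value; struct node *next; } node_t;
 _Atomic(node_t *) head;

 void insert(int value) {
   node_t *n = malloc(sizeof(node_t));
   assert(n != NULL);
   n->value = value;
   node_t *old = atomic_load(&head);
   do {
     n->next = old; // link to the head we last observed
   } while (!atomic_compare_exchange_weak(&head, &old, n));
   // on failure, atomic_compare_exchange_weak refreshes old with the
   // current head, and the loop retries with that new value
 }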

Of course, building a useful list requires more than just a list insert, and not surprisingly building a list that you can insert into, delete from, and perform lookups on in a lock-free manner is non-trivial.

Deadlock Avoidance via Scheduling

Instead of deadlock prevention, in some scenarios deadlock avoidance is preferable. Avoidance requires some global knowledge of which locks various threads might grab during their execution, and then schedules those threads so as to guarantee that no deadlock can occur.

For example, assume we have two processors and four threads which must be scheduled upon them. Assume further we know that Thread 1 (T1) grabs locks L1 and L2 (in some order, at some point during its execution), T2 grabs L1 and L2 as well, T3 grabs just L2, and T4 grabs no locks at all. We can show these lock acquisition demands of the threads in tabular form:

      T1   T2   T3   T4
 L1   yes  yes  no   no
 L2   yes  yes  yes  no

A smart scheduler could thus compute that as long as T1 and T2 are not run at the same time, no deadlock could ever arise. Here is one such schedule:

 CPU 1: T3, then T4
 CPU 2: T1, then T2

Note that it is OK for (T3 and T1) or (T3 and T2) to overlap: even though T3 grabs lock L2, it can never cause a deadlock by running concurrently with other threads, because it only grabs one lock.

Unfortunately, such scheduling-based approaches are only useful in very limited environments, for example, in an embedded system where one has full knowledge of the entire set of tasks that must be run and the locks that they need. Further, such approaches can limit concurrency, as conservatively keeping potentially-conflicting threads apart forgoes parallelism that might otherwise be safe. Thus, avoidance of deadlock via scheduling is not a widely used general-purpose solution.

Detect and Recover

One final general strategy is to allow deadlocks to occasionally occur, and then take some action once such a deadlock has been detected.

Many database systems employ deadlock detection and recovery techniques. A deadlock detector runs periodically, building a resource graph and checking it for cycles. In the event of a cycle (deadlock), the system needs to be restarted.
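As a concrete illustration of the detection step, here is a minimal sketch of cycle detection via depth-first search over a wait-for graph; the fixed-size adjacency matrix and names are illustrative scaffolding (a real system would build the graph from its lock and transaction tables):

 #include <stdbool.h>

 #define MAX_THREADS 16

 int waits_for[MAX_THREADS][MAX_THREADS]; // waits_for[i][j]: i waits on a lock j holds
 int state[MAX_THREADS];                  // 0 = unvisited, 1 = on DFS path, 2 = done

 static bool dfs(int t, int n) {
   state[t] = 1;                          // t is on the current path
   for (int u = 0; u < n; u++) {
     if (!waits_for[t][u]) continue;
     if (state[u] == 1) return true;      // back edge: a cycle, hence deadlock
     if (state[u] == 0 && dfs(u, n)) return true;
   }
   state[t] = 2;
   return false;
 }

 // One-shot check over n threads; reset state[] before checking again.
 bool deadlock_detected(int n) {
   for (int t = 0; t < n; t++)
     if (state[t] == 0 && dfs(t, n)) return true;
   return false;
 }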

References

Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau, Operating Systems: Three Easy Pieces.