Concurrency Control

Paul Krzyzanowski

October 2015


When discussing transactions, we alluded to schedules, or valid orders of execution. Basically, if we have two transactions executing concurrently, we cannot have one of them use the intermediate (pre-committed) results of the other. This enforces the “I” (Isolated) part of ACID semantics. We can play it safe and use mutual exclusion on the transactions themselves to ensure that only one transaction is executing at any time. However, this is usually overkill and does not allow us to take advantage of the concurrency that we can get in distributed systems. What we would really like is to allow multiple transactions to execute simultaneously while keeping them out of each other’s way and ensuring serializability. This is called concurrency control.


One mechanism that we can use to serialize transactions is grabbing an exclusive lock on a resource. A process will lock any data that it needs to use within its transaction. Resource locking in a distributed system can be implemented using mutual exclusion algorithms. A common approach is to use a lock manager, which is essentially the same as the centralized implementation for mutual exclusion. One process serves as a lock manager. Processes request a lock for a resource (e.g., a file or a shared data object) and then they either are granted a lock or wait for it to be granted when another process releases it.
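A centralized lock manager of this kind can be sketched in a few lines. The following is a minimal, illustrative sketch (the class and method names are assumptions, not part of any particular system): requesters block until the resource’s exclusive lock is free.

```python
import threading

# Minimal sketch of a centralized lock manager. Each resource has at most
# one exclusive holder; other requesters wait until the holder releases it.
class LockManager:
    def __init__(self):
        self._mutex = threading.Lock()
        self._cond = threading.Condition(self._mutex)
        self._holder = {}  # resource -> transaction id

    def acquire(self, txn, resource):
        with self._cond:
            # Wait until no other transaction holds the resource.
            while self._holder.get(resource) not in (None, txn):
                self._cond.wait()
            self._holder[resource] = txn

    def release(self, txn, resource):
        with self._cond:
            if self._holder.get(resource) == txn:
                del self._holder[resource]
                self._cond.notify_all()  # wake any waiting requesters

lm = LockManager()
lm.acquire("T1", "file-A")   # T1 now holds the lock on file-A
lm.release("T1", "file-A")   # lock is free again
```

In a real distributed system the `acquire` and `release` calls would arrive as messages at the lock-manager process, but the grant/wait logic is the same.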

Two-phase locking

Getting and releasing locks precisely can be tricky. Done improperly, it can lead to inconsistency and/or deadlocks. We need to ensure that a transaction can always commit without violating the serializability invariant. For transactions to be serial, all access to data must be serialized with respect to accesses by other transactions. To ensure that conflicting operations of multiple transactions are executed in the same order, a restriction is imposed: a transaction is not allowed to obtain new locks once it has released a lock. This restriction is called two-phase locking. The first phase is known as the growing phase, in which a transaction acquires all the locks it needs. The second phase is known as the shrinking phase, where the process releases the locks. If a process fails to acquire all the locks during the first phase, then it is obligated to release all of them, wait, and start over. It has been proved (Eswaran, et al., 1976) that if all transactions use two-phase locking, then all schedules formed by interleaving them are serializable.
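The two-phase rule itself is simple to enforce mechanically. Here is a minimal, single-process sketch (illustrative only; it tracks phases rather than implementing a real lock manager): once a transaction releases any lock, it enters the shrinking phase and any further acquisition is refused.

```python
# Sketch of the two-phase locking rule: a transaction may acquire locks
# only during its growing phase; the first release moves it permanently
# into the shrinking phase.
class TwoPhaseTxn:
    def __init__(self):
        self.held = set()
        self.shrinking = False

    def lock(self, resource):
        if self.shrinking:
            raise RuntimeError("two-phase violation: lock after unlock")
        self.held.add(resource)

    def unlock(self, resource):
        self.shrinking = True          # shrinking phase begins
        self.held.discard(resource)

t = TwoPhaseTxn()
t.lock("x")
t.lock("y")          # growing phase: acquisitions allowed
t.unlock("x")        # first release: shrinking phase begins
try:
    t.lock("z")      # refused: no growing after shrinking
except RuntimeError as e:
    print(e)
```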

Strict two-phase locking

A risk with two-phase locking is that another transaction may access data that was modified by a transaction that has not yet committed: the shrinking phase, where locks on resources are released, happens before the transaction commits. If this happens, any transactions that accessed that data have to be aborted since the data they accessed will be rolled back to its previous state. This is a phenomenon known as cascading aborts.

To ensure that a transaction does not access data until another transaction that was manipulating the data has either committed or aborted, locks may be held until the transaction commits or aborts. This is known as strict two-phase locking. This is similar to two-phase locking except that in this case, the shrinking phase takes place during the commit or abort process. One effect of strict two-phase locking is that by placing the second phase at the end of a transaction, all lock acquisitions and releases can be handled by the system without the transaction’s knowledge.

Locking granularity

A typical system will have many objects, and a transaction will usually access only a small amount of data at any given time; it will frequently be the case that one transaction does not clash with others. The granularity of locking affects the amount of concurrency we can achieve: with a finer granularity, locking smaller objects or pieces of objects, we can generally achieve higher concurrency.

For example, suppose that all of a bank’s customers are locked for any transaction that needs to modify a single customer datum. Concurrency is severely limited because any other transactions that need to access any customer data will be blocked. If, however, we use a customer record as the granularity of locking, transactions that access different customer records will be capable of running concurrently.

Read and write locks

If a process imposes a read lock on a resource, other processes will still be able to request read locks on that same resource. However, a request for a write lock would fail or be blocked until all read locks are released[1]. If a process imposes a write lock on a resource, then neither read nor write locks will be granted until that process releases the resource.

Locking can be optimized to yield better resource usage by distinguishing read locks from write locks. To support this, we introduce the use of two locks per object: read locks and write locks. Read locks are also known as shared locks (since they can be shared by multiple transactions). If a transaction needs to read an object, it will request a read lock from the lock manager. If a transaction needs to modify an object, it will request a write lock from the lock manager. If the lock manager cannot grant a lock, then the transaction will wait until it can get the lock (after the transaction holding the lock commits or aborts)[1]. To summarize lock granting:

If a transaction has:    another transaction may obtain:
no locks                 read lock or write lock
read lock                read lock (it must wait for a write lock)
write lock               it must wait for a read or write lock
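The grant table above can be expressed as a small compatibility check. This is a sketch (the function name and mode strings are illustrative):

```python
# Sketch of the read/write lock compatibility table: read locks are
# shared, a write lock excludes everything.
def may_grant(held_modes, requested):
    """held_modes: set of lock modes other transactions hold on the object."""
    if not held_modes:
        return True                # no locks: grant anything
    if "write" in held_modes:
        return False               # a write lock blocks reads and writes
    return requested == "read"     # read locks are shared; writers wait

assert may_grant(set(), "write")           # no locks: write lock granted
assert may_grant({"read"}, "read")         # read locks are compatible
assert not may_grant({"read"}, "write")    # writer waits for readers
assert not may_grant({"write"}, "read")    # reader waits for the writer
```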

Two-version locking

Two-version locking is a somewhat optimistic concurrency control scheme that allows one transaction to write tentative versions of objects while other transactions read from committed versions of the same objects. Read operations only wait if another transaction is currently committing the same object. This scheme allows more concurrency than read-write locks, but writing transactions risk a wait (or rejection) when they attempt to commit. Transactions cannot commit their write operations immediately if other uncommitted transactions have read the same objects. Transactions that request to commit in this situation have to wait until the reading transactions have completed.

Two-version locking requires three types of locks: read, write, and commit locks. Before an object is read, a transaction must obtain a read lock. Before an object is written, the transaction must obtain a write lock (the same as with two-phase locking). Neither of these locks will be granted if there is a commit lock on the object. When the transaction is ready to commit:

  • All of the transaction’s write locks are changed to commit locks.
  • If any objects used by the transaction have outstanding read locks, the transaction must wait until the transactions that set these locks have completed and those locks are released.
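The compatibility rules for the three lock types can be sketched the same way as the read/write table earlier. This is an illustrative sketch of the rules just described, not a complete implementation:

```python
# Sketch of two-version locking compatibility: reads see the committed
# version, so a read lock is compatible with a write lock; a commit lock
# blocks everything; a commit request must wait for outstanding readers.
def may_grant_2v(held_modes, requested):
    if "commit" in held_modes:
        return False                      # neither reads nor writes proceed during commit
    if requested == "read":
        return True                       # readers use the committed version
    if requested == "write":
        return "write" not in held_modes  # one tentative version at a time
    if requested == "commit":
        return "read" not in held_modes   # wait for outstanding readers
    raise ValueError("unknown lock mode")

assert may_grant_2v({"write"}, "read")       # read proceeds despite a writer
assert not may_grant_2v({"read"}, "commit")  # commit waits for readers
assert not may_grant_2v({"commit"}, "read")  # reads wait during commit
```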

If we compare the performance difference between two-version locking and strict two-phase locking that uses read/write locks, we find:

  • Read operations in two-version locking are delayed only while transactions are being committed rather than during the entire execution of transactions. The commit protocol usually takes far less time than the time to perform the transaction.

  • Operations of one transaction in two-version locking can cause a delay in the committing of other transactions.

Optimistic concurrency control

Locking is not without problems. Locks have an overhead associated with maintaining and checking them and having to run a lock manager. Even transactions performing read-only operations on objects must request locks. The use of locks can result in deadlock. We will need to have software in place to detect or avoid deadlock. Locks can also decrease the potential concurrency in a system by having a transaction hold locks for the duration of the transaction until a commit or abort takes place. Concurrency is reduced because the transactions hold on to locks longer than needed (as in strict two-phase locking). An alternative proposed by Kung and Robinson in 1981 is optimistic concurrency control, which tells the transaction to just go ahead and do what it has to do without worrying about what someone else is doing. It is based on the observation that, in most applications, the chance of two transactions accessing the same object is low. So why introduce overhead? When a conflict does arise, then the system will have to deal with it.

We will allow transactions to proceed as if there were no possibility of conflict with other transactions: a transaction does not have to obtain or check for locks. This is the working phase. Each transaction creates a tentative version of the objects it updates: a copy of the most recently committed version. Write operations record new values as tentative values.

When a transaction is ready to commit, a validation is performed on all the data items to see whether the data conflicts with operations of other transactions. This is the validation phase. If the validation fails, then the transaction will have to be aborted and restarted later (again, optimistically we hope these conflicts will be few and far between). If it succeeds, then the tentative versions are made permanent. This is the update phase. Optimistic control is clearly deadlock free (no locking or waiting on resources) and allows for maximum parallelism (since no process has to wait for a lock, all can execute in parallel).
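One common way to perform the validation phase is backward validation, which is how Kung and Robinson describe it: a committing transaction is valid only if its read set does not overlap the write set of any transaction that committed during its working phase. This is a minimal sketch of that check (the function and parameter names are illustrative):

```python
# Sketch of backward validation for optimistic concurrency control.
# read_set: objects this transaction read during its working phase.
# overlapping_write_sets: write sets of transactions that committed
# while this transaction was running.
def validate(read_set, overlapping_write_sets):
    for ws in overlapping_write_sets:
        if read_set & ws:
            return False   # conflict: this transaction read stale data; abort
    return True            # no conflict: make tentative versions permanent

assert validate({"a", "b"}, [{"c"}, {"d"}])   # disjoint: commit succeeds
assert not validate({"a", "b"}, [{"b"}])      # overlap on "b": abort
```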

Timestamp ordering

Another approach to concurrency control is the use of timestamp ordering, developed by Reed in 1983. In this algorithm, we assign a timestamp to a transaction when it begins. The timestamp has to be unique with respect to the timestamps of other transactions (this is easily accomplished; for example, Lamport’s algorithm can be used). Every object in the system has a read and a write timestamp associated with it, two timestamps per object, identifying which committed transaction last read it and which committed transaction last wrote it. Note that the timestamps are obtained from the transaction timestamp: the start of that transaction.

Normally, when a process tries to access a file, the file’s read and write timestamps will be older than the current transaction’s. This implies good ordering. If this is not the case, and the ordering is incorrect, this means that a transaction that started later than the current one accessed the file and committed. In this case the current transaction is too late and has to abort: the rule here is that the lower numbered transaction always goes first.

The rule of timestamp ordering is:

  • If a transaction wants to write an object, it compares its own timestamp with the object’s read and write timestamps. If the object’s timestamps are older, then we have good ordering.

  • If a transaction wants to read an object, it compares its own timestamp with the object’s write timestamp. If the object’s write timestamp is older than the current transaction, then the ordering is good.

  • If a transaction attempts to access an object and does not detect proper ordering, then we have bad ordering. The transaction is aborted and restarted (improper ordering means that a newer transaction came in and modified data before the older one could access the data or read data that the older one wants to modify).
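The three rules reduce to a pair of timestamp comparisons. Here is a minimal sketch of those checks, assuming timestamps are simple comparable numbers (the function names are illustrative):

```python
# Sketch of the timestamp-ordering checks. Each object carries the
# timestamps of the last committed transaction that read it and the
# last committed transaction that wrote it.
def can_write(txn_ts, obj_read_ts, obj_write_ts):
    # Good ordering only if no later transaction has read or written the object.
    return txn_ts > obj_read_ts and txn_ts > obj_write_ts

def can_read(txn_ts, obj_write_ts):
    # Good ordering only if no later transaction has written the object.
    return txn_ts > obj_write_ts

assert can_write(5, 3, 4)        # both timestamps older: good ordering
assert not can_write(5, 7, 4)    # a later transaction already read it: abort
assert can_read(5, 4)            # last writer is older: good ordering
assert not can_read(5, 9)        # a later transaction already wrote it: abort
```

A `False` result corresponds to bad ordering: the transaction aborts, restarts, and obtains a new (larger) timestamp.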

For example, suppose there are three transactions: a, b, and c. Transaction a ran a long time ago and used every object needed by b and c. Transactions b and c start concurrently with b receiving a smaller (earlier) timestamp than c (Tb &lt; Tc).

Case 1 (proper ordering):
Transaction b writes a file (assume the read timestamp of the file is TR and the write timestamp is TW). Unless c has already committed, TR and TW are Ta, which is less than Tb. The write is accepted, performed tentatively, and made permanent on commit. Tb is recorded as the tentative TW.
Case 2 (improper ordering):
If c has either read or written the object and committed, then the object’s timestamp(s) has been modified to Tc. In this case, when b tries to access the object and compares its timestamp with that of the object, it sees that ordering is incorrect because b is an older transaction trying to modify data committed by a younger transaction (c). Transaction b must be aborted. It can restart, get a new timestamp, and try again.
Case 3 (delaying execution):
Suppose transaction b has written the object but not yet committed. The write timestamp of the object is a tentative Tb. If c now wants access to the object, we have a situation in which ordering is correct but the timestamp is in a tentative state. Transaction c must now wait for b to finish before it can access the object.

Concurrency control with timestamps is different than locking in one important way. When a transaction encounters a later timestamp, it aborts. With locking, it would either wait or proceed immediately.

This document is modified from an original version published on November 4, 2012.

  1. This is similar to DFS and its use of tokens in granting file access permissions.  ↩