Commit Protocols | Database Management System

Commit Protocols : To ensure the property of atomicity, a distributed transaction must either COMMIT at all participating sites or ABORT at all sites.


Two-Phase Commit (2PC)

Consider a transaction T initiated at site Si, which has transaction coordinator Ci.

Phase 1.

  • Ci adds the record <prepare T> to the log, and forces the log onto stable storage.
  • Ci sends a 'prepare T' message to all sites at which T is executing.
  • On receiving a 'prepare T' message, the transaction manager at a participating site determines whether it is willing to COMMIT its portion of T or not. If the answer is NO, it adds a record <no T> to the log and then responds by sending an 'abort T' message to Ci. However, if the answer is YES, it adds a record <ready T> to the log, forces the log (with all log records pertaining to T) onto stable storage, and responds with a 'ready T' message to Ci.
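The participant's side of Phase 1 can be sketched as follows. This is an illustrative model, not a real implementation: the `Participant` class, its `willing` flag, and the in-memory `log` list (standing in for stable storage) are all assumptions made for the sketch.

```python
class Participant:
    """Hypothetical model of a participating site's Phase 1 behaviour in 2PC."""

    def __init__(self, willing_to_commit):
        self.willing = willing_to_commit
        self.log = []  # stands in for the site's log on stable storage

    def on_prepare(self, t):
        """Handle a 'prepare T' message from the coordinator Ci."""
        if self.willing:
            # Log <ready T>, force it to stable storage, reply 'ready T'.
            self.log.append(f"<ready {t}>")
            return f"ready {t}"
        # Log <no T> and reply 'abort T'.
        self.log.append(f"<no {t}>")
        return f"abort {t}"
```

Forcing `<ready T>` to stable storage before replying is essential: once a site says 'ready', it must be able to honour a later COMMIT decision even if it crashes in between.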

Phase 2.

When Ci receives responses to the 'prepare T' message from all participating sites, or when a pre-specified time interval has elapsed since sending the 'prepare T' message (i.e., a time-out occurs), Ci determines whether T is to be COMMITTED or ABORTED. The decision is made as follows:

Transaction T is to be COMMITTED if a 'ready T' message has been received from ALL sites participating in the execution of T; otherwise, transaction T is to be ABORTED.

Depending on the verdict of Ci, a record <commit T> or <abort T> is added to the log and the log is forced onto stable storage. At this point, the fate of the transaction is sealed.

The Coordinator then sends a 'commit T' or 'abort T' message to all participating sites.

When a participating site receives this message, it records the message in the log. Each participating site follows the decision of Ci to either COMMIT or ABORT.

In some implementations, each participating site then sends an 'acknowledge T' message to Ci. When Ci has received an 'acknowledge T' message from all participating sites, it adds a record <complete T> to the log.
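The coordinator's Phase 2 decision rule can be captured in a few lines. This is a sketch under assumptions: `replies` is the list of replies received before the time-out, and the function name is invented for illustration.

```python
def coordinator_decision(replies, num_participants):
    """Ci commits T only if a 'ready' reply arrived from ALL participating
    sites before the time-out; any missing or 'abort' reply forces an abort."""
    if len(replies) == num_participants and all(r == "ready" for r in replies):
        return "commit"  # Ci logs <commit T> and forces the log to stable storage
    return "abort"       # a 'no'/missing reply or a time-out leads to <abort T>
```

Note that a time-out is treated exactly like an abort vote: fewer replies than participants means the decision must be ABORT.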

Handling of Failures

Failure of a participating site Sk.

(a) If a site fails before responding to the 'prepare T' message, Ci assumes an 'abort T' response from that site and thus decides to abort T.

(b) If a site fails after Ci has received 'ready T' from that site, Ci executes the COMMIT protocol normally. When the site Sk recovers later on, it must examine its log to determine the fate of those transactions that were in the midst of execution when the failure occurred. The possible situations for a transaction T are:

  1. The log contains a <commit T> record. In this case, Sk executes redo(T), i.e., sets the values of all data items updated by transaction T to their new values.
  2. The log contains an <abort T> record. In this case, Sk executes undo(T), i.e., restores the values of all data items updated by transaction T to their old values.
  3. The log contains a <ready T> record. In this case, Sk must consult Ci to determine the fate of T, i.e., whether T committed or aborted. If Ci is up, it will notify Sk of the fate of T, and Sk will follow the decision. However, if Ci is down, Sk will query the other sites about the fate of T by sending a 'query-status T' message. All sites that participated in the execution of T will have the necessary information about the fate of T. All such sites that are up will respond to the query of Sk, and Sk will act accordingly. If no site with the desired information is up, Sk can neither commit nor abort T. In this case, the decision regarding T is postponed, and Sk will keep periodically sending the 'query-status T' message to all sites until it obtains the necessary information to recover T.
  4. The log contains no record (commit, ready, or abort) regarding T. This implies that Sk failed before responding to the 'prepare T' message. So Ci must have aborted T, and the 'abort T' message might have been lost. Thus, in this case, Sk executes undo(T).
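The four recovery cases above amount to inspecting the local log in a fixed order of precedence. The sketch below assumes the log is a simple list of record strings; the function name and representation are illustrative.

```python
def recover(site_log, t):
    """Decide Sk's recovery action for transaction t from its local log,
    following the four cases above, in order of precedence."""
    if f"<commit {t}>" in site_log:
        return "redo"     # case 1: reinstall the new values written by t
    if f"<abort {t}>" in site_log:
        return "undo"     # case 2: restore the old values
    if f"<ready {t}>" in site_log:
        return "consult"  # case 3: ask Ci (or the other sites) for the fate of t
    return "undo"         # case 4: no record at all, so t must have been aborted
```

Case 3 is the only one that requires communication; the other three can be resolved from the local log alone.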

Failure of the Coordinator Ci

If the Coordinator fails in the midst of executing the COMMIT protocol for T, then the participating sites must decide the fate of T. However, in situations when the participating sites cannot decide the fate of T, they must wait for the recovery of Ci. The various possibilities are:

  • If an active site contains a <commit T> record in its log, then T must be committed.
  • If an active site contains an <abort T> record in its log, then T must be aborted.
  • If some active site does not contain a <ready T> record in its log, then Ci cannot have decided to commit T. So, it can be decided to abort T.
  • If none of the above cases holds, then all participating sites have a <ready T> record in their respective logs, but no participating site has a <commit T> or <abort T> record in its log. This implies that Ci failed after sending the 'prepare T' message but before deciding to commit or abort T. In this case, the participating sites must wait for the recovery of Ci. If some sites are holding locks on data items, they may continue to hold those locks for a long time, until Ci recovers and the final decision to commit or abort T is implemented. This is called the Blocking problem, since T is blocked waiting for the recovery of Ci.
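The surviving sites' decision rule can be sketched directly from the four bullets above. The representation (one list of log-record strings per active site) and the function name are assumptions for illustration.

```python
def decide_after_coordinator_failure(active_logs, t):
    """Decision by the active participating sites when Ci fails.
    active_logs: one list of log records per active site."""
    if any(f"<commit {t}>" in log for log in active_logs):
        return "commit"  # some site already saw Ci's commit decision
    if any(f"<abort {t}>" in log for log in active_logs):
        return "abort"   # some site already saw Ci's abort decision
    if any(f"<ready {t}>" not in log for log in active_logs):
        return "abort"   # a site never voted ready, so Ci cannot have committed
    return "block"       # every active site is ready: must wait for Ci (blocking)
```

The last branch is precisely the Blocking problem: no surviving site has enough information to decide, so T remains in limbo until Ci recovers.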

Network Partition

There are two possibilities:

(a) The Coordinator Ci and the participants of T remain in one partition. In this case, the failure has no effect.

(b) The Coordinator and the participants of T belong to several partitions. To the sites that are not in the partition of Ci, it appears as if Ci has failed; this case has been discussed above. The Coordinator views those sites that are not in its partition as if they have failed; this case has also been discussed above.

The main limitation of 2PC is the Blocking problem.

Three-Phase Commit (3PC)

It is an extension of 2PC that resolves the Blocking problem. In this protocol, multiple sites (say, k sites) are involved in the decision to commit or abort T. The Coordinator, instead of directly noting the COMMIT decision in its persistent storage, first ensures that at least k other sites are notified of its intention to COMMIT the transaction. If the Coordinator fails, the surviving sites first elect a new Coordinator, which checks the status at the other sites. If the failed Coordinator had decided to COMMIT, the decision would be known to the surviving sites out of the set of k sites whom the failed Coordinator had informed before failing. The new Coordinator restarts the third phase of the protocol if some surviving site confirms that the failed Coordinator had decided to commit T; otherwise it aborts T. This obviates the Blocking problem, unless the complete set of k sites (to whom the failed Coordinator communicated the decision to commit T) fails along with the Coordinator. So, unless k sites fail (which may occur in the case of a network partition), the Blocking problem will not occur.
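The new Coordinator's decision after a failure can be sketched as below. The record name `<pre-commit T>` for "intention to commit" is an assumption for this sketch (the text above does not name the record), as is the function itself.

```python
def new_coordinator_decision(surviving_logs, t):
    """After electing a new coordinator in 3PC: restart the third phase with
    COMMIT if any surviving site was told of the intention to commit t,
    otherwise abort t.  <pre-commit T> is an assumed record name."""
    if any(f"<pre-commit {t}>" in log for log in surviving_logs):
        return "commit"  # the failed coordinator's intention is known
    return "abort"       # no survivor saw the intention, so t is aborted
```

Contrast this with the 2PC case: here the "all ready, nothing else" situation resolves to abort rather than to blocking, because the intention to commit is guaranteed to survive unless all k notified sites fail too.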

Locking Protocols

In distributed database systems, there are two schemes of locking protocols:

(a) Single Centralized Lock Manager

The system maintains a single centralized lock manager, which resides in a single chosen site, say Si. All lock and unlock requests are made to site Si. Whenever a transaction needs to lock a data item, it sends a lock request to site Si. When a lock request is received at Si, the lock manager checks whether the requested lock can be granted immediately; if not, the request is queued. Whenever the request is granted, a message is sent to the originator of the request. After a request to access a data item is granted, the access proceeds as follows:

  1. For Read, the data item can be read from any of the sites, where replica of the data item exists.
  2. For Write, all replicas of the data item have to be updated.

For unlock, only a single message needs to be sent from the site releasing the lock to site Si.
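A minimal sketch of the centralized lock manager at site Si, assuming only exclusive locks and FIFO queuing of waiting requests (both simplifications; the class and method names are invented for illustration):

```python
from collections import deque

class CentralLockManager:
    """Hypothetical sketch of the single centralized lock manager at site Si.
    Models exclusive locks only; waiting requests are queued in FIFO order."""

    def __init__(self):
        self.holder = {}  # data item -> transaction currently holding its lock
        self.queue = {}   # data item -> deque of transactions waiting for it

    def lock(self, item, txn):
        if item not in self.holder:       # lock is free: grant immediately
            self.holder[item] = txn
            return "granted"
        self.queue.setdefault(item, deque()).append(txn)
        return "queued"                   # request waits until the lock is released

    def unlock(self, item, txn):
        assert self.holder.get(item) == txn, "only the holder may unlock"
        waiters = self.queue.get(item)
        if waiters:                       # hand the lock to the next waiter
            self.holder[item] = waiters.popleft()
        else:
            del self.holder[item]
```

Because every `lock`/`unlock` call goes through this one object, deadlock detection can inspect a single wait-for graph, which is why the centralized scheme makes deadlock handling simple.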


Advantages:

(a) Simple implementation.

(b) Simple deadlock handling, since all data items are centrally controlled.


Disadvantages:

(a) Site Si becomes a bottleneck, since all requests have to be processed there.

(b) Vulnerability: If site Si fails, the lock manager ceases to exist, thus requiring a mechanism for backup and recovery.

(b) Distributed Lock Manager

The lock manager function is distributed over several sites. Each site administers the lock management of the data items stored at that site. Since the lock manager function is distributed over a large number of sites, this approach is free of the limitations of the centralized approach. However, deadlock handling is complex in this case.