
Difference between Sequential Consistency, Serializability and Linearizability

September 7, 2015

I have been watching distributed systems videos from dotScale conference lately. I got tripped up when I watched Neha Narula’s talk and right afterwards watched Kyle Kingsbury’s talk. Later on during the day, I was thinking about what they were saying and realized that I got linearizability and serializability confused when I initially watched the videos. Then I read Kyle Kingsbury’s post on strong consistency models and got even more confused about what serializable consistency means exactly, since it sounds like something completely different from the serializability Neha was talking about. Looks like this topic has been confused many times.

Serializability is sometimes confused with Sequential Consistency, for instance, in the Wikipedia page on linearizability. I was able to locate a book by Maurice Herlihy, the same author the Wikipedia article cites, along with Wing, as having introduced linearizability. Similarly to the Wikipedia article, Herlihy introduces the concept of a history of an execution, where a history is a finite sequence of method invocation and response events. A method call is a pair consisting of an invocation and its matching response event. A sequential history is one that starts with a method invocation and in which each method invocation is immediately followed by its matching response event.
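To make these definitions concrete, here is a small Python sketch (my own, not code from Herlihy's book) that models a history as a list of invocation and response events and checks whether it is sequential. The event encoding and the call identifiers are simplifications I chose for illustration.

```python
from collections import namedtuple

# An event is an invocation or a response; 'call' identifies the method call it
# belongs to, so a response matches the invocation carrying the same call id.
Event = namedtuple("Event", ["kind", "call"])   # kind is "inv" or "res"

def is_sequential(history):
    """True if each invocation is immediately followed by its matching response."""
    if len(history) % 2 != 0:   # a pending call at the end is treated as non-sequential here
        return False
    for inv, res in zip(history[0::2], history[1::2]):
        if inv.kind != "inv" or res.kind != "res" or inv.call != res.call:
            return False
    return True

# A's invocation is immediately followed by A's response, then B's pair: sequential.
print(is_sequential([Event("inv", "A.write(1)"), Event("res", "A.write(1)"),
                     Event("inv", "B.write(2)"), Event("res", "B.write(2)")]))  # True

# A's invocation is followed by B's invocation instead of A's response: not sequential.
print(is_sequential([Event("inv", "A.write(1)"), Event("inv", "B.write(2)"),
                     Event("res", "A.write(1)"), Event("res", "B.write(2)")]))  # False
```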

So for instance, if we have a register that two separate hosts, A and B, are concurrently reading from and writing to, with the requests buffered in a lock-based queue, the calls might look like this:

[Figure: linearization_fig1]


This history is not sequential, since host A's invocation is not immediately followed by host A's response event. In Herlihy's book, The Art of Multiprocessor Programming, two principles are stated:

Principle 3.3.1 Method calls should appear to happen in a one-at-a-time sequential order.

Principle 3.4.1 Method calls should appear to take effect in program order.

Principles 3.3.1 and 3.4.1 essentially say that, from a single host's point of view, the values it writes should take effect in the sequential order of its program. So, for instance, in the example above, host A would expect to see the value 1 when in fact the value 2 is written; both principles are violated because the queue is not functioning properly, as it should have serviced A's request before B's. When both principles are preserved, the correctness property is called sequential consistency. The following diagram preserves sequential consistency:

[Figure: linearization_fig2]


The distinction here is that the actual execution of the write calls may happen after the response event; however, the writes still take effect in the correct program order with respect to each host and each register. In effect, the queue is functioning correctly since it services Host A before servicing Host B. The following diagram illustrates writing to and reading from two different registers, r1 and r2, which have two corresponding queues, q1 and q2:

[Figure: linearization_fig3]
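To make the queued setup concrete, here is a rough, single-threaded sketch (my own, not from the talks or from Herlihy's book) in which only the writes are buffered: a write is acknowledged as soon as it is enqueued, and it only takes effect on the register when the queue is drained later. The class and method names are mine.

```python
class QueuedRegister:
    """A register fronted by a FIFO queue of pending writes (illustrative only)."""

    def __init__(self):
        self.value = 0      # registers start out holding 0
        self.queue = []     # pending, not-yet-applied writes

    def write(self, value):
        self.queue.append(value)   # buffer the write...
        return "ok"                # ...and respond before it takes effect

    def read(self):
        return self.value          # pending writes are not visible yet

    def drain(self):
        while self.queue:
            self.value = self.queue.pop(0)   # apply buffered writes in FIFO order


r1, r2 = QueuedRegister(), QueuedRegister()   # registers with queues q1 and q2

# Host A: write (1) to r1, then read r2.  Host B: write (2) to r2, then read r1.
r1.write(1)           # A's write is acknowledged but still sitting in q1
r2.write(2)           # B's write is acknowledged but still sitting in q2
print(r2.read())      # A reads 0 from r2
print(r1.read())      # B reads 0 from r1
r1.drain(); r2.drain()
```

Here both hosts read stale zeros because both writes are still sitting in their queues; this is exactly the situation described next.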

Assuming that the registers are all initialized with the value 0, there are four possible sub-histories, two from each host's perspective: H1a and H2a for host A, and H1b and H2b for host B. I have written them out in the appendix. The case where H1a or H1b happens corresponds to both q1 and q2 still having the write requests enqueued and not yet executed. Even though H1a and H1b are each sequentially consistent on their own, their composition, H1ab, is not, since there is no single one-at-a-time ordering of the calls that satisfies Principle 3.4.1 for both hosts at once. H2ab, where H2a and H1b are combined, H3ab, where H1a and H2b are combined, and H4ab, where H2a and H2b are combined, are all sequentially consistent. In order for composition to be preserved, a stronger notion of consistency is defined, called linearizability, where Principle 3.4.1 is strengthened to

Principle 3.5.1 Each method call should appear to take effect instantaneously at some moment between its invocation and response.

So the requirement imposed now is that when a method invocation is received by the register, it must be executed immediately, before a response can be sent back; i.e., the response is not an acknowledgement that the method invocation was successfully scheduled, but that it was actually executed successfully. While that execution is in progress, no other method call may come in and interrupt it. Essentially, the method call diagram would appear as

[Figure: linearization_fig4]
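For contrast, here is a minimal sketch (again my own, not the book's code) of a register that obeys the stricter rule: the write is applied while the call is in flight, before the response is returned, and a lock prevents any other call from taking effect in the middle of it.

```python
import threading

class LinearizableRegister:
    """A register whose operations take effect before their responses are sent."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def write(self, value):
        with self._lock:
            self._value = value   # takes effect now, between invocation and response
        return "ok"               # the response means "executed", not "scheduled"

    def read(self):
        with self._lock:
            return self._value


r = LinearizableRegister()
r.write(1)
print(r.read())   # 1 -- a completed write is visible to every later read
```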


Now, H1a and H1b are impossible histories, since the execution happens right away and nothing is enqueued for later. The only possible history for Host A is H2a, and the only possible history for Host B is H2b. Their composition, H4ab, is both linearizable and sequentially consistent. This means that a linearizable system is always sequentially consistent, but a sequentially consistent system is not necessarily linearizable.

Now, as Peter Bailis mentioned in his post, serializability was used in database systems to refer to Isolation in ACID transactions. Since sequential consistency and linearizability are terms coming from distributed systems, it looks like serializability and sequential consistency are oftentimes taken to mean the same thing. As Kyle Kingsbury mentioned in his post, serializability is sometimes taken to mean linearizability, and its meaning is not always clear. For now, I will treat it as an even weaker consistency model than sequential consistency, but will keep in mind that the way it was defined originally may not be "consistent" with the way it is being used now.

Appendix


Sub-histories of Executions:

History   Outcome
H1a       A writes (1) to r1, A reads from r2 -> 0
H2a       A writes (1) to r1, A reads from r2 -> 2
H1b       B writes (2) to r2, B reads from r1 -> 0
H2b       B writes (2) to r2, B reads from r1 -> 1

Composition of Histories:

History   Outcome
H1ab      A writes (1) to r1, B writes (2) to r2, A reads from r2 -> 0, B reads from r1 -> 0
H2ab      A writes (1) to r1, B writes (2) to r2, A reads from r2 -> 2, B reads from r1 -> 0
H3ab      A writes (1) to r1, B writes (2) to r2, A reads from r2 -> 0, B reads from r1 -> 1
H4ab      A writes (1) to r1, B writes (2) to r2, A reads from r2 -> 2, B reads from r1 -> 1
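The claims about which compositions are sequentially consistent can be checked by brute force. The sketch below (my own, not from the post or the book) asks whether any interleaving of the two hosts' programs, preserving each host's program order and starting from registers holding 0, produces the observed reads.

```python
def interleavings(a, b):
    """Yield every merge of programs a and b that preserves each one's order."""
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def explains(order):
    """Run one interleaving on zeroed registers and check the expected reads."""
    regs = {"r1": 0, "r2": 0}
    for kind, reg, val in order:
        if kind == "write":
            regs[reg] = val
        elif regs[reg] != val:     # a read whose expected value does not match
            return False
    return True

def sequentially_consistent(prog_a, prog_b):
    return any(explains(order) for order in interleavings(prog_a, prog_b))

# H1ab: both hosts read 0, i.e. neither sees the other's write.
print(sequentially_consistent([("write", "r1", 1), ("read", "r2", 0)],
                              [("write", "r2", 2), ("read", "r1", 0)]))  # False

# H4ab: A reads 2 from r2 and B reads 1 from r1, i.e. both writes are visible.
print(sequentially_consistent([("write", "r1", 1), ("read", "r2", 2)],
                              [("write", "r2", 2), ("read", "r1", 1)]))  # True
```

H2ab and H3ab pass the same check, which matches the claim above that only H1ab fails to be sequentially consistent.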