Asynchronous Versus Synchronous: A Performance Perspective

I just had coffee from my favorite coffee shop around the corner. I’m such a regular there that the baristas know me: I can sit down after I order and do something else while they make my coffee, and they serve it to me when it’s ready. If I order from other branches of this coffee shop, I have to wait for my coffee in a corner, unable to do anything productive. So while I was comfortably seated waiting for my coffee, a thought came to me: this is a great example of synchronous versus asynchronous service invocation.

A synchronous service invocation is like buying your coffee and having to wait in the corner for the baristas to finish it before you can sit down and do something useful. An asynchronous service invocation is getting seated and doing something else while waiting for your coffee to be served. My instinct tells me that asynchronous invocation performs better than synchronous, especially when the service takes a long time to complete and the waiting time is better spent doing something else. But how can we show that this is the case?

We can model the synchronous and asynchronous invocations as Markov chains and compute a few metrics from our models. If you’re not familiar with this approach, you can refer to the article Modeling Queuing Systems Using Markov Chains.

First, we model the asynchronous invocation. We identify each state of our system by two slots, each containing a number. The first slot is the number of responses waiting for the client to process and the second slot is the number of requests waiting for the server to process. To simplify the modeling, we assume the following:

  1. The maximum number of requests that the server can handle at any given time is 2. This means that the client can no longer send a request when the second number is equal to 2.
  2. When the server responds, the first number increments by 1. Once the client receives a response, it stops whatever it is doing and processes that response. When the client finishes processing the response, the first number goes back down to zero. As a consequence, the maximum value of the first slot at any given time is 1.

With these assumptions, we can draw the Markov diagram of the asynchronous system:

Explanation of the Diagram

The system starts at state 00, where there are no requests and no responses. It transitions to state 01 when the client sends a request, which increments the server’s pending requests to 1. From this state, the system can transition to either state 02 or state 10. State 02 occurs when the client sends another request without waiting for a response. State 10 occurs when the server processes the request and returns a response, thereby decrementing its pending items and incrementing the client’s number of responses to process.

At state 02, the client can no longer send requests since the maximum number of requests the server can handle is 2. From this state, the system can only transition to state 11, where the server has processed one of the requests (decrementing the second slot from 2 to 1 and incrementing the first slot from 0 to 1).

State 11 can only transition to state 01, since the client stops whatever it is doing to process the single response it has received (decrementing the first slot from 1 to 0).

The numbers a, b, c are the rates at which the transitions occur. For example, the rate at which the system transitions from state 00 to state 01 is b.

In the remainder of this article, we assume that the client can send requests at a rate of 60 per unit time and can process responses at 60 per unit time. We also assume that the server can process requests at 30 per unit time. We therefore have the following:

a=60,  b=60,  c=30

Computing the Probabilities

At steady state, the flow going into each state is equal to the flow out of that state. For example, for state 00, the following is true:

\displaystyle bP_{00} = aP_{10}

Doing this for all states, we get the following balance equations:

\begin{array}{rlr}  \displaystyle  bP_{00} &= aP_{10} & (1)\\  bP_{00} + a P_{11} &= bP_{01} + cP_{01} & (2)\\  bP_{01} &= cP_{02} & (3)\\  aP_{10} &= cP_{01} & (4)\\  aP_{11} &= cP_{02} & (5)  \end{array}

Since the sum of the probabilities should be equal to 1, we have our last equation:

P_{00} + P_{01} + P_{02} + P_{10} + P_{11} = 1

Since we have 6 equations in 5 unknowns, one of the equations above must be redundant. In fact, you can show that equation 2 is implied by the others.

We can form the matrix equation of the system of equations above and solve for the probabilities:

\begin{bmatrix}  -b & 0 & 0 & a & 0 \\  0 & b & -c & 0 & 0 \\  0 & -c & 0 & a & 0 \\  0 & 0 & -c & 0 & a \\  1 & 1 & 1 & 1 & 1  \end{bmatrix}  \begin{bmatrix}  P_{00}\\  P_{01}\\  P_{02}\\  P_{10}\\  P_{11}  \end{bmatrix}  =  \begin{bmatrix}  0\\  0\\  0\\  0\\  1  \end{bmatrix}

Solving this matrix equation when a=60, b=60 and c=30, we can find the probabilities:

\begin{bmatrix}  P_{00}\\  P_{01}\\  P_{02}\\  P_{10}\\  P_{11}  \end{bmatrix}  =  \displaystyle  \frac{1}{ab^2+abc+ac^2+b^2c+bc^2}  \begin{bmatrix}  ac^2\\  abc\\  ab^2\\  bc^2\\  b^2c  \end{bmatrix}  =  \begin{bmatrix}  1/10 \\ 1/5 \\2/5\\ 1/10\\ 1/5  \end{bmatrix}
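The original post used Mathics for this step; as a cross-check, here is a small Python sketch that solves the same system exactly with standard-library fractions (the little solver is ours, not part of the original computation):

```python
from fractions import Fraction as F

def solve(M, rhs):
    """Gaussian elimination over exact fractions."""
    n = len(M)
    A = [[F(v) for v in row] + [F(r)] for row, r in zip(M, rhs)]
    for i in range(n):
        p = next(r for r in range(i, n) if A[r][i] != 0)  # pivot row
        A[i], A[p] = A[p], A[i]
        A[i] = [v / A[i][i] for v in A[i]]
        for r in range(n):
            if r != i and A[r][i] != 0:
                A[r] = [x - A[r][i] * y for x, y in zip(A[r], A[i])]
    return [row[-1] for row in A]

a, b, c = 60, 60, 30
# unknowns ordered P00, P01, P02, P10, P11; rows are balance
# equations (1), (3), (4), (5) plus the normalization condition
M = [[-b,  0,  0, a, 0],
     [ 0,  b, -c, 0, 0],
     [ 0, -c,  0, a, 0],
     [ 0,  0, -c, 0, a],
     [ 1,  1,  1, 1, 1]]
P00, P01, P02, P10, P11 = solve(M, [0, 0, 0, 0, 1])
print(P00, P01, P02, P10, P11)  # 1/10 1/5 2/5 1/10 1/5
```

The exact arithmetic confirms the fractions above with no floating-point error.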

Utilization and Throughput: Asynchronous Case

The client utilization is defined to be the probability that the client is busy. In the same way, the server utilization is the probability that the server is busy. Looking at the diagram, the client is busy sending requests at states 00 and 01, and busy processing responses at states 10 and 11. The server, on the other hand, is busy processing requests at states 01 and 02.

Therefore, the client utilization is  equal to

\begin{array}{rl}  U_{\text{client}} &= P_{00} + P_{01} + P_{10} + P_{11}\\  &= 1/10 + 1/5 + 1/10 + 1/5\\  &= 3/5\\  &= 60\%  \end{array}

The server utilization is equal to

\begin{array}{rl}  U_{\text{server}} &= P_{01} + P_{02}\\  &= 1/5 + 2/5 \\  &= 3/5 \\  &= 60\%  \end{array}

The system throughput is the number of requests the client is able to submit per unit time and is equal to

\begin{array}{rl}  X &= 60P_{00} + 60 P_{01} \\  &= 60*1/10 + 60 * 1/5 \\  &= 18  \end{array}
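Plugging the steady-state probabilities into these definitions takes only a few lines; a quick Python sketch using exact fractions:

```python
from fractions import Fraction as F

# steady-state probabilities computed earlier, keyed by state
P = {"00": F(1, 10), "01": F(1, 5), "02": F(2, 5), "10": F(1, 10), "11": F(1, 5)}

U_client = P["00"] + P["01"] + P["10"] + P["11"]  # busy sending or processing
U_server = P["01"] + P["02"]                      # busy serving requests
X = 60 * (P["00"] + P["01"])                      # requests submitted per unit time
print(U_client, U_server, X)  # 3/5 3/5 18
```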

Comparison with Synchronous Invocation

For the synchronous invocation, the client submits a request at state 00. The server receives this request and processes it at state 01. The client then gets the response and processes it at state 10, and the loop repeats from state 00. See the Markov diagram below describing this process.

We can solve for the probabilities of this Markov chain by solving the balance equations

\begin{bmatrix}  b & 0 & -a \\  0 & c & -a \\  1 & 1 & 1  \end{bmatrix}  \begin{bmatrix}  P_{00}\\  P_{01}\\  P_{10}  \end{bmatrix}  =  \begin{bmatrix}  0\\0\\1  \end{bmatrix}

Solving for the probabilities, we get

\begin{bmatrix}  P_{00}\\  P_{01}\\  P_{10}  \end{bmatrix}  =  \displaystyle  \frac{1}{ab + ac + bc}  \begin{bmatrix}  ac\\  ab\\  bc  \end{bmatrix}  =  \begin{bmatrix}  1/4\\ 1/2\\ 1/4  \end{bmatrix}

At state 00, the client is busy sending requests at the rate of 60 requests per unit time, therefore the system throughput is

\begin{array}{rl}  X &= 60 P_{00} \\  &= 60/4 \\  &= 15  \end{array}

which is less than the Asynchronous case. In addition, the client utilization is

\begin{array}{rl}  U_{\text{client}} &= P_{00} + P_{10}\\  &= 1/4 + 1/4\\  &= 1/2  \end{array}

This is lower than the Asynchronous case where the client utilization is 60%.
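As a sanity check, we can plug a, b, c into the closed-form probabilities above and recompute the synchronous metrics; a short Python sketch (again a cross-check, not the original Mathics computation):

```python
from fractions import Fraction as F

a, b, c = 60, 60, 30

# closed-form steady-state probabilities of the synchronous chain
denom = a*b + a*c + b*c
P00, P01, P10 = F(a*c, denom), F(a*b, denom), F(b*c, denom)

X = 60 * P00             # throughput: requests submitted per unit time
U_client = P00 + P10     # client busy sending (00) or processing (10)
print(P00, P01, P10, X, U_client)  # 1/4 1/2 1/4 15 1/2
```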

Therefore, the asynchronous service invocation does perform better than the synchronous one: it has higher throughput (18 versus 15 requests per unit time) and higher client utilization (60% versus 50%).

P.S. I used Mathics to solve the matrix equations. Mathics is an open-source alternative to Mathematica. It features Mathematica-compatible syntax and functions, so if you’re a Mathematica user you can run basic commands in Mathics, and if you’re a Mathics user you can easily switch to Mathematica. Visit http://mathics.github.io/ to learn more about Mathics.


When Average Is Not Enough: Thoughts on Designing for Capacity

Designing a system from scratch to handle a workload you don’t know is a challenge. If you provision too much hardware, you might be wasting money. If you provision too little, your users will complain about how slow the system is.

If you’re given only a rate, like 6000 hits/hour, you don’t know how these hits are distributed minute by minute or second by second. We can make a guess and say that there are about 100 hits per minute, or 1.67 hits/sec. If hits came uniformly at that rate, we could design a system that handles 2 hits/sec and all users would be happy, since every request would be served quickly with no queueing. But we know that’s not going to happen: in some intervals the number of hits will be at most 2, and in others it will be more.

Theoretically, requests to our server come randomly. Let’s imagine 60 bins, one for each second in a minute. We also imagine that requests are balls we throw into the bins, with each ball equally likely to land in any bin. It’s even possible for all the balls to land in a single bin!

[Figure: throwing balls at random into 60 bins]

After throwing the balls into bins, let’s see what we have.

[Figure: simulated distributions of 100 balls thrown into 60 bins]

As you can see, some bins contain more than 2 balls, even though the average is only about 1.67 balls per bin. If we design our system based on the average, the users who land in the busier seconds will have a bad experience. We therefore need to find how many requests per second our server must handle so that nearly all users have a good experience, without overspending.

To determine how many requests per second we need to support, we need the probability of getting 4, 5, 6 or more requests per second. We will compute the probability starting from 3 requests per second and increment by one until the probability is low enough. If we design the system for a rate that has a very low probability of occurring, we are spending money on something that rarely happens.

Computing the Probability Distribution

We can view the distribution of balls into bins in another way. Imagine labeling each ball with a number from 1 to 60, with each number equally likely to be picked. The label assigned to a ball is the bin (time bucket) it belongs to. After labeling all the balls, what you have is a distribution of balls into bins.

[Figure: labeling each ball with a bin number from 1 to 60]
Since each ball can be labeled in 60 different ways and there are 100 balls, the number of ways we can label 100 different balls is therefore

\displaystyle 60^{100}

Pick a number from 1 to 60, say number 1. Assume 2 balls out of 100 are labeled with number 1. In how many ways can this happen? Choose the first ball to label: there are 100 ways. Choose the second ball: there are now 99 ways. We therefore have 9900 ways to select 2 balls and label them 1. Since we don’t care in what order we picked the balls, we divide 9900 by the number of possible orderings of the two balls, which is 2! (where the exclamation mark stands for “factorial”). So far, the number of ways to label 2 balls with the same number is

\displaystyle \frac{100 \times 99}{2!}

Since these are the only balls with label 1, the third ball can be labeled anything except number 1, so there are 59 ways to label ball 3. In the same way, there are 59 ways to label ball 4. Continuing this reasoning up to ball 100, the total number of ways to label 2 balls with number 1 and the rest with anything else is therefore:

\displaystyle \frac{100 \times 99}{2!} \times 59^{98}

Notice that the exponent of 59 is 98 since there are 98 balls starting from ball 3 to ball 100.

Therefore, the probability that a given bin contains exactly two balls is

\displaystyle \frac{100 \times 99}{2!} \times \frac{59^{98}}{60^{100}} = 0.2648

We can also write this as

\displaystyle \frac{100!}{2! \times 98!} \times \frac{(60-1)^{98}}{60^{100}} = \binom{100}{2} \frac{(60-1)^{98}}{60^{100}}
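We can check this arithmetic in Python with exact integers (only the final division is floating point); this is just a verification sketch:

```python
from math import comb

ways = comb(100, 2) * 59 ** 98   # choose the 2 balls labeled "1", label the rest freely
prob = ways / 60 ** 100          # divide by the total number of labelings
print(round(prob, 4))  # 0.2648
```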

In general, if m is the number of balls, n the number of bins, and k a count of balls, then the probability that a given bin contains exactly k balls is given by

\displaystyle \binom{m}{k} \frac{(n-1)^{m-k}}{n^{m}}


where

\displaystyle \binom{m}{k} = \frac{m!}{k!(m-k)!}

is the binomial coefficient.
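The formula translates directly into code. A small Python sketch (the function name is ours) reproduces the plot data given later in the post:

```python
from math import comb

def prob_k_balls(m, n, k):
    """Probability that a given bin contains exactly k of the m balls (n bins)."""
    return comb(m, k) * (n - 1) ** (m - k) / n ** m

for k in range(1, 5):
    print(k, round(prob_k_balls(100, 60, k), 6))
# 1 0.315663
# 2 0.264836
# 3 0.146632
# 4 0.060268
```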

It turns out that this is a probability distribution, since the sum of the probabilities from k=0 to k=m is equal to 1. That is,

\displaystyle \sum_{k=0}^{m} \binom{m}{k} \frac{(n-1)^{m-k}}{n^{m}} = 1

To see this, recall from the Binomial Theorem that

\displaystyle \big( x + y \big)^n = \sum_{k=0}^{n} \binom{n}{k} x^{n-k}y^k

If we let x=n-1 and y=1, and replace n by m in the theorem, we can write the sum as

\displaystyle  \begin{array}{ll}  \displaystyle \sum_{k=0}^{m} \binom{m}{k} \frac{(n-1)^{m-k}}{n^{m}} &= \displaystyle \sum_{k=0}^{m} \binom{m}{k} \frac{(n-1)^{m-k}\cdot 1^k}{n^{m}}\\  &= \displaystyle\frac{(n-1+1)^m}{n^{m}}\\  &= \displaystyle\frac{n^m}{n^m}\\  &= \displaystyle 1  \end{array}
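We can also confirm that the sum is exactly 1 using exact rational arithmetic; a short Python check:

```python
from fractions import Fraction
from math import comb

m, n = 100, 60
# sum the exact probability of every possible count k = 0 .. m
total = sum(Fraction(comb(m, k) * (n - 1) ** (m - k), n ** m) for k in range(m + 1))
print(total)  # 1
```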

Here is a graph of this probability distribution.

[Figure: plot of the probability distribution]

Here’s the plot data:


k   probability
1 0.315663315854
2 0.264836171776
3 0.146632456689
4 0.060268424995
5 0.019612775592
6 0.005263315484
7 0.001197945897
8 0.000236035950
9 0.000040895118
10 0.000006307552

We can see that for k=9, the probability is about 0.004%. Anything beyond that we can call rare and not worth spending money on.
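The capacity search described above can be automated: scan k upward until the probability drops below a chosen cutoff. The 0.01% threshold used here is an assumed business decision, not something fixed by the math:

```python
from math import comb

def prob_k(m, n, k):
    # probability that a given second receives exactly k of the m requests
    return comb(m, k) * (n - 1) ** (m - k) / n ** m

threshold = 1e-4   # assumption: rarer than 0.01% is not worth provisioning for
k = 3              # start the scan at 3 requests per second, as in the text
while prob_k(100, 60, k) >= threshold:
    k += 1
print(k)  # 9
```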

Just For Fun

What’s the probability that a given bin is empty, that is, there are no balls in it?

Other Probability Distributions

Our computation above was based on a uniform distribution of arrivals. However, there are other distributions that are more suitable for modeling the arrival of requests. One of the most widely used is the Poisson distribution, which you can read about here.
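For intuition, the uniform (binomial) probabilities we computed are already close to a Poisson distribution with mean λ = 100/60; a quick comparison sketch:

```python
from math import comb, exp, factorial

m, n = 100, 60
lam = m / n  # average number of requests per second

# binomial (balls-in-bins) versus Poisson probabilities for k = 0 .. 4
binom = [comb(m, k) * (n - 1) ** (m - k) / n ** m for k in range(5)]
poisson = [exp(-lam) * lam ** k / factorial(k) for k in range(5)]
for k in range(5):
    print(k, round(binom[k], 4), round(poisson[k], 4))
```

The two columns agree to within a few thousandths, which is why the Poisson distribution is a convenient model when only the average arrival rate is known.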

R Code

The R code to generate the simulation:

par(mfrow=c(4,4))
f = function() {
  # throw 100 balls (requests) into 60 bins (seconds), uniformly at random
  x = sample(seq(1, 60), 100, replace=TRUE)
  # nbins=60 keeps trailing empty seconds on the axis
  plot(tabulate(x, nbins=60), type="h", ylab="tx", xlab="secs")
}

for (i in 1:16) {
  f()
}

The R code to generate the probability distribution:

# p(m,n,s) counts the labelings with exactly s balls in a given bin:
# prod(seq(m, m-s+1))/factorial(s) is the binomial coefficient choose(m, s),
# and (n-1)^(m-s) labels the remaining balls with the other bin numbers.
p = function(m, n, s) {
  prod(seq(m, m-s+1)) / factorial(s) * (n-1)^(m-s)
}

# divide by the 60^100 total labelings to turn counts into probabilities
tt = c()
for (i in 1:10) {
  tt = c(tt, p(100, 60, i) / 60^100)
}
plot(tt, type="h", xlab="Number of Balls", ylab="Probability")

Dedication

This post is dedicated to my friend Ernesto Adorio, a mathematician. He loves combinatorial mathematics.

Rest in peace my friend! I miss bouncing ideas with you.