## Quantum Computing: Programming the Quantum Dice

The power of quantum computing comes from the superposition of states, which allows us to do computation in parallel. With an input of 1 qubit, we can do parallel computation on 2 integers (0 and 1). With an input of 2 qubits, we can do parallel computation on 4 numbers (0, 1, 2, 3), and so on. In general, with $n$ qubits, we can do parallel computation on $2^n$ numbers.

A quantum computer allows us to easily simulate the roll of a die. For the sake of simplicity, say we have a 4-faced fair die with faces 0, 1, 2, 3. Since there are 4 faces, the probability of getting any face is 1/4.

Using the quantum circuit below, we can show the probabilities using a quantum computer.

The circuit is constructed very easily and is, in my opinion, the "hello world" of quantum computing. We start with 2 qubits, each initialized to 0. Then, using a Hadamard gate on each qubit, we create a superposition of all states from 0 to 3:

The action of the Hadamard gates on the qubits gives you this state:

$\displaystyle \mathbf{H}^{\otimes n} |0\rangle = \displaystyle \frac{1}{\sqrt{2}^n} \sum_{x=0}^{2^n-1} |x\rangle$
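As a quick numerical check (my own addition, not part of the original circuit), we can build $\mathbf{H}^{\otimes n}$ with NumPy and verify that applying it to $|0\rangle$ yields equal amplitudes of $1/\sqrt{2}^n$ on all $2^n$ basis states:

```python
import numpy as np

# Single-qubit Hadamard gate
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def hadamard_state(n):
    """Apply H to each of n qubits initialized to |0>, returning the state vector."""
    op = np.array([[1.0]])
    for _ in range(n):
        op = np.kron(op, H)
    zero = np.zeros(2 ** n)
    zero[0] = 1.0
    return op @ zero

# For n = 2, every one of the 4 amplitudes is 1/sqrt(2)^2 = 0.5
print(hadamard_state(2))
```

Measuring this state therefore returns each of the $2^n$ values with equal probability, which is exactly the fair-die behavior we want.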

Then, measuring the circuit 1000 times, we get a probability distribution. The graph below shows that the probability of getting any state is more or less the same.

## 2-Dice Game

What if we toss 2 dice and take the sum of the faces that show up? The possible outcomes are 0, 1, 2, 3, 4, 5, 6. What number are you going to bet on to maximize your winnings?
If we tabulate the possible outcomes, we end up with the following table:

| + | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| **0** | 0 | 1 | 2 | 3 |
| **1** | 1 | 2 | 3 | 4 |
| **2** | 2 | 3 | 4 | 5 |
| **3** | 3 | 4 | 5 | 6 |

We can see that the sum 3 has a greater probability of coming up than any other number. In fact, the sum 3 comes up 4 times out of 16, which means that the probability of getting a sum of 3 is 4/16 = 0.25.
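We can verify this count with a few lines of Python (a classical tally, not part of the quantum circuit):

```python
from collections import Counter
from fractions import Fraction

# Tally the 16 equally likely outcomes of two fair 4-faced dice (faces 0-3)
counts = Counter(a + b for a in range(4) for b in range(4))
probs = {s: Fraction(c, 16) for s, c in sorted(counts.items())}
for s, p in probs.items():
    print(s, p)  # sum 3 appears 4 times: probability 4/16 = 1/4
```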

We can also show the same probability distribution using quantum computing.

First we need to create a circuit for adding two 2-qubit numbers. To do this, we can make use of the full-adder circuit for adding 1-bit numbers:

Here $a_0$ and $b_0$ are the input qubits, and $C_{\mathrm{i}0}$ is the input carry bit from the addition of the less significant bit. $s_0$ is the sum of the 2 bits, and $C_{\mathrm{o}0}$ is the carry bit of the current addition operation, which becomes an input to the addition of the next significant bit. For more information about binary adders, refer to this Wikipedia article.

Since there are two bits we need to add, we need two of these circuits strung together, as shown below:
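Before wiring this up as a quantum circuit, it helps to check the classical logic. Here is a minimal Python sketch (the function names are mine, not from the original) of one full adder and of two of them chained into a 2-bit ripple-carry adder:

```python
def full_adder(a, b, cin):
    """One-bit full adder: sum = a XOR b XOR cin,
    carry-out = (cin AND (a XOR b)) XOR (a AND b).
    The two carry terms are never both 1, so XOR here equals OR."""
    s = a ^ b ^ cin
    cout = (cin & (a ^ b)) ^ (a & b)
    return s, cout

def add_2bit(a, b):
    """Add two 2-bit numbers by chaining two full adders."""
    s0, c0 = full_adder(a & 1, b & 1, 0)
    s1, c1 = full_adder((a >> 1) & 1, (b >> 1) & 1, c0)
    return s0 | (s1 << 1) | (c1 << 2)

# All 16 sums, matching the addition table above
print([add_2bit(a, b) for a in range(4) for b in range(4)])
```

The XOR form of the carry is what the quantum translation below uses, since CNOT and Toffoli gates implement XOR-style updates.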

## Translating to a Quantum Circuit

Using the circuit diagram above, we can translate it to a quantum circuit by using the following rules:

1. For each input bit of an XOR gate, create a CNOT gate whose control qubit is that input bit and whose target qubit is the output of the XOR gate:

2. For each AND gate, create a controlled-controlled-NOT gate (also known as a Toffoli gate) whose control qubits are the input bits of the AND gate and whose target qubit is the output bit of the AND gate.

Using these 2 rules, we can translate Figure 3 to the following quantum circuit:

## Simulating the 2-Dice Betting Game

Now that we have created the adder circuit, we can use quantum computing to simulate the betting game. Thanks to the Hadamard gate, it's very easy to create a superposition of states from the input qubits: we only have to apply a Hadamard gate to each input qubit. The full quantum circuit is shown below:

Here is the QISKit code to do this:

```python
# Import the Qiskit SDK
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import execute, Aer
from qiskit.tools.visualization import circuit_drawer, plot_histogram

# Create the input quantum registers
qa = QuantumRegister(2, name="a")
qb = QuantumRegister(2, name="b")

# Create the intermediate registers
qci = QuantumRegister(1, name="ci")
qco = QuantumRegister(1, name="co")
qd0 = QuantumRegister(3, name="d0")
qd1 = QuantumRegister(3, name="d1")

# Create the output register
qs = QuantumRegister(3, name="s")

# Create a classical register with 3 bits
c = ClassicalRegister(3, name="cl")

# Create a quantum circuit
qc = QuantumCircuit(qa, qb, qci, qco, qd0, qd1, qs, c)

# Apply a Hadamard gate to each input qubit to create
# a superposition of all possible inputs
qc.h(qa[0])
qc.h(qa[1])
qc.h(qb[0])
qc.h(qb[1])

qc.barrier()

#### 1st bit
qc.cx(qa[0],qd0[0])
qc.cx(qb[0],qd0[0])
qc.cx(qd0[0],qs[0])
qc.cx(qci[0],qs[0])

qc.ccx(qci[0],qd0[0],qd0[1])
qc.ccx(qa[0],qb[0],qd0[2])
qc.cx(qd0[1],qco[0])
qc.cx(qd0[2],qco[0])

#### 2nd bit
qc.cx(qa[1],qd1[0])
qc.cx(qb[1],qd1[0])
qc.cx(qd1[0],qs[1])
qc.cx(qco[0],qs[1])

qc.ccx(qco[0],qd1[0],qd1[1])
qc.ccx(qa[1],qb[1],qd1[2])
qc.cx(qd1[1],qs[2])
qc.cx(qd1[2],qs[2])

qc.barrier()
qc.measure(qs, c)

# Compile and run the quantum circuit on a simulator backend
backend_sim = Aer.get_backend('qasm_simulator')
shots = 100

job_sim = execute(qc, backend=backend_sim, shots=shots)
result_sim = job_sim.result()

# Get the counts and plot the histogram
counts = result_sim.get_counts(qc)
plot_histogram(counts)

# Test
# 0+0 = 0 check
# 0+1 = 1 check
# 0+2 = 2 check
# 0+3 = 3 check
# 1+0 = 1 check
# 1+1 = 2 check
# 1+2 = 3 check
# 1+3 = 4 check
# 2+0 = 2 check
# 2+1 = 3 check
# 2+2 = 4 check
# 2+3 = 5 check
# 3+0 = 3 check
# 3+1 = 4 check
# 3+2 = 5 check
# 3+3 = 6 check
```

As usual, the code for drawing the circuit is as follows:

```python
from qiskit.tools.visualization import matplotlib_circuit_drawer as drawer, qx_color_scheme
my_scheme=qx_color_scheme()
my_scheme['plotbarrier'] = False
drawer(qc, style=my_scheme)
```


And here’s the resulting probability distribution generated by the simulation runs.

## Why it Works

Suppose we have two $n$-qubit numbers. We apply a Hadamard gate to every qubit of both numbers and the identity to the output register:

$\mathbf{H}^{\otimes n}\otimes \mathbf{H}^{\otimes n}\otimes \mathbf{I} |0\rangle |0\rangle |0\rangle$

to get

$\displaystyle \Big(\displaystyle \frac{1}{\sqrt{2}^n} \sum_{x=0}^{2^{n}-1} |x\rangle\Big)\otimes \Big( \displaystyle \frac{1}{\sqrt{2}^n} \sum_{y=0}^{2^{n}-1} |y\rangle\Big)\otimes |0\rangle$

Next, we apply the addition operator defined as

$\mathrm{U}_{f} |xy\rangle |0\rangle = |xy\rangle |x+y\rangle$

where

$|xy\rangle = |x\rangle \otimes |y\rangle$

to give

$\displaystyle \frac{1}{2^n}\sum_{x=0}^{2^{n}-1} \sum_{y=0}^{2^{n}-1} \mathrm{U}_f |xy\rangle \otimes |0\rangle = \frac{1}{2^n}\sum_{x=0}^{2^{n}-1} \sum_{y=0}^{2^{n}-1} |xy\rangle \otimes |x+y\rangle$

The right-hand side can be written, grouping the terms by the value of the sum $i = x + y$, as:

$\displaystyle \frac{1}{2^n}\sum_{i=0}^{2\cdot(2^n-1)}\Big(\sum_{j=0}^i|j\rangle |i-j\rangle\Big) |i\rangle$

(As written, the inner sum is valid for $i \le 2^n-1$, which covers the example below; for larger $i$ it is clipped to the valid range $i-2^n+1 \le j \le 2^n-1$.)

If we measure the output register and obtain the value $i$, the state collapses to

$\Psi = \displaystyle \left(\frac{1}{2}\right)^n \Big(\sum_{j=0}^i|j\rangle |i-j\rangle\Big) |i\rangle$

with probability

$\Psi^*\Psi = \displaystyle \left(\frac{1}{2}\right)^{2n} \sum_{j=0}^i 1 = (i+1) \left(\frac{1}{2}\right)^{2n}$

In our example we have $n=2$, so the probability of getting a sum of $i=3$ is:

$\displaystyle (i+1)\cdot \left(\frac{1}{2}\right)^{2n} = (3+1)\left(\frac{1}{2}\right)^{4} = 4\cdot \left(\frac{1}{16}\right) = \frac{1}{4} = 0.25$
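We can check this closed form by brute-force enumeration (a classical check I am adding; it also shows the clipping for $i > 2^n - 1$):

```python
from collections import Counter
from fractions import Fraction

n = 2
# Each of the 2^n * 2^n pairs (x, y) is equally likely after the Hadamards;
# tally how many pairs produce each sum i = x + y.
counts = Counter(x + y for x in range(2 ** n) for y in range(2 ** n))
probs = {i: Fraction(c, 4 ** n) for i, c in counts.items()}

# The formula (i + 1) / 2^(2n) holds for i <= 2^n - 1 ...
for i in range(2 ** n):
    assert probs[i] == Fraction(i + 1, 4 ** n)

# ... while for larger i the inner sum is clipped: only 2^(n+1) - 1 - i pairs remain
for i in range(2 ** n, 2 ** (n + 1) - 1):
    assert probs[i] == Fraction(2 ** (n + 1) - 1 - i, 4 ** n)

print(probs[3])  # 1/4
```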

## Conclusion

We have shown the power of quantum computing: it allows us to represent the input as a superposition of all possible inputs, which are then computed in parallel. We have shown that this can be done using Hadamard gates. Using an adder circuit, we were able to modify the probabilities so that they are no longer evenly distributed but follow a distribution that favors some outcomes over others. You can just imagine what other possibilities quantum computing opens up!

## An Interview Question: Using Integer Programming

We can solve the interview question using a mathematical technique called Integer Programming. Let $d_1, d_2, \ldots, d_N$ be the variables representing diskette 1, diskette 2, and so on. Each variable $d_k$ can only take the value 0 or 1: a 0 means the diskette is not used, while a 1 means that it is used.

Each file is saved to a certain diskette. We want to know to which diskette $d_i$ a given file $f_j$ is assigned. To represent this, we assign the variable $a_{ij}$ a value of 1 if file $f_j$ is assigned to diskette $d_i$, and 0 otherwise.

We normalize the file sizes so that if $s_j$ is the size of file $f_j$, then $s_j \le 1$. We do this by simply dividing all file sizes by the capacity of a diskette. For a given diskette $d_i$, the following constraint should be satisfied:

$d_i - s_1a_{i1} - s_2a_{i2} - \ldots - s_N a_{iN} \ge 0$

for each diskette $i = 1, 2, \ldots, N$, where $s_j$ is the normalized size of file $f_j$ for $j=1,2,\ldots,N$.

Since each file $f_j$ can only be assigned to one diskette, we have the following constraint:

$a_{1j} + a_{2j} + \ldots + a_{Nj} = 1$

where $a_{1j}$ is the variable representing the “file $f_j$ is in diskette $d_1$“, etc.

Finally, we have to constrain the value of $d_i$ to be either 0 or 1, that is,

$d_i \le 1$

for all $i=1,2,\ldots,N$.

## Integer Programming Formulation

Given the above information, we can formulate the Integer Programming problem as

Minimize:

$d_1 + d_2 + d_3 + \ldots + d_N$

subject to

$\begin{array}{rl} d_1 - s_1a_{11} - s_2a_{12} - s_3a_{13} - \ldots - s_Na_{1N} &\ge 0\\ d_2 - s_1a_{21} - s_2a_{22} - s_3a_{23} - \ldots - s_Na_{2N} &\ge 0\\ :\\ d_N - s_1a_{N1} - s_2a_{N2} - s_3a_{N3} - \ldots - s_Na_{NN} &\ge 0\\ a_{11} + a_{21} + a_{31} + \ldots + a_{N1} &= 1\\ a_{12} + a_{22} + a_{32} + \ldots + a_{N2} &= 1\\ :\\ a_{1N} + a_{2N} + a_{3N} + \ldots + a_{NN} &= 1\\ d_1 &\le 1\\ d_2 &\le 1\\ :\\ d_N &\le 1 \end{array}$
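On tiny instances we can sanity-check this formulation by brute force. The sketch below (my own helper, not part of the original R program) enumerates every assignment of files to diskettes and returns the minimum number of diskettes used:

```python
from itertools import product

def min_diskettes(sizes, capacity):
    """Brute-force the minimum number of diskettes needed to hold all files.
    Feasible only for very small N, but useful to validate the IP model."""
    n = len(sizes)
    best = n
    # assignment[j] = index of the diskette that holds file j
    for assignment in product(range(n), repeat=n):
        loads = [0] * n
        for j, d in enumerate(assignment):
            loads[d] += sizes[j]
        if all(load <= capacity for load in loads):
            best = min(best, len(set(assignment)))
    return best

print(min_diskettes([10, 2, 1, 1], capacity=10))  # 2
```

This grows exponentially with the number of files, which is exactly why the Integer Programming formulation is worth the setup effort.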

## Solving the Problem

We will use R to solve this Integer Programming formulation. Please see the code below:

```r
library("lpSolve")
NUMFILES=4

# Generate random file sizes between 1 and 10
FileSizes=ceiling(10*runif(NUMFILES))
x = -1*FileSizes/10
l=length(x)

# Each file can be in any of the diskettes. If there are N files,
# then to determine if file j is in diskette i, the variable x_ij will be
# 1 if file j is in diskette i, and 0 otherwise.
# Here we construct the coefficients of the variables x_ij, which are the
# file sizes (normalized to 1)
zz=c()
for(i in 1:(l-1)){
  zz=c(zz,x,rep(0,l*l))
}
zz=c(zz,x)

# Construct the coefficients of the indicator variables representing the
# diskettes d_i
zzmatrix=matrix(zz,ncol=l*l,byrow=T)
CoefficientsOfDiskettes=c()
for(i in 1:l){
  ttt=rep(0,l)
  ttt[i] = 1
  CoefficientsOfDiskettes=c(CoefficientsOfDiskettes,ttt,zzmatrix[i,])
}

# Construct the coefficients of x_ij for constant j. These variables
# satisfy the equation \sum_{i=1}^N x_{ij} = 1
SumOfFileAcrossDiskettes=c()
for(i in 1:l){
  ttt=rep(0,l)
  ttt[i]=1
  SumOfFileAcrossDiskettes=c(SumOfFileAcrossDiskettes,rep(ttt,l))
}

# Prepend the coefficients of the variables d_i. The value of these coefficients is 0.
SumOfFileAcrossDiskettesMatrix=matrix(SumOfFileAcrossDiskettes,ncol=l*l,byrow=T)
PrependCoefficientsOfDiskettes=c()
for(i in 1:l){
  PrependCoefficientsOfDiskettes=c(PrependCoefficientsOfDiskettes,c(rep(0,l),SumOfFileAcrossDiskettesMatrix[i,]))
}

# Construct the coefficients of d_i for the constraints d_i <= 1
DisketteConstraints=c()
for(i in 1:l){
  ttt=rep(0,l)
  ttt[i]=1
  DisketteConstraints=c(DisketteConstraints,ttt,rep(0,l*l))
}

# Construct the matrix input of lpSolve
const.mat=matrix(c(CoefficientsOfDiskettes,PrependCoefficientsOfDiskettes,DisketteConstraints),ncol=l*(l+1),byrow=T)

print("Matrix Coefficients:")
print(const.mat)

# Construct the inequalities/equalities
const.dir=c(rep(">=",l),rep("=",l),rep("<=",l))

# Construct the right-hand side
const.rhs=c(rep(0,l),rep(1,l),rep(1,l))

# Construct the objective function
objective.in=c(rep(1,l),rep(0,l*l))

# Invoke lpSolve
mylp=lp(direction="min",objective.in=objective.in,const.mat=const.mat,const.dir=const.dir,const.rhs=const.rhs,all.int=T)

# Print the results
print(paste("Number of Diskettes: ", sum(mylp$solution[1:l])))
tz=matrix(mylp$solution,ncol=l,byrow=T)
print("File Sizes: ")
print(FileSizes)
for(i in 2:(l+1)){
  files = which(tz[i,] == 1)
  if(length(files) > 0){
    print(paste("Files in diskette ", i-1))
    print(files)
  }
}
```



Most of the code above sets up the matrix of coefficients. The call to lp near the end then invokes lpSolve to compute the optimal values of the variables.

## Program Output

Running this code we get the output

```
[1] "Matrix Coefficients:"
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14] [,15] [,16] [,17] [,18] [,19] [,20]
[1,]    1    0    0    0   -1 -0.2 -0.1 -0.1    0   0.0   0.0   0.0     0   0.0   0.0   0.0     0   0.0   0.0   0.0
[2,]    0    1    0    0    0  0.0  0.0  0.0   -1  -0.2  -0.1  -0.1     0   0.0   0.0   0.0     0   0.0   0.0   0.0
[3,]    0    0    1    0    0  0.0  0.0  0.0    0   0.0   0.0   0.0    -1  -0.2  -0.1  -0.1     0   0.0   0.0   0.0
[4,]    0    0    0    1    0  0.0  0.0  0.0    0   0.0   0.0   0.0     0   0.0   0.0   0.0    -1  -0.2  -0.1  -0.1
[5,]    0    0    0    0    1  0.0  0.0  0.0    1   0.0   0.0   0.0     1   0.0   0.0   0.0     1   0.0   0.0   0.0
[6,]    0    0    0    0    0  1.0  0.0  0.0    0   1.0   0.0   0.0     0   1.0   0.0   0.0     0   1.0   0.0   0.0
[7,]    0    0    0    0    0  0.0  1.0  0.0    0   0.0   1.0   0.0     0   0.0   1.0   0.0     0   0.0   1.0   0.0
[8,]    0    0    0    0    0  0.0  0.0  1.0    0   0.0   0.0   1.0     0   0.0   0.0   1.0     0   0.0   0.0   1.0
[9,]    1    0    0    0    0  0.0  0.0  0.0    0   0.0   0.0   0.0     0   0.0   0.0   0.0     0   0.0   0.0   0.0
[10,]    0    1    0    0    0  0.0  0.0  0.0    0   0.0   0.0   0.0     0   0.0   0.0   0.0     0   0.0   0.0   0.0
[11,]    0    0    1    0    0  0.0  0.0  0.0    0   0.0   0.0   0.0     0   0.0   0.0   0.0     0   0.0   0.0   0.0
[12,]    0    0    0    1    0  0.0  0.0  0.0    0   0.0   0.0   0.0     0   0.0   0.0   0.0     0   0.0   0.0   0.0
[1] "Number of Diskettes:  2"
[1] "File Sizes: "
[1] 10  2  1  1
[1] "Files in diskette  1"
[1] 2 3 4
[1] "Files in diskette  2"
[1] 1
```



## Interpreting the Result

The first part of the output gives the matrix of coefficients. After that, the output prints the number of diskettes needed to store the files, then the randomly generated file sizes (from 1 to 10), and finally which diskettes contain which files.

The space complexity of this solution is quite substantial. Given $N$ files, we need $N^2 + N$ variables and $3N$ constraints, for a coefficient matrix with $(N^2 + N)\times 3N$ entries.

## Basic Portfolio Optimization

Everyone would like to make a profit out of the money they have. Unless the sum of money is small, putting it in the bank is not a wise choice: the interest rate given by the bank is so small that inflation will just eat up most of the profit. The most profitable, but riskier, investment is the stock market. Investing in the stock market is more of an art than a science. An investor will initially take a look at the fundamental data of a company to see how it is performing; you don't want to put your money in a company that will close in a month.

After identifying potential companies, the next question is how much of each security one should purchase in order to maximize profit while at the same time minimizing risk. There is a risk involved in buying such securities: the price per share of a security changes frequently during the day. As an example, suppose you purchased 100 shares of security A at a price of $1 per share, and on the tenth day the price rose to $1.50. If you decide to sell all your shares on that day, your profit will be $50, or 50% of your initial investment. If, however, the price per share declined to $0.90, you would lose $10, or 10% of your initial capital. In this discussion, we consider the closing prices.

Risk and Expected Return

How then do we calculate the risk involved in holding shares of a security? The risk depends on how long you are going to keep those shares before selling them. Table 1 lists the price per share of a traded security for the last 10 days. Assume you bought 1 share on day 1. If you sell it on day 10, your profit will be 0.6%. Selling it on day 6, your return will be 3.7%. The average return is 0.6%. The risk in investing in this security is represented by the standard deviation $\sigma = 2.04\%$.

Let $\bar R_i$ be the expected return of security $i$ and $X_i$ be the fraction of the investor's fund invested in asset $i$. Then the expected return of the portfolio $P$ consisting of these assets is given by

$\bar R_p = \sum^N_{i=1} X_i\bar R_i$

where $N$ is the number of assets in the portfolio. The variance of portfolio $P$ is given by

$\sigma_P^2 = \sum_{j=1}^N X_j^2\sigma_j^2 + \sum_{j=1}^N\sum_{k\neq j}^N X_jX_k \sigma_{jk}$

where $\sigma_{jk}$ is the covariance between assets $j$ and $k$. The covariance is the expected value of the product of two deviations: the deviations of the returns on asset $j$ from its mean and the deviations of the returns on asset $k$ from its mean.

Simple Example

Let us illustrate portfolio optimization using a simple example.
Suppose the current interest rate when investing in Treasury Bills is 8%. Treasury Bills are risk-free investments, in the sense that you will get 8% more than your initial money at the end of the holding period. The holding period is the length of time your money is in the possession of the borrower before it matures. Assuming that the inflation rate at the end of the holding period is 3%, the real return on Treasury Bills is $8-3=5\%$.

In order to make more money, we decide to invest some of our money in stocks and bonds, which are riskier but whose returns are high enough to justify the risk. Let $A$ and $B$ be two securities of a portfolio $P$ with the following parameters: We assume $\sigma_{ab}= 0.20$. The average return of portfolio $P$ is

$r_p = 0.10w_a + 0.17 w_b$

and the variance is

$\sigma_p^2 = (0.12w_a)^2 + (0.25w_b)^2 + 2(0.20)(0.12)(0.25) w_a w_b$

where $w_a$ and $w_b$ are the fractions of your money invested in securities $A$ and $B$ respectively. We require that $w_a + w_b = 1$, that is, we invest all our allocated money in these two securities. The table below shows some values of $\sigma$ versus $e_r$ for various weights.

Figure 1 shows a plot of risk versus expected return for various combinations of $w_a$ and $w_b$. The line shown is called the Capital Allocation Line. The slope of this line is called the reward-to-variability ratio and is given by

$S = \frac{E(r_p) - r_f}{\sigma_p}$

The y-intercept of this line is the risk-free rate, which in our example is 8%. The slope is the amount of return you get per unit increase in the amount of risk. The goal of portfolio optimization is to find the combination of $w_a$ and $w_b$ that maximizes this quantity.
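We can already locate the maximum numerically. The following Python sketch (my own grid search, using the parameters given in the text and treating the 0.20 as entering the variance formula exactly as written) scans the weights and reports the best reward-to-variability ratio:

```python
def portfolio(wa):
    """Expected return and risk of the portfolio for a given weight on A."""
    wb = 1.0 - wa
    r_p = 0.10 * wa + 0.17 * wb
    var_p = (0.12 * wa) ** 2 + (0.25 * wb) ** 2 + 2 * 0.20 * 0.12 * 0.25 * wa * wb
    return r_p, var_p ** 0.5

def reward_to_variability(wa, r_f=0.08):
    """Slope of the Capital Allocation Line for this weight."""
    r_p, sigma_p = portfolio(wa)
    return (r_p - r_f) / sigma_p

# Grid search over w_a in [0, 1]; w_b = 1 - w_a is implied
best_wa = max((i / 1000.0 for i in range(1001)), key=reward_to_variability)
print(best_wa, reward_to_variability(best_wa))
```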
Substituting the given values of the risks and expected returns, we get the optimization problem we have to solve:

Maximize

$S_p = \frac{E(r_p) - 0.08}{\sigma_p}$

subject to

$r_p = 0.10 w_a + 0.17 w_b$

$\sigma_p^2 = (0.12w_a)^2 + (0.25w_b)^2 + 2(0.20)(0.12)(0.25) w_a w_b$

$1 = w_a + w_b$

In the next post, we will use genetic algorithms to solve this optimization problem.

## Using Linear Programming to Solve the Call Center Problem

Agents cost money. Not only will you pay for their salary but also for the other benefits in your compensation package. So the fewer agents you need to do a job, the less overhead and the more profit for you. In the last article, we solved the call center scheduling problem using a manual approach. The question now is this: does our solution give us the least possible cost to operate the call center? We cannot be sure. However, there is a technique we can use to calculate the least possible cost of our schedule subject to the constraints on the required agents per time slot. This technique is called Integer Linear Programming.

Formulation of the Problem

Let $x_i$, $0 \le i \le 335$, be the number of call center agents who come to work at time slot $i$. Using this notation, we can calculate the number of call center agents present in the office at time slot 1:

$S_1=x_0 + 0 + \ldots+ 0 + x_{320} + x_{321} + \ldots + x_{335}$

How did we arrive at this equation? If an agent starts at time slot 320, then he/she will still be in the office at time slot 1; this is, however, his/her last time slot for the shift. All agents that start after slot 320 up to slot 335 will also be present at time slot 1, and so is the agent that starts at time slot 0. Similarly, the number of agents in time slot 2 is given by

$S_2= x_0 + x_1 + 0 + \ldots + 0 + x_{321} + x_{322} + \ldots + x_{335}$

A similar pattern holds for $i$ from 2 to 335. Let $\vec{req}$ be the vector of the required number of agents per time slot.
Then we can require, for each time slot $i$, that the number of agents present be greater than or equal to req[i]. The linear programming problem can now be specified by: Minimize

$\sum_{i=0}^{335} x_i$

subject to the following constraints: We can programmatically create the matrix of these constraints using the algorithm below. The code is written in R, which is an open-source statistical software.

```r
x=seq(1:336)
y=(x+17) %% 336
bigmatrix=c()
for(i in 1:336){
  z=(y - i) %% 336
  tmp=rep(0,336)
  for(j in 1:336){
    if(z[j] < 18){
      tmp[j] = 1
    }
  }
  bigmatrix=c(bigmatrix,tmp)
}
```

Notice that we start our index from 1 rather than 0. This is because R is a 1-based programming language. Using the lpSolve package of R, we can compute the minimum of the objective function using the lp function of the library.

```r
library("lpSolve")

# mydata2 holds the required number of agents per time slot
bigmatrix=matrix(bigmatrix,nrow=336,byrow=T)
f.obj=rep(1,336)
f.con=bigmatrix
f.dir=rep(">=",336)
f.rhs=mydata2
print("computing lp")
mylp=lp("min", f.obj, f.con, f.dir, f.rhs, int.vec=1:336)
```

After executing the code above, we get the following result. The value of the objective function is given by:

```
> mylp$objval
[1] 47
```

If we compare this to the manual method, the value of the objective function is just the sum of the components of z:

```
> sum(z)
[1] 48
```

This means that, using the Integer Linear Programming technique, we were able to come up with a schedule that uses 47 agents instead of 48, and this is guaranteed to be the least possible cost.
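For illustration, the same circular coverage matrix can be re-created in a few lines of Python (a hypothetical sketch, equivalent to the R loop above): row $i$ has a 1 in column $j$ exactly when an agent starting at slot $j$ is present during slot $i$.

```python
N = 336      # 48 half-hour slots per day, 7 days
SHIFT = 18   # a 9-hour shift covers 18 half-hour slots

matrix = [[0] * N for _ in range(N)]
for j in range(N):           # start slot of a shift
    for k in range(SHIFT):   # the 18 slots that shift covers
        matrix[(j + k) % N][j] = 1   # shifts wrap around the week

# Each shift covers exactly 18 slots, so every row and column has 18 ones
print(sum(matrix[0]))  # 18
```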

## Manual Approach to Call Center Scheduling

In the last post, we defined the problem of call center scheduling. In this post, we are going to present a manual way of solving this problem.

Let us first define some terminology. A time slot is a 30-minute interval, e.g. 12:00 am – 12:30 am. Each time slot has a required number of agents that must be in the office. Since there are 24 hours in a day, there are 48 time slots in a day. Each time slot is independent of the others. Therefore, the total number of independent time slots in a week is 48*7 = 336. Let us define the following vectors:

• a vector $\vec{req}$ of length 336 whose $i$th component is the required number of agents in the $i$th slot.
• a vector $\vec{v}$ of length 48*7=336, each component of which represents the number of agents currently in the office.
• a vector $\vec{z}$ of length 336, which represents the number of call center agents that will start in each time slot. A value of 0 means that no agent is to begin working in that time slot, while a value of n means n call center agents are expected to begin working in that time slot.

Here is how we compute the vectors $\vec{v}$ and $\vec{z}$:

```r
for(i in 1:length(req)){
  if(v[i] < req[i]){
    x = req[i] - v[i]
    z[i] = z[i] + x
    # since an agent works 9 hours in a day,
    # we have 9*2=18 slots
    for(j in 0:17){
      k = i + j
      l = ((k - 1) %% 336) + 1  # wrap around the 48*7 = 336 slots
      v[l] = v[l] + x
    }
  }
}
```

The vector v is initialized to 0. For each component i of v, we compare it with the value req[i], the required number of agents for time slot i. If v[i] is less than req[i], then we add the number of agents that should start at that time slot, which is x = req[i] - v[i]. We update the vector z with this value, z[i] = z[i] + x. We then update the vector v by adding the value of x to all components of v from i until i+17, because an agent starting at time slot i will work 9 hours, i.e., until slot i+17.

The code above is actually a code snippet from the program I created using R programming language. To know more about R, click this link.
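The same greedy update can be written in Python; the sketch below is a hypothetical re-implementation with a toy one-day requirement vector, not the actual call center data:

```python
SLOTS = 48    # one day of half-hour slots, for a small demonstration
SHIFT = 18    # 9 hours = 18 half-hour slots

def greedy_schedule(req):
    """For each slot, start exactly enough new agents to meet the requirement."""
    n = len(req)
    v = [0] * n   # agents present in each slot
    z = [0] * n   # agents starting in each slot
    for i in range(n):
        if v[i] < req[i]:
            x = req[i] - v[i]
            z[i] += x
            for j in range(SHIFT):       # the new agents cover the next 18 slots,
                v[(i + j) % n] += x      # wrapping around to the start
    return z, v

req = [1] * SLOTS
req[12] = 3   # in this toy example, 6:00 am needs 3 agents
z, v = greedy_schedule(req)
print(sum(z), all(v[i] >= req[i] for i in range(SLOTS)))
```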

Running this code in R, we get the following result:

The corresponding value of vector v is:

The difference of v from the required staffing req is :

The matrix form is more convenient:

This means that for slot 1, we have 2 more agents than the required 1 agent for that time slot. These two agents will not be doing anything productive as far as their job is concerned. They are just there because they need to complete the 9 hours of work stated in their contracts.

Now, is there a way for us to minimize this unproductivity while still meeting the required number of agents in the center? We will answer this in the next post.

## Call Center scheduling Problem

This problem I got from someone working in a call center. The call center operation in question operates 24 hours a day, 7 days a week. Every 30 minutes, there is a projected number of agents that need to be present in order to handle calls. Here is an example of such a requirement:

The data shown is truncated. There should be 48 rows corresponding to the 48 half-hour intervals in a day. The whole data set can be downloaded from here.

Assuming that an agent works 9 hours per day, how do you schedule your staff in such a way that you can meet the required number of agents per 30-minute interval using a minimum number of agents?

Approach to solving this problem

Imagine for a moment that you are the manager of this call center and you want to solve this problem manually. Looking at the first row and first column of the data, you'll see that from 12:00 am to 12:30 am on Monday you need 1 agent in the office. Since an agent works 9 hours per day, this agent's duty will end at 9:00 am. If $n$ is the total number of agents in the office at 9:00 am, then by the next interval there will be $n-1$ agents left (assuming no new agent comes to work).

Looking down the column under Monday, you will see that a new person is required to be in the office at 6:00 am. You then assign a new person to come to the office at this time.

Further down, another person is required at 8:30 am. However, at 9:00 am the first person's shift is up while the requirement is still 3 persons, so you need to assign another person to come to the office at 9:00 am to meet the requirement. Continuing the process in this way, you can ultimately come up with a schedule for your call center agents.

Complication

Notice that people starting work at 3:30 pm will overlap with the next day's shift: they will leave the office at 12:30 am the next day. The same is true for all agents who come to the office from 3:30 pm onwards. To make matters worse, agents who come to work on Sunday starting at 3:30 pm will overlap with Monday's shift. This means that the one person you initially scheduled above will not be alone, but will have a companion coming from Sunday's shift.

Big Question

The big question therefore is this: Is the schedule you came up with using the manual approach the best schedule in terms of minimizing the number of staff?

We will answer this question in our next article.