## An Interview Question: Using Integer Programming

We can solve the Interview Question using a mathematical technique called Integer Programming. Let $d_1, d_2, \ldots, d_N$ be binary variables representing diskette 1, diskette 2, and so on up to diskette $N$. Each $d_k$ can only take the value 0 or 1: a 0 means the diskette is not used, while a 1 means that it is used.

Each file is saved to some diskette. To record which diskette $d_i$ a given file $f_j$ is assigned to, we introduce the variable $a_{ij}$, which takes the value 1 if file $f_j$ is assigned to diskette $d_i$ and 0 otherwise.

We will normalize the file sizes so that if $s_i$ is the size of $f_i$, then $s_i \le 1$. We do this by simply dividing all file sizes by the capacity of a diskette. For a given diskette $d_i$, the following constraint should be satisfied:

$d_i - s_1a_{i1} - s_2a_{i2} - \ldots - s_N a_{iN} \ge 0$

for each diskette $i = 1, 2, \ldots, N$, where $s_j$ is the normalized size of file $f_j$ for $j=1,2,\ldots,N$.

Since each file $f_j$ can only be assigned to one diskette, we have the following constraint:

$a_{1j} + a_{2j} + \ldots + a_{Nj} = 1$

where $a_{1j}$ is the variable representing "file $f_j$ is in diskette $d_1$", and so on.

Finally, we have to constrain the value of $d_i$ to be either 0 or 1, that is,

$d_i \le 1$

for all $i=1,2,\ldots,N$.

## Integer Programming Formulation

Given the above information, we can formulate the Integer Programming problem as

Minimize:

$d_1 + d_2 + d_3 + \ldots + d_N$

subject to

$\begin{array}{rl} d_1 - s_1a_{11} - s_2a_{12} - s_3a_{13} - \ldots - s_Na_{1N} &\ge 0\\ d_2 - s_1a_{21} - s_2a_{22} - s_3a_{23} - \ldots - s_Na_{2N} &\ge 0\\ \vdots\\ d_N - s_1a_{N1} - s_2a_{N2} - s_3a_{N3} - \ldots - s_Na_{NN} &\ge 0\\ a_{11} + a_{21} + a_{31} + \ldots + a_{N1} &= 1\\ a_{12} + a_{22} + a_{32} + \ldots + a_{N2} &= 1\\ \vdots\\ a_{1N} + a_{2N} + a_{3N} + \ldots + a_{NN} &= 1\\ d_1 &\le 1\\ d_2 &\le 1\\ \vdots\\ d_N &\le 1 \end{array}$

## Solving the Problem

We will use R to solve this Integer Programming formulation. Please see the code below:

library("lpSolve")

NUMFILES=4

# Generate random file sizes between 1 and 10
FileSizes=ceiling(10*runif(NUMFILES))
# Negate the normalized sizes: the constraints have the form
# d_i - s_1*x_i1 - ... - s_N*x_iN >= 0
x = -1*FileSizes/10
l=length(x)

# Each file can be in any of the diskettes. Suppose there are N files;
# to determine if file j is in diskette i, the variable x_ij will be
# 1 if file j is in diskette i, and 0 otherwise.
# Here we construct the coefficients of the variables x_ij, which are
# the file sizes (normalized to 1).
zz=c()
for(i in 1:(l-1)){
  zz=c(zz,x,rep(0,l*l))
}
zz=c(zz,x)

# Construct the coefficients of the indicator variables representing
# the diskettes d_i
zzmatrix=matrix(zz,ncol=l*l,byrow=T)
CoefficientsOfDiskettes=c()
for(i in 1:l){
  ttt=rep(0,l)
  ttt[i]=1
  CoefficientsOfDiskettes=c(CoefficientsOfDiskettes,ttt,zzmatrix[i,])
}

# Construct the coefficients of x_ij for constant j. These variables
# satisfy the equation \sum_{i=1}^N x_{ij} = 1
SumOfFileAcrossDiskettes=c()
for(i in 1:l){
  ttt=rep(0,l)
  ttt[i]=1
  SumOfFileAcrossDiskettes=c(SumOfFileAcrossDiskettes,rep(ttt,l))
}

# Prepend the coefficients of the variables d_i. The value of these
# coefficients is 0.
SumOfFileAcrossDiskettesMatrix=matrix(SumOfFileAcrossDiskettes,ncol=l*l,byrow=T)
PrependCoefficientsOfDiskettes=c()
for(i in 1:l){
  PrependCoefficientsOfDiskettes=c(PrependCoefficientsOfDiskettes,c(rep(0,l),SumOfFileAcrossDiskettesMatrix[i,]))
}

# Construct the coefficients of d_i for the constraints d_i <= 1
DisketteConstraints=c()
for(i in 1:l){
  ttt=rep(0,l)
  ttt[i]=1
  DisketteConstraints=c(DisketteConstraints,ttt,rep(0,l*l))
}

# Construct the matrix input of lpSolve
const.mat=matrix(c(CoefficientsOfDiskettes,PrependCoefficientsOfDiskettes,DisketteConstraints),ncol=l*(l+1),byrow=T)

print("Matrix Coefficients:")
print(const.mat)

# Construct the inequality/equality directions
const.dir=c(rep(">=",l),rep("=",l),rep("<=",l))

# Construct the right-hand side
const.rhs=c(rep(0,l),rep(1,l),rep(1,l))

# Construct the objective function
objective.in=c(rep(1,l),rep(0,l*l))

# Invoke lpSolve
mylp=lp(direction="min",objective.in=objective.in,const.mat=const.mat,const.dir=const.dir,const.rhs=const.rhs,all.int=T)

# Print results
print(paste("Number of Diskettes: ",sum(mylp$solution[1:l])))
tz=matrix(mylp$solution,ncol=l,byrow=T)
print("File Sizes: ")
print(FileSizes)
for(i in 2:(l+1)){
  files=which(tz[i,]==1)
  if(length(files) > 0){
    print(paste("Files in diskette ",i-1))
    print(files)
  }
}



Most of the code above sets up the matrix of coefficients. The call to the lp function then invokes lpSolve to compute the optimal values of the variables.
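For a small instance, we can also cross-check the lpSolve result by exhaustive search. The following is an illustrative Python sketch (not part of the original R program); it uses the file sizes 10, 2, 1, 1 and diskette capacity 10 from the sample run shown in the Program Output section:

```python
from itertools import product

def min_diskettes(sizes, capacity):
    """Exhaustively try every assignment of files to diskettes and
    return the smallest number of diskettes that holds all the files."""
    n = len(sizes)
    best = n  # one file per diskette always works since each size <= capacity
    for assignment in product(range(n), repeat=n):
        loads = [0] * n
        for size, disk in zip(sizes, assignment):
            loads[disk] += size
        if all(load <= capacity for load in loads):
            best = min(best, len(set(assignment)))
    return best

# The file sizes and diskette capacity from the sample run.
print(min_diskettes([10, 2, 1, 1], 10))  # 2
```

The search grows as $N^N$, so it is only feasible for a handful of files, but it confirms the Integer Programming answer on small instances.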

## Program Output

Running this code we get the output

[1] "Matrix Coefficients:"
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14] [,15] [,16] [,17] [,18] [,19] [,20]
[1,]    1    0    0    0   -1 -0.2 -0.1 -0.1    0   0.0   0.0   0.0     0   0.0   0.0   0.0     0   0.0   0.0   0.0
[2,]    0    1    0    0    0  0.0  0.0  0.0   -1  -0.2  -0.1  -0.1     0   0.0   0.0   0.0     0   0.0   0.0   0.0
[3,]    0    0    1    0    0  0.0  0.0  0.0    0   0.0   0.0   0.0    -1  -0.2  -0.1  -0.1     0   0.0   0.0   0.0
[4,]    0    0    0    1    0  0.0  0.0  0.0    0   0.0   0.0   0.0     0   0.0   0.0   0.0    -1  -0.2  -0.1  -0.1
[5,]    0    0    0    0    1  0.0  0.0  0.0    1   0.0   0.0   0.0     1   0.0   0.0   0.0     1   0.0   0.0   0.0
[6,]    0    0    0    0    0  1.0  0.0  0.0    0   1.0   0.0   0.0     0   1.0   0.0   0.0     0   1.0   0.0   0.0
[7,]    0    0    0    0    0  0.0  1.0  0.0    0   0.0   1.0   0.0     0   0.0   1.0   0.0     0   0.0   1.0   0.0
[8,]    0    0    0    0    0  0.0  0.0  1.0    0   0.0   0.0   1.0     0   0.0   0.0   1.0     0   0.0   0.0   1.0
[9,]    1    0    0    0    0  0.0  0.0  0.0    0   0.0   0.0   0.0     0   0.0   0.0   0.0     0   0.0   0.0   0.0
[10,]    0    1    0    0    0  0.0  0.0  0.0    0   0.0   0.0   0.0     0   0.0   0.0   0.0     0   0.0   0.0   0.0
[11,]    0    0    1    0    0  0.0  0.0  0.0    0   0.0   0.0   0.0     0   0.0   0.0   0.0     0   0.0   0.0   0.0
[12,]    0    0    0    1    0  0.0  0.0  0.0    0   0.0   0.0   0.0     0   0.0   0.0   0.0     0   0.0   0.0   0.0
[1] "Number of Diskettes:  2"
[1] "File Sizes: "
[1] 10  2  1  1
[1] "Files in diskette  1"
[1] 2 3 4
[1] "Files in diskette  2"
[1] 1



## Interpreting the Result

The first part of the output shows the matrix of coefficients. The next line prints the number of diskettes needed to store the files, followed by the randomly generated file sizes from 1 to 10. Finally, the last lines show which diskettes contain which files.

The space complexity of this solution is quite substantial. Given $N$ files, we need $N^2 + N$ variables and $3N$ constraints, for a total of $(N^2 + N)\times 3N$ coefficient entries.

## An Interview Question

I was given this interview question and I'd like to share it with you. The setting is back in the days when the largest hard disk was 1 GB, there were no CD writers yet, and the only way to back up your data was through those 1.44 MB floppy disks. You want to back up your files but you want to minimize the number of floppy disks you need to use. Assume your most important files are in a single directory. How will you distribute the files across your disks in such a way that the number of disks you use is minimized?

To make this simple, let’s assume the following:

– We will not take into account that every file copied to a disk also has its metadata stored on the disk, which eats up space as you add more files. For our purposes, we ignore this complexity.
– The size of each file is less than or equal to 1.44 MB.

First we need a list of those files, including their sizes, sorted in descending order of size. If A is the list of files, we can apply this algorithm:

B := empty list of files to copy to the current floppy disk
remaining_size := 1.44 MB
For each file in A:
    If remaining_size - file.size >= 0:
        remaining_size := remaining_size - file.size
        Move file from A to B
Copy all files listed in B to the current disk
Empty B and repeat the process with a new disk for the files remaining in A


Although there are better algorithms than the one above, this is the one I managed to come up with during the interview.
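The algorithm above can be sketched in a few lines of code. Here is an illustrative Python version (not part of the original interview answer), using some made-up file sizes:

```python
def pack_disks(sizes, capacity=1.44):
    """Greedy packing: sort the sizes in descending order, then fill one
    disk at a time with every remaining file that still fits."""
    remaining = sorted(sizes, reverse=True)
    disks = []
    while remaining:
        space = capacity
        current, leftover = [], []
        for size in remaining:
            if space - size >= 0:   # the file still fits on this disk
                current.append(size)
                space -= size
            else:
                leftover.append(size)
        disks.append(current)
        remaining = leftover        # repeat with a fresh disk
    return disks

# Hypothetical file sizes in MB.
print(pack_disks([1.2, 0.4, 0.9, 0.3, 0.6]))  # [[1.2], [0.9, 0.4], [0.6, 0.3]]
```

Each pass over the remaining files fills one disk, which is exactly the repeated scan whose cost we analyze next.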

We now need to determine how fast our algorithm can run.

## Worst Case Complexity

How slow can this algorithm get? If for any two files $F_i$ and $F_j$ in A the sizes satisfy $F_i + F_j > 1.44$, then every file ends up on its own diskette. In that case the algorithm executes the comparison step once per remaining file on every pass: for the first disk it executes $N$ times, for the second disk $N-1$ times, for the third disk $N-2$ times, and so on. The total number of comparisons is therefore the summation:

$\displaystyle \sum_{i=1}^{N} i$

which is equal to

$\displaystyle \frac{N(N+1)}{2}$

Therefore, in the worst case, the complexity is $O(N^2)$.

## Best Case Complexity

The best case is when all files fit on just one diskette. In that case, the total number of comparisons is $N$.

## Average Case Complexity

On average, files have different sizes. We now compute the complexity under the assumption that the size distribution is uniform.

If $k$ is the number of diskettes, the numbers of comparisons form a monotonically decreasing sequence $\{ a_1, a_2, a_3, \ldots, a_k \}$ taken at random from the set $\{ 1, 2, \ldots, N\}$. Each of the numbers $a_j$, $j\in \{1, 2, \ldots, k\}$, has probability $1/N$ of being chosen. Let $X$ be a random variable such that

$\displaystyle Pr(X=a_j) = \frac{1}{N} \text{ for } j=1,2,\ldots,k$

then the number of comparisons $C$ is equal to

$\displaystyle C = \sum_{i=1}^k X = kX$

The expected value of $C$ is then

$E[C] = E[kX] = kE[X]$

However, the expected value of X is given by

$\displaystyle E[X] = \sum_{j=1}^N j\cdot Pr(X=j) = \frac{1}{N} \sum_{j=1}^N j = \frac{1}{N}\frac{N(N+1)}{2} = \frac{N+1}{2}$

Therefore,

$\displaystyle E[C] = k\frac{N+1}{2}$

What remains is to determine the average value of $k$, which is the number of diskettes. If $M=1.44$ is the maximum file size, the average file size is $M/2$. The average total file size is then $NM/2$. The average number of diskettes is equal to the average total size divided by size of diskette, that is

$k = \displaystyle \frac{NM}{2}\frac{1}{M} = \frac{N}{2}$

This means that

$\displaystyle E[C] = \frac{N}{2} \frac{N+1}{2} = O(N^2)$

which is the same as the worst case complexity.

There is another way to solve this problem using Integer Programming.

## Asynchronous Versus Synchronous: A Performance Perspective

I just had my coffee from my favorite coffee shop down the corner. I'm such a regular at that coffee shop that the baristas know me, and I'm able to sit down after I order and do something else while they make my coffee and serve it to me when it's ready. If I ordered from other branches of this coffee shop, I would have to wait for my coffee in a corner, unable to do anything productive. So while I was seated comfortably waiting for my coffee, a thought came to me: this is a great example of synchronous versus asynchronous service invocation.

A synchronous service invocation is like buying your coffee and having to wait in the corner for the baristas to finish it before you can sit down and do something useful. An asynchronous service invocation lets you get seated and do something else while waiting for your coffee to be served. My instinct tells me that asynchronous invocation performs better than synchronous, especially when the service takes a long time to complete and the waiting time is better spent doing something else. But how can we show that this is the case?

We can model the synchronous/asynchronous invocation as a Markov Chain and compute a few metrics from our model. If you’re not familiar with this approach, you can refer to this article MODELING QUEUING SYSTEMS USING MARKOV CHAINS.

First, we model the asynchronous invocation. We can identify each state of our system by 2 slots, each containing a number. The first slot is the number of responses waiting for the client to process, and the second slot is the number of requests waiting for the server to process. To simplify the modeling, we assume the following:

1. The maximum number of requests that the server can handle at any given time is 2. This means a client cannot send another request if the second number is equal to 2.
2. When the server responds, the first number increments by 1. Once a client receives a response, it will stop whatever it is doing and process that response. When the client finishes processing the response, the first number goes down to zero. As a consequence, this assumption says that the maximum value of slot number 1 is 1 at any given time.

With these assumptions, we can draw the Markov Diagram of the asynchronous system to come up with the below diagram:

Explanation of the Diagram

The system initially starts at state 00, where there are no requests and no responses. It transitions to state 01 when the client sends a request, which increments the server's pending requests to 1. From this state, the system can transition to either state 02 or state 10. State 02 occurs when the client sends another request without waiting for a response. State 10 occurs when the server is able to process the request and return a response, thereby decrementing its pending items and incrementing the client's number of responses to process.

At state 02, the client cannot send any more requests since the maximum number of requests the server can handle is 2. From this state, the system can only transition to state 11, where the server has processed one of the requests (thus decrementing the second slot from 2 to 1 and incrementing the first slot from 0 to 1).

State 11 can only transition to 01 since the client will stop whatever it is doing to process the single response it has received (thus decrementing the first slot from 1 to 0).

The numbers $a,b,c$ are the rates at which the transitions will occur. For example, the rate at which the system will transition from state 00 to state 01 is $b$.

In the remainder of this article, we assume that the client can request at the rate of 60 requests per unit time and can process the response at 60 per unit time. We also assume that the server can process requests at 30 per unit time. We therefore have the following:

$a=60, b=60, c=30$

Computing the Probabilities

At steady state, the flow going into each state is equal to the flow out of that state. For example, for state 00, the following is true:

$\displaystyle bP_{00} = aP_{10}$

Doing this for all states, we get the following balance equations:

$\begin{array}{rlr} \displaystyle bP_{00} &= aP_{10} & (1)\\ bP_{00} + a P_{11} &= bP_{01} + cP_{01} & (2)\\ bP_{01} &= cP_{02} & (3)\\ aP_{10} &= cP_{01} & (4)\\ aP_{11} &= cP_{02} & (5) \end{array}$

Since the sum of the probabilities should be equal to 1, we have our last equation:

$P_{00} + P_{01} + P_{02} + P_{10} + P_{11} = 1$

Since we have 6 equations in 5 unknowns, one of the equations above must be redundant. In fact, you can show that equation (2) above is redundant.

We can form the matrix equation of the system of equations above and solve for the probabilities:

$\begin{bmatrix} -b & 0 & 0 & a & 0 \\ 0 & b & -c & 0 & 0 \\ 0 & -c & 0 & a & 0 \\ 0 & 0 & -c & 0 & a \\ 1 & 1 & 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} P_{00}\\ P_{01}\\ P_{02}\\ P_{10}\\ P_{11} \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 0\\ 0\\ 1 \end{bmatrix}$

Solving this matrix equation when a=60, b=60 and c=30, we can find the probabilities:

$\begin{bmatrix} P_{00}\\ P_{01}\\ P_{02}\\ P_{10}\\ P_{11} \end{bmatrix} = \displaystyle \frac{1}{ab^2+abc+ac^2+b^2c+bc^2} \begin{bmatrix} ac^2\\ abc\\ ab^2\\ bc^2\\ b^2c \end{bmatrix} = \begin{bmatrix} 1/10 \\ 1/5 \\2/5\\ 1/10\\ 1/5 \end{bmatrix}$
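These closed-form values can be checked quickly. Below is a short illustrative Python sketch (not part of the original derivation) that plugs in the rates and verifies both the normalization and the balance equations:

```python
# Rates used in the article: client request and response-processing
# rates of 60, server processing rate of 30.
a, b, c = 60, 60, 30

# Closed-form steady-state probabilities of the asynchronous chain.
denom = a*b**2 + a*b*c + a*c**2 + b**2*c + b*c**2
P00 = a*c**2 / denom
P01 = a*b*c / denom
P02 = a*b**2 / denom
P10 = b*c**2 / denom
P11 = b**2*c / denom

# The probabilities must sum to 1 and satisfy the balance equations.
assert abs(P00 + P01 + P02 + P10 + P11 - 1) < 1e-12
assert abs(b*P00 - a*P10) < 1e-12  # balance at state 00
assert abs(b*P01 - c*P02) < 1e-12  # balance at state 02
assert abs(a*P10 - c*P01) < 1e-12  # balance at state 10
assert abs(a*P11 - c*P02) < 1e-12  # balance at state 11

print(P00, P01, P02, P10, P11)  # 0.1 0.2 0.4 0.1 0.2
```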

Utilization and Throughput: Asynchronous Case

The client utilization is defined to be the probability that the client is busy. In the same way, the server utilization is the probability that the server is busy. Looking at the diagram, the client is busy sending requests at states 00 and 01, and busy processing responses at states 10 and 11. On the other hand, the server is busy processing requests at states 01 and 02.

Therefore, the client utilization is equal to

$\begin{array}{rl} U_{\text{client}} &= P_{00} + P_{01} + P_{10} + P_{11}\\ &= 1/10 + 1/5 + 1/10 + 1/5\\ &= 3/5\\ &= 60\% \end{array}$

The server utilization is equal to

$\begin{array}{rl} U_{\text{server}} &= P_{01} + P_{02}\\ &= 1/5 + 2/5 \\ &= 3/5 \\ &= 60\% \end{array}$

The system throughput is the number of requests the client is able to submit per unit time and is equal to

$\begin{array}{rl} X &= 60P_{00} + 60 P_{01} \\ &= 60*1/10 + 60 * 1/5 \\ &= 18 \end{array}$

Comparison with Synchronous Invocation

For the synchronous invocation, the client will submit a request at state 00. The server will then receive this request and process it immediately at state 01. The client will get the response and process it at state 10 and do the loop again at state 00. Please see the Markov diagram below describing this process.

We can solve the probabilities of this Markov Chain by solving the balance equations

$\begin{bmatrix} b & 0 & -a \\ 0 & c & -a \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} P_{00}\\ P_{01}\\ P_{10} \end{bmatrix} = \begin{bmatrix} 0\\0\\1 \end{bmatrix}$

Solving for the probabilities, we get

$\begin{bmatrix} P_{00}\\ P_{01}\\ P_{10} \end{bmatrix} = \displaystyle \frac{1}{ab + ac + bc} \begin{bmatrix} ac\\ ab\\ bc \end{bmatrix} = \begin{bmatrix} 1/4\\ 1/2\\ 1/4 \end{bmatrix}$

At state 00, the client is busy sending requests at the rate of 60 requests per unit time, therefore the system throughput is

$\begin{array}{rl} X &= 60 P_{00} \\ &= 60/4 \\ &= 15 \end{array}$

which is less than the Asynchronous case. In addition, the client utilization is

$\begin{array}{rl} U_{\text{client}} &= P_{00} + P_{10}\\ &= 1/4 + 1/4\\ &= 1/2 \end{array}$

This is lower than the Asynchronous case where the client utilization is 60%.

Therefore, the asynchronous service invocation is indeed better than the synchronous one: it has a higher throughput (18 versus 15 requests per unit time) and a higher client utilization (60% versus 50%).
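The comparison can be condensed into a few lines of arithmetic. Here is an illustrative Python sketch (not part of the original post) that recomputes both throughput and client utilization figures from the steady-state probabilities derived earlier:

```python
# Steady-state probabilities computed for each model.
P00_async, P01_async, P10_async, P11_async = 0.1, 0.2, 0.1, 0.2
P00_sync, P10_sync = 0.25, 0.25

# Throughput: the request rate (60 per unit time) weighted by the
# probability of the states in which the client can send a request.
X_async = 60 * P00_async + 60 * P01_async  # states 00 and 01
X_sync = 60 * P00_sync                     # state 00 only

# Client utilization: probability that the client is busy.
U_async = P00_async + P01_async + P10_async + P11_async
U_sync = P00_sync + P10_sync

print(X_async, X_sync)  # 18.0 15.0
```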

P.S. I used Mathics to solve the matrix equations. Mathics is an open source alternative to Mathematica. It features Mathematica-compatible syntax and functions so if you’re a user of Mathematica you can run basic commands in Mathics. If you’re a Mathics user, you can easily switch to Mathematica as well. Please visit http://mathics.github.io/ to know more about Mathics.

## If A Picture Paints A Thousand Words

And so begin the words of a song by Bread. The complexity we see in the world is a product of the interactions of simple building blocks. Everything we see is just a combination of a few elements in the periodic table. In turn, every element is a combination of electrons, protons and neutrons.

A picture paints a thousand words, but what paints a thousand pictures? Just like a Lego truck that is composed of a few Lego shapes, a "black and white" picture is composed of a few levels of gray. The levels are just a set of 256 numbers starting from 0 up to 255. The number 0 represents black and the number 255 represents white. The numbers in between represent shades between extreme black and extreme white. The diagram below shows the levels of gray in a 16 by 16 matrix. The gradient goes from left to right, top to bottom.
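The 16 by 16 gradient just described is easy to generate programmatically. A minimal illustrative Python sketch (the original diagram is an image, not code):

```python
# Lay out the gray levels 0..255 in a 16x16 matrix,
# left to right, top to bottom.
gradient = [list(range(row * 16, row * 16 + 16)) for row in range(16)]

print(gradient[0][:4])   # [0, 1, 2, 3]
print(gradient[15][15])  # 255
```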

Using just these 256 numbers, we can already draw interesting figures. Take for example this 8 by 8 matrix of numbers:

197   4  46  18 155 186 144 190
111  30   6 131 235  99 112 227
10   26 141 147 120  28  92 187
241 100 118 149 178   5 158 213
105  43 104  63 172 222  64 174
130 194 243  56  59 166 134 133
247 145  36  82 238 251  53 129
171  29  51 163 199 214 113 195


When you plot this on an 8×8 grid, you get the following pattern:

Not all pictures need to be boring. Here’s an example of a picture that is not that boring:

17 23 26 20 13 20 20 22 21 17 17 25 20 11 15 15 9 10 18 13 14 12 13 39 61 63 71 72 69 71 74 72 74 75 72 76 75 75 75 80 77 78 79 79 80 80 80 80 81 81 82 82 83 83 83 83 84 84 84 84 85 86 86 86 87 87 87 88 88 88 89 89 89 89 90 90 90 90 90 91 90 90 91 91 92 92 92 92 94 94 95 95 95 95 95 95 96 96 96 96 97 97 97 97 98 98 98 99 99 99 100 100 101 101 101 101 102 102 103 103 104 104 104 105 105 105 105 105 106 106 106 107 107 108 108 108 110 110 110 110 111 111 112 112 113 113 113 114 114 115 115 115 116 116 117 117 117 118 118 119 119 119 120 120 121 121 122 122 123 124 124 125 125 125 126 127 127 127 128 128 129 129 129 129 131 131 132 132 133 134 ...


That is just the start of a 310×204 matrix saved in the file “clifton.txt”. You can plot it using the following R commands:

clifton = read.table("clifton.txt")

#convert this data frame to a vector
vclifton = as.numeric(unlist(clifton))

#create the 310x204 matrix
mclifton = matrix(vclifton,nrow=310,ncol=204)

#plot the image
image(mclifton[,ncol(mclifton):1],col=gray((0:255)/255))


you’ll get the following image

Histogram

If you look at the file “clifton.txt” and count the occurrence of each number, you will see that there are 9 occurrences of zero and 4 occurrences of 240. If you do this for all other numbers, you will get the following counts:

Level Frequency
0     9
1     16
2     31
3     34
4     49
:
:
237   115
238   113
239   18
240   4
:
:
254   0
255   0


If you plot this data as a bar chart, you will get what is known as the histogram of the picture:

The histogram above was obtained using the following R command:

hist(vclifton,breaks=256)


The histogram tells us how much each gray level contributes to the actual image.

Here’s another example of a picture that is relatively dark. In photography, the image below is called a “Low Key” image because it is mostly dark.

Low Key images tend to have the bulk of the levels at the left side of the scale. The next image is a “High Key” image because it is mostly white:

Histograms of High Key images tend to have the bulk of the colors at the right side of the scale. The histogram of the “clifton.txt” image tends to have a much better balance in that it contains most of the colors at the middle of the range.

It's surprising how complex the images we can produce are using just 256 levels of gray. We haven't even considered colors yet! We also saw how a histogram can be constructed by counting the frequency of each level, and what the histograms of High Key, Low Key and optimally exposed images look like.

## A Sound Correction

As I was taking my coffee over breakfast today, I realized that in the previous post I did not incorporate the time it took for the sound waves to reach my ear when I pressed the stop button of the stopwatch. Sound travels at 340.29 meters per second, so from a height of 25 meters it would take about 0.073 seconds for the sound of the explosion to reach the ground. This should be negligible for the purposes of our computation. But what exactly is the effect if we incorporate this minor correction?

As seen in the figure below, the recorded time is composed of the time it took for the skyrocket to reach the maximum height plus the time it took for the sound waves to reach my ear, that is,

$t_r = t_a + t_s$

where $t_r$ is the recorded time, $t_a$ is the actual time the skyrocket reached its maximum height and $t_s$ is the time it took for the sound waves to travel from the maximum height to the ground.

We can rewrite the height as

$\displaystyle h_{\text{max}} = - \frac{gt_a^2}{2} + v_i t_a$

Since $v_i = gt_a$,

$\displaystyle h_{\text{max}} = - \frac{gt_a^2}{2} + gt_a^2$
$\displaystyle = \frac{gt_a^2}{2}$

At the maximum height, $h_{\text{max}}$, it would take the sound waves $t_s$ seconds to travel, that is,

$\displaystyle h_{\text{max}} = v_s t_s$

where $v_s$ is the speed of sound. Therefore,

$\displaystyle t_sv_s = \frac{gt_a^2}{2}$
$\displaystyle (t_r- t_a)v_s = \frac{gt_a^2}{2}$
$\displaystyle v_st_r - v_st_a = \frac{gt_a^2}{2}$
$\displaystyle \frac{gt_a^2}{2} + v_st_a - v_st_r = 0$

The last equation above is a quadratic equation in $t_a$, which we can solve using the quadratic formula (taking the positive root, since $t_a > 0$):

$\displaystyle t_a = \frac{-v_s + \sqrt{v_s^2 + 4(1/2)gv_st_r}}{2(1/2)g}$

Substituting the values $g = 9.8$, $v_s=340.29$, and $t_r = 2.3$, we get

$\displaystyle t_a = 2.22$

Using this result, the maximum height is computed to be

$\displaystyle h_{\text{max}} = \frac{gt_a^2}{2} = 24.33$

Comparing this with our previous computation of $h_{\text{max}} = 25.829$, we find that we have overestimated the maximum height by about 1.49 meters. It’s not really that bad so we can just drop the effect of sound.
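The correction is easy to reproduce numerically. Below is an illustrative Python sketch (not part of the original post) that solves the quadratic for $t_a$ and recomputes the maximum height, recovering the values above up to rounding in the last digit:

```python
import math

g = 9.8       # gravitational acceleration, m/s^2
v_s = 340.29  # speed of sound, m/s
t_r = 2.3     # recorded time, s

# Positive root of (g/2)*t_a^2 + v_s*t_a - v_s*t_r = 0.
t_a = (-v_s + math.sqrt(v_s**2 + 2 * g * v_s * t_r)) / g
h_max = g * t_a**2 / 2

print(round(t_a, 2), round(h_max, 2))  # 2.23 24.33
```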

## New Year, New Heights and the Projectile

I spent my new year celebration at our friend's place with 2 other colleagues. It started with a dinner at about 9:30 pm, and we bought some bottles of wine in addition to the spirits we had already brought. At about 10:30, a street party was set up in the vicinity and about 5 other families joined in a potluck. Marijo's brother started the fireworks at about that time, with David and Gerard (my colleagues) taking turns setting them off. I did not attempt to set them off because I did a lot of those things way back as a child and am still lucky enough to have my fingers intact.

This is a video of our 2012 New Year’s Celebration:

As Gerard was flying off skyrockets (or kwitis as they are called in my language), David asked me, “How high do you think these things go up?” That was a really good question which got me thinking. I said “Let's compute!” So I took my iPhone and launched its built-in stopwatch. Gerard flew about 3 skyrockets and I timed how long each one flew until it blew up in the air. The average was found to be 2.3 seconds.

Independent of Mass

Legend has it that Galileo Galilei performed an experiment at the Leaning Tower of Pisa. He dropped objects of different masses to see which of them would fall first. What he found was that, in a vacuum, objects dropped from the same height reach the ground at the same time, regardless of mass. Using this result, we can compute the distance travelled by any object under the influence of gravity. At heights near the earth's surface, the acceleration of an object in the presence of gravity alone is given by

$\displaystyle \frac{d^2h}{dt^2}= \frac{dv}{dt} = -g$

where $h$ is the vertical distance (height), $v$ is the velocity and $g$ is the acceleration due to gravity given by $g = 9.8 \text{m/s}^2$.

The equation above is a simple differential equation which we can solve for the velocity at any time $t$ by integrating both sides:

$\displaystyle \int^{v}_{v_i} dv = \int^{t}_{t_i} -g dt$

which by the Fundamental Theorem of Calculus gives us

$\displaystyle v - v_i = -g(t - t_i)$
$\displaystyle v = -gt + gt_i + v_i$

If we take initial time of the rocket launch to be zero, that is, $t_i=0$, we have

$\displaystyle v = \frac{dh}{dt}= -gt + v_i$

We don’t know the initial velocity $v_i$ of the rocket. However, at the maximum height attained by the rocket, the velocity is zero. Using this fact, we can solve for the initial velocity:

$\displaystyle 0 = -gt_{\text{max}} + v_i$
$\displaystyle v_i = gt_{\text{max}}$

where $t_{\text{max}}$ is the time the rocket reached maximum height.

At $t_{\text{max}}=2.3$ seconds, the initial velocity is therefore $v_i = 9.8 \times 2.3 \approx 22.5$ m/s.

Maximum height

We can solve for the height as a function of time by integrating both sides of the velocity equation:

$\displaystyle \int^{h}_{h_i} dh = \int^{t}_{t_i} (-gt + v_i)\, dt$
$\displaystyle h - h_i = \left(-\frac{gt^2}{2} + v_i t\right) \bigg|^t_{t_i}$

Since the initial height is zero, that is, $h_i = 0$, and the initial time is zero, $t_i = 0$, we have

$\displaystyle h = -\frac{gt^2}{2} + v_i t$

Since the rocket reaches its maximum height at $t=2.3$ seconds, we have

$\displaystyle h_{\text{max}} = -\frac{9.8 \times (2.3)^2}{2} + 22.5 \times 2.3 = 25.829 \text{ meters}$

Does this make sense? The average height of a building storey is about 3 meters, so this is about 8 or 9 storeys high, which I think makes sense.
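As a quick check of the arithmetic, here is an illustrative Python sketch (not from the original post) that recomputes $v_i$ and the maximum height:

```python
g = 9.8      # gravitational acceleration, m/s^2
t_max = 2.3  # measured time to reach maximum height, s

v_i = g * t_max                          # velocity is zero at the top
h_max = -g * t_max**2 / 2 + v_i * t_max  # equals g*t_max^2/2

print(round(v_i, 2), round(h_max, 2))  # 22.54 25.92
```

Carrying the unrounded $v_i = 22.54$ m/s through gives $h_{\text{max}} = gt_{\text{max}}^2/2 \approx 25.92$ meters; the value 25.829 in the text differs slightly because $v_i$ was rounded to 22.5 before substituting.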

It’s good that my new year not only started with a warm welcome from friends but it also appealed to my physics curiosity.

Thank you to Marijo Condes and family for letting us spend a memorable New Year’s Eve.

## Chances Are

I just came from a great weekend escapade, with my wife and two colleagues, in one of the beautiful regions of the Philippines. Bicol is home to Mayon Volcano, one of the seven perfect coned volcanoes on Earth. The scenery is so green and the air so refreshing. It was one of the best trips I ever made in the Philippines.

The trip was also full of coincidences. We met a colleague in one of the restaurants in Naga City where we had our dinner. In the same restaurant we also met one of our contacts in Ateneo de Naga. On the way home, we met another colleague at the airport. But the first coincidence happened on our way to Legazpi City, where one of my companions (I'll hide the name to protect the innocent) met a good friend seated next to him in the airplane. It was a pleasant surprise for my companion, and he asked, “What are the odds of that happening?” That's a good question which got me thinking later that night.

To simplify the problem, let us not compute the odds of my companion and his friend being on the same plane going to Bicol. Rather, let us compute the odds of him being seated next to his friend. The plane was an Airbus A320 with about 180 seats. The seats are laid out in such a way that there is a middle aisle and each row is composed of three seats. Let us draw a row of seats and assume we have persons A and B. How many ways can they be seated in that row such that they sit next to each other? Below is a diagram of the ways two persons can be seated next to each other.

As you can see, there are four ways for them to be seated next to each other in any given row. In the first arrangement, we have A and B seated in that order, and the third seat can be occupied by anyone. Having chosen the seats for A and B, how many choices do we have for the third person? Since there are 180 seats minus two, the third person has 178 seats to choose from. Once person 3 chooses a seat, the fourth person can choose from the remaining 177 seats. Continuing this for all other passengers, you get $178!$ ways of seating the remaining 178 passengers such that persons A and B are seated next to each other. Now, that is just for the case where A and B occupy the first two seats. For each of the other three cases, you can check that there are likewise $178!$ ways. Since there are 4 cases per row and 60 rows, we have

$\displaystyle 4\cdot 60\cdot 178!$

ways to have two persons sit next to each other on a 180-seat plane. Since the total number of ways to assign passengers to 180 seats is $180!$, the probability of having two persons sit next to each other is

$\displaystyle \frac{4\cdot 60\cdot 178!}{180!}$
$\displaystyle = \frac{4\cdot 60\cdot 178!}{180\cdot 179\cdot 178!}$
$\displaystyle = \frac{4\cdot 60}{180\cdot 179}$
$\displaystyle = 0.00744879$

This is quite a low probability, and it assumes that the seating arrangement is totally random. By totally random, I mean that my wife and I could be seated far apart from each other. In reality this is not the case, since many passengers come in groups and prefer to be seated next to each other. If we take this into account, the probability of my companion and his friend being seated next to each other increases.
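The computation above can be checked directly. Here is an illustrative Python sketch (not part of the original post) that evaluates the probability, using exact rational arithmetic so the huge factorials cancel before any conversion to floating point:

```python
from fractions import Fraction
from math import factorial

rows, seats = 60, 180

# 4 adjacent orderings per row, times 178! arrangements of everyone else.
ways_adjacent = 4 * rows * factorial(seats - 2)
total_ways = factorial(seats)
p = float(Fraction(ways_adjacent, total_ways))

# After cancelling 178!, this is just (4*60) / (180*179).
assert abs(p - (4 * rows) / (seats * (seats - 1))) < 1e-15
print(round(p, 8))  # 0.00744879
```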