For the Nth time

We are sometimes irritated when somebody does something stupid and keeps on doing it. That’s why we use the phrase “for the nth time”: you have already lost count of how many times it has happened. So where does this phrase come from? My guess is that it came from mathematics, in particular from the technique called mathematical induction.

There are some statements in mathematics which we believe to be true but are not really sure of. One class of these problems has to do with statements about sequences of numbers. For example, if somebody told you that the sum of the first n natural numbers is equal to \frac{n(n+1)}{2}, would you believe it? You can either derive this result or try it out for a few numbers to convince yourself that it is true. But even if a statement is true for the first few numbers, there is no guarantee that it will be true for all numbers. To convince ourselves, we use the technique of mathematical induction.
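
In code, that “try it out for a few numbers” check might look like the following sketch (my illustration, not from the original post), in the same spirit as the snippets in the later posts:

for(int n = 1; n <= 10; n++){
   int sum = 0;
   for(int i = 1; i <= n; i++)
      sum += i;                                // 1 + 2 + ... + n, term by term
   System.out.println(n + ": " + sum + " = " + n*(n+1)/2);
}

Every line prints a matching pair, but again, passing for the first ten numbers is no guarantee for all numbers.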

Mathematical induction is a two-step process. If you wish to prove that a statement f(n) is true for all positive integers n,

  1. Basis Step – First show that the statement is true for n = 1.
  2. Inductive Step – Assume that f(n) is true, and prove that f(n+1) is true.

When you have shown both steps to be true, you have shown that f(n) is true for all positive integers n.

An Example

Show that

\displaystyle  f(n) = \sum_{i=1}^{n} i = \frac{n(n+1)}{2}

If we substitute n = 1 into the above expression, we get f(1) = \frac{1(1+1)}{2} = \frac{2}{2} = 1

We have just shown that the basis step is true. Now let’s work on the inductive step. Assuming that f(n) is true, we now show that f(n+1) is also true. How do we do this? We can write f(n+1) as the sum of the first n terms plus n+1, that is,

\displaystyle f(n+1) = \sum_{i=1}^{n+1} i = \frac{n(n+1)}{2} + (n+1)

Then we simplify the right hand side:

\displaystyle = \frac{n(n+1) + 2n + 2}{2}
\displaystyle = \frac{n^2 + n + 2n + 2}{2}
\displaystyle = \frac{n^2 + 3n + 2}{2}

Notice that we can factor the quadratic expression of the numerator:

\displaystyle = \frac{(n+1)(n+2)}{2}
\displaystyle = \frac{(n+1)(n + 1 + 1)}{2}
\displaystyle = \frac{(n+1)\Bigg[(n+1) + 1\Bigg]}{2}

Now the last expression is precisely what you get when you substitute n+1 into f(n):

\displaystyle f(n) = \frac{n(n+1)}{2} \Rightarrow f(n+1) = \frac{(n+1) ( n+1 +1)}{2}

We have just shown, using the mathematical induction technique, that \sum_{i=1}^{n} i = \frac{n(n+1)}{2} is true for all positive integers. Using this technique, how would you show that the binary search algorithm has running time a_n = \lfloor \log_2 n \rfloor + 1?

Conquering Divide and Conquer

In a previous post, we derived an expression for the running time of the binary search using a recurrence relation. We also solved this recurrence relation using the brute force method. In this article, we are going to derive a general expression for the running time of divide and conquer algorithms.

A divide and conquer algorithm has a recurrence relation in the form

\displaystyle f(n) = af(n/b) + g(n)

where f(n) is an increasing function. The right-hand side says that the total number of executions for an input of size n is equal to a times the number of executions for a sub-problem of size n/b, plus some function of n. The recurrence relation of the binary search algorithm, which is one type of divide and conquer algorithm, is a_n = a_{\lfloor n/2 \rfloor} + 1. In this case, a = 1, b = 2, f(n/b) = a_{\lfloor n/2 \rfloor}, and g(n) = 1.
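
Before deriving the general solution, we can let the computer unwind such a recurrence directly. Here is a minimal sketch (my illustration, assuming g(n) = 1 and f(1) = 1, with n a power of b):

// evaluates f(n) = a*f(n/b) + 1 with f(1) = 1, assuming n is a power of b
static long f(long n, long a, long b){
   if(n == 1) return 1;                 // base case f(1) = 1
   return a * f(n / b, a, b) + 1;       // g(n) = 1 in this sketch
}

For example, f(1024, 1, 2) returns 11, which is exactly the binary search count \log_2 1024 + 1.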

As usual, we make things simpler by assuming that n = b^k, for some integer k. We can expand the recurrence relation above using the first few terms:

\displaystyle f(n/b) = af(n/b^2) + g(n/b)
\displaystyle f(n/b^2) = af(n/b^3) + g(n/b^2)
\displaystyle f(n/b^3) = af(n/b^4) + g(n/b^3)

Plugging these results back into the expression for f(n), we get

\displaystyle f(n) = a \big[ a f(n/b^2) + g(n/b) \big] + g(n)
\displaystyle = a^2 f(n/b^2) + ag(n/b) + g(n)

Substituting the expression for f(n/b^2) we get

\displaystyle f(n) = a^2 \big[ af(n/b^3) + g(n/b^2) \big] + ag(n/b) + g(n)
\displaystyle = a^3 f(n/b^3) + a^2g(n/b^2) + ag(n/b) + g(n)

And one more time for f(n/b^3),

\displaystyle f(n) = a^3 \big[  af(n/b^4) + g(n/b^3) \big] + a^2g(n/b^2) + ag(n/b) + g(n)
\displaystyle = a^4 f(n/b^4) + a^3g(n/b^3) +  a^2g(n/b^2) + ag(n/b) + g(n)

We are starting to see a pattern here! If we continue this k times, we should get

\displaystyle f(n) = a^k f(1) + \sum_{j=0}^{k-1} a^j g(n/b^j)

We get f(1) because n/b^k = 1 when n = b^k.

Let us simplify a bit and assume that g(n) = c, where c is a constant. In this case, the above expression will simplify to

\displaystyle f(n) = a^k f(1) + c \sum_{j=0}^{k-1}  a^j

Case when a = 1

When a = 1, the second term of the above expression is just c\sum_{j=0}^{k-1} 1 = ck. Therefore,

\displaystyle  f(n) = f(1) + ck

Since n = b^k, then by definition of logarithms, we have

\displaystyle f(n) = f(1) + c\cdot \log_b n

In the realistic case where n is not a power of b, we can find n between powers of b, that is,

\displaystyle b^k < n < b^{k+1}

for some k. Taking the logarithms of the above expression, we get

\displaystyle k < \log_b n < k + 1

Since f(n) is an increasing function, substituting b^{k+1} into f(n) = f(1) + ck, we get

f(n) < f(b^{k+1}) = f(1) + c(k+1) = \big[ f(1) + c \big] + ck \le \big[ f(1) + c \big] + c\cdot \log_b n

Therefore, when a = 1, the function f(n) = a^k f(1) + c \sum_{j=0}^{k-1}  a^j is O(\log n). (The base of the logarithm does not matter here, since logarithms to different bases differ only by a constant factor.)

Case when a > 1

Let’s digress for a while and review the definition of logarithms. Let a, b, and c be positive real numbers. If a = b^c, then the logarithm of a to the base b is

\displaystyle \log_b a = c

From this definition, we therefore have the identity a = b^{\log_b a}. We can further show that

a^{\log_b c} = c^{\log_b a}

To see this, let

\displaystyle y = a ^{\log_b c}

Taking the logarithms, we get

\displaystyle \log_a y = \log_b c
\displaystyle b^{\log_a y} = c

Raising both sides to the power \log_b a, we get

\displaystyle c^{\log_b a} = \big( b^{\log_a y}\big) ^{\log_b a} = \big( b^{\log_b a}\big) ^{\log_a y}
\displaystyle = a^{\log_a y} = y = a ^{\log_b c}

Therefore,

\displaystyle a^{\log_b c} = c^{\log_b a}
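
A quick numerical spot check of this identity (a sketch with arbitrary sample values):

double a = 5, b = 2, c = 9;
double lhs = Math.pow(a, Math.log(c) / Math.log(b));   // a^(log_b c), since log_b c = ln c / ln b
double rhs = Math.pow(c, Math.log(a) / Math.log(b));   // c^(log_b a)
System.out.println(lhs + " vs " + rhs);                // both print the same value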

Having taken care of that, let’s now return to the case where a > 1. Let’s write our expression for f(n) again:

\displaystyle f(n) = a^k f(1) + c \sum_{j=0}^{k-1}  a^j

For a > 1, the second term of the above expression contains a geometric progression whose sum is

\displaystyle \sum_{j=0}^{k-1}  a^j = \frac{a^k - 1}{a-1}

Plugging this into our expression for f(n), we get

\displaystyle f(n) = a^k f(1) + c \frac{a^k - 1}{a-1} = a^k f(1) + \frac{ca^k - c}{a-1}
\displaystyle = a^k\big[ f(1) + \frac{c}{a-1}\big] - \frac{c}{a-1}

If we assume n = b^k, we have \log_b n = k and a^k = a^{\log_b n} = n ^{\log_b a}. Therefore,

\displaystyle f(n) = C_1 n^{\log_b a} + C_2

where C_1 = f(1) + c/(a-1) and C_2 = - c/(a-1).

Now, what happens when n is not a power of b? As usual, we have b^k < n <b^{k+ 1} and

\displaystyle f(n) \le f(b^{k+1}) = a^{k+1}\big[ f(1) + \frac{c}{a-1}\big] - \frac{c}{a-1}
\displaystyle = C_1 a^{k+1} + C_2
\displaystyle = (a\cdot C_1) \cdot a^k + C_2
\displaystyle = (a\cdot C_1) \cdot n ^{\log_b a} + C_2

Therefore, for a > 1, f(n) is O(n^{\log_b a}).
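
To see the closed form in action, take the hypothetical instance a = 2, b = 2, c = 1, and f(1) = 1. Then C_1 = f(1) + c/(a-1) = 2 and C_2 = -1, so the formula predicts f(n) = 2n - 1. A small sketch to check this against the recurrence itself:

static long f(long n){                    // f(n) = 2 f(n/2) + 1, f(1) = 1
   return (n == 1) ? 1 : 2 * f(n / 2) + 1;
}

// compare the recurrence with the predicted closed form 2n - 1
for(long n = 1; n <= 1024; n *= 2)
   System.out.println(n + ": " + f(n) + " = " + (2*n - 1));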

The Big Oh!

We have demonstrated how basic counting is used to determine the running time of our algorithms. We have seen that the running time of traversing an array element by element is proportional to n, where n is the size of the array. We have seen that a code snippet that contains two nested loops is proportional to n^2. And we have seen that halving the input in every iteration is proportional to \log_2 n. Do we really need to know the exact number of comparisons in order to determine the running time of an algorithm? Or is an estimate sufficient? For a sufficiently large n, the answer is YES, an estimate is sufficient!

In one of the articles, we found an algorithm that has a running time of \frac{n(n+1)}{2}. If we take a look at the graph of \frac{n(n+1)}{2} versus n^2, we will see that n^2 is greater than \frac{n(n+1)}{2}.

This means that \frac{n(n+1)}{2} is bounded above by n^2. To generalize this concept, we define what is known as the “Big Oh” approximation. Given two functions f(x) and g(x), we say that f is of order g, written f(x) is O(g(x)), if and only if there exist a positive real number M and a real number x_0 such that

\displaystyle |f(x)| \le M\cdot |g(x)|, whenever x > x_0.

Using this definition, we can show that \frac{n(n+1)}{2} is O(n^2). Expanding

\displaystyle \frac{n(n+1)}{2} = \frac{n^2 + n }{2}
\displaystyle \frac{n^2 + n }{2}  \le \frac{n^2 + n^2}{2} = \frac{2n^2}{2} = n^2

By taking M = 1, we have shown that \frac{n(n+1)}{2} = O(n^2).
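
If you prefer to see the bound numerically, a short check (my sketch) confirms that M = 1 already works for the first few n:

for(int n = 1; n <= 10; n++){
   int f = n*(n+1)/2;
   System.out.println(n + ": " + f + " <= " + n*n + " is " + (f <= n*n));
}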

Another example would be computing the Big Oh of the binary search. We have seen in this post that the running time of the binary search is

\displaystyle a_n = \lfloor \log_2 n \rfloor + 1

Since \lfloor \log_2 n \rfloor \le \log_2 n and \log_2 n > 1 for n > 2, we have

\displaystyle a_n = \lfloor \log_2 n \rfloor + 1 \le \log_2 n + \log_2 n = 2\log_2 n. By taking M=2 and x_0 = 2, we have

\displaystyle |a_n| \le 2\cdot \log_2 n. Therefore, the binary search algorithm is O(\log_2 n).

A Recurring Theme

One of the goals of this series of articles is to give a flavor of how we programmers get a feel for the running time of our algorithms. In the last article, we showed how fast a binary search is compared to a linear search, using simple arguments that anyone can follow. This is routine in elementary algebra: we first try to solve a quadratic equation by factoring it, but for general quadratic equations we can always apply the quadratic formula. In this article, we are going to investigate solving for the running time of the binary search using a technique called recurrence relations.

We shall repeat below the listing of the binary search algorithm:

algorithm binarySearch( Array A, x ): 
1. get the middle element A[middle]
2. compare x with A[middle]; if x equals A[middle], return middle
3. if x is less than A[middle], return binarySearch(A[1:middle-1], x)
4. else return binarySearch(A[middle+1:n], x)

Floor and Ceiling Functions

In reality, we don’t always have arrays that come in lengths of powers of 2. So when we halve an array, it does not necessarily divide evenly. This means that we can get a number that is not an integer. To deal with situations like this, we define the floor and ceiling functions.

The floor function, when it operates on a real number x, returns the largest integer that is less than or equal to x. On the other hand, the ceiling function, when it operates on a real number x, returns the smallest integer that is greater than or equal to x. We symbolize the floor function acting on a real number x as \lfloor x \rfloor, and the ceiling function is denoted by \lceil x\rceil. For example, if x = 3.14, then \lfloor x \rfloor = 3 and \lceil x \rceil = 4. Now suppose we have an array of length 10. If we halve this, we have 5. Halving again, we get 2.5, which is not an integer. So we either take the floor to get 2 or the ceiling to get 3.

Continuing with the Analysis

In the binary search algorithm defined above, we did not specify how to compute “middle”. For the sake of our discussion, let us take middle = \lfloor x/2 \rfloor, where x is the length of the input array of the algorithm.

We observe from the binary search algorithm above the following:

– If the length x is odd, the array (excluding the middle element) splits into two halves of equal length \lfloor x/2 \rfloor. If the length is even, the left subarray has length \lfloor x/2 \rfloor - 1 and the right subarray has length \lfloor x/2 \rfloor. Since we are interested in the worst-case complexity of this algorithm, we take \lfloor x/2 \rfloor as the input size for the next iteration.
– Every iteration makes exactly one comparison: the search number against the middle element.
– The total number of comparisons is therefore equal to 1 plus the number of comparisons for the subarray of size \lfloor x/2 \rfloor.

Mathematically, this means that if n is the size of the array and a_n is the number of comparisons executed by the binary search for an array of size n, then

\displaystyle a_n = a_{\lfloor n/2 \rfloor} + 1

where a_{\lfloor n/2 \rfloor} is the number of comparisons needed to run the binary search for an array of \lfloor n/2 \rfloor elements. The equation above is what is called a recurrence relation for the algorithm. In this case, it is the recurrence relation for the binary search algorithm. In the special case that the array has only one element, we impose the condition that

\displaystyle a_1 = 1

because there is only one comparison involved.
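
Before solving this on paper, we can also let the computer unwind the recurrence. The sketch below (mine; in particular, the use of Integer.numberOfLeadingZeros to compute \lfloor \log_2 n \rfloor is my addition, not from the original post) evaluates a_n directly and compares it with the closed form we are about to derive:

static int a(int n){
   return (n == 1) ? 1 : a(n / 2) + 1;    // integer division computes the floor
}

// compare a_n against floor(log2 n) + 1 for the first few n
for(int n = 1; n <= 16; n++){
   int floorLog2 = 31 - Integer.numberOfLeadingZeros(n);
   System.out.println(n + ": a_n = " + a(n) + ", floor(log2 n) + 1 = " + (floorLog2 + 1));
}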

A Brute Force Solution

One way to solve this recurrence relation is by listing its first few terms.
\displaystyle a_1 =  1*
\displaystyle a_2 = a_{\lfloor 2/2 \rfloor} + 1 = a_{1} + 1 = 1 + 1 = 2*
\displaystyle a_3 = a_{\lfloor 3/2 \rfloor} + 1 = a_{1} + 1 = 1 + 1 = 2
\displaystyle a_4 = a_{\lfloor 4/2 \rfloor} + 1 = a_{2} + 1 = 2 + 1 = 3*
\displaystyle a_5 = a_{\lfloor 5/2 \rfloor} + 1 = a_{2} + 1 = 2 + 1 = 3
\displaystyle a_6 = a_{\lfloor 6/2 \rfloor} + 1 = a_{3} + 1 = 2 + 1 = 3
\displaystyle a_7 = a_{\lfloor 7/2 \rfloor} + 1 = a_{3} + 1 = 2 + 1 = 3
\displaystyle a_8 = a_{\lfloor 8/2 \rfloor} + 1 = a_{4} + 1 = 3 + 1 = 4*
\displaystyle a_9 = a_{\lfloor 9/2 \rfloor} + 1 = a_{4} + 1 = 3 + 1 = 4
\displaystyle a_{10} = a_{\lfloor 10/2 \rfloor} + 1 = a_{5} + 1 = 3 + 1 = 4
\displaystyle a_{11} = a_{\lfloor 11/2 \rfloor} + 1 = a_{5} + 1 = 3 + 1 = 4
\displaystyle a_{12} = a_{\lfloor 12/2 \rfloor} + 1 = a_{6} + 1 = 3 + 1 = 4
\displaystyle a_{13} = a_{\lfloor 13/2 \rfloor} + 1 = a_{6} + 1 = 3 + 1 = 4
\displaystyle a_{14} = a_{\lfloor 14/2 \rfloor} + 1 = a_{7} + 1 = 3 + 1 = 4
\displaystyle a_{15} = a_{\lfloor 15/2 \rfloor} + 1 = a_{7} + 1 = 3 + 1 = 4
\displaystyle a_{16} = a_{\lfloor 16/2 \rfloor} + 1 = a_{8} + 1 = 4 + 1 = 5*

and so on…

We now start to see a pattern! Look at the lines with * in them. We have a * when

\displaystyle n = 1 = 2^0, \text{ and } a_{1} = 1
\displaystyle n = 2 = 2^1, \text{ and } a_{2} = 2 = 1 + 1
\displaystyle n = 4 = 2^2, \text{ and } a_{4} = 3 = 2 + 1
\displaystyle n = 8 = 2^3, \text{ and } a_{8} = 4 = 3 + 1
\displaystyle n = 16 = 2^4, \text{ and } a_{16} = 5 = 4 + 1

So for powers of 2, a_n = \log_2 n + 1. Now what happens between powers of 2?

\displaystyle n = 3, a_{3} = 2 = 1 + 1
\displaystyle n = 5, a_{5} = 3 = 2 + 1
\displaystyle n = 6, a_{6} = 3 = 2 + 1
\displaystyle n = 7, a_{7} = 3 = 2 + 1
\displaystyle n = 9, a_{9} = 4 = 3 + 1
\displaystyle n = 10, a_{10} = 4 = 3 + 1
\displaystyle n = 11, a_{11} = 4 = 3 + 1
\displaystyle n = 12, a_{12} = 4 = 3 + 1
\displaystyle n = 13, a_{13} = 4 = 3 + 1
\displaystyle n = 14, a_{14} = 4 = 3 + 1
\displaystyle n = 15, a_{15} = 4 = 3 + 1

So for n between powers of 2, that is, 2^k \le n < 2^{k+1}, we see that a_{n} = k + 1 = \lfloor \log_2 n \rfloor + 1. Since this expression is also satisfied when n is a power of 2, the solution of the recurrence relation is therefore

\displaystyle a_n = \lfloor \log_2 n \rfloor + 1

Now, are we sure that this is true for all n? That's another topic we will discuss in a future post.

Divide et Impera

What’s the difference between managing 5 people and managing 25 people? None (surprise!). Group the 25 people into 5 groups of 5 and pick a leader in each group. Each leader manages his/her group, and you manage the 5 leaders.

This strategy is known as divide and conquer and has been used by many successful military leaders like Julius Caesar, to whom the phrase “divide et impera”, or divide and conquer, is attributed. Not surprisingly, it is also used in programming.

Imagine you are asked to search a number in a sorted array of 1 million numbers. How will you search for it? A novice programmer will probably search for it one element at a time. The code would end up like this:


public int findNumber(int numToFind, int[] array){

  for( int i = 0; i < array.length; i++ ){  // scan one element at a time
    if( array[i] == numToFind )
       return i;                            // found it at index i
  }

  return -1;                                // not in the array
}

What’s the worst case that can happen when searching for a number in a sorted array? When the number is the last number in the array or is not even in the array! When that happens, the number of times you traverse the “for loop” is 1 million! You can just imagine how long it takes for this algorithm to execute if you are now searching on an array of 1 trillion numbers.

However, if you make use of the fact that the array is sorted, something beautiful happens. You can search the array of 1 million numbers in less than 20 comparisons! The way to do it is to start searching at the middle. If x is the number you are searching for, compare x with the middle element. If x is equal to it, then you’ve found your number. If not, then either x is greater than the middle element or less than the middle element. If you find that x is less than the middle element, then obviously x is not in the right side of the array, but it may be in the left side. I say “may be” because x might not even be in the array. We can write this in a more structured manner:

algorithm binarySearch( Array A, x ): 
1. get the middle element A[middle]
2. compare x with A[middle]; if x equals A[middle], return middle
3. if x is less than A[middle], return binarySearch(A[1:middle-1], x)
4. else return binarySearch(A[middle+1:n], x)
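
Here is a runnable Java version of this pseudocode, in the style of findNumber above (a sketch; the 1-based indexing of the pseudocode becomes 0-based lo and hi bounds):

public int binarySearch(int[] a, int x, int lo, int hi){
   if(lo > hi)
      return -1;                                     // x is not in the array
   int middle = lo + (hi - lo) / 2;                  // midpoint of the current range
   if(a[middle] == x)
      return middle;
   if(x < a[middle])
      return binarySearch(a, x, lo, middle - 1);     // search the left half
   return binarySearch(a, x, middle + 1, hi);        // search the right half
}

The initial call is binarySearch(a, x, 0, a.length - 1).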

So how many times do we compare x to the middle element? How many comparisons are there in each iteration? Only one. Now it only remains to count the number of iterations for any sorted integer array of size n. To make things simpler, let us assume that n = 2^m for some m (that is, n is a power of 2). At every iteration, we split the array into two equal parts. The final iteration occurs when you have sufficiently divided the array so that only one element remains. So when does this happen? It happens when you have halved the array m times, or mathematically:

\displaystyle m = \log_2 n

Therefore, if you are given a sorted array of 1 million integers, the number of comparisons you need is only

\displaystyle \log_2 1000000 \approx 19.93

or about 20 comparisons. That’s amazingly fast!

A Combinatorial Solution

In the last post, we computed the number of times the “Hello World” was printed when executing the code snippet below:


for(int i=1;i<=10;i++){
   for(int j=1;j<=i;j++){
     for(int k=1;k<=j;k++){
         System.out.println("Hello World");
     }
   }
}

The computation that we did last time was too cumbersome. It turns out that there is a much better way to do it.

Imagine dividing a sheet of paper into ten regions using matchsticks, as shown below.


How many matchsticks do you need to make ten regions? Nine. Now suppose you have 3 blue chips. Distribute these chips among the regions any way you like. They can be lumped together in one region, as shown in the figure above, or you can put one chip in region 1 and two chips in region 2.


Or you can put a chip in each region.

Now, how does this relate to our original problem? Interpret the regions as corresponding to the values 1 to 10; whenever a chip is in a region, it assumes that region’s value. So, for example, in the first figure all the chips are in region 1, so the variables i,j,k all have values equal to 1. The second figure is trickier: we have one chip in the first region and 2 chips in the second region. So what are the values of i,j,k? We get a hint from the code snippet itself. Since k \le j \le i, we interpret the leftmost chip as the k variable, the middle chip as the j variable, and the rightmost chip as the i variable. So, in the second figure, the values of i,j,k are i=2, j=2, and k=1. In the third figure, the values of i,j,k are i=3, j=2, k=1.

The problem is now reduced to counting the number of ways of choosing the positions of the 3 chips among 12 positions (9 matchsticks and 3 chips):

\displaystyle {{9+3}\choose{3}}

which is equal to 220, the number of times the “Hello World” is printed as we have seen in the previous post.

In general, if N is the range of values of i,j,k and r is the number of loop variables (here r = 3),

\displaystyle C = {{N+r -1}\choose{r}}

is the number of times the “Hello World” is printed.
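
A quick way to convince yourself of this formula is to let the loops do the counting and compare the tally with the binomial coefficient (a sketch):

int N = 10, count = 0;
for(int i = 1; i <= N; i++)
   for(int j = 1; j <= i; j++)
      for(int k = 1; k <= j; k++)
         count++;                                  // one "Hello World" per pass

long binom = 12L * 11 * 10 / (3 * 2 * 1);          // C(N + 3 - 1, 3) = C(12, 3)
System.out.println(count + " and " + binom);       // both are 220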

From the previous post, we know that

\displaystyle C = \sum_{i=1}^N\sum_{j=1}^i j

Therefore,

\displaystyle C = \sum_{i=1}^N\sum_{j=1}^i j = \frac{1}{2}\Big(\sum_{i=1}^N i^2 + \sum_{i=1}^N i \Big)= {{N+r -1}\choose{r}}

As an aside, we can compute for the sum of squares of the first n numbers from the expression above:

\displaystyle \frac{1}{2}\Big(\sum_{i=1}^N i^2 + \sum_{i=1}^N i \Big)= {{N+r -1}\choose{r}}
\displaystyle \frac{1}{2}\Big(\sum_{i=1}^N i^2 + \sum_{i=1}^N i \Big)= \frac{(N+2)(N+1)(N)(N-1)!}{3! (N-1)!}
\displaystyle \sum_{i=1}^N i^2 = \frac{(N+2)(N+1)(N)}{3} - \frac{N(N+1)}{2}
\displaystyle = \frac{N(2N+1)(N+1)}{6}

In summary, the combinatorial solution given above is much more elegant as it gives us the answer without too much computation.

Some More Counting Techniques

Some people were not amused by the title of the previous post. It was in a way misleading. So I’m now going to give an appropriate title to this post, albeit a rather boring title.

In this article, let’s try to count the number of times the “Hello World” is printed by the code snippet below:


for(int i=1;i<=10;i++){
   for(int j=1;j<=i;j++){
     for(int k=1;k<=j;k++){
         System.out.println("Hello World");
     }
   }
}

Let’s try to count for the first few values of i,j and k.

1,1,1 ----> 1

2,1,1 ----> 1 + 2
2,2,1
2,2,2

3,1,1 ----> 1 + 2 + 3
3,2,1
3,2,2
3,3,1
3,3,2
3,3,3

4,1,1 ----> 1 + 2 + 3 + 4
4,2,1
4,2,2
4,3,1
4,3,2
4,3,3
4,4,1
4,4,2
4,4,3
4,4,4

Looking at the pattern above, you can see that when i=1, the “Hello World” is printed only once. When i=2, it is executed 1 + 2 = 3 times. When i=3, it is executed 1 + 2 + 3 = 6 times, and when i=4, it is executed 1 + 2 + 3 + 4 = 10 times. So how many times is the statement executed when i=5? Based on the pattern, we can see that it will be executed 1 + 2 + 3 + 4 + 5 = 15 times.
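
You can confirm this pattern by tallying the inner executions separately for each i (a quick sketch):

for(int i = 1; i <= 10; i++){
   int count = 0;
   for(int j = 1; j <= i; j++)
      for(int k = 1; k <= j; k++)
         count++;
   System.out.println("i = " + i + ": " + count);  // prints 1, 3, 6, 10, 15, ...
}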

However, what we are after is the sum of the number of executions from i=1 to i=10. If we denote by C the total number of executions, we can write this mathematically as:

\displaystyle C = \sum_{i=1}^{10} \sum_{j=1}^i j

From the previous post we know that

\displaystyle \sum_{j=1}^i j = \frac{i(i+1)}{2}

Hence, we have

\displaystyle C = \sum_{i=1}^{10} \frac{i(i+1)}{2} = \frac{1}{2}\sum_{i=1}^{10} i(i+1)

Multiplying out the summand we have

\displaystyle C = \frac{1}{2}\sum_{i=1}^{10} ( i^2 + i )

Distributing the summation:

\displaystyle C = \frac{1}{2}\Bigg(\sum_{i=1}^{10} i^2 + \sum_{i=1}^{10} i\Bigg)

Let’s simplify this expression for a general n

\displaystyle C = \frac{1}{2}\Bigg( \sum_{i=1}^{n} i^2 + \sum_{i=1}^{n} i \Bigg)

We know that the second summation is equal to

\displaystyle \sum_{i=1}^{n} i = \frac{n(n+1)}{2}

The first summation is the sum of squares of the first n numbers. To evaluate this, let’s tackle the sum of cubes (*):

\displaystyle \sum_{i=0}^{n} i^3  = \sum_{i=0}^{n} (i+1)^3  - (n + 1)^3

You might be wondering how we arrived at the right-hand side. If you write the summation explicitly, say for n=5, the right-hand side looks like:

\displaystyle \sum_{i=0}^{5} (i+1)^3  - (5 + 1)^3 = (0+1)^3 + (1+1)^3 + (2+1)^3 + (3+1)^3+ (4+1)^3 + (5+1)^3 - (5 +1)^3
\displaystyle = 1^3 + 2^3 + 3^3 + 4^3 +5^3 = \sum_{i=0}^{5} i^3

Going back to the right-hand side, let’s expand it to get:

\displaystyle \sum_{i=0}^{n} i^3  = \sum_{i=0}^{n} (i^3 + 3i^2 + 3i +1) - (n+1)^3
\displaystyle = \sum_{i=0}^{n} i^3 + 3 \sum_{i=0}^{n} i^2 + 3 \sum_{i=0}^{n} i + \sum_{i=0}^{n} 1 - (n+1)^3

Canceling the term containing the i^3 and solving for the \sum_{i=0}^{n} i^2, we get:

\displaystyle 3\sum_{i=0}^{n} i^2 = (n+1)^3 - 3\sum_{i=0}^{n} i - \sum_{i=0}^{n} 1

The last term is just equal to n + 1. Plugging in this value and simplifying, we get:

\displaystyle 3\sum_{i=0}^{n} i^2 = n^3 + 3n^2 +3n + 1 - 3\frac{n(n+1)}{2} - n - 1
\displaystyle = n^3 + 3n^2 + 2n - \frac{3n^2 + 3n}{2}
\displaystyle = \frac{2n^3 + 6n^2 + 4n - 3n^2 - 3n}{2}
\displaystyle = \frac{2n^3 + 3n^2 + n}{2}
\displaystyle = \frac{n(2n^2 + 3n + 1)}{2} = \frac{n(n+1)(2n+1)}{2}

This finally gives us

\displaystyle \sum_{i=0}^{n} i^2  = \frac{n(n+1)(2n+1)}{6}
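
As a sanity check, this formula can be compared against a direct summation for the first few n (my sketch):

for(int n = 1; n <= 10; n++){
   int direct = 0;
   for(int i = 1; i <= n; i++)
      direct += i * i;                             // 1^2 + 2^2 + ... + n^2
   System.out.println(n + ": " + direct + " = " + n*(n+1)*(2*n+1)/6);
}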

Using this expression in the value of C gives us:

\displaystyle C = \frac{1}{2} \Bigg( \sum_{i=1}^{n} i^2 + \sum_{i=1}^{n} i\Bigg)
\displaystyle C = \frac{1}{2} \Bigg(  \frac{n(n+1)(2n+1)}{6} + \frac{n(n+1)}{2}\Bigg)
\displaystyle = \frac{1}{6} n(n+1)(n+2)

Substituting n=10, we get the total number of executions for the “Hello World”, which is

\displaystyle C = \frac{1}{6} 10 \times 11 \times 12 = 220.

There is a much easier way of solving this which we shall cover in the next post.

* See this page for more information on the sum of squares of the first n numbers.