Tuesday, October 23, 2012

How a CFO Can Make a Trip to the Airport Productive

Over this past week I had to make two separate trips: one to deliver a presentation at the Association for Financial Professionals' Annual Conference and one to participate in an Advisory Board panel for a financial institution.
Traveling through airports can be quite an ordeal, but much of the experience can be analyzed using “queuing models”, a concept we briefly discussed in our last Treasury Café post, “How Cloud Computing is an Example of Finance Principles”. Since air travel is at times a necessary evil, we might as well make the most of the journey!
Queuing Models

Figure A
At a minimum, building a queuing model requires the following inputs:
·         Number of queues in the system
·         Number of servers in the system
·         Arrival rate of “customers” into the queue per time period
·         Service rate of the servers per time period
The number of queues and servers may vary. At the airport ticket counter, there is usually a single line that is formed, and once we make it through the line we go to the next ticket agent that is open (single queue, multiple servers). Conversely, at many fast food restaurants there are multiple lines formed, one for each cash register that is open (multiple queues, multiple servers). Depending on the situation, there can be many possible combinations of these two factors. Figure A visualizes the single and multiple queue types.

Figure B
The arrival rate of customers per time period is often modeled using a Poisson distribution. This distribution is closely related to the binomial distribution we looked at in “How Cloud Computing is an Example of Finance Principles” (the Poisson arises as a limiting case of the binomial). Figure B shows the probability function of the Poisson process, where the Greek letter λ (lambda) denotes the average number of arrivals per unit of time.

Figure C
Figure C shows the results of the Poisson equation assuming people arrive at an airline ticket counter line at the rate of 2 per minute (λ = 2). Reading the first line in the results, the probability that no people show up in the next minute is about 13.5%. The probabilities that 1 or 2 people show up in the next minute are both about 27%, and so on.
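For readers who would like to reproduce Figure C themselves, here is a minimal sketch in R (variable names are illustrative; the post's own figures were produced separately):

# Poisson probabilities for an average of 2 arrivals per minute
# P(X = x) = lambda^x * exp(-lambda) / x!
lambda <- 2
x <- 0:6
probs <- dpois(x, lambda)
round(data.frame(arrivals = x, probability = probs), 4)
# arrivals = 0 gives ~0.1353 (13.5%); arrivals = 1 and 2 both give ~0.2707 (27%)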
Service rates can be modeled in different ways. They may be static with no variance at all, they might range equally between a minimum and maximum value, or they can take the form of any number of statistical distributions. A distribution that appears often in queuing model literature is the exponential distribution (the probability function for which is shown in Figure D).
In order to simulate an exponential distribution, we can use two methods:
Figure D
·        Using a uniform distribution, we can generate a random number (r) between the values of 0 and 1, and then substitute that value into the equation in Figure E. In Excel you can generate r using the RAND() function (RANDBETWEEN() returns whole numbers, so it is not suited to this task),
·        Using a statistical program that has the capability of generating exponentially distributed variables once the inputs are supplied. For example, in R this is accomplished by the rexp() function. A brief sketch of both methods follows Figure E below.
Figure E
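As a rough sketch of both methods in R (assuming the Figure E equation is the usual inverse-transform formula, x = -ln(r)/λ; the variable names here are illustrative only):

lambda <- 2                      # rate parameter, e.g. 2 arrivals per minute

# Method 1: inverse transform of a uniform random number between 0 and 1
r <- runif(1)                    # R's equivalent of a uniform draw (Excel: RAND())
x_inverse <- -log(r) / lambda    # exponentially distributed value

# Method 2: R's built-in exponential generator
x_builtin <- rexp(1, rate = lambda)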
The Simulated Airport Ordeal
Let’s say that customers arrive at the ticket line at the rate of 2 per minute (λ = 2). There are 4 ticket agents, and it takes each of them an average of 2 minutes to check a passenger in, print their boarding pass, check their luggage, etc. (so the service rate per minute is µ = 0.5). We will further assume that the service times follow an exponential distribution.
The number of passenger arrivals per time period follows a Poisson distribution. One of the characteristics of the Poisson distribution is that the times between arrivals (called the inter-arrival times) follow an exponential distribution. Because of this, we can simulate the passenger arrival process using the exponential distribution discussed above, just as we do for the service times.
This example is set up to maximize the ticket agents’ utilization. Using the utilization equation from our last post (λ/sµ), we can calculate utilization to be 100% (2/(4*.5)). Intuitively, if 2 passengers arrive every minute, and it takes each agent 2 minutes to process a passenger, then in 2 minutes’ time 4 passengers will have arrived on average, and thus all 4 agents will be busy processing passenger requests.
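In R the utilization check is a one-liner (names are illustrative):

lambda <- 2; servers <- 4; mu <- 0.5
lambda / (servers * mu)          # 2 / (4 * 0.5) = 1, i.e. 100% utilization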
Figure F shows two rows of R output from this simulation process for the first and second customers arriving into the system. Tying this calculation back to our discussion above, the simulated values (a_rand and s_rand) are between 0 and 1, as specified in our uniform distribution. The next two columns, s_ia and s_st, are the values created using the equation in Figure E. Figure G shows the specific calculation for s_ia in the 2nd row of data.
Figure F
Recalling that our arrival and service rates are expressed per minute, we multiply this result by 60 to convert it to seconds, giving approximately 41 seconds (0.691387 * 60) for the 2nd customer. Checking our simulation results against the arrival time column in Figure F (the column labeled at), we see that the first passenger arrived at 00:21 and the second passenger arrived at 01:02, or 41 seconds later.
Figure G
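The unit conversion itself is simple arithmetic; using the s_ia value reported in Figure F:

s_ia_minutes <- 0.691387          # simulated inter-arrival time for the 2nd customer, in minutes
s_ia_minutes * 60                 # roughly 41 seconds between the 1st and 2nd arrivals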
Now that we have established the parameters of the model, we can simulate what occurs over time as passengers arrive into the system. The simulation will produce many different pieces of information which might prove of interest, such as:
·         Waiting time – average, minimum, maximum
·         Service time – average, minimum, maximum
·         Line – average, minimum and maximum number of customers in line
Figure H
To illustrate, Figure H shows three sample paths of waiting times generated by the simulation for the next 1,000 passengers entering the system. The orange line is the average of 100 simulations for each of the 1,000 passengers, and the brown line is the average waiting time across all passengers, which is around 20 minutes.
While the average waiting time is 20 minutes across all passengers, the individual simulation paths illustrate the variability that occurs. One of the black lines touches 0 on several occasions, indicating that it is within the realm of possibility that no line is encountered upon arrival at the ticketing booth. However, one of the other black lines indicates that hour-long waits are also within the realm of possibility.
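The figures in this post were produced with the author’s own R code, which is not reproduced here; a minimal sketch of a comparable single-queue, multi-server simulation (simulate_mmc is a hypothetical helper name) might look like this:

# Minimal single-queue, multi-server simulation with FIFO discipline:
# Poisson arrivals (exponential inter-arrival times) and exponential service times
simulate_mmc <- function(n, lambda, mu, servers) {
  arrivals <- cumsum(rexp(n, rate = lambda))    # arrival time of each customer
  service  <- rexp(n, rate = mu)                # service duration of each customer
  free_at  <- rep(0, servers)                   # time at which each server next becomes free
  wait     <- numeric(n)
  for (i in seq_len(n)) {
    k          <- which.min(free_at)            # first server to become available
    start      <- max(arrivals[i], free_at[k])  # service starts when customer and server are both ready
    wait[i]    <- start - arrivals[i]           # time spent waiting in line
    free_at[k] <- start + service[i]            # server stays busy until service ends
  }
  list(wait = wait, makespan = max(free_at))    # individual waits plus time the last customer finishes
}

# Our airline: lambda = 2 arrivals per minute, 4 agents, mu = 0.5 per minute
set.seed(1)
sim <- simulate_mmc(1000, lambda = 2, mu = 0.5, servers = 4)
mean(sim$wait)                                  # average wait for one sample path, in minutes

Because utilization is exactly 100%, individual sample paths drift widely around the long-run average, which is the behavior Figure H illustrates.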
Figure I
Finally, as a check on our work: we mentioned earlier that using the exponential distribution to simulate inter-arrival times should result in the number of arrivals per minute following a Poisson distribution. Figure I compares our simulation results to a Poisson distribution with λ = 2, and indicates that our simulation achieves a pretty good fit with the theoretical result.
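A rough version of the same check in R is to bin simulated exponential inter-arrival times into whole minutes and compare the arrival counts against the theoretical Poisson probabilities:

set.seed(2)
arrival_times <- cumsum(rexp(20000, rate = 2))         # simulated arrival times, in minutes
per_minute    <- tabulate(floor(arrival_times) + 1)    # number of arrivals observed in each minute
observed      <- table(factor(per_minute, levels = 0:7)) / length(per_minute)
round(cbind(observed = as.numeric(observed), poisson = dpois(0:7, lambda = 2)), 3)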
Fulfilling the Partnering Role
We have covered the Treasury Vision in a number of posts (see the “Labels” sidebar on this post), most recently “Triangulating Vision and Mission Using Mind Maps”, where we identified that one of the main components of our activity is fulfilling “with people” functions.
Since the Queuing Model is a robust means of analyzing processes, especially production and customer-facing ones, adding this model to our “Analytics Toolkit” provides Finance and Treasury with another potential means of working with our operating units in a meaningful and value-added way.
Furthermore, by leveraging the analytic capabilities of the finance function in the creation of models such as this, finance is placed in a role where we can assist our business unit partners with decisions in the operations, marketing, and service areas, such as:
·        What staffing levels need to be maintained to achieve an average waiting time of x?
·        What is the cost of our current operations in terms of future lost customer revenue? How does this compare to our service costs?
·        How does our customer experience compare to our competitors?
·        What is the best route to improve performance – reducing service time or adding agents?
Figure J
As an example, let’s say that Airline B staffs its ticket counter with 5 agents vs. our 4 (from the previous simulation), with the same arrival and service rates as our airline. How does their customer waiting time performance compare to ours? Figure J compares the customer waiting times for our airline and Airline B.
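Using the hypothetical simulate_mmc() sketch from earlier, the comparison might be run along these lines:

set.seed(3)
ours      <- replicate(100, mean(simulate_mmc(1000, lambda = 2, mu = 0.5, servers = 4)$wait))
airline_b <- replicate(100, mean(simulate_mmc(1000, lambda = 2, mu = 0.5, servers = 5)$wait))
c(our_avg_wait = mean(ours), airline_b_avg_wait = mean(airline_b))   # average waits in minutes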
This information can then be used by the business unit’s management to determine whether this represents a feasible area of improvement, how large the improvement might be, and how much it might cost.
Closer to Home
We can use these models to assess parts of the CFO organization’s processes as well.
Suppose we have 50 subsidiaries for which financial statements must be developed. This information then needs to be consolidated in order to produce our firm’s overall financial statements.
Figure K
Currently, there are 5 people who process the consolidated financial close information. Each of the 5 people is responsible for the same 10 subsidiaries every month, on the assumption that specialization improves the time to completion.
Completed financial information arrives into this area at the rate of 1 every two hours after month’s end, and the arrivals follow a Poisson distribution (λ = 0.5). Let’s assume that each person can process a subsidiary’s financial information in 4 hours on average, and that this processing time follows an exponential distribution (µ = 0.25).
By changing the structure of the process, can we improve the “time to close”?
If we switch to a single-queue, multiple-server process, this might reduce our time to close through the “pooling principle” we discussed in the last post.
However, since each staff member processes the same subsidiaries month after month, there might be a “learning curve” benefit to their work.
Figure K shows the R output for three separate simulation runs. The first is the current state: the multi-queue, multi-server model. The simple switch to a single queue with multiple servers is shown next. As predicted, the “pooling principle” provides significant improvement: assuming 8-hour workdays, the books can be closed about 2 days sooner! However, if we adjust the service rate from µ = 0.25 to µ = 0.2 (i.e. 5 hours to process instead of 4) due to the loss of “learning curve” benefits, the results still show improvement, though not as much.
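A rough way to reproduce this comparison with the hypothetical simulate_mmc() sketch from above (one sample path each; the makespan element stands in for the time to close, and the actual Figure K output came from the author’s own code):

set.seed(4)
# Current state: 5 dedicated queues of 10 subsidiaries, each fed by 1/5 of the arrivals
close_multi  <- max(replicate(5, simulate_mmc(10, lambda = 0.5 / 5, mu = 0.25, servers = 1)$makespan))
# Proposed state: one shared queue feeding all 5 staff
close_single <- simulate_mmc(50, lambda = 0.5, mu = 0.25, servers = 5)$makespan
# Shared queue with slower service (5 hours instead of 4) if specialization benefits are lost
close_slower <- simulate_mmc(50, lambda = 0.5, mu = 0.20, servers = 5)$makespan
c(multi_queue = close_multi, single_queue = close_single, single_queue_slower = close_slower)   # hours after month end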
 
What You Can Do
The queuing model is an analytical tool that can be productively applied within your organization. In order to do so, consider the following:
·        Review processes within your particular function. Where do queues form? Are there other departments or units requiring information or process activities from your group (i.e. internal customers)? Do any of your process inputs begin with queues or batches that arrive from other areas (either internal or external)? Are there queues or batches that form as part of your process activities? Can any of these be made more efficient by changing the configuration from multiple queue to single queue, taking advantage of the pooling principle? Can learning curve or specialization benefits be realized by changing a single queue to a multiple queue setup?
·        Look at other units or functions within your company where queues are a fact of life. Approach those business areas or functions and learn how they are configured to process these queues, and what estimation methods they use to make staffing decisions, predict workflow, and optimize service utilization and efficiency. If they are not already doing so, inquire as to their interest in using a queuing model to provide another perspective on their operations. If they are doing so, learn how they are currently employing this model and whether they are willing to allow you to participate in the modeling activity going forward.
·        Develop your queue modeling “chops”. Create some “ready to go” queuing models in Excel, R, or another statistical program in use by your firm. Simply “googling” the term “Queue Model Examples” or something similar will pull up a number of articles that you can use to calibrate the information and solutions generated by the models you develop (see this one as an example). This practice prepares you to be in a “ready-state” when an opportunity arises to deploy a queuing model analysis.
Key Takeaways
Queuing Models are a useful tool that can be applied by Finance and Treasury for use internally and as a method of providing decision support functionality to its business unit partners.
Questions
·         Where is a Queuing Model applicable in your organization?
Add to the discussion with your thoughts, comments, questions and feedback! Please share Treasury Café with others. Thank you!

Sunday, October 7, 2012

How Cloud Computing is an Example of Finance Principles

There are certain principles that we see appearing over and over again when performing finance work. Since they are common to many situations, we can view them as “building blocks” to a complete finance and treasury practice.
One of these is the “Pooling Principle”. Let’s look into this a little more closely.
What is the Pooling Principle?
The key element involved in the Pooling Principle is variability. Without variability, pooling does not much matter. However, under certain conditions, combining separate exposures or activities into a pool reduces the overall variability we have to plan for.
Simply speaking, if we add 2 + 2, and we get 4, then the Pooling Principle has not manifested itself. On the other hand, if we can add 2 + 2 and get 3, then we have witnessed the Pooling Principle in action.
Following are some illustrative examples.
Figure A
Example 1
Let’s say that we own a house worth ²100,000 (for new readers, the symbol ² stands for Treasury Café Monetary Units, or TCMU’s, freely convertible into any currency at any exchange rate you choose). In addition, assume we know that the chance of a house in our town having a fire is 1% annually. If we want to make sure we can cover 100% of the cost of the house for the maximum number of fires that might plausibly occur over the next 20 years, how much cash do we need to reserve?
The statistical way to answer this question is to use the binomial probability distribution. Figure A shows the equation we use to calculate the probability that a certain number of events (x) occur in a given number of “trials” (n), each with a certain probability (p). In our case the number of fires over this time period is x, 20 years is n, and 1% is p.
Figure B
Figure B shows the results of the Figure A equation computed in Excel (there is also an Excel function, BINOMDIST(), which can be used as an alternative to entering the equation manually).
These results show that 3 fires during the 20-year span is within the realm of possibility (we get the value of 3 because this is the first row at which the cumulative column rounds to 100%), so the amount of funds we need to set aside is ²300,000 (²100,000 value x 3 fires).
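For readers who prefer R to Excel, a quick way to reproduce the cumulative column (rounding to three decimals, which appears to be the convention in the figure):

x <- 0:5
cumulative <- cumsum(dbinom(x, size = 20, prob = 0.01))   # P(X <= x) fires over 20 years at 1% per year
round(data.frame(fires = x, cumulative = cumulative), 3)
# the cumulative column first shows 1.000 at x = 3 fires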
Let’s finally suppose that there are 100 houses in our town, each with the same value, and that everyone targets the same level of reserves for their house. Collectively, the citizens of our town will have to set aside ²30,000,000.
Enter Pooling
Now let’s apply the pooling principle. If all 100 people participate in a “pool”, where each individual’s funds goes into the pool and funds are paid out to those who have a fire in the next 20 years, what is the amount of funds each person must contribute?
Figure C
In this case, the number of homes is 100 (n), the probability of a fire is 1% (p), and we use the binomial distribution to establish how many fires occur each year (x). Figure C shows the results.
The point at which the cumulative probabilities first round to 100% is 6 homes, so the pool must have the capacity to cover 6 fires every year, or ²600,000. The period under consideration is 20 years, so the total reserve required is ²12,000,000.
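The same check in R for the pooled case:

x <- 0:8
cumulative <- cumsum(dbinom(x, size = 100, prob = 0.01))  # P(X <= x) fires across 100 homes in a year
round(data.frame(fires_per_year = x, cumulative = cumulative), 3)
# the cumulative column first shows 1.000 at x = 6 fires per year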
Since there are 100 people in the pool, each individual’s contribution is therefore ²120,000, 60% less than the ²300,000 requirement if the pool did not exist. Two plus two no longer equals 4!
You might think this example looks like insurance, and you would be right.
Cloud Computing
Figure D
As a second example, let’s assume that our company has 1 server running our finance and accounting software. The server’s processors can handle 10 processing requests per second. Requests generated by employees’ use of the software arrive at the server at a rate of 8 per second.
Figure E
This situation can be analyzed using a Queuing Model. Figure D shows the main equations that characterize the operating system according to this model. Using those equations, our server situation is shown in the first column of data in Figure E.
We now assume that we can move this computer processing environment “into the Cloud”, which is just a fancy way of saying that the servers holding and running the software are in many other places – and all of them somewhere other than our facility.
We further assume that 99 companies identical to ours sign up for this cloud service as well. If the Cloud provider’s infrastructure has 1 server for each company, then there will be 100 servers, and if each firm sends 8 process requests per second, the Cloud users as a whole are sending 800 process requests per second. The second column of Figure E shows the operating characteristics under this set of assumptions.
Comparing the first and second columns, average time per server and utilization remain the same, but the other measures (wait time, total time in system, and average number of customers, a “customer” being a process request in this case) have all gone down. This means the operation has improved. Two plus two no longer equals 4!
Another way to measure this improvement is to reduce the number of servers in the cloud until one of the operating metrics returns to what it had been prior to moving to the cloud.
In the first column of Figure E, the average number of “customers” (i.e. processing requests) waiting in the system was 3.2, whereas in the second it is close to 0.08. By adjusting the number of servers, we can get back closer to the “pre-cloud” number. If we go to 88 servers, the metric is a little better, and if we go to 87 servers, it is a little worse. The third column in Figure E shows the case with 88 servers. Most of the operating characteristics are still improved over the single server case.
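Figure D’s equations are not reproduced here, but assuming they are the standard M/M/s (Erlang C) steady-state formulas, a sketch of the three columns of Figure E in R might look like the following (mmc_metrics is a hypothetical helper name):

# Steady-state metrics for an M/M/s queue using the Erlang C formula
mmc_metrics <- function(lambda, mu, servers) {
  a   <- lambda / mu                  # offered load
  rho <- a / servers                  # utilization per server
  k   <- 0:(servers - 1)
  erlang_c <- (a^servers / factorial(servers)) /
              ((a^servers / factorial(servers)) + (1 - rho) * sum(a^k / factorial(k)))
  Lq <- erlang_c * rho / (1 - rho)    # average number of requests waiting
  Wq <- Lq / lambda                   # average wait before service (same time units as the rates)
  c(utilization = rho, avg_waiting = Lq, avg_wait_time = Wq, avg_total_time = Wq + 1 / mu)
}

mmc_metrics(8, 10, 1)       # in-house: single server, about 3.2 requests waiting
mmc_metrics(800, 10, 100)   # cloud with 100 servers: waiting drops to roughly 0.08
mmc_metrics(800, 10, 88)    # cloud sized down to 88 servers

If the author’s Figure D uses different but equivalent expressions, the results should still line up approximately, since these are the textbook steady-state relationships.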
In essence, whereas operating the software “in-house” required us to purchase a full server, under the cloud arrangement our needs require only 88% of a server, so the cost to the cloud provider (due to less capital investment) is 12% less. The pooling principle is one of the elements that drives the economic advantage of the Cloud business model (but by no means the only one!)
How You Can Apply the Pooling Principle
The following list of questions might indicate areas where implementing the Pooling Principle can improve your function or business:
Operations
Can you combine certain activities common to different departments? The Shared Service Center, for example, relies on the pooling principle.
Can you combine similar activities with other organizations? The economics of outsourcing rely in part on the pooling principle. Similarly, some forms of strategic alliance and joint venture employ the benefits of pooling as part of the rationale.
Strategic Activity
What activities and costs can be pooled through mergers and acquisitions? One of the benefits of M&A activity is the “synergies” that can be created through business combination. Many synergies are driven by the pooling principle.
Can we use the pooling potential in our business model to create growth opportunities? This question was certainly considered by many of those now providing Cloud Computing activities.
These are just a sample of questions that might provide insight.
As we discussed in the beginning, the pooling principle relies on variability. The more you hunt for where this is a factor, the more opportunities you will find.
Is it in sales, operations, finance, or marketing? Is it manifested in the value chain of your industry? Is variability itself changing? If it is increasing, pooling will become more valuable; if it is decreasing, the value of in-place pooling structures might no longer be compelling.
Key Takeaways
The pooling principle is a significant economic driver in many internal and external facets of your business. Be ever alert to where variability manifests itself, and carefully consider how you might create opportunity under this situation via pooling.
Questions
·         Where are the primary benefits of pooling in your business or function?
·         Where else do you think pooling might create value?
Add to the discussion with your thoughts, comments, questions and feedback! Please share Treasury Café with others. Thank you!