Capsim





Introduction

Capsim is an online business simulation that I participated in as part of the UGBA 10 course at Berkeley. In the simulation, I managed a company that competed against five other companies in the aftermath of a monopoly breakup in the electronic sensor industry. During the simulation, I could not find any existing program for projecting the most important indicator of a company's success: its income. Since my company's performance was part of the course grade, I set out to create a program to project a company's yearly income.

Capsim Background

Capsim allows for many customizations that create different business environments. This project will be restricted to the particular Capsim Foundation simulation factors found in the UGBA 10 course here at Berkeley. Each company’s performance is evaluated based on four categories:

  • Average return on assets
  • Average return on equity
  • Cumulative profit
  • Ending market capitalization

The data in this project comes from the Capsim Foundation simulation that I was a part of for the Spring 2018 UGBA 10 class. All data were PDF files generated by the simulation. In other Capsim simulations, it is possible to use Excel to upload company decisions, but the simulation that I was in only provided the PDF files. This simulation included the optional Human Resources module but did not include the TQM and Advanced Marketing modules. All equations and formulas come from the Capsim Foundation user guide that is linked at the end of the article.

Product's Market Demand

This project attempts to determine the market demand for a company's products in order to calculate the company's projected yearly income. In Capsim, market demand for a product is based on a score that customers assign to the product, called the customer survey score. The product's customer survey score depends on six factors:

  • Price
  • Performance and Size
  • Age
  • Reliability (Mean time before failure)
  • Awareness
  • Accessibility

In the simulation, there is an ideal value for each product characteristic. The customer compares the product's characteristics to the ideal values and assigns a score ranging from 1 to 100; products with characteristics closer to the ideal values receive a higher customer survey score. For all characteristics except performance and size, the ideal value does not change over time. For instance, below is a graph of several products' performance vs. size in the fourth year of a simulation. Each point represents a product, and its color indicates the product's overall customer survey score.



The plot shows that points closer to (6.8, 13.2) have a higher customer survey score. However, performance and size are special because the ideal performance and size change every year. Let us compare the graphs for the second year and the seventh year of the simulation.





Comparing the two plots, we observe that the products in the first graph have a higher survey score when closer to the point (5.8, 14.2), whereas products in the second graph have a higher survey score when closer to the point (8.3, 11.7). As a general rule of thumb, customers expect products to have a higher performance and a smaller size as years pass.
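The three ideal points observed in the plots (years 2, 4, and 7 for the low-tech segment) are consistent with a constant drift of +0.5 performance and -0.5 size per year. As a rough sketch (the function name and the linearity assumption are my own, inferred from those three data points):

```python
def low_tech_ideal(year):
    """Estimate the low-tech ideal (performance, size) for a simulation year.

    Assumes a linear drift of +0.5 performance and -0.5 size per year,
    anchored at the observed year-2 ideal of (5.8, 14.2).
    """
    performance = 5.8 + 0.5 * (year - 2)
    size = 14.2 - 0.5 * (year - 2)
    return performance, size

low_tech_ideal(4)  # close to (6.8, 13.2), the fourth-year plot's ideal
low_tech_ideal(7)  # close to (8.3, 11.7), the seventh-year plot's ideal
```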

Also, there are two types of customers in Capsim Foundation, high- and low-tech. Each type of customer has its own ideal value.





As we can see, high-tech customers demand a higher performance and smaller size than low-tech customers. In the same year, high-tech customers have an ideal value close to (12.3, 7.7) whereas the low-tech customers have an ideal value close to (8.3, 11.7). Additionally, each customer segment will value certain characteristics over others. For instance, the high-tech customers value performance and size while the low-tech customers value price.

Product's Customer Survey Score

A product's customer survey score is calculated from the six characteristics outlined in the section above. The exact calculation is outlined in the Capsim Foundation Team Member Guide. Each product is first assigned a base score ranging from 1 to 100 for each of the following characteristics, depending on how close the product's characteristic is to the ideal value:

  • Price
  • Performance and Size
  • Reliability (Mean time before failure)
  • Age

The scores are then weighted according to which customer segment (high- or low-tech) the product is targeted towards, and the weighted scores are combined to form the product's raw score, which can also range from 1 to 100. Performance and size are two different characteristics, but the customer survey score combines them into a single component. For instance, the low-tech segment's customer survey score can be calculated according to the following formula, as defined by the guide:

$$ Weighted = 0.41 * Price + 0.29 * Age + 0.21 * Reliability + 0.09 * Performance $$

where performance represents the score for both performance and size. In this case, we expect the maximum contribution of each score to the overall base score to be the following:

  • Price: 41 points
  • Age: 29 points
  • Reliability: 21 points
  • Performance and Size: 9 points
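The low-tech weighting above can be sketched as a small helper (the function name and dictionary layout are my own; each component score is assumed to already be on the 1–100 scale):

```python
# Low-tech segment weights from the Capsim Foundation Team Member Guide.
# "performance" here stands for the combined performance-and-size score.
LOW_TECH_WEIGHTS = {"price": 0.41, "age": 0.29, "reliability": 0.21, "performance": 0.09}

def weighted_score(scores, weights=LOW_TECH_WEIGHTS):
    """Combine per-characteristic base scores (1-100) into a raw weighted score."""
    return sum(weights[name] * scores[name] for name in weights)

# A product that earns the maximum base score in every category:
weighted_score({"price": 100, "age": 100, "reliability": 100, "performance": 100})
# -> 100.0
```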

According to the guide, the scores are then weighted once more according to the product's awareness and accessibility. If the product has 100% awareness and 100% accessibility, the product's base survey score does not change at all. Otherwise, the product's final score is calculated according to the following equation:

$$ Final = (\frac{Weighted}{10})^2 * (1 - \frac{1-awareness}{2}) * (1 - \frac{1-accessibility}{2}) $$
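A direct transcription of the guide's equation (the function name is my own):

```python
def final_score(weighted, awareness, accessibility):
    """Scale the weighted base score by the awareness and accessibility penalties,
    following the guide's formula. Awareness and accessibility are fractions in [0, 1]."""
    penalty = (1 - (1 - awareness) / 2) * (1 - (1 - accessibility) / 2)
    return (weighted / 10) ** 2 * penalty

final_score(100, 1.0, 1.0)  # -> 100.0, a perfect product keeps its full score
```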

For instance, one product in the second year of the simulation had the following characteristics:

  • Price: $34.99
  • Performance: 6.5
  • Size: 13.5
  • Age: 2.93
  • Reliability (MTBF): 21000
  • Accessibility: 100%
  • Awareness: 69%

Additionally, the ideal values for the low-tech customer segment in the second year are the following:

  • Price: $15
  • Performance: 5.8
  • Size: 14.2
  • Age: 3.0
  • Reliability (MTBF): 20000
  • Accessibility: 100%
  • Awareness: 100%

We can see that the product receives the following scores:

  • Price: 1 point
  • Age: 29 points
  • Reliability: 21 points

Since this product's characteristics in the above areas are either close to the ideal values or in the barely acceptable range, we can be fairly certain that the product has the above scores. Intuitively, products with values at or near the ideal should receive the maximum possible score, and products with values in the barely acceptable range should receive the minimum possible score. However, we do not know the exact calculation of the performance and size score. Fortunately, we do know that this product had a final score of 33, so we can proceed to calculate the base score by inverting the above formula:

$$ Weighted = 10 * \sqrt{Final \div (1 - \frac{1 - awareness}{2}) \div (1 - \frac{1 - accessibility}{2})} $$

Finally, we derive the performance and size score by subtracting the price, age, and reliability scores from the weighted score. When we do the math, we find that the product should have a performance and size score of about 11.5 points.
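The back-calculation can be reproduced as follows (a sketch; the helper name is my own):

```python
import math

def implied_weighted(final, awareness, accessibility):
    """Invert the final-score formula to recover the weighted base score."""
    penalty = (1 - (1 - awareness) / 2) * (1 - (1 - accessibility) / 2)
    return 10 * math.sqrt(final / penalty)

weighted = implied_weighted(33, 0.69, 1.0)  # roughly 62.5
perf_size = weighted - (1 + 29 + 21)        # subtract the price, age, and reliability scores
# perf_size comes out to about 11.5
```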

11.5 points. But didn’t we establish that the performance and size score can only have a maximum of 9 points?

Wait, What?

Although we attempted to use the materials provided to all simulation participants to construct a model, my data contradicted that model. We assumed that the simulation used the formulas outlined in the team member guide to calculate each product's customer survey score. However, my analysis shows that some of the customer survey scores in my data could not have been derived from the guide's formulas. Because my model is built on the team member guide, I cannot continue building it if the guide is incorrect. Based on these results, we are faced with two explanations:

  1. The simulation from which this data point was drawn produced an error.
  2. The provided materials do not give the full details on how to calculate the survey score.

In other words, we need more data to build our model on. Unfortunately, because I was enrolled in the UGBA 10 course while working on this project, I could not gain access to all of the simulation data, since it could have given me an unfair advantage. For future UGBA 10 students, I would encourage product placement that allows the product's performance and size score to be determined; in other words, produce a product whose characteristics are either close to the ideal values or in the barely acceptable range. This way, students can verify that Capsim is not producing errors and potentially determine the product's final market demand!