PowerPoint Presentation for Chapter 7: Forecasting
Statistics, Data Analysis, and Decision Modeling
FOURTH EDITION
James R. Evans
9780558689766
Chapter 7 Forecasting
Introduction
QUALITATIVE AND JUDGMENTAL METHODS
Historical Analogy
The Delphi Method
Indicators and Indexes for Forecasting
STATISTICAL FORECASTING MODELS
FORECASTING MODELS FOR STATIONARY TIME SERIES
Moving Average Models
Error Metrics and Forecast Accuracy
Exponential Smoothing Models
FORECASTING MODELS FOR TIME SERIES WITH TREND AND SEASONALITY
Models for Linear Trends
Models for Seasonality
Models for Trend and Seasonality
CHOOSING AND OPTIMIZING FORECASTING MODELS USING CB PREDICTOR
REGRESSION MODELS FOR FORECASTING
Autoregressive Forecasting Models
Incorporating Seasonality in Regression Models
Regression Forecasting with Causal Variables
THE PRACTICE OF FORECASTING
BASIC CONCEPTS REVIEW QUESTIONS
SKILL-BUILDING EXERCISES
PROBLEMS AND APPLICATIONS
CASE: ENERGY FORECASTING
APPENDIX: ADVANCED FORECASTING MODELS—THEORY AND COMPUTATION
Double Moving Average
Double Exponential Smoothing
Additive Seasonality
Multiplicative Seasonality
Holt–Winters Additive Model
Holt–Winters Multiplicative Model
INTRODUCTION
One of the major problems that managers face is forecasting future events in order to make good decisions. For example, forecasts of interest rates, energy prices, and other economic indicators are needed for financial planning; sales forecasts are needed to plan production and workforce capacity; and forecasts of trends in demographics, consumer behavior, and technological innovation are needed for long-term strategic planning. The government also invests significant resources in predicting short-run U.S. business performance using the Index of Leading Indicators. This index focuses on the performance of individual businesses, which often is highly correlated with the performance of the overall economy, and is used to forecast economic trends for the nation as a whole. In this chapter, we introduce some common methods and approaches to forecasting, including both qualitative and quantitative techniques.
Managers may choose from a wide range of forecasting techniques. Selecting the appropriate method depends on the characteristics of the forecasting problem, such as the time horizon of the variable being forecast, as well as available information on which the forecast will be based. Three major categories of forecasting approaches are qualitative and judgmental techniques, statistical time-series models, and explanatory/causal methods.
Qualitative and judgmental techniques rely on experience and intuition; they are necessary when historical data are not available or when the decision maker needs to forecast far into the future. For example, a forecast of when the next generation of a microprocessor will be available and what capabilities it might have will depend greatly on the opinions and expertise of individuals who understand the technology.
Statistical time-series models find greater applicability for short-range forecasting problems. A time series is a stream of historical data, such as weekly sales. Time-series models assume that whatever forces have influenced sales in the recent past will continue into the near future; thus, forecasts are developed by extrapolating these data into the future.
Explanatory/causal models seek to identify factors that explain statistically the patterns observed in the variable being forecast, usually with regression analysis. While time-series models use only time as the independent variable, explanatory/causal models generally include other factors. For example, forecasting the price of oil might incorporate independent variables such as the demand for oil (measured in barrels), the proportion of oil stock generated by OPEC countries, and tax rates. Although we can never prove that changes in these variables actually cause changes in the price of oil, we often have evidence that a strong influence exists.
Surveys of forecasting practices have shown that both judgmental and quantitative methods are used for forecasting sales of product lines or product families, as well as for broad company and industry forecasts. Simple time-series models are used for short- and medium-range forecasts, whereas regression analysis is the most popular method for long-range forecasting. However, many companies rely on judgmental methods far more than quantitative methods, and almost half judgmentally adjust quantitative forecasts.
In this chapter, we focus on these three approaches to forecasting. Specifically, we will discuss the following:
Historical analogy and the Delphi method as approaches to judgmental forecasting
Moving average and exponential smoothing models for time-series forecasting, with a discussion of evaluating the quality of forecasts
A brief discussion of advanced time-series models and the use of Crystal Ball (CB) Predictor for optimizing forecasts
The use of regression models for explanatory/causal forecasting
Some insights into practical issues associated with forecasting
Qualitative and Judgmental Methods
Qualitative, or judgmental, forecasting methods are valuable in situations for which no historical data are available or for those that specifically require human expertise and knowledge. One example might be identifying future opportunities and threats as part of a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis within a strategic planning exercise. Another use of judgmental methods is to incorporate nonquantitative information, such as the impact of government regulations or competitor behavior, in a quantitative forecast. Judgmental techniques range from such simple methods as a manager’s opinion or a group-based jury of executive opinion to more structured approaches such as historical analogy and the Delphi method.
Historical Analogy
One judgmental approach is historical analogy, in which a forecast is obtained through a comparative analysis with a previous situation. For example, if a new product is being introduced, the response of similar previous products to marketing campaigns can be used as a basis to predict how the new marketing campaign might fare. Of course, temporal changes or other unique factors might not be fully considered in such an approach. However, a great deal of insight can often be gained through an analysis of past experiences. For example, in early 1998, the price of oil was about $22 a barrel. However, in mid-1998, the price of a barrel of oil dropped to around $11. The reasons for this price drop included an oversupply of oil from new production in the Caspian Sea region, high production in non-OPEC regions, and lower-than-normal demand. In similar circumstances in the past, OPEC would meet and take action to raise the price of oil. Thus, from historical analogy, we might forecast a rise in the price of oil. OPEC members did in fact meet in mid-1998 and agreed to cut their production, but nobody believed that they would actually cooperate effectively, and the price continued to drop for a time. Subsequently, in 2000, the price of oil rose dramatically, falling again in late 2001. Analogies often provide good forecasts, but you need to be careful to recognize new or different circumstances. Another analogy is international conflict relative to the price of oil. Should war break out, the price would be expected to rise, analogous to what it has done in the past.
The Delphi Method
A popular judgmental forecasting approach, called the Delphi method, uses a panel of experts, whose identities are typically kept confidential from one another, to respond to a sequence of questionnaires. After each round of responses, individual opinions, edited to ensure anonymity, are shared, allowing each to see what the other experts think. Seeing other experts’ opinions helps to reinforce those in agreement and to influence those who did not agree to possibly consider other factors. In the next round, the experts revise their estimates, and the process is repeated, usually for no more than two or three rounds. The Delphi method promotes unbiased exchanges of ideas and discussion and usually results in some convergence of opinion. It is one of the better approaches to forecasting long-range trends and impacts.
Indicators and Indexes for Forecasting
Indicators and indexes generally play an important role in developing judgmental forecasts. Indicators are measures that are believed to influence the behavior of a variable we wish to forecast. By monitoring changes in indicators, we expect to gain insight into the future behavior of the variable. For example, one variable that is important to the nation’s economy is the Gross Domestic Product (GDP), which is a measure of the value of all goods and services produced in the United States. Despite its shortcomings (for instance, unpaid work such as housekeeping and child care is not measured; production of poor-quality output inflates the measure, as does work expended on corrective action), it is a practical and useful measure of economic performance. Like most time series, the GDP rises and falls in a cyclical fashion. Predicting future trends in the GDP is often done by analyzing leading indicators—series that tend to rise and fall some predictable length of time prior to the peaks and valleys of the GDP. One example of a leading indicator is the formation of business enterprises; as the rate of new business formation grows, one would expect the GDP to increase in the future. Other examples of leading indicators are the percent change in the money supply (M1) and the net change in business loans. Other indicators, called lagging indicators, tend to have peaks and valleys that follow those of the GDP. Some lagging indicators are the Consumer Price Index, the prime rate, business investment expenditures, and inventories on hand. The GDP can be used to predict future trends in these indicators.
Indicators are often combined quantitatively into an index. The directions of movement of the selected indicators are weighted and combined, providing an index of overall expectation. For example, financial analysts use the Dow Jones Industrial Average as an index of general stock market performance. Indexes do not provide a complete forecast, but rather a better picture of the direction of change, and thus play an important role in judgmental forecasting.
The Department of Commerce began an Index of Leading Indicators to help predict future economic performance. Components of the index include the following:
•average weekly hours, manufacturing
•average weekly initial claims, unemployment insurance
•new orders, consumer goods and materials
•vendor performance—slower deliveries
•new orders, nondefense capital goods
•building permits, private housing
•stock prices, 500 common stocks (Standard & Poor’s)
•money supply
•interest rate spread
•index of consumer expectations (University of Michigan)
Business Conditions Digest included more than 100 time series in seven economic areas. This publication was discontinued in March 1990, but information related to the Index of Leading Indicators was continued in Survey of Current Business. In December 1995, the U.S. Department of Commerce sold this data source to The Conference Board, which now markets the information under the title Business Cycle Indicators; information can be obtained at its Web site (www.conference-board.org). The site includes excellent current information about the calculation of the index, as well as its current components.
Statistical Forecasting Models
Many forecasts are based on analysis of historical time-series data and are predicated on the assumption that the future is an extrapolation of the past. We will assume that a time series consists of T periods of data, At, t = 1, 2, …, T. A naive approach is to eyeball a trend—a gradual shift in the value of the time series—by visually examining a plot of the data. For instance, Figure 7.1 shows a chart of total energy production from the data in the Excel file Energy Production & Consumption. We see that energy production was rising quite rapidly during the 1960s; however, the slope appears to have decreased after 1970. It appears that production is increasing by about 500,000 each year, and this can provide a reasonable forecast provided that the trend continues.
Figure 7.1 Total Energy Production Time Series
Figure 7.2 Federal Funds Rate Time Series
Time series may also exhibit short-term seasonal effects (over a year, month, week, or even a day) as well as longer-term cyclical effects or nonlinear trends. At a neighborhood grocery store, for instance, short-term seasonal patterns may occur over a week, with the heaviest volume of customers on weekends, and even during the course of a day. Cycles relate to much longer-term behavior, such as periods of inflation and recession or bull and bear stock market behavior. Figure 7.2 shows a chart of the data in the Excel file Federal Funds Rate. We see some evidence of long-term cycles in the time series.
Of course, unscientific approaches such as the “eyeball method” may be a bit unsettling to a manager making important decisions. Subtle effects and interactions of seasonal and cyclical factors may not be evident from simple visual extrapolation of data. Statistical methods, which involve more formal analyses of time series, are invaluable in developing good forecasts. A variety of statistically based forecasting methods for time series are commonly used. Among the most popular are moving average methods, exponential smoothing, and regression analysis. These can be implemented very easily on a spreadsheet using basic functions available in Microsoft Excel and its Data Analysis tools; these are summarized in Table 7.1. Moving average and exponential smoothing models work best for stationary time series. For time series that involve trends and/or seasonal factors, other techniques have been developed. These include double moving average and exponential smoothing models, seasonal additive and multiplicative models, and Holt–Winters additive and multiplicative models. We will review each of these types of models. This book provides an Excel add-in, CB Predictor, that applies these methods and incorporates some intelligent technology. We will describe CB Predictor later in this chapter.
Table 7.1 Excel Support for Forecasting
Excel Functions
TREND(known_y's, known_x's, new_x's, constant): Returns values along a linear trend line
LINEST(known_y's, known_x's, constant, stats): Returns an array that describes a straight line that best fits the data
FORECAST(x, known_y's, known_x's): Calculates a future value along a linear trend
Analysis Toolpak Tools
Moving Average: Projects forecast values based on the average value of the variable over a specific number of preceding periods
Exponential Smoothing: Predicts a value based on the forecast for the prior period, adjusted for the error in that prior forecast
Regression: Used to develop a model relating time-series data to a set of variables assumed to influence the data
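For readers who want to see what the linear-trend functions compute, the short Python sketch below performs the same least-squares fit and extrapolation that TREND and FORECAST carry out in Excel; the series values are illustrative, not data from this chapter.

import numpy as np

y = np.array([102.0, 108.0, 113.0, 121.0, 126.0])   # hypothetical time series
t = np.arange(1, len(y) + 1)                        # time index 1..T

slope, intercept = np.polyfit(t, y, 1)              # least-squares straight line
next_t = len(y) + 1
forecast = intercept + slope * next_t               # comparable to FORECAST(next_t, y, t)
print(f"fitted line: {intercept:.2f} + {slope:.2f}t; forecast for t = {next_t}: {forecast:.1f}")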
Forecasting Models for Stationary Time Series
Two simple approaches that are useful over short time periods when trend, seasonal, or cyclical effects are not significant are moving average and exponential smoothing models.
Moving Average Models
The simple moving average method is based on the idea of averaging random fluctuations in the time series to identify the underlying direction in which the time series is changing. Because the moving average method assumes that future observations will be similar to the recent past, it is most useful as a short-range forecasting method. Although this method is very simple, it has proven to be quite useful in stable environments, such as inventory management, in which it is necessary to develop forecasts for a large number of items.
Specifically, the simple moving average forecast for the next period is computed as the average of the most recent k observations. The value of k is somewhat arbitrary, although its choice affects the accuracy of the forecast. The larger the value of k, the more the current forecast is dependent on older data; the smaller the value of k, the quicker the forecast responds to changes in the time series. (In the next section, we discuss how to select k by examining errors associated with different values.)
For instance, suppose that we want to forecast monthly burglaries from the Excel file Burglaries since the citizen-police program began. Figure 7.3 shows a chart of these data. The time series appears to be relatively stable, without trend, seasonal, or cyclical effects; thus, a moving average model would be appropriate. Setting k = 3, the three-period moving average forecast for month 59 is the average of the observations for months 56, 57, and 58:
F59 = (A56 + A57 + A58)/3
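The same computation is easy to script outside a spreadsheet. The following Python sketch computes k-period simple moving average forecasts; the data values shown are illustrative and are not the actual burglary counts.

def moving_average_forecasts(data, k):
    """Return forecasts for periods k+1 through len(data)+1."""
    forecasts = []
    for t in range(k, len(data) + 1):
        forecasts.append(sum(data[t - k:t]) / k)   # average of the k most recent values
    return forecasts

observations = [88, 44, 60, 56, 70, 91, 54, 60]    # hypothetical monthly counts
print(moving_average_forecasts(observations, k=3)) # last value is the next-period forecast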
Moving average forecasts can be generated easily on a spreadsheet. Figure 7.4 shows the computations for a three-period moving average forecast of burglaries. Figure 7.5 shows a chart that contrasts the data with the forecasted values. Moving average forecasts can also be obtained from Excel’s Data Analysis options (see Excel Note: Forecasting with Moving Averages).
Figure 7.3 Monthly Burglaries Chart
In the simple moving average approach, the data are weighted equally. This may not be desirable because we might wish to put more weight on recent observations than on older observations, particularly if the time series is changing rapidly. Such models are called weighted moving averages. For example, you might assign a 60% weight to the most recent observation, 30% to the second most recent observation, and the remaining 10% of the weight to the third most recent observation. In this case, the three-period weighted moving average forecast for month 59 would be:
F59 = 0.6A58 + 0.3A57 + 0.1A56
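A weighted version is a small change to the earlier sketch; the 0.6/0.3/0.1 weights below are the ones used in the text, and the data values are again illustrative.

def weighted_moving_average_forecast(data, weights=(0.6, 0.3, 0.1)):
    """Forecast the next period from the most recent len(weights) observations."""
    recent = data[-len(weights):][::-1]            # most recent observation first
    return sum(w * x for w, x in zip(weights, recent))

observations = [88, 44, 60, 56, 70, 91, 54, 60]    # hypothetical monthly counts
print(weighted_moving_average_forecast(observations))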
EXCEL NOTE Forecasting with Moving Averages
From the Analysis group, select Data Analysis then Moving Average. Excel displays the dialog box shown in Figure 7.6. You need to enter the Input Range of the data, the Interval (the value of k), and the first cell of the Output Range. To align the actual data with the forecasted values in the worksheet, select the first cell of the Output Range to be one row below the first value. You may also obtain a chart of the data and the moving averages, as well as a column of standard errors, by checking the appropriate boxes. However, we do not recommend using the chart or error options because the forecasts generated by this tool are not properly aligned with the data (the forecast value aligned with a particular data point represents the forecast for the next month) and, thus, can be misleading. Rather, we recommend that you generate your own chart as we did in Figure 7.5. Figure 7.7 shows the results produced by the Moving Average tool (with some customization of the forecast chart to show the months on the x-axis). Note that the forecast for month 59 is aligned with the actual value for month 58 on the chart. Compare this to Figure 7.5 and you can see the difference.
Figure 7.6 Excel Moving Average Tool Dialog
Figure 7.7 Results of Excel Moving Average Tool (note misalignment of forecasts with actual in the chart)
Different weights can easily be incorporated into Excel formulas. This leads us to the questions of how to measure forecast accuracy and also how to select the best parameters for a forecasting model.
Error Metrics and Forecast Accuracy
The quality of a forecast depends on how accurate it is in predicting future values of a time series. The error in a forecast is the difference between the forecast and the actual value of the time series (once it is known!). In Figure 7.5, the forecast error is simply the vertical distance between the forecast and the data for the same time period. In the simple moving average model, different values for k will produce different forecasts. How do we know, for example, whether a two- or three-period moving average forecast or a three-period weighted moving average model (or others) would be the best predictor for burglaries? We might first generate different forecasts using each of these models, as shown in Figure 7.8, and compute the errors associated with each model.
Figure 7.8 Alternative Moving Average Forecasting Models
To analyze the accuracy of these models, we can define error metrics, which compare quantitatively the forecast with the actual observations. Three metrics that are commonly used are the mean absolute deviation, mean square error, and mean absolute percentage error. The mean absolute deviation (MAD) is the absolute difference between the actual value and the forecast, averaged over a range of forecasted values:
MAD = ( Σ |At − Ft| ) / n
where At is the actual value of the time series at time t, Ft is the forecast value for time t, and n is the number of forecast values (not the number of data points since we do not have a forecast value associated with the first k data points). MAD provides a robust measure of error and is less affected by extreme observations.
Mean square error (MSE) is probably the most commonly used error metric. It penalizes larger errors because squaring larger numbers has a greater impact than squaring smaller numbers. The formula for MSE is:
MSE = ( Σ (At − Ft)² ) / n
Again, n represents the number of forecast values used in computing the average. Sometimes the square root of MSE, called the root mean square error (RMSE), is used.
Table 7.2 Error Metrics for Moving Average Models of Burglary Data
k = 2 k = 3 3-Period Weighted
MAD 13.63 14.86 13.70
MSE 254.38 299.84 256.31
MAPE 23.63% 26.53% 24.46%
A third commonly used metric is mean absolute percentage error (MAPE). MAPE is the average of the absolute errors divided by the actual observation values, expressed as a percentage:
MAPE = ( Σ |At − Ft| / At ) / n × 100%
The values of MAD and MSE depend on the measurement scale of the time-series data. For example, forecasting profit in the range of millions of dollars would result in very large MAD and MSE values, even for very accurate forecasting models. On the other hand, market share is measured in proportions; therefore, even bad forecasting models will have small values of MAD and MSE. Thus, these measures have no meaning except in comparison with other models used to forecast the same data. Generally, MAD is less affected by extreme observations and is preferable to MSE if such extreme observations are considered rare events with no special meaning. MAPE is different in that the measurement scale is eliminated by dividing the absolute error by the time-series data value, which allows a better relative comparison. Although these comments provide some guidelines, there is no universal agreement on which measure is best.
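As a quick illustration of the three metrics, the Python sketch below computes MAD, MSE, and MAPE for a set of paired actual and forecast values; the numbers are made up for illustration and are not the burglary results shown in Table 7.2.

def error_metrics(actuals, forecasts):
    """Compute MAD, MSE, and MAPE (as a percentage) for paired values."""
    errors = [a - f for a, f in zip(actuals, forecasts)]
    n = len(errors)
    mad = sum(abs(e) for e in errors) / n
    mse = sum(e ** 2 for e in errors) / n
    mape = sum(abs(e) / a for e, a in zip(errors, actuals)) / n * 100
    return mad, mse, mape

actuals = [60, 56, 70, 91, 54]        # hypothetical observations
forecasts = [64, 55, 62, 72, 78]      # hypothetical model forecasts
mad, mse, mape = error_metrics(actuals, forecasts)
print(f"MAD = {mad:.2f}, MSE = {mse:.2f}, MAPE = {mape:.2f}%")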
These measures can be used to compare the moving average forecasts in Figure 7.8. The results, shown in Table 7.2, verify that the two-period moving average model provides the best forecast among these alternatives.
Exponential Smoothing Models
A versatile, yet highly effective, approach for short-range forecasting is simple exponential smoothing. The basic simple exponential smoothing model is:
Ft+1 = (1 − α)Ft + αAt = Ft + α(At − Ft)
where Ft+1 is the forecast for time period t + 1, Ft is the forecast for period t, At is the observed value in period t, and α is a constant between 0 and 1, called the smoothing constant. To begin, the forecast for period 2 is set equal to the actual observation for period 1.
Using the two forms of the forecast equation just given, we can interpret the simple exponential smoothing model in two ways. In the first form, the forecast for the next period, Ft+1, is a weighted average of the forecast made for period t, Ft, and the actual observation in period t, At. The second form, obtained by simply rearranging terms, states that the forecast for the next period, Ft+1, equals the forecast for the last period plus a fraction α of the forecast error made in period t, At − Ft. Thus, to make a forecast once we have selected the smoothing constant, we need only know the previous forecast and the actual value. By repeated substitution for Ft in the equation, it is easy to demonstrate that Ft+1 is a decreasingly weighted average of all past time-series data. Thus, the forecast actually reflects all the data, provided that α is strictly between 0 and 1.
For the burglary data, the forecast for month 43 is 88, the actual observation for month 42. Suppose we choose α = 0.7; then the forecast for month 44 would be:
F44 = (1 − 0.7)(88) + 0.7A43 = 0.3(88) + 0.7A43
where A43 is the actual observation for month 43.
The actual observation for month 44 is 60; thus, the forecast for month 45 would be:
F45 = (1 − 0.7)F44 + 0.7(60) = 0.3F44 + 0.7(60)
Since the simple exponential smoothing model requires only the previous forecast and the current time-series value, it is very easy to calculate; thus, it is highly suitable for environments such as inventory systems where many forecasts must be made. The smoothing constant α is usually chosen by experimentation in the same manner as choosing the number of periods to use in the moving average model. Different values of α affect how quickly the model responds to changes in the time series. For instance, a value of α = 0 would simply repeat last period’s forecast, while α = 1 would forecast last period’s actual demand. The closer α is to 1, the quicker the model responds to changes in the time series because it puts more weight on the actual current observation than on the forecast. Likewise, the closer α is to 0, the more weight is put on the prior forecast, so the model responds to changes more slowly.
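The recursion is short enough to express directly. The following Python sketch applies simple exponential smoothing to an illustrative series (not the burglary data), initializing the period-2 forecast to the first observation as described above.

def exponential_smoothing_forecasts(data, alpha):
    """Return forecasts for periods 2 through len(data)+1."""
    forecasts = [data[0]]                              # forecast for period 2
    for actual in data[1:]:
        forecasts.append((1 - alpha) * forecasts[-1] + alpha * actual)
    return forecasts

observations = [88, 44, 60, 56, 70, 91, 54, 60]        # hypothetical monthly counts
print(exponential_smoothing_forecasts(observations, alpha=0.7))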
An Excel spreadsheet for evaluating exponential smoothing models for the burglary data using values of α between 0.1 and 0.9 is shown in Figure 7.9. A smoothing constant of α = 0.6 provides the lowest error for all three metrics. Excel has a Data Analysis tool for exponential smoothing (see Excel Note: Forecasting with Exponential Smoothing).
EXCEL NOTE Forecasting with Exponential Smoothing
From the Analysis group, select Data Analysis then Exponential Smoothing. In the dialog (Figure 7.10), as in the Moving Average dialog, you must enter the Input Range of the time-series data, the Damping Factor (1 − α)—not the smoothing constant as we have defined it (!)—and the first cell of the Output Range, which should be adjacent to the first data point. You also have options for labels, to chart output, and to obtain standard errors. As opposed to the Moving Average tool, the chart generated by this tool does correctly align the forecasts with the actual data, as shown in Figure 7.11. You can see that the exponential smoothing model follows the pattern of the data quite closely, although it tends to lag with an increasing trend in the data.
Figure 7.10 Exponential Smoothing Tool Dialog
Figure 7.11 Exponential Smoothing Forecasts for α = 0.6
Forecasting Models for Time Series with Trend and Seasonality
When time series exhibit trend and/or seasonality, different techniques provide better forecasts than the basic moving average and exponential smoothing models we have described. The computational theory behind these models is presented in the appendix to this chapter, as they are quite a bit more complicated than the simple moving average and exponential smoothing models. However, a basic understanding of these techniques is useful in order to apply CB Predictor software for forecasting, which we introduce in the next section.
Models for Linear Trends
For time series with a linear trend but no significant seasonal components, double moving average and double exponential smoothing models are more appropriate. Both methods are based on the linear trend equation:
Ft+k = at + bt k
This may look familiar from simple linear regression. That is, the forecast for k periods into the future from period t is a function of a base value at, also known as the level, and a trend, or slope, bt. Double moving average and double exponential smoothing differ in how the data are used to arrive at appropriate values for at and bt.
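The appendix gives the exact recursions; as a rough sketch of the idea, the Python code below implements a standard Holt-type double exponential smoothing update of the level at and trend bt with two smoothing constants. The constants, initialization, and data are illustrative assumptions, not values from the text.

def double_exponential_smoothing_forecasts(data, alpha=0.5, beta=0.3, horizon=3):
    """Forecast `horizon` periods ahead with a level/trend (Holt-type) model."""
    level = data[0]
    trend = data[1] - data[0]                           # simple initialization
    for actual in data[1:]:
        previous_level = level
        level = alpha * actual + (1 - alpha) * (level + trend)
        trend = beta * (level - previous_level) + (1 - beta) * trend
    return [level + k * trend for k in range(1, horizon + 1)]

sales = [12, 15, 17, 21, 22, 26, 29, 31]                # hypothetical upward-trending series
print(double_exponential_smoothing_forecasts(sales))    # forecasts for the next 3 periods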
Models for Seasonality
Seasonal factors (with no trend) can be incorporated into a forecast by adjusting the level, at, in one of two ways. The seasonal additive model is:
Ft+k = at + St−s+k
and the seasonal multiplicative model is:
Ft+k = at × St−s+k
In both models, St−s+k is the seasonal factor for period t − s + k and s is the number of periods in a season. A “season” can be a year, quarter, month, or even a week, depending on the application. In any case, the forecast for period t + k is adjusted up or down from a level (at) by the seasonal factor. The multiplicative model is more appropriate when the seasonal factors are increasing or decreasing over time. This is evident when the amplitude of the time series changes over time.
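As a small numerical illustration (the level and quarterly seasonal factors below are made-up values, not from the text), the two adjustments differ only in whether the factor is added to or multiplied by the level:

level = 200.0                                       # hypothetical level, at
additive_factors = [30.0, -10.0, -40.0, 20.0]       # hypothetical additive seasonal factors
multiplicative_factors = [1.15, 0.95, 0.80, 1.10]   # hypothetical multiplicative factors

additive_forecasts = [level + s for s in additive_factors]
multiplicative_forecasts = [level * s for s in multiplicative_factors]
print(additive_forecasts)         # one forecast per quarter: level plus factor
print(multiplicative_forecasts)   # one forecast per quarter: level times factor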
Models for Trend and Seasonality
Many time series exhibit both trend and seasonality. Such might be the case for growing sales of a seasonal product. The methods we describe are based on the work of two researchers, C. C. Holt, who developed the basic approach, and P. R. Winters, who extended Holt’s work. Hence, these approaches are commonly referred to as Holt–Winters models. These models combine elements of both the trend and seasonal models described above. The Holt–Winters additive model is based on the equation:
Ft+k = at + bt k + St−s+k
Table 7.3 Forecasting Model Choice
No trend, no seasonality: single moving average or single exponential smoothing
No trend, seasonality: seasonal additive or seasonal multiplicative model
Trend, no seasonality: double moving average or double exponential smoothing
Trend, seasonality: Holt–Winters additive or Holt–Winters multiplicative model
and the Holt–Winters multiplicative model is:
Ft+k = (at + bt k) × St−s+k
The additive model applies to time series with relatively stable seasonality, while the multiplicative model applies to time series whose amplitude increases or decreases over time.
Table 7.3 summarizes the choice of models based on characteristics of the time series.
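To make the two Holt–Winters forecast equations concrete, the Python sketch below evaluates them for hypothetical fitted values of the level, trend, and quarterly seasonal factors; in practice these would come from CB Predictor or from the smoothing recursions in the appendix. The indexing assumes the first factor in each list corresponds to the season of period t + 1.

def holt_winters_additive(level, trend, seasonal, k):
    """F[t+k] = at + bt*k + S[t-s+k]; seasonal[0] is the factor for period t+1."""
    return level + trend * k + seasonal[(k - 1) % len(seasonal)]

def holt_winters_multiplicative(level, trend, seasonal, k):
    """F[t+k] = (at + bt*k) * S[t-s+k]; seasonal[0] is the factor for period t+1."""
    return (level + trend * k) * seasonal[(k - 1) % len(seasonal)]

# Hypothetical fitted values for quarterly data (s = 4):
level, trend = 250.0, 5.0
additive_factors = [30.0, -10.0, -40.0, 20.0]
multiplicative_factors = [1.15, 0.95, 0.80, 1.10]

print([holt_winters_additive(level, trend, additive_factors, k) for k in range(1, 5)])
print([round(holt_winters_multiplicative(level, trend, multiplicative_factors, k), 1) for k in range(1, 5)])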
Choosing and Optimizing Forecasting Models Using CB Predictor
CB Predictor is an Excel add-in for forecasting that is part of the Crystal Ball suite of applications. We introduced Crystal Ball for distribution fitting in Chapter 3. CB Predictor can be used as a stand-alone program for forecasting, and can also be integrated with Monte Carlo simulation, which we discuss in Chapter 10. CB Predictor includes all the time-series forecasting approaches we have discussed. See Excel Note: Using CB Predictor for basic information on using the add-in.