Top 15 Sales Forecasting Interview Questions You Must Prepare 19.Mar.2024

Several organizations and journals are devoted to forecasting. Because research on forecasting comes from many disciplines, efforts have been made since 1980 to unify the field. There is an academic institute (the International Institute of Forecasters), two academic journals (the Journal of Forecasting and the International Journal of Forecasting), and a journal for practitioners (the Journal of Business Forecasting).

Forecasting is concerned with how to collect and process information. Decisions about how to structure a forecasting problem can be important. For example, when should one decompose a problem and address each component separately? Forecasting includes such prosaic matters as obtaining relevant up-to-date data, checking for errors in the data, and making adjustments for inflation, working days, and seasonality. Forecast error sometimes depends more on how information is used than on getting ever more accurate information. The question of what information is needed and how it is best used is determined by the selection of forecasting methods.
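As a concrete illustration of those preparation steps, here is a minimal Python sketch (not from the source; all figures are hypothetical) that deflates nominal sales by a price index and normalizes each month for its number of working days:

```python
# A minimal sketch of two "prosaic" preparation steps: adjusting for inflation
# and for working days. All numbers are hypothetical illustrations.

nominal_sales = [120_000, 125_000, 131_000]   # monthly sales in current dollars
price_index   = [1.00, 1.02, 1.05]            # price index, base month = 1.00
working_days  = [21, 20, 23]                  # working days in each month

# Adjust for inflation: express every month in base-month (real) dollars.
real_sales = [s / p for s, p in zip(nominal_sales, price_index)]

# Adjust for working days: sales per working day makes months of different
# lengths comparable before any seasonal analysis.
sales_per_day = [s / d for s, d in zip(real_sales, working_days)]

for month, value in enumerate(sales_per_day, start=1):
    print(f"Month {month}: {value:,.0f} real dollars per working day")
```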

Forecasting methods and principles have been developed in many different fields, such as statistics, economics, psychology, finance, marketing, and meteorology. The primary concern of researchers in each field is to communicate with other academics in their field. The Forecasting Dictionary has been developed to aid communication among groups.

This piece of common wisdom is supported by research. It is also important to use data that spans a long time period or a wide range of similar situations. Doing so reduces the risk of mistaking short-term variations for fundamental trends, or local anomalies for general findings.

Research on forecasting has produced many changes in recommended practice, especially since the 1960s. Much advice that was formerly given about the best way to generate forecasts has been found to be wrong. For example, the advice to base forecasts on regression models that fit historical time-series data has had a detrimental effect on accuracy.

Sometimes the research findings have been upsetting to academics, such as the discovery that relatively simple models are more accurate than complex ones in many situations. Perhaps the major reason that research has been so important in forecasting is that it has stressed empirical results that compare the forecasting performance of alternative methods.

One of the more important empirical comparisons was the M-competition (Makridakis et al. 1982). The M-competition was followed by others, the most recent being the M3-Competition (Ord, Hibon, and Makridakis 2000). Emphasising empirical findings may appear to be obviously desirable, but the approach is not always adopted.

Following good forecasting practice does not guarantee accurate forecasts on every occasion. One approach you could take to answering critics is to compare the accuracy of your forecasts to a suitable benchmark. Unfortunately, benchmarks are not readily available for all types of forecasting. If there is no benchmark relevant to your forecasts, you will need to show that you followed best forecasting practice. To do this, you can conduct an audit of the forecasting process you used and, if you did adhere to the relevant principles, you will get a good report that you can show critics.
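One simple way to make such a comparison is against a naive benchmark. The sketch below is illustrative only; the figures and the choice of a last-value benchmark are assumptions, since the text does not prescribe a particular benchmark. It compares the mean absolute error of your forecasts with that of a naive forecast that simply repeats the most recent actual:

```python
# A minimal sketch, assuming a naive "last observed value" forecast as the
# benchmark. The numbers are made up for illustration.

actuals        = [102, 98, 110, 107, 115]   # realized values
your_forecasts = [100, 101, 105, 109, 112]  # forecasts issued one period ahead
naive_forecasts = [100] + actuals[:-1]      # last actual observed before each period

def mae(forecasts, outcomes):
    """Mean absolute error of a set of forecasts."""
    return sum(abs(f - o) for f, o in zip(forecasts, outcomes)) / len(outcomes)

your_error  = mae(your_forecasts, actuals)
naive_error = mae(naive_forecasts, actuals)   # benchmark error

print(f"Your MAE : {your_error:.2f}")
print(f"Naive MAE: {naive_error:.2f}")
print("Beats the benchmark" if your_error < naive_error else "Does not beat the benchmark")
```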

Many books have been published about forecasting. For a listing of those published since 1990, along with reviews, see Text/Trade Books. One of the more popular is Makridakis, Wheelwright, and Hyndman (1998); now in its third edition, it describes how to use a variety of methods. The International Symposium on Forecasting brings together practitioners, academics, and software exhibitors in June or July of each year. The purpose of the Principles of Forecasting book is to summarize knowledge about forecasting methods.

Forecasting is concerned with what the future will look like, while planning is concerned with what it should look like. One would usually start by planning. The planning process produces a plan that is, along with information about the environment, an input to the forecasting process. If the organization does not like the forecasts generated by the forecasting process, it can generate other plans until a plan is found that leads to forecasts of acceptable outcomes. Of course, many organizations take a shortcut and merely change the forecast. (This is analogous to a family deciding to change the weather forecast so they can go on a picnic.) Planning and forecasting thus play distinct but complementary roles.

Anyone is free to practice forecasting for most products and in most countries. This has not always been true. Societies have been suspicious of forecasters. In A.D. 357, the Roman Emperor Constantius made a law forbidding anyone from consulting a soothsayer, mathematician, or forecaster. He proclaimed, “…may curiosity to foretell the future be silenced forever.” It is sensible for a person practicing forecasting to have been trained in the most appropriate methods for the problems they face. Expert witnesses who forecast can be expected to be examined on their familiarity with methods. One measure of witness expertise is whether they have published in the area in which they claim expertise. In a recent U.S. Supreme Court ruling, while publication was not accepted as a necessary condition for being an expert witness, it was regarded as an important qualification. The development of well-validated forecasting methods has improved the status of forecasting expertise. Nobel Prizes for Economics have gone to economists, including Engle, Granger, Klein, Leontief, Modigliani, Prescott, Samuelson, and Tinbergen, who have contributed to forecasting methodology.

Many people believe that judgmental revisions improve the forecasts produced by quantitative methods, but such revisions usually reduce accuracy. Nevertheless, people often have useful knowledge about the problem, which is referred to as domain knowledge. One approach to making effective use of domain knowledge is to provide graphic decision support for judgmental forecasting (Edmundson 1990). Another approach is to integrate domain knowledge with statistical methods; for a review of research in this area, see Sanders and Ritzman (2001). The best way to integrate judgment with statistical methods is as an input to the quantitative models. For example, causal-force knowledge can be used to incorporate knowledge about trends into forecasts (Collopy and Armstrong 1992; Armstrong and Collopy 1993).
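As a rough illustration of using domain knowledge as an input rather than as an after-the-fact adjustment, the sketch below damps a statistically estimated trend when an expert judges that causal forces oppose it. This is only a simplified illustration in the spirit of that work, not the published procedure, and all numbers are hypothetical:

```python
# An illustrative sketch, not the published procedure: a domain expert's view of
# causal forces is used as an input that damps a statistically estimated trend.

recent_level = 500.0   # latest observed demand (hypothetical)
fitted_trend = 12.0    # per-period trend estimated from historical data (hypothetical)

# "supporting" means the expert expects causal forces to push in the trend's
# direction; "opposing" means they expect the opposite.
causal_forces = "opposing"
damping = {"supporting": 1.0, "neutral": 0.5, "opposing": 0.0}[causal_forces]

horizon = 4
forecasts = [recent_level + damping * fitted_trend * h for h in range(1, horizon + 1)]
print(forecasts)   # with opposing forces the historical trend is not extrapolated
```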

Forecasting the future of technology is a dangerous enterprise. Schnaars (1989) examined hundreds of technology forecasts. He found a myopia, even among experts, that leads them to view the future in terms of present conditions. Cerf and Navasky (1998) gave interesting examples of errors in expert judgments about the future of technology. Perhaps the most famous is the 1899 call by the US Commissioner of Patents to abolish the Patent Office on the grounds that there was nothing left to invent.

There are many good special-purpose forecasting programs. For descriptions, reviews, and surveys, go to Software. Some programs help the user to conduct validations of ex ante forecasts by making it easy to use successive updating and by providing a variety of error measures. Some programs incorporate more forecasting principles than others. For an assessment of software, see Use of Principles.
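Successive updating (rolling-origin evaluation) can also be done by hand. The sketch below is a minimal illustration, assuming a naive last-value forecaster and made-up data; dedicated software automates this across many methods and error measures:

```python
# A minimal sketch of successive updating: at each forecast origin, use only the
# data available at that point, forecast one step ahead, then record the ex ante
# error against the value that is subsequently observed.

series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]  # hypothetical demand

errors = []
for origin in range(5, len(series)):      # start once a minimum history exists
    history = series[:origin]             # data available at the forecast origin
    forecast = history[-1]                # naive one-step-ahead forecast
    actual = series[origin]               # the value observed afterwards
    errors.append(abs(forecast - actual)) # ex ante absolute error

print(f"Mean absolute ex ante error: {sum(errors) / len(errors):.1f}")
```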

Forecasting methods can be classified first as either subjective or objective. Subjective (judgmental) methods are widely used for important forecasts. Objective methods include extrapolation (such as moving averages, linear regression against time, or exponential smoothing) and econometric methods (typically using regression techniques to estimate the effects of causal variables). To see how forecasting methods relate to one another, see the Methodology Tree. 
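For example, simple exponential smoothing, one of the extrapolation methods mentioned above, can be expressed in a few lines. This sketch is illustrative; the smoothing constant and the data are assumptions:

```python
# A minimal sketch of one objective (extrapolation) method: simple exponential
# smoothing. The smoothing constant and the series are hypothetical.

def simple_exponential_smoothing(series, alpha=0.3):
    """Return the one-step-ahead forecast after smoothing the whole series."""
    level = series[0]                                  # initialize at the first observation
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level    # weight new data by alpha
    return level                                       # forecast for the next period

sales = [230, 245, 238, 252, 261, 249, 270]   # hypothetical monthly unit sales
print(f"Next-period forecast: {simple_exponential_smoothing(sales):.1f}")
```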

The field of forecasting is concerned with approaches to determining what the future holds. It is also concerned with the proper presentation and use of forecasts. The terms “forecast”, “prediction”, “projection”, and “prognosis” are typically used interchangeably. Forecasts may be conditional; that is, if policy A is adopted then X is likely, but if B is adopted then Y is most likely to occur. Often forecasts are of future values of a time series, for example, the number of babies that will be born in a year or the likely demand for compact cars. Alternatively, forecasts can be of one-off events, such as the outcome of a union-management dispute or the performance of a new recruit. Forecasts can also be of distributions, such as the locations of terrorist attacks or the occurrence of heart attacks among different age cohorts. The field of forecasting includes the study and application of judgment as well as of quantitative (statistical) methods.

The belief that people’s decisions are a reflection of their personality rather than a common response to the situation they are in is widely held and has been termed the “fundamental attribution error”. The question is an empirical one: the conflicts that have been used in research, which involved many extraordinary people, were forecast well by structured analogies and by simulated interaction.