Businesses, investors, and public institutions depend on accurate short-term economic forecasts (estimates of the current state of the economy) for a number of reasons: official statistical data is often released with unexpected delays, and organizations need reliable projections of what is likely to happen in the next month, quarter, or year.
Organizations that use forecasts from international agencies to support planning and operational decisions need to operate on accurate information. Here are some of the summary findings of our research into the accuracy of international forecasting agencies.
Forecasting agencies favor estimates that underestimate volatility. Though their caution is understandable, these projections often miss periods of rapid growth and contraction alike, and they tend to underestimate the magnitude of change in both directions. In fact, professional forecasters don’t perform much better than simply using the prior year’s value as the forecast for the next year.
Economic forecasts depend on analysis of the current phase of the economic cycle, so one might expect that accumulated experience and major advances in research technology would have yielded a significant increase in accuracy. This hasn’t been the case: we still almost invariably see errors spike during crises, recessions, and other abrupt changes.
For instance, the average absolute errors of agency forecasts in the post-crisis period of 2011–2016 are about the same as in the pre-crisis period. The data show the same pattern for forecasts from the late 1990s and early 2000s, both in general and for specific country groups. Projections for the unemployment rate actually seem to deteriorate after 2009 compared to the pre-crisis period.
This all holds true despite exponential advancements in analytics, data collection, and processing power.
These charts illustrate the mean absolute error of one- and two-year-ahead forecasts from 1998–2016 for real GDP growth, CPI inflation, and the unemployment rate.
“Agencies make developed and high-income nations appear much more economically stable than they are: They underestimate volatility by about 50% – 100%.”
All major forecasting agencies tend to underestimate volatility.
On average, forecasters underestimate the variation (standard deviation) in GDP growth by about 45% compared to the actual variation. They also underestimate inflation variation by about 25%.
This tendency applies across all groups of countries, regardless of the forecasting agency. The bias also isn’t isolated to extreme events (i.e., unusually rapid or large-scale changes).
Figure 3 shows that the mean absolute deviation of actual GDP growth values from their period average is substantially higher than the mean absolute deviation of one-year-ahead forecasts from the same period average.
Moreover, this pattern isn’t isolated to the case of extreme changes—it’s consistent across the board. When it comes to unemployment, though, these trends are almost invisible.
Notes for reading this chart:
- σF, σA – standard deviations of forecasts and actual estimates, respectively; δ(σF) – relative error of the standard deviation of forecasts (underestimation of actual variation if positive, overestimation of actual variation if negative).
- The limited 90% distribution is obtained by trimming 5% of observations from each tail of the initial distribution.
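As a rough illustration of the δ(σF) metric above, the sketch below computes the relative underestimation of volatility from paired forecast and actual series. The series are invented, and normalizing by the forecast's own volatility is an assumption for illustration; the report does not spell out its exact formula.

```python
import statistics

def volatility_underestimation(forecasts, actuals):
    """Relative error of standard deviations, delta(sigma_F).

    Positive values mean the forecasts understate actual variation.
    Normalizing by forecast volatility is an assumption, not the
    report's confirmed formula.
    """
    sigma_f = statistics.stdev(forecasts)
    sigma_a = statistics.stdev(actuals)
    return (sigma_a - sigma_f) / sigma_f

# Hypothetical annual GDP growth series (percent); the forecasts
# smooth out the swings present in the actual data.
actual = [2.5, -0.3, 3.1, 1.8, -1.2, 2.9]
forecast = [2.2, 0.5, 2.6, 1.7, 0.0, 2.4]

delta = volatility_underestimation(forecast, actual)
print(f"volatility underestimated by {delta:.0%}")
```

With these made-up numbers, the forecasts understate actual volatility by roughly two thirds, in the same spirit as the 50%–100% range quoted above.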
Forecasts are more accurate for developing countries
Agencies make developed and high-income nations appear much more economically stable than they are: They underestimate volatility by about 50% – 100%. Because states and businesses adjust policy based in large part on these forecasts, this can have serious negative consequences – especially when it comes to anticipating economic shocks.
Despite the high proportion of accurate predictions of directional change, we note that, depending on the region, in 20% – 30% of cases agencies gave no indication of slowing GDP growth rates. In other words, slowdowns weren’t predicted even at a qualitative level. It’s instructive to consider several vivid examples of such cases:
- In the autumn of 2008, after one of the most acute phases of the financial crisis, the IMF, the U.N., and the European Commission forecast Canada’s GDP growth for 2009 at 1.17%, 0.8% and 0.3% respectively. In reality, Canada’s GDP in 2009 declined by 2.5%.
- In the same period, the IMF and the United Nations projected Japan’s 2009 GDP growth would be 0.47% and 0.5%, respectively. This suggested a slight acceleration in growth compared to the end of 2008, but the actual change was -5.2% (according to 2010 estimates).
“We can classify forecasts that aren’t significantly different from the previous actual value as ‘noise.’ However, if agencies forecast a significant amount of change, that’s a ‘signal’ we should pay attention to.”
The naive method
The accuracy of agency forecasts is only slightly higher than that of forecasts based on the previous year’s actual value (the “naive method”). In fact, in a number of cases, agency errors are even higher than those of the naive method.
In the case of annual projections of year-ahead U.S. GDP growth from 1997 to 2016, forecasting bodies performed only 0.36 percentage points better than the naive method; if we exclude 2009-2010, they performed with the same accuracy as the naive method.
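The naive-method comparison can be sketched in a few lines. The figures below are hypothetical, chosen only to show the mechanics; they are not the actual U.S. GDP or agency series.

```python
def mean_absolute_error(forecasts, actuals):
    """Average absolute gap between forecast and actual values."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

# Hypothetical annual GDP growth, percent (first value is the base year).
actual = [2.7, 3.8, 1.0, -2.5, 2.6, 1.6]

# Naive method: the forecast for year t is simply the actual value for t-1.
naive = actual[:-1]
# Hypothetical agency forecasts for the same target years.
agency = [3.2, 2.0, 0.0, 1.5, 2.0]
targets = actual[1:]

mae_naive = mean_absolute_error(naive, targets)
mae_agency = mean_absolute_error(agency, targets)
print(f"naive MAE: {mae_naive:.2f} pp, agency MAE: {mae_agency:.2f} pp")
```

Note how the naive method's error is dominated by the sharp 2009-style reversal in the middle of the series, which is exactly the kind of turning point discussed below.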
That said, the agencies’ advantage over the naive method grows at turning points, where trends change direction (not a surprising finding). Although errors grow in absolute value at these moments, in the majority of turning points the agencies do anticipate the change in direction.
From the point of view of information theory, then, we can classify forecasts that aren’t significantly different from the previous actual value as “noise.” However, if agencies forecast a significant amount of change, that’s a “signal” we should pay attention to. It’s also likely we can extract additional useful information from the revisions and clarifications agencies make to their forecasts.
“Forecasters almost always compare results with those of other organizations, and they tend to avoid publishing significantly different estimates.”
No agency performs better for particular indicators or countries
Even though each agency conducts its own independent analysis, the published forecasts are very similar. The correlation coefficients between their errors exceed 0.9 in most cases, and there is a systematic absence of significant differences in the predicted direction of indicator change.
Forecasters converge not only on the direction of change but also on its magnitude. This is especially noticeable when we compare forecasts across all overlapping subsamples. One explanation for this convergence is that forecasters almost always compare results with those of other organizations, and they tend to avoid publishing significantly different estimates.
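To make the error-correlation claim concrete, here is a minimal sketch that computes the Pearson correlation between two agencies' forecast-error series; the error values are invented for illustration, not drawn from the study's data.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical forecast errors (forecast minus actual, percentage points)
# for the same target years from two different agencies.
errors_a = [0.5, -1.2, 2.0, -0.3, 1.1]
errors_b = [0.4, -1.0, 2.3, -0.5, 0.9]

r = pearson(errors_a, errors_b)
print(f"error correlation: {r:.2f}")
```

When two agencies' errors move together this tightly, consulting a second forecast adds little independent information beyond the first.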
Accurate forecasting is critical for strategy and planning efforts across both the public and private sectors, and the work done by macroeconomic forecasting agencies is extremely important. Our goal in building this analysis is to educate professionals who use forecasts on some of the underlying considerations around forecast performance and accuracy, so they can optimally incorporate this data into their organizational decision-making processes.
We are building upon this research to identify ways that we can build more accurate forecasting models. One possible outcome is a forecast model that both incorporates the best-performing aspects of each agency’s forecasts and identifies areas where the naive method, or other internal forecasting methods, should be used instead of work from the global agencies. This is one of many examples where we believe that Knoema’s broad approach to integrated data can help organizations build an accurate picture of the markets and environments in which they operate.