Electoral Incentives and Information Content in Macroeconomic Forecasts

Abstract: Essay I (with Davide Cipullo): This essay introduces macroeconomic forecasters as new political agents and argues that they use their forecasts to influence voting outcomes. It develops a probabilistic voting model in which voters lack complete information about the future state of the economy and rely on professional forecasters when forming beliefs. The model predicts that forecasters with economic interests (stakes) and influence optimally publish biased forecasts ahead of a referendum. The theory is tested using data surrounding the Brexit referendum. The results show that forecasters with stakes and influence released more pessimistic, and ultimately less accurate, estimates of GDP growth conditional on the Leave outcome than other forecasters did.

Essay II (with Davide Cipullo): This essay documents the existence of Political Forecast Cycles. A theoretical model of political selection shows that governments release overly optimistic GDP growth forecasts ahead of elections to increase their reelection probability. The theory is tested using forecast data from the United States, the United Kingdom, and Sweden. The results confirm key model predictions and show that governments overestimate short-term GDP growth by 10 to 13 percent during campaign periods. Moreover, the bias is larger when the incumbent is neither term-limited nor constrained by a parliament led by the opposition. Furthermore, election timing determines the size of the bias at different forecast horizons.

Essay III: This essay assesses to what extent forecasters use their competitors’ forecasts efficiently. Empirical results from a large panel of forecasters suggest that forecasters underuse information from their competitors when forecasting GDP growth and inflation. The results also show that forecasters pay more attention to competitors when releasing short-term forecasts than when releasing medium-term forecasts. A belief-updating model with noisy and private information supports the underuse interpretation and predicts that it is optimal to pay sizable attention to competitors’ work. Furthermore, the essay shows that a revision-cost model can match the observed behavior only if asymmetric horizon discounting between the cost of revisions and the loss from forecast errors is assumed.

Essay IV (with Michael K. Andersson and Ted Aranki): This essay proposes a method to account for differences in release dates when evaluating an unbalanced panel of forecasters. Cross-institutional forecast evaluations may be severely distorted because forecasts are made at different points in time and thus with different amounts of information. The proposed method computes the timing effect and each forecaster’s ability (performance) simultaneously. Simulations demonstrate that evaluations that do not adjust for these differences in information may be misleading. The method is also applied to a real-world data set of 10 Swedish forecasters, and the results show that the forecasters’ ability ranking is affected by the proposed adjustment.