On Using Predictive-ability Tests in the Selection of Time-series Prediction Models: A Monte Carlo Evaluation

Mauro Costantini, Robert Kunst

Publication type: Working paper

Abstract

Comparative ex-ante prediction experiments over expanding subsamples are a popular tool for the task of selecting the best forecasting model class in finite samples of practical relevance. Flanking such a horse race with predictive-accuracy tests, such as the Diebold-Mariano (DM) test (Diebold and Mariano, 1995), tends to increase support for the simpler structure. We are concerned with the question of whether such simplicity boosting actually benefits predictive accuracy in finite samples. We consider two variants of the DM test, one with naive normal critical values and one with bootstrapped critical values; the predictive-ability test by Giacomini and White (2006), which continues to be valid in nested problems; the F test by Clark and McCracken (2001); and model selection via the AIC as a benchmark strategy. Our Monte Carlo simulations focus on basic univariate time-series specifications, such as linear (ARMA) and nonlinear (SETAR) generating processes.
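
For readers unfamiliar with the first test mentioned in the abstract, the following is a minimal sketch of the DM statistic with "naive" normal critical values, for one-step-ahead forecast errors under squared-error loss. The function name, the loss choice, and the rectangular truncation of the long-run variance are illustrative assumptions, not the paper's implementation, which additionally considers bootstrapped critical values and the Giacomini-White and Clark-McCracken tests.

    import numpy as np
    from scipy import stats

    def diebold_mariano(e1, e2, h=1):
        """Illustrative Diebold-Mariano (1995) statistic for equal predictive
        accuracy of two forecast-error series e1, e2, using squared-error loss
        and autocovariances up to lag h-1 in the long-run variance (one common
        convention; implementations differ in these details)."""
        e1, e2 = np.asarray(e1, dtype=float), np.asarray(e2, dtype=float)
        d = e1**2 - e2**2                      # loss differential
        n = d.size
        d_bar = d.mean()
        # long-run variance of the mean loss differential
        lrv = np.mean((d - d_bar)**2)
        for k in range(1, h):
            gamma_k = np.mean((d[k:] - d_bar) * (d[:-k] - d_bar))
            lrv += 2.0 * gamma_k
        dm_stat = d_bar / np.sqrt(lrv / n)
        # "naive" normal critical values: two-sided p-value from N(0, 1)
        p_value = 2.0 * (1.0 - stats.norm.cdf(abs(dm_stat)))
        return dm_stat, p_value

A rejection with dm_stat < 0 favours the first model, dm_stat > 0 the second; in nested comparisons the normal reference distribution is not strictly valid, which is one motivation for the bootstrapped and alternative tests studied in the paper.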
Original language: English
Number of pages: 38
Volume: 341
Publication status: Published - Jul 2018

Publication series

Series: IHS Economics Series: Working Paper

Austrian Fields of Science 2012

  • 502025 Econometrics

Keywords

  • Forecasting
  • Simulation
  • Prediction accuracy test
