Resource title

Bootstrap Procedures for Recursive Estimation Schemes With Applications to Forecast Model Selection


Resource description

In recent years it has become apparent that many of the classical testing procedures used to select amongst alternative economic theories and models are not realistic. In particular, researchers have become increasingly aware that parameter estimation error and data dependence play a crucial role in the limiting distributions of test statistics, a role which had hitherto been largely ignored. Given that one of the primary ways of comparing different models and theories is via predictive accuracy tests, it is perhaps not surprising that a large literature on the topic has developed over the last 10 years, including, for example, important papers by Diebold and Mariano (1995), West (1996), and White (2000). In this literature, it is quite common to compare multiple models (which are possibly all misspecified, i.e., all approximations of some unknown true model) in terms of their out-of-sample predictive ability, for a given loss function. Our objectives in this paper are twofold. First, we introduce block bootstrap techniques that are (first-order) valid in recursive estimation frameworks. Second, we present two applications in which predictive accuracy tests are made operational using our new bootstrap procedures. One application outlines a consistent test for out-of-sample nonlinear Granger causality; the other outlines a test for selecting amongst multiple alternative forecasting models, all of which may be viewed as approximations of some unknown underlying model. More specifically, our examples extend the White (2000) reality check to the case of non-vanishing parameter estimation error, and extend the integrated conditional moment (ICM) tests of Bierens (1982, 1990) and Bierens and Ploberger (1997) to the case of out-of-sample prediction.
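To fix ideas, the recursive estimation framework the abstract refers to can be sketched as follows. This is a minimal illustration, not the authors' code: the AR(1) model, the historical-mean benchmark, squared-error loss, and the Diebold-Mariano-type t-statistic are all illustrative assumptions chosen for concreteness.

```python
import numpy as np

def recursive_loss_differentials(y, R):
    """Recursive scheme: re-estimate on the expanding sample y[:t] for
    t = R..T-1, forecast y[t] one step ahead, and return the sequence of
    squared-error loss differentials between an AR(1) model (fitted by OLS,
    no intercept) and a naive historical-mean benchmark."""
    d = []
    for t in range(R, len(y)):
        train = y[:t]
        # Model 1: AR(1) slope re-estimated on each expanding sample.
        X, Y = train[:-1], train[1:]
        beta = np.dot(X, Y) / np.dot(X, X)
        f1 = beta * train[-1]
        # Model 2: naive benchmark (historical mean of the sample so far).
        f2 = train.mean()
        d.append((y[t] - f1) ** 2 - (y[t] - f2) ** 2)
    return np.array(d)

rng = np.random.default_rng(0)
# Simulated AR(1) data, so Model 1 should tend to produce smaller losses.
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()

d = recursive_loss_differentials(y, R=100)
P = len(d)  # number of out-of-sample forecasts
dm_stat = np.sqrt(P) * d.mean() / d.std(ddof=1)  # DM-type t-statistic
```

The point the paper makes is that because the parameters are re-estimated at every step of this loop, estimation error does not vanish from the limiting distribution of statistics like `dm_stat`, which is what the proposed bootstrap is designed to account for.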
Of note is that in both of these examples, appropriate re-centering of the bootstrap score is shown to be required in order to ensure that the tests are properly sized, and the need for such re-centering arises quite naturally when testing hypotheses of predictive accuracy. The results of a Monte Carlo investigation of the ICM test suggest that the bootstrap procedures proposed in this paper yield tests with reasonable finite-sample properties for samples with as few as 300 observations.
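The re-centering idea can be illustrated with a simple circular block bootstrap of a mean loss differential. This is only a sketch under simplifying assumptions: the statistic here is a plain sample mean (the paper's re-centering applies to the bootstrap score and also accounts for recursive parameter estimation error, which this toy version ignores), and the block length and number of replications are arbitrary choices.

```python
import numpy as np

def block_bootstrap_pvalue(d, block_len=10, B=499, seed=0):
    """One-sided bootstrap p-value for H0: E[d] >= 0 against E[d] < 0,
    where d is a serially dependent sequence of loss differentials.
    Each bootstrap mean is re-centered at the full-sample mean so that
    the resampled statistic mimics the null distribution."""
    rng = np.random.default_rng(seed)
    P = len(d)
    n_blocks = int(np.ceil(P / block_len))
    stat = np.sqrt(P) * d.mean()
    boot_stats = np.empty(B)
    for b in range(B):
        # Circular block bootstrap: draw random block start points and
        # wrap indices around the end of the sample.
        starts = rng.integers(0, P, size=n_blocks)
        idx = (starts[:, None] + np.arange(block_len)) % P
        d_star = d[idx].ravel()[:P]
        # Re-centering: subtract the original sample mean, so the bootstrap
        # statistic is centered at zero regardless of the true mean of d.
        boot_stats[b] = np.sqrt(P) * (d_star.mean() - d.mean())
    return (boot_stats <= stat).mean()
```

Without the re-centering step, the bootstrap distribution would be centered at the (possibly nonzero) sample mean rather than at the null value, and the resulting test would not be properly sized; this is the intuition behind the correction the abstract describes.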

Resource author

Valentina Corradi, Norman R. Swanson

Resource license

Adapt in accordance with the presented license agreement and credit the original authors.