Resource title

A Test for Comparing Multiple Misspecified Conditional Distributions

Resource image

Resource description

This paper introduces a conditional Kolmogorov test, in the spirit of Andrews (1997), that allows for the comparison of multiple misspecified conditional distribution models in the case of dependent observations. A conditional confidence interval version of the test is also discussed. Model accuracy is measured using a distributional analog of mean square error, in which the squared (approximation) error associated with a given model, say model i, is measured as the average over U of E[(F_i(u|Z^t, θ_i†) − F_0(u|Z^t, θ_0))^2], where U is a possibly unbounded set on the real line, Z^t is the conditioning information set, F_i is the distribution function of a particular candidate model, and F_0 is the true (unknown) distribution function. When comparing more than two models, a "benchmark" model is specified, and the test is constructed along the lines of the "reality check" of White (2000). Valid asymptotic critical values are obtained via a version of the block bootstrap which properly captures the effect of parameter estimation error. The results of a small Monte Carlo experiment indicate that the conditional confidence interval version of the test has reasonable finite sample properties, even for samples with as few as 60 observations.
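The accuracy measure and the block bootstrap step can be illustrated with a small, self-contained sketch. The Python code below is not the authors' test: it uses hypothetical Gaussian AR(1) candidate models, the indicator 1{y_t <= u} as a conditionally unbiased proxy for the true conditional distribution, a finite grid standing in for U, and a naive moving-block bootstrap that ignores parameter estimation error. All function names, the simulated data, the block length, and the grid are assumptions chosen for illustration only.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def squared_error_loss(y, y_lag, cond_cdf, u_grid):
    """Average over u in U and over t of (cond_cdf(u | y_lag) - 1{y <= u})^2.

    Since E[1{y_t <= u} | Z^t] = F_0(u | Z^t), this indicator-based loss differs
    from the distributional MSE analog only by a term common to all candidate
    models, so loss *differences* across models remain comparable.
    """
    ind = (y[:, None] <= u_grid[None, :]).astype(float)       # T x |U| indicators
    fitted = cond_cdf(u_grid[None, :], y_lag[:, None])        # T x |U| candidate CDF values
    return np.mean((fitted - ind) ** 2)

def block_bootstrap_stat(y, stat_fn, block_len=6, n_boot=199):
    """Naive moving-block bootstrap of a statistic of a dependent series."""
    T = len(y)
    n_blocks = int(np.ceil(T / block_len))
    stats = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, T - block_len + 1, size=n_blocks)
        idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:T]
        stats[b] = stat_fn(y[idx])
    return stats

# Simulated dependent data: an AR(1) series (an assumption for illustration).
T = 200
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
y_lag = np.roll(y, 1); y_lag[0] = 0.0

u_grid = np.linspace(-3.0, 3.0, 25)   # finite grid standing in for the set U

# Two misspecified candidates: Gaussian conditional CDFs with different AR slopes.
cdf_benchmark = lambda u, ylag: norm.cdf(u - 0.3 * ylag)
cdf_alternative = lambda u, ylag: norm.cdf(u - 0.7 * ylag)

loss_bench = squared_error_loss(y, y_lag, cdf_benchmark, u_grid)
loss_alt = squared_error_loss(y, y_lag, cdf_alternative, u_grid)
diff = loss_bench - loss_alt          # positive values favor the alternative model

# Bootstrap the loss differential to gauge its variability under dependence.
def diff_fn(yb):
    yb_lag = np.roll(yb, 1); yb_lag[0] = 0.0
    return (squared_error_loss(yb, yb_lag, cdf_benchmark, u_grid)
            - squared_error_loss(yb, yb_lag, cdf_alternative, u_grid))

boot = block_bootstrap_stat(y, diff_fn, block_len=6, n_boot=199)
print(f"loss differential = {diff:.4f}, bootstrap 95% band = "
      f"[{np.quantile(boot, 0.025):.4f}, {np.quantile(boot, 0.975):.4f}]")

In the paper's setting the bootstrap must additionally capture parameter estimation error and, with more than two candidates, the statistic is the maximum of the loss differentials against the benchmark, as in White's (2000) reality check; the sketch above omits both refinements.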

Resource author

Valentina Corradi, Norman R. Swanson

Resource publisher

Resource publish date

Resource language

eng

Resource content type

text/html

Resource resource URL

http://hdl.handle.net/10419/23171

Resource license

Adapt in accordance with the stated license agreement and credit the original author.