A Consistent Test for Nonlinear Out of Sample Predictive Accuracy

Paper number: 00/12

Paper date: September 2000

Year: 2000

Paper Category: Working Paper

Authors

Valentina Corradi and Norman R. Swanson

Abstract

In this paper, we draw on both the consistent specification testing and the predictive ability testing literatures and propose a test for predictive accuracy which is consistent against generic nonlinear alternatives. Broadly speaking, given a particular reference model, suppose the objective is to test whether there exists any alternative model, among an infinite number of alternatives, that has better predictive accuracy than the reference model, for a given loss function. A typical example is the case in which the reference model is a simple autoregressive model and the objective is to check whether a more accurate forecasting model can be constructed by including possibly unknown (non)linear functions of the past of the process or of the past of some other process(es). We propose a statistic which is similar in spirit to that of White (2000), although our approach differs from his in that we allow for an infinite number of competing models that may be nested. In addition, we allow for non-vanishing parameter estimation error. In order to construct valid asymptotic critical values, we implement a conditional p-value procedure which extends the work of Inoue (1999) by allowing for non-vanishing parameter estimation error. In a series of Monte Carlo experiments, we focus on a version of our test which can be interpreted as an out of sample nonlinear Granger causality test, and find that empirical size is very close to nominal size, while power increases quite sharply when the sample size is increased from 400 to 600 observations.
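To make the setup concrete, the sketch below illustrates the kind of out of sample comparison the abstract describes: a reference autoregressive model is compared against an alternative that adds a nonlinear function of the past of another process, with forecasts produced recursively and evaluated under quadratic loss. This is only a minimal illustration of the forecasting comparison; the model form, the tanh transform, the recursive scheme, and the sample split are assumptions for exposition, and the code does not implement the paper's actual test statistic or the conditional p-value procedure.

```python
import numpy as np

def recursive_oos_mse(y, x, R):
    """Recursive out-of-sample MSE comparison (illustrative only).

    Reference model:   y_t = a0 + a1*y_{t-1} + e_t
    Alternative model: y_t = b0 + b1*y_{t-1} + b2*tanh(x_{t-1}) + u_t

    Models are re-estimated by least squares at each forecast origin,
    using data up to that point (recursive scheme), and one-step-ahead
    squared errors are averaged over the last T - R periods.
    """
    T = len(y)
    errs_ref, errs_alt = [], []
    for t in range(R, T):
        # Regressors and targets built from observations 1, ..., t-1
        y_lag, x_lag, target = y[:t-1], x[:t-1], y[1:t]
        X_ref = np.column_stack([np.ones(t - 1), y_lag])
        X_alt = np.column_stack([np.ones(t - 1), y_lag, np.tanh(x_lag)])
        beta_ref, *_ = np.linalg.lstsq(X_ref, target, rcond=None)
        beta_alt, *_ = np.linalg.lstsq(X_alt, target, rcond=None)
        # One-step-ahead forecasts of y[t]
        f_ref = beta_ref @ np.array([1.0, y[t-1]])
        f_alt = beta_alt @ np.array([1.0, y[t-1], np.tanh(x[t-1])])
        errs_ref.append((y[t] - f_ref) ** 2)
        errs_alt.append((y[t] - f_alt) ** 2)
    return np.mean(errs_ref), np.mean(errs_alt)

# Hypothetical example: x nonlinearly Granger-causes y
rng = np.random.default_rng(0)
T = 600
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.3 * y[t-1] + 0.8 * np.tanh(x[t-1]) + rng.standard_normal()

mse_ref, mse_alt = recursive_oos_mse(y, x, R=300)
print(f"out-of-sample MSE, reference AR(1):        {mse_ref:.3f}")
print(f"out-of-sample MSE, nonlinear alternative:  {mse_alt:.3f}")
```

In this simulated example the alternative model should attain a lower out of sample MSE, which is the kind of predictive improvement the proposed test is designed to detect; the paper's test additionally accounts for generic (unknown) nonlinear alternatives and non-vanishing parameter estimation error via the conditional p-value procedure.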
