We develop a method based on polar coordinates to investigate the existence of moments for instrumental variables and related estimators in the linear regression model. For generalized IV estimators, we obtain familiar results. For JIVE, we obtain the new result that this estimator has no moments at all. Simulation results illustrate the consequences of its lack of moments. Copyright Royal Economic Society 2007
The bootstrap is a statistical technique that is increasingly widely used in econometrics. Although it can yield very reliable inference, certain precautions must be taken to ensure that it does. Two “Golden Rules” are formulated that, if observed, help to obtain the best the bootstrap can offer. Bootstrapping always involves setting up a bootstrap data-generating process (DGP). The main types of bootstrap DGP in current use are discussed, with examples of their use in econometrics. The ways in which the bootstrap can be used to construct confidence sets differ somewhat from its use in hypothesis testing. The relation between the two sorts of problem is discussed.
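One of the most common bootstrap DGPs for the linear regression model is the residual bootstrap: regressors are held fixed and errors are resampled from the (rescaled) OLS residuals. A minimal sketch, with simulated data and a simple degrees-of-freedom rescaling, both of which are illustrative assumptions rather than anything prescribed in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from a simple linear model (assumption for illustration)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

# OLS fit
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
# Rescale residuals by a degrees-of-freedom correction (one common refinement)
resid_rescaled = resid * np.sqrt(n / (n - X.shape[1]))

def bootstrap_dgp():
    """One draw from the residual-bootstrap DGP: regressors held fixed,
    errors drawn with replacement from the rescaled OLS residuals."""
    e_star = rng.choice(resid_rescaled, size=n, replace=True)
    return X @ beta + e_star

y_star = bootstrap_dgp()
print(y_star.shape)  # (100,)
```

Each call to `bootstrap_dgp` yields one bootstrap sample; re-estimating the model on many such samples gives the bootstrap distribution of whatever statistic is of interest.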
Two procedures are proposed for estimating the rejection probabilities (RPs) of bootstrap tests in Monte Carlo experiments without actually computing a bootstrap test for each replication. These procedures are only about twice as expensive (per replication) as estimating RPs for asymptotic tests. A new procedure is also proposed for computing bootstrap P values that will often be more accurate than ordinary ones. This “fast double bootstrap” (FDB) is closely related to the double bootstrap, but it is far less computationally demanding. Simulation results for three different cases suggest that the FDB can be very useful in practice.
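The FDB idea can be sketched as follows: each first-level bootstrap replication generates just one second-level resample, and the FDB P value compares the first-level statistics with an appropriate quantile of the second-level ones. A minimal sketch for a two-sided t test of a mean, where the test statistic, the null imposed by recentring, and `B = 399` are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def t_stat(x, mu0=0.0):
    # Ordinary t statistic for H0: E[x] = mu0
    n = len(x)
    return (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))

def fdb_pvalue(x, B=399, mu0=0.0):
    """Fast-double-bootstrap P value for a two-sided t test (sketch).

    One second-level resample per first-level replication, so the cost
    is roughly twice that of an ordinary single bootstrap.
    """
    tau_hat = abs(t_stat(x, mu0))
    x0 = x - x.mean() + mu0          # impose the null by recentring
    tau1 = np.empty(B)               # first-level bootstrap statistics
    tau2 = np.empty(B)               # one second-level statistic each
    for j in range(B):
        xb = rng.choice(x0, size=len(x0), replace=True)
        tau1[j] = abs(t_stat(xb, mu0))
        # Second level: resample from the recentred first-level sample
        xbb = rng.choice(xb - xb.mean() + mu0, size=len(x0), replace=True)
        tau2[j] = abs(t_stat(xbb, mu0))
    p1 = np.mean(tau1 > tau_hat)     # ordinary bootstrap P value
    q = np.quantile(tau2, 1.0 - p1)  # (1 - p1) quantile of 2nd level
    return np.mean(tau1 > q)         # FDB P value

x = rng.chisquare(3, size=50) - 3.0  # skewed data satisfying H0: mean 0
p = fdb_pvalue(x)
print(0.0 <= p <= 1.0)
```

The same two-level structure applies to any bootstrap test; only the statistic and the bootstrap DGP change.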
We perform an extensive series of Monte Carlo experiments to compare the performance of two variants of the 'jackknife instrumental variables estimator', or JIVE, with that of the more familiar 2SLS and LIML estimators. We find no evidence to suggest that JIVE should ever be used. It is always more dispersed than 2SLS, often very much so, and it is almost always inferior to LIML in all respects. Interestingly, JIVE seems to perform particularly badly when the instruments are weak. Copyright © 2006 John Wiley & Sons, Ltd.
We introduce the concept of the bootstrap discrepancy, which measures the difference in rejection probabilities between a bootstrap test based on a given test statistic and a (usually infeasible) test based on the true distribution of the statistic. We show that the bootstrap discrepancy is of the same order of magnitude under the null hypothesis and under non-null processes described by a Pitman drift. However, complications arise in the measurement of power. If the test statistic is not an exact pivot, critical values depend on which data-generating process (DGP) is used to determine the distribution under the null hypothesis. We propose as the proper choice the DGP that minimizes the bootstrap discrepancy. We also show that, under an asymptotic independence condition, the power of both bootstrap and asymptotic tests can be estimated cheaply by simulation. The theory of the paper and the proposed simulation method are illustrated by Monte Carlo experiments using the logit model.
It has been shown in previous work that bootstrapping the J test for nonnested linear regression models dramatically improves its finite-sample performance. We provide evidence that a more sophisticated bootstrap procedure, which we call the fast double bootstrap, produces a very substantial further improvement in cases where the ordinary bootstrap does not work as well as it might. This FDB procedure is only about twice as expensive as the usual single bootstrap.
The J test applied to non-nested regression models often performs poorly in its asymptotic version but very well in its bootstrap form. We provide a theoretical analysis that explains both phenomena. We propose a modified version of the test which, in its bootstrap form, turns out to perform even better than the J test. The excellent performance of the bootstrap tests is demonstrated by Monte Carlo experiments, which are made very precise by our theoretical results, which allow computing time to be reduced substantially.
Differential Geometry has become a standard tool in the analysis of statistical models, offering a deeper appreciation of existing methodologies and highlighting issues that can be hidden in an algebraic development of a problem. This volume is the first to apply these techniques to econometrics. An introductory chapter provides a brief tutorial for those unfamiliar with the tools of Differential Geometry. The following chapters apply geometric methods to practical problems and offer insight into questions of econometric inference.