
Russell Davidson

Researcher, Faculté d'économie et de gestion (FEG), McGill University

Econometrics, finance and mathematical methods
Status
Professor
Research area(s)
Social choice, Econometrics
Thesis
1977, University of British Columbia
Download
CV
Address

AMU - AMSE
5-9 Boulevard Maurice Bourdet, CS 50498
13205 Marseille Cedex 1

Abstract: This article derives the (asymptotic) variances and covariances – and hence standard errors – of quantile means and quantile shares in terms of explicit formulas that are distribution-free and easily computable. The article then develops a toolbox of quantile-based disaggregative inequality measures, based on the means and shares, which allow for detailed inferential analysis of income distributions in a straightforward unified framework. The analytical formulas are applied to Canadian Census public-use microdata files on workers' earnings for 2000 and 2005. The results highlight the statistical significance of how upper-earnings levels have advanced beyond middle earnings, how much the share of mid-range earnings has eroded over even a five-year period, and how decile mean growth rates for women were everywhere higher than for men – except at the top decile, where the opposite phenomenon was highly significant.
Keywords: Quantile share, Quantile means, Income shares, Distribution-free inference, Disaggregative measures
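The decile means and shares whose standard errors the article derives can themselves be estimated from a sample in a few lines. The sketch below (plain NumPy; the function name and equal-size grouping are illustrative choices, not the paper's) computes the point estimates only, not the inferential machinery that is the paper's contribution.

```python
import numpy as np

def decile_means_and_shares(income):
    """Point estimates of decile means and decile income shares
    (illustrative helper; standard errors are not computed here)."""
    income = np.sort(np.asarray(income, dtype=float))
    groups = np.array_split(income, 10)            # ten (near-)equal-size groups
    means = np.array([g.mean() for g in groups])   # decile means
    shares = np.array([g.sum() for g in groups]) / income.sum()  # decile shares
    return means, shares
```

By construction the shares sum to one and, since the sample is sorted, the decile means are nondecreasing.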
Abstract: The standard forms of bootstrap iteration are very computationally demanding. As a result, there have been several attempts to alleviate the computational burden by use of approximations. In this paper, we extend the fast double bootstrap of Davidson and MacKinnon (2007) to higher orders of iteration, and provide algorithms for their implementation. The new methods make computational demands that increase only linearly with the level of iteration, unlike standard procedures, whose demands increase exponentially. In a series of simulation experiments, we show that the fast triple bootstrap improves on both the standard and fast double bootstraps, in the sense that it suffers from less size distortion under the null with no accompanying loss of power.
Keywords: Fast iterated bootstrap, Bootstrap iteration
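The core of the fast double bootstrap adjustment can be sketched compactly. The fragment below (illustrative names) assumes rejection for large values of the statistic, and that one second-level statistic has been generated from each first-level bootstrap sample, which is what keeps the cost linear in the iteration level; the FDB P value compares the first-level statistics with an estimated quantile of the second-level statistics rather than with the observed statistic directly.

```python
import numpy as np

def fast_double_bootstrap_pvalue(tau, stats1, stats2):
    """Fast double bootstrap (FDB) P value sketch.
    tau     : observed statistic (reject for large values)
    stats1  : first-level bootstrap statistics, one per bootstrap sample
    stats2  : one second-level statistic per first-level sample"""
    p1 = np.mean(stats1 >= tau)          # ordinary bootstrap P value
    q = np.quantile(stats2, 1.0 - p1)    # estimated (1 - p1)-quantile
    return np.mean(stats1 >= q)          # FDB-adjusted P value
```

Generating stats1 and stats2 is problem-specific and omitted here.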
Abstract: The bootstrap is a technique for performing statistical inference. The underlying idea is that most properties of an unknown distribution can be estimated as the same properties of an estimate of that distribution. In most cases, these properties must be estimated by a simulation experiment. The parametric bootstrap can be used when a statistical model is estimated using maximum likelihood since the parameter estimates thus obtained serve to characterise a distribution that can subsequently be used to generate simulated data sets. Simulated test statistics or estimators can then be computed for each of these data sets, and their distribution is an estimate of their distribution under the unknown distribution. The most popular sort of bootstrap is based on resampling the observations of the original data set with replacement in order to constitute simulated data sets, which typically contain some of the original observations more than once, some not at all. A special case of the bootstrap is a Monte Carlo test, whereby the test statistic has the same distribution for all data distributions allowed by the null hypothesis under test. A Monte Carlo test permits exact inference with the probability of Type I error equal to the significance level. More generally, there are two Golden Rules which, when followed, lead to inference that, although not exact, is often a striking improvement on inference based on asymptotic theory. The bootstrap also permits construction of confidence intervals of improved quality. Some techniques are discussed for data that are heteroskedastic, autocorrelated, or clustered.
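As a toy illustration of the resampling idea described above (the setup and names are illustrative, not from the text), the following sketch bootstraps a t statistic for a population mean, resampling with replacement from data recentred so that the bootstrap data-generating process satisfies the null hypothesis:

```python
import numpy as np

def bootstrap_test(data, mu0, b=999, seed=42):
    """One-sided bootstrap test of H0: mean = mu0 (illustrative sketch).
    Resampling is done from data recentred to satisfy the null, so the
    bootstrap distribution is generated under the hypothesis tested."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    n = len(data)
    tau = (data.mean() - mu0) / (data.std(ddof=1) / np.sqrt(n))
    centred = data - data.mean() + mu0            # impose the null
    taus = np.empty(b)
    for j in range(b):
        star = rng.choice(centred, size=n, replace=True)  # resample with replacement
        taus[j] = (star.mean() - mu0) / (star.std(ddof=1) / np.sqrt(n))
    return np.mean(taus >= tau)                   # bootstrap P value
```

Each simulated data set typically contains some original observations more than once and others not at all, exactly as the abstract describes.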
Abstract: Conventional wisdom says that the middle classes in many developed countries have recently suffered losses, in terms of both the share of the total population belonging to the middle class, and also their share in total income. Here, distribution-free methods are developed for inference on these shares, by deriving expressions for the asymptotic variances of the sample estimates and for their covariance. Asymptotic inference can be undertaken based on asymptotic normality. Bootstrap inference can be expected to be more reliable, and appropriate bootstrap procedures are proposed. As an illustration, samples of individual earnings drawn from Canadian census data are used to test various hypotheses about the middle-class shares, and confidence intervals for them are computed. It is found that, for the earlier censuses, sample sizes are large enough for asymptotic and bootstrap inference to be almost identical, but that, in the twenty-first century, the bootstrap fails on account of a strange phenomenon whereby many presumably different incomes in the data are rounded to one and the same value. Another difference between the centuries is the appearance of heavy right-hand tails in the income distributions of both men and women.
Keywords: Middle class, Canada, Bootstrap
Abstract: In this study, we model realized volatility constructed from intra-day high-frequency data. We explore the possibility of confusing long memory and structural breaks in the realized volatility of the following spot exchange rates: EUR/USD, EUR/JPY, EUR/CHF, EUR/GBP, and EUR/AUD. The results show evidence for the presence of long memory in the exchange rates' realized volatility. From the Bai-Perron test, we found structural break points that match significant events in financial markets. Furthermore, the findings provide strong evidence in favour of the presence of long memory.
Abstract: The bootstrap can be validated by considering the sequence of P values obtained by bootstrap iteration, rather than asymptotically. If this sequence converges to a random variable with the uniform U(0,1) distribution, the bootstrap is valid. Here, the model is made discrete and finite, characterised by a three-dimensional array of probabilities. This renders bootstrap iteration to any desired order feasible. A unit-root test for a process driven by a stationary MA(1) process is known to be unreliable when the MA(1) parameter is near −1. Iteration of the bootstrap P value to convergence achieves reliable inference unless the parameter value is very close to −1.
Keywords: MA(1), Unit root, Bootstrap iteration, Bootstrap
Abstract: Testing the specification of econometric models has come a long way from the t tests and F tests of the classical normal linear model. In this paper, we trace the broad outlines of the development of specification testing, along the way discussing the role of structural versus purely statistical models. Inferential procedures have had to advance in tandem with techniques of estimation, and so we discuss the generalized method of moments, nonparametric inference, empirical likelihood, and estimating functions. Mention is made of some recent literature, in particular on weak instruments, nonparametric identification, and the bootstrap.
Abstract: The bootstrap is typically less reliable in the context of time-series models with serial correlation of unknown form than when regularity conditions for the conventional IID bootstrap apply. It is, therefore, useful to have diagnostic techniques capable of evaluating bootstrap performance in specific cases. Those suggested in this paper are closely related to the fast double bootstrap (FDB) and are not computationally intensive. They can also be used to gauge the performance of the FDB itself. Examples of bootstrapping time series are presented, which illustrate the diagnostic procedures, and show how the results can cast light on bootstrap performance.
Keywords: Time series, Fast double bootstrap, Diagnostics for bootstrap, Bootstrap, Autocorrelation of unknown form
Abstract: A major contention in this paper is that scientific models can be viewed as virtual realities, implemented, or rendered, by mathematical equations or by computer simulations. Their purpose is to help us understand the external reality that they model. In economics, particularly in econometrics, models make use of random elements, so as to provide quantitatively for phenomena that we cannot or do not wish to model explicitly. By varying the realizations of the random elements in a simulation, it is possible to study counterfactual outcomes, which are necessary for any discussion of causality. The bootstrap is virtual reality within an outer reality. The principle of the bootstrap is that, if its virtual reality mimics as closely as possible the reality that contains it, it can be used to study aspects of that outer reality. The idea of bootstrap iteration is explored, and a discrete model discussed that allows investigators to perform iteration to any desired level.
Keywords: Quantitative economics
Abstract: We study the finite-sample properties of tests for overidentifying restrictions in linear regression models with a single endogenous regressor and weak instruments. Under the assumption of Gaussian disturbances, we derive expressions for a variety of test statistics as functions of eight mutually independent random variables and two nuisance parameters. The distributions of the statistics are shown to have an ill-defined limit as the parameter that determines the strength of the instruments tends to zero and as the correlation between the disturbances of the structural and reduced-form equations tends to plus or minus one. This makes it impossible to perform reliable inference near the point at which the limit is ill-defined. Several bootstrap procedures are proposed. They alleviate the problem and allow reliable inference when the instruments are not too weak. We also study their power properties.
Keywords: Anderson-Rubin test, Basmann test, Sargan test, Weak instruments
Abstract: It is known that Efron's bootstrap of the mean of a distribution in the domain of attraction of the stable laws with infinite variance is not consistent, in the sense that the limiting distribution of the bootstrap mean is not the same as the limiting distribution of the mean from the real sample. Moreover, the limiting bootstrap distribution is random and unknown. The conventional remedy for this problem, at least asymptotically, is either the m out of n bootstrap or subsampling. However, we show that both these procedures can be unreliable in other than very large samples. We introduce a parametric bootstrap that overcomes the failure of Efron's bootstrap and performs better than the m out of n bootstrap and subsampling. The quality of inference based on the parametric bootstrap is examined in a simulation study, and is found to be satisfactory with heavy-tailed distributions unless the tail index is close to 1 and the distribution is heavily skewed.
Keywords: Quantitative economics
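For concreteness, the m out of n bootstrap mentioned above amounts to drawing bootstrap samples of size m < n rather than n. A minimal sketch (illustrative function name, sample mean as the statistic; the paper's parametric bootstrap is not reproduced here):

```python
import numpy as np

def m_out_of_n_means(data, m, b=999, seed=0):
    """m out of n bootstrap distribution of the sample mean:
    each bootstrap sample has size m < n, drawn with replacement."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    return np.array([rng.choice(data, size=m, replace=True).mean()
                     for _ in range(b)])
```

Choosing m is the delicate part in practice, which is one reason the abstract reports that the method can be unreliable outside very large samples.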
Abstract: The most widely used measure of segregation is the so-called dissimilarity index. It is now well understood that this measure also reflects randomness in the allocation of individuals to units (i.e. it measures deviations from evenness, not deviations from randomness). This leads to potentially large values of the segregation index when unit sizes and/or minority proportions are small, even if there is no underlying systematic segregation. Our response to this is to produce adjustments to the index, based on an underlying statistical model. We specify the assignment problem in a very general way, with differences in conditional assignment probabilities underlying the resulting segregation. From this, we derive a likelihood ratio test for the presence of any systematic segregation, and bias adjustments to the dissimilarity index. We further develop the asymptotic distribution theory for testing hypotheses concerning the magnitude of the segregation index and show that the use of bootstrap methods can improve the size and power properties of test procedures considerably. We illustrate these methods by comparing dissimilarity indices across school districts in England to measure social segregation.
Keywords: Segregation, Hypothesis testing, Dissimilarity index, Bootstrap methods
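The unadjusted dissimilarity index discussed above has a simple closed form, D = (1/2) * sum_i |m_i/M - w_i/W| over units i, where m_i and w_i are the minority and majority counts in unit i. A minimal sketch (without the bias adjustments that are the paper's contribution):

```python
import numpy as np

def dissimilarity_index(minority, majority):
    """Unadjusted dissimilarity index across units:
    D = 0.5 * sum_i |m_i / M - w_i / W|."""
    m = np.asarray(minority, dtype=float)
    w = np.asarray(majority, dtype=float)
    return 0.5 * np.abs(m / m.sum() - w / w.sum()).sum()
```

A perfectly even allocation gives D = 0 and complete separation gives D = 1, yet, as the abstract notes, purely random allocation into small units already pushes the index above zero.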
Abstract: An axiomatic approach is used to develop a one-parameter family of measures of divergence between distributions. These measures can be used to perform goodness-of-fit tests with good statistical properties. Asymptotic theory shows that the test statistics have well-defined limiting distributions which are, however, analytically intractable. A parametric bootstrap procedure is proposed for implementation of the tests. The procedure is shown to work very well in a set of simulation experiments, and to compare favorably with other commonly used goodness-of-fit tests. By varying the parameter of the statistic, one can obtain information on how the distribution that generated a sample diverges from the target family of distributions when the true distribution does not belong to that family. An empirical application analyzes a U.K. income dataset.
Keywords: Quantitative economics
Abstract: We study several methods of constructing confidence sets for the coefficient of the single right-hand-side endogenous variable in a linear equation with weak instruments. Two of these are based on conditional likelihood ratio (CLR) tests, and the others are based on inverting t statistics or the bootstrap P values associated with them. We propose a new method for constructing bootstrap confidence sets based on t statistics. In large samples, the procedures that generally work best are CLR confidence sets using asymptotic critical values and bootstrap confidence sets based on limited-information maximum likelihood (LIML) estimates.
Keywords: Quantitative economics
Abstract: Economists are often interested in the coefficient of a single endogenous explanatory variable in a linear simultaneous-equations model. One way to obtain a confidence set for this coefficient is to invert the Anderson-Rubin (AR) test. The AR confidence sets that result have correct coverage under classical assumptions. However, AR confidence sets also have many undesirable properties. It is well known that they can be unbounded when the instruments are weak, as is true of any test with correct coverage. However, even when they are bounded, their length may be very misleading, and their coverage conditional on quantities that the investigator can observe (notably, the Sargan statistic for overidentifying restrictions) can be far from correct. A similar property manifests itself, for similar reasons, when a confidence set for a single parameter is based on inverting an F-test for two or more parameters.
Keywords: Quantitative economics
Abstract: Asymptotic and bootstrap tests are studied for testing whether there is a relation of stochastic dominance between two distributions. These tests have a null hypothesis of nondominance, with the advantage that, if this null is rejected, then all that is left is dominance. This also leads us to define and focus on restricted stochastic dominance, the only empirically useful form of dominance relation that we can seek to infer in many settings. One testing procedure that we consider is based on an empirical likelihood ratio. The computations necessary for obtaining a test statistic also provide estimates of the distributions under study that satisfy the null hypothesis, on the frontier between dominance and nondominance. These estimates can be used to perform dominance tests that can turn out to provide much improved reliability of inference compared with the asymptotic tests so far proposed in the literature.
Keywords: Quantitative economics
Abstract: The understanding of causal chains and mechanisms is an essential part of any scientific activity that aims at better explanation of its subject matter, and better understanding of it. While any account of causality requires that a cause should precede its effect, accounts of causality in physics are complicated by the fact that the role of time in current theoretical physics has evolved very substantially throughout the twentieth century. In this article, I review the status of time and causality in physics, both the classical physics of the nineteenth century, and modern physics based on relativity and quantum mechanics. I then move on to econometrics, with some mention of statistics more generally, and emphasise the role of models in making sense of causal notions, and their place in scientific explanation.
Keywords: Quantitative economics
Abstract: The wild bootstrap is studied in the context of regression models with heteroskedastic disturbances. We show that, in one very specific case, perfect bootstrap inference is possible, and a substantial reduction in the error in the rejection probability of a bootstrap test is available much more generally. However, the version of the wild bootstrap with this desirable property is without the skewness correction afforded by the currently most popular version of the wild bootstrap. Simulation experiments show that this does not prevent the preferred version from having the smallest error in rejection probability in small and medium-sized samples.
Keywords: Wild bootstrap, Heteroskedasticity, Bootstrap inference
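A minimal version of the wild bootstrap variant favoured here, with Rademacher weights and hence no skewness correction, might look as follows (the regression setup, HC0 standard errors, and function names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def wild_bootstrap_pvalue(y, X, k, b=999, seed=0):
    """Wild bootstrap test of H0: beta_k = 0 in y = X beta + u with
    heteroskedastic u. Bootstrap samples multiply restricted residuals
    by Rademacher weights (+1 or -1, each with probability one half),
    the variant without the skewness correction."""
    rng = np.random.default_rng(seed)
    n = len(y)

    def t_stat(y_, X_):
        # OLS with heteroskedasticity-robust (HC0) standard error
        XtX_inv = np.linalg.inv(X_.T @ X_)
        beta = XtX_inv @ X_.T @ y_
        u = y_ - X_ @ beta
        V = XtX_inv @ (X_.T * u**2) @ X_ @ XtX_inv
        return beta[k] / np.sqrt(V[k, k])

    tau = t_stat(y, X)
    Xr = np.delete(X, k, axis=1)                 # regressors under H0
    beta_r, *_ = np.linalg.lstsq(Xr, y, rcond=None)
    u_r = y - Xr @ beta_r                        # restricted residuals
    taus = np.empty(b)
    for j in range(b):
        s = rng.choice([-1.0, 1.0], size=n)      # Rademacher weights
        taus[j] = t_stat(Xr @ beta_r + s * u_r, X)
    return np.mean(np.abs(taus) >= abs(tau))     # symmetric bootstrap P value
```

The Rademacher weights preserve the second moment of each residual while destroying any skewness, which is exactly the trade-off the abstract describes.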
Abstract: A random sample drawn from a population would appear to offer an ideal opportunity to use the bootstrap in order to perform accurate inference, since the observations of the sample are IID. In this paper, Monte Carlo results suggest that bootstrapping a commonly used index of inequality leads to inference that is not accurate even in very large samples, although inference with poverty indices is satisfactory. We find that the major cause is the extreme sensitivity of many inequality indices to the exact nature of the upper tail of the income distribution. This leads us to study two non-standard bootstraps, the m out of n bootstrap, which is valid in some situations where the standard bootstrap fails, and a bootstrap in which the upper tail is modelled parametrically. Monte Carlo results suggest that accurate inference can be achieved with this last method in moderately large samples.
Keywords: Income distribution, Poverty, Bootstrap inference
Abstract: This study concerns Article 55 of the SRU law, which requires certain municipalities, on pain of financial penalties, to have more than 20% social housing. In this evaluation we develop an innovative methodology to measure the incentive effect of the law on the actual production of social housing. We compare our strategy with classical counterfactual methods (difference-in-differences; changes-in-changes). Using only past information on the production of social housing to build the counterfactual distribution, we show that our strategy rests on less restrictive assumptions, which are more likely to be satisfied, than those of competing methods. In particular, the method has the advantage of not assuming a support condition that could not be verified in this case. All measures point to a positive but small impact: over a four-year period, the production gain is estimated at 0.35 additional percentage points, i.e. more than 40 housing units for a city of 20,000 inhabitants.
Keywords: Counterfactual analysis, Public housing, Public policy evaluation, Article 55 SRU law