AMU - AMSE
5-9 Boulevard Maurice Bourdet, CS 50498
13205 Marseille Cedex 1
In this study, we estimate the influence of certain characteristics on housing prices using the hedonic price method. We first use a classical approach based on a parametric regression model with spatial autocorrelation. This approach has two drawbacks: the functional form of the model and the spatial weight matrix are fixed a priori. We then present a semiparametric approach that overcomes these limitations.
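As a rough illustration of the classical approach the abstract criticises, the sketch below builds an inverse-distance spatial weight matrix fixed a priori and fits a spatial-lag hedonic regression. The data, covariates, and weighting scheme are invented for the example, and the naive least-squares fit of the lag term is known to be inconsistent (ML or IV estimators are used in practice); this is a minimal sketch, not the study's specification.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
coords = rng.random((n, 2))                                   # dwelling locations
X = np.column_stack([np.ones(n), rng.uniform(30, 120, n),     # constant, floor area
                     rng.integers(1, 6, n)])                  # number of rooms

# Row-normalised inverse-distance weight matrix, fixed a priori
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
np.fill_diagonal(d, np.inf)
W = 1.0 / d
W /= W.sum(axis=1, keepdims=True)

# Spatial-lag hedonic model y = rho*W y + X beta + e, simulated then refitted
rho, beta = 0.4, np.array([10.0, 0.05, 1.5])
y = np.linalg.solve(np.eye(n) - rho * W, X @ beta + rng.normal(0, 0.5, n))
Z = np.column_stack([W @ y, X])
est = np.linalg.lstsq(Z, y, rcond=None)[0]    # naive OLS; biased, for illustration only
```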
A random sample drawn from a population would appear to offer an ideal opportunity to use the bootstrap in order to perform accurate inference, since the observations of the sample are IID. In this paper, Monte Carlo results suggest that bootstrapping a commonly used index of inequality leads to inference that is not accurate even in very large samples. Bootstrapping a poverty measure, on the other hand, gives accurate inference in small samples. We investigate the reasons for the poor performance of the bootstrap, and find that the major cause is the extreme sensitivity of many inequality indices to the exact nature of the upper tail of the income distribution. Consequently, a bootstrap sample in which nothing is resampled from the tail can have properties very different from those of the population. This leads us to study two non-standard bootstraps, the m out of n bootstrap, which is valid in some situations where the standard bootstrap fails, and a bootstrap in which the upper tail is modelled parametrically. Monte Carlo results suggest that accurate inference can be achieved with this last method in moderately large samples.
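The m out of n bootstrap mentioned above resamples fewer than n observations and rescales the resulting spread. The sketch below applies it to the Theil inequality index; the index choice, the subsample rule m = n^0.7, and the lognormal sample are illustrative assumptions, not the paper's Monte Carlo design.

```python
import numpy as np

def theil(y):
    """Theil inequality index: mean of (y/mu) * log(y/mu)."""
    mu = y.mean()
    return np.mean((y / mu) * np.log(y / mu))

rng = np.random.default_rng(0)
y = rng.lognormal(0.0, 1.0, size=2000)       # heavy-ish tailed "incomes"
n, B = len(y), 499
m = int(n ** 0.7)                            # subsample size, e.g. m = n^0.7

# Standard bootstrap (n out of n) vs m out of n bootstrap
boot_full = np.array([theil(rng.choice(y, n, replace=True)) for _ in range(B)])
boot_m = np.array([theil(rng.choice(y, m, replace=True)) for _ in range(B)])

se_full = boot_full.std(ddof=1)
se_m = np.sqrt(m / n) * boot_m.std(ddof=1)   # rescale the size-m spread back to size n
```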
We examine the statistical performance of inequality indices in the presence of extreme values in the data and show that these indices are very sensitive to the properties of the income distribution. Estimation and inference can be dramatically affected, especially when the tail of the income distribution is heavy, even when standard bootstrap methods are employed. However, use of appropriate semiparametric methods for modelling the upper tail can greatly improve the performance of even those inequality indices that are normally considered particularly sensitive to extreme values.
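A semiparametric tail bootstrap of the kind described above can be sketched as follows: the body of the distribution is resampled nonparametrically, while draws from the upper tail come from a Pareto distribution fitted to the k largest observations. The Gini index, the Hill estimator, the choice of k, and the simulated data are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def gini(y):
    """Gini index computed from sorted incomes."""
    y = np.sort(y)
    n = len(y)
    return (2 * np.arange(1, n + 1) - n - 1) @ y / (n * y.sum())

def semipar_resample(y, k, rng):
    """Resample the body nonparametrically; draw the upper tail from a
    Pareto fitted to the k largest observations (Hill estimator)."""
    y = np.sort(y)
    n = len(y)
    u = y[n - k - 1]                          # threshold: (k+1)-th largest value
    alpha = k / np.log(y[n - k:] / u).sum()   # Hill estimator of the tail index
    from_tail = rng.random(n) < k / n
    out = np.empty(n)
    out[from_tail] = u * rng.random(from_tail.sum()) ** (-1.0 / alpha)  # inverse-CDF Pareto draws
    out[~from_tail] = rng.choice(y[: n - k], size=(~from_tail).sum(), replace=True)
    return out

rng = np.random.default_rng(0)
y = rng.pareto(2.0, size=1000) + 1.0          # heavy-tailed incomes
boot = [gini(semipar_resample(y, k=100, rng=rng)) for _ in range(200)]
```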
In this article, we develop a dichotomous choice model with follow-up questions in which willingness to pay is uncertain within an interval. The initial response is subject to starting point bias. Our model provides an alternative interpretation of starting point bias in dichotomous choice valuation surveys. Using the Exxon Valdez survey, we show that, when uncertain, individuals tend to answer “yes”.
This article addresses the important issue of anchoring in contingent valuation surveys that use the double-bounded elicitation format. Anchoring occurs when responses to the follow-up dichotomous choice valuation question are influenced by the bid presented in the initial dichotomous choice question. Specifically, we adapt a theory from psychology to characterize respondents as those who are likely to anchor and those who are not. Using a model developed by Herriges and Shogren (1996), our method appears successful in discriminating between those who anchor and those who do not. An important result is that when controlling for anchoring, and allowing the degree of anchoring to differ between respondent groups, the efficiency of the double-bounded welfare estimate is greater than for the initial dichotomous choice question. This contrasts with earlier research that finds that the potential efficiency gain from the double-bounded questions is lost when anchoring is controlled for and that we are better off not asking follow-up questions. JEL Classification: Q26, C81, D71.
In this article, we propose a unified framework that accommodates many of the existing models for dichotomous choice contingent valuation with follow-up and allows us to discriminate between them by simple parametric hypothesis tests. Our empirical results show that the Range model, developed in Flachaire and Hollard, outperforms other standard models and confirm that, when uncertain, respondents tend to accept proposed bids.
In this paper, we study starting-point bias in double-bounded contingent valuation surveys. This phenomenon arises in applications that use multiple valuation questions: responses to follow-up valuation questions may be influenced by the bid proposed in the initial valuation question. Previous research has sought to control for such an effect, but finds that the efficiency gains are lost, relative to a single dichotomous choice question, once undesirable response effects are controlled for. Contrary to these results, we propose a way to control for starting-point bias in double-bounded questions that retains gains in efficiency.
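The baseline double-bounded likelihood that such models extend can be sketched as follows, assuming willingness to pay is normally distributed and ignoring starting-point bias. The bid design, parameter values, and simulated responses are invented for the example; this is not the paper's estimator.

```python
from math import erf, sqrt
import numpy as np

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def dbdc_loglik(mu, sigma, b1, b2, r1, r2):
    """Log-likelihood of double-bounded responses with WTP ~ N(mu, sigma)."""
    ll = 0.0
    for bid1, bid2, yes1, yes2 in zip(b1, b2, r1, r2):
        F1, F2 = Phi((bid1 - mu) / sigma), Phi((bid2 - mu) / sigma)
        if yes1 and yes2:
            p = 1.0 - F2      # WTP above the higher follow-up bid
        elif yes1:
            p = F2 - F1       # WTP between b1 and b2 (here b2 > b1)
        elif yes2:
            p = F1 - F2       # WTP between b2 and b1 (here b2 < b1)
        else:
            p = F2            # WTP below the lower follow-up bid
        ll += np.log(max(p, 1e-300))
    return ll

# Simulated survey: follow-up bid doubles after "yes", halves after "no"
rng = np.random.default_rng(0)
n = 200
wtp = rng.normal(50.0, 15.0, n)
b1 = rng.choice([30.0, 50.0, 70.0], size=n)
r1 = wtp > b1
b2 = np.where(r1, 2 * b1, b1 / 2)
r2 = wtp > b2
```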
Public economics has proposed various models that aim to determine the optimal provision of public goods based on individual preferences. To provide decision makers with empirical recommendations, economists therefore need to elicit individual preferences, and more precisely the marginal rate of substitution between private and public goods. Contingent valuation has proved a useful and successful tool for gathering information on individual preferences. However, it has also proved sensitive to various biases: variables that should have no influence do so in practice. In this paper, we propose a methodology, based on social psychology, that identifies individuals who prove immune to biases. This allows the design of more powerful, bias-free estimations of individual preferences. Two distinct applications are provided. JEL Classification: C81, C93, Q26.
In the presence of heteroskedasticity of unknown form, the Ordinary Least Squares parameter estimator becomes inefficient and its covariance matrix estimator inconsistent. Eicker (1963) and White (1980) were the first to propose a robust, consistent covariance matrix estimator that permits asymptotically correct inference. This estimator is widely used in practice. Cragg (1983) proposed a more efficient estimator, but concluded that tests based on it are unreliable, so it has not been used in practice. This article is concerned with the finite-sample properties of tests robust to heteroskedasticity of unknown form. Our results suggest that reliable and more efficient tests can be obtained with the Cragg estimator in small samples.
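The Eicker-White robust covariance matrix referred to above is the standard "sandwich" formula; a minimal numpy sketch of its HC0 form is below. The simulated heteroskedastic design is invented for the example, and degrees-of-freedom refinements (HC1-HC3) and Cragg's more efficient estimator are omitted.

```python
import numpy as np

def ols_hc0(X, y):
    """OLS coefficients with the Eicker-White (HC0) robust covariance matrix."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta                      # OLS residuals
    meat = X.T @ (u[:, None] ** 2 * X)    # sum_i u_i^2 x_i x_i'
    cov = XtX_inv @ meat @ XtX_inv        # sandwich formula
    return beta, cov

# Heteroskedastic design: error variance grows with the regressor
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1.0, 5.0, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 * x)   # Var(u|x) = (0.5 x)^2
beta, cov = ols_hc0(X, y)
```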