Publications
As Ricoeur acknowledges, the publication of Rawls’s book, A Theory of Justice (TJ), in 1971 was a major event in contemporary political philosophy. Ricoeur offered important commentaries on Rawls’s approach to justice, based mainly on a close reading of this book, and this article focuses on those commentaries. They are at once glowing and critical. Ricoeur expresses his support for Rawls’s illuminating study of justice, seen as a virtue of institutions.
...
This chapter discusses the potential impacts of the spread of COVID-19, and the restriction policies that it has triggered in many countries, on conflict incidence worldwide. Based on anecdotal evidence and recent research, we argue that imposing nation-wide shutdown policies diminishes conflict incidence on average, but that this conflict reduction may be short-lived and highly heterogeneous across countries. In particular, conflict does not appear to decline in poor, fractionalised countries. Evidence points to two potential ways in which COVID-related restriction policies may increase conflict: losses in income and magnified ethnic and religious tensions leading to scapegoating of minorities.
This chapter presents an intuitive overview of the methods that researchers can use to estimate the monetary value of changes in health outcomes. These methods fall into two categories: stated preference methods and revealed preference methods. Stated preference methods ask people directly, through surveys of the relevant population, how much they are willing to pay for health improvements. Revealed preference methods infer the trade-offs that people make between health and money indirectly, by observing everyday behavior, such as when people accept a riskier job in return for higher wages or when they buy products to protect their health from hazards. The chapter discusses the main advantages and disadvantages of each method.
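As a purely illustrative companion to this abstract, the sketch below shows the kind of calculation a revealed preference (hedonic wage) study relies on: regress wages on job fatality risk and controls, then read the implied value of a statistical life off the risk coefficient. The data are simulated and every number is an assumption made here, not a figure from the chapter.

```python
import numpy as np

# Hypothetical revealed-preference (hedonic wage) illustration: workers accept
# higher annual fatality risk in exchange for higher wages, and the estimated
# wage-risk trade-off implies a value of a statistical life (VSL).
# All numbers are invented for illustration only.

rng = np.random.default_rng(0)
n = 5_000
risk = rng.uniform(0.0, 5.0, n) * 1e-4            # annual fatality risk, 0 to 5 in 10,000
education = rng.normal(13.0, 2.0, n)              # years of schooling (a control variable)
wage = 20_000 + 2_500 * education + 9e6 * risk + rng.normal(0.0, 5_000.0, n)

# OLS of annual wages on risk and controls; the risk coefficient is the
# compensating wage differential per unit (probability 1.0) of fatality risk.
X = np.column_stack([np.ones(n), risk, education])
beta, *_ = np.linalg.lstsq(X, wage, rcond=None)

vsl = beta[1]
print(f"Implied VSL from the wage-risk trade-off: ${vsl:,.0f}")
```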
In September 2021, the World Health Organization decided to implement stronger air quality guidelines for protecting health, based on the last decade of research. Ambient air pollution (AAP) was already the leading environmental risk to health in terms of the number of premature deaths, and this decision suggests that the risk had been seriously underestimated. This chapter covers the relationship between AAP and health from an economic perspective. The first part presents the major regulated air pollutants and their related health effects, how population exposure is measured, and individual vulnerability and susceptibility to AAP-related effects. The main approaches used to estimate the relationships between health effects and air pollutants are then covered: purely observational studies and interventional/quasi-experimental studies. Up-to-date reviews of the most robust relationships, and of the main findings of interventional/causal-inference methods, are detailed. Next, impact assessment studies are tackled and some recent global assessments of the health impacts of AAP are presented. Once calculated, these health impacts can be expressed in monetary terms to enter the decision-making process. The relevant approaches for valuing market and nonmarket health impacts – market prices, revealed preferences, and stated preferences – are critically outlined, and their suitability for the AAP context is examined. Finally, the economic health-related impacts of AAP are presented and discussed, with specific sections devoted to the necessity of an interdisciplinary approach and to inequity-related issues at the national and international levels. The chapter concludes by widening the perspective to interactions between AAP on the one hand and climate change and indoor pollution on the other.
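As a rough indication of what such impact assessments involve, the sketch below applies a standard log-linear concentration-response function and then monetises the avoided deaths; the coefficient, baseline mortality, and value of a statistical life are placeholder assumptions of this summary, not estimates from the chapter.

```python
import math

# Minimal health impact assessment sketch for ambient air pollution (AAP),
# using a log-linear concentration-response function. All inputs are
# placeholder assumptions, not values taken from the chapter.

beta = 0.008               # log relative risk per microgram/m3 of PM2.5 (assumed)
delta_c = 10.0             # assumed reduction in annual mean PM2.5 (microgram/m3)
baseline_deaths = 50_000   # assumed baseline annual deaths in the exposed population
vsl = 3.0e6                # assumed value of a statistical life (euros)

relative_risk = math.exp(beta * delta_c)
attributable_fraction = 1.0 - 1.0 / relative_risk   # share of deaths linked to delta_c
avoided_deaths = attributable_fraction * baseline_deaths
monetised_benefit = avoided_deaths * vsl

print(f"Relative risk for {delta_c} microgram/m3: {relative_risk:.3f}")
print(f"Avoided deaths: {avoided_deaths:,.0f}")
print(f"Monetised benefit: {monetised_benefit:,.0f} euros")
```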
From July 25 to 31, 1796, Georg Wilhelm Friedrich Hegel, then working as a private tutor for the aristocracy in Bern, took a mountain hike in the neighbouring Alps. Hegel travelled from Thoune to Altdorf via the Jungfrau and Uri, a land of glaciers. When Hegel began studying economics in earnest, the question would reappear: in his reading of Sir James Steuart’s Inquiry into the Principles of Political Economy, Hegel would take his first step into economic theorizing. In Frankfurt, Hegel was not yet a tenured Gymnasium professor. He was again a private tutor, experiencing the hardships of a salaried life, though in wealthy families. Paul Chamley selected excerpts of interest based on his initial assessment of the thesis that there surely exists a solid ‘system of political economy’ by Hegel. He assumed it rather than found it as a result of his comparative study.
Many regulations with first-order economic and environmental consequences have to be adopted under significant scientific uncertainty. Examples include tobacco regulations in the second half of the 20th century, climate change regulations, and current regulations on pesticides and neonicotinoid insecticides. Firms and industries have proved adept at exploiting such scientific uncertainty to shape and delay regulation. The main strategies documented include: hiring and funding dissenting scientists, producing and publicizing favorable scientific findings, ghostwriting, funding diversion research, conducting large-scale science-denying communication campaigns, and placing experts on advisory and regulatory panels, while generally concealing their involvement. In many cases, special interests have thus deliberately manufactured doubt, and these dishonest tactics have had large welfare consequences.
Largely and unduly neglected by economists, these doubt-manufacturing strategies should now be addressed by the field. Here, we first present a simple theoretical framework providing a useful starting point for considering these issues. The government is benevolent but populist and maximizes social welfare as perceived by citizens. The industry can produce costly reports showing that its activity is not harmful, and citizens are unaware of the industry’s miscommunication. This framework raises important new questions, such as how industry miscommunication and citizens’ beliefs are related to scientific uncertainty. It also sheds new light on old questions, such as the choice of policy instrument to regulate pollution. We subsequently outline a tentative roadmap for future research, highlighting critical issues in need of more investigation.
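One way to formalise the framework described above is sketched here; the notation and functional forms are illustrative assumptions made in this summary, not the paper’s actual model.

```latex
% Illustrative formalisation only; notation and functional forms are assumed here.
\begin{align*}
  \tilde{D}(q, r) &= \theta(r)\, D(q), \quad \theta' < 0,\ \theta(0) = 1
    && \text{perceived damage from output $q$ after $r$ industry reports} \\
  \max_{r \ge 0}\ & \pi\bigl(q(r)\bigr) - c\, r
    && \text{industry: gain from laxer regulation net of report costs} \\
  \max_{t}\ & S\bigl(q(t)\bigr) - \tilde{D}\bigl(q(t), r\bigr)
    && \text{populist government maximises welfare as perceived by citizens} \\
  W &= S(q) - D(q) - c\, r
    && \text{true social welfare, which the resulting policy need not maximise}
\end{align*}
```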
The bootstrap is a technique for performing statistical inference. The underlying idea is that most properties of an unknown distribution can be estimated as the same properties of an estimate of that distribution. In most cases, these properties must be estimated by a simulation experiment. The parametric bootstrap can be used when a statistical model is estimated using maximum likelihood since the parameter estimates thus obtained serve to characterise a distribution that can subsequently be used to generate simulated data sets. Simulated test statistics or estimators can then be computed for each of these data sets, and their distribution is an estimate of their distribution under the unknown distribution. The most popular sort of bootstrap is based on resampling the observations of the original data set with replacement in order to constitute simulated data sets, which typically contain some of the original observations more than once, some not at all. A special case of the bootstrap is a Monte Carlo test, whereby the test statistic has the same distribution for all data distributions allowed by the null hypothesis under test. A Monte Carlo test permits exact inference with the probability of Type I error equal to the significance level. More generally, there are two Golden Rules which, when followed, lead to inference that, although not exact, is often a striking improvement on inference based on asymptotic theory. The bootstrap also permits construction of confidence intervals of improved quality. Some techniques are discussed for data that are heteroskedastic, autocorrelated, or clustered.
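As a purely illustrative companion to this abstract, the sketch below implements the resampling bootstrap it describes, here to build a percentile confidence interval for a mean; the data and the number of replications are arbitrary placeholders, not material from the chapter.

```python
import numpy as np

# Resampling (nonparametric) bootstrap sketch: draw observations from the
# original sample with replacement, recompute the statistic on each simulated
# data set, and use the simulated distribution for inference.
# The data are simulated placeholders.

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=200)   # original sample, deliberately skewed
theta_hat = data.mean()                       # statistic of interest: the sample mean

B = 9_999                                     # number of bootstrap replications
boot_stats = np.empty(B)
for b in range(B):
    resample = rng.choice(data, size=data.size, replace=True)  # some obs repeat, some drop out
    boot_stats[b] = resample.mean()

# Percentile bootstrap confidence interval for the population mean.
lower, upper = np.percentile(boot_stats, [2.5, 97.5])
print(f"Estimate: {theta_hat:.3f}, 95% bootstrap CI: [{lower:.3f}, {upper:.3f}]")
```

Choosing the number of replications B so that α(B + 1) is an integer (for example 999 or 9,999 at conventional significance levels) is a common convention, since it aligns rejection thresholds with order statistics of the simulated distribution.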