Michel Lubrano
Faculty, CNRS
- Status: Emeritus Research Professor
- Research domain(s): Econometrics, Public economics, Social choice
- Thesis: 1986, Toulouse
- Address: AMU - AMSE, 5-9 Boulevard Maurice Bourdet, CS 50498, 13205 Marseille Cedex 1
Karim Abadir, Michel Lubrano, Biometrika, 02/2024
Abstract
We show that least-squares cross-validation methods share a common structure that has an explicit asymptotic solution, when the chosen kernel is asymptotically separable in bandwidth and data. For density estimation with a multivariate Student-t(ν) kernel, the cross-validation criterion becomes asymptotically equivalent to a polynomial of only three terms. Our bandwidth formulae are simple and noniterative, thus leading to very fast computations; their integrated squared-error dominates traditional cross-validation implementations; and they alleviate the notorious sample variability of cross-validation and overcome its breakdown in the case of repeated observations. We illustrate our method with univariate and bivariate applications, of density estimation and nonparametric regressions, to a large dataset of Michigan State University academic wages and experience.
Keywords
Academic wage distribution, Bandwidth choice, Cross-validation, Explicit analytical solution, Nonparametric density estimation
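The least-squares cross-validation structure exploited in this paper can be written down directly. Below is a minimal Python sketch of the classic leave-one-out criterion for a univariate kernel density estimate, using a Gaussian kernel and a brute-force grid search as stand-ins (the paper works with Student-t kernels and derives explicit, noniterative bandwidth formulae); the data and grid values are purely illustrative.

```python
import numpy as np
from scipy.stats import norm

def lscv_criterion(h, x):
    """Least-squares cross-validation criterion for a univariate
    Gaussian-kernel density estimate (an illustrative stand-in for the
    Student-t kernel used in the paper)."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h            # pairwise scaled differences
    # integral of f-hat squared uses the N(0, 2) convolution of the kernel
    term1 = norm.pdf(d, scale=np.sqrt(2)).sum() / (n**2 * h)
    # leave-one-out term: exclude the diagonal (i = j) contributions
    loo = norm.pdf(d).sum() - n * norm.pdf(0.0)
    term2 = 2.0 * loo / (n * (n - 1) * h)
    return term1 - term2

# Brute-force grid minimisation, in contrast with the paper's
# explicit noniterative bandwidth formulae.
x = np.random.default_rng(0).standard_normal(500)
grid = np.linspace(0.05, 1.0, 200)
h_cv = grid[np.argmin([lscv_criterion(h, x) for h in grid])]
print(h_cv)
```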
Majda Benzidia, Michel Lubrano, Paolo Melindi-Ghidi, International Tax and Public Finance, 01/2024
Abstract
What is the role of income polarization for explaining differentials in public funding of education? To answer this question, we provide a new theoretical modelling for the income distribution that can directly monitor income polarization. It leads to a new income polarization index where the middle class is represented by an interval. We implement this distribution in a political economy model with endogenous fertility and public/private educational choices. We show that when households vote on public schooling expenditures, polarization matters for explaining disparities in public education funding across communities. Using micro-data covering two groups of school districts, we find that both income polarization and income inequality affect public school funding, with opposite signs, whether a Tax Limitation Expenditure (TLE) exists or not.
Keywords
Education politics, Schooling choice, Income polarization, Probabilistic voting, Bayesian inference
Michel Lubrano, Zhou Xun, Edward Elgar Publishing, pp. 475-487, 03/2023
Abstract
This chapter reviews the recent Bayesian literature on poverty measurement together with some new results. Using Bayesian model criticism, we revise the international poverty line. Using mixtures of lognormals to model income, we derive the posterior distribution for the FGT, Watts and Sen poverty indices, for TIP curves (with an illustration on child poverty in Germany) and for Growth Incidence Curves. The relation of restricted stochastic dominance with TIP and GIC dominance is detailed with an example based on UK data. Using panel data, we decompose poverty into total, chronic and transient poverty, comparing child and adult poverty in East Germany when redistribution is introduced. When panel data are not available, a Gibbs sampler can be used to build a pseudo panel. We illustrate poverty dynamics by examining the consequences of the Wall on poverty entry and poverty persistence in the occupied West Bank.
Keywords
Poverty dynamics, Stochastic dominance, Poverty indices, Mixture model, Bayesian inference
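As a concrete illustration of how a poverty index inherits a posterior distribution from a parametric income model, here is a small Python sketch of the FGT(α) index under a mixture of lognormals, using standard closed-form component formulas; the poverty line and mixture parameters below are illustrative placeholders, not estimates from the chapter.

```python
import numpy as np
from scipy.stats import norm

def fgt_lognormal_mixture(z, alpha, p, mu, sigma):
    """FGT(alpha) poverty index, alpha in {0, 1, 2}, under a mixture of
    lognormals: closed-form component formulas weighted by p. Evaluating
    this at each posterior draw of (p, mu, sigma) yields the posterior
    distribution of the index. Values used below are illustrative."""
    p, mu, sigma = map(np.asarray, (p, mu, sigma))
    u = (np.log(z) - mu) / sigma
    m = np.exp(mu + sigma**2 / 2)                       # component means
    if alpha == 0:
        comp = norm.cdf(u)
    elif alpha == 1:
        comp = norm.cdf(u) - (m / z) * norm.cdf(u - sigma)
    else:
        comp = (norm.cdf(u) - 2 * (m / z) * norm.cdf(u - sigma)
                + (m**2 * np.exp(sigma**2) / z**2) * norm.cdf(u - 2 * sigma))
    return float(np.sum(p * comp))

print(fgt_lognormal_mixture(z=10.0, alpha=1, p=[0.7, 0.3],
                            mu=[2.5, 3.5], sigma=[0.6, 0.4]))
```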
Edwin Fourrier-Nicolaï, Michel Lubrano, Studies in Nonlinear Dynamics and Econometrics, 01/2023 (Forthcoming)
Abstract
The paper examines the question of non-anonymous Growth Incidence Curves (na-GIC) from a Bayesian inferential point of view. Building on the notion of conditional quantiles of Barnett (1976, "The Ordering of Multivariate Data," Journal of the Royal Statistical Society: Series A 139: 318–55), we show that removing the anonymity axiom leads to a complex and shaky curve that has to be smoothed using a non-parametric approach. We opted for a Bayesian approach using Bernstein polynomials, which provides confidence intervals, tests and a simple way to compare two na-GICs. The methodology is applied to examine wage dynamics in a US university, with particular attention devoted to unbundling and anti-discrimination policies. We detect wage scale compression at higher quantiles for all academics and an apparent pro-female wage increase compared to males. However, this pro-female policy works only for academics and not for the para-academic categories created by the unbundling policy.
Keywords
Academic wage formation, Bayesian inference, Conditional quantiles, Gender policy, Non-anonymous GIC
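The Bernstein-polynomial smoothing idea behind the na-GIC can be illustrated in a few lines of Python. The sketch below is a plain deterministic Bernstein approximation of a shaky curve on [0, 1]; the paper goes further by placing a prior on the coefficients to obtain credible bands and tests, and the raw curve used here is a made-up stand-in.

```python
import numpy as np
from scipy.stats import binom

def bernstein_smooth(p_grid, raw_curve_at, degree=20):
    """Bernstein polynomial smoother of degree `degree` on [0, 1]:
    B(p) = sum_k c_k * C(degree, k) p^k (1-p)^(degree-k), with coefficients
    c_k taken as the raw curve evaluated at k/degree. A deterministic
    sketch; the paper places a prior on the coefficients instead."""
    k = np.arange(degree + 1)
    coeffs = raw_curve_at(k / degree)                    # raw (shaky) curve values
    basis = binom.pmf(k[None, :], degree, p_grid[:, None])
    return basis @ coeffs

# Illustrative shaky na-GIC stand-in, smoothed over a quantile grid
p = np.linspace(0.01, 0.99, 99)
raw = lambda q: 0.02 + 0.03 * q + 0.01 * np.sin(20 * q)
smooth = bernstein_smooth(p, raw)
```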
Ewen Gallic, Michel Lubrano, Pierre Michel, Journal of Public Economic Theory, Vol. 24, No. 5, pp. 944-967, 10/2022
Abstract
Two main nonpharmaceutical policy strategies have been used in Europe in response to the COVID-19 epidemic: one aimed at natural herd immunity and the other at avoiding saturation of hospital capacity by crushing the curve. The two strategies lead to different results in terms of the number of lives saved on the one hand and production loss on the other hand. Using a susceptible–infected–recovered–dead model, we investigate and compare these two strategies. As the results are sensitive to the initial reproduction number, we estimate the latter for 10 European countries for each wave from January 2020 until March 2021, using a double sigmoid statistical model and the Oxford COVID-19 Government Response Tracker data set. Our results show that Denmark, which opted for crushing the curve, managed to minimize both economic and human losses. Natural herd immunity, sought by Sweden and the Netherlands, does not appear to have been a particularly effective strategy, especially for Sweden, both in economic terms and in terms of lives saved. The results are more mixed for other countries, but with no evident trade-off between deaths and production losses.
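For readers unfamiliar with the compartmental setup, a minimal discrete-time SIRD simulation in Python is sketched below, contrasting a higher and a lower reproduction number; all parameter values are illustrative and are not the estimates reported in the paper.

```python
import numpy as np

def sird(beta, gamma=1/10, mu=0.01, days=300, i0=1e-4):
    """Discrete-time SIRD (susceptible-infected-recovered-dead) simulation
    on population shares. Parameter values are illustrative only."""
    S, I, R, D = 1.0 - i0, i0, 0.0, 0.0
    path = []
    for _ in range(days):
        new_inf = beta * S * I            # new infections per day
        new_rec = gamma * (1 - mu) * I    # recoveries
        new_dead = gamma * mu * I         # deaths
        S -= new_inf
        I += new_inf - new_rec - new_dead
        R += new_rec
        D += new_dead
        path.append((S, I, R, D))
    return np.array(path)

# R0 = beta / gamma: a 'herd immunity' scenario (R0 = 2.5)
# versus a 'crush the curve' scenario (R0 = 1.1)
herd = sird(beta=0.25)
crush = sird(beta=0.11)
print(herd[-1, 3], crush[-1, 3])   # cumulative death tolls (population shares)
```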
Edwin Fourrier-Nicolaï, Michel Lubrano, Research on Economic Inequality, Vol. 29, pp. 31-55, 12/2021
Abstract
The growth incidence curve of Ravallion and Chen (2003) is based on the quantile function. Its distribution-free estimator behaves erratically with usual sample sizes, leading to problems in the tails. The authors propose a series of parametric models in a Bayesian framework. A first solution consists in modeling the underlying income distribution using simple densities for which the quantile function has a closed analytical form. This solution is extended by considering a mixture model for the underlying income distribution. However, in this case, the quantile function is semi-explicit and has to be evaluated numerically. The last solution consists in adjusting directly a functional form for the Lorenz curve and deriving its first-order derivative to find the corresponding quantile function. The authors compare these models by Monte Carlo simulations and using UK data from the Family Expenditure Survey. The authors devote particular attention to the analysis of subgroups.
Keywords
Bayesian inference, Growth incidence curve, Distributional changes, Inequality, Mixtures of log-normals, Lorenz curves
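A minimal Python sketch of the distribution-free growth incidence curve discussed above: the growth rate of each quantile between two periods. The simulated lognormal samples are purely illustrative (not the FES data), and the erratic tail behaviour of this raw estimator is what motivates the parametric Bayesian models of the paper.

```python
import numpy as np

def gic(y0, y1, probs=np.linspace(0.01, 0.99, 99)):
    """Distribution-free growth incidence curve of Ravallion and Chen (2003):
    growth rate of the p-th quantile between two periods."""
    q0 = np.quantile(y0, probs)
    q1 = np.quantile(y1, probs)
    return probs, q1 / q0 - 1.0

# Illustrative incomes for two periods
rng = np.random.default_rng(1)
y0 = rng.lognormal(mean=3.0, sigma=0.5, size=2000)
y1 = rng.lognormal(mean=3.1, sigma=0.6, size=2000)
p, g = gic(y0, y1)
print(g[:5], g[-5:])   # tail values are the noisiest part of the raw estimator
```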
Majda Benzidia, Michel Lubrano, Journal of Economic Inequality, Vol. 18, No. 2, pp. 213-238, 06/2020
Abstract
OECD countries have experienced a large increase in top wage inequality. Atkinson (2008) attributes this phenomenon to the superstar theory, leading to a Pareto tail in the wage distribution with a low Pareto coefficient. Do we observe a similar phenomenon for academic wages? We examine wage formation in a public US university, modelling each academic rank with a hybrid mixture formed by a lognormal distribution for regular wages and a Pareto distribution for top wages, within a Bayesian approach. The presence of superstar wages would imply a higher dispersion in the Pareto tail than in the lognormal body. We conclude that academic wages are formed in a different way than other top wages. There is an effort to propose competitive wages to some young Assistant Professors. But when climbing up the wage ladder, we find a phenomenon of wage compression, which is just the contrary of a superstar phenomenon.
Keywords
Wage compression, Wage formation, Tournaments theory, Hybrid mixtures, Bayesian inference, Academic market, Superstar wages
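The hybrid lognormal-Pareto mixture can be written down in a few lines. The Python sketch below evaluates its density with scipy; the mixing weight, lognormal body, Pareto coefficient and threshold are illustrative placeholders rather than posterior estimates, and a lower Pareto coefficient would correspond to the heavier, superstar-type tail discussed in the paper.

```python
import numpy as np
from scipy.stats import lognorm, pareto

def hybrid_pdf(w, p, mu, sigma, alpha, wmin):
    """Density of a hybrid mixture: with probability (1 - p) a lognormal body
    for regular wages, with probability p a Pareto(alpha) tail starting at
    wmin. All parameter values passed below are illustrative."""
    body = lognorm.pdf(w, s=sigma, scale=np.exp(mu))
    tail = pareto.pdf(w, b=alpha, scale=wmin)   # zero below the threshold wmin
    return (1 - p) * body + p * tail

w = np.linspace(1.0, 500.0, 1000)
f = hybrid_pdf(w, p=0.1, mu=4.0, sigma=0.5, alpha=2.5, wmin=150.0)
```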
Shaozhen Han, Guoming Li, Michel Lubrano, Zhou Xun, Journal of Cleaner Production, Vol. 253, pp. 119858, 04/2020
Abstract
This study investigates the differences between zombie firms and non-zombie firms in corporate social responsibility activities such as reporting, disclosure and fulfillment. Using Chinese listed company data collected from 2009 to 2016, we apply a three-stage model with a double Heckman correction to deal with potential self-selection/endogeneity bias and to measure the differences consistently. We find that zombie firms are less willing to release standalone corporate social responsibility reports than non-zombie firms. Among companies that release standalone corporate social responsibility reports, the corporate social responsibility disclosure of zombie firms is at least not worse than that of non-zombie firms, but their corporate social responsibility fulfillment is significantly lower. From this gap between disclosure and fulfillment, we conclude that zombie firms behave hypocritically, owing to the absence of control over corporate social responsibility. We suggest that government should enhance supervision over zombie firms' corporate social responsibility activities, and over subsidies towards them, in order to lower their economic damage. Supplementary analyses provide some clues concerning the heterogeneity of this inconsistency in terms of external support characteristics, ownership and censorship, which require further study.
Keywords
Hypocrisy, Fulfillment, Disclosure, Reports, Zombie firms, Corporate social responsibility
Edwin Fourrier-Nicolaï, Michel Lubrano, Journal of Economic Inequality, Vol. 18, No. 1, pp. 91-111, 03/2020
Abstract
TIP curves are cumulative poverty gap curves used for representing the three different aspects of poverty: incidence, intensity and inequality. The paper provides Bayesian inference for TIP curves, linking their expression to a parametric representation of the income distribution using a mixture of log-normal densities. We treat specifically the question of zero-inflated income data and survey weights, which are two important issues in survey analysis. The advantage of the Bayesian approach is that it takes into account all the information contained in the sample and that it provides small sample credible intervals and tests for TIP dominance. We apply our methodology to evaluate the evolution of child poverty in Germany after 2002, thus providing an update of the portrait of child poverty in Germany given in Corak et al. (Rev. Income Wealth 54(4), 547–571, 2008).
Keywords
Bayesian inference, Survey weights, Mixture model, Zero-inflated model, Inequality, Poverty
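An empirical TIP curve is just the cumulated, ordered poverty gaps. Here is a small Python sketch, with optional survey weights and zero incomes handled naturally; the simulated incomes and the poverty line are illustrative, whereas the paper works with a mixture-of-lognormals parametric representation and GSOEP data.

```python
import numpy as np

def tip_curve(incomes, z, weights=None):
    """Empirical TIP curve: cumulate the relative poverty gaps max(1 - y/z, 0),
    ordered from the poorest upward. Survey weights are optional."""
    y = np.asarray(incomes, dtype=float)
    w = np.ones_like(y) if weights is None else np.asarray(weights, dtype=float)
    order = np.argsort(y)
    y, w = y[order], w[order]
    gaps = np.clip(1.0 - y / z, 0.0, None)          # relative poverty gaps
    p = np.cumsum(w) / w.sum()                      # cumulative population share
    tip = np.cumsum(w * gaps) / w.sum()             # cumulative gap per capita
    return np.concatenate(([0.0], p)), np.concatenate(([0.0], tip))

# Illustrative data and poverty line (60% of the median)
rng = np.random.default_rng(2)
y = rng.lognormal(mean=7.0, sigma=0.7, size=1000)
p, tip = tip_curve(y, z=0.6 * np.median(y))
```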
Zhou Xun, Michel Lubrano, Review of Income and Wealth, Vol. 64, No. 3, pp. 649-678, 09/2018
Tareq Sadeq, Michel Lubrano, Econometrics, Vol. 6, No. 2, 06/2018
Abstract
In 2002, the Israeli government decided to build a wall inside the occupied West Bank. The wall had a marked effect on access to land and water resources as well as to the Israeli labour market. It is difficult to include the effect of the wall in an econometric model explaining poverty dynamics, as the wall was built in the richer region of the West Bank, so a diff-in-diff strategy is needed. Using a Bayesian approach, we treat our two-period repeated cross-section data set as an incomplete data problem, explaining the income-to-needs ratio as a function of time-invariant exogenous variables. This allows us to provide inference results on poverty dynamics. We then build a conditional regression model including a wall variable and state dependence to see how the wall modified the initial results on poverty dynamics. We find that the wall has increased the probability of poverty persistence by 58 percentage points and the probability of poverty entry by 18 percentage points.
Keywords
Bayesian inference, Pseudo panels, Data augmentation, Walls, Poverty dynamics
Michel Lubrano, Abdoul Aziz Junior Ndoye, Computational Statistics and Data Analysis, Vol. 100, pp. 830--846, 08/2016
Abstract
The log-normal distribution is convenient for modelling the income distribution, and it offers an analytical expression for most inequality indices that depends only on the shape parameter of the associated Lorenz curve. A decomposable inequality index can be implemented in the framework of a finite mixture of log-normal distributions so that overall inequality can be decomposed into within-subgroup and between-subgroup components. Using a Bayesian approach and a Gibbs sampler, a Rao-Blackwellization can improve inference results on decomposable income inequality indices. The very nature of the economic question can provide prior information so as to distinguish between the income groups and construct an asymmetric prior density which can reduce label switching. Data from the UK Family Expenditure Survey (FES) (1979 to 1996) are used in an extended empirical application.
Keywords
Label switching
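To make the within/between decomposition concrete, here is a Python sketch of the Theil (GE(1)) decomposition for a mixture of lognormals, using the closed form σ²/2 for each component's Theil index; the mixture parameters are illustrative values rather than Gibbs-sampler posterior draws.

```python
import numpy as np

def theil_decomposition(p, mu, sigma):
    """Within/between Theil (GE(1)) decomposition for a K-component mixture
    of lognormal densities; each component's Theil index has the closed
    form sigma_k^2 / 2. Parameter values below are illustrative."""
    p, mu, sigma = map(np.asarray, (p, mu, sigma))
    m_k = np.exp(mu + sigma**2 / 2)       # component means
    m = np.sum(p * m_k)                   # overall mean
    s_k = p * m_k / m                     # income share of each component
    within = np.sum(s_k * sigma**2 / 2)
    between = np.sum(s_k * np.log(m_k / m))
    return within, between

w, b = theil_decomposition(p=[0.6, 0.4], mu=[3.0, 3.8], sigma=[0.4, 0.6])
print(w, b, w + b)   # the sum is the overall Theil index of the mixture
```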
Zhou Xun, Michel Lubrano, Journal of Applied Econometrics, Vol. 31, No. 4, pp. 756--761, 06/2016
Abstract
We find that the empirical results reported in Chang (Journal of Applied Econometrics 2011; 26(5): 854–871) are contingent on the specification of the model. The use of Heckman's initial conditions combined with observed and not latent lagged dependent variables leads to a counter-intuitive estimation of the true state dependence. The use of Wooldridge's initial conditions together with the observed lagged dependent variable and a proper modelling of censoring provides a much more natural estimate of the true state dependence parameters together with a clearer interpretation of the decision to participate in the labour market in the two-tiered model.
Keywords
Quantitative economics
Michel Lubrano, Abdoul Aziz Junior Ndoye, Research on Economic Inequality, Vol. 22, pp. 449--479, 01/2014
Abstract
We provide a Bayesian inference for a mixture of two Pareto distributions, which is then used to approximate the upper tail of a wage distribution. The model is applied to data from the CPS Outgoing Rotation Group to analyze the recent structure of top wages in the U.S. from 1992 through 2009. We find enormous earnings inequality between the very highest wage earners ("the superstars") and the other high wage earners. These findings are largely in accordance with the alternative explanations combining the superstars model and the tournaments model in hierarchical organization structures. The approach can be used to analyze the recent pay gaps among top executives in large firms so as to exhibit the "superstar" effect.
Keywords
Quantitative economics
Mathieu Goudard, Michel Lubrano, Manchester School, Vol. 81, No. 6, pp. 876-903, 01/2013
Abstract
The theory of human capital is one way to explain individual decisions to produce scientific research. However, this theory, even if it recognizes the importance of time in science, falls short of explaining the existing diversity of scientific output. The present paper introduces the social capital of Bourdieu (1980), Coleman (1988) and Putnam (1995) as a necessary complement to explain the creation of scientific human capital. This paper connects these two concepts by means of a hierarchical econometric model which makes the distinction between the individual level (human capital) and the cluster level of departments (social capital). The paper shows how a collection of variables can be built from a bibliographic database, indicating both individual behaviour, including mobility, and collective characteristics of the department housing individual researchers. The two-level hierarchical model is estimated on fourteen European countries using bibliometric data in the field of economics.
Keywords
Quantitative economics
Claude Gamel, Michel Lubrano, Springer Berlin Heidelberg, pp. 1-32, 08/2011
Abstract
In this introductory chapter, we give a subjective account of the content of Kolm's book "Macrojustice" (2005), which gave rise to the idea of organising in 2006 a round table where this book was discussed by different authors coming from a large variety of horizons: philosophers, economists, econometricians. We leave Serge-Christophe Kolm the task of presenting his theory in the first part of this book. Macrojustice is concerned with social justice and proposes a comprehensive redistributive scheme. Of course, any distributive proposal always raises questions at the ethical, theoretical and practical levels. These questions are at the core of the discussions presented in this book, which is designed as a forum for multidisciplinary exchange.
Claude Gamel, Michel Lubrano, Springer Berlin Heidelberg, 01/2011
Abstract
The Theory of Macrojustice, introduced by S.-C. Kolm, is a stimulating contribution to the debate on the macroeconomic income distribution. The solution called "Equal Labour Income Equalisation" (ELIE) is the result of a three-stage construction: collective agreement on the scheme of labour income redistribution, collective agreement on the degree of equalisation to be chosen in that framework, and individual freedom to exploit one's personal productive capacities (the source of labour income and the sole basis for taxation). This collective book is organised as a discussion around four complementary themes: philosophical aspects of macrojustice, economic analysis of macrojustice, combinations of ELIE with other targeted transfers, and econometric evaluations of ELIE.
Michel Lubrano, Pierre Michel
Abstract
During the Covid-19 pandemic, the Omicron wave was notable for its highly transmissible and contagious variant of concern, coinciding with the availability of a vaccine that had been rolled out well beforehand. In this paper, we address two key questions. First, we seek to design a simple epidemiological model that can best capture the dynamics of Omicron infections. We demonstrate that combining the SIRD and SISD models provides an adequate solution. The second question examines the benefits of vaccination, in terms of both economic activity and lives saved, once the model is implemented. Our results show that without vaccination, the human cost would have been five times higher, and production losses would have doubled, due to stricter confinement measures and a higher death toll. We also quantify the cost of vaccine hesitancy at more than 8,000 extra deaths.
Keywords
Compartment models, COVID-19, Omicron wave, Vaccination benefit, Vaccine hesitancy
Mathias Silva, Michel Lubrano
Abstract
When estimated from survey data alone, the distribution of high incomes in a population may be misrepresented, as surveys typically provide detailed coverage of the lower part of the income distribution, but offer limited information on top incomes. Tax data, in contrast, better capture top incomes, but lack contextual information. To combine these data sources, Pareto models are often used to represent the upper tail of the income distribution. In this paper, we propose a Bayesian approach for this purpose, building on extreme value theory. Our method integrates a Pareto II tail with a semi-parametric model for the central part of the income distribution, and it selects the income threshold separating them endogenously. We incorporate external tax data through an informative prior on the Pareto II coefficient to complement survey micro-data. We find that Bayesian inference can yield a wide range of threshold estimates, which are sensitive to how the central part of the distribution is modelled. Applying our methodology to the EU-SILC micro-data set for 2008 and 2018, we find that using tax-data information from WID introduces no changes to inequality estimates for Nordic countries or the Netherlands, which rely on administrative registers for income data. However, tax data significantly revise survey-based inequality estimates in new EU member states.
Keywords
Extreme value theory, EU-SILC, Bayesian inference, Pareto II, Top income correction
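A minimal Python sketch of the tail-modelling ingredients described above: a Pareto II (Lomax) density above a threshold, combined with a gamma-shaped informative prior on its coefficient into an unnormalised log-posterior (the gamma form follows the companion working paper below). The threshold rule, the scale value and the prior hyperparameters are illustrative placeholders; the paper selects the threshold endogenously and calibrates the prior with WID tax data.

```python
import numpy as np

def pareto2_logpdf(y, alpha, scale, y0):
    """Log-density of a Pareto II (Lomax) tail above the threshold y0."""
    z = (y - y0) / scale
    return np.log(alpha / scale) - (alpha + 1) * np.log1p(z)

def log_posterior_alpha(alpha, y_tail, scale, y0, a0, b0):
    """Unnormalised log-posterior of the Pareto II coefficient alpha:
    tail likelihood plus a Gamma(a0, b0) prior that could encode external
    tax-data information. Hyperparameters here are placeholders."""
    loglik = pareto2_logpdf(y_tail, alpha, scale, y0).sum()
    logprior = (a0 - 1) * np.log(alpha) - b0 * alpha
    return loglik + logprior

# Illustrative use with simulated survey incomes above a candidate threshold
rng = np.random.default_rng(3)
y = rng.lognormal(3.5, 0.8, size=5000)
y0 = np.quantile(y, 0.9)                                  # candidate threshold
scale = np.mean(y[y > y0]) - y0                           # crude scale placeholder
grid = np.linspace(0.5, 6.0, 200)
lp = [log_posterior_alpha(a, y[y > y0], scale, y0, a0=2.0, b0=1.0) for a in grid]
alpha_map = grid[np.argmax(lp)]
print(alpha_map)
```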
Karim Abadir, Michel Lubrano
Abstract
We show that least squares cross-validation (CV) methods share a common structure which has an explicit asymptotic solution, when the chosen kernel is asymptotically separable in bandwidth and data. For density estimation with a multivariate Student t(ν) kernel, the CV criterion becomes asymptotically equivalent to a polynomial of only three terms. Our bandwidth formulae are simple and non-iterative (leading to very fast computations); their integrated squared-error dominates traditional CV implementations; and they alleviate the notorious sample variability of CV and overcome its breakdown in the case of repeated observations. We illustrate with univariate and bivariate applications, of density estimation and nonparametric regressions, to a large dataset of Michigan State University academic wages and experience.
Keywords
Academic Wages, Nonparametric density estimation, Explicit analytical solution, Cross Validation, Bandwidth choice
Mathias Silva, Michel Lubrano
Abstract
Survey data are known for under-reporting rich households while providing extensive information on contextual variables. Tax data provide a better representation of top incomes at the expense of lacking any contextual variables. The literature has therefore developed several methods to combine the two sources of information. For Pareto imputation, the question is how to choose the Pareto model for the right tail of the income distribution. The Pareto I model has the advantage of simplicity, but Jenkins (2017) promoted the use of the Pareto II for its nicer properties, reviewing three different approaches to correct for missing top incomes. In this paper, we propose a Bayesian approach to combine tax and survey data, using a Pareto II tail. We build on the extreme value literature to develop a compound model where the lower part of the income distribution is approximated with a Bernstein polynomial truncated density estimate while the upper part is represented by a Pareto II. This provides a way to estimate the threshold at which the Pareto II tail starts. WID tax data are then used to build prior information for the Pareto coefficient in the form of a gamma prior density, to be combined with the likelihood function. We apply the methodology to the EU-SILC data set to decompose the Gini index. We finally analyse the impact of top income correction on the Growth Incidence Curve between 2008 and 2018 for a group of 23 European countries.
Keywords
Bayesian inference, Pareto II, Profile likelihood, Bernstein density estimation, Top income correction, EU-SILC, JEL codes: C11, D31, D63, I31
Majda Benzidia, Michel Lubrano, Paolo Melindi-Ghidi
Abstract
What is the role of income polarisation for explaining differentials in public funding of education? To answer this question, we provide a new theoretical modelling for the income distribution that can directly monitor income polarisation. It leads to a new income polarisation index where the middle class is represented by an interval. We implement this distribution in a political economy model with endogenous fertility and public/private educational choices. We show that when households vote on public schooling expenditures, polarisation matters for explaining disparities in public education funding across communities. Using micro-data covering two groups of school districts, we find that both income polarisation and income inequality affect public school funding, with opposite signs, whether a Tax Limitation Expenditure (TLE) exists or not.
Keywords
Education politics, Schooling choice, Income polarisation, Probabilistic voting, Bayesian inference
Edwin Fourrier-Nicolaï, Michel Lubrano
Abstract
This paper examines the question of non-anonymous Growth Incidence Curves (na-GIC) from a Bayesian inferential point of view. Building on the notion of conditional quantiles of Barnett (1976), we show that removing the anonymity axiom leads to a non-parametric inference problem. From a Bayesian point of view, an approach using Bernstein polynomials provides a simple solution together with immediate confidence intervals, tests and a way to compare two na-GICs. The paper applies the approach to the question of academic wage formation and tries to shed some light on whether academic recruitment leads to a superstars phenomenon, that is, a large increase in top wages, or not. Equipped with Bayesian na-GICs, we show that wages at Michigan State University experienced a top compression leading to a shrinking of the wage scale. We finally analyse gender and ethnic questions in order to detect whether the implemented pro-active policies were effective.
Keywords
Ethnic discrimination, Gender policy, Wage formation, Bayesian inference, Non-anonymous GIC, Conditional quantiles
Zhou Xun, Michel Lubrano
Abstract
We analyse preference for redistribution and the perceived role of "circumstances" and "effort" in China within the framework of the belief in a just world (BJW) hypothesis, using the 2006 CGSS. As this very rich database does not include the Dalbert questionnaire on GBJW and PBJW, we complemented the CGSS with a survey conducted during the COVID episode in Shanghai and Nanjing. Thanks to this new survey, we could identify the components of PBJW and GBJW inside the traditional opinion variables of the CGSS about the causes of poverty and the desire for redistribution. Using a tri-variate ordered probit model for explaining opinions, we show how treating the decision to migrate as an endogenous variable modifies the usual results of the literature concerning migrants and the effects of the Hukou status. The correlations found validate the distinction between personal BJW and general BJW, a distinction that has important policy implications for the status of migrants.
Keywords
GHK simulator, Marginal effects, Binary endogenous, Conditional correlations, Hukou and migrant workers, Belief in a just world, Inequality perceptions, Preference for redistribution
Michel Lubrano, Zhou Xun
Abstract
This survey paper reviews the recent Bayesian literature on poverty measurement. After introducing Bayesian statistics, we show how Bayesian model criticism could help to revise the international poverty line. Using mixtures of lognormals to model income, we derive the posterior distribution for the FGT, Watts and Sen poverty indices, then for TIP curves (with an illustration on child poverty in Germany) and finally for Growth Incidence Curves. The relation of restricted stochastic dominance with TIP and GIC dominance is detailed with an example based on UK data. Using panel data, we show how to decompose poverty into total, chronic and transient poverty, comparing child and adult poverty in East Germany when redistribution is introduced. When a panel is not available, a Gibbs sampler is used to build a pseudo panel. We illustrate poverty dynamics by examining the consequences of the Wall on poverty entry and poverty persistence in the occupied West Bank.
Keywords
Poverty dynamics, Stochastic dominance, Poverty indices, Mixture model, Bayesian inference
Edwin Fourrier-Nicolaï, Michel Lubrano
Abstract
The growth incidence curve of Ravallion and Chen (2003) is based on the quantile function. Its distribution-free estimator behaves erratically with usual sample sizes, leading to problems in the tails. We propose a series of parametric models in a Bayesian framework. A first solution consists in modelling the underlying income distribution using simple densities for which the quantile function has a closed analytical form. This solution is extended by considering a mixture model for the underlying income distribution. However, in this case, the quantile function is semi-explicit and has to be evaluated numerically. The alternative solution consists in adjusting directly a functional form for the Lorenz curve and deriving its first-order derivative to find the corresponding quantile function. We compare these models first by Monte Carlo simulations and second by using UK data from the Family Expenditure Survey, where we devote particular attention to the analysis of subgroups.
Keywords
Bayesian inference, Growth incidence curve, Inequality
Ewen Gallic, Michel Lubrano, Pierre Michel
Abstract
Arising in China, the global COVID-19 epidemic soon started to spread in Europe. As no medical treatment was available, it became urgent to design optimal non-pharmaceutical policies. With the help of a SIR model, we contrast two policies, one based on herd immunity (adopted by Sweden and the Netherlands), the other based on avoiding ICU capacity shortage. Both policies led to the danger of a second wave. Policy efficiency corresponds to the absence or limitation of a second wave. The aim of the paper is to measure the efficiency of these policies using statistical models and data. As a measure of efficiency, we propose the ratio of the size of the two observed waves, using a double sigmoid model coming from the biological growth literature. The Oxford data set provides a policy severity index together with the observed numbers of cases and deaths. This severity index is used to illustrate the key features of national policies for ten European countries and to help with statistical inference. We estimate basic reproduction numbers, identify key moments of the epidemic and provide an instrument for comparing the two reported waves between January and October 2020. We reach the following conclusions. With a soft but long-lasting policy, Sweden managed to master the first wave for cases thanks to a low R0, but at the cost of a large number of deaths compared to other Nordic countries, with Denmark taken as an example. We predict the failure of the herd immunity policy for the Netherlands. We could not identify a clear sanitary policy for the large European countries; what we observed was a lack of control for observed cases, but not for deaths.
Keywords
SIR models, Phenomenological models, Double sigmoid models, Sanitary policies, Herd immunity, ICU capacity constraint
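The double sigmoid idea can be illustrated by fitting the sum of two logistic curves to a cumulative-case series, with the ratio of the two wave sizes serving as the efficiency measure mentioned above. The Python sketch below uses simulated data rather than the Oxford or observed national series, and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_sigmoid(t, K1, r1, tau1, K2, r2, tau2):
    """Sum of two logistic growth curves: cumulative cases with two waves.
    K: final size of each wave, r: growth rate, tau: midpoint of each wave."""
    wave1 = K1 / (1 + np.exp(-r1 * (t - tau1)))
    wave2 = K2 / (1 + np.exp(-r2 * (t - tau2)))
    return wave1 + wave2

# Fit to an illustrative simulated cumulative-case series
t = np.arange(300, dtype=float)
rng = np.random.default_rng(4)
cases = double_sigmoid(t, 5e4, 0.08, 70, 2e5, 0.06, 220) * rng.normal(1, 0.02, t.size)
p0 = [cases.max() / 4, 0.1, 60, cases.max(), 0.1, 200]    # rough starting values
params, _ = curve_fit(double_sigmoid, t, cases, p0=p0, maxfev=20000)
wave_ratio = params[3] / params[0]   # size of the second wave relative to the first
print(wave_ratio)
```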
Edwin Fourrier-Nicolaï, Michel Lubrano
Abstract
This paper investigates the evolution of wage formation in a Mincer model with sample selection, for which we develop Bayesian inference together with growth incidence and poverty growth curves. We estimate the effect of an exogenous exposure to Western TV broadcasts on labour market participation and wage inequality in East Germany after the German reunification. Using the GSOEP, we find evidence that Western television significantly increased wage inequality among males, while it significantly affected female labour participation and led the less productive females to drop out of the market, thus hiding a large increase in wage inequality among females.
Keywords
Bayesian inference, Labour market, Distributional changes, Sample selection, Wage inequality
Shaozhen Han, Guoming Li, Michel Lubrano, Zhou Xun
Abstract
This study investigates the differences between zombie firms and non-zombie firms in corporate social responsibility activities such as reporting, disclosure and fulfillment. Using Chinese listed company data collected from 2009 to 2016, we apply a three-stage model with a double Heckman correction to deal with potential self-selection/endogeneity bias and to measure the differences consistently. We find that zombie firms are less willing to release standalone corporate social responsibility reports than non-zombie firms. Among companies that release standalone corporate social responsibility reports, the corporate social responsibility disclosure of zombie firms is at least not worse than that of non-zombie firms, but their corporate social responsibility fulfillment is significantly lower. From this gap between disclosure and fulfillment, we conclude that zombie firms behave hypocritically, owing to the absence of control over corporate social responsibility. We suggest that government should enhance supervision over zombie firms' corporate social responsibility activities, and over subsidies towards them, in order to lower their economic damage. Supplementary analyses provide some clues concerning the heterogeneity of this inconsistency in terms of external support characteristics, ownership and censorship, which require further study.
Keywords
Corporate social responsibility, Zombie firms, Reports, Disclosure, Fulfillment, Hypocrisy
Edwin Fourrier-Nicolaï, Michel Lubrano
Abstract
A long-standing literature has investigated the formation of aspirations and how they shape human behaviours, but recent interest has been devoted to the interplay between aspirations and inequality. Because aspirations are socially determined, household investment decisions tend to be reproduced according to the social context, which causes inequality to persist. We empirically examine the role of aspirations on inequality using a natural experiment. We exploit an exogenous variation of social aspirations determined by the exposure to Western German TV broadcasts in the GDR before the reunification. We measure the treatment effect on wage inequality by comparing inequality changes between the treatment and the control regions after reunification. We use a heteroskedastic parametric model for income with a treatment effect and sample selection into the labour market. We derive analytical formulae for the growth incidence curve of Ravallion and Chen (2003) and the poverty growth curve of Son (2004) for the log-normal distribution. Based on those curves, we provide Bayesian inference and a set of tests related to stochastic dominance criteria. We find evidence that aspirations, through exposure to Western German broadcasts, have significantly affected inequality. We find that this effect was detrimental in terms of inequality and poverty. However, we cannot conclude about the persistence of the effect after 1995.
Keywords
Inequality, Social aspirations, Bayesian inference, Treatment effect
Edwin Fourrier-Nicolaï, Michel Lubrano
Abstract
TIP curves are cumulative poverty gap curves used for representing the three different aspects of poverty: incidence, intensity and inequality. The paper provides Bayesian inference for TIP curves, linking their expression to a parametric representation of the income distribution using a mixture of lognormal densities. We treat specifically the question of zero-inflated income data and survey weights, which are two important issues in survey analysis. The advantage of the Bayesian approach is that it takes into account all the information contained in the sample and that it provides small sample confidence intervals and tests for TIP dominance. We apply our methodology to evaluate the evolution of child poverty in Germany after 2002, thus providing an update of the portrait of child poverty in Germany given in Corak et al. (2008).
Keywords
Bayesian inference, Mixture model, Survey weights, Zero-inflated model, Poverty, Inequality
Majda Benzidia, Michel Lubrano
Abstract
The paper investigates academic wage formation inside Michigan State University and develops tools in order to detect the presence of possible superstars. We model wage distributions using a hybrid mixture formed by a lognormal distribution for regular wages and a Pareto distribution for higher wages, within a Bayesian approach, which is particularly well adapted for inference in hybrid mixtures. The presence of superstars is detected by studying the shape of the Pareto tail. Contrary to usual expectations, we did find some evidence of superstars, but only when recruiting Assistant Professors. When climbing up the wage ladder, superstars disappear. For Full Professors, we found a phenomenon of wage compression, as if there were an upper bound, which is just the contrary of a superstar phenomenon. Moreover, a dynamic analysis shows that many recruited superstars did not fulfill the university's expectations, as they either were not promoted or left for lower-ranked universities.
Keywords
Hybrid mixtures, Academic market, Wage formation, Superstars, Tournaments theory, Bayesian inference
Majda Benzidia, Michel Lubrano, Paolo Melindi-Ghidi
Abstract
Do communities with the same level of inequality but a different level of income polarisation perform differently in terms of public schooling? To answer this question, we extend the theoretical model of schooling choice and voting developed by de la Croix and Doepke (2009), introducing a more general income distribution characterised by a three-member mixture instead of a single uniform distribution. We show that not only income inequality, but also income polarisation, matters in explaining disparities in public education quality across communities. Public schooling is an important issue for the middle class, which is more inclined to pay higher taxes in return for better public schools. Contrastingly, poorer households may be less concerned about public education, while rich parents are more willing to opt-out of the public system, sending their children to private schools. Using micro-data covering 724 school districts of California and introducing a new measure of income polarisation, we find that school quality in low-income districts depends mainly on income polarisation, while in richer districts it depends mainly on income inequality.
Keywords
Schooling choice, Income polarisation, Probabilistic voting, Education politics, Bayesian inference
Zhou Xun, Michel Lubrano
Abstract
This paper provides a new estimation of an international poverty line based on a Bayesian approach. We found that the official poverty lines of the poorest countries are related to the countries' mean consumption level. This new philosophy is to be compared to the previous assumptions made by the World Bank in favour of an absolute poverty line. We propose a new international poverty line at $1.48 per day (2005 PPP) based on a reference group consumption level. This figure is much higher than that proposed by the World Bank ($1.25 in 2005 PPP), but still within a reasonable confidence interval. By this standard, there are more than 1.7 billion people living in poverty.
Keywords
Poverty line, Bayesian inference
Michel Lubrano, Abdoul Aziz Junior Ndoye
Abstract
We develop a reliable Bayesian inference for the RIF-regression model of Firpo, Fortin and Lemieux (Econometrica, 2009), in which we first estimate the log wage distribution by a mixture of normal densities. This approach is pursued so as to provide better estimates in the upper tail of the wage distribution as well as valid confidence intervals for the Oaxaca-Blinder decomposition. We apply our method to a Mincer equation for analysing the recent changes in the U.S. wage structure and in earnings inequality. Our analysis uses data from the CPS Outgoing Rotation Group (ORG) from 1992 to 2009. We find first that the largest part (around 77% on average) of the recent changes in U.S. wage inequality is explained by the wage structure effect, and second that earnings inequality is rising more at the top end of the wage distribution, even in the most recent years. The decline in the unionisation rate has a small impact on total wage inequality, while differences in returns to education and gender discrimination are the dominant factors accounting for these recent changes.
Keywords
Oaxaca-Blinder decomposition, Bayesian inference, Quantile regression, Unconditional quantile, Influence function
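As a pointer to the mechanics behind the RIF-regression approach, here is a Python sketch of the recentered influence function of a quantile; the density at the quantile is estimated with a Gaussian KDE here, whereas the paper uses a mixture of normal densities, and the simulated log wages are purely illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def rif_quantile(y, tau):
    """Recentered influence function of the tau-th quantile
    (Firpo, Fortin and Lemieux, 2009):
    RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau}) / f(q_tau).
    The density at the quantile is estimated by a Gaussian KDE here."""
    q = np.quantile(y, tau)
    f_q = gaussian_kde(y)(q)[0]
    return q + (tau - (y <= q).astype(float)) / f_q

# The RIF can then serve as the dependent variable of an OLS (Mincer-type)
# regression, whose coefficients feed an Oaxaca-Blinder decomposition.
rng = np.random.default_rng(5)
log_wage = rng.normal(2.8, 0.6, size=3000)
rif90 = rif_quantile(log_wage, 0.90)
print(rif90.mean(), np.quantile(log_wage, 0.90))   # the RIF mean recovers the quantile
```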