# Publications

We study the impact of socioeconomic factors on two key parameters of epidemic dynamics. Specifically, we investigate a parameter capturing the rate of deceleration at the very start of an epidemic, and a parameter that reflects the pre-peak and post-peak dynamics at the turning point of an epidemic like coronavirus disease 2019 (COVID-19). We find two important results. First, policies to fight COVID-19 (such as social distancing and containment) have been effective in reducing the overall number of new infections: they influence not only the epidemic peaks, but also the speed at which the disease spreads in its early stages. Second, healthcare infrastructure is just as effective as anti-COVID policies, not only in preventing an epidemic from spreading too quickly at the outset, but also in creating the desired dynamic around peaks: slow spreading, then rapid disappearance.

We propose Fieller-type methods for inference on generalized entropy inequality indices in the two-sample problem, covering both tests of the statistical significance of the difference in indices and the construction of a confidence set for this difference. In addition to irregularities arising from thick distributional tails, standard inference procedures are prone to identification problems because of the ratio transformation that defines the considered indices. Simulation results show that our proposed method outperforms existing counterparts, including simulation-based permutation methods, and that results are robust to different assumptions about the shape of the null distributions. Improvements are most notable for indices that put more weight on the right tail of the distribution and for sample sizes that match macroeconomic-type inequality analysis. While irregularities arising from the right tail have long been documented, we find that left-tail irregularities are equally important in explaining the failure of standard inference methods. We apply our proposed method to analyze per-capita income inequality across U.S. states and non-OECD countries. Empirical results illustrate how Fieller-based confidence sets can: (i) differ consequentially from available ones, leading to conflicts in test decisions, and (ii) reveal prohibitive estimation uncertainty in the form of unbounded outcomes, which serves as a proper warning against flawed interpretations of statistical tests.
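As a self-contained illustration of the Fieller principle behind this abstract, the sketch below computes a Fieller-type confidence set for the simplest case: a ratio of two independent sample means under a normal approximation. The paper's generalized entropy indices involve more elaborate ratio transformations; the function name `fieller_ci` and the setup here are illustrative assumptions, not the authors' procedure. The key feature survives even in this toy case: when the denominator is imprecisely estimated, the set can be unbounded rather than a finite interval.

```python
import numpy as np

def fieller_ci(x, y, z=1.96):
    """Fieller confidence set for the ratio mean(x)/mean(y).
    Illustrative sketch: independent samples, normal approximation.
    Returns (lo, hi) for a bounded interval, or None when the set
    is unbounded (the 'proper warning' case in the abstract)."""
    a, b = x.mean(), y.mean()
    v11 = x.var(ddof=1) / len(x)   # variance of mean(x)
    v22 = y.var(ddof=1) / len(y)   # variance of mean(y)
    # The set {theta : (a - theta*b)^2 <= z^2 (v11 + theta^2 v22)}
    # is a quadratic inequality A*theta^2 + B*theta + C <= 0 with:
    A = b**2 - z**2 * v22
    B = -2 * a * b
    C = a**2 - z**2 * v11
    disc = B**2 - 4 * A * C
    if A > 0 and disc >= 0:
        lo = (-B - np.sqrt(disc)) / (2 * A)
        hi = (-B + np.sqrt(disc)) / (2 * A)
        return lo, hi
    return None  # denominator too noisy: unbounded confidence set
```

When `A <= 0` the denominator's mean is not significantly different from zero at the chosen level, and the confidence set is the whole line or a complement of an interval, which is exactly the kind of unbounded outcome the abstract flags.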

We show that least squares cross-validation methods share a common structure which has an explicit asymptotic solution when the chosen kernel is asymptotically separable in bandwidth and data. For density estimation with a multivariate Student t(ν) kernel, the cross-validation criterion becomes asymptotically equivalent to a polynomial of only three terms. Our bandwidth formulae are simple and noniterative, leading to very fast computations; their integrated squared error dominates that of traditional cross-validation implementations; they alleviate the notorious sample variability of cross-validation; and they overcome its breakdown in the case of repeated observations. We illustrate our method with univariate and bivariate applications of density estimation and nonparametric regression to a large dataset of Michigan State University academic wages and experience.
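For context, the sketch below implements the standard least-squares cross-validation criterion for a univariate Gaussian-kernel density estimate — the grid-search baseline that closed-form bandwidth formulae are designed to replace. The exact double-sum expression used here is a textbook identity for the Gaussian kernel; the helper names and the grid are illustrative choices, not the paper's Student-t formulae.

```python
import numpy as np

def lscv(h, x):
    """Least-squares cross-validation criterion
    LSCV(h) = integral(fhat^2) - (2/n) * sum_i fhat_{-i}(x_i)
    for a Gaussian-kernel density estimate, in exact closed form."""
    n = len(x)
    d2 = (x[:, None] - x[None, :]) ** 2
    # integral of fhat^2: Gaussian convolution identity (includes i == j)
    t1 = np.exp(-d2 / (4 * h**2)).sum() / (n**2 * h * np.sqrt(4 * np.pi))
    # leave-one-out term (excludes i == j)
    off = np.exp(-d2 / (2 * h**2))
    np.fill_diagonal(off, 0.0)
    t2 = 2 * off.sum() / (n * (n - 1) * h * np.sqrt(2 * np.pi))
    return t1 - t2

def lscv_bandwidth(x, grid=None):
    """Pick the bandwidth minimising LSCV over a (scale-adapted) grid.
    This brute-force search is the slow, sample-variable baseline."""
    if grid is None:
        grid = np.geomspace(0.05, 2.0, 60) * x.std(ddof=1)
    return min(grid, key=lambda h: lscv(h, x))
```

The O(n²) pairwise sums and the grid search illustrate why a noniterative closed-form bandwidth is attractive for large samples; the breakdown under repeated observations mentioned in the abstract shows up here as the criterion diverging as `h → 0` when `d2` has off-diagonal zeros.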

Complex models are frequently employed to describe physical and mechanical phenomena. In this setting, we have an input $X$ in a general space and an output $Y = f(X)$, where $f$ is a very complicated function whose evaluation at each new input is computationally very expensive. We are given two sets of observations of $X$, $S_1$ and $S_2$, of different sizes, such that only $f(S_1)$ is available. We tackle the problem of selecting a subset $S_3 \subset S_2$ of smaller size on which to run the complex model $f$, such that the empirical distribution of $f(S_3)$ is close to that of $f(S_1)$. We suggest three algorithms to solve this problem and show their efficiency using simulated datasets and the Airfoil self-noise data set.
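A minimal sketch of the selection objective, assuming a one-dimensional sample and using greedy forward selection under the energy distance as a stand-in criterion. This is not one of the paper's three algorithms — only an illustration of what "the empirical distribution of the subset is close to that of the reference sample" means operationally.

```python
import numpy as np

def energy_distance(a, b):
    """Energy-distance statistic between two 1-D samples:
    2*E|A - B| - E|A - A'| - E|B - B'| (V-statistic form)."""
    ab = np.abs(a[:, None] - b[None, :]).mean()
    aa = np.abs(a[:, None] - a[None, :]).mean()
    bb = np.abs(b[:, None] - b[None, :]).mean()
    return 2 * ab - aa - bb

def greedy_subset(s1, s2, k):
    """Greedily pick k points from s2 whose empirical distribution is
    close to s1's, by forward minimisation of the energy distance.
    Illustrative stand-in, not the paper's algorithms."""
    chosen = []
    remaining = list(range(len(s2)))
    for _ in range(k):
        best = min(
            remaining,
            key=lambda i: energy_distance(
                np.array([s2[j] for j in chosen] + [s2[i]]), s1
            ),
        )
        chosen.append(best)
        remaining.remove(best)
    return np.array([s2[i] for i in chosen])
```

In the paper's setting the matching would have to operate on the inputs $X$ (since $f(S_2)$ is unavailable); here the same mechanics are shown on raw samples for brevity.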

Earnings inequality in Germany has increased dramatically. Measuring inequality locally at the level of cities annually since 1985, we find that behind this development is the rapidly worsening inequality in the largest cities, driven by increasing earnings polarisation. In the cross-section, local earnings inequality rises substantially with city size, and this city-size inequality penalty has increased steadily since 1985, reaching an elasticity of 0.2 in 2010. Inequality decompositions reveal that overall earnings inequality is almost fully explained by the within-locations component, which in turn is driven by the largest cities. The worsening inequality in the largest cities is amplified by their greater population weight. Examining the local earnings distributions directly reveals that this is due to increasing earnings polarisation that is strongest in the largest places. Both upper and lower distributional tails become heavier over time, and are the heaviest in the largest cities. We establish these results using a large and spatially representative administrative data set, and address the top-coding problem in these data using a parametric distribution approach that outperforms standard imputations.
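The abstract mentions a parametric approach to the top-coding problem. As a generic, hedged illustration (not the authors' specification), the sketch below fits a Pareto upper tail by the Hill maximum-likelihood estimator and replaces top-coded earnings with the fitted tail's conditional mean above the cap — a common baseline for this kind of imputation.

```python
import numpy as np

def hill_alpha(x, xmin):
    """Hill MLE of the Pareto tail index alpha, fitted to
    observations at or above the tail threshold xmin."""
    tail = x[x >= xmin]
    return len(tail) / np.log(tail / xmin).sum()

def impute_topcoded(x, cap, xmin):
    """Replace values censored at the top-code `cap` by the Pareto
    conditional mean E[X | X >= cap] = cap * alpha / (alpha - 1)
    (requires alpha > 1). Rough sketch: alpha is fitted only on the
    uncensored tail observations, which biases it somewhat."""
    alpha = hill_alpha(x[x < cap], xmin)
    out = x.copy().astype(float)
    out[x >= cap] = cap * alpha / (alpha - 1)
    return out
```
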

Worrisome topics, such as climate change, economic crises, or pandemics including Covid-19, are increasingly present and pervasive due to digital media and social networks. Do worries triggered by such topics affect the cognitive capacities of young adults? In an online experiment during the Covid-19 pandemic (N=1503), we test how the cognitive performance of university students responds when exposed to topics discussing (i) current adverse mental health consequences of social restrictions or (ii) future labor market hardships linked to the economic contraction. Moreover, we study how such a response is affected by a performance goal. We find that the labor market topic increases cognitive performance when it is motivated by a goal, consistent with a ‘tunneling effect’ of scarcity or a positive stress effect. However, we show that the positive reaction is mainly concentrated among students with larger financial and social resources, pointing to an inequality-widening mechanism. Conversely, we find limited support for a negative stress effect or a ‘cognitive load effect’ of scarcity, as the mental health topic has a negative but insignificant average effect on cognitive performance. Yet, there is a negative response among psychologically vulnerable individuals when the payout is not conditioned on reaching a goal.

What is the role of income polarization in explaining differentials in public funding of education? To answer this question, we provide a new theoretical model of the income distribution that can directly monitor income polarization. It leads to a new income polarization index in which the middle class is represented by an interval. We implement this distribution in a political economy model with endogenous fertility and public/private educational choices. We show that when households vote on public schooling expenditures, polarization matters for explaining disparities in public education funding across communities. Using micro-data covering two groups of school districts, we find that income polarization and income inequality affect public school funding with opposite signs, whether or not a Tax Limitation Expenditure (TLE) is in place.

This paper introduces an autoregressive conditional beta (ACB) model that allows regressions with dynamic betas (or slope coefficients) and residuals with GARCH conditional volatility. The model fits in the (quasi) score-driven approach recently proposed in the literature, and it is semi-parametric in the sense that the distributions of the innovations are not necessarily specified. The time-varying betas are allowed to depend on past shocks and exogenous variables. We establish the existence of a stationary solution for the ACB model, the invertibility of the score-driven filter for the time-varying betas, and the asymptotic properties of one-step and multistep QMLEs for the new ACB model. The finite sample properties of these estimators are studied by means of an extensive Monte Carlo study. Finally, we also propose a strategy to test for the constancy of the conditional betas. In a financial application, we find evidence for time-varying conditional betas and highlight the empirical relevance of the ACB model in a portfolio and risk management empirical exercise.
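To fix ideas, here is a stylized simulation of a regression with a score-driven time-varying beta. This is a deliberately simplified GAS-type recursion with Gaussian errors and constant residual volatility — not the paper's exact ACB specification, which allows GARCH residuals, exogenous variables, and unspecified innovation distributions; all parameter values are illustrative.

```python
import numpy as np

def simulate_acb(n, omega=0.02, a=0.05, b=0.9, sigma=0.5, seed=0):
    """Simulate y_t = beta_t * x_t + eps_t with a stylized
    score-driven beta recursion:
        beta_{t+1} = omega + a * s_t + b * beta_t,
    where s_t = x_t * eps_t is proportional to the Gaussian score
    of beta_t. Since E[s_t] = 0 and |b| < 1, beta_t is stationary
    with unconditional mean omega / (1 - b)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    eps = rng.normal(scale=sigma, size=n)
    beta = np.empty(n)
    beta[0] = omega / (1 - b)  # start at the unconditional level
    y = np.empty(n)
    for t in range(n):
        y[t] = beta[t] * x[t] + eps[t]
        if t + 1 < n:
            beta[t + 1] = omega + a * x[t] * eps[t] + b * beta[t]
    return y, x, beta
```

With these illustrative parameters the beta path fluctuates around 0.02 / (1 − 0.9) = 0.2, which is the kind of persistent, shock-driven slope dynamics the constancy test in the abstract is designed to detect.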

Lower tariffs typically raise productivity, production, and trade, increasing the benefits from building infrastructure. Infrastructure spending by governments should therefore increase after countries open up to trade. I test this hypothesis empirically using a trade reform in India and find that a 1 percentage point reduction in tariffs increased states’ infrastructure spending by 0.5% between 1991 and 2001. To understand the mechanisms behind my empirical findings, I develop and calibrate a multi-region model of international trade, private capital accumulation, and infrastructure spending, in which each government chooses such spending to maximize its state’s welfare. I find that, had governments chosen infrastructure optimally following the reform, it would have increased by 60% on average. The actual increase, based on my empirical findings, was about 29%. Counterfactual exercises show that raising aggregate infrastructure towards its optimum following the trade reform would increase state GDP by 7 percentage points on average.