We consider different approaches for assessing variable importance in clustering. We focus on clustering using binary decision trees (CUBT), a non-parametric top-down hierarchical clustering method designed for both continuous and nominal data. We propose a measure of variable importance for this method similar to the one used in Breiman's classification and regression trees. This score is useful for ranking the variables in a dataset, determining which variables are the most important, and detecting irrelevant ones. We analyze both the stability and the efficiency of this score on different data simulation models in the presence of noise, and compare it to other classical variable importance measures. Our experiments show that variable importance based on CUBT is considerably more efficient than other approaches across a large variety of situations.
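A CART-style importance score of the kind described above can be sketched as follows: each split in a fitted tree records which variable it used and the resulting decrease in within-node heterogeneity, and a variable's importance aggregates these decreases, normalized so the top variable scores 100. This is a hedged illustration of the general idea, not the CUBT implementation; the `variable_importance` helper and the split records are hypothetical.

```python
def variable_importance(splits):
    """Aggregate per-split heterogeneity decreases into a per-variable score.

    splits: list of (variable_name, heterogeneity_decrease) pairs, one per
    split of the fitted tree. Scores are normalized so the largest is 100.
    """
    totals = {}
    for var, decrease in splits:
        totals[var] = totals.get(var, 0.0) + decrease
    top = max(totals.values())
    return {var: 100.0 * t / top for var, t in totals.items()}

# Hypothetical split records from a fitted clustering tree.
splits = [("x1", 4.0), ("x2", 1.0), ("x1", 2.0), ("x3", 0.5)]
scores = variable_importance(splits)  # x1 scores 100; others are relative to it
```

Irrelevant variables are then flagged as those whose normalized score falls below a chosen threshold.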
In the aftermath of the U.S. financial crisis, both a sharp drop in employment and a surge in corporate cash have been observed. In this paper, based on U.S. data, we argue that the negative relationship between the corporate cash ratio and employment is systematic, both over time and across firms. We develop a dynamic general equilibrium model where heterogeneous firms need cash and external liquid funds in their production process. We analyze the dynamic impact of aggregate shocks and the cross-firm impact of idiosyncratic shocks. We show that external liquidity shocks generate a negative comovement between the cash ratio and employment, as documented in the data.
After-tax income inequality has risen since the mid-1990s, as increases in market income inequality have not been offset by greater fiscal redistribution. This paper argues that the substantial increase in consumer goods diversity has mitigated mounting political pressures for redistribution. Within a probabilistic voting framework, we demonstrate that if the share of diversified goods in the consumption bundle increases sufficiently with income, then an increase in goods diversity can reduce the political equilibrium tax rate. Focusing on OECD countries, we find empirical support for both the model's micro-political foundations and the implied relation between goods diversity and fiscal policy outcomes.
Bilateral bargaining between a multiple-worker firm and individual employees leads to overhiring. With a concave production function, the firm can reduce the marginal product by hiring an additional worker, thereby reducing the bargaining wage paid to all existing employees. We show that this externality is amplified when firms can adjust hours per worker as well as employment. Firms keep down workers’ wage demands by reducing the number of hours per worker and the resulting labor disutility. Our finding is particularly relevant for European economies where hours adjustment plays an important role.
Natural experiments provide robust identifying assumptions for the estimation of policy effects. Yet their use for policy design is often limited by the difficulty of extrapolating from reduced-form estimates. In this study, we exploit an age condition in the eligibility for social assistance in France, which lends itself to a regression discontinuity (RD) design. We suggest making the underlying labor supply model explicit, i.e., translating the reduced-form discontinuity into discontinuous changes in disposable incomes. This exercise shows the potential of combining natural experiments and behavioral models. In particular, we can test the external validity of the combined approach, and we find that it predicts remarkably well the effect of a subsequent reform that extended transfers to the working poor. The model is then used to simulate the extension of social assistance to young people; it predicts that a transfer program with an in-work component would not create further disincentives to work in this population.
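The reduced-form side of such an RD design can be illustrated with a minimal sharp-RD estimator: fit a separate linear trend on each side of the eligibility cutoff and take the jump in predicted outcomes at the cutoff. This is a sketch of the generic technique, not the paper's estimation; the cutoff value, bandwidth, and data below are illustrative assumptions.

```python
def _linear_fit(xs, ys):
    """Simple least-squares fit y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rd_estimate(data, cutoff, bandwidth):
    """Sharp RD: jump in the outcome at the cutoff, from separate linear
    fits within the bandwidth on each side of the cutoff."""
    left = [(x, y) for x, y in data if cutoff - bandwidth <= x < cutoff]
    right = [(x, y) for x, y in data if cutoff <= x <= cutoff + bandwidth]
    a_l, b_l = _linear_fit(*zip(*left))
    a_r, b_r = _linear_fit(*zip(*right))
    return (a_r + b_r * cutoff) - (a_l + b_l * cutoff)

# Illustrative, noiseless data: the outcome jumps by 2 at a hypothetical
# age cutoff of 25.
data = [(x, 1 + 0.1 * x + (2 if x >= 25 else 0))
        for x in [20 + 0.5 * i for i in range(21)]]
effect = rd_estimate(data, cutoff=25, bandwidth=5)
```

The structural step described in the abstract then reinterprets this jump as a discontinuous change in disposable income inside an explicit labor supply model.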
Standard results about portfolio optimization suggest that the allocation to real estate in a mixed-asset portfolio should be around 15–20%. However, institutional investors' actual share in real estate is significantly smaller, around 7–9%. Many studies have addressed this discrepancy, but no consensus has emerged to date. In this paper, we build an allocation model that can explain the empirically observed weights. For this purpose, we account for the term structure of all standard financial assets as well as of the real estate asset class (expected returns, volatilities and correlations depending on the time to maturity). We propose a dynamic portfolio optimization model that is tractable, fits the data well when adequately calibrated, and allows portfolio weights to be analyzed with respect to the whole term structure. In this framework, we provide explicit and operational solutions to the dynamic mixed-asset portfolio allocation problem (cash, real estate, stocks and bonds). The results show that accounting for investment horizon and mean-reverting dynamics allows a better examination of how portfolio allocations depend on both risk aversion and investment horizon.
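Why mean reversion interacts with investment horizon can be shown with a small calculation, a sketch under simple AR(1) assumptions rather than the paper's term-structure model: if per-period returns follow an AR(1) with persistence phi, mean reversion (phi < 0) makes the per-period variance of cumulative returns fall with the horizon, so the asset is effectively less risky for long-horizon investors. All parameter values below are hypothetical.

```python
def per_period_variance(phi, sigma2, horizon):
    """Variance per period of the cumulative return over `horizon` periods
    when returns follow a stationary AR(1): r_t = phi * r_{t-1} + eps_t,
    with Var(eps_t) = sigma2. Uses the AR(1) autocovariance
    Cov(r_s, r_t) = gamma0 * phi**|s - t|."""
    gamma0 = sigma2 / (1 - phi ** 2)  # stationary variance of r_t
    total = sum(gamma0 * phi ** abs(s - t)
                for s in range(horizon) for t in range(horizon))
    return total / horizon

# Mean reversion (phi < 0): per-period risk shrinks as the horizon grows.
short = per_period_variance(-0.3, 1.0, 1)
long_run = per_period_variance(-0.3, 1.0, 20)
```

With phi = 0 (i.i.d. returns) the per-period variance is flat in the horizon, which is why horizon effects on allocation only appear once return dynamics are modeled.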
We consider a game where a finite number of retailers choose a location, given that their potential consumers are distributed on a network. Retailers do not compete on price but only on location, so each consumer shops at the closest store. We show that when the number of retailers is large enough, the game admits a pure Nash equilibrium, and we construct it. We then compare the equilibrium cost borne by the consumers with the cost that could be achieved if the retailers followed the dictate of a benevolent planner. We perform this comparison in terms of the Price of Anarchy (i.e., the ratio of the worst equilibrium cost to the optimal cost) and the Price of Stability (i.e., the ratio of the best equilibrium cost to the optimal cost). We show that, asymptotically in the number of retailers, these ratios are bounded by two and one, respectively.
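The two ratios can be made concrete with a brute-force toy version of the game, an illustration on a small line network with two retailers and unit-spaced consumers, not the paper's construction: enumerate all pure location profiles, keep the Nash equilibria, and compare their consumer costs to the planner's optimum. With only two retailers both ratios equal 2 in this toy; the bounds of two and one stated above are asymptotic in the number of retailers.

```python
from itertools import product

POINTS = range(11)  # consumers and admissible locations on a line network

def consumer_cost(locs):
    """Total distance traveled when each consumer shops at the closest store."""
    return sum(min(abs(x - a) for a in locs) for x in POINTS)

def payoff(a, b):
    """Consumers attracted by the retailer at a against a rival at b;
    ties are split evenly."""
    p = 0.0
    for x in POINTS:
        da, db = abs(x - a), abs(x - b)
        p += 1.0 if da < db else 0.5 if da == db else 0.0
    return p

def is_nash(a, b):
    """Neither retailer can attract more consumers by relocating alone."""
    return all(payoff(a, b) >= payoff(a2, b) for a2 in POINTS) and \
           all(payoff(b, a) >= payoff(b2, a) for b2 in POINTS)

equilibria = [(a, b) for a, b in product(POINTS, repeat=2) if is_nash(a, b)]
opt = min(consumer_cost(locs) for locs in product(POINTS, repeat=2))
poa = max(consumer_cost(e) for e in equilibria) / opt  # Price of Anarchy
pos = min(consumer_cost(e) for e in equilibria) / opt  # Price of Stability
```

Here both retailers cluster at the median location, doubling the consumer cost relative to the planner's spread-out placement.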
A pure Hotelling game is a spatial competition between a finite number of players who simultaneously select a location in order to attract as many consumers as possible. In this paper, we study the case of a general distribution of consumers on a network generated by a metric graph. Because players do not compete on price, each consumer in the continuum shops at the closest player's location. If the number of sellers is large enough, we prove the existence of an approximate equilibrium in pure strategies, and we construct it.
We study the determination of public tuition fees through majority voting in a vertical differentiation model where agents' returns on educational investment differ and where public and private universities coexist and compete in tuition fees. The private university offers higher educational quality than its competitor and incurs a higher unit cost per trained student. The public university's tuition fee is set by majority voting, while the private university's follows from profit maximization. Agents then choose to study at the public university, to study at the private one, or to remain uneducated. A per-head tax adjusts to balance the state budget. Because a private alternative exists, preferences for education are not single-peaked and no single-crossing condition holds. We show that an equilibrium exists and is one of three types: high tuition fee (the “ends” form a majority), low tuition fee (the “middle” forms a majority), or mixed (votes tie). The cost structure determines which equilibrium obtains. The equilibrium tuition fee is greater than the optimal one when the majority is at the ends, and smaller when the majority is at the middle.
We consider tests of the hypothesis that the tail of a size distribution decays faster than any power function. The tests are based on a single parameter that emerges from the Fisher–Tippett limit theorem and discriminate between the leading laws considered in the literature without requiring a fully parametric specification. We study the proposed tests taking into account the higher-order regular variation of the size distribution, which can lead to catastrophic distortions. The theoretical bias corrections successfully realign nominal and empirical test behavior, and inform a sensitivity analysis for practical work. The methods are applied to the size distributions of cities and firms.
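The idea of discriminating tails through a single extreme-value parameter can be sketched with the moment (Dekkers–Einmahl–de Haan) estimator of the extreme-value index ξ, which is positive for power-law tails and zero for tails decaying faster than any power. This is an illustration of the underlying principle, not the authors' test statistic, and the samples below are deterministic quantile grids standing in for data.

```python
import math

def moment_estimator(sample, k):
    """Dekkers-Einmahl-de Haan moment estimator of the extreme-value index xi,
    computed from the k largest observations. xi > 0 indicates a power-law
    tail; xi = 0 indicates decay faster than any power."""
    xs = sorted(sample)
    n = len(xs)
    log_ref = math.log(xs[n - 1 - k])                     # log of X_(n-k)
    excesses = [math.log(xs[n - 1 - i]) - log_ref for i in range(k)]
    m1 = sum(excesses) / k                                 # first log-moment
    m2 = sum(e * e for e in excesses) / k                  # second log-moment
    return m1 + 1 - 0.5 / (1 - m1 * m1 / m2)

n, k = 1000, 100
# Deterministic quantile grids (illustrative only): Pareto has xi = 1,
# the exponential has xi = 0.
pareto = [1 / (1 - j / (n + 1)) for j in range(1, n + 1)]
expon = [-math.log(1 - j / (n + 1)) for j in range(1, n + 1)]
```

A test of faster-than-power decay then amounts to checking whether the estimated ξ is significantly above zero; the bias corrections discussed above address the finite-sample distortions of exactly this kind of estimator.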