T cell activation is initiated upon ligand engagement of the T cell receptor (TCR) and costimulatory receptors. The CD28 molecule acts as a major costimulatory receptor in promoting full activation of naive T cells. However, despite extensive studies, why naive T cell activation requires concurrent stimulation of both the TCR and costimulatory receptors remains poorly understood. Here, we explore this issue by analyzing the calcium response as a key early signaling event in eliciting T cell activation. Experiments using mouse naive CD4+ T cells showed that engagement of the TCR or CD28 with its cognate ligand triggered a rise in fluctuating calcium mobilization, as shown by increases in the frequency and average response magnitude of reacting cells relative to the basal levels observed in unstimulated cells. Engagement of both TCR and CD28 increased these two metrics further. However, such increases did not sufficiently explain the importance of the CD28 pathway for the functionally relevant calcium responses in T cell activation. Through autocorrelation analysis of calcium time series data, we found that combined, but not separate, TCR and CD28 stimulation significantly prolonged the average decay time (τ) of the calcium signal amplitudes, determined with the autocorrelation function, compared with its value in unstimulated cells. This increase in decay time (τ) uniquely characterizes the fluctuating calcium response triggered by concurrent stimulation of TCR and CD28: it could be achieved neither with stronger TCR stimuli nor by coengaging TCR and LFA-1, and it likely represents an important feature of competent early signaling for provoking efficient T cell activation. Our work thus provides new insights into the interplay between the TCR and CD28 early signaling pathways that is critical to trigger naive T cell activation.
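The abstract determines the decay time τ from the autocorrelation function of calcium time series. As a minimal sketch of that idea, the snippet below estimates a decay time from a synthetic fluctuating trace; it uses a simple 1/e-crossing rule rather than the authors' exact fitting procedure, and the AR(1) "calcium trace" and all parameter values are hypothetical stand-ins for real single-cell data.

```python
import numpy as np

def autocorr(x):
    """Normalized autocorrelation of a 1-D signal at non-negative lags."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    return acf / acf[0]

def decay_time(x, dt=1.0):
    """Crude decay-time estimate: first lag at which the ACF drops below 1/e."""
    acf = autocorr(x)
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return below[0] * dt if below.size else x.size * dt

# Synthetic fluctuating trace: an AR(1) process with persistence phi,
# whose theoretical decay time is -1/log(phi) ~ 9.5 samples.
rng = np.random.default_rng(0)
phi, n = 0.9, 5000
trace = np.zeros(n)
for t in range(1, n):
    trace[t] = phi * trace[t - 1] + rng.standard_normal()
tau = decay_time(trace)
```

A longer τ here corresponds to slower decorrelation of the signal, i.e., more persistent fluctuations, which is the property the abstract associates with combined TCR/CD28 stimulation.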
Stated preference surveys are usually carried out in one session, without any follow-up interview after respondents have had the opportunity to experience the public goods or policies they were asked to value. Consequently, a stated preference survey needs to be designed so as to provide respondents with all the relevant information, and to help them process this information so they can perform the valuation exercise properly. In this paper, we study experimentally an elicitation procedure in which respondents are provided with a sequence of different types of information (social cues and objective information) that allows them to sequentially revise their willingness-to-pay (WTP) values. Our experiment was carried out in large groups using an electronic voting system that allowed us to construct social cues in real time. To analyse the data, we developed an anchoring-type structural model that allows us to estimate the direct effect (at the current round) and the indirect effect (on subsequent rounds) of information. Our results shed new light on the interaction effect of social cues and objective information: social cues have little or no direct effect on WTP values, but they have a strong indirect effect on how respondents process scientific information. Social cues have the most noticeable effect on respondents who initially report a WTP below the group average, but only after they receive additional objective information about the valuation task. We suggest that the construction and provision of social cues should be added to the list of tools and controls for stated preference methods.
This article examines the link between entrepreneurial motivation and business performance in the French microfinance context. Using hand-collected data on business microcredits from a Microfinance Institution (MFI), we provide an indirect measure of entrepreneurial success through loan repayment performance. Controlling for the endogeneity of entrepreneurial motivation in a bivariate probit model, we find that “necessity entrepreneurs” are more likely to have difficulty repaying their microcredits than “opportunity entrepreneurs”. However, the type of motivation does not appear to make a difference to business survival. We test the robustness of our results using parametric duration models and show that necessity entrepreneurs experience difficulties in loan repayment earlier than their opportunity counterparts, corroborating our initial findings. Our results are also robust to a sharper analysis of motivation, focusing on unemployment (on the necessity side) and non-pecuniary benefits from success (on the opportunity side).
Coastal lagoon ecosystems, while providing benefits to local populations, have been subject to high anthropogenic pressures for decades. Hence, conservation measures for these ecosystems are urgently needed and should be combined with their sustainable use. To address these issues, new research avenues for decision support systems have emphasized the role of ecosystem service assessment in establishing conservation priorities while avoiding monetization approaches. Such approaches, because they flatten the plural values of nature onto a single monetary dimension, are often rejected by stakeholders. We undertake a Q analysis to identify levels of consensus and divergence among stakeholders on the prioritization of ecosystem services provided by two French Mediterranean coastal lagoon areas. The results highlight a strong consensus among categories of stakeholders at the study sites on the paramount importance of regulation and maintenance services. Three groups of stakeholders, each sharing similar views on ecosystem service conservation, were identified for each study site. As a non-monetary valuation, Q methodology is well suited to a pluralistic approach to decision support, capturing the values expressed by stakeholders without triggering the rejection reflex that monetization can provoke.
In 2002 we published a paper in which we used state space time series methods to analyse the teenage employment-federal minimum wage relationship in the US (Bazen and Marimoutou, 2002). The study used quarterly data for the 46-year period running from 1954 to 1999. We detected a small, negative, but statistically significant effect of the federal minimum wage on teenage employment, at a time when some studies were casting doubt on the existence of such an effect. In this note we re-estimate the original model with a further 16 years of data (up to 2015). We find that the model satisfactorily tracks the path of the teenage employment-population ratio over the full 1954-2015 period, and yields a consistently negative and statistically significant effect of minimum wages on teenage employment. The conclusion reached is the same as in the original paper, and the elasticity estimates are very similar: federal minimum wage hikes lead to a reduction in teenage employment with a short-run elasticity of around −0.13. The estimated long-run elasticity of between −0.37 and −0.47 is less stable, but is nevertheless negative and statistically significant.
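The relationship between the short-run and long-run elasticities quoted above can be made concrete with a standard dynamic-regression identity: in a specification with a lagged dependent variable, the long-run elasticity is the short-run coefficient divided by one minus the persistence parameter. The sketch below uses hypothetical persistence values (the note does not report them) to show how a short-run elasticity of −0.13 maps into the reported −0.37 to −0.47 long-run range.

```python
# Dynamic (ADL-style) specification, schematically:
#   ln E_t = alpha + phi * ln E_{t-1} + beta * ln MW_t + e_t
# beta is the short-run elasticity; beta / (1 - phi) is the long-run one.
def long_run_elasticity(beta, phi):
    # Cumulative effect once the lagged dependent variable has settled down.
    return beta / (1.0 - phi)

beta_sr = -0.13  # short-run elasticity reported in the note
# phi values below are hypothetical, chosen for illustration only.
lr = {phi: long_run_elasticity(beta_sr, phi) for phi in (0.65, 0.70, 0.72)}
# Persistence in roughly the 0.65-0.72 range maps -0.13 into
# long-run elasticities between about -0.37 and -0.47.
```

This back-of-the-envelope link is why the long-run estimate is more sensitive than the short-run one: small changes in the persistence parameter move the denominator appreciably.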
We consider a network game with local complementarities. A policymaker, aiming at minimizing or maximizing aggregate effort, contracts with a single agent on the network, trading a change in effort for a transfer. The policymaker has to identify the best agent and the optimal contract to offer. Our study shows that, for all utilities with linear best responses, only two statistics about each agent's position on the network are needed to identify the key player: the Bonacich centrality and the self-loop centrality. We also characterize key players under linear quadratic utilities for various contractual arrangements.
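The two statistics named in the abstract have standard matrix expressions: Bonacich centrality is the row sum of the resolvent (I − δG)⁻¹, and the self-loop centrality is its diagonal. The abstract does not give the key-player formula itself, so the sketch below only computes the two statistics on a hypothetical toy network (a star with node 0 at the center); the decay parameter δ = 0.2 is an illustrative choice below the inverse spectral radius.

```python
import numpy as np

def bonacich(G, delta):
    """Bonacich centrality b = (I - delta*G)^(-1) @ 1,
    valid when delta is below the inverse spectral radius of G."""
    n = G.shape[0]
    return np.linalg.solve(np.eye(n) - delta * G, np.ones(n))

def self_loop(G, delta):
    """Self-loop centrality: the diagonal of (I - delta*G)^(-1),
    i.e., the discounted weight of walks returning to each node."""
    n = G.shape[0]
    return np.diag(np.linalg.inv(np.eye(n) - delta * G))

# Toy network: a star with node 0 at the center (hypothetical example).
G = np.zeros((4, 4))
for i in (1, 2, 3):
    G[0, i] = G[i, 0] = 1.0
delta = 0.2
b = bonacich(G, delta)     # center has the highest Bonacich centrality
s = self_loop(G, delta)    # and the highest self-loop centrality
```

On this star both statistics rank the center first, but on general networks the two rankings can differ, which is why the characterization requires both.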
In this article, we present a misspecification test for conditional volatility (GARCH-type) models. We propose a Lagrange Multiplier-type test based on a Taylor expansion to distinguish (G)ARCH models from unknown GARCH-type alternatives. This new test can be seen as a general misspecification test for a large set of univariate GARCH-type models. It focuses on the short-term component of the volatility. We investigate the size and power of this test through Monte Carlo experiments, and we compare it with two other standard, more restrictive Lagrange Multiplier tests. We show the usefulness of our test with an illustrative empirical example based on daily exchange rate returns.
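The abstract does not spell out the Taylor-expansion construction of the new test, but the standard Lagrange Multiplier benchmark it is compared against is well known: Engle's ARCH-LM test, in which squared residuals are regressed on their own lags and T·R² is asymptotically χ²(q) under the null of no ARCH. The sketch below implements that benchmark test (not the authors' new test) on simulated data; the ARCH(1) parameter values are hypothetical.

```python
import numpy as np

def arch_lm(resid, q=5):
    """Engle's ARCH-LM statistic: regress e_t^2 on q of its own lags;
    under the null of no ARCH effects, T*R^2 ~ chi2(q) asymptotically."""
    e2 = np.asarray(resid, dtype=float) ** 2
    n = e2.size
    Y = e2[q:]
    X = np.column_stack([np.ones(n - q)] +
                        [e2[q - j: n - j] for j in range(1, q + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    u = Y - X @ beta
    r2 = 1.0 - (u @ u) / ((Y - Y.mean()) @ (Y - Y.mean()))
    return (n - q) * r2

rng = np.random.default_rng(42)
z = rng.standard_normal(1000)

# ARCH(1) residuals: e_t = sigma_t * z_t with sigma_t^2 = 0.2 + 0.5*e_{t-1}^2
e = np.zeros(1000)
for t in range(1, 1000):
    e[t] = np.sqrt(0.2 + 0.5 * e[t - 1] ** 2) * z[t]

stat_iid = arch_lm(z)   # small under the null of no ARCH
stat_arch = arch_lm(e)  # large: should exceed the chi2(5) critical value
```

Comparing `stat_iid` and `stat_arch` against the χ²(5) 5% critical value (about 11.07) illustrates the size/power logic that the paper's Monte Carlo experiments evaluate for the more general test.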
Holocene climate variability in the Mediterranean Basin is often cited as a potential driver of societal change, but the mechanisms of this putative influence are generally little explored. In this paper we integrate two tools, agro-ecosystem modeling of potential agricultural yields and spatial analysis of archaeological settlement pattern data, in order to examine the human consequences of past climatic changes. Focusing on a case study in Provence (France), we adapt an agro-ecosystem model to the modeling of potential agricultural productivity during the Holocene. Calibrating this model for past crops and agricultural practices, and using a downscaling approach to produce high spatiotemporal resolution paleoclimate data from a Mediterranean Holocene climate reconstruction, we estimate realistic potential agricultural yields under past climatic conditions. These serve as the basis for spatial analysis of archaeological settlement patterns, in which we examine the changing relationship over time between agricultural productivity and settlement location. Using potential agricultural productivity (PAgP) as a measure of the human consequences of climate changes, we focus on the relative magnitudes of 1) climate-driven shifts in PAgP and 2) the potential increases in productivity realizable through agricultural intensification. Together these offer a means of assessing the scale and mechanisms of the vulnerability and resilience of Holocene inhabitants of Provence to climate change. Our results suggest that settlement patterns were closely tied to PAgP throughout most of the Holocene, with the notable exception of the period from the Middle Bronze Age through the Early Iron Age. This pattern does not appear to be linked to any climatically driven changes in PAgP; conversely, the most salient changes in PAgP during the Holocene cannot be clearly linked to any changes in settlement pattern. We argue that this constitutes evidence that vulnerability and resilience to climate change are strongly dependent on societal variables.
We study count processes in insurance in which the underlying risk factor is time varying and unobservable. The factor follows an autoregressive gamma process, and the resulting model generalizes the static Poisson-Gamma model while admitting a closed-form expression for the posterior (linear or nonlinear) Bayes premium. Moreover, estimation and forecasting can be conducted within the same framework in a rather efficient way. An example of automobile insurance pricing illustrates the ability of the model to capture the duration-dependent, nonlinear impact of past claims on future ones, and the improvement of the Bayes pricing method over the linear credibility approach.
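The autoregressive gamma (ARG) process mentioned here is the discrete-time analogue of a square-root diffusion: conditional on the current factor value, a Poisson mixing variable is drawn and the next value is gamma-distributed. As a minimal sketch of the model's data-generating process, the snippet below simulates an ARG factor and Poisson claim counts conditional on it; all parameter values (and the function name) are hypothetical illustrations, not the paper's calibration.

```python
import numpy as np

def simulate_arg_counts(T, delta=1.0, beta=0.5, c=0.8, lam=1.0, seed=0):
    """Simulate an ARG(1) risk factor X_t and claim counts N_t | X_t.

    ARG(1) transition: Z_t ~ Poisson(beta * X_{t-1}),
                       X_t ~ Gamma(shape=delta + Z_t, scale=c).
    Counts:            N_t ~ Poisson(lam * X_t).
    Persistence is beta * c; stationary mean is c*delta / (1 - beta*c).
    """
    rng = np.random.default_rng(seed)
    X = np.empty(T)
    X[0] = rng.gamma(delta, c)
    for t in range(1, T):
        z = rng.poisson(beta * X[t - 1])
        X[t] = rng.gamma(delta + z, c)
    N = rng.poisson(lam * X)
    return X, N

X, N = simulate_arg_counts(2000)
```

Because the factor is persistent (here βc = 0.4), a run of past claims shifts the posterior distribution of X_t and hence the premium, which is the duration-dependent effect the automobile insurance example illustrates.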
Why, in some urban communities, do rich and poor households cohabit while, in others, we observe sorting by income? Does income inequality affect residential choices and community segregation? To answer these questions, I develop a two‐community general equilibrium framework of school quality, residential choice, and tax decisions with probabilistic voting. The model predicts that, in highly unequal societies, an income-mixing equilibrium might emerge in which households segregate across public schools while low‐ and high‐income households choose to live in the same community. In this particular equilibrium, income‐mixing communities deliver lower public school quality than communities populated by middle‐income households. The effect of inequality on the quality of public schooling in the income‐mixing community is ambiguous and depends on the relative endowments of private goods, such as housing, in the two communities.