Publications
This paper introduces the class of quasi score-driven (QSD) models. This new class inherits and extends the basic ideas behind the development of score-driven (SD) models and addresses a number of unsolved issues in the score literature. In particular, the new class of models (i) generalizes many existing models, including SD models, (ii) disconnects the updating equation from the log-likelihood implied by the conditional density of the observations, (iii) allows testing of the assumptions behind SD models that link the updating equation of the conditional moment to the conditional density, (iv) allows QML estimation of SD models, and (v) allows explanatory variables to enter the updating equation. We establish the asymptotic properties of the QLE, QMLE, and MLE of the proposed QSD model, as well as of the likelihood ratio and Lagrange multiplier test statistics. The finite-sample properties are studied by means of an extensive Monte Carlo study. Finally, we show the empirical relevance of QSD models for estimating the conditional variance of 400 US stocks.
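To fix ideas, the sketch below (in Python, with illustrative function and parameter names that are ours, not the paper's) shows the score-driven updating equation in the Gaussian case, where the inverse-Fisher-scaled score reduces to y_t^2 - f_t and the filter becomes a GARCH(1,1)-type recursion; in a QSD model, the density driving this update need not coincide with the one used to build the likelihood.

```python
import numpy as np

def gaussian_score_driven_variance(y, omega=0.05, alpha=0.10, beta=0.85, f0=1.0):
    """Score-driven (GAS) filter for the conditional variance f_t under a
    Gaussian conditional density. With inverse-Fisher scaling, the scaled
    score is s_t = y_t**2 - f_t, giving the GARCH(1,1)-like update
        f_{t+1} = omega + beta * f_t + alpha * s_t.
    In a QSD model, the density driving this update is decoupled from the
    density used to construct the likelihood."""
    f = np.empty(len(y) + 1)
    f[0] = f0
    for t in range(len(y)):
        s = y[t] ** 2 - f[t]               # scaled score of the Gaussian density
        f[t + 1] = omega + beta * f[t] + alpha * s
    return f

# Example: filter the conditional variance of simulated Gaussian noise
rng = np.random.default_rng(0)
f_path = gaussian_score_driven_variance(rng.standard_normal(500))
```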
Deviations of asset prices from random walk dynamics imply the predictability of asset returns and thus have important implications for portfolio construction and risk management. This paper proposes a real-time monitoring device for such deviations using intraday high-frequency data. The proposed procedures are based on unit root tests with in-fill asymptotics, extended to take the empirical features of high-frequency financial data (particularly jumps) into consideration. We derive the limiting distributions of the tests under both the null hypothesis of a random walk with jumps and the alternative of mean reversion/explosiveness with jumps. The limiting results show that ignoring the presence of jumps can lead to severe size distortions of both the standard left-sided (against mean reversion) and right-sided (against explosiveness) unit root tests. The simulation results reveal satisfactory performance of the proposed tests even with data from a relatively short time span. As an illustration, we apply the procedure to the Nasdaq composite index at the 10-minute frequency over two periods: around the peak of the dot-com bubble and during the 2015–2016 stock market sell-off. We find strong evidence of explosiveness in asset prices in late 1999 and of mean reversion in late 2015. We also show that accounting for jumps when testing the random walk hypothesis on intraday data is empirically relevant and that ignoring jumps can lead to different conclusions.
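As a stylized illustration of the idea (not the authors' exact statistic), the sketch below computes a Dickey-Fuller-type t-ratio on intraday log prices after discarding increments flagged as jumps by a bipower-variation-based threshold; the truncation constant k is a placeholder, and the critical values must come from the in-fill asymptotics derived in the paper.

```python
import numpy as np

def truncated_df_tstat(p, k=4.0):
    """Dickey-Fuller-type t-ratio on intraday log prices p, after dropping
    increments flagged as jumps by a bipower-variation threshold.
    Illustrative only: k and the reference critical values are placeholders."""
    r = np.diff(p)
    n = len(r)
    bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
    thr = k * np.sqrt(bv / n)              # local volatility scale per increment
    keep = np.abs(r) <= thr                # keep the 'continuous' increments
    y = r[keep] - r[keep].mean()           # demeaned regression (intercept)
    x = p[:-1][keep] - p[:-1][keep].mean()
    rho = (x @ y) / (x @ x)
    e = y - rho * x
    se = np.sqrt((e @ e) / (len(y) - 2) / (x @ x))
    return rho / se                        # compare to in-fill critical values
```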
Beta coefficients are the cornerstone of asset pricing theory in the CAPM and multifactor models. This chapter reviews different time series models used to estimate static and time-varying betas and compares them on real data. The analysis is performed on the US and developed European REIT markets over the period 2009–2019 via a two-factor model. We evaluate the performance of the different techniques in terms of in-sample estimates as well as through an out-of-sample tracking exercise. Results show that dynamic models clearly outperform static models and that both the state space and autoregressive conditional beta models outperform the other methods.
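As a simple baseline among the time-varying techniques compared, a rolling-window OLS estimate of the betas in a two-factor model can be sketched as follows (the window length and all names are illustrative):

```python
import numpy as np

def rolling_betas(r, F, window=250):
    """Rolling-window OLS betas for the two-factor model
    r_t = alpha + beta' F_t + eps_t, re-estimated at each date.
    r: (T,) asset returns; F: (T, 2) factor returns.
    The window of 250 observations is illustrative (about one year of daily data)."""
    T, k = F.shape
    X = np.column_stack([np.ones(T), F])   # add an intercept column
    betas = np.full((T, k), np.nan)
    for t in range(window, T):
        coef, *_ = np.linalg.lstsq(X[t - window:t], r[t - window:t], rcond=None)
        betas[t] = coef[1:]                # drop the intercept, keep the betas
    return betas
```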
The logarithmic prices of financial assets are conventionally assumed to follow a drift–diffusion process. While the drift term is typically ignored in the infill asymptotic theory and applications, the presence of temporary nonzero drifts is an undeniable fact. The finite sample theory for integrated variance estimators and extensive simulations provided in this paper reveal that the drift component has a nonnegligible impact on the estimation accuracy of volatility, which leads to a dramatic power loss for a class of jump identification procedures. We propose an alternative construction of volatility estimators and observe significant improvement in the estimation accuracy in the presence of nonnegligible drift. The analytical formulas of the finite sample bias of the realized variance, bipower variation, and their modified versions take simple and intuitive forms. The new jump tests, which are constructed from the modified volatility estimators, show satisfactory performance. As an illustration, we apply the new volatility estimators and jump tests, along with their original versions, to 21 years of 5-minute log returns of the NASDAQ stock price index.
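For concreteness, the sketch below computes the realized variance and bipower variation, together with demeaned versions in which the intraday returns are first recentred as a simple form of drift adjustment; the paper's modified estimators follow this spirit, but the exact correction may differ.

```python
import numpy as np

def realized_measures(r):
    """Realized variance (RV), bipower variation (BV), and drift-adjusted
    versions computed from a day's intraday log returns r.
    Demeaning the returns is one simple way to remove a local drift;
    the paper's modified estimators may implement the correction differently."""
    rv = np.sum(r ** 2)
    bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
    rc = r - r.mean()                      # recentre to offset the drift
    rv_mod = np.sum(rc ** 2)
    bv_mod = (np.pi / 2) * np.sum(np.abs(rc[1:]) * np.abs(rc[:-1]))
    return rv, bv, rv_mod, bv_mod
```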
This paper shows that a large-dimensional vector autoregressive (VAR) model of finite order can generate fractional integration in the marginalized univariate series. We derive high-level assumptions under which the final equation representation of a VAR(1) leads to univariate fractional white noises and verify the validity of these assumptions for two specific models.
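For reference, a LaTeX statement of the final equation representation invoked above, writing the VAR(1) as (I_n - A L) y_t = ε_t (the symbol A for the autoregressive matrix is our notation):

```latex
% Final equation representation of the VAR(1)  (I_n - A L)\, y_t = \varepsilon_t :
\det(I_n - A L)\, y_{i,t}
  \;=\; \big[\operatorname{adj}(I_n - A L)\, \varepsilon_t\big]_i,
  \qquad i = 1,\dots,n,
% so each marginal series follows an ARMA(n, n-1) whose autoregressive roots
% are the reciprocal eigenvalues of A; the paper's high-level assumptions
% govern how these roots accumulate near unity as n grows, producing
% fractional behavior in the univariate series.
```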
This paper proposes a new model with time-varying slope coefficients. Our model, called CHAR, is a Cholesky-GARCH model based on the Cholesky decomposition of the conditional variance matrix introduced by Pourahmadi (1999) in the context of longitudinal data. We derive stationarity and invertibility conditions and prove consistency and asymptotic normality of the full and equation-by-equation QML estimators of this model. We then show that this class of models is useful for estimating conditional betas and compare it to the approach proposed by Engle (2016). Finally, we use real data in a portfolio and risk management exercise. We find that the CHAR model outperforms a model with constant betas as well as the dynamic conditional beta model of Engle (2016).
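The decomposition at the heart of the model can be illustrated as follows: factor the conditional variance matrix as H_t = L_t G_t L_t' with L_t unit lower triangular and G_t diagonal, so the below-diagonal entries of L_t are the conditional betas. A minimal static sketch in Python (the CHAR model makes these quantities dynamic via GARCH-type recursions; the function name is ours):

```python
import numpy as np

def ldl_decomposition(H):
    """Factor a covariance matrix H as H = L @ G @ L.T, with L unit lower
    triangular and G diagonal. The below-diagonal entries of L are the
    (conditional) betas of each asset on the preceding ones."""
    C = np.linalg.cholesky(H)    # H = C C', C lower triangular
    d = np.diag(C)
    L = C / d                    # divide column j by d[j] -> unit diagonal
    G = np.diag(d ** 2)
    return L, G

# Bivariate example: L[1, 0] equals cov / var = 0.3 / 1.0
L, G = ldl_decomposition(np.array([[1.0, 0.3], [0.3, 0.5]]))
```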
An estimator of the ex-post covariation of log-prices under asynchronicity and microstructure noise is proposed. It uses the Cholesky factorization of the covariance matrix to exploit the heterogeneity in trading intensities and estimate the different parameters sequentially, each with as many observations as possible. The estimator is positive semidefinite by construction. We derive asymptotic results and confirm the estimator's good finite-sample properties by means of a Monte Carlo simulation. In an empirical application, we forecast portfolio Value-at-Risk and sector risk exposures for a portfolio of 52 stocks. We find that dynamic models utilizing the proposed high-frequency estimator provide statistically and economically superior forecasts.
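A hedged, synchronous-data sketch of the sequential Cholesky idea: order the assets (say, from most to least liquid), regress each asset's returns on the residuals of the preceding ones, and reassemble H = L G L', which is positive semidefinite by construction. The paper's estimator differs in that each step additionally handles asynchronicity and noise while using as many observations as available.

```python
import numpy as np

def sequential_cholesky_cov(R):
    """R: (T, n) array of returns, columns ordered from most to least liquid.
    Build H = L @ G @ L.T by regressing asset j on the residuals of
    assets 0..j-1; positive semidefinite by construction.
    Synchronous-data simplification of the sequential estimation idea."""
    T, n = R.shape
    L = np.eye(n)
    g = np.zeros(n)
    resid = np.zeros((T, n))
    for j in range(n):
        e = R[:, j].copy()
        if j > 0:
            X = resid[:, :j]                       # residuals of earlier assets
            b, *_ = np.linalg.lstsq(X, R[:, j], rcond=None)
            L[j, :j] = b
            e = R[:, j] - X @ b
        resid[:, j] = e
        g[j] = e @ e / T                           # residual variance
    return L @ np.diag(g) @ L.T
```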
The properties of dynamic conditional correlation (DCC) models, introduced more than a decade ago, are still not entirely known. This paper fills one of the gaps by deriving weak diffusion limits of a modified version of the classical DCC model. The limiting system of stochastic differential equations is characterized by a diffusion matrix of reduced rank. The degeneracy is due to perfect collinearity between the innovations of the volatility and correlation dynamics. For the special case of constant conditional correlations, a nondegenerate diffusion limit can be obtained. Alternative sets of conditions are considered for the rate of convergence of the parameters, yielding time-varying but deterministic variances and/or correlations. A Monte Carlo experiment confirms that the often-used quasi-approximate maximum likelihood (QAML) method for estimating the diffusion parameters is inconsistent for any fixed frequency, but that it may provide reasonable approximations for sufficiently large frequencies and sample sizes.
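For reference, the classical DCC recursion of Engle (2002), of which the paper studies a modified version, reads as follows, with u_t the standardized returns and \bar{Q} the correlation target:

```latex
Q_t \;=\; (1 - a - b)\,\bar{Q} \;+\; a\, u_{t-1} u_{t-1}^{\top} \;+\; b\, Q_{t-1},
\qquad
R_t \;=\; \operatorname{diag}(Q_t)^{-1/2}\, Q_t\, \operatorname{diag}(Q_t)^{-1/2}.
```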
We propose a bootstrap-based test of the null hypothesis of equality of two firms' conditional risk measures (RMs) at a single point in time. The test can be applied to a wide class of conditional risk measures derived from parametric or semiparametric models. Our iterative testing procedure produces a grouped ranking of the RMs, which has direct application for systemic risk analysis. Firms within a group are statistically indistinguishable from one another but significantly riskier than firms belonging to lower-ranked groups. A Monte Carlo simulation demonstrates that our test has good size and power properties. We apply the procedure to a sample of 94 U.S. financial institutions using ΔCoVaR, MES, and %SRISK. We find that, for some periods and RMs, we cannot statistically distinguish the 40 riskiest firms due to estimation uncertainty.
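A stylized sketch of one pairwise bootstrap comparison (not the authors' exact procedure, which additionally builds the iterative grouped ranking across all pairs): given point estimates and bootstrap replicates of two firms' risk measures at one date, a studentized two-sided test can be formed as below. All names are illustrative.

```python
import numpy as np

def bootstrap_equality_test(rm_i, rm_j, boot_i, boot_j):
    """Stylized two-sided bootstrap test of H0: RM_i = RM_j at one date.
    rm_i, rm_j: point estimates of the two firms' risk measures;
    boot_i, boot_j: (B,) arrays of bootstrap replicates.
    Illustrative only: the paper's procedure also controls for the
    multiplicity of pairwise comparisons when forming groups."""
    d_hat = rm_i - rm_j
    d_boot = boot_i - boot_j
    se = d_boot.std(ddof=1)                  # bootstrap standard error
    t_stat = d_hat / se
    t_boot = (d_boot - d_boot.mean()) / se   # recentred bootstrap t-stats
    pval = np.mean(np.abs(t_boot) >= np.abs(t_stat))
    return t_stat, pval
```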