Sébastien Laurent

Publications

Risk Measure Inference. Journal article. Christophe Hurlin, Sébastien Laurent, Rogier Quaedvlieg and Stephan Smeekes, Journal of Business & Economic Statistics, Volume 35, Issue 4, pp. 499-512, 2017

We propose a bootstrap-based test of the null hypothesis of equality of two firms’ conditional risk measures (RMs) at a single point in time. The test can be applied to a wide class of conditional risk measures derived from parametric or semiparametric models. Our iterative testing procedure produces a grouped ranking of the RMs, which has direct application for systemic risk analysis. Firms within a group are statistically indistinguishable from each other, but significantly more risky than the firms belonging to lower-ranked groups. A Monte Carlo simulation demonstrates that our test has good size and power properties. We apply the procedure to a sample of 94 U.S. financial institutions using ΔCoVaR, MES, and %SRISK. We find that for some periods and RMs, we cannot statistically distinguish the 40 most risky firms due to estimation uncertainty.
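The grouped ranking rests on pairwise bootstrap comparisons of estimated risk measures. The sketch below illustrates the flavor of such a comparison with a simple historical-simulation VaR and an i.i.d. bootstrap; the paper's actual procedure handles conditional RMs and estimation uncertainty far more carefully, so the `var_estimate` helper, the simulated data and the centering step are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated daily returns for two hypothetical firms (firm 2 is more volatile).
r1 = rng.normal(0.0, 1.0, 1000)
r2 = rng.normal(0.0, 1.2, 1000)

def var_estimate(returns, alpha=0.05):
    """Historical-simulation Value-at-Risk: a simple stand-in for the
    parametric/semiparametric conditional RMs considered in the paper."""
    return -np.quantile(returns, alpha)

def bootstrap_equality_test(r1, r2, n_boot=999, alpha=0.05):
    """Bootstrap test of H0: VaR(firm 1) = VaR(firm 2)."""
    diff = var_estimate(r1, alpha) - var_estimate(r2, alpha)
    boot_diffs = np.empty(n_boot)
    for b in range(n_boot):
        b1 = rng.choice(r1, size=r1.size, replace=True)
        b2 = rng.choice(r2, size=r2.size, replace=True)
        boot_diffs[b] = var_estimate(b1, alpha) - var_estimate(b2, alpha)
    # Center the bootstrap distribution to impose the null, then compute
    # a two-sided p-value for the observed difference.
    centered = boot_diffs - diff
    pval = np.mean(np.abs(centered) >= np.abs(diff))
    return diff, pval

diff, pval = bootstrap_equality_test(r1, r2)
print(f"VaR difference: {diff:.3f}, bootstrap p-value: {pval:.3f}")
```

Iterating such pairwise tests over a sorted list of firms, and cutting a new group whenever equality is rejected, yields the grouped ranking described in the abstract.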

Weak Diffusion Limits of Dynamic Conditional Correlation Models. Journal article. Christian M. Hafner, Sébastien Laurent and Francesco Violante, Econometric Theory, Volume 33, Issue 3, pp. 691-716, 2017

The properties of dynamic conditional correlation (DCC) models, introduced more than a decade ago, are still not entirely known. This paper fills one of the gaps by deriving weak diffusion limits of a modified version of the classical DCC model. The limiting system of stochastic differential equations is characterized by a diffusion matrix of reduced rank. The degeneracy is due to perfect collinearity between the innovations of the volatility and correlation dynamics. For the special case of constant conditional correlations, a nondegenerate diffusion limit can be obtained. Alternative sets of conditions are considered for the rate of convergence of the parameters, obtaining time-varying but deterministic variances and/or correlations. A Monte Carlo experiment confirms that the often used quasi-approximate maximum likelihood (QAML) method to estimate the diffusion parameters is inconsistent for any fixed frequency, but that it may provide reasonable approximations for sufficiently large frequencies and sample sizes.

Positive semidefinite integrated covariance estimation, factorizations and asynchronicity. Journal article. Kris Boudt, Sébastien Laurent, Asger Lunde, Rogier Quaedvlieg and Orimar Sauri, Journal of Econometrics, Volume 196, Issue 2, pp. 347-367, 2017

An estimator of the ex-post covariation of log-prices under asynchronicity and microstructure noise is proposed. It uses the Cholesky factorization of the covariance matrix in order to exploit the heterogeneity in trading intensities to estimate the different parameters sequentially with as many observations as possible. The estimator is positive semidefinite by construction. We derive asymptotic results and confirm their good finite sample properties by means of a Monte Carlo simulation. In the application we forecast portfolio Value-at-Risk and sector risk exposures for a portfolio of 52 stocks. We find that the dynamic models utilizing the proposed high-frequency estimator provide statistically and economically superior forecasts.
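The positive semidefiniteness follows from estimating the Cholesky factor rather than the covariance matrix itself. A minimal sketch of that idea is below: each asset is sequentially orthogonalized on the factors of the previous assets, each loading being estimated with an asset-specific number of observations (mimicking heterogeneous trading intensities), and the reconstruction H H' is PSD whatever the sample sizes. The simulated data, sample sizes and plain OLS loadings are illustrative assumptions; the paper additionally handles asynchronous observation times and microstructure noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated log-returns for 3 assets; the asset-specific sample sizes in
# n_obs mimic heterogeneous trading intensities (liquid assets trade more).
true_cov = np.array([[1.0, 0.5, 0.3],
                     [0.5, 1.0, 0.4],
                     [0.3, 0.4, 1.0]])
r = rng.multivariate_normal(np.zeros(3), true_cov, size=4000)
n_obs = [4000, 2000, 1000]           # usable observations per asset

# Sequential Cholesky-style estimation: orthogonalize asset j on the
# factors of assets 1..j-1, estimating each loading with as many
# observations as that asset allows.
k = r.shape[1]
H = np.zeros((k, k))                 # lower-triangular factor
factors = np.zeros_like(r)
for j in range(k):
    n = n_obs[j]
    y = r[:n, j].copy()
    for i in range(j):
        f = factors[:n, i]
        H[j, i] = (f @ y) / (f @ f)  # OLS loading on factor i
        y -= H[j, i] * f             # orthogonalize
    H[j, j] = y.std(ddof=0)          # scale of the new orthogonal factor
    factors[:, j] = (r[:, j] - factors[:, :j] @ H[j, :j]) / H[j, j]

sigma_hat = H @ H.T                  # positive semidefinite by construction
print(np.round(sigma_hat, 2))
```

Because sigma_hat is a Gram matrix of the estimated factor H, its eigenvalues are non-negative by construction, even though different entries of H were estimated from different numbers of observations.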

Introduction to the special issue on recent developments in Financial Econometrics. Journal article. Serge Darolles, Christian Gourieroux and Sébastien Laurent, Annals of Economics and Statistics, Issue 123-124, pp. 7-8, 2016


Do We Need High Frequency Data to Forecast Variances? Journal article. Denisa Banulescu-Radu, Christophe Hurlin, Bertrand Candelon and Sébastien Laurent, Annals of Economics and Statistics, Issue 123-124, pp. 135-174, 2016

In this paper we study various MIDAS models for which the future daily variance is directly related to past observations of intraday predictors. Our goal is to determine whether there exists an optimal sampling frequency in terms of variance prediction. Via Monte Carlo simulations we show that in a world without microstructure noise, the best model is the one using the highest available frequency for the predictors. However, in the presence of microstructure noise, the use of very high-frequency predictors may be problematic, leading to poor variance forecasts. The empirical application focuses on two highly liquid assets (i.e., Microsoft and the S&P 500). We show that, when using raw intraday squared log-returns for the explanatory variable, there is a “high-frequency wall” – or frequency limit – above which MIDAS-RV forecasts deteriorate or stop improving. An improvement can be obtained when using intraday squared log-returns sampled at a higher frequency, provided they are pre-filtered to account for the presence of jumps, the intraday diurnal pattern and/or microstructure noise. Finally, we compare the MIDAS model to other competing variance models including GARCH, GAS, HAR-RV and HAR-RV-J models. We find that the MIDAS model – when it is applied to filtered data – provides equivalent or even better variance forecasts than these models. JEL: C22, C53, G12 / KEY WORDS: Variance Forecasting, MIDAS, High-Frequency Data.
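The core MIDAS-RV mechanics – projecting future daily variance on a parsimoniously weighted sum of past realized variance measures – can be sketched as follows. The beta-polynomial parameters, the 5-minute sampling frequency and the 50-lag window below are illustrative choices, not the paper's specification, and the intercept/slope of the full MIDAS regression are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def beta_weights(n_lags, theta1=1.0, theta2=5.0):
    """Normalized beta lag polynomial commonly used in MIDAS regressions:
    a smooth, two-parameter weighting scheme that decays toward zero."""
    i = np.arange(1, n_lags + 1) / n_lags
    w = i ** (theta1 - 1) * (1.0 - i) ** (theta2 - 1)
    return w / w.sum()

# Simulate 300 days of 5-minute returns (78 intervals per US trading day)
# and aggregate them into daily realized variances.
n_days, m = 300, 78
intraday = rng.normal(0.0, 0.01, size=(n_days, m))
rv = (intraday ** 2).sum(axis=1)

# One-step-ahead MIDAS-style forecast: a weighted sum of the last 50 daily
# RVs, with the most recent observation receiving the largest weight.
n_lags = 50
w = beta_weights(n_lags)
forecasts = np.array([w @ rv[t - n_lags:t][::-1]
                      for t in range(n_lags, n_days)])
print(f"average variance forecast: {forecasts.mean():.5f}")
```

Changing the frequency of the predictor (1-minute vs. 5-minute squared returns, say) changes only how `rv` is built, which is exactly the dimension along which the paper searches for the "high-frequency wall".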


On the Univariate Representation of BEKK Models with Common Factors. Journal article. Alain Hecq, Franz C. Palm and Sébastien Laurent, Journal of Time Series Econometrics, Volume 8, Issue 2, pp. 91-113, 2016

Simple low-order multivariate GARCH models imply marginal processes with a lot of persistence in the form of high-order lags. However, this is not what we find in many situations, where parsimonious univariate GARCH(1,1) models, for instance, describe the conditional volatility of some asset returns quite well. To explain this paradox, we show that in the presence of common GARCH factors, parsimonious univariate representations can result from large multivariate models generating the conditional variances and conditional covariances/correlations. The diagonal model without any contagion effects in conditional volatilities, however, gives rise to similar conclusions. Consequently, after extracting a block of assets exhibiting this form of parsimony, there remains the task of determining whether we have a set of independent assets or instead a highly dependent system generated by a few factors. To investigate this issue, we first evaluate a reduced rank regression approach for squared returns, which we extend to cross-returns. Second, we investigate a likelihood ratio approach, in which the matrix parameters have a reduced rank structure under the null. The latter approach turns out to have quite good properties, enabling us to discriminate between a system with seemingly unrelated assets (e.g., a diagonal model) and a model with a few common sources of volatility.

Testing for jumps in conditionally Gaussian ARMA-GARCH models, a robust approach. Journal article. Sébastien Laurent, Christelle Lecourt and Franz C. Palm, Computational Statistics & Data Analysis, Volume 100, Issue C, pp. 383-400, 2016

Financial asset prices occasionally exhibit large changes. To deal with their occurrence, observed return series are assumed to consist of a conditionally Gaussian ARMA-GARCH type model contaminated by an additive jump component. In this framework, a new test for additive jumps is proposed. The test is based on standardized returns, where the first two conditional moments of the non-contaminated observations are estimated in a robust way. Simulation results indicate that the test has very good finite sample properties, i.e., correct size and a high proportion of correct jump detection. The test is applied to daily returns and detects less than 1% of jumps for three exchange rates and between 1% and 3% of jumps for about 50 large capitalization stock returns from the NYSE. Once jumps have been filtered out, all series are found to be conditionally Gaussian. It is also found that simple GARCH-type models estimated using filtered returns deliver more accurate out-of-sample forecasts of the conditional variance than GARCH and Generalized Autoregressive Score (GAS) models estimated from raw data.
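A stripped-down version of the detection logic – standardize returns with outlier-robust location and scale estimates, then flag observations exceeding a sample-size-dependent critical value – can be illustrated as follows. The median/MAD standardization is only a stand-in for the paper's robust ARMA-GARCH filter, and sqrt(2 log n) is only a rough extreme-value approximation to a proper critical value.

```python
import numpy as np

rng = np.random.default_rng(7)

# 1,000 standard Gaussian "returns" contaminated by 5 large additive jumps.
n = 1000
returns = rng.normal(0.0, 1.0, n)
jump_days = rng.choice(n, size=5, replace=False)
returns[jump_days] += rng.choice([-8.0, 8.0], size=5)

# Robust standardization: the median and the MAD are barely distorted by
# the jumps, unlike the sample mean and standard deviation.
med = np.median(returns)
mad = 1.4826 * np.median(np.abs(returns - med))  # consistent for a normal
z = (returns - med) / mad

# The threshold grows with the sample size so that, asymptotically, no
# jump-free Gaussian observation exceeds it (max of n |N(0,1)| draws).
threshold = np.sqrt(2.0 * np.log(n))
detected = np.flatnonzero(np.abs(z) > threshold)
print(f"flagged {len(detected)} observations as jumps")
```

Replacing the flagged observations (e.g., by the fitted conditional mean) yields the filtered series on which the abstract's GARCH-type models are re-estimated.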

Which continuous-time model is most appropriate for exchange rates? Journal article. Deniz Erdemlioglu, Sébastien Laurent and Christopher J. Neely, Journal of Banking & Finance, Volume 61, Issue S2, pp. S256-S268, 2015

This paper evaluates the most appropriate ways to model the diffusion and jump features of high-frequency exchange rates in the presence of intraday periodicity in volatility. We show that periodic volatility distorts the size and power of conventional tests of Brownian motion, jumps and (in)finite activity. We propose a correction for periodicity that restores the properties of the test statistics. Empirically, the most plausible model for 1-min exchange rate data features Brownian motion and both finite activity and infinite activity jumps. Test rejection rates vary over time, however, indicating time variation in the data generating process. We discuss the implications of these results for market microstructure and currency option pricing.
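The periodicity correction amounts to rescaling each intraday return by an estimate of its time-of-day volatility. Below is a minimal sketch with a simulated U-shaped diurnal pattern; the raw cross-day averaging used here is a simple, non-robust stand-in for the jump-resistant periodicity estimators used in this literature.

```python
import numpy as np

rng = np.random.default_rng(3)

# 100 days of 1-minute returns (390 per day) with a U-shaped diurnal
# volatility pattern: high at the open and close, low at midday.
n_days, m = 100, 390
tod = np.linspace(0.0, 1.0, m)
diurnal = 1.0 + 6.0 * (tod - 0.5) ** 2
r = rng.normal(0.0, 1.0, size=(n_days, m)) * diurnal

# Periodicity estimate: root mean squared return at each time of day,
# averaged across days.
s_hat = np.sqrt((r ** 2).mean(axis=0))
r_filtered = r / s_hat

# After filtering, the diurnal pattern is essentially gone, so tests that
# assume constant intraday volatility regain their nominal properties.
spread_raw = r.std(axis=0).max() / r.std(axis=0).min()
spread_filtered = r_filtered.std(axis=0).max() / r_filtered.std(axis=0).min()
print(f"max/min intraday vol: raw {spread_raw:.2f}, "
      f"filtered {spread_filtered:.2f}")
```

The point of the paper's robust version is that large jumps would contaminate the naive `s_hat` above, so the periodicity itself must be estimated in a jump-robust way before jump tests are applied.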

Estimating and forecasting ARCH models using G@RCH 6. Book. Sébastien Laurent, Timberlake Consultants, 2014
Econometric Modeling of Exchange Rate Volatility and Jumps. Book chapter. Deniz Erdemlioglu, Sébastien Laurent and Christopher J. Neely, In: Handbook of Research Methods and Applications in Empirical Finance, A. R. Bell, C. Brooks and M. Prokopczuk (Eds.), Volume 16, pp. 373-427, Edward Elgar Publishing, 2013

Volatility measures the dispersion of asset price returns. Recognizing the importance of foreign exchange volatility for risk management and policy evaluation, academics, policymakers, regulators and market practitioners have long studied and estimated models of foreign exchange volatility and jumps. Financial economists have sought to understand and characterize foreign exchange volatility, because the volatility process tells us how news affects asset prices, what information is important and how markets process that information. Policymakers are interested in measuring asset price volatility to learn about market expectations and uncertainty about policy. For example, one might think that a clear understanding of policy objectives and tools would tend to reduce market volatility, other things being equal. More practically, understanding and estimating asset price volatility is important for asset pricing, portfolio allocation and risk management. Traders and regulators must consider not only the expected return from their trading activity but also the trading strategy’s exposure to risk during periods of high volatility. Traders’ risk-adjusted performance depends upon the accuracy of their volatility predictions. Therefore, both traders and regulators use volatility predictions as inputs to models of risk management, such as value-at-risk (VaR).
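As a concrete illustration of that last point, under a Gaussian return assumption the value-at-risk is just a quantile of the predicted return distribution, so the volatility forecast enters it directly. The figures below are made-up inputs, not estimates from the chapter.

```python
# One-day value-at-risk under a Gaussian assumption: the alpha-quantile of
# the predicted return distribution, reported as a positive loss number.
def gaussian_var(mu, sigma, z_alpha=-2.326):
    """99% VaR by default (the 1% standard normal quantile is about -2.326)."""
    return -(mu + sigma * z_alpha)

# A trader with a zero expected return and a 1.5% daily volatility forecast:
var_99 = gaussian_var(0.0, 0.015)
print(f"one-day 99% VaR: {var_99:.4f}")   # about a 3.5% one-day loss bound
```

Doubling the volatility forecast doubles the VaR, which is why the accuracy of the volatility model feeds straight through to the risk limits that traders and regulators monitor.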