
Sébastien Laurent

Faculty: Aix-Marseille Université, Institut d'administration des entreprises (IAE)

Econometrics, Finance and mathematical methods
Status
Professor
Research domain(s)
Econometrics, Finance
Thesis
2002, Maastricht University
Address

Maison de l'économie et de la gestion d'Aix
424 chemin du viaduc, CS80429
13097 Aix-en-Provence Cedex 2

Abstract Drift and volatility are two mainsprings of asset price dynamics. While volatilities have been studied extensively in the literature, drifts are commonly believed to be impossible to estimate and largely ignored in the literature. This paper shows how to detect drift using realized autocovariance implemented on high-frequency data. We use a theoretical treatment in which the classical model for the efficient price, an Itō semimartingale possibly contaminated by microstructure noise, is enriched with drift and volatility explosions. Our theory advocates a novel decomposition for realized variance into a drift and a volatility component, which leads to significant improvements in volatility forecasting.
Keywords Volatility Forecasting, Serial Covariance, High-frequency Data, Drift
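The paper's decomposition of realized variance is its own contribution and is not reproduced here, but its two building blocks are easy to state. A minimal Python sketch, assuming evenly sampled intraday log returns (the function names and the simulated drift are illustrative, not from the paper):

```python
import random

def realized_variance(returns):
    # Sum of squared intraday returns: the standard daily variance estimator.
    return sum(r * r for r in returns)

def realized_autocovariance(returns, lag=1):
    # Realized serial covariance of intraday returns; under a pure martingale
    # price it is centered at zero, so a persistently nonzero value can
    # signal a drift (or drift burst) component.
    return sum(returns[i] * returns[i - lag] for i in range(lag, len(returns)))

# Simulated 1-minute returns over a trading day with a small constant drift.
random.seed(0)
rets = [1e-4 + random.gauss(0.0, 1e-3) for _ in range(390)]
rv = realized_variance(rets)
rac = realized_autocovariance(rets)
```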
Abstract We propose the stepwise Cauchy combination test (StepC), a new procedure for multiple testing with dependent test statistics and sparse signals. Unlike the global version, StepC pinpoints which p-values drive rejections, while maintaining strong familywise error control. It is less conservative under dependence and more powerful than conventional multiple testing corrections. In simulations and in applications to drift burst detection and testing for nonzero alphas, StepC consistently boosts power and yields more meaningful rejections, making it a practical alternative for large-scale financial datasets.
Keywords Nonasymptotic approximation, Sequential rejection, Multiple hypothesis testing, Familywise error, Dependence
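The stepwise procedure is the paper's contribution, but it builds on the global Cauchy combination test of Liu and Xie (2020), which is simple to sketch. A minimal Python illustration (the function name and the toy p-values are mine):

```python
import math

def cauchy_combination(pvalues):
    # Map each p-value to a standard Cauchy quantile, average, and map back.
    # The combined p-value is robust to dependence among the individual tests.
    t = sum(math.tan((0.5 - p) * math.pi) for p in pvalues) / len(pvalues)
    return 0.5 - math.atan(t) / math.pi

# One strong signal hidden among 99 null p-values still yields a small
# combined p-value: the sparse-signal setting that StepC targets.
pvals = [1e-4] + [0.5] * 99
p_global = cauchy_combination(pvals)
```

The global test only says that *some* null is false; the stepwise version described in the abstract additionally identifies which p-values drive the rejection.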
Abstract This paper introduces an autoregressive conditional beta (ACB) model that allows regressions with dynamic betas (or slope coefficients) and residuals with GARCH conditional volatility. The model fits in the (quasi) score-driven approach recently proposed in the literature, and it is semi-parametric in the sense that the distributions of the innovations are not necessarily specified. The time-varying betas are allowed to depend on past shocks and exogenous variables. We establish the existence of a stationary solution for the ACB model, the invertibility of the score-driven filter for the time-varying betas, and the asymptotic properties of one-step and multistep QMLEs for the new ACB model. The finite sample properties of these estimators are studied by means of an extensive Monte Carlo study. Finally, we also propose a strategy to test for the constancy of the conditional betas. In a financial application, we find evidence for time-varying conditional betas and highlight the empirical relevance of the ACB model in a portfolio and risk management empirical exercise.
Keywords Betas, GARCH model, Time-varying parameters, Score driven model
Abstract Despite their high predictive performance, random forest and gradient boosting are often considered as black boxes which has raised concerns from practitioners and regulators. As an alternative, we suggest using partial linear models that are inherently interpretable. Specifically, we propose to combine parametric and non‐parametric functions to accurately capture linearities and non‐linearities prevailing between dependent and explanatory variables, and a variable selection procedure to control for overfitting issues. Estimation relies on a two‐step procedure building upon the double residual method. We illustrate the predictive performance and interpretability of our approach on a regression problem.
Keywords Machine learning, Lasso, Autometrics, GAM
Abstract Two recent contributions have found conditions for large dimensional networks or systems to generate long memory in their individual components. We build on these and provide a multivariate methodology for modeling and forecasting series displaying long range dependence. We model long memory properties within a vector autoregressive system of order 1 and consider Bayesian estimation or ridge regression. For these, we derive a theory-driven parametric setting that informs a prior distribution or a shrinkage target. Our proposal significantly outperforms univariate time series long-memory models when forecasting a daily volatility measure for 250 U.S. company stocks over twelve years. This provides an empirical validation of the theoretical results showing long memory can be sourced to marginalization within a large dimensional system.
Keywords Model Forecasting, Vector autoregressive, Ridge regression, Bayesian estimation
Abstract This paper introduces the class of quasi score-driven (QSD) models. This new class inherits and extends the basic ideas behind the development of score-driven (SD) models and addresses a number of unsolved issues in the score literature. In particular, the new class of models (i) generalizes many existing models, including SD models, (ii) disconnects the updating equation from the log-likelihood implied by the conditional density of the observations, (iii) allows testing of the assumptions behind SD models that link the updating equation of the conditional moment to the conditional density, (iv) allows QML estimation of SD models, (v) and allows explanatory variables to enter the updating equation. We establish the asymptotic properties of the QLE, QMLE and MLE of the proposed QSD model as well as the likelihood ratio and Lagrange multiplier test statistics. The finite sample properties are studied by means of an extensive Monte Carlo study. Finally, we show the empirical relevance of QSD models to estimate the conditional variance of 400 US stocks.
Keywords Score-driven models, GARCH, Fat-tails, Asymmetry, QLE, QMLE
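As a reference point for the QSD class, the familiar GARCH(1,1) recursion coincides with the score-driven update for the conditional variance under a Gaussian likelihood. A minimal Python sketch (the parameter values are illustrative):

```python
import random

def garch11_filter(returns, omega, alpha, beta, sigma2_0):
    # GARCH(1,1) recursion: sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}.
    # Under a Gaussian likelihood this is the score-driven update for the
    # conditional variance; a QSD model would let the updating equation
    # deviate from the score implied by the assumed conditional density.
    sigma2 = [sigma2_0]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

random.seed(1)
rets = [random.gauss(0.0, 0.01) for _ in range(500)]
s2 = garch11_filter(rets, omega=1e-6, alpha=0.05, beta=0.90, sigma2_0=1e-4)
```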
Abstract In this paper, we investigate whether small- or mid-cap stocks can be considered as an alternative asset class that allows for the enhancement of the mean-variance characteristics of an investor’s portfolio. Using all the French stocks listed on the Euronext Stock Market from January 2000 to December 2018, we first examine this issue from the perspective of an investor that invests in familiar asset classes such as domestic large-cap stocks and is willing to add small- and/or mid-cap domestic stocks to his or her portfolio. We also examine this issue from the perspective of a French investor who has internationally diversified his or her portfolio using only international large-cap indices. Finally, we attempt to measure whether the size-based performance of a portfolio varies over time. We show that investors may benefit from adding micro- or small-cap stocks to their portfolios due to the fact that size-based portfolio performance varies over time. In particular, we find that small- and mid-cap stocks behave differently from large-cap stocks during and after the financial crisis, meaning that they can be considered as an alternative profitable asset class in portfolio management. This result is robust to different methodologies used to classify size-based portfolios as well as to different sets of benchmark assets. Except in the case of large-cap stocks at the beginning of the financial crisis, the spanning hypothesis cannot be rejected for small-, mid- or large-cap portfolios during the 2007-2012 period, but the hypothesis is rejected for all small-cap portfolios during the 2012-2018 period. Our results show that French small-cap stocks behave differently from mid- and large-cap stocks, and they also differ when their behavior is compared to that of other international asset classes.
Keywords Asset allocation, Portfolio diversification, Small- and mid-cap stocks
Abstract Deviations of asset prices from the random walk dynamic imply the predictability of asset returns and thus have important implications for portfolio construction and risk management. This paper proposes a real-time monitoring device for such deviations using intraday high-frequency data. The proposed procedures are based on unit root tests with in-fill asymptotics but extended to take the empirical features of high-frequency financial data (particularly jumps) into consideration. We derive the limiting distributions of the tests under both the null hypothesis of a random walk with jumps and the alternative of mean reversion/explosiveness with jumps. The limiting results show that ignoring the presence of jumps could potentially lead to severe size distortions of both the standard left-sided (against mean reversion) and right-sided (against explosiveness) unit root tests. The simulation results reveal satisfactory performance of the proposed tests even with data from a relatively short time span. As an illustration, we apply the procedure to the Nasdaq composite index at the 10-minute frequency over two periods: around the peak of the dot-com bubble and during the 2015–2016 stock market sell-off. We find strong evidence of explosiveness in asset prices in late 1999 and mean reversion in late 2015. We also show that accounting for jumps when testing the random walk hypothesis on intraday data is empirically relevant and that ignoring jumps can lead to different conclusions.
Abstract Beta coefficients are the cornerstone of asset pricing theory in the CAPM and multiple factor models. This chapter proposes a review of different time series models used to estimate static and time-varying betas, and a comparison on real data. The analysis is performed on the USA and developed Europe REIT markets over the period 2009–2019 via a two-factor model. We evaluate the performance of the different techniques in terms of in-sample estimates as well as through an out-of-sample tracking exercise. Results show that dynamic models clearly outperform static models and that both the state space and autoregressive conditional beta models outperform the other methods.
Keywords Autoregressive conditional beta, Dynamic conditional beta, State space, Multivariate GARCH, REITs, Real estate
Abstract The logarithmic prices of financial assets are conventionally assumed to follow a drift–diffusion process. While the drift term is typically ignored in the infill asymptotic theory and applications, the presence of temporary nonzero drifts is an undeniable fact. The finite sample theory for integrated variance estimators and extensive simulations provided in this paper reveal that the drift component has a nonnegligible impact on the estimation accuracy of volatility, which leads to a dramatic power loss for a class of jump identification procedures. We propose an alternative construction of volatility estimators and observe significant improvement in the estimation accuracy in the presence of nonnegligible drift. The analytical formulas of the finite sample bias of the realized variance, bipower variation, and their modified versions take simple and intuitive forms. The new jump tests, which are constructed from the modified volatility estimators, show satisfactory performance. As an illustration, we apply the new volatility estimators and jump tests, along with their original versions, to 21 years of 5-minute log returns of the NASDAQ stock price index.
Keywords Jumps, Volatility estimation, Finite sample theory, Nonzero drift, Diffusion process
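The paper's drift-modified estimators are its contribution; the unmodified pair they improve on, realized variance and bipower variation, can be sketched directly. A minimal Python illustration of why the RV/BV gap detects jumps (the toy return paths are illustrative):

```python
import math

def realized_variance(returns):
    # Sum of squared returns: picks up both diffusion variance and jumps.
    return sum(r * r for r in returns)

def bipower_variation(returns):
    # Bipower variation: products of adjacent absolute returns, scaled by
    # mu1**-2 with mu1 = E|Z| = sqrt(2/pi) for standard normal Z.  A jump
    # multiplies two small neighbors instead of being squared, so BV stays
    # close to the diffusion-only variance.
    mu1 = math.sqrt(2.0 / math.pi)
    return sum(abs(returns[i]) * abs(returns[i - 1])
               for i in range(1, len(returns))) / (mu1 * mu1)

# Insert one large jump into a quiet return path: RV jumps, BV barely moves.
base = [0.001 * (-1) ** i for i in range(100)]
with_jump = base[:50] + [0.05] + base[50:]
rv_gap = realized_variance(with_jump) - realized_variance(base)
bv_gap = bipower_variation(with_jump) - bipower_variation(base)
```

The paper's point is that a nonnegligible drift biases both estimators in finite samples, which is what its modified versions correct.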
Abstract This paper proposes a new model with time-varying slope coefficients. Our model, called CHAR, is a Cholesky-GARCH model, based on the Cholesky decomposition of the conditional variance matrix introduced by Pourahmadi (1999) in the context of longitudinal data. We derive stationarity and invertibility conditions and prove consistency and asymptotic normality of the Full and equation-by-equation QML estimators of this model. We then show that this class of models is useful to estimate conditional betas and compare it to the approach proposed by Engle (2016). Finally, we use real data in a portfolio and risk management exercise. We find that the CHAR model outperforms a model with constant betas as well as the dynamic conditional beta model of Engle (2016).
Keywords Covariance, Conditional betas, Multivariate-GARCH
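The CHAR dynamics are beyond a short sketch, but the key identity behind the Cholesky approach is that the factorization of a covariance matrix encodes regression slopes. A minimal static 2x2 illustration in Python (function name and numbers are mine):

```python
import math

def cholesky_2x2(var_m, cov_mr, var_r):
    # Cholesky factor L of [[var_m, cov_mr], [cov_mr, var_r]]; in
    # Pourahmadi's parameterization the ratio l21 / l11 is the regression
    # slope (beta) of the second series on the first.
    l11 = math.sqrt(var_m)
    l21 = cov_mr / l11
    l22 = math.sqrt(var_r - l21 * l21)
    beta = l21 / l11  # equals cov_mr / var_m
    return (l11, l21, l22), beta

# Market variance 0.04 and market/asset covariance 0.048 give beta = 1.2.
L, beta = cholesky_2x2(0.04, 0.048, 0.10)
```

In the CHAR model these Cholesky elements, and hence the betas, are made time-varying with GARCH-type dynamics; the sketch only shows the static mapping.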
Abstract This paper shows that a large dimensional vector autoregressive model (VAR) of finite order can generate fractional integration in the marginalized univariate series. We derive high-level assumptions under which the final equation representation of a VAR(1) leads to univariate fractional white noises and verify the validity of these assumptions for two specific models.
Keywords Marginalization, Long memory, Final equation representation, Vector autoregressive model
Abstract We propose a bootstrap-based test of the null hypothesis of equality of two firms' conditional Risk Measures (RMs) at a single point in time. The test can be applied to a wide class of conditional risk measures issued from parametric or semi-parametric models. Our iterative testing procedure produces a grouped ranking of the RMs, which has direct application for systemic risk analysis. Firms within a group are statistically indistinguishable from each other, but significantly more risky than the firms belonging to lower-ranked groups. A Monte Carlo simulation demonstrates that our test has good size and power properties. We apply the procedure to a sample of 94 U.S. financial institutions using ΔCoVaR, MES, and %SRISK. We find that for some periods and RMs, we cannot statistically distinguish the 40 most risky firms due to estimation uncertainty.
Keywords Quantitative economics
Abstract The 70th European Meeting of the Econometric Society (ESEM) will take place in Lisbon, Portugal, August 21-25, 2017. The Meeting is hosted by ISCTE - IUL and the University of Lisbon and will run in parallel with the 32nd Annual Congress of the European Economic Association (EEA). Participants will be able to attend all sessions of both events. The program chairs are Kfir Eliaz (Tel Aviv University and University of Michigan) and Imran Rasul (University College London and Institute for Fiscal Studies). The members of the local organizing committee from ISEG-UTL and the University of Lisbon are Mário Centeno, Vitor Escária, Alexandra Ferreira Lopes, Francisco Lima, and Luís Martins.
Abstract This paper proposes a new observation-driven model with time-varying slope coefficients. Our model, called CHAR, is a Cholesky-GARCH model, based on the Cholesky decomposition of the conditional variance matrix introduced by Pourahmadi (1999) in the context of longitudinal data. We derive stationarity and invertibility conditions and prove consistency and asymptotic normality of the Full and equation-by-equation QML estimators of this model. We then show that this class of models is useful to estimate conditional betas and compare it to the approach proposed by Engle (2016). Finally, we use real data in a portfolio and risk management exercise. We find that the CHAR model outperforms a model with constant betas as well as the dynamic conditional beta model of Engle (2016).
Keywords Covariance, Conditional betas, Multivariate-GARCH
Abstract The properties of dynamic conditional correlation (DCC) models, introduced more than a decade ago, are still not entirely known. This paper fills one of the gaps by deriving weak diffusion limits of a modified version of the classical DCC model. The limiting system of stochastic differential equations is characterized by a diffusion matrix of reduced rank. The degeneracy is due to perfect collinearity between the innovations of the volatility and correlation dynamics. For the special case of constant conditional correlations, a nondegenerate diffusion limit can be obtained. Alternative sets of conditions are considered for the rate of convergence of the parameters, obtaining time-varying but deterministic variances and/or correlations. A Monte Carlo experiment confirms that the often used quasi-approximate maximum likelihood (QAML) method to estimate the diffusion parameters is inconsistent for any fixed frequency, but that it may provide reasonable approximations for sufficiently large frequencies and sample sizes.
Keywords Quantitative economics
Abstract An estimator of the ex-post covariation of log-prices under asynchronicity and microstructure noise is proposed. It uses the Cholesky factorization of the covariance matrix in order to exploit the heterogeneity in trading intensities to estimate the different parameters sequentially with as many observations as possible. The estimator is positive semidefinite by construction. We derive asymptotic results and confirm their good finite sample properties by means of a Monte Carlo simulation. In the application we forecast portfolio Value-at-Risk and sector risk exposures for a portfolio of 52 stocks. We find that the dynamic models utilizing the proposed high-frequency estimator provide statistically and economically superior forecasts.
Keywords Positive semidefinite, Non-synchronous trading, Integrated covariance, Cholesky decomposition, Reali
Abstract The class of Cholesky-GARCH models, based on the Cholesky decomposition of the conditional variance matrix, is studied. We first consider the one-step and multi-step QML estimators. We prove the consistency and the asymptotic normality of the two estimators and derive the corresponding stationarity conditions. We then show that this class of models is useful to estimate conditional betas and compare it to other approaches proposed in the financial literature. Finally, we use real data to show that our model performs very well compared to other multivariate GARCH models.
Abstract In this paper we study various MIDAS models for which the future daily variance is directly related to past observations of intraday predictors. Our goal is to determine if there exists an optimal sampling frequency in terms of variance prediction. Via Monte Carlo simulations we show that in a world without microstructure noise, the best model is the one using the highest available frequency for the predictors. However, in the presence of microstructure noise, the use of very high-frequency predictors may be problematic, leading to poor variance forecasts. The empirical application focuses on two highly liquid assets (i.e., Microsoft and the S&P 500). We show that, when using raw intraday squared log-returns for the explanatory variable, there is a “high-frequency wall” – or frequency limit – above which MIDAS-RV forecasts deteriorate or stop improving. An improvement can be obtained when using intraday squared log-returns sampled at a higher frequency, provided they are pre-filtered to account for the presence of jumps, the intraday diurnal pattern and/or microstructure noise. Finally, we compare the MIDAS model to other competing variance models including GARCH, GAS, HAR-RV and HAR-RV-J models. We find that the MIDAS model – when it is applied to filtered data – provides equivalent or even better variance forecasts than these models. JEL: C22, C53, G12
Keywords High-frequency Data, MIDAS, Variance Forecasting
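MIDAS-RV regressions aggregate many intraday lags through a low-dimensional weight function. A minimal sketch of a common beta-polynomial weighting scheme and the resulting one-step forecast, in Python (the lag grid j/(k+1), the function names and all parameter values are illustrative assumptions, not the paper's specification):

```python
def beta_weights(k, theta1=1.0, theta2=5.0):
    # Beta-polynomial MIDAS lag weights over k high-frequency lags,
    # nonnegative and normalized to sum to one; theta2 > 1 makes them decay
    # with the lag, so recent observations receive more weight.
    raw = [(j / (k + 1)) ** (theta1 - 1) * (1 - j / (k + 1)) ** (theta2 - 1)
           for j in range(1, k + 1)]
    s = sum(raw)
    return [w / s for w in raw]

def midas_rv_forecast(past_sq_returns, weights, intercept, slope):
    # One-step-ahead variance forecast from past intraday squared returns
    # (most recent first): intercept + slope * weighted lag polynomial.
    x = sum(w * r2 for w, r2 in zip(weights, past_sq_returns))
    return intercept + slope * x

w = beta_weights(50)
forecast = midas_rv_forecast([1e-4] * 50, w, intercept=1e-6, slope=0.9)
```

Only the two theta parameters are estimated, however many lags enter, which is what makes very high sampling frequencies feasible in the first place.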
Abstract Simple low-order multivariate GARCH models imply marginal processes with a lot of persistence in the form of high-order lags. This is not what we find in many situations, however, where parsimonious univariate GARCH(1,1) models, for instance, describe quite well the conditional volatility of some asset returns. To explain this paradox, we show that in the presence of common GARCH factors, parsimonious univariate representations can result from large multivariate models generating the conditional variances and conditional covariances/correlations. The diagonal model without any contagion effects in conditional volatilities gives rise to similar conclusions, though. Consequently, after having extracted a block of assets exhibiting some form of parsimony, there remains the task of determining whether we have a set of independent assets or instead a highly dependent system generated by a few factors. To investigate this issue, we first evaluate a reduced-rank regression approach for squared returns, which we extend to cross-returns. Second, we investigate a likelihood ratio approach, where under the null the matrix parameters have a reduced-rank structure. It emerges that the latter approach has quite good properties, enabling us to discriminate between a system with seemingly unrelated assets (e.g. a diagonal model) and a model with a few common sources of volatility.
Keywords Quantitative economics
Abstract Financial asset prices occasionally exhibit large changes. To deal with their occurrence, observed return series are assumed to consist of a conditionally Gaussian ARMA-GARCH type model contaminated by an additive jump component. In this framework, a new test for additive jumps is proposed. The test is based on standardized returns, where the first two conditional moments of the non-contaminated observations are estimated in a robust way. Simulation results indicate that the test has very good finite sample properties, i.e. correct size and a high proportion of correct jump detection. The test is applied to daily returns and detects less than 1% of jumps for three exchange rates and between 1% and 3% of jumps for about 50 large capitalization stock returns from the NYSE. Once jumps have been filtered out, all series are found to be conditionally Gaussian. It is also found that simple GARCH-type models estimated using filtered returns deliver more accurate out-of-sample forecasts of the conditional variance than GARCH and Generalized Autoregressive Score (GAS) models estimated from raw data.
Keywords Test, Jumps, GARCH, Forecasting
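The test rests on standardized returns with robustly estimated conditional moments. A minimal sketch of the final flagging step in Python, assuming a jump-robust conditional scale sigma_t is already available; a plain Gaussian critical value is used here for illustration, whereas the paper calibrates its threshold more carefully (function name and toy numbers are mine):

```python
from statistics import NormalDist

def flag_jumps(returns, sigmas, alpha=0.01):
    # Flag observation t as containing an additive jump when the return,
    # standardized by a jump-robust conditional scale sigma_t, exceeds the
    # two-sided Gaussian critical value at level alpha.
    k = NormalDist().inv_cdf(1 - alpha / 2)
    return [abs(r) / s > k for r, s in zip(returns, sigmas)]

rets = [0.001, -0.002, 0.05, 0.0015]
sigmas = [0.002, 0.002, 0.002, 0.002]
flags = flag_jumps(rets, sigmas)  # only the 0.05 return is flagged
```

Filtering then amounts to replacing (or shrinking) the flagged observations before re-estimating the GARCH model on the cleaned series.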
Abstract This paper evaluates the most appropriate ways to model diffusion and jump features of high-frequency exchange rates in the presence of intraday periodicity in volatility. We show that periodic volatility distorts the size and power of conventional tests of Brownian motion, jumps and (in)finite activity. We propose a correction for periodicity that restores the properties of the test statistics. Empirically, the most plausible model for 1-min exchange rate data features Brownian motion and both finite activity and infinite activity jumps. Test rejection rates vary over time, however, indicating time variation in the data generating process. We discuss the implications of results for market microstructure and currency option pricing.
Keywords Volatility, Jumps, Intraday periodicity, High-frequency Data, Exchange rates, Brownian motion
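A standard way to correct for intraday periodicity before testing is to scale returns by a robust time-of-day volatility factor. A minimal Python sketch (the median-based estimator and the toy data are illustrative; the paper's correction may differ):

```python
from statistics import median

def periodicity_factors(returns_by_day, bins_per_day):
    # Robust time-of-day volatility factors: the median absolute return per
    # intraday bin across days, normalized to average one.  Dividing each
    # return by its bin's factor removes the diurnal volatility pattern
    # before applying jump or activity tests.
    factors = [median(abs(day[b]) for day in returns_by_day)
               for b in range(bins_per_day)]
    mean_f = sum(factors) / len(factors)
    return [f / mean_f for f in factors]

# Toy data: three days, three intraday bins, higher volatility in bin 0
# (mimicking the elevated volatility around market open).
days = [[0.0020, 0.0010, 0.0010],
        [0.0022, 0.0009, 0.0011],
        [0.0018, 0.0011, 0.0009]]
f = periodicity_factors(days, 3)
```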
Abstract No abstract available.
Keywords Quantitative economics
Abstract In this paper, we investigate the impact of monetary policy signals stemming from the Bundesbank Council and the FOMC on intraday Deutsche Mark-dollar volatility (five-minute frequency). To do so, we estimate an AR(1)-GARCH(1,1) model that incorporates a polynomial structure depending on signal variables, fitted to the deseasonalized exchange rate return series. This structure allows us to test the persistence of the signals one hour after their occurrence and reveals an asymmetry between the effects of Bundesbank and Federal Reserve signals on exchange rate volatility.
Abstract This paper proposes a new model with time-varying slope coefficients. Our model, called CHAR, is a Cholesky-GARCH model, based on the Cholesky decomposition of the conditional variance matrix introduced by Pourahmadi (1999) in the context of longitudinal data. We derive stationarity and invertibility conditions and prove consistency and asymptotic normality of the Full and equation-by-equation QML estimators of this model. We then show that this class of models is useful to estimate conditional betas and compare it to the approach proposed by Engle (2016). Finally, we use real data in a portfolio and risk management exercise. We find that the CHAR model outperforms a model with constant betas as well as the dynamic conditional beta model of Engle (2016).
Keywords Multivariate-GARCH, Conditional betas, Covariance
Abstract This paper shows that a large dimensional vector autoregressive model (VAR) of finite order can generate fractional integration in the marginalized univariate series. We derive high-level assumptions under which the final equation representation of a VAR(1) leads to univariate fractional white noises and verify the validity of these assumptions for two specific models.
Keywords Long memory, Vector autoregressive model, Marginalization, Final equation representation
Abstract Logarithms of prices of financial assets are conventionally assumed to follow drift-diffusion processes. While the drift term is typically ignored in the infill asymptotic theory and applications, the presence of nonzero drifts is an undeniable fact. The finite sample theory and extensive simulations provided in this paper reveal that the drift component has a nonnegligible impact on the estimation accuracy of volatility and leads to a dramatic power loss of a class of jump identification procedures. We propose an alternative construction of volatility estimators and jump tests and observe significant improvement of both in the presence of nonnegligible drift. As an illustration, we apply the new volatility estimators and jump tests, along with their original versions, to 21 years of 5-minute log-returns of the NASDAQ stock price index.
Keywords Diffusion process, Nonzero drift, Finite sample theory, Volatility estimation, Jumps