Maison de l'économie et de la gestion d'Aix
424 chemin du viaduc
In this paper we study various MIDAS models for which the future daily variance is directly related to past observations of intraday predictors. Our goal is to determine whether there exists an optimal sampling frequency in terms of variance prediction. Via Monte Carlo simulations we show that in a world without microstructure noise, the best model is the one using the highest available frequency for the predictors. However, in the presence of microstructure noise, the use of very high-frequency predictors may be problematic, leading to poor variance forecasts. The empirical application focuses on two highly liquid assets (Microsoft and the S&P 500). We show that, when using raw intraday squared log-returns as the explanatory variable, there is a “high-frequency wall” – or frequency limit – above which MIDAS-RV forecasts deteriorate or stop improving. An improvement can be obtained when using intraday squared log-returns sampled at a higher frequency, provided they are pre-filtered to account for the presence of jumps, the intraday diurnal pattern and/or microstructure noise. Finally, we compare the MIDAS model to other competing variance models, including GARCH, GAS, HAR-RV and HAR-RV-J models. We find that the MIDAS model – when applied to filtered data – provides equivalent or even better variance forecasts than these models. JEL: C22, C53, G12 / KEY WORDS: Variance Forecasting, MIDAS, High-Frequency Data.
RÉSUMÉ. In this article we consider MIDAS regression models to examine the influence of the sampling frequency of the predictors on the quality of daily volatility forecasts. The main objective is to verify whether the information embedded in high-frequency predictors improves the quality of volatility forecasts and, if so, whether there exists an optimal sampling frequency of these predictors in terms of variance prediction. Via Monte Carlo simulations, we show that in a world without microstructure noise, the best model is the one that uses predictors at the highest available frequency. However, in the presence of microstructure noise, using the same high-frequency predictors may be problematic, leading to poor variance forecasts. The empirical application focuses on two highly liquid assets (Microsoft and the S&P 500). We show that, when raw intraday squared returns are used as the explanatory variable, there is a “high-frequency wall” – or frequency limit – beyond which the forecasts of MIDAS-RV models deteriorate or stop improving. An improvement can be obtained when using squared returns sampled at a higher frequency, provided they are pre-filtered to account for the presence of jumps, intraday seasonality and/or microstructure noise. Finally, we compare the MIDAS model to other competing variance models, including GARCH, GAS, HAR-RV and HAR-RV-J models. We find that the MIDAS model – when applied to filtered data – provides variance forecasts that are equivalent to or even better than those of these models.
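The core MIDAS-RV mechanism described above, projecting the future daily variance onto a weighted sum of many past intraday squared returns, can be sketched as follows. This is a minimal illustration assuming the common beta-polynomial lag weighting scheme; the parameter values are arbitrary placeholders, not those estimated in the paper.

```python
import numpy as np

def beta_lag_weights(n_lags, theta1=1.0, theta2=5.0):
    """Beta-polynomial MIDAS lag weights, normalised to sum to one.

    With theta1 = 1 and theta2 > 1, the most recent lags receive
    the largest weights, decaying smoothly towards zero.
    """
    k = np.arange(1, n_lags + 1) / (n_lags + 1)   # grid in (0, 1)
    w = k ** (theta1 - 1) * (1 - k) ** (theta2 - 1)
    return w / w.sum()

def midas_rv_forecast(sq_returns, weights, beta0=0.0, beta1=1.0):
    """One-step-ahead variance forecast from lagged intraday squared
    returns (ordered most recent first), using pre-computed weights."""
    return beta0 + beta1 * np.dot(weights, np.asarray(sq_returns))

w = beta_lag_weights(50)
print(round(float(w.sum()), 10))                          # weights sum to 1.0
print(round(float(midas_rv_forecast(np.ones(50), w)), 10))  # all-ones input -> 1.0
```

Changing the sampling frequency of the predictors amounts to changing what one intraday squared return represents (e.g. 5-minute vs 1-minute) and adjusting `n_lags` accordingly.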
This paper evaluates the most appropriate ways to model diffusion and jump features of high-frequency exchange rates in the presence of intraday periodicity in volatility. We show that periodic volatility distorts the size and power of conventional tests of Brownian motion, jumps and (in)finite activity. We propose a correction for periodicity that restores the properties of the test statistics. Empirically, the most plausible model for 1-min exchange rate data features Brownian motion together with both finite-activity and infinite-activity jumps. Test rejection rates vary over time, however, indicating time variation in the data generating process. We discuss the implications of the results for market microstructure and currency option pricing.
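The distortion arises because intraday test statistics standardise returns without accounting for the deterministic diurnal volatility pattern. A minimal sketch of the kind of correction involved, standardising returns by a per-bin periodicity factor, is shown below; it assumes a simple cross-day average-absolute-return estimator of the pattern, which may differ from the estimator used in the paper.

```python
import numpy as np

def deseasonalise(returns, bins_per_day):
    """Scale intraday returns by an estimated diurnal volatility factor.

    returns: 1-D array whose length is a multiple of bins_per_day,
    ordered in time. The periodicity factor for bin j is the cross-day
    mean absolute return in that bin, normalised to average one.
    """
    r = np.asarray(returns, dtype=float).reshape(-1, bins_per_day)
    factor = np.abs(r).mean(axis=0)      # one factor per intraday bin
    factor = factor / factor.mean()      # normalise: mean factor = 1
    return (r / factor).ravel(), factor

rng = np.random.default_rng(0)
# simulate 100 days x 10 bins with a U-shaped diurnal pattern
pattern = 1 + 0.5 * np.cos(np.linspace(0, 2 * np.pi, 10, endpoint=False))
raw = rng.standard_normal((100, 10)) * pattern
filtered, factor = deseasonalise(raw.ravel(), 10)
```

Jump and activity tests would then be applied to `filtered` rather than `raw`, so that calm and busy times of day are put on a comparable footing.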
The ranking of multivariate volatility models is inherently problematic because when the unobservable volatility is substituted by a proxy, the ordering implied by a loss function may be biased with respect to the intended one. We point out that the size of the distortion is strictly tied to the accuracy of the volatility proxy. We propose a generalized necessary and sufficient functional form for a class of non-metric distance measures of the Bregman type which ensure consistency of the ordering when the target is observed with noise. An application to three foreign exchange rates is provided.
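For intuition, a well-known member of such a proxy-robust class in the univariate case is the QLIKE loss (alongside squared error); the multivariate Bregman-type measures in the paper generalise this idea. The sketch below, with hypothetical numbers, shows the shape of the two losses.

```python
import numpy as np

def qlike(proxy, forecast):
    """QLIKE loss: zero when the forecast equals the variance proxy,
    and asymmetric, penalising under-prediction more than
    over-prediction of the same relative size."""
    ratio = np.asarray(proxy) / np.asarray(forecast)
    return ratio - np.log(ratio) - 1

def mse(proxy, forecast):
    """Squared-error loss on the variance level."""
    return (np.asarray(proxy) - np.asarray(forecast)) ** 2

print(qlike(1.0, 1.0))                     # 0.0 at the truth
print(bool(qlike(1.0, 0.5) > qlike(1.0, 2.0)))  # under-prediction costs more: True
```

The consistency property at stake is that the *expected* loss, computed with a noisy but unbiased proxy, ranks forecasts in the same order as the true (unobservable) variance would.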
Volatility measures the dispersion of asset price returns. Recognizing the importance of foreign exchange volatility for risk management and policy evaluation, academics, policymakers, regulators and market practitioners have long studied and estimated models of foreign exchange volatility and jumps. Financial economists have sought to understand and characterize foreign exchange volatility, because the volatility process tells us about how news affects asset prices, what information is important and how markets process that information. Policymakers are interested in measuring asset price volatility to learn about market expectations and uncertainty about policy. For example, one might think that a clear understanding of policy objectives and tools would tend to reduce market volatility, other things being equal. More practically, understanding and estimating asset price volatility is important for asset pricing, portfolio allocation and risk management. Traders and regulators must consider not only the expected return from their trading activity but also the trading strategy’s exposure to risk during periods of high volatility. Traders’ risk-adjusted performance depends upon the accuracy of their volatility predictions. Therefore, both traders and regulators use volatility predictions as inputs to models of risk management, such as value-at-risk (VaR).
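As a concrete illustration of the last point, a one-day parametric Gaussian VaR is simply a quantile of the return distribution implied by the volatility forecast. The numbers below are hypothetical, and the normality assumption is for illustration only.

```python
from statistics import NormalDist

def gaussian_var(sigma, alpha=0.99, mu=0.0):
    """One-day parametric value-at-risk, reported as a positive loss,
    under a normal return assumption: VaR_alpha = sigma * z_alpha - mu."""
    z = NormalDist().inv_cdf(alpha)
    return sigma * z - mu

# a 2% daily volatility forecast implies a 99% VaR of about 4.65%
print(round(gaussian_var(0.02), 4))
```

A more accurate volatility prediction feeds directly into a tighter VaR: over-predicting volatility ties up capital, while under-predicting it understates the risk exposure.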
Large one-off events cause large changes in prices, but may not affect the volatility and correlation dynamics as much as smaller events. In such cases, standard volatility models may deliver biased covariance forecasts. We propose a multivariate volatility forecasting model that is accurate in the presence of large one-off events. The model is an extension of the dynamic conditional correlation (DCC) model. In our empirical application to forecasting the covariance matrix of the daily EUR/USD and Yen/USD return series, we find that our method produces more precise out-of-sample covariance forecasts than the DCC model. Furthermore, when used in portfolio allocation, it leads to portfolios with similar return characteristics but lower turnovers, and hence higher profits.
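For reference, the core recursion of the baseline DCC model that the proposal extends can be sketched as below. Parameter values are hypothetical; the paper's extension concerns how large one-off shocks enter this update, which the plain recursion shown here treats like any other shock.

```python
import numpy as np

def dcc_step(Q_prev, eps, Q_bar, a=0.05, b=0.93):
    """One step of the DCC(1,1) recursion on standardised residuals eps:

        Q_t = (1 - a - b) * Q_bar + a * eps eps' + b * Q_{t-1},

    then rescale Q_t to a correlation matrix R_t with unit diagonal.
    """
    Q = (1 - a - b) * Q_bar + a * np.outer(eps, eps) + b * Q_prev
    d = 1.0 / np.sqrt(np.diag(Q))
    R = Q * np.outer(d, d)
    return Q, R

Q_bar = np.array([[1.0, 0.3], [0.3, 1.0]])   # unconditional correlation target
Q, R = dcc_step(Q_bar.copy(), np.array([1.5, -0.5]), Q_bar)
print(np.diag(R))   # unit diagonal: a valid correlation matrix
```

Because `eps eps'` enters with weight `a`, a single extreme observation can move `Q_t`, and hence the forecast correlations, for many subsequent periods, which is precisely the sensitivity the paper's extension is designed to temper.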
We propose three residual-based tests for conditional asymmetry. The distribution is assumed to fall into the class of skewed distributions of Fernández and Steel (1998). In this class, asymmetry is measured by the ratio between the probabilities of being larger and smaller than the mode. Estimation is performed under the null hypothesis of constant asymmetry of the innovations and, in a second step, tests for conditional asymmetry are performed on generalized residuals through parametric and nonparametric methods. We derive the asymptotic distribution of the tests, which incorporates the uncertainty of the estimated parameters. A Monte Carlo study shows that neglecting this uncertainty severely biases the tests. An empirical application to a basket of daily return series reveals that financial data often exhibit dynamics in the conditional skewness.
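The asymmetry measure can be made concrete with the skewed-normal member of that class: splicing a standard normal density with scale 1/γ to the left of the mode and γ to the right makes the ratio P(X > mode)/P(X < mode) equal γ². The numerical check below assumes the mode is at zero and uses a simple Riemann sum.

```python
import numpy as np

def skewed_normal_pdf(x, gamma):
    """Fernandez & Steel (1998) skewed normal with mode at zero:
    f(x) = 2/(gamma + 1/gamma) * [ phi(x/gamma) if x >= 0, phi(gamma*x) if x < 0 ]."""
    x = np.asarray(x, dtype=float)
    phi = lambda z: np.exp(-0.5 * z * z) / np.sqrt(2 * np.pi)
    scale = 2.0 / (gamma + 1.0 / gamma)
    return scale * np.where(x >= 0, phi(x / gamma), phi(gamma * x))

gamma = 1.5
x = np.linspace(-12, 12, 240_001)
f = skewed_normal_pdf(x, gamma)
dx = x[1] - x[0]
mass_right = f[x >= 0].sum() * dx
mass_left = f[x < 0].sum() * dx
print(round(mass_right / mass_left, 2))   # close to gamma**2 = 2.25
```

Under the null of constant asymmetry, γ is a fixed parameter of the innovation distribution; the tests ask whether this ratio in fact moves over time.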
This paper investigates the link between jumps in the exchange rate process and rumours of central bank interventions. Using the case of Japan, we analyse specifically whether jumps trigger false reports of intervention (i.e. an intervention is reported when it did not occur). Intraday jumps are extracted using a non-parametric technique proposed by Lee and Mykland (2008) and Andersen et al. (2007), and later modified by Boudt et al. (2011). Rumours are identified by using a unique database of Reuters and Dow Jones newswires. Our results suggest that a significant number of jumps on the YEN/USD have been falsely interpreted by the market as being the result of a central bank intervention. The paper has policy implications in terms of central bank interventions. We show that in periods when the central bank is known to intervene, some investors may attach a lot of weight to central bank interventions as a source of exchange rate movement, leading to a false ‘intervention explanation’ for observed jumps.
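For intuition, the jump statistic of Lee and Mykland (2008) standardises each intraday return by a local volatility estimate built from bipower variation over the preceding window; returns whose statistic exceeds a threshold are flagged as jumps. The sketch below is illustrative (window length, simulated volatility and jump size are made up) and omits the diurnal-pattern adjustment of Boudt et al. (2011).

```python
import numpy as np

def lee_mykland_stats(returns, K=20):
    """Jump statistic L_i = r_i / sigma_hat_i, where sigma_hat_i^2 is a
    local bipower-variation estimate over the K returns preceding i.
    The first K entries are NaN (insufficient window)."""
    r = np.asarray(returns, dtype=float)
    stats = np.full(len(r), np.nan)
    for i in range(K, len(r)):
        window = r[i - K:i]
        # bipower variation: E|z||z'| = (2/pi) * sigma^2 for Gaussians
        bv = np.mean(np.abs(window[1:]) * np.abs(window[:-1])) / (2 / np.pi)
        stats[i] = r[i] / np.sqrt(bv)
    return stats

rng = np.random.default_rng(1)
r = rng.standard_normal(500) * 0.01   # diffusive intraday returns
r[300] += 0.08                        # inject one large jump
L = lee_mykland_stats(r)
print(int(np.nanargmax(np.abs(L))))   # the largest statistic flags index 300
```

Matching the timestamps of such flagged jumps against newswire reports is what allows the paper to identify jumps that were falsely attributed to interventions.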
This paper addresses the question of the selection of multivariate GARCH models in terms of variance matrix forecasting accuracy, with a particular focus on relatively large scale problems. We consider 10 assets from NYSE and NASDAQ and compare 125 model-based one-step-ahead conditional variance forecasts over a period of 10 years using the model confidence set (MCS) and the Superior Predictive Ability (SPA) tests. Model performances are evaluated using four statistical loss functions which account for different types and degrees of asymmetry with respect to over/under predictions. When considering the full sample, MCS results are strongly driven by short periods of high market instability during which multivariate GARCH models appear to be inaccurate. Over relatively unstable periods, such as the dot-com bubble, the set of superior models is composed of more sophisticated specifications such as orthogonal and dynamic conditional correlation (DCC), both with leverage effect in the conditional variances. However, unlike the DCC models, our results show that the orthogonal specifications tend to underestimate the conditional variance. Over calm periods, a simple assumption like constant conditional correlation and symmetry in the conditional variances cannot be rejected. Finally, during the 2007-2008 financial crisis, accounting for non-stationarity in the conditional variance process generates superior forecasts. The SPA test suggests that, independently of the period, the best models do not provide significantly better forecasts than the DCC model of Engle (2002) with leverage in the conditional variances of the returns.