Long run forecast of the covariance matrix



Chapter 1: Introduction

1.1 Introduction

1.2 Background information and company context

1.3 Problem Statement

1.4 Rationale for the study

1.5 Study objectives

1.6 Scope of study

1.7 Research design

1.8 Limitations of the study

Chapter 2: Literature Review

1 Introduction

The dynamics of the time-varying volatility of financial assets play a central role in diverse fields, such as derivative pricing and risk management. Consequently, the literature on estimating and forecasting conditional variance is vast. The most popular approach to modelling volatility belongs to the family of GARCH models (see Bollerslev et al., 1992, for a review of this topic), although other alternatives (such as stochastic volatility models) also provide reliable estimates. The success of GARCH processes is unquestionably tied to the fact that they are able to fit the stylized features exhibited by volatility in a fairly parsimonious and convincing way, through a computationally feasible method. The seminal models developed by Engle (1982) and Bollerslev (1986) were rapidly generalized with increasing sophistication to capture further empirical aspects of volatility.
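To make the baseline concrete, the following is a minimal sketch of the GARCH(1,1) recursion of Bollerslev (1986), in which the conditional variance is driven by the previous squared return and its own lag. The parameter values are illustrative assumptions, not estimates from any data set.

```python
import numpy as np

# Minimal GARCH(1,1) sketch: sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}.
# omega, alpha, beta below are assumed illustrative values (alpha + beta < 1
# ensures weak stationarity and a finite unconditional variance).
def simulate_garch(omega=0.05, alpha=0.08, beta=0.90, n=2000, seed=0):
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    sigma2 = np.empty(n)
    sigma2[0] = omega / (1.0 - alpha - beta)   # unconditional variance
    r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
        r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return r, sigma2

r, sigma2 = simulate_garch()
```

The same recursion, run in reverse as a filter on observed returns, is what a univariate GARCH fit evaluates inside its likelihood.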

One of the more complex features that univariate GARCH-type models have attempted to fit is the so-called long-memory property. The volatility of many financial assets exhibits a strong temporal dependence, revealed by a slow decay to zero in the autocorrelation function of the standard proxies of volatility (usually squared and absolute returns) at long lags. The basic GARCH model does not succeed in fitting this pattern because it implicitly assumes a fast, geometric decay in the theoretical autocorrelations. Engle and Bollerslev (1986) were the first to address this fact and suggested an integrated GARCH model (IGARCH), imposing a unit root in the conditional variance. The theoretical properties of IGARCH models, however, are not entirely satisfactory for fitting actual financial data, so further models were later developed to capture temporal dependence. Baillie, Bollerslev and Mikkelsen (1996) proposed the so-called fractionally integrated GARCH (FIGARCH) models for volatility, in the same spirit as the fractional ARIMA models developed for modelling the mean of time series (see Baillie, 1996). These models imply a hyperbolic rate of decay in the autocorrelation function of squared residuals, and generalize the basic framework while retaining a parsimonious parameterization.
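The contrast between the two decay regimes can be sketched numerically: a short-memory GARCH process has autocorrelations shrinking like (α+β)^k, while a long-memory process decays like k^(2d−1). The constants and the memory parameter d below are assumptions chosen purely for illustration.

```python
import numpy as np

lags = np.arange(1, 201)

# Short memory: rho_k proportional to (alpha + beta)^k; here alpha + beta = 0.95
# and the scale constant 0.3 are illustrative assumptions.
rho_geometric = 0.3 * 0.95 ** lags

# Long memory: rho_k proportional to k^(2d - 1) with 0 < d < 0.5; d = 0.3 assumed.
d = 0.3
rho_hyperbolic = 0.3 * lags.astype(float) ** (2 * d - 1)

# At long lags the hyperbolic curve dominates: it decays to zero far more slowly.
print(rho_geometric[-1], rho_hyperbolic[-1])
```

At lag 200 the geometric term is already negligible while the hyperbolic term remains of visible size, which is precisely the pattern observed in sample autocorrelations of squared returns.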

There has been great interest in modelling the temporal dependence in the volatility of financial series, mostly in the univariate framework¹. The analysis of the long-memory property in the multivariate framework, however, has received much less attention, even though the estimation of time-varying covariances between asset returns is crucial for risk management, portfolio selection, optimal hedging and other important applications. The main reason is that modelling conditional variance in the multivariate framework quickly becomes computationally burdensome as the number of assets grows.

¹ An alternative to GARCH-type models for modelling long memory is based on the family of stochastic volatility models (see Breidt, Crato and de Lima, 1998). An extension of FIGARCH models has been considered in Ding, Granger and Engle (1993).

2 The multivariate modelling of long-memory

Although long memory has been observed in the volatility of a wide range of assets, the literature on the topic focuses mainly on foreign exchange rate time series (FX hereafter). There exists a great deal of empirical literature on modelling and forecasting the volatility of exchange-rate returns with FIGARCH models in the univariate framework. An exhaustive review of this literature is beyond the scope of this paper; some recent empirical work on the issue can be found in Vilasuso (2002) and Beine et al. (2002). The literature dealing with the multivariate case, on the other hand, is scarce.

The modelling of long memory in the multivariate framework was first studied by Teyssière (1997), who implemented several long-memory volatility processes in a bivariate context, focusing on daily FX time series. He used an approach initially based on the multivariate constant conditional correlation model (Bollerslev, 1990), which allows for long-memory ARCH dynamics in the covariance equation. He also weakened the assumption of constant correlations and estimated time-varying patterns. Teyssière (1998) estimated several trivariate FIGARCH models on intraday FX rate returns, finding a common degree of long memory in the marginal variances, while the covariances do not share the same level of persistence as the conditional variances. More recently, Pafka and Mátyás (2001) analyzed a multivariate diagonal FIGARCH model on three FX time series through quite a complex computational procedure. Multivariate modelling of other time series has focused on crude oil returns (Brunetti and Gilbert, 2001), where a bivariate constant-correlation FIGARCH model is fitted to these data to test for fractional cointegration in the volatility of the NYMEX and IPE crude oil markets². To our knowledge, there is no other literature concerned with modelling temporal dependence in the multivariate context.

The previous research makes a valuable contribution to a better understanding of long-run dependence in multivariate volatility. A major shortcoming in applying these approaches in practice, however, lies in the overwhelming computational burden involved, which simply makes the straightforward extension of these methods to large portfolios unfeasible (note that only two or three assets are considered in the empirical applications of these methods). The procedure we shall discuss is specifically designed to remain feasible for large portfolios.

2.1 The orthogonal multivariate model

We first introduce notation and terminology. Consider a portfolio of K financial assets and denote by rt = (r1t, r2t, …, rKt)′, t = 1, …, T, a weakly stationary random vector with each component representing the return of each portfolio asset at time t. Denote by Ft the set of relevant information up to time t, and define the conditional covariance matrix of the process by E(rtr′t | Ft−1) = Et−1(rtr′t) = Ht. Denote by E(rtr′t) = Ω the (finite) unconditional second-order moment of the random vector. Note that only second-order stationarity is required, which is the basic assumption in the literature concerned with estimating covariance matrices of asset returns. Other procedures proposed for estimating the covariance matrix require much stronger assumptions (see, for instance, Ledoit and Wolf, 2003), such as the existence of higher-order moments and even iid-ness of the driving series.

As the covariance matrix Ω is positive definite, it follows from the spectral decomposition that Ω = PΛP′, where P is an orthonormal K×K matrix of eigenvectors, and Λ is a diagonal matrix with the corresponding eigenvalues of Ω on its diagonal. Lastly, assume that the columns of P are ordered by the size of the eigenvalues in Λ, so that the first column is the one associated with the largest eigenvalue, and so on.
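The decomposition and the eigenvalue ordering just described can be sketched in a few lines with numpy; the 3×3 covariance matrix below is an illustrative assumption.

```python
import numpy as np

# Spectral decomposition Omega = P Lambda P', with the eigenvector columns
# reordered so the first corresponds to the largest eigenvalue, as in the text.
Omega = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])

eigvals, P = np.linalg.eigh(Omega)        # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]         # reorder: largest eigenvalue first
eigvals, P = eigvals[order], P[:, order]
Lambda = np.diag(eigvals)

# P is orthonormal and the decomposition recovers Omega exactly.
assert np.allclose(P @ P.T, np.eye(3))
assert np.allclose(P @ Lambda @ P.T, Omega)
```

Using `eigh` (for symmetric matrices) rather than the general eigensolver guarantees real eigenvalues and orthonormal eigenvectors, matching the assumptions on Ω.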

The orthogonal model by Alexander is based on applying principal component analysis (PCA) to generate a set of uncorrelated factors from the original series³. PCA is a well-known method widely used in practice, and several investment consultants, such as Advanced Portfolio Technologies, use procedures based on principal components. The basic strategy in the Alexander model consists of linearly transforming the original data into a set of uncorrelated latent factors, the so-called principal components, whose volatility can then be modelled in the univariate framework. From these estimates, the conditional covariance matrix Ht is easily obtained by inverting the linear transformation.

The set of principal components, yt = (y1t, y2t, …, yKt)′, is simply defined through the linear map yt = P′rt. It follows easily that E(yt) = 0 and E(yty′t) = Λ by the orthogonality of P. Since the columns of P were previously ordered by eigenvalue size, the ordered principal components have a decreasing ability to explain the total variability, with the first components capturing the main sources of variation.
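The full orthogonal pipeline, transform to components, model each component's variance univariately, map back, can be sketched as follows. As a stand-in for a univariate GARCH fit on each component, this sketch uses a simple EWMA variance; the smoothing constant 0.94 and the simulated data are assumptions for illustration only.

```python
import numpy as np

# Orthogonal-model sketch: y_t = P' r_t, univariate variances d_k for each
# component, then H_t = P diag(d) P'. EWMA replaces GARCH purely to keep the
# example short; lam = 0.94 is an assumed RiskMetrics-style constant.
rng = np.random.default_rng(1)
T, K = 500, 4
returns = rng.standard_normal((T, K)) @ np.array([[1.0, 0.5, 0.2, 0.1],
                                                  [0.0, 1.0, 0.4, 0.2],
                                                  [0.0, 0.0, 1.0, 0.3],
                                                  [0.0, 0.0, 0.0, 1.0]])

Omega = returns.T @ returns / T                  # unconditional second moment
eigvals, P = np.linalg.eigh(Omega)
P = P[:, np.argsort(eigvals)[::-1]]              # largest eigenvalue first

y = returns @ P                                  # rows are y_t' = r_t' P

# Univariate conditional variance per component (EWMA stand-in for GARCH).
lam = 0.94
d = np.var(y, axis=0)                            # initialise at sample variance
for t in range(T):
    d = lam * d + (1 - lam) * y[t] ** 2

H_t = P @ np.diag(d) @ P.T                       # inverse map to covariance
assert np.allclose(H_t, H_t.T)
```

Because the K-dimensional problem reduces to K univariate variance models plus one eigendecomposition, this construction scales to large portfolios in a way the full multivariate FIGARCH estimations discussed above do not.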
