Monday, March 30, 2015

Estimating Volatilities and Correlation

I have skipped the basic econometrics lesson and focused directly on how historical data and various weighting schemes can be used to estimate volatility. Although it is very basic, for your easy reference let me brief the main points this post will cover:

a) Explain how historical data and various weighting schemes can be used in estimating volatility.

b) Describe the exponentially weighted moving average (EWMA) model for estimating volatility and its properties. 

c) Describe the generalized autoregressive conditional heteroscedasticity [GARCH(p,q)] model for estimating volatility and its properties.

d) Estimate volatility using the GARCH(p,q) model.

e) Explain mean reversion and how it is captured in the GARCH (1,1) model. 

f) Explain how GARCH models perform in volatility forecasting.

g) Describe how correlations and covariances are calculated, and explain the consistency conditions for covariances.


How to Estimate Volatility
Volatility is instantaneously unobservable. In general, our basic choice is to either infer an implied volatility (based on an observed market price) or estimate the current volatility from a historical series of returns. We will use a historical series of returns to compute historical volatility. There are two broad steps to computing historical volatility:

1. Compute the series of periodic (e.g., daily) returns;
2. Choose a weighting scheme (to translate the series into a single metric).

1) Compute the series of periodic returns (e.g., 1 period = 1 day)

In many cases, we assume one period equals one day. In this case, we are estimating a daily volatility. We can either compute the "Continuously compounded daily return" or "the simple percentage change." If Si-1 is yesterday's price and Si is today's price,

Continuously compounded return (aka, log return): u(i) = ln(S(i) / S(i−1))

The simple percentage return is given by: u(i) = (S(i) − S(i−1)) / S(i−1)
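The two return definitions above can be sketched in a few lines of Python (the price series here is hypothetical, purely for illustration):

```python
import math

# Hypothetical closing prices S_0 ... S_4
prices = [100.0, 101.5, 100.8, 102.3, 101.9]

# Continuously compounded (log) returns: u_i = ln(S_i / S_{i-1})
log_returns = [math.log(s / s_prev)
               for s_prev, s in zip(prices, prices[1:])]

# Simple percentage returns: u_i = (S_i - S_{i-1}) / S_{i-1}
simple_returns = [(s - s_prev) / s_prev
                  for s_prev, s in zip(prices, prices[1:])]
```

For small daily moves the two definitions give nearly identical numbers; the log return is the more common choice because log returns add across periods.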

2) Choose a weighting scheme

The series can be either un-weighted (each return is equally weighted) or weighted. A weighting scheme puts more weight on recent returns because they tend to be more relevant: yesterday's return says more about today's volatility than a return from further back, so it deserves the larger weight.

The "standard" un-weighted (or equally weighted) scheme

The un-weighted (really, equally weighted) variance is the "standard" historical variance. In this case, the variance is given by:

σ²(n) = [1/(m−1)] × Σ(i=1..m) [u(n−i) − ū]²

Note: Some authors simplify the formula by replacing (m−1) with m and assuming the conditional mean ū is zero. Both approaches are acceptable.
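The standard estimator, including the simplified variant mentioned in the note, can be sketched as follows (the function name and return series are my own, for illustration):

```python
def standard_variance(returns, unbiased=True):
    """Equally weighted historical variance of a return series.

    unbiased=True: divide by (m - 1) and subtract the sample mean.
    unbiased=False: divide by m and assume the mean return is zero,
    the simplification some authors use.
    """
    m = len(returns)
    if unbiased:
        mean = sum(returns) / m
        return sum((u - mean) ** 2 for u in returns) / (m - 1)
    return sum(u ** 2 for u in returns) / m

daily_returns = [0.01, -0.004, 0.015, -0.002]  # hypothetical
var = standard_variance(daily_returns)
vol = var ** 0.5  # daily volatility is the square root of variance
```

With only a handful of observations the two variants differ noticeably; over a long window the difference is usually negligible.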

The weighted scheme (a better approach, generally)

The simple historical approach does not apply different weights to the returns (put another way, it gives each return equal weight). But we generally prefer to apply greater weights to more recent returns:

σ²(n) = Σ(i=1..m) α(i) × u²(n−i)

The alpha parameters are simply weights; the sum of the alpha parameters must equal one precisely because they are weights.
The glaring flaw in a simple historical volatility (i.e., an un-weighted or equally weighted variance) is that the most distant return gets the same weight as yesterday's return.
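The weighted scheme can be sketched as below; the linearly declining weights are an arbitrary example of my own, not a prescribed choice:

```python
def weighted_variance(returns, weights):
    """Weighted historical variance: sigma^2 = sum(alpha_i * u_i^2).

    returns[0] is the most recent return; the weights must sum to one,
    with larger weights assigned to more recent returns.
    """
    assert abs(sum(weights) - 1.0) < 1e-12, "weights must sum to one"
    return sum(a * u ** 2 for a, u in zip(weights, returns))

# Hypothetical: most recent return first, linearly declining weights
returns = [0.02, 0.01, -0.015, 0.005]
weights = [0.4, 0.3, 0.2, 0.1]
var = weighted_variance(returns, weights)
```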

We can now add another factor to the model: the long-run average variance rate. The idea here is that variance is mean-reverting: think of the variance as having a "gravitational pull" towards its long-run average. We add another term to the equation above in order to capture the long-run average variance. The added term is the weighted long-run variance.

The added term is gamma (the weight) multiplied by the long-run variance, VL, because the long-run variance, like the returns, enters as a weighted factor:

σ²(n) = γ × VL + Σ(i=1..m) α(i) × u²(n−i)

This is known as an ARCH(m) model. Often omega (which looks like a 'w' to the non-statistician) replaces the first term. So here is the reformatted ARCH(m) model:

σ²(n) = ω + Σ(i=1..m) α(i) × u²(n−i)

This is the same ARCH(m); only the product of gamma and the long-run variance has been replaced by a single constant, omega (ω = γ × VL).

Why does this matter? Because you may see GARCH(1,1) represented with a single constant (i.e., the omega term), and you want to realize that the constant is not the long-run variance itself; the constant is the product of the long-run variance and its weight.

EWMA model for estimating volatility and its properties

In the exponentially weighted moving average (EWMA) model, the weights decline in constant proportion (given by lambda), and the weights sum to one (1). The exponentially weighted moving average (EWMA) is given by the infinite series:

σ²(n) = (1 − λ) × Σ(i=1..∞) λ^(i−1) × u²(n−i)

The ratio between any two consecutive weights is constant: lambda (the symbol that looks like an inverted V with a feather :) ). The most recent return gets the greatest weight; for each additional day back, one extra factor of lambda multiplies the weight. Since lambda is assumed to be less than one, less and less weight is assigned to returns further in the past.

The infinite series elegantly reduces to the recursive EWMA:

σ²(n) = λ × σ²(n−1) + (1 − λ) × u²(n−1)

For example, if lambda = 0.94, the current variance estimate puts 94% weight on the previous period's variance estimate, and 6% (= 1 − 0.94) weight on the most recent squared return. So, intuitively, if the most recent return is abnormally large, it will pull up the estimate.

The RiskMetrics variance model developed by J.P. Morgan is just a branded version of EWMA (it uses lambda = 0.94 for daily data).
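The recursive EWMA update is a one-liner; here is a sketch using the RiskMetrics daily lambda of 0.94 (the starting variance and returns are hypothetical):

```python
def ewma_update(prev_variance, prev_return, lam=0.94):
    """EWMA recursion: sigma_n^2 = lam * sigma_{n-1}^2 + (1 - lam) * u_{n-1}^2."""
    return lam * prev_variance + (1.0 - lam) * prev_return ** 2

# Start from an initial variance estimate and update day by day
variance = 0.0001                 # initial daily variance (1% volatility)
for u in [0.005, -0.02, 0.01]:    # hypothetical daily returns
    variance = ewma_update(variance, u)

volatility = variance ** 0.5
```

Note how the large -2% return on the second day pulls the estimate up, exactly the "abnormally large return" effect described above.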

The generalized autoregressive conditional heteroscedasticity (GARCH(p,q)) model for estimating volatility and its properties; how to estimate volatility using the GARCH model.

The EWMA is a special case of GARCH(1,1) where gamma = 0 and alpha + beta = 1 (specifically, alpha = 1 − λ and beta = λ). GARCH(1,1) is the weighted sum of the long-run variance (weight = gamma), the most recent squared return (weight = alpha), and the most recent variance estimate (weight = beta):

σ²(n) = γ × VL + α × u²(n−1) + β × σ²(n−1)

This GARCH(1,1) is a case of the ARCH(m): the first term is a constant (i.e., the weighted long-run variance), and the second and third terms recursively give exponentially decreasing weights to the historical series of returns. Also, alpha + beta is less than one (1).

In GARCH(1,1) we are saying that there exists a long-run (unconditional) variance, which acts as the gravitational pull. So we now have an additional dynamic: we have moved from two weights to three weights.

In GARCH(1,1), gamma multiplied by the long-run variance is denoted omega:

σ²(n) = ω + α × u²(n−1) + β × σ²(n−1)

Omega can be pretty confusing: we tend to think that omega is the long-run variance, but it is the weight multiplied by the long-run variance. What that means is that the long-run variance is:

VL = ω / (1 − α − β)

(Remember that in GARCH(1,1), alpha + beta is less than one. So whatever remains as the residual, 1 − alpha − beta, is the weight of the long-run variance.)

The way we fit GARCH(1,1) is the method of maximum likelihood estimation (a linear regression can also serve as a rough alternative). In maximum likelihood methods, we choose the parameters that maximize the likelihood of the observations occurring.
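A minimal sketch of the maximum-likelihood objective, assuming normally distributed returns: given candidate parameters (omega, alpha, beta), build the GARCH(1,1) variance series and score it; an optimizer would then search for the parameters that maximize this score. The function name, parameter values, and return series below are all hypothetical.

```python
import math

def garch_log_likelihood(returns, omega, alpha, beta, initial_variance):
    """Gaussian log-likelihood of a return series under GARCH(1,1).

    Builds the variance recursion
        sigma_n^2 = omega + alpha * u_{n-1}^2 + beta * sigma_{n-1}^2
    and sums the per-observation terms -ln(sigma^2) - u^2 / sigma^2
    (additive constants dropped, as they do not affect the argmax).
    """
    variance = initial_variance
    log_lik = 0.0
    for u in returns:
        log_lik += -math.log(variance) - u ** 2 / variance
        variance = omega + alpha * u ** 2 + beta * variance
    return log_lik

returns = [0.005, -0.012, 0.003, 0.02, -0.008]   # hypothetical
ll = garch_log_likelihood(returns, omega=7e-6, alpha=0.12, beta=0.77,
                          initial_variance=1e-4)
```

In practice this objective would be handed to a numerical optimizer, with the constraints alpha + beta < 1 and omega, alpha, beta > 0.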

GARCH models can be used to forecast volatility because of that gravitational pull: the further out we go, the greater the weight on the unconditional long-run variance, and the more we can expect the forecast to equal the long-run variance.

So the specific formula here becomes:

E[σ²(n+k)] = VL + (α + β)^k × (σ²(n) − VL)

That is, the estimate of the variance (n+k) days out equals the unconditional long-run variance plus (alpha + beta) raised to the power k, multiplied by the difference between the current variance and the long-run variance.

For example: let alpha and beta be 0.12 and 0.77 respectively (this means the weight assigned to the most recent squared return is 12% and the weight on the most recent variance is 77%), and let omega be 0.000007.

If we calculate the long-run variance we get VL = 0.000007 / (1 − 0.12 − 0.77) = 0.0000636, so the long-run volatility is √0.0000636 ≈ 0.007977, or about 0.80% per day.

Now, let us forecast the volatility ten days forward, assuming the current daily volatility estimate is 2.27% (so the current variance is 0.0227² = 0.000515):

E[σ²(n+10)] = 0.0000636 + (0.12 + 0.77)^10 × (0.000515 − 0.0000636) ≈ 0.000204

So the expected volatility ten days forward is √0.000204 ≈ 0.0143, or about 1.43% per day: the forecast has been pulled partway from the current 2.27% towards the long-run 0.80%.
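The forward-forecast formula can be checked in a few lines; the parameters below follow the example (alpha = 0.12, beta = 0.77, omega = 0.000007), and the 2.27% current daily volatility is an assumption for illustration:

```python
def garch_forecast_variance(current_variance, long_run_variance,
                            alpha, beta, k):
    """E[sigma_{n+k}^2] = V_L + (alpha + beta)^k * (sigma_n^2 - V_L)."""
    return long_run_variance + (alpha + beta) ** k * (
        current_variance - long_run_variance)

alpha, beta, omega = 0.12, 0.77, 0.000007
v_long = omega / (1 - alpha - beta)    # long-run (unconditional) variance
current_var = 0.0227 ** 2              # assumed current daily vol of 2.27%

forecast_var = garch_forecast_variance(current_var, v_long, alpha, beta, k=10)
forecast_vol = forecast_var ** 0.5     # about 1.43% per day
```

As k grows, (alpha + beta)^k shrinks toward zero and the forecast converges to the long-run variance, which is the "gravitational pull" in formula form.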

We can also use EWMA to update the covariance between two return series x and y:

cov(n) = λ × cov(n−1) + (1 − λ) × x(n−1) × y(n−1)

For the resulting covariances to be internally consistent, the implied variance-covariance matrix must be positive semi-definite.
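The covariance update mirrors the variance update; a sketch (initial values and returns are hypothetical):

```python
def ewma_covariance_update(prev_cov, x_return, y_return, lam=0.94):
    """EWMA covariance recursion:
    cov_n = lam * cov_{n-1} + (1 - lam) * x_{n-1} * y_{n-1}."""
    return lam * prev_cov + (1.0 - lam) * x_return * y_return

cov = 0.00005                                    # initial covariance estimate
cov = ewma_covariance_update(cov, 0.01, 0.008)   # one day's pair of returns
# The correlation can then be recovered as cov / (sigma_x * sigma_y),
# where sigma_x and sigma_y come from the corresponding EWMA variance updates.
```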