Sunday, May 31, 2015

In reality: return distributions

For simplicity and convenience, VaR is usually calculated by assuming that the probability distribution of expected returns takes a particular form, typically normal or log-normal. For example, J.P. Morgan's RiskMetrics (an EWMA approach), which is used to forecast the volatility of portfolio returns, assumes that returns on securities follow a conditionally normal distribution. The variance-covariance approach, historical simulation, and Monte Carlo simulation likewise rest on assumptions about the return distribution.
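As a rough sketch of the EWMA recursion behind RiskMetrics (λ = 0.94 is the standard RiskMetrics decay factor for daily data; the returns below are simulated placeholders, not market data):

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """RiskMetrics-style recursion: var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}^2."""
    var = returns[0] ** 2                  # seed the recursion with the first squared return
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return np.sqrt(var)                    # one-step-ahead volatility forecast

# Illustrative use on simulated daily returns
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0, 0.01, size=500)
print(f"EWMA volatility forecast: {ewma_volatility(daily_returns):.4%}")
```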

However, compared to the normal (bell-shaped) distribution, actual asset returns tend to be fat-tailed, skewed, and time-varying. A fat-tailed distribution is characterized by having more probability weight (observations) in its tails than the normal distribution. A skewed distribution refers, in the context of financial returns, to the observation that declines in asset prices are more severe than increases, in contrast to the symmetry built into the normal distribution. A time-varying (unstable) distribution means that the parameters (e.g., mean, volatility) vary over time with changing market conditions.

Normal Returns                Actual Financial Returns
Symmetrical distribution      Skewed
“Normal” tails                Fat-tailed (leptokurtosis)
Stable distribution           Time-varying parameters
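A quick way to check for these departures in an actual return series is to compute its skewness and excess kurtosis; here is a minimal sketch on simulated stand-in data (a Student-t draw playing the role of fat-tailed returns):

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_t(df=4, size=2000)   # placeholder fat-tailed returns

# A normal sample gives skewness ~ 0 and excess kurtosis ~ 0.
# Real return series typically show negative skew and positive excess
# kurtosis; this symmetric t(4) placeholder will show only the fat tails.
print(f"skewness:        {skew(returns):.3f}")
print(f"excess kurtosis: {kurtosis(returns):.3f}")  # Fisher definition: normal = 0
```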

For example: interest rate distributions are not constant over time.
Interest rate data are collected over 1982-1993 and we plot the daily change in the three-month Treasury rate. We observe that the average change is approximately zero, but the probability mass is greater at both tails. It is also greater at the mean; i.e., the actual mean occurs more frequently than the normal distribution predicts.
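A minimal sketch of this kind of comparison, overlaying a histogram of daily changes with a fitted normal curve (simulated fat-tailed data stands in for the Treasury series, which is not reproduced here):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

rng = np.random.default_rng(2)
changes = 0.05 * rng.standard_t(df=3, size=2500)   # stand-in for daily rate changes

x = np.linspace(changes.min(), changes.max(), 200)
plt.hist(changes, bins=60, density=True, alpha=0.5, label="observed changes")
plt.plot(x, norm.pdf(x, changes.mean(), changes.std()), label="fitted normal")
plt.title("Daily changes vs. fitted normal: fatter tails, taller peak")
plt.legend()
plt.show()
```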


The reason we see fat tails in actual return distributions is that the conditional mean and volatility are time-varying. The contrast is between returns that are unconditional and normally distributed, and returns whose distribution is conditional on some economic, market, or other state. However, given the assumption that markets are efficient, a time-varying conditional mean is disputed by some authors.
The implication of heavy tails is that Value at Risk is underestimated.

For example: if the normal distribution puts VaR at -10% at the 95% confidence level but the actual distribution is fat-tailed, then the VaR loss estimate is understated; the true loss at that confidence level is larger.
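A small sketch of this effect, comparing the normal quantile with a variance-matched Student-t as a simple stand-in for a fat-tailed distribution (the effect is clearest deep in the tail, so the 99% level is used):

```python
import numpy as np
from scipy.stats import norm, t

sigma = 0.02                     # assumed daily volatility (illustrative)
conf = 0.99

var_normal = -norm.ppf(1 - conf) * sigma

# Student-t with 4 degrees of freedom, rescaled so its standard deviation
# also equals sigma (the std of a t(df) variable is sqrt(df / (df - 2)))
df = 4
var_t = -t.ppf(1 - conf, df) * sigma / np.sqrt(df / (df - 2))

print(f"normal 99% VaR:     {var_normal:.2%}")   # ~4.65%
print(f"fat-tailed 99% VaR: {var_t:.2%}")        # ~5.30%: normal VaR understates the loss
```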

Monday, May 25, 2015

Value at Risk (VaR)

# Materials from the first section, explaining the basic calculation of Value at Risk (assuming a normal distribution), are adapted from MIT OpenCourseWare.

The second and third sections will explain how asset return distributions tend to deviate from the normal distribution.

It shall then cover the following:

a) Compare, contrast, and calculate parametric and non-parametric approaches for estimating conditional volatility, including hybrid methods.
b) Explain the process of return aggregation in the context of volatility forecasting methods, and explain how implied volatility can be used to predict future volatility.
c) Explain long-horizon volatility/VaR and the process of mean reversion according to an AR(1) model.
d) Explain and give examples of linear and non-linear derivatives.
e) Explain how to calculate VaR for linear derivatives, and describe the delta-normal approach to calculating VaR for non-linear derivatives.
f) Explain the full revaluation method for computing VaR.
g) Explain structural Monte Carlo, stress testing, and scenario analysis methods for computing VaR.

VaR has often been called the 'new science of risk management', as it tells us the odds of losing money. VaR measures the potential loss that could occur on an investment or a portfolio of investments over a given time period.

Various methods used in calculating VaR

a) One-asset VaR
i) Price based instruments
ii) Yield based instruments
b) Variance/Covariance
c) Monte Carlo Simulation 
d) Historical Simulation 

An example of why knowing the volatility and Value at Risk is important: I will try to clarify the concept by grossly simplifying a balance sheet.

We know from the accounting equation that assets equal equity plus liabilities. That means whatever we own is financed partly by what we paid from our own pocket and partly by what we borrowed. Now, if the assets the company paid for out of its own pocket lose their value and we keep losing money in the market, it becomes a troubling situation. With VaR we are trying to protect the money paid from our own pocket by knowing how much the market could plausibly move against us. This tells us how much capital we need to support the position.



Methodology in a nutshell:
We have a frequency distribution of market returns (percentage changes in the index). Assuming the returns distribution is normal, we first calculate the variance of the returns and then the standard deviation. Next, we measure the 1% worst-case outcome by integrating the normal density from negative infinity to negative 2.33 standard deviations; in practice that means multiplying the standard deviation by 2.33. Similarly, for the 5% worst-case outcome we integrate from negative infinity to negative 1.645 standard deviations.

(Remember the t-table, with degrees of freedom on the left side and probability at the top, to get those 2.33 and 1.645 values? They are the standard normal quantiles, found in the df = ∞ row.)
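A minimal sketch of the calculation just described (the returns below are simulated placeholders; the multipliers are recovered as standard normal quantiles rather than hard-coded):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
returns = rng.normal(0.0, 0.018, size=500)   # placeholder daily returns

sigma = returns.std()                        # volatility: std dev of returns

z99 = -norm.ppf(0.01)                        # about 2.33
z95 = -norm.ppf(0.05)                        # about 1.645

print(f"99% VaR: {z99 * sigma:.4%}")
print(f"95% VaR: {z95 * sigma:.4%}")
```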

(I shall elaborate about calculation with market specific example in the next section)



Calculating VaR from realistic data

First, let's graph an index from the Mexican market; the observations are daily, from 01/1995 to 12/1996. We observe that the financial time series follows a random walk (a mathematical formalization of a path that consists of a succession of random steps). Because a random walk is not bounded, predicting the future path is difficult if we focus only on the levels. So we use the returns (percentage changes in the index) instead, and when we graph the frequency distribution of returns we get a (somewhat) normal curve. The advantage of the normal curve is that we automatically know where the worst 5% and 1% outcomes lie on the curve: they are a function of our desired confidence level and the standard deviation.

(The x-axis is the returns and the y-axis is the frequency of the returns.)

Once we have a time series of returns, we can gauge their dispersion with a measure called variance. Variance is calculated by subtracting the average return from each individual return, squaring that figure, summing the squares across all observations, and dividing the sum by the number of observations. The square root of the variance is called the standard deviation, or volatility. In a normal distribution, 2.33 times the standard deviation is the move that will not be exceeded 99% of the time (1.645 times the standard deviation for 95%).
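A short sketch of the steps just described, from index levels to returns to variance and VaR (the index levels here are made-up placeholders, not the Mexican series):

```python
import numpy as np

# Placeholder index levels; in the post these come from the Mexican market series
levels = np.array([100.0, 101.2, 100.5, 102.0, 101.1, 103.4, 102.8])

returns = np.diff(levels) / levels[:-1]    # percentage change in the index

# Variance exactly as described: average of squared deviations from the mean return
mean_r = returns.mean()
variance = ((returns - mean_r) ** 2).sum() / len(returns)
volatility = np.sqrt(variance)

print(f"99% VaR: {2.33 * volatility:.4%}")
print(f"95% VaR: {1.645 * volatility:.4%}")
```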

We find that the variance and standard deviation of returns from January 1995 to December 1996 were 0.000324 and 0.018012 respectively. Multiplying 0.018012 by 2.33 (for the 99% confidence level) gives 4.1968%, which lets us conclude that we could expect to lose no more than 4.1968% of the value of our position 99% of the time.
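To verify the arithmetic with the reported figures:

```python
sigma = 0.018012                        # reported standard deviation
print(f"99% VaR: {2.33 * sigma:.4%}")   # prints 4.1968%
```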