# CFA Level III, here I come!

Good evening everyone!

I know it’s been a long time since I last posted on the site, and I would like to apologize for not adding new content since the end of May.

Quite frankly, the reason I’ve been away from the blog for so long is twofold. First, I am planning my wedding for the end of the year and it is taking up most of my spare time; for those of you who are married, you know what I mean. Second, I had a very bad feeling when I came out of the Level II exam and was actually quite disappointed. So, I decided to take a break, to focus on the wedding before heading off on my summer holidays, and to wait for the results. In the end, once the exam is done, there is nothing you can do about it.

Tuesday, late in the afternoon, I finally got the crucial CFA e-mail, which notified me that I had passed Level II along with 43% of that level’s candidates. I literally jumped out of my chair as if I had scored a goal in the Champions League final. Let’s face it, I most certainly had a bit of luck here.

Looking at my “detailed” results (which, as you well know, I’m not allowed to share), I then realized that my work had paid off. Although I did not post about the accounting part, because I don’t feel expert enough to really publish something about it, I paid a lot of attention to this section of the curriculum. And it paid off. As a matter of fact, this topic carries so much weight in the exam that you can definitely put yourself in a nice position by being confident on the classic exercises. I practiced a lot using Schweser’s QBank and practice exams. In the real exam, I felt the accounting questions were not that vicious, and hence I managed to score high there. So, for those of you taking Level II next June, spend a lot of time on accounting if you’re not very good at it. The material is quite huge, but as you practice it will start making sense, and quite surprisingly there is not that much to learn by heart: it is more logical than it might seem at first glance.

Free cash flows are also a major topic of Level II. Again, when you look at the formulas you might get scared at first glance. However, you should realize that by learning the main formula, you can derive all the others without learning them by heart. Even this basic formula might seem quite complicated at the beginning, but it really makes sense once you get used to it, and I found it quite easy to apply once you understand the rationale behind it.
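To illustrate what I mean by deriving the variants from one main formula, here is a small Python sketch based on the standard free-cash-flow relations: FCFF computed from net income, then FCFE derived from FCFF. The function names and figures are mine, purely for illustration:

```python
def fcff_from_ni(ni, ncc, interest, tax_rate, fc_inv, wc_inv):
    """FCFF = NI + NCC + Int * (1 - t) - FCInv - WCInv."""
    return ni + ncc + interest * (1 - tax_rate) - fc_inv - wc_inv

def fcfe_from_fcff(fcff, interest, tax_rate, net_borrowing):
    """Derived, not memorized: FCFE = FCFF - Int * (1 - t) + net borrowing."""
    return fcff - interest * (1 - tax_rate) + net_borrowing

# Made-up figures, purely for illustration:
fcff = fcff_from_ni(ni=100, ncc=20, interest=10, tax_rate=0.30,
                    fc_inv=30, wc_inv=5)
fcfe = fcfe_from_fcff(fcff, interest=10, tax_rate=0.30, net_borrowing=15)
```

The point is that only the FCFF definition needs to be learnt; FCFE follows by adding back what belongs to equity holders.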

Economics and Alternative Investments were, in my opinion, the most complicated parts of the curriculum to master. In fact, I believe both these topics require a lot of material to be learnt by heart if you’re not practicing them in your daily job. And I hate learning stuff by heart. Derivatives are of the same kind, but with my quantitative finance academic background, I already knew all the material. Again, if you’re not familiar with derivatives, you will have to grasp a lot of concepts which take time to get accustomed to.

All in all, I would say that the basic strategy is really not to give up on any topic of the curriculum. Of course, there will be parts of the material you will not be comfortable with on exam day, but you should definitely have an idea of what’s going on everywhere. If you can’t master something, at least grasp the global picture. This is key because it will, most of the time, allow you to discard one of the three possible answers. Once this is done, even if you choose randomly between the remaining two answers, your probability of success goes up drastically, to 50% instead of the initial 33%. Plus, don’t forget that item sets are composed of questions of various difficulties; being able to score on the easy questions is critical, as it can considerably improve your overall result. Remember that if you get battered in a particular topic, it probably won’t matter what you’ve done in the rest: you will fail the exam. The passing criteria are not public, and I suspect there is a minimum score for each topic.

To sum up, I’m really happy to be done with Level II because I heard it was the most difficult of the three for candidates with my profile. I must say that I won’t be disappointed not to have Economics and Financial Accounting in the Level III material, but I will have to work extra hard on my writing skills, as Level III’s morning session consists of writing small essays… not of answering item sets.

I’ll be back soon with more posts on something else than the CFA!

Cheers,

Jeremie

# CFA Level II: Valuing Bonds with Embedded Options

Good evening everybody,

Tonight’s focus will be dedicated to valuing bonds with embedded options. I chose this topic because it involves binomial trees, which are an important and very testable point of the Level II curriculum. Binomial trees are encountered in two different topics: Fixed Income and Derivatives. Although the concept is the same in both instances, the main difference is that in Fixed Income the interest rate tree is given, whereas in Derivatives you have to construct the tree of stock values yourself. Besides, in interest rate trees, the probability of going to the up or down node is 0.5. For options on stocks, you have to determine the risk-neutral probabilities.

For simple bonds with no embedded options, at each node you write:

• The current price of the bond for the remaining payments (including face value)
• The coupon at that node
• The interest rate (more specifically the forward rate) at that node (which is given).

You work through the interest rate tree using backward induction, i.e. you start with the nodes at the right of the tree (the ones furthest away from now), where you know the price: the face value. It’s the only place where you know what the price will be. At any node before the final layer, you compute the price of the bond as the average (because the probability of ups and downs is 0.5) of the present values of the prices at the two following nodes. To compute the present value, you use the forward rate given at the current node. Therefore, there are no forward rates given for nodes at the final layer. Applying this process sequentially for each node from the right to the left of the tree, you end up computing the initial node’s value: the price of the bond today.
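To make the backward-induction mechanics concrete, here is a minimal Python sketch. The two-period tree, coupon, and face value below are made-up illustration numbers, not from the curriculum:

```python
def price_on_tree(rate_tree, coupon, face):
    """Backward induction on a binomial forward-rate tree.

    rate_tree[t] lists the (given) forward rates at layer t;
    layer t has t + 1 nodes and up/down probabilities are both 0.5.
    """
    n = len(rate_tree)             # number of periods
    values = [face] * (n + 1)      # final layer: face value (coupon added below)
    for t in range(n - 1, -1, -1):
        # Average of the two child values (each plus its coupon), discounted
        # at the forward rate of the current node.
        values = [0.5 * ((values[i] + coupon) + (values[i + 1] + coupon))
                  / (1 + rate_tree[t][i])
                  for i in range(t + 1)]
    return values[0]

# Illustrative 2-period annual tree: 3% today, then 4% (up) or 2% (down).
tree = [[0.03], [0.04, 0.02]]
price = price_on_tree(tree, coupon=5, face=100)   # a 5% annual coupon bond
```

Running this gives a price a bit above par, as you would expect for a 5% coupon against rates of 2–4%.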

Because the interest rate trees are built to be arbitrage-free, they are calibrated such that the benchmark security’s price you get using the tree equals its market price.

So if you price a bond which is different from the benchmark, you will get a different value. This is where spreads come into play. Nominal spreads are computed using the YTM: you compute the constant yields implied by the market prices of the bond and of its benchmark, and take the difference between them. However, this is not a good measure, as it assumes the yield curve is flat. The Z-spread, for zero-volatility spread, is the spread that must be added to each of the term structure’s rates to make the bond’s computed price equal to its market price. You can do exactly this on the tree by adding the Z-spread to each node’s rate (you adjust the value added to each node’s rate, by trial and error, until you find the right one).
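The trial-and-error search can be sketched in Python as a simple bisection over the spread added to every node rate. The tree, coupon, and market price are made-up numbers for illustration:

```python
def tree_price(rate_tree, coupon, face, spread=0.0):
    """Backward induction with a constant spread added to every node rate."""
    values = [face] * (len(rate_tree) + 1)
    for t in range(len(rate_tree) - 1, -1, -1):
        values = [0.5 * ((values[i] + coupon) + (values[i + 1] + coupon))
                  / (1 + rate_tree[t][i] + spread)
                  for i in range(t + 1)]
    return values[0]

def solve_spread(rate_tree, coupon, face, market_price, lo=0.0, hi=0.10):
    """Bisection on the spread: the computed price falls as the spread rises."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if tree_price(rate_tree, coupon, face, mid) > market_price:
            lo = mid    # computed price still too high: widen the spread
        else:
            hi = mid
    return (lo + hi) / 2

tree = [[0.03], [0.04, 0.02]]
z = solve_spread(tree, coupon=5, face=100, market_price=102.5)
```

Because price is monotonically decreasing in the spread, bisection converges quickly; by hand you would do the same thing with a couple of guesses.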

For bonds with embedded options, you do exactly the same as for the option-free bond, but at the nodes where the option can be exercised, you compute the price using the option’s exercise rule. For example, for a bond callable at 100 in 2 years, if you compute that the price at a year-2 node is 101.5, then you have to use the price $\min(101.5,100)=100$.
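A hedged sketch of the min rule in Python, assuming (as a simplification of my own) that the call is exercisable at every intermediate node of the tree:

```python
def price_callable(rate_tree, coupon, face, call_price):
    """Backward induction with the call rule: wherever the issuer can call,
    the node value is capped at min(computed value, call price)."""
    values = [face] * (len(rate_tree) + 1)
    for t in range(len(rate_tree) - 1, -1, -1):
        values = [0.5 * ((values[i] + coupon) + (values[i + 1] + coupon))
                  / (1 + rate_tree[t][i])
                  for i in range(t + 1)]
        if t > 0:   # assumption: callable at every intermediate date
            values = [min(v, call_price) for v in values]
    return values[0]

tree = [[0.03], [0.04, 0.02]]
# The call cap makes the callable bond cheaper than the straight bond.
callable_price = price_callable(tree, coupon=5, face=100, call_price=100)
```

With the same illustrative tree as before, the cap binds at both year-1 nodes, so the callable bond is worth less than its option-free twin.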

Recall the following for bonds with embedded options:

$$V_\text{callable}=V_\text{option-free bond} - V_\text{call}$$

$$V_\text{putable}=V_\text{option-free bond} + V_\text{put}$$

The thing is, the market value of a bond with an embedded option will often be different from its theoretical, arbitrage-free value. The reason for this comes mainly from the fact that the interest rate tree assumes some interest rate volatility. Remember the following very important facts:

• Option-free bond prices are unaffected by interest rate volatility; they are priced as of today and that’s it.
• The price of the embedded option (call or put) is positively related to interest rate volatility.

When we value a bond with an embedded option, we essentially compute the value of its cash flows assuming the tree’s interest rate volatility. In a sense, we price the option feature of the bond under that volatility assumption. However, the market can change its expectation of interest rate volatility, and hence the price is not the same anymore: although the option-free bond value has not changed, the value of the embedded option is different.

Similarly to the Z-spread, if you find the value you have to add to each node’s rate to get the same value from the tree as the market value, you get the OAS, the option-adjusted spread, which is the spread of the bond with its option feature “removed”. You have to use this spread to compare bonds with embedded options to bonds without embedded options (such as their respective benchmarks), or even bonds with embedded options to each other. The price of the option relates to interest rate volatility, not to the credit risk and liquidity risk of the bond; it should hence be removed to compare the inherent quality of fixed income securities.

That’s it for this post.

# CFA Level II: Forward Markets and Contracts

Good evening,

A few days away from the exam, I am taking a bit of time to post the main picture of some topics in the curriculum which I think can be explained simply. This post is dedicated to forwards.

The rationale behind forwards is very simple. Assume there is an asset $S$ which is worth $S_0$ today. You want to enter a contract with somebody, agreeing to buy the asset at time $T$ at the forward price $FP$. This might seem a bit complicated at first glance for people not familiar with finance, mainly because you do not know what $S_T$ (the price of $S$ at time $T$) will be. Well, the truth of the matter is that… it doesn’t matter. Indeed, there is a way to replicate buying the stock at time $T$ by doing the following:

1. At time $t=0$:
   1. Borrow $S_0$ at the current rate $R$
   2. Buy the asset for $S_0$
2. At time $t=T$:
   1. Repay what you borrowed at time 0 with interest, i.e. $S_0 \cdot (1+R)^T$
   2. Keep the stock, worth $S_T$.

So, the net investment at $t=0$ is 0, and at $t=T$, you have the stock and you pay $S_0 \cdot (1+R)^T$. This is exactly the same thing as buying the stock forward. Hence, you can deduce that:

$$FP=S_0 \cdot (1+R)^T$$

This result follows from the law of one price, and we call it an arbitrage argument because if the price were any different from the one stated above, then you could make an instant risk-free profit by applying the strategy previously stated (or its opposite).
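The cash-and-carry argument can be checked numerically with a tiny sketch (the function names are mine; the numbers are illustrative):

```python
def forward_price(s0, r, t):
    """No-arbitrage forward price: the spot compounded at the risk-free rate."""
    return s0 * (1 + r) ** t

def arbitrage_profit(quoted_fp, s0, r, t):
    """Riskless profit per unit when the quote deviates from the fair forward:
    sell forward if the quote is too high, buy forward if it is too low."""
    return abs(quoted_fp - forward_price(s0, r, t))

fp = forward_price(100, 0.10, 1)             # fair 1-year forward on a 100 spot
profit = arbitrage_profit(115, 100, 0.10, 1) # free lunch if someone quotes 115
```

Either side of a mispriced quote yields the same riskless profit, which is exactly why the fair price must be $S_0 (1+R)^T$.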

For example, assume a stock is worth $S_0 = 100\$$ today, that the interest rate is $R=10\%$, and that you want to buy the stock forward in 1 year. Then $FP=100 \cdot (1+10\%)=110$. If somebody is willing to buy it forward for 115, enter the forward contract as a seller, thus agreeing to sell $S$ for 115 in a year. Borrow 100 today, buy the stock, hold it, repay your loan plus interest in a year for 110, and deliver the stock to the counterparty for the agreed 115. You get a free lunch of $115-110=5$. If somebody wants to sell the stock forward for 105, you should enter the contract as a buyer. Sell the stock short for 100 today, invest the proceeds at the interest rate, collect 110 from the investment in a year, and buy back the stock as agreed for 105. You make a free lunch of $110-105=5$.

That’s it. It’s easy. The forward price is simply today’s price invested at the interest rate. What we just did implies that the value today of a forward contract is 0, by definition. However, the value of the contract will evolve between time 0 and expiration $T$. Clearly, at expiration, the value of the contract is given by:

$$V_T=S_T-FP$$

When time $t$ is between 0 and $T$, we get the following result:

$$V_t = S_t - \frac{FP}{(1+R)^{T-t}}$$

This is quite logical, and you can always check that the value at $t=0$ is 0:

$$V_0=S_0 - \frac{FP}{(1+R)^{T-0}}=S_0 - \frac{S_0 \cdot (1+R)^T}{(1+R)^T}=0$$

And for $t=T$:

$$V_T=S_T - \frac{FP}{(1+R)^{T-T}}=S_T - FP$$

Notice that this is the value for the long side, i.e. for the person agreeing to buy the asset at expiration for the forward price. Because derivatives are zero-sum games, the value of the short side is the opposite of the value of the long side.

That’s it. I’ll come back with variants later.

# CFA Level 2: Quantitative Methods – Autoregressive Processes

Hello again everybody,

We’re getting towards the final stretch before the exam, and I will post here the content of all the little flash cards that I created for myself.
Starting back where I left off, in Quantitative Methods, this post will be about Autoregressive Processes, sometimes denoted AR. These processes are of the following form:

$$x_{t+1} = b_0 + b_1 x_t + \epsilon_{t+1}$$

where $x_t$ is the value of the process $X$ at time $t$, and $b_0$ and $b_1$ are the parameters we are trying to estimate. To estimate them, we proceed as follows:

1. Estimate the parameters using linear regression.
2. Calculate the auto-correlations of the residuals.
3. Test whether these auto-correlations are significant. Note that we cannot use the Durbin-Watson test we used previously in this section of the CFA curriculum; we use a t-test that works this way:

$$t = \frac{\rho_{\epsilon_t, \epsilon_{t+k}}}{1/\sqrt{T}}=\rho_{\epsilon_t, \epsilon_{t+k}} \sqrt{T}$$

where $\epsilon_t$ is the residual term of the regression at time $t$, and $T$ is the number of observations. The t-statistic has $T-2$ degrees of freedom. If the auto-correlations are statistically significant, then we cannot continue our analysis, for reasons I’ll explain a bit later in this post.

With AR processes, you are actually trying to predict the next values of a given process using a linear relationship between successive values, by applying simple linear regression. The thing is, if you want to be able to trust your estimated $b_0$ and $b_1$ parameters, you need the process to be covariance-stationary.

Now, a bit of math. If a process has a finite mean-reverting level, then it is covariance-stationary. What is the mean-reverting level? Well, it is simply the value $x_t$ at which $x_{t+1}=x_t$. So, let’s write this as an equation:

$$x_{t+1} = x_t = b_0 + b_1 x_t \iff (1-b_1) x_t = b_0 \iff x_t=\frac{b_0}{1-b_1}$$

So, $X$ is covariance-stationary if $b_1 \neq 1$. The test for auto-correlations we did in point 3 guarantees that the process is covariance-stationary if the auto-correlations are not statistically significant.

What if the process $X$ is not covariance-stationary?
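Before answering that, here is a small pure-Python sketch of the estimation recipe above: an OLS fit of the AR(1) relation, the residual auto-correlation t-statistic, and the mean-reverting level. The series and helper names are mine, for illustration only:

```python
import math

def fit_ar1(x):
    """OLS fit of x[t+1] = b0 + b1 * x[t]; returns (b0, b1, residuals)."""
    xs, ys = x[:-1], x[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
          / sum((a - mx) ** 2 for a in xs))
    b0 = my - b1 * mx
    resid = [b - (b0 + b1 * a) for a, b in zip(xs, ys)]
    return b0, b1, resid

def autocorr_t_stat(resid, k):
    """t = rho_k * sqrt(T) for the lag-k auto-correlation of the residuals."""
    T = len(resid)
    m = sum(resid) / T
    rho = (sum((resid[i] - m) * (resid[i + k] - m) for i in range(T - k))
           / sum((r - m) ** 2 for r in resid))
    return rho * math.sqrt(T)

def mean_reverting_level(b0, b1):
    """b0 / (1 - b1); only finite when b1 != 1."""
    return b0 / (1 - b1)

# Toy series following x[t+1] = 2 + 0.5 * x[t] exactly:
x = [0, 2, 3, 3.5, 3.75, 3.875, 3.9375]
b0, b1, resid = fit_ar1(x)
```

On this noiseless toy series the fit recovers $b_0=2$, $b_1=0.5$, and the mean-reverting level is $2/(1-0.5)=4$, which is indeed where the series converges.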
Well, you create a new process $Y$ where:

$$y_t = x_t - x_{t-1}$$

So you have a new model:

$$y_t = b_0 + b_1 y_{t-1} + \epsilon_t$$

which models the next change in the process $X$, and which is then covariance-stationary. You can use that for the analysis. This little “trick” is called first differencing.

That’s it, stay tuned for more soon!

# CFA Level II: Quantitative Methods, ANOVA Table

Good evening everyone,

Following my last post on multiple regression, I would like to talk about ANOVA tables, as they are a very important part of the Level II curriculum on quantitative methods. ANOVA stands for ANalysis Of VAriance; it helps to understand how well a model does at explaining the dependent variable.

First of all, recall that $Y=\{Y_i\},~ i=1,\dots,n$ denotes the real values of the dependent variable, and that $\hat{Y}=\{\hat{Y}_i\},~ i=1,\dots,n$ are the values estimated by the model. We define the following values:

Total Sum of Squares (SST):

$$\text{SST}=\sum_{i=1}^n (Y_i - \bar{Y})^2$$

This is the total variation of the process $Y$, i.e. the squared deviations of the $Y_i$ from the mean of the process, denoted $\bar{Y}$. With the regression, this total variation is what we are trying to reproduce.

Regression Sum of Squares (RSS):

$$\text{RSS}=\sum_{i=1}^n (\hat{Y}_i - \bar{Y})^2$$

This is the variation explained by the regression model. If the model fitted the dependent variable perfectly, we would have $\text{RSS}=\text{SST}$.

Sum of Squared Errors (SSE):

$$\text{SSE}=\sum_{i=1}^n (Y_i - \hat{Y}_i )^2=\sum_{i=1}^n \epsilon_i ^2$$

Finally, this is the unexplained variation: the sum of the squared differences between the real values $Y_i$ and the values estimated by the model $\hat{Y}_i$.
As expected, the total variation is equal to the sum of the explained variation and the unexplained variation:

$$\text{SST}=\text{RSS} + \text{SSE}$$

Note that the CFA does not require candidates to be able to compute these values (it would take too long), but I thought that having the definitions helps understanding the concepts. From these values, we can get the first important statistic we want to look at when discussing the quality of a regression model:

$$\text{R}^2=\frac{\text{RSS}}{\text{SST}}$$

The $\text{R}^2$ measures the part of the total variation that is explained by the regression model. Its value is bounded between 0 and 1, and the closer it gets to 1, the better the model fits the real data.

We also want to compute the averages of $\text{RSS}$ and $\text{SSE}$ (the mean sums of squares):

$$\text{MSR} = \frac{\text{RSS}}{k} \qquad \text{MSE} = \frac{\text{SSE}}{n-k-1}$$

where $n$ is the size of the sample and $k$ is the number of independent variables used in the model. These values are “intermediary” computations used for different statistics. First, we can compute the standard error of the error terms $\epsilon$ (SEE):

$$\text{SEE}=\sqrt{\text{MSE}}$$

Note that this is just the classic standard deviation computation of the residuals, with $k+1$ degrees of freedom used up by the estimated parameters. If the model fits the data well, then $\text{SEE}$ will be close to 0 (its lower bound).

Now, there is an important test in regression analysis based on the F-statistic. Basically, this test has the null hypothesis that all the coefficients of the regression are statistically insignificant: $H_0 : b_i=0 ~ \forall i$. It is computed as follows:

$$\text{F}=\frac{\text{MSR}}{\text{MSE}}$$

$\text{F}$ is a random variable following an F-distribution with $k$ degrees of freedom in the numerator and $n-k-1$ degrees of freedom in the denominator. The critical value can be found in the F-distribution table provided with the CFA exam.
It is very important to understand that if you reject $H_0$, you say that at least one of the coefficients is statistically significant. This by no means implies that all of them are!
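For illustration, here is a small Python sketch computing these quantities from a vector of observations and fitted values. The numbers are made up, and note that $\text{SST}=\text{RSS}+\text{SSE}$ holds exactly only when the fitted values come from an OLS regression:

```python
def anova(y, y_hat, k):
    """ANOVA quantities for a regression with k independent variables."""
    n = len(y)
    y_bar = sum(y) / n
    sst = sum((yi - y_bar) ** 2 for yi in y)               # total variation
    rss = sum((fi - y_bar) ** 2 for fi in y_hat)           # explained variation
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # unexplained variation
    msr, mse = rss / k, sse / (n - k - 1)
    return {"SST": sst, "RSS": rss, "SSE": sse,
            "R2": rss / sst, "SEE": mse ** 0.5, "F": msr / mse}

# Made-up example: the OLS fit of y on x = [1, 2, 3, 4] gives y_hat below.
stats = anova(y=[1, 2, 2, 4], y_hat=[0.9, 1.8, 2.7, 3.6], k=1)
```

On this toy data, SST splits into RSS + SSE as expected, and the F-statistic is simply MSR over MSE.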

To sum up, you can look at the following table known as the ANOVA table:

| Source of variation | Degrees of Freedom | Sum of Squares | Mean Sum of Squares |
|---|---|---|---|
| Regression | $k$ | RSS | $\text{MSR}=\frac{\text{RSS}}{k}$ |
| Error | $n-k-1$ | SSE | $\text{MSE}=\frac{\text{SSE}}{n-k-1}$ |
| Total | $n-1$ | SST | |