# CFA Level III: Implementation Shortfall

Good evening,

A quick post tonight to discuss a topic from the Trading, Rebalancing and Monitoring part of the Level III curriculum called Implementation Shortfall. I chose this topic because it took me some time to get past the naming conventions of the CFA Institute, which are, with all due respect, very counterintuitive in my opinion.

The idea is very simple: you would like to be able to measure the quality of the execution of a trade compared to an ideal execution.

From what I’ve seen in mock exams and exercises, they always give you a little story like the following one:

• At some point, the investment manager decides to buy 10 Manchester United shares, which trade at 20.
• This is called the benchmark price (BP), for some reason.
• Then (usually the following day), a limit order is placed in the market, say at 19.95, and it is not executed at all. The market closes on that day at 20.10. Too bad.
• You pay 0.05 per share of commission.
• The following day, the order is revised to 20.15 and 8 shares (i.e. not the whole 10) are filled at that price, and the market closes at 20.20.

What happens there? Well, assume you are able to magically implement your trading ideas instantly at no cost: this is called the paper portfolio. What is your profit at the end of the story?

• You buy 10 shares at 20 for 200.
• At the end of the story, your shares are worth 20.20 each, which gives you a total of 202.
• You earned 202 – 200 = 2

In the real world, it did not work out that way:

• You bought 8 shares at 20.15 for 161.20
• You paid 0.40 in commission (8 × 0.05)
• At the end of the story, your shares are worth 20.20 each, which gives 161.60
• You earned 161.60 – 161.20 – 0.40 = 0

The implementation shortfall is defined as follows:

$$\frac{\text{paper portfolio gain}-\text{real portfolio gain}}{\text{paper portfolio investment}}=\frac{2}{200}=1.0\%$$

This means that 1.0% of the potential investment was lost (or, more precisely, not won) in the implementation, due to various frictions.

The CFA Institute then provides a way to split this difference into different components.

First come the explicit costs, which consist of all the obvious transaction costs explicitly charged on the trade:

$$\frac{\text{commission}}{\text{paper portfolio investment}}=\frac{0.4}{200}=0.2\%$$

That’s fine. But then come the bizarre naming conventions.

Some extra costs come from the fact that, between the moment the investment manager decides to buy the stock and the day the order is partially filled, the market moved.

The slippage or delay cost is the difference between the closing price on the day before the execution day (which is called, rather confusingly, the decision price; I don’t understand why) and the benchmark price, divided by the benchmark price, times the percentage of the order that was filled. In our case we have:

$$\frac{20.10-20.00}{20} \cdot \frac{8}{10}=0.4\%$$

It is the portion of the implementation shortfall that was lost because of the delay between the time the manager saw the opportunity and the day the trade was partially executed.

Then, the realized loss is the difference between the execution price and the closing price of the previous day (the so-called decision price), divided by the benchmark price, times the percentage of the order that was filled:

$$\frac{20.15-20.10}{20} \cdot \frac{8}{10}=0.2\%$$

This is what was lost during the execution day.

Finally, the missed trade opportunity cost is the difference between the price at the end of the story and the benchmark price, divided by the benchmark price, times the proportion of the order that was not filled:

$$\frac{20.20-20.00}{20} \cdot \frac{2}{10}=0.2\%$$

This is what was lost by not being executed.

If you sum all the components, you get 0.2% + 0.4% + 0.2% + 0.2% = 1.0%, the total implementation shortfall.
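To make the arithmetic concrete, here is the whole decomposition as a small Python sketch (it simply reproduces the numbers from the story above; the variable names are my own):

```python
# Implementation shortfall decomposition, using the numbers from the story.
benchmark = 20.00   # price when the manager decides to trade (BP)
decision  = 20.10   # close the day before execution (the "decision price")
execution = 20.15   # price at which 8 of the 10 shares are filled
final     = 20.20   # close at the end of the story
order, filled = 10, 8
commission_per_share = 0.05

paper_gain = order * (final - benchmark)                                  # 2.00
real_gain = filled * (final - execution) - filled * commission_per_share  # 0.00
total_is = (paper_gain - real_gain) / (order * benchmark)                 # 1.0%

explicit = filled * commission_per_share / (order * benchmark)            # 0.2%
delay    = (decision - benchmark) / benchmark * filled / order            # 0.4%
realized = (execution - decision) / benchmark * filled / order            # 0.2%
missed   = (final - benchmark) / benchmark * (order - filled) / order     # 0.2%

print(f"total implementation shortfall: {total_is:.1%}")
print(f"sum of the components:          {explicit + delay + realized + missed:.1%}")
```

Running this prints 1.0% twice, confirming that the four components add up to the total shortfall.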

So you can see that, in this example, the main component of the implementation shortfall is the delay between the trade idea and the trade execution day. The limit order at 19.95 was too ambitious and resulted in a loss.

Notice also that all the examples I saw are ones where the market moves in the trade’s direction (i.e. the market goes up after a buy decision). It is also possible for the market to move the other way, which would result in a negative implementation shortfall… i.e. a gain.

That’s all for today.

I’ll be back soon with more.

Cheers,

Jeremie

# Monte-Carlo Method on Panini stickers distribution

Good afternoon everyone,

Today I want to write a fun post which combines two of the most important topics of June 2014: the CFA Level III curriculum and the FIFA World Cup. I know, that’s a long shot.

I assume that pretty much every reader of this blog has had the opportunity to collect Panini stickers before major sporting events. The idea is quite simple. You have an album which you need to fill with stickers that you buy in packs. For the 2014 FIFA World Cup in Brazil, each pack contains 5 stickers and the album has roughly 650 stickers.

When I was a child, I used to buy or be offered one pack every day, and filling the album took weeks. Today, I start off by buying a whole box of 100 packs, which makes it 500 stickers at a time. The thing is, as you open the packs, you would expect to find duplicate stickers that you will exchange with friends for stickers you don’t already own. This year though, several of my friends and I bought boxes and got absolutely no duplicates, which means that we all had an album filled with 500 stickers straight away. It then looked clear to me that Panini did this on purpose, but I wanted to make sure using probability theory.

# Question

What is the probability that, after 500 stickers, I still have no duplicate sticker?

Alternatively, you could ask: did Panini make sure that you have no duplicate if you buy a whole box of 500 stickers?

# Basic Assumption

For this experiment, I assume that drawing any given sticker is equiprobable. This maximizes the chance of having no duplicate after several draws. Formally, I define a random variable $X$, which can take any value between 1 and $m$ (the number of existing stickers) and which represents the number of the sticker drawn. Then the probability of drawing sticker number $i$ is:

$$\mathbb{P}(X=i)=\frac{1}{m}$$

In our case, $m=650$ and the probability of getting any sticker is roughly 0.15%.

# Single draws

To begin, I remove the concept of packs and assume that I get every sticker randomly with the probability mentioned above. In this setup, it is quite easy to compute the probability of having no duplicate after $n$ draws:

After 1 draw, the probability of having no duplicate is 100%; of course, because you started with no stickers at all.

After 2 draws, the probability of having no duplicate is:

$$\frac{650-1}{650}=\frac{649}{650}=99.85\%$$

The idea is simple: the numerator is the number of cards we don’t already have and the denominator is the number of possible cards. We then multiply this probability by the probability of having had no duplicate so far (which was 100%, i.e. 1, in this case).

After 3 draws we have:

$$99.85 \% \cdot \frac{650-2}{650}=99.54\%$$

We can generalize this by saying that for $m$ existing different stickers, the probability of having no duplicate after $n$ draws is:

$$\prod_{i=1}^n \frac{m - (i-1)}{m}$$

That’s rather easy to compute using any programming language such as Matlab or C#.
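For instance, here is what that computation could look like in Python (the function name is my own; $m = 650$ as in the World Cup album):

```python
def no_duplicate_probability(m, n):
    """Probability of having no duplicate after n equiprobable single draws
    from m distinct stickers (the product formula above)."""
    p = 1.0
    for i in range(1, n + 1):
        p *= (m - (i - 1)) / m
    return p

m = 650  # stickers in the 2014 World Cup album
print(no_duplicate_probability(m, 2))    # 649/650 ≈ 99.85%
print(no_duplicate_probability(m, 100))  # ≈ 0.03%: a duplicate is almost certain
```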

Coming back to our 2014 World Cup example, we can draw the following chart to see how the probability of having no duplicate evolves with the number of single draws performed:

As you can see, in this setup the probability of having no duplicate is almost 0% after 100 draws. So there is little chance that this occurred unless Panini did something special about it.

# Draws from packs

One argument that could be made is that you should not have any duplicate within a single pack. In this case, you are sure to get 5 distinct cards every time you open a pack, and it should slightly boost the probability of having no duplicates after $n$ draws. This can be shown with a simple example: if you draw 5 single cards, the formula above gives a 98.47% probability of having no duplicate. If you get cards by pack, drawing 5 cards corresponds to drawing 1 pack, which by assumption guarantees having no duplicates (a probability of 100%).

So, how do we compute the probability of having no duplicate after opening $n$ packs? Well, I guess it is possible to formulate it mathematically, but frankly, when I thought about it, I was not really in the mood to dig into all these permutations. So I decided to go with another approach: a Monte-Carlo simulation. How does that work? The idea is quite simple: I am going to create a computer program that simulates the process of opening packs of stickers. Here is how the program will work:

## Creating a pack

I assume the pack contains $p$ stickers, with an equal probability of drawing any sticker.

• Start with an empty pack
• As long as I don’t have $p$ stickers in the pack:
  • Draw a sticker randomly
  • If the sticker is not already in the pack:
    • Add the sticker to the pack
• Return the full pack

## Running a simulation

Now that I have a procedure to create a pack, I will define what I mean by running a simulation. The idea is to simulate the action of opening randomly generated packs, on the fly, until I get a duplicate among the stickers I own:

• Start with an empty album
• Until I have a duplicate sticker:
  • Generate a random pack
  • Open the pack and put the stickers in the album
• Return the number of opened packs

So, what I’m doing here is simulating a random variable $Y$, which is the number of packs opened until the first duplicate appears.

Back to our Panini example: this really simulates opening packs just as we do from the box of 100 packs. What we would like to do is estimate the probability distribution of this variable $Y$, in order to discuss whether it is statistically plausible that we got no duplicate after 100 packs. This is where the Monte-Carlo simulation comes into action.

## Monte-Carlo simulation

The principle of a Monte-Carlo simulation is to simulate the outcome of a random variable $k$ times, where $k$ is very large. We then divide the number of times each outcome occurred by the total number of simulations, $k$. This gives us an estimate of the probability of each outcome.
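Putting together pack creation, the simulation, and the Monte-Carlo estimate, a self-contained Python sketch could look like this (the function names are my own, and the exact counts will vary from run to run):

```python
import random

def random_pack(m, p):
    """Draw a pack of p distinct stickers out of m equiprobable stickers."""
    pack = set()
    while len(pack) < p:
        pack.add(random.randint(1, m))  # a within-pack duplicate is simply ignored
    return pack

def packs_until_duplicate(m, p):
    """Open random packs until the album contains a duplicate sticker;
    return the number of packs opened (one outcome of the variable Y)."""
    album = set()
    opened = 0
    while True:
        pack = random_pack(m, p)
        opened += 1
        if album & pack:        # some sticker of this pack is already in the album
            return opened
        album |= pack

def estimate_distribution(m=650, p=5, k=10_000):
    """Monte-Carlo estimate of P(Y = n): simulate Y k times, then divide
    the count of each outcome by the number of simulations k."""
    counts = {}
    for _ in range(k):
        y = packs_until_duplicate(m, p)
        counts[y] = counts.get(y, 0) + 1
    return {n: c / k for n, c in sorted(counts.items())}

dist = estimate_distribution()
# Estimated probability of seeing the first duplicate within 15 packs:
print(sum(prob for n, prob in dist.items() if n <= 15))
```

With $m = 650$ and $p = 5$, the printed probability comes out close to 1, matching the discussion below.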

This is exactly what I did for the Panini example, and I got the following results:

In the graph above, you see the probability of having the first duplicate after opening $n$ packs, i.e. $\mathbb{P}(Y=n)$. To make a final assessment, we would like to see the estimated cumulative distribution of $Y$, which corresponds to the probability of having at least a duplicate after opening $n$ packs:

As you can see, after 15 packs the probability of having no duplicate is almost zero. That is, there is a probability of about 0 of having no duplicate after $5 \cdot 15 = 75$ stickers. This means there is almost no chance that my friends and I had no duplicates after opening a whole box of 100 packs.

So, there is not much difference from the simple model, but that’s not really the point. The idea is that it was hard to get a closed form for the probability of having at least a duplicate after $n$ packs, but by using the Monte-Carlo simulation, we managed to estimate it. The same technique is also often used to simulate scenarios under some model assumptions!

I hope you enjoyed this fun example!

Cheers,

Jeremie

# CFA Level III: Interest Rate Parity

Hello everyone,

Today I’m going to talk about some economic concepts that have been around since at least Level II and that are quite useful across the curriculum, and in finance in general, when it comes to currency management.

# Covered Interest Rate Parity

The idea is quite simple: we will compute the forward exchange rate between two currencies, say EUR and USD, using an arbitrage argument. The spot exchange rate is denoted $S_{\text{EUR}/\text{USD}}$: it corresponds to the number of euros you get today for 1 US dollar. Furthermore, the risk-free rate in USD is denoted $R_{\text{USD}}$ and the risk-free rate in EUR is denoted $R_{\text{EUR}}$. The question is: at some future time $T$, how many euros will I get for 1 US dollar? This rate is called the forward rate and is denoted $F_{\text{EUR}/\text{USD}}$.

Well, you can price that quite easily using an arbitrage argument! The idea is simple:

• I’m going to borrow today $\frac{1}{1+R_\text{USD}}$ USD, which means that at time $T$ I will have to pay back 1 USD.
• Then I’m going to convert what I just borrowed into EUR, which gives me $S_{\text{EUR}/\text{USD}} \cdot \frac{1}{1+R_\text{USD}}$ euros.
• I then invest this at the risk-free rate in EUR, and at time $T$ I get $(1+R_\text{EUR}) \cdot S_{\text{EUR}/\text{USD}} \cdot \frac{1}{1+R_\text{USD}}$ euros.

Since this strategy costs nothing today and is riskless, the euros I end up with at time $T$ must be exactly what a forward contract would give me for the 1 USD I owe; otherwise there would be an arbitrage. This means that:

$$F_{\text{EUR}/\text{USD}}= S_{\text{EUR}/\text{USD}} \cdot \frac{1+R_\text{EUR}}{1+R_\text{USD}}$$

That’s as simple as it sounds, we have determined the forward price of US dollars in euros at time $T$.
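As a quick sanity check, here is the arbitrage computed step by step in Python (the spot rate and interest rates are made-up numbers, purely for illustration):

```python
def forward_rate(spot_eur_usd, r_eur, r_usd):
    """Covered interest rate parity: forward price in EUR of 1 USD."""
    return spot_eur_usd * (1 + r_eur) / (1 + r_usd)

# Hypothetical numbers, purely for illustration.
spot, r_eur, r_usd = 0.90, 0.03, 0.01

# The three arbitrage steps from the bullet list above:
usd_borrowed = 1 / (1 + r_usd)       # borrow so that exactly 1 USD is owed at T
eur_today = spot * usd_borrowed      # convert the proceeds into EUR at the spot rate
eur_at_T = eur_today * (1 + r_eur)   # invest at the EUR risk-free rate until T

# The euros held at T, against the 1 USD owed, must equal the forward rate:
print(eur_at_T)
print(forward_rate(spot, r_eur, r_usd))
```

Both prints show the same number, and since $R_\text{EUR} > R_\text{USD}$ in this example, the forward is above the spot.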

A very easy way of remembering the formula above is to notice that the rates in the numerator and in the denominator appear in the same order as the currencies in the rate’s label: EUR/USD.

Also, recall from this post that in this case (EUR/USD), the US dollar is the asset being priced in euros; the US dollar is an asset like anything else.

Finally, we understand from the formula above that:

$$R_\text{EUR} > R_\text{USD} \implies F_{\text{EUR}/\text{USD}}> S_{\text{EUR}/\text{USD}}$$

This is very useful because you very often have to say which currency is trading at a premium or at a discount in terms of another currency. At first, I always used to get that wrong. In fact, it’s very easy. The currency being traded is the one in the denominator of the label, here USD. Then, if the forward price is higher (lower) than the spot price, it is of course trading at a premium (discount).

So, if we say that the USD is trading at a premium in EUR, it means that you get more EUR for 1 USD in the forward market than in the spot market.

# Uncovered interest rate parity

This is, in a sense, an extension of the covered interest rate parity we just discussed. It says that:

$$\mathbb{E}({S_{\text{EUR}/\text{USD}}}_T)= {S_{\text{EUR}/\text{USD}}}_0 \cdot \frac{1+R_\text{EUR}}{1+R_\text{USD}}$$

Notice that the implication here is different from before, because we now say that we expect the spot rate at time $T$ to be equal to today’s forward price. This amounts to saying that the currency with the higher (lower) interest rate is expected to depreciate (appreciate):

$$R_\text{EUR} > R_\text{USD} \implies \mathbb{E}({S_{\text{EUR}/\text{USD}}}_T)> {S_{\text{EUR}/\text{USD}}}_0$$

Indeed, a higher spot rate in the future means that you would get more euros for the same amount of US dollars, which means that the euro has depreciated!
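The depreciation claim can be checked numerically in a couple of lines of Python (again with made-up rates, just for illustration):

```python
# Uncovered interest rate parity: today's forward is the *expected* spot at T.
spot, r_eur, r_usd = 0.90, 0.03, 0.01   # hypothetical numbers, R_EUR > R_USD

expected_spot_T = spot * (1 + r_eur) / (1 + r_usd)

# More euros expected per USD at T: the higher-rate currency (here, EUR)
# is expected to depreciate against the USD.
print(expected_spot_T > spot)   # True
```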

A lot of traders disagree with that statement, which leads to a very famous trading strategy called the carry trade. The idea is really simple as well: these traders do not believe that the currency with the higher interest rate will depreciate. They hence borrow (short) the currency with the lower interest rate and invest in the currency with the higher interest rate. The curriculum says that this strategy tends to work most of the time, generating positive income. However, when the exchange rate does, for some reason, move in the expected direction, it tends to do so very violently, which can lead to very large losses.

That’s it for today, I hope this little post will help you in mastering this concept, which is key to a lot of different topics at different levels of the CFA curriculum.

Cheers,

Jeremie

# News from the front line

Hello everyone!

I know it’s been a while since I last contributed to the blog and I would like to apologize for that but I’ve been pretty busy over the past 6-8 months.

So, to make amends, I updated the blog’s look and feel and I hope you will like this new version, which is also fully responsive, allowing you to view the blog on your computer as well as on your favorite mobile device.

So, what happened during these last months?

First, I got married on January 2nd, which, as you can imagine, was quite time-consuming in terms of organisation before, during and after the event! Anyway, being a happily married man, I came back in January ready to start this year’s CFA campaign. I must say it wasn’t really easy, because the challenge is pretty different this time around: the CFA Level III is much more oriented towards asset allocation and portfolio management, which is essentially what I’ve been doing at the office on a daily basis for the past 3 years. Quite surprisingly, that doesn’t really make it easier. Indeed, when you face a topic you don’t know, you automatically focus on the whole material. When you feel you already know a topic, it’s difficult to read or watch all the material looking for the specific keywords the CFA Institute is after. I still went for the brute-force method of going through all the material, but I suspect a more efficient approach would be to do the exercises first and come back to the parts you were not able to answer. That’s just an idea.

Another major piece of news came up in my life at the end of February: I accepted a job offer from a commodity company in Hong Kong. Hence, I am preparing my relocation, which will be completed by the beginning of August, although I will officially start my new job right after the CFA Level III exam. As you can imagine, here again I am spending a lot of time preparing to leave Switzerland, which can be quite intense in terms of paperwork and administration.

Anyway, I will try to add a few posts on the important new topics I encountered in the Level III curriculum in the coming days, and I also have a few interesting posts coming your way on softer topics.

Cheers,

Jeremie

# LINQPad, an essential tool for .Net programmers

Good evening everyone,

Since I started this blog, I have written a few posts about computer science and programming in particular, but I recently realized that I hardly ever discuss which tools I recommend.

The reason I thought it was a waste of time is that, most of the time, bloggers talk about a given tool because the publisher contacted them and gave them a free license in exchange for a review. With proper disclosure of the arrangement and an objective view of the product, I don’t see any problem with that, but I thought there was already enough material available online about well-known tools.

However, I think some tools remain unknown, and in some instances quite unfairly so. The tool I want to talk about quickly in this post is a perfect example: it’s called LINQPad.

First, I want to say that I have had no contact with the publisher, and that I bought my own license (which enables more options, I’ll come back to this, but the main features are free). Hence, this “review” is totally objective.

LINQPad is a lightweight application which can be run as a single .exe (without any installation required, except for the .NET framework). It has two main purposes:

• It is a C# and F# Scratchpad
• It helps browse and mine data sources easily

## C# and F# Scratchpad

Basically, LINQPad allows you to write a snippet of code in C# or F# and display some output without having to create and compile a Visual Studio project. This is incredibly useful.

In a previous post, I discussed the advantages of functional programming. If you wanted to try my example, you would have to open Visual Studio, create a C# console project, write the code in the Main method, compile, and see the result in the console. Using LINQPad as a scratchpad, I can simply copy and paste the code, use the built-in Dump() extension method (available on any object) to display it as output, and hit F5. I made a little screenshot of the screen’s important parts:

This example is fairly basic, but the cool thing is that you can access almost anything in the .NET framework and, if you need to, even reference some of your own DLLs or even a NuGet package.

## Data browsing and mining

Another powerful feature of LINQPad is its ability to let you “import” data sources and interact with them easily. As a matter of fact, it comes with several built-in or third-party providers which, through a simple wizard, help you configure access to a given data source, which you can then query using LINQ. These data sources can be varied: SQL, MySQL, RavenDB, web services and so on. Once the wizard is complete, you can access the data source just as you would browse any collection that supports LINQ.

Again, you can write scripts to retrieve records, process them and display what you need. This is extremely useful if you have a database with a lot of data and you want to try different ways of displaying the information, for example. It’s also very useful if you built some software but didn’t have time to write the administration part: you will still be able to extract the data and use it before you get the chance to create a beautiful grid and everything, or before you actually give up, because LINQPad might be enough if you don’t need to be overly user-friendly.

That’s it! Have a look at it, and buy it if you want some nice features like IntelliSense and helpers of that sort. It really helps me a lot and saves me from creating a lot of useless Visual Studio projects.

Cheers,

Jeremie