# LINQPad, an essential tool for .NET programmers

Good evening everyone,

Since I started this blog, I have written a few posts about computer science and programming in particular, but I recently realized that I hardly ever discuss the tools I recommend.

The reason I thought it was a waste of time is that, most of the time, bloggers talk about a given tool because the vendor contacted them and gave them a free license in exchange for a review. With proper disclosure of the arrangement and an objective view of the product, I don’t see any problem with that, but I thought there was already enough material available online about well-known tools.

However, I think some of them remain unknown, and in some instances quite unfairly so. The tool I want to talk about quickly in this post is a perfect example: it’s called LINQPad.

First, I want to say that I did not have any contact with the vendor, and that I bought my own license (which enables more options, I’ll come back to this, but the main features are free). Hence, this “review” is entirely unbiased.

LINQPad is a lightweight application which can be run as a single .exe (without any installation required except the .NET Framework). It has two main purposes:

• It is a C# and F# Scratchpad
• It helps browse and mine data sources easily

Basically, LINQPad allows you to write a snippet of C# or F# code and display its output without having to create and compile a Visual Studio project. This is incredibly useful.

In a previous post, I discussed the advantages of functional programming. If you wanted to try my example, you would have to open Visual Studio, create a C# console project, write the code in the Main method, compile, and see the result in the console. Using LINQPad as a scratchpad, I can simply copy and paste the code, call the built-in Dump() extension method (available on any object) to display it as output, and hit F5. I made a little screenshot of the screen’s important parts:

This example is fairly basic, but the cool thing is that you can access almost anything in the .NET Framework and, if you need to, even reference some of your own DLLs or even a NuGet package.

## Data browsing and mining

Another powerful feature of LINQPad is its ability to let you “import” data sources and interact with them easily. As a matter of fact, it comes with several built-in and third-party providers which walk you through a simple wizard to configure access to a given data source, which you can then query using LINQ. These data sources can be various: SQL Server, MySQL, RavenDB, web services and so on. Once the wizard is complete, you can access the data just as you would browse any type of collection which supports LINQ.

Again, you can write some scripts to retrieve records, process them and display what you need. This is extremely useful if you have a database with a lot of data and you want to try different ways of displaying the information, for example. It’s also very useful if you built some software but didn’t have time to write the administration part: you will still be able to extract the data and use it before you get the chance to create beautiful grids and everything, or before you actually give up on that, because LINQPad might be enough if you don’t need to be overly user-friendly.

That’s it. Have a look at it, and buy it if you want some nice features like IntelliSense and helpers of the sort. It really helps me a lot and saves me from creating a lot of useless Visual Studio projects.

Cheers,

Jeremie

# CFA Level III, here I come!

Good evening everyone!

I know it’s been a long time since I last posted on the site, and I would like to apologize for not having added new content since the end of May.

Quite frankly, the reason why I’ve been away from the blog for so long is twofold. First, I am planning my wedding for the end of the year and it is taking up most of my spare time; those of you who are married know what I mean. Second, I had a very bad feeling when I came out of the Level II exam and was actually quite disappointed. So, I decided to take a break to focus on the wedding before heading off on my summer holidays, and to wait for the results. In the end, once the exam is done, there is nothing you can do about it.

Tuesday, late in the afternoon, I finally got the crucial CFA e-mail, which notified me that I had passed the Level II along with 43% of that level’s candidates. I literally jumped out of my chair as if I had scored a goal in the Champions League final. Let’s face it, I most certainly had a bit of luck here.

Looking at my “detailed” results (which, as you well know, I’m not allowed to share), I then realized that my work had paid off. Although I did not post about the accounting part, because I don’t feel expert enough to really publish something about it, I paid a lot of attention to this section of the curriculum. And it paid off. As a matter of fact, this topic carries so much weight in the exam that you can definitely put yourself in a nice position by being confident on the classic exercises. I practiced a lot using Schweser’s QBank and practice exams. In the real exam, I felt the accounting questions were not that vicious, and hence I managed to score high there. So, for those of you taking the Level II next June, spend a lot of time on accounting if you’re not very good at it. The material is quite huge, but as you practice it will start making sense, and quite surprisingly there is not much to learn by heart: it is more logical than it might seem at first glance.

Free cash flows are also a major topic of the Level II. Again, when you look at the formulas, you might get scared at first glance. However, you should realize that by learning the main formula, you can derive all the others without learning them by heart. Even this basic formula might seem quite complicated at the beginning, but it really makes sense once you get used to it, and I found it quite easy to apply once you understand the rationale behind it.

Economics and Alternative Investments were, in my opinion, the most complicated parts of the curriculum to master. In fact, I believe that both of these topics require a lot of material to be learnt by heart if you don’t practice them in your daily job. And I hate learning stuff by heart. Derivatives are of the same kind, but with my quantitative finance academic background, I already knew all the material. Again, if you’re not familiar with derivatives, you will have to grasp a lot of concepts which take time to get accustomed to.

All in all, I would say that the basic strategy is really not to give up on any topic of the curriculum. Of course, there will be parts of the material you will not be comfortable with on exam day, but you should definitely have an idea of what’s going on everywhere. If you can’t master something, at least grasp the global picture. This is key because it will, most of the time, allow you to discard one of the three possible answers. Once this is done, even if you choose randomly between the remaining two answers, your probability of success drastically goes up to 50% from the initial 33%. Plus, don’t forget that item sets are composed of questions of various difficulties. Hence, being able to score on the easy questions is critical, as it can considerably improve your overall result. Remember that if you get battered in a particular topic, it probably won’t matter what you’ve done in the rest: you will fail the exam. The passing criteria are not public, and I suspect there is a minimum score for each of the topics.

To sum up, I’m really happy to be done with this Level II because I heard it was the most difficult of the three levels for candidates with my profile. I must say that I won’t be disappointed not to have Economics and Financial Accounting in the Level III material, but I will have to work extra hard on my writing skills, as the Level III’s morning session consists of writing small essays… not answering item sets.

I’ll be back soon with more posts on something else than the CFA!

Cheers,

Jeremie

# CFA Level II: Valuing Bonds with Embedded Options

Good evening everybody,

Tonight’s post will be dedicated to valuing bonds with embedded options. I chose this topic because it involves binomial trees, which are an important and very testable point of the Level II curriculum. Binomial trees are encountered in two different topics: Fixed Income and Derivatives. Although the concept is the same in both instances, the main difference is that the interest rate tree is given, whereas you have to construct the tree of stock values yourself. Besides, in interest rate trees, the probability of going to the up or down node is 0.5; for options on stocks, you have to determine the risk-neutral probability yourself.

For simple bonds with no embedded options, at each node you write:

• The current price of the bond for the remaining payments (including face value)
• The coupon at that node
• The interest rate (more specifically the forward rate) at that node (which is given).

You use the interest rate tree with backward induction, i.e. you start with the nodes at the right of the tree (the ones furthest away from now), where you know the price: the face value. It’s the only place where you know what the price will be. At a given node before the final layer, you compute the price of the bond as the average (because the probability of ups and downs is 0.5) of the present values of the prices at the two following nodes. To compute the present value, you use the forward rate given at the current node; this is why no forward rates are given for nodes at the final layer. Applying this process sequentially for each node from the right to the left of the tree, you end up computing the initial node’s value: the price of the bond today.
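To make the backward induction concrete, here is a minimal Python sketch. The two-period rate tree, the 5% annual coupon and the 100 face value below are made-up numbers for illustration, not an example from the curriculum.

```python
# Hedged sketch: backward induction on a hypothetical two-period interest
# rate tree for a 2-year, 5%-annual-coupon bond with face value 100.

def price_bond(rate_tree, coupon, face):
    """rate_tree[t] lists the forward rates at the nodes of layer t."""
    n = len(rate_tree)                       # number of periods
    values = [face] * (n + 1)                # at maturity: face value
    # Walk the tree from right to left; node i at layer t has successors
    # i (up) and i + 1 (down) at layer t + 1, each with probability 0.5.
    for t in range(n - 1, -1, -1):
        values = [
            0.5 * ((values[i] + coupon) + (values[i + 1] + coupon))
            / (1 + rate_tree[t][i])
            for i in range(t + 1)
        ]
    return values[0]

# Illustrative one-year forward rates: today, then year-1 up/down nodes.
rates = [[0.04], [0.05, 0.03]]
print(round(price_bond(rates, coupon=5.0, face=100.0), 4))
```

Each pass of the loop collapses one layer of the tree, so the last remaining value is today’s price.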

Because interest rate trees are built to be arbitrage-free, they are calibrated so that the benchmark security’s price you get using the tree matches its market price.

So if you price a bond which is different from the benchmark, you will get a value different from its market price. This is where spreads come into play. Nominal spreads are computed using YTM: you compute the constant yield implied by the market price of the bond and that of its benchmark, and take the difference between them. However, this is not a good measure, as it assumes the yield curve is flat. The Z-spread, for zero-volatility spread, is the spread that is added to each rate of the term structure to make the discounted cash flows equal to the bond’s market price. In the tree, you can do exactly this by adding the Z-spread to each node’s rate (trying values until you find the right one, by trial and error).
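The trial-and-error search can be automated. The sketch below finds, by bisection, the constant spread that makes the tree price match a given market price; the two-period tree, coupon and market price are illustrative numbers I made up.

```python
# Hedged sketch: solving for the spread that equates the tree price with a
# given market price by bisection (the "trial and error" described above).

def tree_price(rate_tree, coupon, face, spread):
    """Backward induction with `spread` added to every node's forward rate."""
    n = len(rate_tree)
    values = [face] * (n + 1)
    for t in range(n - 1, -1, -1):
        values = [
            0.5 * ((values[i] + coupon) + (values[i + 1] + coupon))
            / (1 + rate_tree[t][i] + spread)
            for i in range(t + 1)
        ]
    return values[0]

def solve_spread(rate_tree, coupon, face, market_price):
    lo, hi = -0.05, 0.05
    for _ in range(60):
        mid = (lo + hi) / 2
        # Price falls as the spread rises, so move the bracket accordingly.
        if tree_price(rate_tree, coupon, face, mid) > market_price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

rates = [[0.04], [0.05, 0.03]]
spread = solve_spread(rates, coupon=5.0, face=100.0, market_price=101.0)
print(spread)
```

The same search gives the OAS when the pricing function also applies the bond’s option exercise rule at each node.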

For bonds with embedded options, you do exactly the same as for the option-free bond, except that at the nodes where the option can be exercised, you compute the price using the option’s exercise rule. For example, for a bond callable at 100 in 2 years, if you compute that the price at the corresponding node is 101.5, then you have to use the price $\min(101.5,100)=100$.
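As a sketch, here is the same backward induction with the call rule applied at each exercisable node. The two-period tree and the call price of 101 are illustrative, and I assume (my choice, not from the text) that the bond is callable at every node except today’s.

```python
# Hedged sketch: valuing a callable bond on a hypothetical two-period tree.
# At every callable node, the rolled-back value is capped at the call price.

def price_callable(rate_tree, coupon, face, call_price):
    n = len(rate_tree)
    values = [face] * (n + 1)
    for t in range(n - 1, -1, -1):
        rolled = [
            0.5 * ((values[i] + coupon) + (values[i + 1] + coupon))
            / (1 + rate_tree[t][i])
            for i in range(t + 1)
        ]
        # Apply min(value, call price) at callable nodes (all except today).
        values = rolled if t == 0 else [min(v, call_price) for v in rolled]
    return values[0]

rates = [[0.04], [0.05, 0.03]]
print(round(price_callable(rates, coupon=5.0, face=100.0, call_price=101.0), 4))
```

The cap makes the callable bond worth no more than the option-free bond, which matches the $V_\text{callable}=V_\text{option-free bond} - V_\text{call}$ identity below.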

Recall the following for bonds with embedded options:

$$V_\text{callable}=V_\text{option-free bond} - V_\text{call}$$

$$V_\text{putable}=V_\text{option-free bond} + V_\text{put}$$

The thing is, the market value of a bond with an embedded option will often be different from its theoretical, arbitrage-free, value. The reason for this comes mainly from the fact that the interest rate tree assumes a certain interest rate volatility. Remember the following very important facts:

• Option-free bond prices are unaffected by interest rate volatility; they are priced as of today and that’s it.
• The price of the embedded option (call or put) is positively related to interest rate volatility.

When we value the bond with the embedded option, we essentially compute the value of the cash flows assuming the tree’s interest rate volatility. In a sense, we price the option feature of the bond under that volatility assumption. However, the market can change its expectation of interest rate volatility, and hence the price is no longer the same: although the option-free bond value has not changed, the value of the embedded option is different.

Similarly to the Z-spread, if you find the value you have to add to each node’s rate to get the same value from the tree as the market value, you get the OAS, the option-adjusted spread, which is the spread of the bond with its option feature “removed”. You have to use this spread to compare bonds with embedded options to bonds without embedded options (such as their respective benchmarks), or even bonds with embedded options to each other. The price of the option relates to interest rate volatility, not to the credit risk and liquidity risk of the bond; it must hence be removed to compare the inherent quality of fixed income securities.

That’s it for this post.

# CFA Level II: Forward Markets and Contracts

Good evening,

A few days away from the exam, I am taking a bit of time to post the main picture of some topics in the curriculum which I think can be explained simply. This post is dedicated to forwards.

The rationale behind forwards is very simple. Assume there is an asset $S$ which is worth $S_0$ today. You want to enter a contract with somebody to agree to buy the asset at time $T$ at the forward price $FP$. This might seem a bit complicated at first glance for people not familiar with finance, mainly because you do not know what $S_T$ (the price of $S$ at time $T$) will be. Well, the truth of the matter is that… it doesn’t matter. Indeed, you can replicate the act of buying the stock at time $T$ by doing the following:

1. At time $t=0$:
1. Borrow $S_0$ at the current rate $R$
2. Buy the asset at $S_0$
2. At time $t=T$:
1. Repay what you borrowed at time 0 with interest: $S_0 \cdot (1+R)^T$
2. Keep the stock at the value $S_T$.

So, the net investment at $t=0$ is 0, and at $t=T$, you have the stock and you pay $S_0 \cdot (1+R)^T$. This is exactly the same as buying the stock forward. Hence, you can deduce that:

$$FP=S_0 \cdot (1+R)^T$$

This result follows from the law of one price, and we call it an arbitrage argument: if the price were any different from the one stated above, then you could make an instant risk-free profit by applying the strategy stated previously (or its opposite).
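As a quick numerical sanity check, the forward-price formula and the value of the long side can be sketched in a few lines of Python; $S_0=100$, $R=10\%$ and $T=1$ are illustrative inputs of my own choosing.

```python
# Hedged sketch: checking FP = S0 * (1 + R)^T and the long side's value
# V_t = S_t - FP / (1 + R)^(T - t) with illustrative numbers.

def forward_price(s0, r, T):
    """Today's price invested at the interest rate until expiration."""
    return s0 * (1 + r) ** T

def forward_value_long(st, fp, r, T, t):
    """Value of the long side at time t."""
    return st - fp / (1 + r) ** (T - t)

s0, r, T = 100.0, 0.10, 1.0
fp = forward_price(s0, r, T)
print(round(fp, 6))                                     # forward price
print(round(forward_value_long(s0, fp, r, T, 0.0), 6))  # value at inception (~0)
```

At inception the value is zero by construction, and at expiration it collapses to $S_T - FP$, as derived in the example that follows.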

For example, assume a stock is worth $S_0=100\$$ today, that the interest rate is $R=10\%$, and that you want to buy the stock forward in 1 year. Then $FP=100 \cdot (1+10\%)=110$. If somebody is willing to buy it forward for 115, enter the forward contract as a seller, thus agreeing to sell $S$ for 115 in a year. Borrow 100 today, buy the stock, hold it, repay your loan plus interest in a year for 110, and deliver the stock to the counterparty for the agreed 115. You get a free lunch of $115-110=5$. If somebody wants to sell the stock forward for 105, you should enter the contract as a buyer. Sell the stock short for 100 today, invest the proceeds at the interest rate, collect the invested 110 in a year, and buy back the stock as agreed for 105. You make a free lunch of $110-105=5$.

That’s it. It’s easy. The forward price is simply today’s price invested at the interest rate.

What we just did implies that the value of a forward contract today is 0, by definition. However, the value of the contract will evolve between time 0 and expiration $T$. Clearly, at expiration, the value of the contract is given by:

$$V_T=S_T-FP$$

When time $t$ is between 0 and $T$, we get the following result:

$$V_t = S_t - \frac{FP}{(1+R)^{T-t}}$$

This is quite logical, and you can always check that the value at $t=0$ is 0:

$$V_0=S_0 - \frac{FP}{(1+R)^{T-0}}=S_0 - \frac{S_0 \cdot (1+R)^T}{(1+R)^T}=0$$

And for $t=T$:

$$V_T=S_T - \frac{FP}{(1+R)^{T-T}}=S_T - FP$$

Notice that this is the value for the long side, i.e. for the person agreeing to buy the asset at expiration for the forward price. Because derivatives are zero-sum games, the value of the short side is the opposite of the value of the long side.

That’s it. I’ll come back with variants later.

# CFA Level 2: Quantitative Methods – Autoregressive Processes

Hello again everybody,

We’re getting towards the final straight line before the exam, and I will post here the content of all the little flash cards that I created for myself.
Starting back where I left off in the Quantitative Methods, this post will be about Autoregressive Processes, sometimes denoted AR. These processes are of the following form:

$$x_{t+1} = b_0 + b_1 x_t$$

where $x_t$ is the value of the process $X$ at time $t$, and $b_0$ and $b_1$ are the parameters we are trying to estimate. To estimate them, we proceed as follows:

1. Estimate the parameters using linear regression.
2. Calculate the auto-correlations of the residuals.
3. Test whether these auto-correlations are significant.

Note that we cannot use the Durbin-Watson test we used previously in this section of the CFA curriculum; we will be using a t-test that works this way:

$$t = \frac{\rho_{\epsilon_t, \epsilon_{t+k}}}{\frac{1}{\sqrt{T}}}=\rho_{\epsilon_t, \epsilon_{t+k}} \sqrt{T}$$

where $\epsilon_t$ is the residual term of the regression at time $t$, and $T$ is the number of observations. The t statistic has $T-2$ degrees of freedom. If the auto-correlations are statistically significant, then we cannot continue our analysis, for reasons I’ll explain a bit later in the post.

With AR processes, you are actually trying to predict the next values of a given process using a linear relationship between successive values, by applying simple linear regression. The thing is, if you want to be able to trust your estimated $b_0$ and $b_1$ parameters, you need the process to be covariance-stationary.

Now, a bit of math. If a process has a finite mean-reverting level, then it is covariance-stationary. What is the mean-reverting level? Well, it is simply the value $x_t$ at which $x_{t+1}=x_t$. So, let’s write this in an equation:

$$x_{t+1} = x_t = b_0 + b_1 x_t \iff (1-b_1) x_t = b_0 \iff x_t=\frac{b_0}{1-b_1}$$

So, $X$ is covariance-stationary if $b_1 \neq 1$. The test for auto-correlations we did in point 3 guarantees that the process is covariance-stationary if the auto-correlations are not statistically significant.

What if the process $X$ is not covariance-stationary?
Well, you create a new process $Y$ where:

$$y_t = x_t - x_{t-1}$$

So, you have a new model

$$y_t = b_0 + b_1 y_{t-1} + \epsilon_t$$

which models the next change in the process $X$ and is covariance-stationary. You can use that for the analysis.

This little “trick” is called first differencing.
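The estimation procedure (OLS fit, then the $t = \rho\sqrt{T}$ test on residual autocorrelations) can be sketched in plain Python. The simulated series, the seed, and the rough ±2 critical value are my own illustrative choices, not from the curriculum.

```python
# Hedged sketch: fitting x_{t+1} = b0 + b1 * x_t by ordinary least squares
# and testing residual autocorrelation with t = rho * sqrt(T).
import math
import random

def fit_ar1(series):
    """OLS estimates of (b0, b1) in x_{t+1} = b0 + b1 * x_t."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) \
        / sum((a - mx) ** 2 for a in x)
    return my - b1 * mx, b1

def resid_autocorr_t(series, b0, b1, lag=1):
    """t statistic rho * sqrt(T) for the lag-k residual autocorrelation."""
    eps = [b - (b0 + b1 * a) for a, b in zip(series[:-1], series[1:])]
    T = len(eps)
    m = sum(eps) / T
    rho = sum((eps[i] - m) * (eps[i + lag] - m) for i in range(T - lag)) \
        / sum((e - m) ** 2 for e in eps)
    return rho * math.sqrt(T)

# Simulate a stationary AR(1): mean-reverting level b0/(1-b1) = 2/(1-0.5) = 4.
random.seed(0)
x = [4.0]
for _ in range(500):
    x.append(2.0 + 0.5 * x[-1] + random.gauss(0, 1))

b0, b1 = fit_ar1(x)
print(b0, b1)                       # estimates should land near 2 and 0.5
print(resid_autocorr_t(x, b0, b1))  # |t| below ~2 suggests no serial correlation
```

If the series were not stationary, you would run the same fit on the first differences $y_t = x_t - x_{t-1}$ instead.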

That’s it, stay tuned for more soon!