#finance

#fromthefrontier: Marc Andreessen on bitcoin

INTRO:

#fromthefrontier

News used to be about speed. Now it's about understanding.

Whether it's new technologies, changing policy or quickly developing news stories, the modern world is becoming very complex. The problem is that journalism hasn't adapted: it's no longer enough to just churn out the latest headlines; readers need a larger context to help them understand.

#fromthefrontier is a series of articles in three parts:

  1. CONTEXT: a brief summary of the key ideas, concepts and history of the technology, market or issue.
  2. EXPERT: the key views of an expert.
  3. UPDATES: news updates from the expert.

This article covers the opinions of legendary Silicon Valley entrepreneur (@Netscape, @Opsware) turned venture capitalist (@Andreessen Horowitz) Marc Andreessen on the cryptocurrency bitcoin.


CONTEXT:

#bitcoin

 

Bitcoin is a new and alternative form of digital currency and payment system.

Historically, digital currencies have suffered from the double-spending problem, where John might send the same digital dollar to both Alice and Katy without their knowing and thus 'double-spend'. Of course, physical currencies don't suffer from this problem because transferring the store of value involves the physical transfer of a coin or a paper note. Digital currencies solved this problem by having a centralized ledger, run by a company like Visa or Mastercard, which keeps track of all the transactions to make sure there is no fraud or double-spending. In return for running this service the company takes a cut of every transaction.

Unlike other digital currencies, bitcoin is decentralized and has a public distributed ledger, called a block chain, allowing users to transact directly (peer-to-peer) without the need for an intermediary. For this reason bitcoin is often referred to as a cryptocurrency, a subset of digital currencies in which cryptography (rather than a middle-man) is used to secure transactions and to prevent fraudulent creation of new units. It should be noted, however, that the ledger is maintained by third-party miners who are paid in bitcoins, which creates a small inflationary effect on the value of bitcoins.
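
To make the idea of a tamper-evident public ledger concrete, below is a minimal, illustrative Python sketch of hash-chained blocks. This is a toy, not bitcoin's actual protocol (which adds proof-of-work mining, digital signatures and peer-to-peer consensus on top of this basic structure):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents (which include the previous block's hash)
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    # Each new block commits to the previous one via its hash
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"transactions": transactions, "prev_hash": prev_hash})

def is_valid(chain):
    # Re-derive every link; any edit to an earlier block breaks the chain
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, [{"from": "John", "to": "Alice", "amount": 1}])
add_block(chain, [{"from": "Alice", "to": "Katy", "amount": 1}])
print(is_valid(chain))                      # True
chain[0]["transactions"][0]["amount"] = 2   # tamper with history
print(is_valid(chain))                      # False
```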

Bitcoin was created by Satoshi Nakamoto, the pseudonym of an unknown person or team of people, who in 2008 wrote a paper called 'Bitcoin: A Peer-to-Peer Electronic Cash System'; bitcoin's software itself was released in early 2009.

Since then bitcoin has grown rapidly, with the number of transactions rising from a few thousand per month in 2009 to several million in 2015.

Number of transactions per month of bitcoin (logarithmic scale).

Nonetheless, bitcoin has been controversial, with software problems, legal issues and, perhaps most significantly for investors, an extremely volatile price, which peaked at $975.45 in November 2013 but has fallen off dramatically since.

Despite this, bitcoin has slowly become more mainstream, with companies like WordPress in November 2012, Chinese internet giant Baidu in October 2013 and Microsoft in December 2014 accepting the currency. Venture capital investment in bitcoin-related companies has been increasing significantly and is on track to be even higher in 2015.

For a good introduction, watch the documentary 'The Rise and Rise of Bitcoin'.


EXPERT: 

#marcandreessen

 

At the heart of Andreessen's argument for bitcoin is the comparison that bitcoin now looks a lot like personal computers in the 1970s and the internet in the 1990s. They too had lots of legal and regulatory problems, unproven commercial applications and highly uncertain futures, but look how they turned out!

At a deeper level Andreessen's view is rooted in economist Carlota Perez's model of 'Technological Revolutions and Financial Capital', in which technological revolutions occur in two stages: the installation phase and the deployment phase. Crucially, whether it's steam engines, the combustion engine or bitcoin, all new technologies in the installation phase are neither accepted nor understood by the broader society, which is still trapped in the thinking of the previous wave. It's only in the deployment stage that these technologies are adapted to and their true value is unlocked. The significance of this model for bitcoin, of course, is that Andreessen, as a venture capitalist and someone who is betting on the future, is looking for technologies that are in the installation phase, because by the time they are in the deployment phase it is too late.

This section is divided into three parts: 1. Andreessen on bitcoin technology; 2. Andreessen on criticisms of bitcoin; 3. Andreessen on current and future use cases.

1. Andreessen on bitcoin technology

Bitcoin, like most widely used stores of value including gold and fiat money, is intrinsically valueless and is only useful if lots of people use and accept it. However, Andreessen argues that bitcoin is more than just a hyped-up speculative asset because, as the first practical solution to the so-called Byzantine Generals Problem, it is a genuine breakthrough in computer science. The significance of bitcoin is that it provides a way for unrelated parties to trust each other over an untrusted network and transfer (not copy) a digital asset, which could be anything from contracts to passwords and, of course, money. This in turn means you can have a system of payments without a middle-man and therefore without transaction costs.

2. Andreessen on criticisms of bitcoin.

There are several criticisms of bitcoin Andreessen disagrees with. 

  1. Bitcoin is too volatile. Andreessen argues that volatility is a function of speculation, which has actually helped with adoption of the currency whilst the transaction use case is still weak. He concedes that in the long run volatility needs to be lower, but as transaction volume increases relative to speculative volume that should happen naturally. Derivatives (for hedging) and wider acceptance of the currency should also help stabilize things.
  2. Black market/illegal activity. Andreessen argues this is overstated; compared to gold, cash or diamonds, digital currency is actually much more trackable because there is a ledger, which anyone can see, recording all the transactions.

He does concede though that:

  1. Bitcoin may not be the final form of digital currency and that some future cryptocurrency built on the block chain technology may replace it. 
  2. Bitcoin has the genuine chicken and egg problem that all networks have: a currency isn't useful to consumers until merchants accept it, but merchants won't accept it until consumers use it. Thus in the short run, systems like Apple Pay that are less revolutionary but better adapted to the current system may at first win out.

3. Andreessen on the use cases of bitcoin.

Current use cases of bitcoin include:

  1. Speculation. As previously argued this may be how bitcoin gets around the chicken and egg problem before its primary use case as a currency emerges.
  2. Lower transaction costs. Without a middle-man managing the ledger there is no need for transaction fees.
  3. Bitcoin can be used just as a payment system without any exposure to bitcoin price volatility.
  4. Bitcoin can be used to prevent credit card fraud because the information transferred relates to the bitcoins, not to the transacting parties.

Future use cases of bitcoin include:

  1. Increased access to modern financial services both for unbanked people in the US and people in less developed countries, especially those with corrupt, unstable and undeveloped financial systems.
  2. The low transaction costs are particularly significant for international remittance where transaction costs can be very expensive.
  3. With limited to no transaction costs, micro-transactions, tiny fractions of a US dollar in value, suddenly become possible. These could be used for anything from attaching a tiny cost to sending emails (to deter spam) to determining who gets access to a parking spot.
  4. Finally, the low transaction costs allow for a new way for charities and movements to easily and freely raise money for good causes.

UPDATES:

#marcandreessen

September 2015

Andreessen tweet.

  • He teases those who fail to see that the commercial value is in a technology's applications, not the technology itself.

January 2015 

Andreessen delivers tweetstorm on bitcoin. 

  • Fights back against three negative bitcoin arguments: 1. the bitcoin price has fallen a lot; 2. bitcoin is too volatile; 3. there are not enough use cases.

November 2014

Andreessen's fellow VC at Andreessen Horowitz, Ben Horowitz at Stanford on bitcoin.

  • Explains the problem bitcoin solves and compares it to the internet.

October 2014

Andreessen at a Salesforce conference.

  • Admits to bitcoin's chicken and egg problem and the long-term, high-risk nature of the bet, but argues that the reward still makes it worthwhile.
  • Maintains, however, that cryptocurrency will definitely happen.

March 2014

Andreessen at Pandomonthly interview. 

  • Admits to lots of legal issues, but argues bitcoin is a response to the over-regulation of traditional finance; in the long run, more clamping down might make bitcoin even more compelling. He hasn't made a bet yet because conflict-of-interest issues mean they can only make one bet.

January 2014

Andreessen writes in the New York Times about 'Why Bitcoin Matters'.

  • Includes how bitcoin works, rebuttals of criticisms, examples of use cases and speculation on the future of bitcoin.

 

Investing in the London Housing Market

This essay is an exploratory look into the housing market, using London as the starting point. Housing is interesting because there is no obvious fundamental to tie the price to. This is in contrast to, say, stocks, where the price of a company should, at least in principle, be tied to current and discounted future expected profits. Of course, like any market, I would still expect housing to be internally consistent, with no house selling for much more or less than other similar houses in the same area. This internal consistency, however, does not protect you from bubbles and the mass delusion that my house is not mispriced because look at what next door sold for! Clearly, having some fundamental to tie house pricing to would be very useful as a way to spot bubbles. Ideally, though, we would like to go further and use predictions about the fundamental to in turn predict future house prices.

Most housing investors tend to have one of two strategies. The first is to buy an old house and renovate it; the second is to predict which areas are 'on the up', which areas are about to be gentrified. This is more of an art than a science, but classic clues are improved transport links, better shops and bars, and good (but perhaps in need of work) housing stock. I even heard of one investor who bought wherever a new Starbucks was about to be built. Looking at the data for house prices between 1995 and 2014, we find that there is no obvious relationship between the initial mean house price in 1995 and the amount the index has increased by 2014. Some cheaper areas were gentrified a lot; other areas were not.

All of these strategies, though, are fundamentally unsatisfying because they still feel a lot like guesswork. My mum manages and invests in property for a living and it is interesting talking to her because she would much rather invest a million pounds in a flat in Chelsea (a comparatively expensive area) than in, say, three houses in Lewisham (a comparatively cheap area). Her argument is that people will always want to live in Chelsea. Initially I thought this argument seemed a little naïve, but actually it bears a striking resemblance to Warren Buffett's approach to investing in stocks. Buffett argues that, just as it is difficult to predict which area will be gentrified next, it is very difficult to predict which companies will succeed. In fact, even if you are sure that, say, the car or the plane are going to change the world (as they most certainly did), that does not make it any easier to pick the winners from the losers. So Buffett's approach is to ask: what will technology not change? Who is on top now and is likely to remain there? This strategy has led to safe and unsexy investments in companies like Coca-Cola and Wrigley's chewing gum rather than the roulette table of trying to predict the next Facebook or Google.

Nonetheless it seemed to me that embedded in my mum's preference for Chelsea was the hidden assumption that rich people are going to get richer faster than poor people will. And this gave me the idea that the fundamental that could be used to tie down housing prices should be income, and in particular how well different strata of the income distribution are doing and are expected to do in the future. Put simply, if you expect there to be more inequality, with the rich accelerating away from the poor, then you would expect expensive areas like Chelsea to be a better investment. If you expect income inequality to go down, with lower-income people getting richer faster than higher-income people, a cheaper area like Lewisham may be a better investment.

Income and property prices

The first step was to investigate whether there was a relationship between property prices in an area and the incomes of people in that area. Intuitively it seems like there should be, but there are also good reasons to think there may not be. Firstly, property markets are not particularly liquid because people buy and sell houses relatively infrequently. Furthermore, housing's relatively low running costs mean that once you own a property, even if your income is relatively low, you can continue to live there. Perhaps you bought your house on a low income decades before and have just been fortunate to see the value of the house rise a lot since. Similarly, even if your income is much higher than your house suggests, you may have already put down roots in an area, with all your friends and family nearby, so even though you could afford a more expensive house you do not move. And finally, you would expect quite a lot of variation in how much of their income people want to spend on housing versus other goods. What we find, though, is that there is actually quite a strong relationship between mean house price and mean income across the boroughs of London.

Above is a regression of mean house prices in the 32 boroughs of London on the mean income of people in those boroughs. The house price data is from the Land Registry's House Price Index and the income data from HMRC's Survey of Personal Incomes. The house price data is released on a monthly basis, so it was averaged over the tax year to match the HMRC data. In this case the data is from 2011. What we find is a very high R² of 0.936, suggesting the model has a lot of explanatory power. The t-statistics are statistically significant, with very small p-values. And in fact this is not surprising, because if you eyeball the graph the data certainly looks fairly linear.
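
For readers who want to replicate this kind of borough-level regression outside Stata, here is a minimal sketch in Python using statsmodels; the file and column names are hypothetical stand-ins for the Land Registry and HMRC data described above:

```python
# Sketch of the borough-level regression using pandas/statsmodels.
# File and column names are hypothetical stand-ins for the Land Registry
# and HMRC data described in the text.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("london_boroughs_2011.csv")  # one row per borough

X = sm.add_constant(df["mean_income"])
model = sm.OLS(df["mean_house_price"], X).fit()

print(model.rsquared)   # ~0.936 in the data described above
print(model.summary())  # coefficients, t-statistics, p-values
```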

However, it is important to do some robustness checks to make sure the assumptions of linear regression are being met, particularly that the errors look random. As you can see from the rvf (residuals-versus-fitted) plot, the residuals clearly do not.

I thought the lack of normality and randomness in the errors might be caused by non-normal mean income and mean house price variables, so I tried the ladder, gladder and qladder functions in Stata to suggest possible transformations.

First, I found that mean income can be considerably improved by transforming it to 1/meanincome², where from ladder we get a χ² of just 0.69, which is pretty good.

This is also reflected in the gladder plot, which shows visually how normal the different transformations look. As with ladder, 1/square looks the best.

The 1/square transformation looks especially good in the qladder plot which is sensitive to deviations at the extremes.
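
Stata's ladder command essentially works through a fixed menu of power transformations and scores each for normality. A rough Python analogue, reusing the hypothetical df from the regression sketch above, might look like this:

```python
# Rough Python analogue of Stata's `ladder`: try the standard menu of
# power transformations and test each for normality (a lower statistic /
# higher p suggests a more normal shape).
import numpy as np
from scipy import stats

def ladder(x):
    transforms = {
        "cubic": x**3, "square": x**2, "identity": x,
        "sqrt": np.sqrt(x), "log": np.log(x),
        "1/sqrt": 1 / np.sqrt(x), "inverse": 1 / x,
        "1/square": 1 / x**2, "1/cubic": 1 / x**3,
    }
    for name, t in transforms.items():
        stat, p = stats.normaltest(t)  # D'Agostino-Pearson test
        print(f"{name:10s} chi2={stat:7.2f} p={p:.3f}")

ladder(df["mean_income"].to_numpy())  # df from the regression sketch above
```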

However, unfortunately, we find that no similarly good transformation exists for mean house price.

1/square is again the best, but this time only the best of a bad bunch. The qladder plot in particular is not that good.

And in fact I found that if we regress the new variable 1/meanhouseprice² on 1/meanincome², then in addition to losing explanatory power, the rvfplot does not improve that much: the errors still do not look very random.

Can income be used as a predictive variable for property prices?

Given that there is clearly some sort of relationship between property prices and income I thought it might be interesting to try and use income as a predictive variable for future property prices. In the above example I used income and property data from the same tax year but it would not be that surprising if perhaps there was a lag effect where income increases and then it takes a few years before that is converted into an increase in property prices.

To investigate this I regressed mean house prices for each year across all the different mean income years that I have in my dataset. Thus we take the mean income data for the tax year 2000 as the explanatory variable for 12 regressions for mean house price from the year 2000 to 2012.
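
A sketch of how this battery of regressions might be run, again with hypothetical column names such as income_2000 and price_2005:

```python
# Sketch of the lag analysis: regress each year's mean house prices on
# year-2000 mean incomes and compare explanatory power (R-squared) across
# years. Column names like "income_2000" and "price_2005" are hypothetical.
import statsmodels.api as sm

X = sm.add_constant(df["income_2000"])
for year in range(2000, 2013):
    fit = sm.OLS(df[f"price_{year}"], X).fit()
    print(year, round(fit.rsquared, 3))
```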

What we find is roughly what you would expect which is that the mean income data for the year 2000 has more explanatory power for year 2000 mean house prices than the mean income data for the years following that. However, when we consider the other years we find that this pattern is not maintained.

Instead it seems that in some years the mean income data has more explanatory power than in others; or, perhaps more intuitively, in some years housing prices are more in sync with income distributions (2005 and 2006) and in other years more out of sync with the income data (2000 and 2002).

Surprisingly, housing data is most in sync with income in 2005 and 2006, the years just preceding the financial crisis. It seems a reasonable hypothesis that the housing market should be explained by incomes in an area, so years where there is a larger disconnect, such as 2000 and 2001, might suggest a good time to purchase for a potential investor. Having said that, the variations in R² are pretty small, and I have not done robustness checks, as I would expect similar problems transforming the data as before.

One thing is clear: it seems unlikely that changes in London's overall income distribution can be used to predict changes in future house prices. This is especially true as income data only becomes available two years later (so for the tax year 2014 only 2012 data is currently available).

Borough by borough income and property price data

The next step, I thought, would be to see if there were any patterns in the borough-by-borough housing and income data.

What I noticed was that more expensive areas seemed to recover much more quickly from the recession than cheaper areas. As an example, Kensington & Chelsea and Islington seem to be already back to trend. This is in contrast to many cheaper areas, where increases in income in recent years have not been converted into higher house prices. This led me to hypothesise that perhaps poorer people were being disproportionately credit constrained.

As you can see, boroughs with mean house prices of less than £350,000 in 2007 had suffered decreases in house prices by 2012. This is in contrast to boroughs with mean house prices above about £350,000 in 2007, whose prices increased. And in fact the seemingly linear relationship has a pretty good R² of 0.935. The question then is whether this constraint is a long-term phenomenon or a short-term one. If it is short-term, it would make sense to invest in the cheaper areas whilst house prices are temporarily lower because of short-term increases in lending standards.
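
A sketch of this threshold analysis, with the same hypothetical column naming as before:

```python
# Sketch of the recovery analysis: change in mean house prices 2007-2012
# regressed on the 2007 level. Column names are hypothetical.
import statsmodels.api as sm

df["price_change"] = df["price_2012"] - df["price_2007"]
fit = sm.OLS(df["price_change"], sm.add_constant(df["price_2007"])).fit()
print(fit.rsquared)  # ~0.935 in the data described above

# Boroughs below roughly £350,000 in 2007 fell on average; those above rose
print(df[df["price_2007"] < 350_000]["price_change"].mean())
print(df[df["price_2007"] >= 350_000]["price_change"].mean())
```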

However, when I investigated the amount of leverage in the different boroughs, I found that the least leveraged areas were the most expensive. Admittedly, all the mean income data is pre-tax, but given that tax circumstances vary from person to person I could not think of an intelligent way to estimate the average tax rate for each borough.

Conclusion

So in conclusion it seems like the best area to invest in is Kensington & Chelsea. There are several reasons for this.

Firstly, in the last twenty years house prices have increased seven times over in Kensington & Chelsea, compared to just three, four or five times in other areas. Clearly, to invest in an area simply because it has historically increased the most is unwise, but on the other hand at least it gives a positive trend.

Secondly the leverage of residents in Kensington & Chelsea is the lowest of all the boroughs suggesting that the increase in prices is not a function of lax lending standards pre-recession. Having said that, I suspect many Kensington & Chelsea residents may own multiple properties so they might actually be more leveraged than I initially imagined. Also as previously mentioned, incomes are pre-tax so it is possible I’m underestimating the amount of leverage.

Thirdly, increasing inequality, particularly as the incomes of the most wealthy are increasing at a faster rate than everyone else's, is likely to lead to even greater increases in house prices in Kensington & Chelsea.

And finally there are very high numbers of foreign buyers, who tend to invest primarily in the prime areas of central London including Kensington and Chelsea. In fact, according to property experts Knight Frank as many as 28% of buyers of properties over £1 million do not live in the UK. Having said that it is possible that foreign demand will slow down with the introduction of a new capital gains tax for foreigners when they sell homes in the UK from April 2015.

The next step in the research will, I think, be to try to investigate these factors further.

Alternative and Complementary Approaches to Sovereign Credit Ratings

‘It is a feature of many systems of thought, and not only primitive ones, that they possess a self-confirming character. Once their initial premises are accepted, no subsequent discovery will shake the believer’s faith, for he can explain it away in terms of the existing system. Neither will his convictions be weakened by the failure of some accepted ritual to accomplish its desired end, for this too can be accounted for. Such systems of belief possess a resilience which makes them virtually immune to external argument.’
Keith Thomas, ‘Religion and the Decline of Magic’

In this essay I am going to outline Standard & Poor's (S&P) approach to rating sovereign risk and offer my thoughts on potential limitations and weaknesses. I will suggest two alternative, and I think complementary, approaches to rating sovereign risk which, in addition to S&P's current approach, would I believe provide a more comprehensive assessment of sovereign risk.

The essay is divided into two parts. In part one I will attempt an unadulterated explanation of S&P's current approach and the rationale for doing it this way. In part two, I will offer my criticisms of their approach and suggestions on how to improve it, as well as ideas on what would convince me that I'm wrong.

Part One: S&P’s approach to Sovereign Credit Ratings

Sovereign credit ratings are opinions on the future ability and willingness of sovereign governments to service their debt obligations to the market on time and in full. On time and in full is important because no attempt is made to predict the exact nature or extent of default. The reasoning is that default is an extreme event (with an average of roughly one a year over the last fifteen years among S&P-rated countries, which is pretty much every country at this point). Default is so extreme that predictions about whether it happens, rather than specifically how, are sufficient. Willingness to pay is the crucial quality that separates sovereigns from the usual companies and organizations S&P rates, because companies face clear and immediate legal repercussions for not servicing their debt whereas sovereigns face much less clearly defined economic and political costs.

Although it is not explicitly stated, forward-looking estimates should cover at least a year; it was explained to me that non-investment grade ratings have a two-year timeline and investment grade ratings a four-to-five-year one. Technically the ratings are not absolute because they are not tied to any specific underlying metrics. And in fact, pre-1975 the ratings were primarily done through peer comparisons before a more formal framework was put in place. However, it is not correct to say they are purely rankings or that they are fit to a curve: although they seek to be internally consistent in each time period and to offer an accurate measure of relative creditworthiness, they should also be (at least since 1975, when modern ratings methods were put in place) fairly consistent over time and across different classes of organizations. Ratings are offered for both local currency and foreign currency debt; the latter is of greater interest because it offers easier international comparison. There is also the more mechanical reason that the foreign rating is calculated first and the local rating is usually just an uptick on the foreign view.

Because overall creditworthiness is a function of both political and economic risks S&P’s rating approach is necessarily both quantitative and qualitative. Qualitative approaches are particularly necessary when assessing willingness to pay. In all there are five key criteria that are considered when rating sovereign debt:

  1. Economic structure and growth prospects.
  2. Political institutions and considerations.
  3. Government budget considerations (fiscal).
  4. Monetary flexibility.
  5. External liquidity.

Economic structure and growth concerns the underlying economy and ultimately the tax base that the government of a country can draw from. Stronger underlying economies make for more resilient governments. Political institutions and considerations concerns both stability and transparency issues. Generally the more stable and transparent (which often correlates well with western democracies) the more reliably you can expect countries to pay off their debt. Government budget considerations assess factors relating to the government’s balance sheet. Monetary flexibility assesses the effectiveness and availability of monetary tools whilst external liquidity assesses the impact of balance of payments constraints.

These five factors are rated on a scale of one to six, where one is the strongest and six the weakest, and are combined into two averages. The first is a rating of the overall health of the sovereign, which averages the economic structure and growth prospects score with the political institutions and considerations score to produce the institutional and governance effectiveness and overall profile score. The remaining three scores, for fiscal, monetary and external factors, are averaged to create the flexibility and performance profile, which represents the country's ability to react to shocks. Averaging this way results in slightly lower weights for the external, fiscal and monetary scores.

These two profiles are then mapped onto a grid where bands of diagonal equivalence formulaically determine the final credit rating. The specific weights and band boundaries seem to have been chosen at their current levels primarily for legacy and arithmetical convenience, but there is no obvious first-glance reason to suggest the weights are significantly off. Clearly, the advantage of having a systematic approach to weighing the different factors is that it makes the ratings comparable across countries, even though arguably there may be some country-to-country variation in the relative importance of each factor.

All in all, there are 18 different ratings, ranging from the best, AAA, to the worst, CC. In addition to each rating an outlook is published, which can be positive, negative or stable. There are also, for very extreme events, credit watch outlooks if there is scope for a rating to turn on an upcoming event such as an appeal to a court. There is no first-principles reason why the number of ratings is set at 18 specifically, but the main thinking behind the relatively high number is that organization classes often tend to clump around a certain set of ratings. By providing a relatively high number of ratings there is more scope to offer differentiation within each organization class (whether sovereigns, universities, companies, supranationals etc.).
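
To make the mechanics concrete, below is an illustrative sketch of the scoring arithmetic described above. The two profile averages follow the text, but the linear mapping onto the 18 ratings is my own hypothetical stand-in for S&P's actual band boundaries, which are not reproduced here:

```python
# Illustrative sketch of the scoring mechanics described above. The two
# profile averages follow the text; the mapping from the grid onto the 18
# ratings is hypothetical, since S&P's actual band boundaries are not
# reproduced in this essay.
RATINGS = ["AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
           "BBB+", "BBB", "BBB-", "BB+", "BB", "BB-",
           "B+", "B", "B-", "CCC", "CC"]  # 18 ratings, best to worst

def sovereign_rating(economic, political, fiscal, monetary, external):
    # Each factor is scored 1 (strongest) to 6 (weakest)
    institutional = (economic + political) / 2        # overall health profile
    flexibility = (fiscal + monetary + external) / 3  # shock-response profile
    combined = (institutional + flexibility) / 2      # position on the grid diagonal
    # Map [1, 6] linearly onto the 18 notches (hypothetical banding)
    index = round((combined - 1) / 5 * (len(RATINGS) - 1))
    return RATINGS[index]

print(sovereign_rating(1, 1, 2, 1, 2))  # strong sovereign -> high investment grade
print(sovereign_rating(5, 6, 5, 4, 6))  # weak sovereign -> deep speculative grade
```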

Part Two: Alternative and Complementary approaches to Sovereign Credit Ratings

In this section I want to fight the temptation of accepting the general approach and just nitpicking within it, in favour of asking whether there are any fundamentally different approaches that could be taken to rating sovereigns. My conclusion is that I think there are, and that rather than replacing S&P's current approach they could potentially supplement and complement it.

I have been fortunate enough to sit on a large variety of credit committees, including sovereigns, banking and even a university. The experience was immensely valuable and really brought the ratings criteria to life. I was very impressed both with the level of knowledge each analyst had about the sovereigns they covered and with the highly intelligent and diverse set of questions they asked each other. Of course, all the discussions are strictly confidential and I shall respect that here, but I will nonetheless start this section with some generalized impressions of their approach, which I accept are only my perceptions but might still have some grain of truth to them.

The problem with the 'higher general health means a bigger buffer against disease' approach

My overall sense of S&P's approach to rating sovereigns is that although the exogenous shock that pushes a country over the edge is perhaps largely unpredictable, the long-run build-ups of poor fiscal standing, exposure to foreign markets and increased political instability are completely predictable. In fact, it was explained to me that there are no 'fat tail' defaults; everything is in the realm of prediction. Therefore the general approach is to assess the state of the sovereign with the idea that sovereigns in stronger positions have greater buffers to endure the inevitable shocks that come along. By analogy, if a sovereign were a person and default represented death, they try to assess the person's general health and therefore their ability to withstand the inevitable barrage of diseases that the person will come into contact with in the course of a lifetime.

My primary issue with this approach is that the exogenous shocks are treated as completely unpredictable. Returning to my analogy, S&P's approach is to keep track of all the factors that might affect a person's health, from cholesterol to blood pressure to liver function, and then from this develop a generalized view of their health (and therefore the size of their generalized buffer) against a generalized disease. Instead, I think there should be some thinking on specific diseases and each person's unique exposure to and risk of each disease. With the human analogy, you would take the main causes of death, from cancer to heart attack and stroke, and try to assess each person's individual risk of each separate cause of death. This is important, I think, because I expect there is probably a high degree of heterogeneity among sovereigns not only in causes of default but also, therefore, in exposure to the different types of default. Person A may have no chance of dying from lung cancer but a very high risk of dying from a heart attack. To then take an average of the high risk of a heart attack and the low risk of lung cancer and say the person has an average risk of dying does not, I think, fully communicate the true risk of death. I suspect S&P would argue that causes of default are not independent and that you would probably need several to occur simultaneously rather than just one. Nonetheless, having sat in the credit committees, I cannot help but feel that the ratings, especially with the extreme attention paid to adjusting them up and down the 17 notches, are more an assessment of the general health of the economy than a forward-looking prediction about actual default. Using the human analogy, I feel that a person who eats five pieces of fruit a day would probably get a better health rating under the S&P methodology, but the actual relationship between the number of fruits you eat a day and whether you are going to die soon is not clearly established; essentially there is an underlying assumption that better health, no matter how marginal, means lower risk of death.

The problem with short-run trend from equilibrium + shock analysis

The second problem, I think, is the short-term nature of the ratings. Long-term ratings on sovereigns are important to investors because sovereign bonds, for example, are often five or even ten years in length. I concede that no official position is taken on the actual timeline of the ratings, but the general agreement is that it should be around three to five years for investment grade and two to three for non-investment grade, and certainly more than a year. Having listened to the rating committee discussions, though, the feeling is that they are six-month or, perhaps very generously, one-year predictions. In the admittedly small number of meetings I have attended, rarely did anyone try to make predictions beyond six months except to say, for example, that there is political risk and who the hell knows what is going to happen, and that that uncertainty should be reflected in a notch down in the ratings as a sort of safety margin. One exception to this would be clear inflection points around elections etc., but in general I heard no attempt to make predictions on worldwide trends and how they might affect a specific sovereign. The obvious rationale for this is that any predictions beyond one year would quickly become highly speculative, so instead the approach is to give a, in my opinion, very accurate picture of the country's position today with buffers for uncertainties.

Recently, in an attempt to add stability to the system, new regulations (although I'm not sure if these apply to all countries or just to Europe) were introduced to only allow ratings agencies, outside of exceptional circumstances, to change their ratings every six months and at predictable times, so that the market has stable expectations. The resulting approach, I think, is that the ratings end up tracking and describing the risk of default rather than actually predicting it. In fact, in reading the research, most trend predictions take the form 'the change in A has been caused by B'. Perhaps it would be beneficial firstly to establish more concretely (probably through statistics) the B explanation and secondly, if it held up, to go further by predicting changes in A from predictions about changes in B.

Of course, the question is how you can predict something which is, probably, inherently quite unpredictable. One of the advantages, I think, of the six-month to one-year approach is that you can basically use short-run trend from equilibrium + shock analysis. Over short time scales trends have very high predictive power, especially when coupled with expectations of short-run shocks and their likely effects in bumping the trend up or down. The resulting analysis therefore feels much less based on underlying macroeconomic theory or a view on fundamental dynamics and much more like a short-run extrapolation into the future. The problem, of course, is that sovereign risk is not so short-term, especially if you are signed up to a bond of, say, ten-year or longer time scales.

Now I should caveat all these comments with the reminder that the realm of macroeconomic prediction is a graveyard of great economic thinkers, and for that reason I think the short-run trend from equilibrium + shocks approach that S&P takes should actually remain the primary ratings method. But rather than implying to the market that these are actually mid to long-run predictions, I think it should be more explicitly stated that this is a short-run view on the health of the sovereign which, in normal circumstances, should be very highly correlated with long-run default risk.

With that short-run approach as the meat and vegetables of the rating, I think there could then be scope for longer-run, more speculative predictions. These would, as I outlined in my piece (available on my blog) on Frodo Risk, involve trying to categorize default risk into different and distinct narratives. And just as you would separately evaluate the risk of cancer, heart attack etc. rather than trying to rate a general risk of death, you would, for sovereigns, separately analyze the long-run risk of the different default patterns. You might still want to have one overall rating, because ultimately the market does not really care how a default comes about, just whether it does, but I think starting with a consideration of the distinct causes and then taking some kind of average is superior to a purely general approach. Thus the research on distinct causes of default could involve speculative predictions about global trends and how these might affect the different sovereign countries. Currently S&P's approach is to assume these global risks are largely unpredictable, which, given that no one in positions of influence in academia, business or politics predicted the Great Recession of 2008, is perhaps a reasonable assumption, and treat them like exogenous shocks. Nonetheless, even if the long-run analysis just involved a tree-diagram of different scenarios and how each one would affect the risk of default, I still think that would be a valuable resource for investors.

I also think there could be scope to use macroeconomic (and perhaps econometric) models to make long-term predictions about potential exogenous shocks and how sovereigns would, in all likelihood differently, be affected. Finally, I understand the aversion to econometric modelling and its related tools, particularly when it comes to the actual ratings process, but I think a small team of motivated econometricians could uncover a wealth of relationships and rules of thumb that could help anchor the sovereign analysts' considerable knowledge and intuition to fundamentals. It might also ensure that there are no biases or false intuitions. This would be particularly valuable in examining assumptions about what matters in terms of actual default risk. I would concede that the relatively small sample size and number of defaults, at least when compared to, say, company ratings, would somewhat restrict the scope and power of statistical methods, but nonetheless I still think it would be immensely valuable. I have no strong feelings about having 17 notches, although I suppose if there were some first-principles approach that could be used in deciding the classifications that might be valuable also.

What would convince me that I am wrong?

On the general health point, I suppose that if it were shown that you generally need all the averaged factors S&P uses to go bad to have a default, and that there is a very strong relationship between general health (as a buffer) and risk of default, then perhaps the current approach would be justified and sufficient. Also, if defaults either followed completely different patterns every time or the same pattern every time, I think you could argue that there would be limited value in trying to consider, using my analogy, different causes of death separately.

On the comments I made about S&P's short-run trend from equilibrium plus shocks approach, I suppose the only reason for not supplementing it is that any long-run predictions would be so speculative, and in all likelihood wrong, that they would risk confusing rather than informing. Another thing that would change my mind is if it were shown that short-run default risk was highly correlated with, and therefore itself a predictor of, medium to long-run default risk. Given the stability of the investment grade ratings and the comparative volatility of the non-investment grade ratings, this seems to be true for high ratings but perhaps not for lower ratings. Again, whether we can do better is not clear, but I think if we cannot then there might be a case for being more honest about the long-term reliability of the ratings.

Conclusion

This is only day four of my work experience and, just as with my Justin Yifu Lin 'Comparative Advantage Following' essay, I concede the absurdity of offering my musings, but currently it is not obvious to me why I am wrong, so I would greatly appreciate any comments that you have. My emotions over the four days have varied immensely. Initially I felt S&P's approach was completely wrong, but after seeing it in action and being immensely impressed with the breadth and depth of the analysts' knowledge I felt like it was completely right. Having just written the essay, though, I have, at least for now, settled on the feeling that although S&P's current approach to rating sovereigns is broadly correct, it might be valuably supplemented by both econometric analysis and attempts at predicting long-run shocks. I would add as my final comment that it is always worth thinking about what the reaction would be if something went horribly wrong. How would the press and public react if it were discovered that (quite reasonably, I might add) ratings were done in this way? I think there might be a backlash where, even though rating agencies can't be expected to predict all defaults, they will nonetheless be blamed for not being able to do so.

Demand derived analysis: A new form of investing

There are many different schools of investing such as fundamental analysis, quantitative analysis and technical analysis. This essay is an exploratory look into what I’m going to call ‘demand-derived analysis.’

The idea came from an essay I wrote on housing where, because housing lacks an obvious fundamental to tie prices to, I instead used incomes. The basic idea was that as housing (in London at least) has relatively fixed/inelastic supply, you could assume that in the long run house prices in an area should be highly correlated with incomes in that area; in particular, your long-run views on housing should simply be a function of your long-run view on incomes. If you believe there is going to be increasing inequality in incomes across London then you should invest in expensive areas like Chelsea/South Kensington. If you believe there is going to be increasing equality in incomes then you should target poorer areas. What's interesting about this approach is that incomes and the associated maximum buying power (as dictated by lending standards etc.) mean that above a certain price threshold demand just disappears (it's not even price elastic; it's a complete disappearance).

What is not clear to me is how this highly budget-sensitive market operates with speculators. I suppose the key question is what percentage of the market are speculators. If the number is very large then perhaps they can collectively lead to self-reinforcing bubbles, with the presumption that the bubble will burst back to a level aligned with most house buyers' budget constraints. The implication seems to be that even in markets with a high number of speculators there is a long-term envelope which is demand-derived and a function of buyers' budget constraints.

The reason I think this is interesting is that none of the investing approaches seem to directly measure investors' demand constraints. Instead, as with the fundamental, quantitative and technical approaches, the focus is on the thing being traded rather than the constraints of those doing the trading. The financial crisis, however, would seem to suggest that, in recessions at least, the investing appetites of market participants are not just a factor but the most important factor in determining prices.

BASIC IDEA – DEMAND DERIVED INVESTING

The basic idea then is to try to predict the demand constraints of market participants. Rather than taking the safer approach of finding the best deals possible, mapping those out on a spectrum of risk and reward and building portfolios accordingly, market participants instead first decide upon their required risk and reward appetites and then try to find assets that match those descriptions. I suspect this, admittedly subtle, distinction leads to lots of self-reinforcing bubbles, as market participants who need, for example, 10% of their portfolio to be high return, say 20%+, then go and find assets that fit that description and collectively make those assets have those returns through their collective buying behaviour. Peter Thiel, the billionaire investor and entrepreneur, has in fact argued that one of the reasons for the housing crisis and the technology bubble is that investors are looking for massive returns where there are none. The returns should be found in investing in technologies, but Thiel argues that VCs have been too conservative, focusing on the world of bits rather than atoms, which has led to the weak returns in VC and the bubbles in other markets.

Thus my demand-derived investing approach starts with the risk-reward appetite of investors. This appetite is not so much a function of the actual risk-reward returns that are possible but rather of what risk-rewards are required to be a competitive fund. Therefore if a competitive fund requires:

  • 10% high yield (20%+)
  • 30% mid yield (10-20%)
  • 40% low yield (5-10%)
  • 20% T-bills (0-5%)

Then this will in turn determine the demand for the different asset classes that match these different yields, and my main hypothesis is that investor demand for each level of yield is the biggest factor in determining the price of assets that match each yield class.
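
As a toy illustration, the sketch below (with hypothetical weights and fund sizes) shows how such a priori portfolio requirements translate directly into the amount of money chasing each yield band:

```python
# Toy sketch: a priori portfolio requirements translate directly into the
# amount of money chasing each yield band, whether or not assets genuinely
# offering those yields exist. All figures are hypothetical.
required_weights = {           # the competitive portfolio described above
    "high yield (20%+)": 0.10,
    "mid yield (10-20%)": 0.30,
    "low yield (5-10%)": 0.40,
    "T-bills (0-5%)": 0.20,
}
total_fund_capital = 500e9     # hypothetical aggregate fund capital (£)

for band, weight in required_weights.items():
    print(f"{band:18s} demand = £{weight * total_fund_capital:,.0f}")
```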

If that hypothesis is true then:

1) The buying power of market participants (including the a priori, competitively determined portfolio yield requirements) is the primary factor in determining the prices of assets. Self-reinforcing bubble dynamics mean that simple excess demand and constrained supply can be sustained for long periods of time; thus even if high yields don't exist, the market's demand for them can make them appear (in the short run).

2) Each yield band can be treated as a largely independent market. Within each band assets are direct substitutes, because investors chase returns wherever they can find them regardless of which asset class they are found in.

3) In the long run the appetite for different levels of risk is a function of the perceived level of uncertainty. Thus even if you cannot accurately predict what will happen, you can still make money as an investor simply by tracking the increases and decreases in uncertainty around different asset classes and the resulting effect on demand and supply. I.e. demand = f(level of uncertainty), not f(what will actually happen). Thus in analyzing future events you only need to evaluate whether the outcome increases or decreases uncertainty rather than the implications of what will actually happen in the different states of the world.

NEXT STEPS

???

Systematic Framework for Narrative Investing

The most basic approach to investing is to have a linear narrative of cause and effect, where an opinion about the movement of one variable leads to a knock-on effect in the price of another variable. Of course, this sort of investing can be dangerous because it's easy to fall prey to other variables moving against you. Nonetheless, the simplicity of developing a narrative of events seems to be sufficient to make this style of investing remarkably popular. I suppose the underlying assumption is that, if your narrative is correct, then on average the other variables will be with you as much as they are against you.

Nonetheless, even simple narratives often rely on hidden assumptions, so I feel it's probably worthwhile to be systematic about laying out the narrative and the specific evidence that underpins each assumption.

Doing this, especially in highly liquid asset classes where trade volumes and consequently market discussion are so much higher, means that if there is a change in one of the assumptions you as a trader may be able to react more quickly to the changing situation.

EXAMPLE

To illustrate what I mean consider a simple narrative in which the price of iron ore depends upon the Chinese government’s decision to embark on a stimulus package or not.

As you can see, the proposed narrative is that a Chinese government stimulus package would lead to an increase in Chinese iron ore demand, which in turn leads to an increase in the iron ore price. The arrows represent the assumption that one variable acts on another (β) and, crucially, how strong this causal relationship is: for example, to what extent Chinese iron ore demand determines the iron ore price. What is really cool about this approach is that in carefully laying out the assumptions it becomes very clear what data needs to be collected to test each piece of the narrative, and in particular the individual investor can start to leverage the huge weight of academic financial literature (particularly econometrics research) that sits on the internet largely untapped. Over time this framework allows you to track a simple narrative over several months and provides a context within which to interpret news events.

For example, an increase in Australian iron ore production not only increases supply but also decreases the importance of Chinese iron ore demand in determining iron ore prices.
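
One minimal way to make such a narrative systematic is to store each causal link with its assumed strength (β) and the evidence underpinning it, and then propagate a shock down the chain. The sketch below is purely illustrative, with made-up β values:

```python
# Minimal sketch of a narrative stored as a chain of causal links, each
# with an assumed strength (beta) and the evidence underpinning it. All
# numbers are purely illustrative, not estimates.
narrative = [
    # (cause, effect, beta, evidence to monitor)
    ("Chinese stimulus", "Chinese iron ore demand", 0.8,
     "government announcements, infrastructure spending data"),
    ("Chinese iron ore demand", "iron ore price", 0.6,
     "import volumes, Australian supply (which weakens this link)"),
]

def propagate(shock, chain):
    # Multiply the initial shock through each link's beta in turn
    effect = shock
    for cause, result, beta, evidence in chain:
        effect *= beta
        print(f"{cause} -> {result} (beta={beta}): effect now {effect:+.2f}")
    return effect

propagate(+1.0, narrative)  # a one-unit stimulus shock
```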

The hidden distortion of equity markets

The structure of our capital markets and norms in accounting practices have led to a hidden distortion in our companies. Put simply, investors look for returns to equity, which involves driving down labour's share of income in favour of capital's share.

These selection pressures have, as billionaire hedge fund manager Paul Tudor Jones II has pointed out in his TED talk 'Why we need to rethink capitalism', led to a significant decline in the US share of income going to labour, from north of 64% in 1974 to less than 57% today*. As Jones goes on to say, 'higher profit margins do not increase societal wealth. What they actually do is exacerbate income inequality. And that's not a good thing.' The profound implication of this is that if labour (through human capital investment) were put at the heart of the investing world, not as a cost but as a potential return on investment, then for the first time you might expect to see the companies that reward their staff best be the best financed. The result may not just be significant changes in the incentive structures of our companies but also consequences for the level of income equality and meritocracy in our societies.

*And, by implication, potentially a way to change Piketty's formula from r > g to one where labour's share increases and perhaps r ≤ g.

What Do Banks Do? Adair Turner

This essay is on Adair Turner's essay 'What Do Banks Do? Why Do Credit Booms and Busts Occur? What Can Public Policy Do About It?', taken from the book 'The Future of Finance: The LSE Report'.

  • PART 1 – SUMMARY OF MODEL & ARGUMENTS
  • PART 2 – QUESTIONS ABOUT THE MODEL
  • PART 3 – WHAT MIGHT CAUSE THE MODEL TO NO LONGER BE TRUE?
  • PART 4 – APPLICATIONS/IMPLICATIONS OF THE MODEL

PART 1 – SUMMARY OF MODEL & ARGUMENTS

Overview

Turner’s argument is as follows

  1. First Principles: Turner lays out the theoretical roles of banks in the economy.
  2. Hypothesis tests: Then he establishes tests for measuring banking’s effectiveness at these roles.
  3. Statistics – Δ in x: Turner describes the massive developments in banking.
  4. Statistics – no Δ in y: But shows there is little evidence these changes are resulting in better economic outcomes.
  5. Null hypothesis false?: Turner questions whether the assumption that more banking is always better is true in several key areas.
  6. Solutions: Turner offers some possible solutions.

1. First Principles: Turner lays out the theoretical roles of banks in the economy

Four categories of financial system activities.

  1. Provision of payment services, both retail and wholesale.
  2. Pure insurance services.
  3. Creation of markets in spot/short-term futures instruments e.g. foreign exchange & commodities.
  4. Financial intermediation between providers of funds and users of funds, savers and borrowers, investors and businesses crucial for capital allocation.

Problems of crisis were with category 4, where intermediation of non-matching assets and liabilities entails four functions.

  1. Pooling of risks.
  2. Maturity transformation via balance-sheet intermediation – banks lend longer than they borrow. Risks are offset by the equity cushion.
  3. Maturity transformation via provision of market liquidity.
  4. Risk-return transformation – different mix of debt and equity investment options for savers than naturally arises from the liabilities of the borrowers.

2. Hypothesis tests: Then he establishes tests for measuring banking’s effectiveness at these roles.

These four transformation functions add value to the economy in three ways.

  1. Investment of pooled assets directly affects capital allocation. Although much capital allocation goes on within firms and their use of retained earnings.
  2. Maturity transformation means higher consumer welfare, particularly consumption smoothing because both savers and borrowers can get personalised maturity mix of assets and liabilities.
  3. All four factors mean individual household sector savers can hold a mix of assets different from the mix of liabilities owed by business users of the funds.

3. Statistics – Δ in x: Turner describes the massive developments in banking.

Financial intensification of the four transformation functions occurred through:

  1. Securitization pooled new assets groups e.g. mortgages.
  2. Transformed risk-return characteristics of assets through tranching.
  3. New forms of contractual balance-sheet maturity transformation through structured investment vehicles (SIVs), conduits and mutual funds which enabled short-term providers of funds to fund longer term credit extensions.
  4. Extensive trading in credit securities providing market liquidity.

Four trends in particular have occurred:

  1. Growth & changing mix of credit intermediation through UK bank balance sheets.
    • Significant financial deepening in both loans and deposits as a percentage of GDP: the UK banking balance sheet by 2007 was 500% of GDP, compared to 34% in 1964.
    • Significant increases in income leverage of both household and corporate sectors.
    • Leverage growth dominated by increasing debt levels secured against assets in both household (mortgage lending 14% to 79% of GDP) and corporate sectors.
  2. Growth of complex securitization.
    • Over the last two decades, the rise of off-balance-sheet pooling and tranching.
  3. Difficulty in quantifying aggregate maturity transformation from first two changes.
    • Nonetheless undeniably increase in scale and complexity of intra-financial system claims.
  4. Growth in financial trading activity.
    • Value of foreign exchange traded has risen from 11x global trade value in 1980 to 73x today.
    • Interest rate derivatives have grown from zero in 1980 to $390 trillion by mid-2009.

4. Statistics – no Δ in y: Shows there is little evidence these changes are resulting in better economic outcomes.

The fundamental problem is volatility in the supply of credit to the real economy and biases in the sectoral mix of that credit. It is assumed that there is a trade-off between capital requirements and credit extension, and between the risk of financial recessions and productive investment.

Bank Credit Extension

  • However, fixed capital formation in building and structures is around 6% of GDP, the same as 1964 when total lending to real estate developers was much lower and without the risk of credit and asset price cycles.
  • Gross investment in plant, machinery, vehicles, ships and aircraft has fallen from more than 9% of GDP in the 1960s to less than 6% today.

Complex Securitized Credit

  • No data?

Market making

  • High profitability of market making/liquidity provision suggests 1) end customers value liquidity 2) market makers with market share + skill can use their knowledge valuably.
  • However, what the optimal level of liquidity is remains unclear.

5. Null hypothesis false?: Turner questions whether the assumption that more banking is always better is true in several key areas.

Bank Credit extension

  • Perhaps there is no trade-off between credit extension and capital requirements. You can have the latter without losing the former.

Complex Securitized Credit

  1. Market completion.
    • Although theoretically beneficial, if complex structuring is done for tax/capital arbitrage then it is not socially useful.
    • Market completion is subject to diminishing marginal returns of increased tailoring.
  2. Increased credit extension.
    • Undoubtedly true, particularly the extension of credit to sub-prime borrowers.
    • However, lifecycle consumption smoothing benefits outweighed by credit + asset price bubbles.
  3. Better risk management.
    • Most compelling argument.
    • However two inherent problems.
      1. Maturity transformation makes the financial system more vulnerable to shocks because much of the demand for long-term securities (perhaps half) is funded by short-term money (which disappeared in the crisis).
      2. Self-referential pricing leads to greater inherent instability. Particularly as credit spreads were so clearly incorrect.

Market making

  • Benefits
    1. Increased liquidity means trading at low bid-offer spreads.
    2. Lower costs per transaction mean more trading.
    3. Liquidity is valuable because it means market completion.
    4. High liquidity means efficient price discovery.
    5. Liquidity means reduced volatility because speculators are incentivized to profit from divergences from the optimal price.
  • However benefits have limits
    1. Market liquidity, like market completion suffers from declining marginal utility.
    2. Speculation can lead to destabilizing and harmful momentum effects.
    3. Active trading creates the same volatility which customers seek liquidity to protect themselves from.

6. Solutions: Turner offers some possible solutions.

Bank Credit Extension – 4 possible approaches.

  1. Interest rate policy takes account of credit/asset price cycles as well as CPI. Downside: cannot differentiate between sectors, and has knock-on effects.
  2. Countercyclical capital requirements. Downside is not varied by sector.
  3. Countercyclical capital requirements varied by sector. Downside is credit supply from foreign banks.
  4. Borrower-focused policies.

Complex Securitized credit.

  • Borrower focused constraints as well as lender policies in case bank balance sheet capital controls are evaded by going off balance sheet with securitized credit.
  • Need to develop macroprudential tools.

Market making

  1. Set trading-book capital requirements in favour of conservatism (over liquidity).
  2. Speculation (including non-bank) should be curtailed perhaps by leverage limits.
  3. Financial transaction taxes.

Radical Reform – not sufficient.

  1. ‘Too big to fail’
    • Cost of bailing out banks is at most 2-3% of GDP. Real cost is the increase in public debt burdens by perhaps 50% of GDP because credit dries up.
    • Therefore, in future, banks should not be put into insolvency, as this will lead to a sudden contraction of lending; instead, losses should be imposed on subordinated debt holders and senior creditors sufficient to ensure that the bank can maintain operations without taxpayer support.
    • Also, lots of small banks failing, as in 1930-33, could be just as harmful as one large bank failing.
  2. Separating commercial from investment banking.
    • Separation is desirable because trading losses can lead to general credit supply constraints; however, legislated separation is neither straightforward nor sufficient.
      1. Clear distinction between proprietary trading and market-making, customer facilitation and hedging is difficult.
      2. Just as large integrated banks (e.g. Citi, RBS and UBS) played a role so did pure commercial banks (HBOS, Northern Rock and IndyMac).
      3. Destabilising interactions could still exist through the market, e.g. commercial banks originating credit and non-banks buying it.
  3. Separating deposit banks from commercial banks.
    • John Kay’s proposal is that deposit banks would be 100% backed by the government.
    • Lending banks would be funded by wholesale funds or uninsured retail/commercial deposits.
    • This would perhaps solve the moral hazard problem but not the procyclical, self-referential problem of volatile credit supply.
  4. Abolishing banks: 100% equity support for loans.
    • Kotlikoff’s proposal is that ;ending banks become mutual loan funds i.e. 100% equity funded.
    • Banks would therefore pool risks but not tranche them, but this would again not solve the stable credit supply problem.

It is not just a structural problem with our institutions but a problem with liquid markets themselves.

  1. Higher capital and liquidity requirements.
  2. Countercyclical macroprudential tools.

PART 2 – QUESTIONS ABOUT THE MODEL

  • Key question is allocation of capital. How do you measure efficiency particularly when it comes to new industries?
  • Liquidity is fundamentally exogenous shock risk.
  • Banks work on the assumption of independence, but everything is connected so nothing is independent.

PART 3 – WHAT MIGHT CAUSE THE MODEL TO NO LONGER BE TRUE?

PART 4 – APPLICATIONS/IMPLICATIONS OF THE MODEL