A friend of Eumaeus has commented on yesterday’s post, where I said “I have argued many times in the past that we should look at the default experience of the 1930s (or the 1880s or whenever) in assessing the true default risk of long term credit exposure.” He objects that the PRA have done exactly that, citing Supervisory Statement 8/18:
When using transition data, the PRA expects firms to … compare their modelled 1 in 200 transition matrix and matrices at other extreme percentiles against key historical transition events, notably the 1930s Great Depression (and 1932 and 1933 experience in particular). This should include considering how the matrices themselves compare as well as relevant outputs…
Very well, but the exam question posed by Craig Turnbull was whether the MA, whose purpose is to provide a measure of long-term credit default risk, really delivers a good measure of that risk.
A pretty much standard answer is that it does. Many academic studies find that when structural models such as the Merton model are calibrated to match historical default rates, they fail to explain the level of observed credit spreads, a result referred to as the ‘credit spread puzzle’.
Such studies typically make use of Moody’s historical default rates, measured over a period of around 30 years starting in 1970. For example, in a seminal paper on the puzzle, Peter Feldhuetter and Stephen Schaefer note that Chen, Collin-Dufresne, and Goldstein (2009) use default rates from 1970-2001 and find BBB-AAA model spreads of 57-79bps (depending on maturity), which are substantially lower than historical spreads of 94-102bps.
These findings might justify the MA approach. If the ex ante market spread consistently overestimates the ex post realised default rates, then we could take some fraction of that credit spread, i.e. the fundamental spread, as representing the ‘true’ likely default risk, with the residual being the MA. Waving aside other considerations, such as value to prospective shareholders, which I consider elsewhere, the conclusion seems to follow from the assumptions.
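To make the decomposition concrete, here is a minimal sketch in Python. All the numbers are illustrative assumptions of mine, and the expected-loss formula is a back-of-envelope stand-in, not the PRA’s actual fundamental spread methodology:

```python
# Back-of-envelope decomposition of a corporate bond spread into a
# 'fundamental spread' (expected default loss) and a residual MA.
# All numbers are illustrative assumptions, not regulatory figures.

def expected_loss_spread_bps(annual_default_rate, recovery_rate):
    """Expected annual loss from default, in basis points."""
    return annual_default_rate * (1.0 - recovery_rate) * 10_000

market_spread_bps = 100.0     # assumed market spread on a BBB bond
annual_default_rate = 0.002   # assumed long-run annual default rate (0.2%)
recovery_rate = 0.4           # assumed recovery on default

fs_bps = expected_loss_spread_bps(annual_default_rate, recovery_rate)
ma_bps = market_spread_bps - fs_bps

print(f"fundamental spread: {fs_bps:.0f}bps")   # 12bps
print(f"matching adjustment: {ma_bps:.0f}bps")  # 88bps
```

On these assumptions the insurer books 88bps of the 100bps spread as MA; the whole argument turns on whether 12bps really captures the ‘true’ default risk.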
But does the post-1970 default experience really represent the true risk? Feldhuetter and Schaefer found that if we use Moody’s default rates for 1920-2001, model spreads are 91-112bps and in line with historical spreads. The appearance of a credit spread puzzle ‘depends strongly on the period over which historical default rates are measured’. Thus the fundamental spread may provide a ‘good measure’ of credit default risk when calibrated to default rates in the post-1970 (indeed the post-WW2) period. It doesn’t provide a very good measure when calibrated to a period which includes the experience of the 1930s. Feldhuetter’s results suggest we need the whole market spread, i.e. the fundamental spread plus the MA.
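A toy illustration of that sensitivity, feeding the same expected-loss formula with made-up long-run average annual default rates for the two sample periods (my own placeholders, not Moody’s figures):

```python
# Same expected-loss formula, fed with made-up average annual BBB
# default rates for two sample periods (placeholders, not Moody's data).
recovery_rate = 0.4
samples = {"1970-2001": 0.0018, "1920-2001": 0.0030}

for period, pd_annual in samples.items():
    fs_bps = pd_annual * (1.0 - recovery_rate) * 10_000
    print(f"{period}: implied fundamental spread ~ {fs_bps:.0f}bps")
# 1970-2001: ~11bps; 1920-2001: ~18bps

# The longer sample, which includes the 1930s, implies a materially
# larger 'true' default component, and hence a smaller residual MA.
```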
It doesn’t stop there. The figure above is from a wonderful paper by Kay Giesecke, Francis Longstaff, Stephen Schaefer and Ilya Strebulaev, looking at a whole 150 years of credit default data. As you see, the 1930s may have been a bloodbath, but it was a walk in the park compared with the great busts of the 19th century, mostly connected with railway (or ‘railroad’) speculation.
As shown, the U.S. has experienced many severe default events during the study period. The most dramatic of these was clearly the catastrophic railroad crisis of the 1870s that followed the enormous boom in railroad construction of the 1860s. This railroad crisis lasted an entire decade, and two years during this period had default rates on the order of 15 percent. In fact, default rates during the three-year 1873–1875 period totaled 35.90 percent. In contrast, default rates for the worst three-year period during the Great Depression only totaled 12.88 percent, and this three-year period only ranks in fourth place among the worst three-year default periods during the study period.
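An aside on the arithmetic: the paper’s three-year ‘totals’ read as sums of annual default rates. A quick sketch, using annual rates of roughly the magnitude quoted (my own placeholders, not the paper’s actual series), shows how the summed figure compares with the compounded cumulative default fraction:

```python
# Illustrative annual default rates of roughly the magnitude quoted
# for 1873-1875 (placeholders, not the paper's actual series).
annual_rates = [0.15, 0.15, 0.059]

summed = sum(annual_rates)           # the 'totaled' figure

survival = 1.0
for d in annual_rates:
    survival *= (1.0 - d)            # fraction surviving each year
cumulative = 1.0 - survival          # compounded default fraction

print(f"summed annual rates: {summed:.2%}")       # 35.90%
print(f"compounded fraction: {cumulative:.2%}")   # 32.01%, slightly lower
```

Either way you count it, roughly a third of the market defaulting inside three years dwarfs anything in the 1930s.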
Yikes. So the exam question is not what happened in the 1930s, but rather what would happen to the typical MA portfolio supported by corporate credit in the event of widespread defaults like those of the 1870s and 1880s. Remember that Solvency II capital requirements are based on a 1-in-200 event: counting back 200 years from 2019 covers the period 1819–2019, and my maths suggests that 1873-75 falls bang in between. Has the PRA looked at that?
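To spell out the window arithmetic, and what a ‘1 in 200’ annual event implies over such a window (under a naive independence assumption of mine, not anything in Solvency II itself):

```python
# The 200-year look-back window for a 1-in-200 event, ending in 2019.
end_year = 2019
start_year = end_year - 200                       # 1819
print(start_year <= 1873 and 1875 <= end_year)    # True: 1873-75 is inside

# Naive assumption: a '1 in 200' event hits independently with
# probability 1/200 each year. Chance of at least one in 200 years:
p_at_least_one = 1.0 - (1.0 - 1.0 / 200) ** 200
print(f"{p_at_least_one:.0%}")                    # ~63%
```

In other words, on that naive reading you would more likely than not see at least one such event somewhere in the window, and the 1870s bust is sitting right there in the data.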
Actually the PRA did look at that. But the mood at the time was that the 1880s was about railways and steam engines and stuff, the 1930s was perhaps about construction or roads, who can say, whereas today it’s high tech and service industries and whatnot, completely different economies. It was all too long ago, times have changed, this time it’s different, etc.
Yet is it really different? The economics is about how well people assess risk, and that fundamental fact doesn’t change whether we are assessing the profitability of steam engines or of the internet. Actually the two are remarkably similar. For the railways, the promoters looked at how many people would come to market, walking or driving sheep, at risk of bandits, highwaymen, rustlers and so on. Then they estimated how many would prefer to go by rail, with no risk of bandits, only the smoke and the occasional explosion or derailment. Based on the contracting costs and the finger-in-the-air ticket price, they would estimate the profitability.
This of course went horribly wrong. Ticket revenues were not as high as projected, and the contractors overran, as contractors do. How different is that from assessing the profitability of a startup? You estimate how many people would prefer to use the software rather than pencil and paper, and you work out the cost of contracting the IT work. Pretty similar, no? How many IT projects actually come in on time and on budget? How many startups deliver a product that people actually want?
This brilliant paper by Andrew Odlyzko is a comprehensive study of how investors got it so badly wrong in the collapse of the late 1840s ‘new railway economy’.
Nothing has changed, in my view.