ARC ERM Launch Event 28th February 2019
This is a partial transcript of the presentation launching the ERM research paper at Staple Inn on 28 February 2019.
The presenters were: Andrew Rendell (AR), Gina Craske (GC), Radu Tunaru (RT), Gareth Mee (GM), Tom Kenny (TK), Malcolm Kemp (MK).
Rendell and Craske are members of the Independent Review Committee. Kenny is chair of the ERM working party. Tunaru (Kent University) performed the research, Mee was chair.
03:36 First presentation Gina Craske, setting out the research project aims and objectives.
GC: In terms of background, given the increase in the equity release market and the increased focus from the supervisor in relation to the valuation of the asset under Solvency II, a working party was set up to build on, really, the Hosty paper from 2008. You realise that Hosty paper is 11 years old now. The aim was to consider the methodologies and bases to be used for different metrics for equity release assets, and that included looking at the difference between how companies set their assumptions, because we were finding there were some inconsistencies. It wasn't long before the members of the working group realised that we actually weren't qualified enough to really and truly look at house price projections and modelling and volatilities etc. We really didn't have the qualifications. So we asked the Institute and Faculty of Actuaries if we could actually have some funding to get some academic research done. And the IFoA agreed to that funding and indeed actually asked the ABI to partner with them on that, and that's where the research, the results of which are being shown today, all started.
So the objectives of that research, at a very high level, were to look at the no negative equity guarantee. The areas we did want considered were: an appropriate stochastic model to use, for example; what parameters as well as house price inflation should be considered stochastically; the use of practical approaches including closed form solutions as well as Black-Scholes; and consideration of real-world and risk-neutral approaches.
Those are the main objectives.
Now I guess as the project went on, we realised that there wasn't really enough time to give enough effort to looking at the stochastic nature of things like prepayment, because if we had gone down that rabbit hole I think we would have been there forever, because there was already enough to be done. So some work was done on that, but I think that's going to be something for further research. So in terms of timelines, you can see that it took us about two months in the end to award the actual job to the Kent Business School.
A robust process was undertaken to actually choose who would do this work. We looked at – there were a number of proposals – and we looked at the ones that had the most experience, and indeed independence. If you notice on that timeline, choosing who did the work actually took about four times as long as it did to get the first set of results out. That was quite a long period! That’s not to say though that the work that occurred between the first set of results and February wasn’t significant. There was a lot, a lot of discussion, it was massively full on, a lot of interaction with the researchers and with the research group.
In terms of governance, independence was actually a key aspect when considering the governance structure. A number of the review group were – are – actuaries, either in industry or in consultancy. I guess this is pretty inevitable really, because who else has got that interest and the drive to actually get that guidance out there. I don’t know any practitioners that don’t want a more consistent approach and more guidance on these assets, because they are very very difficult to actually value. So as well as the usual adherence to the profession’s code, the group was co-chaired by the IFoA and the ABI, with formal reporting through the IFoA and the ABI, and the project was managed by Vanessa from the IFoA as well.
Now, before I just hand over to Radu, from a personal perspective I can say this is one of the most interesting projects I have been involved in with the Institute. There has been a massive amount of external interest in this – it's quite overwhelming, it's quite amazing – and the time and effort that has been put into this by the review group has been astounding, I mean a lot more than on any other of the working parties I've been on. And you can really see that they care, they really care about getting a robust approach out there, to the degree that there were a lot of heated debates, and I can't say that there was always 100% consensus on what was in the report. But I hope that, like me, even if you don't agree with everything that's in it, you do find it a very valuable source of information for further debate, and that it does make you think.
9:06 Research project results (Radu Tunaru, omitted).
39:05 Questions to panel, introduction of panellists.
41:30 TK: So I think clearly from a practical point of view there is a strong benefit in having a closed form: it's easier to use in pricing and other applications within actuarial work. It's also the methodology that is currently used by the majority of practitioners in the UK, so that's just the pragmatic reality of it. Does that mean that we shouldn't look to move to new, potentially more accurate models? I would say it shouldn't stop us from doing that. Perhaps a compromise would be to use the more accurate models – these simulation models – to calibrate closed form solutions, so that they are appropriate for the economic conditions they are used in, and for the duration, LTVs and rates of the ERMs being valued and priced.
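[Editorial sketch: one minimal way to implement the compromise Tom describes – using a simulation model to calibrate a closed-form solution – is to solve for the Black-76 volatility that reproduces the simulated guarantee cost for a representative cohort. All figures, and the deferred-possession framing of the forward, are illustrative assumptions, not the working party's or the paper's method.]

```python
# Illustrative only: back out the closed-form (Black-76) volatility that
# reproduces a NNEG cost produced by a fuller simulation model, so the
# closed form can then be used for routine pricing and valuation runs.
# All inputs are hypothetical placeholders, not recommended assumptions.
from math import log, sqrt, exp
from scipy.stats import norm
from scipy.optimize import brentq

def bs_put(forward, strike, vol, term, rate):
    """Black-76 put on a forward value, discounted at the risk-free rate."""
    d1 = (log(forward / strike) + 0.5 * vol ** 2 * term) / (vol * sqrt(term))
    d2 = d1 - vol * sqrt(term)
    return exp(-rate * term) * (strike * norm.cdf(-d2) - forward * norm.cdf(-d1))

# Hypothetical cohort: property 100, loan 40 rolling up at 5% p.a.,
# expected exit in 15 years, risk-free rate 2%, deferment rate 1%.
S0, loan0, roll_up, T, r, q = 100.0, 40.0, 0.05, 15.0, 0.02, 0.01
forward = S0 * exp((r - q) * T)      # deferred-possession forward value
strike = loan0 * exp(roll_up * T)    # rolled-up loan balance at exit

simulated_nneg = 5.0  # pretend this came from the full simulation model

implied_vol = brentq(lambda v: bs_put(forward, strike, v, T, r) - simulated_nneg,
                     1e-4, 2.0)
print(f"closed-form volatility calibrated to the simulation: {implied_vol:.1%}")
```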
So I think that is probably the way it needs to go from a practitioner point of view. I do think there's probably further work that needs to be done by the working party to actually look at how we would want to implement this, how we would recommend that the profession looks to implement this in its day to day work, because it's not obvious, and I think there's a real capability gap. Gina mentioned it: the working party, which is made up of a large number of practitioners from the UK industry, didn't feel that they had the skill set to do this research, and that's why Radu was involved.
But you could also say there are a large number of people – outside of the working party as well – who don't necessarily have the skill set to take the model on, even though Radu, who is a professor of finance, would say it's fairly straightforward to apply. So I think the profession has a role to actually provide some education, and hopefully that's what the working party will do – actually upskill the UK profession, so that we have the capability to use these more sophisticated models.
GM: Any advance on Tom’s answer?
AR: Yes, I agree with all the points about practicality, and with Radu's point about using an implied volatility surface as a means of communication. I guess the other practical point is that, in terms of internal models, where you need to re-run values under a large range of scenarios, and where you need to do balance sheet projections to project the NNEG cost into the future, that's got to be a lot easier with a closed form solution; so I think there's a calibration and validation role where the more sophisticated models would be useful.
GM: So I guess, Gina, maybe you can elaborate on that. Perhaps people in the audience might be sitting here thinking that this could be a very large amount of work to go and recalibrate these models, rebuild them all, when they have probably just spent quite a large amount of money on Solvency II models and maybe IFRS models going forward. So how do you evaluate the cost to the industry versus the potential benefits that Radu, Tom and Andrew have outlined?
GC: Good question. I don't think anybody at the moment is suggesting a wholesale change in the way anybody does their modelling. We are not there yet. But the reason 'it's easier to use' probably won't wash going forward. I think that, unlike a lot of models that we use as actuaries – so for example annuity models – there is a lot more expert judgment in the models that we use for equity release, the NNEG side of the equity release asset, than in these other models.
And good governance and good procedures mean that really you should be testing alternatives, and what these alternatives do to your results. Under Solvency II rules, for example, you are meant to be testing alternatives for your expert judgment. That doesn't mean building a whole suite of models to actually run all of your equity release assets through, but really and truly I do think we need to be thinking about what the impact of other models is, not just in terms of the house price inflation, but also in terms of what happens with prepayments under a stochastic model. There are a lot of high loan-to-value policies that can't actually prepay that easily, because nobody will want to take them on. So there is, I think, some work to be done to actually test what we are doing against alternative scenarios and alternative models. I don't think the cost of that would be anywhere close to what's been spent on Solvency II generally, or IFRS 17. There would be an opportunity cost. We might need to get somebody in who's quite young and understands some of this stuff, unlike some of the older people like me who have had to catch up with it, but in terms of the actual benefits, you will have alternatives, you will be able to understand the business a lot better.
GM: So Radu, you've presented a wonderful model, but maybe there are still some weaknesses in there. Could you just cast some light on any limitations in what you have proposed so far?
RT: Yes, I mean everything comes at a cost, so there is a problem with the software that people should know about. When you calibrate these models, if you try the same model in Matlab or in R or in Eviews or in any other statistical package, to our surprise you don't get the same results. There are two levels of understanding here. One is to understand what needs to be done, just to get an understanding. But then, when it comes to putting products and money and so on behind it, I think there is a need for very close scrutiny and very possibly some kind of industrial-strength programming using Python, C++, etc. Or, some of these packages are commercial packages, so if you are comfortable with what Matlab is producing, you stick with that. If you are happy with what Statlab or SAS is producing, you stick with that. So that's one problem that is still there.
The second one is the relationship between house prices and interest rates, because house prices decline … but also there is a clear impact from interest rates which is difficult to model, because the frequency is not the same. For interest rates you have data and plenty of models on a daily basis, whereas for house prices the information changes in a very patchy way – at best monthly, and even there, if you look at the signal, it stays very flat for a long period. So it's difficult to put two and two together; there is more work to be done, and care needs to be taken not to inject a lot of model risk through the back door by using super-sophisticated interest rate models. So I would say this still needs to be worked on.
GM: Maybe with that in mind, Malcolm, do you see any merit in considering other alternative models such as, for example, the Heston model that’s used in investment banking?
Kemp: Thank you. Can I first of all say that I think this is a very interesting and detailed paper, but I must say that there are some areas where I think it has overegged the level of detail. So, for example, on the question of using EGARCH and that type of approach, I just looked up Hull and White, which is a standard textbook, and it points out that you can use relatively standard volatility-of-volatility models, of which Heston is one.
If the world is such that volatility of volatility is uncorrelated with asset prices, then you can in fact work out the answer as a kind of average of the Black-Scholes-type values over the volatilities, so you can end up with a model that is materially simpler. When I came to look at the analysis, I was still slightly surprised to find the low level of volatility that appears in the baseline assumption, and I think here it's very important to explore this desmoothing concept. Essentially … back in the 1990s, I think, people started looking at a hypothetical sort of option called a mileage option, and what that highlighted was that (50:40) the way you hedge positions depends on a quantity called the quadratic variation – the cumulative quadratic variation, which is the cumulative sum of the squared instantaneous movements in the price as time goes on – and you basically spend this quantity as you dynamically hedge the option.
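[Editorial note: in symbols, the standard result Malcolm is alluding to (a sketch of the stochastic-volatility 'mixing' argument, not anything specific to the ERM paper) is that, if the volatility process is independent of the asset's own driving randomness, the option price is the Black-Scholes price averaged over the distribution of the mean variance over the option's life,
$$ C \;=\; \mathbb{E}\big[\,C_{BS}(\bar\sigma)\,\big], \qquad \bar\sigma^2 \;=\; \frac{1}{T}\int_0^T \sigma_t^2\,dt, $$
and the quantity "spent" in dynamic hedging is the cumulative quadratic variation of the log price, $\langle \ln S\rangle_T = \int_0^T \sigma_t^2\,dt \approx \sum_i \big(\ln S_{t_{i+1}} - \ln S_{t_i}\big)^2$.]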
So [for] most option pricing, if you can work out what this quantity is, and you can measure it, then you can use formulae akin to Black-Scholes. How is that relevant to the desmoothing question? Well, in order to work out this quantity you basically need some kind of measure of the instantaneous way in which property prices are responding to the world. You don't have that information to hand – you've just got these smoothed series – so the typical way of addressing this is to desmooth the data.
The other thing to say is that with a baseline assumption of – I think it's about 4% in the paper – if that really were the case, then the thing I struggle with is that I would expect pension schemes and the like to invest bucketloads in this asset class, because it would appear so favourable. That suggests that there is something awry, and in fact when pension funds and other asset and liability investors do their modelling, they typically uprate the volatility of property, and one of the theoretical justifications is this desmoothing. So desmoothing seems very important to me, more so than the complexity of the model. Maybe if there were advantages in going down a Heston route then that might make it simpler to understand for some practitioners, but as I said, you can end up with a similar answer if you can just work out the average of this cumulative variation, as is pointed out in Hull and White.
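[Editorial sketch: a minimal illustration of the kind of desmoothing adjustment Malcolm describes, assuming a simple Geltner-style AR(1) smoothing of the appraisal-based index; the smoothing estimate and the sample returns below are purely illustrative, not the paper's data or method.]

```python
# Illustrative Geltner-style desmoothing of an appraisal-based index,
# assuming observed returns follow r_obs[t] = (1-a)*r_true[t] + a*r_obs[t-1].
# The smoothing parameter a is estimated here as the lag-1 autocorrelation;
# this is one common convention, not the research paper's method.
import numpy as np

def desmooth(observed_returns):
    r = np.asarray(observed_returns, dtype=float)
    a = np.corrcoef(r[1:], r[:-1])[0, 1]        # lag-1 autocorrelation as smoothing estimate
    r_true = (r[1:] - a * r[:-1]) / (1.0 - a)   # invert the smoothing recursion
    return r_true, a

# Hypothetical quarterly index returns (smoothed-looking, positively autocorrelated)
obs = [0.010, 0.012, 0.011, 0.009, -0.002, -0.004, 0.003, 0.008, 0.010, 0.011]
unsmoothed, a = desmooth(obs)
print(f"estimated smoothing parameter ~ {a:.2f}")
print(f"observed vol {np.std(obs, ddof=1):.3%} vs desmoothed vol {np.std(unsmoothed, ddof=1):.3%}")
```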
53:00 Mee: I would really like to come back to this point on parameters. I think I’m keen just to hear your views Andrew as a practitioner, given the slightly different argument presented by the two panellists so far.
Rendell: I think, to pick another point on the weaknesses, there's a point around granularity. So one can model based upon an index, but there are houses in different geographical locations that behave differently, and individual houses within that behave differently, so I think there's a point there that should be reflected in the volatility assumption. I think my other point around understanding the model weaknesses is perhaps one of just getting under the skin of it and doing some diagnostics, so I would like to look at the future growth assumptions and the percentiles around that, just to see how that would compare to the GBM model, just to understand it. And perhaps one of stability: if you use different data periods, how much does that change the model? If you get a year or two of new data, how would that flex the parameters and responses? And if you get a step change in economic conditions, how would it react to that? So I think there's a point there about just seeing how the model will behave over time, I guess.
GM: I guess this is the first stage of the research and I think you propose some interesting further work. I would like to move on to another area that your paper goes into, which is that it proposes more robust statistical models and more robust parameter estimation techniques, for example maximum likelihood estimators. And I guess maybe a slightly controversial question: has the industry been too slow to review its methodology and advance with the times?
RT: Well, it's not only the industry, it's also academia; I mean in many academic papers you will see very little detail on goodness of fit in general. People just report the parameters … and give you the significance, and that's it. In the industry, which has been dominated by other asset classes in terms of financial engineering – equity, interest rates and FX – people have the luxury of recalibrating their models daily. So the next day, tomorrow, they come back to work, they look at the market, the new information, and they recalibrate. So basically they take model risk into account on the go. Here you cannot do that on the go, because you do not have the information. So model risk is vital, it is present, and it needs to be looked at.
Secondly, this is a slow business [?] asset class, which is still very big and – I don't know – what people don't realise is that this market is 50% of total wealth. So the market is extremely important, but it's not granular, it's not fungible; it's exactly everything that equity and FX are not. So the models have to be looked at properly. In terms of estimation, it is up to each trading desk and institution to see how comfortable they are to take on these costs and whether to look more frequently or less frequently. It's not for me to say what they should do. But I think, as I highlight in my book, this is a major risk, because people simply ignore it: they just press a button, and most of the time they don't even estimate. They just take the sample value, right? So they don't go through a procedure or anything.
And probably the issue is partly one of resource in academia too, because in academia now we have somehow forgotten how to do the analysis properly, like you would in a statistical modelling 101 course. So, with time, everybody seems to be very comfortable. I'm not comfortable at all; I'm not comfortable with maximum likelihood. There are plenty of situations where you do not get a number, so people go on the screen there, and there is plenty of anecdotal evidence in the equity markets and so on where the parameters just went into the millions, the values just move like that.
I think it's a real issue. I have tried to offer a solution – to ask the research council to establish an open platform, where people would take all asset classes and all models and highlight the advantages and disadvantages, with open code, everything that can be verified. So collectively we would be a lot better off, because, you know, if I want to estimate, let's say, the Heston model in property markets, I just go there, see if Heston is there – no, it's not – maybe I'll take it from somewhere else and look at it. So I think the issue is still present, but to my mind nobody wants to look at it.
GM: Thank you. I'm going to put another one of the big issues on the table, and I'm going to ask you, Tom. One of the actuarial profession's favourite debating topics is whether risk neutral or real world approaches are more appropriate. The paper doesn't quite come to a conclusion on that, but I'm interested in getting your view, Tom, on that particular question.
59:10 TK: So, I think the risk neutral, arbitrage-free approach which is covered in the paper is effectively quantifying the cost to hedge the guarantee – if there were a market, what would be the market cost of that guarantee – whereas how insurers currently value lifetime mortgages is basically to look at what they expect the best estimate cashflows of those mortgages to be, and then to calculate the yield on those mortgages so that they can calculate an illiquidity premium, and that's what flows into the discounting of the liabilities that are being backed by these assets.
They come up with very different answers. The risk neutral approach will produce a higher cost of the NNEG.
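[Editorial note: a stylised way to write down the two approaches Tom contrasts (a sketch only, not the paper's or any firm's exact formulation). The risk-neutral cost treats the guarantee as a strip of put options over possible exit dates,
$$ \text{NNEG}_0 \;=\; \sum_t p_t\, e^{-rt}\, \mathbb{E}^{\mathbb{Q}}\!\big[\max(K_t - S_t,\,0)\big], \qquad K_t = L_0 e^{g t}, $$
with $p_t$ the probability of exit at time $t$, $K_t$ the rolled-up loan and $S_t$ the property value at exit, each put often evaluated with a Black-76 formula on the deferred-possession value $S_0 e^{-qt}$. The current discounting approach instead projects best-estimate cashflows $\mathbb{E}^{\mathbb{P}}[CF_t]$ (redemptions net of expected NNEG shortfalls), solves $\text{price} = \sum_t \mathbb{E}^{\mathbb{P}}[CF_t]\,(1+y)^{-t}$ for the yield $y$, and reads off an illiquidity premium as the excess of $y$ over the risk-free rate.]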
Matching Adjustment isn't calculated on that basis; it's not a purely market consistent approach. And I don't think this is really a purely technical debate – this is more around what UK policy makers and regulators want the life insurance industry to be using to calculate the value of its liabilities, long term insurance liabilities. Because clearly, if we move down a purely market consistent route, which is based on the cost of hedging guarantees, it's going to be extremely expensive, and you could argue that that would impact annuity rates, making them lower; it could also reduce the availability of equity release mortgages, which I think most people would argue is not in the public interest. Also, you could say that other regulatory regimes around the world don't take this approach. The UK is not, you know, a regulatory island – it's not isolated, you can transfer risk offshore – and I think that would not be in the public interest either, because then that would be profit transferred into other tax regimes. So I think it's not really a technical debate, it's more of a policy debate, and I think the IFoA has a key role to play in this, to really explain to policy makers what the impacts of the two approaches are.
So I would say what hasn't come out of the paper is what the difference between those two approaches is, and that's something the working party wants to look at – to say: what is the difference between the two approaches – and then have that policy debate. Because, you know, market consistency is obviously where we are today in a large part of the financial industry, but market consistency doesn't necessarily work. [The] global financial crisis was largely driven by the investment banks and the banking approach to modelling risk, and that's purely a market consistent approach.
So I think you really have to say: look, using a purely formulaic approach to calculating risk is dangerous, as highlighted by the global financial crisis, and what we really need to look at is whether there should be a framework within which insurers have to calculate the value of these risks. The PRA have come up with one, but taking any judgment away from professionals who have a long term view of risk possibly isn't the right way to go about doing it. I think that's a debate that the profession has to have, and then it needs to take it to policymakers and say: which version of the future would you like – a future where you have a thriving equity release market and a thriving annuity market that helps consumers to protect themselves against longevity and investment risk, or a future where it's too expensive to buy these long term insurance products? And I think that's the essence of this debate.
Gareth Mee: Reasonably strong views there Tom [laughter from the audience]. Andrew, how do you possibly add to that.
1:03:14 Andrew Rendell: Um, I think I will go down more of a technical route rather than the political route, and probably use the Solvency II rules as the backdrop. So, if you think of what Solvency II is trying to do, you've got some assets and you are trying to measure the economic worth of those assets and compare them with your liabilities. So, if you have an equity share on your balance sheet, what's the economic worth of that? I think you would probably quickly accept that the economic worth to the insurer is probably about the same as the economic worth to anybody else, and therefore a good measure of that economic worth is the market price of the share – that seems fairly straightforward.
I think where it gets more complicated is when you look at the fact that insurers have long term liabilities – they have long term illiquid liabilities, and they are matching them with illiquid assets, so there is a synergy there when you bring those two sides together. So let's just look at the Matching Adjustment concept. What that's saying is: if you have a corporate bond, is the economic worth to the insurer the same as it is to everybody else? Arguably it isn't, and the reason for that is that a typical market participant will apply a discount to the price they would be prepared to pay for it, because that corporate bond has risks around liquidity, and it has risks around price volatility over the duration of the asset.
The insurer says 'well, I don't care about that, because I'm going to hold my asset to maturity, and therefore I don't need that discount, so the corporate bond is worth more to me than it is to a typical participant'. So that's what the Matching Adjustment does; that recognition is expressed through an adjustment to the liability rather than an adjustment to the assets, but in a sense that's what is going on.
So the question then is how does that map through to the ERM, in particular the property side of ERMs. If there is a very high loan to value and what you are going to get back is the property, what's the economic worth of that property?
Now, I think I accept the principle that if you had an open market property today, the worth to the insurer of that asset probably is the same as the worth to anybody else, and therefore the market value of that property is a good place to start. I think where it gets more complicated is when, in the context of ERM, you are saying: actually, I don't have that property today, I am going to take control of it in some years' time, in 20 years' time, and therefore the deferment rate concept brings a discount to that.
And I think you could argue that that discount is doing a number of things. Firstly, it's allowing for the loss of income and the loss of control of that asset over that period; but also, arguably, it's a discount for the fact that you are not going to see any cash out of that asset for some time – you can't sell it – so to many investors that would be quite a significant disadvantage, but maybe less so to an insurer that has long term liabilities and can just wait for that value to emerge. So under a market consistent mindset I do think it's reasonable that the insurer's view of the deferment rate would be lower than a general market participant's view of the deferment rate, because of those illiquidity-type issues.
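[Editorial note: in symbols, the deferment idea Andrew describes is simply that the price paid today for possession of the property in $T$ years is $D(T) = S_0\,e^{-qT}$, where $q$ is the deferment rate; a positive $q$ produces the discount to today's market value, and the question raised here is whether an insurer with long-term illiquid liabilities can justify a lower $q$ than a typical market participant. This is a sketch of the concept, not a statement of the paper's or the PRA's calibration.]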
GM: Gina, so, we’ve already talked a bit about volatility, but the paper definitely proposes some different parameters to what I’m sure you are used to seeing in the market and certainly different to those proposed by the PRA, so should insurers be altering their parameters?
GC: Yes, tomorrow! That's a joke. What I was going to say next is that we can't just lift assumptions out of a paper that comes along and then just use them the next day, the next week, the next month – although I'll caveat that with the Hosty 2008 paper, where there are some companies still doing exactly that. In addition, what the PRA have set out is a diagnostic test; it is not meant to be what we consider to be the fair value of the asset, and I think we sometimes forget that. Having said that, the paper comes out with some relatively low volatilities that we are not used to seeing, and I think there needs to be more work done around that – around the sort of single-house price behaviour, dilapidation, older people living in the property. That hasn't really been factored in at all. The de-smoothing thing – I thought I understood that until a few minutes ago. So the answer to that is, I guess, no. And we also have to sit back a bit, because I always keep forgetting that what we are trying to do here, under Solvency II and generally under IFRS, is actually to set out what you think is the fair value of a third-party arm's-length transaction effectively, and although there's no liquid market in that, there are some transactions out there, and you have to think: did they go off and use a full-blown stochastic model, did they use a deterministic scenario, did they do this, did they do that?
It feels to me that we’ve got a lot of things pulling at how we do our valuation, we’ve got the no day one gain under IFRS, we’ve got the EVT under Solvency II, we’ve got real world, risk neutral, and I can really see this – this paper’s really good to get this discussion out, and I do urge you all to keep an open mind when you look at it, because we don’t always – sometimes we’ve got a very narrow way of thinking, but I do think that there’s more to go, and this may go on for longer than the Brexit discussions.
1:10:09 GM: So Radu, I guess a few different views on volatility parameters: I think we've heard that volatility could potentially be higher, we've heard de-smoothing, dilapidation, individual houses, regional variation. Given what you're doing, do you think there's an argument that the volatility they should use might be different to the one you've suggested in your paper?
1:10:30 RT: Dilapidation to me is just a haircut that is applied at termination. It's just like recovery in credit markets. So you go there and you see what it is: OK, you don't get the par value of the house you expect, maybe you get 90%, although the contract says – that's why we charge the roll-up deal – that every year we go in and check the maintenance. But I am happy, modelling-wise, to say that dilapidation is 10%, 5%, 20%, whatever practitioners decide it is.
Then, when it comes to the volatility of the house price, our modelling was done on the index, and of course the volatility of the index versus the volatility of a single house is different. And, you know, we teach our students – Hull etc. – that the volatility of the index is lower than the volatility of one constituent, the classical example, because of diversification. But if we look at that a little more closely, diversification also means you have negative correlation – different assets being negatively correlated. If you only have three assets, not all three pairs can be negatively correlated, so there is a lot of positive correlation in the basket of house prices in the index.
So if people want to go granular, that's fine with me, but I would like to see some volatilities higher than the index, and I would also like to see the volatilities of some houses below the index, because that is how the index volatility comes out at what it is. There is no other way round; that's an estimation procedure. We have it already. There is geographical variation; that should be taken into account.
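[Editorial note: Radu's consistency argument can be written with the standard portfolio-variance identity. For index weights $w_i$, $\sigma_I^2 = \sum_i\sum_j w_i w_j \rho_{ij}\sigma_i\sigma_j \le \big(\sum_i w_i\sigma_i\big)^2$, so the index volatility cannot exceed the weighted average of single-house volatilities; and the closer the (largely positive) pairwise correlations $\rho_{ij}$ are to one, the closer the index volatility sits to that weighted average, so single-house volatilities well above the index have to be offset by others near or below it if the two levels are to reconcile. This is a general identity, not a result from the paper.]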
Now, coming back also to the dilapidation: if you apply dilapidation, where is your idiosyncratic difference coming from? Is there double counting here? Because if I have a house, maybe one borrower pays the equity release mortgage loan and spends a lot on maintenance, improving the house. Maybe another borrower doesn't do anything and just lets the house dilapidate. So this variation is related to the dilapidation channel. If I apply dilapidation at the end, I cannot also apply increased volatility all the way through. So I think there should be some careful consideration here. When you go down to the house level, yes, volatilities could be higher. I hope people will also agree they should also be lower from time to time, because otherwise you won't be able to calibrate at the index level. It's as simple as that.
The values we reported, we reported over different periods of time. The period after the subprime crisis was very benign; that's why the volatility, as it is now, is relatively small. But if you go back to '74, to '52 – I mean, people have challenged me on this already, actuaries, practitioners and so on – that's why we did … in the report, to highlight that it is an issue: it is very dependent on the period, on the frequency, on the method of estimation.
But also you have to remember, if we just put in double the value [of volatility], we should see the NNEG value go three times, four times higher. It's as simple as that. So I would say, yes, maybe more research is OK, but for going down to the individual house there is no clear mechanism.
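[Editorial sketch: purely to illustrate the kind of sensitivity Radu mentions – with made-up inputs and a simple Black-76 put on a deferred-possession forward, not the model proposed in the paper – doubling the volatility input in a stylised NNEG calculation can multiply the guarantee cost several times over when the guarantee is well out of the money.]

```python
# Illustrative only: sensitivity of a stylised NNEG cost to the volatility input.
# Inputs are hypothetical and the valuation is a simple Black-76 put on a
# deferred-possession forward, not the model proposed in the paper.
from math import log, sqrt, exp
from scipy.stats import norm

def nneg_put(S0, loan, roll_up, T, r, q, vol):
    forward = S0 * exp((r - q) * T)    # deferred-possession forward value
    strike = loan * exp(roll_up * T)   # rolled-up loan balance at exit
    d1 = (log(forward / strike) + 0.5 * vol ** 2 * T) / (vol * sqrt(T))
    d2 = d1 - vol * sqrt(T)
    return exp(-r * T) * (strike * norm.cdf(-d2) - forward * norm.cdf(-d1))

base = nneg_put(S0=100, loan=37, roll_up=0.05, T=20, r=0.02, q=0.01, vol=0.05)
doubled = nneg_put(S0=100, loan=37, roll_up=0.05, T=20, r=0.02, q=0.01, vol=0.10)
print(f"stylised NNEG cost at 5% vol: {base:.2f}, at 10% vol: {doubled:.2f}, "
      f"ratio: {doubled / base:.1f}x")
```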
GM: Malcolm, I would like to come back to you, just on the parameters, and, you know, perhaps you can respond to some of Radu's suggestions.
MK: Yes, thank you. In terms of dilapidation risk, I would agree that it feels to me a little bit more like an adjustment to the r or the g that you are talking about, rather than to the volatility. In terms of risk neutral versus real world, I would also agree there's a policy dimension to that, and it is interesting that the only countries in Europe that make use of the Matching Adjustment are the UK and Spain, so the places you will be directing your focus to may have some interesting dynamics in the near future, to put it like that.
In terms of the volatility of individual houses and regional variations, one would expect insurers to have portfolios of these NNEGs, and therefore you would get portfolio diversification, so I would have thought there might be some reason to adjust from the index, but maybe not necessarily by a huge amount.
I think, are we going to come onto the deferment rate shortly, or do you want to ..
GM: Yes I was going to do deferment rate, that was going to be the last question before we open up to the floor. Do you want to carry on and discuss deferment rate?
1:15:50 MK: Yes, thank you. So I happen to be the trustee of a charity which has a property on its books with a sitting life tenant, so not too surprisingly the auditors are quite keen that we don't carry that property at its open market value, but at some depleted value. We build in something that, in the context of the PRA, is a deferment rate greater than zero. I kind of struggle with the concept that it might be less than zero within this type of market, at least on a portfolio basis, because that would imply that you've got some ultra-altruistic individuals who are going to be very keen to enrich the insurer at their own expense. In fact the charity that I am concerned with was given the property by the sitting tenant, so that might be an exception to the rule, but in the context of nearly all commercial transactions I would expect the deferment rate to be positive, for those types of arguments. And I think I saw recently – last week, when I read the paper, [looking] at the internet – that apparently the average discount between the price you would pay to buy a property with a sitting tenant and the price you would pay without that sitting tenant is roughly 50 per cent.
So it does seem to me that there might be scope to come up with some benchmarking. It's implausible, in my opinion, to see that ratio going above 100%.
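[Editorial note: a hypothetical back-of-the-envelope version of that benchmarking: if a property with a sitting tenant trades at roughly half its vacant-possession value and the expected deferral period were, say, 20 years (an assumed figure, purely for illustration), the implied deferment rate would be $q = -\tfrac{1}{T}\ln(D/S_0) = -\tfrac{1}{20}\ln(0.5) \approx 3.5\%$ p.a.; a shorter assumed deferral implies a higher rate, a longer one a lower rate, and sitting-tenant discounts bundle in tenancy-specific effects, so this is indicative only.]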
Quite how that ties in with the rental yield and those points – I think the idea of 80% [of households] paying nothing and 20% paying, there being a 5% [rental yield], and taking an average that comes out at 1% – I'm not convinced by that logic, because the 80% that are paying nothing presumably are the individuals who own their own property and are therefore benefiting from those properties. So I would like to explore that further to understand the rationale for that averaging analysis.
GM: Andrew, what side are you going to fall on?
AR: It's fair to say we spent some time discussing this in the review group. I'll give my own take, which is that I struggle with the 20% factor rule – and perhaps more to the point, I think it's 20% times rental yield plus 80% times something, and that something is logically the utility value to the homeowner of being in their own home. So how do you measure that? Well, clearly rent is one option, because that's another way they could get a roof over their head. But I do think there's an argument that the market rent that you pay includes a profit margin and funding costs for the landlord, and maybe that 80% should be applied to something less than rent – but I don't think it's zero. [deferment rate vs rental yield, rent is short term measure, should be looking at leasehold property]
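[Editorial note: the averaging being queried here appears, from the figures quoted in the discussion, to be of the form $q \approx 0.2 \times 5\% + 0.8 \times u$, which gives $1\%$ when the owner-occupier term $u$ is set to zero; Andrew's point is that $u$ should arguably be positive but below market rent, which would push the blended rate above 1%. This is a reconstruction of the argument as voiced on the panel, not a formula from the paper.]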
1:21:30 Questions.
[…]
[1:44:00] I'm Guy Thomas. A few minutes ago, Malcolm Kemp used the phrase 'sandcastles in the air'. I have a view that this whole edifice of option pricing really only works when the hedging markets exist, and you can hedge in decent quantities without having an effect [?] on the underlying. None of that seems true for housing. So I wonder if the working party has had any discussions about completely different approaches, such as including housing in an economic scenario generator, then doing simulations and taking some low quantile as a reserve. Is that under discussion at all, or is that too difficult for now, or is there a reason why that's not been considered?
TK: I think that, again, we should look at alternatives, and the research that Radu’s produced was time-boxed effectively so he’s done a great job in producing what the working party considers to be a valuable contribution to the debate. But we don’t believe it’s the only approach. You need to look at real world, and what does this mean, are there other ways of modelling the risk, using economic scenario generators. Yes, I think there’s more to come on this, and the more input and feedback we can get from this event and any other engagement, the better as far as I’m concerned.
[1:45:50] Question: As far as the real world approach goes, do you think that house prices enjoy a risk premium above some measure of inflation or above some measure of interest rates, what’s your view on that, and what sort of level of risk premium – if you assume a constant risk premium – do you think is appropriate?
AR: It seems logical to me that there would be a link to inflation certainly.
MK: I just want to highlight that the issue of whether you should use risk neutral or real world is, in a sense, answering a different question. I've written a book about market consistency, so I have a bias towards thinking about things from a market consistent perspective. Essentially it's to do with how you think you should recognise profit or return. So if I go for a risk neutral approach, I assume that what it's worth now is exactly the same as its market value. If I expect extra returns – and I think most people do believe that there will be some extra returns from risk-seeking assets, unless there's some kind of economic or political meltdown – then the question is: should I recognise those at outset, or should I recognise them over time as those returns actually appear? My personal view is that you should recognise them more over time as they appear, rather than taking advance credit for them.
But to answer your specific question – what's the right level – most people expect property to give you some kind of return in excess of inflation. The way I've been schooled in this is that you look at the different factors of production: equities are at the base of that, and they get the least security and therefore the highest likely expected return; property is a factor of production that goes into activities, at least in the commercial world, but you don't get the same kind of profit return, you get a kind of rental return, so you would expect a lower return than equities, but still some return. The retail market is a bit more complicated, because it's driven by the likes of you and me and our own personal preferences for properties, so maybe there's a different dynamic to the commercial market, but probably not hugely different. And therefore I have a property – in fact my wife has a second property – so we're not averse to the concept that property might provide a useful long term return. But whether I would be encouraging entities like insurers to take advance credit for that excess return, which might or might not turn up, is a different matter.
[…]