Mortality assumptions and the risks of following the herd
The UK has recently witnessed a step change in mortality improvements - a slowdown in general population mortality improvements since 2011 which has been more pronounced than that predicted by most projection models. This presents a challenge for actuaries in understanding, interpreting and communicating its impact. Richard Marshall writes
There is inherent uncertainty in assessing future mortality trends, and it remains unclear whether the current emerging mortality data is a “blip” or indicative of a long-term change in future mortality trends. The drivers of the change are, at this stage, insufficiently understood. Actuaries will therefore need to exercise professional judgement in assessing how, when and whether to recognise a change in trend, and what action, if any, is appropriate.
Whilst the reasons for the current changes in trend are not fully understood, there is a risk that actuaries may make judgements in setting assumptions for pricing and reserving that are not reflective of the true underlying reasons for the observed change.
Another aspect of this risk is the potential ethical and professional challenge for actuaries if they are placed under commercial pressure to make overly optimistic or overly prudent assumptions regarding the implications of the recent data.
Additionally, there is a risk that groupthink could lead to overconfidence, with actuaries failing to challenge their understanding of the drivers behind the changes in trend.
As one of the most material assumptions for writers of annuities and for pension schemes alike, getting mortality improvement assumptions right is vital for pricing and maintaining adequate funding.
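To illustrate the materiality point, the sketch below prices a simple whole-life annuity under two different improvement assumptions. The base mortality table, improvement rates and discount rate are all made-up assumptions for illustration, not a real basis; the point is only that a modest change in the assumed improvement rate feeds directly through to the annuity value.

```python
# Illustrative sketch (assumed figures throughout, not a real basis):
# the effect of the mortality improvement assumption on an annuity value.

def annuity_value(age, base_qx, improvement, discount=0.02, max_age=120):
    """Expected discounted value of 1 p.a. paid annually in arrears while alive."""
    value, survival = 0.0, 1.0
    for t in range(1, max_age - age + 1):
        # Apply a flat annual improvement to the base mortality rate
        qx = min(base_qx(age + t - 1) * (1 - improvement) ** t, 1.0)
        survival *= (1 - qx)
        value += survival / (1 + discount) ** t
    return value

# Toy Gompertz-style base table (hypothetical, for illustration only)
base_qx = lambda x: min(0.0001 * 1.1 ** (x - 30), 1.0)

low = annuity_value(65, base_qx, 0.010)   # 1.0% p.a. improvements
high = annuity_value(65, base_qx, 0.015)  # 1.5% p.a. improvements
```

Under these toy assumptions, the stronger improvement assumption produces a higher annuity value, since policyholders are expected to survive (and be paid) for longer.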
However, in an era when the CMI series of projection models has become the lingua franca of improvement modelling, is there a danger that we have come to over-rely on benchmarking and the casual acceptance of the CMI model’s core parameters at the expense of understanding what is driving mortality improvements? Do we know what our improvement assumptions mean in terms of changes in the real world?
Assumption-setting processes vary in complexity. More sophisticated approaches analyse recent and longer-term trends (e.g. by cause of death) to set parameters within the best-estimate improvements model, and the use of benchmarks and reinsurance rates when selecting model parameters is increasingly common. However, almost without exception in the UK, the CMI model is the a priori choice of projection tool, and many insurers are effectively reducing their basis-setting to the choice of two or three parameters in the model.
Whilst this is great for communicating improvements, such a reduction in the dimensionality of mortality improvements has encouraged the use of benchmarking by both insurers and auditors, potentially at the expense of interpretation of those improvements.
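The kind of two-parameter basis described above can be sketched as follows. This is not the actual CMI model, which is considerably more sophisticated; it is a deliberately simplified illustration of how a projection can collapse to a long-term rate and a convergence period, with current improvements blending linearly towards the long-term assumption.

```python
# Simplified sketch (NOT the actual CMI model): improvement rates that
# blend from a current estimated rate to a chosen long-term rate.
# The two headline parameters mirror the way many bases are set:
#   long_term_rate  - assumed annual improvement in the long run
#   convergence_yrs - years over which current rates blend to that level

def projected_improvements(initial_rate, long_term_rate, convergence_yrs, n_years):
    """Linear blend from initial_rate to long_term_rate, then flat thereafter."""
    rates = []
    for t in range(n_years):
        w = min(t / convergence_yrs, 1.0)  # weight on the long-term rate
        rates.append((1 - w) * initial_rate + w * long_term_rate)
    return rates

# Example: current improvements of 1.0% p.a. converging to 1.5% over 20 years
path = projected_improvements(0.010, 0.015, 20, 30)
```

The reduction is stark: the whole of the projected future is driven by two chosen numbers, which is precisely what makes such assumptions so easy to benchmark and so hard to interpret.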
Benchmarking of assumptions: pros and cons
Benchmarking can be either a blessing or a curse. At best, it can show actuaries where they need to justify their assumptions more carefully if they are out of line with the industry; at worst, it can be used as a stick to force insurers to conform to the pack, for example through auditors’ use of benchmarking data. The risk of groupthink can also arise when actuaries collaborate in sharing market knowledge and developing thought leadership to deal with situations where there is little data and a high degree of uncertainty.
In our 2018 survey of improvement assumptions, long-term rates became slightly more tightly clustered and about half of participants had chosen to use the default period smoothing parameter. Very few firms were making any other adjustments to the core CMI model parameters. All of this is suggestive of herding, even if not definitive proof that it is happening.
If the benchmark consists of multiple independent, well-thought-out views, it can provide validation of their reasonableness. The Solvency II Delegated Acts (Article 10) could even apply if these views represented a deep, liquid market underlying a market price of mortality improvements. Convergence might instead be due to insurers evaluating the same set of systemic drivers of mortality, relying upon the same datasets, having similar groups of policyholders and being supported by a small number of reinsurers (potentially with similar views on improvements).
On the other hand, if many firms’ contributions to a benchmark reflect their own reliance upon an earlier benchmark, it represents groupthink, or “herding”, of assumptions and can have negative consequences:
- Internal models unrealistically predicated on how an insurer “would change” their best-estimate improvements in various situations.
- Outsourcing of responsibility for Solvency II assumption setting.
- Potential tick-box audit exercises based on benchmarking, instead of the insurer’s approach in arriving at their improvement assumptions.
How can we avoid groupthink?
Asking some key questions can reduce reliance on benchmarks: what do we expect short-, medium- and long-term improvements to look like, and why? Is there a model which allows these views to be reflected (and, if so, with what parameters)?
The aim is to make sure that the best-estimate improvements correspond to our views around actual drivers of mortality over each time period. If we are able to clearly articulate the rationale behind our improvement assumptions in terms of those drivers, then our views will be less susceptible to the inappropriate influence of benchmarks (or pressure from users).
Appropriate governance of the assumptions process can also reduce the risk of herding or (worse) choosing assumptions to control their impact on the Solvency II balance sheet contrary to the realism required in a Solvency II best-estimate basis.
Richard Marshall leads the Life Underwriting Group at Willis Towers Watson, with a specialism in demographic modelling, particularly for mortality and longevity risk.