Center Stage for Best Estimates

April 4, 2012

Setting Best Estimate assumptions is now a critical element of pricing, reserving, financial reporting and solvency analysis. But what are the inputs, what’s the process and what should be considered along the way?

With Solvency II hard on its heels, the Best Estimate assumption is now the foundation of most financial projections in the Life insurance industry. Setting these assumptions requires actuarial judgment and adherence to process and documentation. Although the process cannot be described as a fixed set of instructions, it is possible to move clearly through the various stages and to document the key aspects and considerations involved at each point. The authors of PartnerRe’s new report on this previously unpublished topic show us how.

Biometric risk – growth sector

Cover for biometric risks – through protection covers and as an element of savings products – is a core, growth business for the life insurance industry. Many markets now also have new, regulator-imposed requirements for biometric risk quantification and processes, implemented through, for example, IFRS and Solvency II. These requirements include the establishment and use of Best Estimate assumptions. For life insurers, setting Best Estimate assumptions for biometric risks is a critical element of pricing, reserving, financial reporting and solvency analysis.

Predicting a future

The Best Estimate can only be meaningful if the underlying assumptions and uncertainties are also understood. That requires knowledge of exactly how it was calculated in the first place. What data was used and why, and what judgments were made and why? Is the estimate still accurate given new developments in data availability? Documenting the establishment process and updating the values are not just regulatory requirements; they are fundamental to setting and using Best Estimate assumptions.

Figure 1: Process steps required to establish a Best Estimate assumption.

Set the benchmark

The benchmark is the starting point of a portfolio-specific Best Estimate assumption. It defines the expectation of a given risk (Best Estimate of incidence rate) in a given market. It is based on population or market (insured lives) data. Even if the insurer’s own portfolio is large enough to establish a portfolio-specific Best Estimate, the market benchmark is still extremely useful for comparison purposes.

To set a benchmark, the actuary must obtain data that is representative of the risk, up to date and of good quality. No source will be ideal; even an insurer’s own data and aggregated insured lives data require adjustments. Given multiple data sources, choices will have to be made, often a trade-off between the volume of population data and the higher relevance of insured lives data. The Solvency II directive and publications of the European Insurance and Occupational Pensions Authority (EIOPA) address how data quality should be judged for Best Estimates.

For example, data sources for a benchmark for mortality group insurance in France could be:

  • the analysis of mortality by occupational category issued by the National Institute of Statistics and Economic Studies (INSEE)
  • the analysis of mortality by business sector (Cosmop study1) issued by the Institute for Public Health Surveillance (InVS).

Ensure consistency between the benchmark and specific portfolio

There is, however, generally a delay between the observation period of the data used to build the benchmark and the period of application of the benchmark. To ensure consistency, this delay needs to be taken into account. Adjustments will also have to be made to align the actual characteristics of the benchmark to the specific portfolio. The most common example of such an adjustment is to reflect the difference between population data (often used to build the benchmark) and insured lives data, but other factors must also be considered, such as differences between the definitions of incidence and age.

Exactly how these issues are dealt with will depend on the particular biometric risk, the purpose of the analysis and the available data.
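As an illustration only, the sketch below (in Python, with hypothetical factor names and values) shows how a population-based benchmark rate might be trended to the period of application and converted to an insured-lives basis. The actual adjustments used depend on the risk, the purpose of the analysis and the available data.

```python
# Illustrative sketch only: hypothetical factors showing how a population-based
# benchmark rate might be aligned with the portfolio's application period and
# insured-lives basis. Actual adjustments depend on the risk, purpose and data.

def adjust_benchmark_rate(population_rate: float,
                          annual_trend: float,
                          years_to_application: int,
                          insured_lives_factor: float) -> float:
    """Project a population incidence rate to the application period and
    convert it to an insured-lives basis."""
    # Apply the assumed annual trend over the gap between the observation
    # period of the benchmark data and the period of application.
    trended_rate = population_rate * (1 + annual_trend) ** years_to_application
    # Apply an assumed selection factor reflecting the difference between
    # population incidence and insured-lives incidence.
    return trended_rate * insured_lives_factor

# Example: 2.5 per mille population rate, -1% p.a. improvement, 6-year gap,
# insured lives assumed to experience 80% of population incidence.
rate = adjust_benchmark_rate(0.0025, -0.01, 6, 0.80)
print(f"Adjusted benchmark rate: {rate:.5f}")  # ~0.00188
```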

Determine rating factors and conduct the experience analysis

The minimum data required is exposures2 (at least annual “snapshots” of exposure, i.e. lives or sums assured) and incidences over a reasonable period of time, usually 3 to 5 years, to smooth out natural variation. This data is then aggregated into “cells”, such as defined age, gender and duration combinations, or age bands. It is important to consider which rating factors can in fact be confidently analyzed, which are reasonable to analyze, and to ensure that double-counting effects are removed. The use of multiple rating factors has driven the move to multi-dimensional models; these minimize double counting and generate a model of the incidence rate that includes all rating factors.
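As a simple illustration of this aggregation step, the following Python sketch (hypothetical column names and values) groups policy-level records into cells by age band, gender and duration:

```python
# Minimal sketch (hypothetical data) of aggregating policy-level records into
# analysis "cells" by age band, gender and duration ahead of an A/E study.
import pandas as pd

policies = pd.DataFrame({
    "age":      [42, 47, 51, 38, 45],
    "gender":   ["M", "F", "M", "F", "M"],
    "duration": [1, 3, 2, 1, 4],              # policy duration in years
    "exposure": [1.0, 0.5, 1.0, 1.0, 0.75],   # exposure years in the study period
    "claims":   [0, 0, 1, 0, 0],
})

# Band ages into 5-year groups so that each cell has a workable volume.
policies["age_band"] = (policies["age"] // 5) * 5

cells = (policies
         .groupby(["age_band", "gender", "duration"], as_index=False)
         .agg(exposure=("exposure", "sum"), claims=("claims", "sum")))
print(cells)
```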

At this stage the actuary has the best available benchmark, adjusted to reflect the most up-to-date data and split by valid rating factor (see example structure in table 1). The next step is to compare the experience of the specific portfolio to this benchmark by means of an “experience analysis” (also known as an A/E3 analysis).

Example

Returning to the French market example, group mortality data should have the following dimensions:

  • age
  • gender
  • employee status classified in two categories: “executives and managers” and “others”
  • business sector (identified with the NAF4 code) classified in four categories of mortality risk:
  1. Class 1: low
  2. Class 2: medium
  3. Class 3: high
  4. Class 4: very high.

One crucial and time-consuming step here is data cleansing. This includes changing formats, checking for validity and consistency, correcting errors and then adjusting the data. These adjustments will take into account factors such as historical changes in medical underwriting processes, mortality trends, granularity differences and incurred but not reported (IBNR) claims.
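As a hedged illustration of one such adjustment, the Python sketch below grosses up reported claim counts for IBNR using assumed reporting proportions; in practice these factors would come from a claims development analysis rather than the hypothetical values shown here.

```python
# Illustrative only: grossing up reported claim counts for IBNR using assumed
# reporting proportions by occurrence year (hypothetical figures; in practice
# these would be derived from a claims development analysis).

reported_claims = {2009: 120, 2010: 118, 2011: 95}          # claims reported to date
reporting_proportion = {2009: 1.00, 2010: 0.97, 2011: 0.80}  # assumed share reported

ultimate_claims = {
    year: reported / reporting_proportion[year]
    for year, reported in reported_claims.items()
}
print(ultimate_claims)  # {2009: 120.0, 2010: ~121.6, 2011: ~118.8}
```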

The next step is to perform the experience analysis, ascertaining actual over expected (A/E) for each valid dimension of the analysis. “Actual” is the experience of the specific portfolio; “expected” is the experience derived from the benchmark as described above. For example, 50,000 exposure years and an expected incidence rate from the benchmark of 2 per mille would give 100 expected claims for the portfolio. If there are 110 observed claims in the specific experience, the A/E is 110%.
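For clarity, the same calculation expressed as a short Python snippet (the figures are those from the example above):

```python
# The A/E calculation from the worked example above, written out explicitly.
exposure_years = 50_000
benchmark_rate = 0.002          # 2 per mille expected incidence from the benchmark
observed_claims = 110           # actual claims in the portfolio experience

expected_claims = exposure_years * benchmark_rate   # 100 expected claims
a_over_e = observed_claims / expected_claims
print(f"A/E = {a_over_e:.0%}")  # A/E = 110%
```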

Analyze the results/credibility

With the A/E analysis completed, the results can be analyzed. If the portfolio specific experience is different from the adjusted benchmark, what are the reasons for this? Also, if the results lie outside the anticipated range, does this reflect an error in the calculations or are there additional or more complex explanations? Although it will often be difficult to justify quantitatively, it is vital to have a statement of opinion on the potential causes. It is then time to set the cursor between the benchmark and the portfolio experience; credibility theory provides the statistical support for this choice.
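Credibility theory can be applied in several ways; the Python sketch below uses one common formulation (limited-fluctuation credibility) as an assumption for illustration, not as the report’s prescribed method. The full-credibility standard of 1,082 claims and the benchmark A/E of 100% are likewise illustrative choices.

```python
# One common approach (limited-fluctuation credibility), shown as a sketch: the
# portfolio A/E is blended with the benchmark (taken as A/E = 100%) using a
# credibility factor Z based on the number of observed claims. The standard of
# 1,082 claims corresponds to a 90% probability of being within 5% of the mean
# for a Poisson claim count; other standards and methods (e.g. Buhlmann) exist.
import math

def credibility_weighted_ae(observed_claims: int,
                            portfolio_ae: float,
                            full_credibility_claims: int = 1082) -> float:
    z = min(1.0, math.sqrt(observed_claims / full_credibility_claims))
    return z * portfolio_ae + (1 - z) * 1.0   # benchmark A/E taken as 100%

# Example using the figures above: 110 observed claims, portfolio A/E of 110%.
print(f"{credibility_weighted_ae(110, 1.10):.3f}")  # ~1.032, i.e. roughly 103%
```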

Adjustments to reflect future changes

If the future is expected to impact incidence rates differently to the past, e.g. future changes to medical underwriting practice are anticipated, an adjustment for such changes can be made, given sound reasoning and after sensitivity testing of the financial implications.
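As an illustration only, the following Python sketch (entirely hypothetical figures) applies an assumed adjustment factor for such a future change and runs a simple sensitivity test of the resulting expected claims cost:

```python
# Hedged sketch with hypothetical figures: applying an assumed adjustment for a
# future change (e.g. stricter medical underwriting expected to reduce incidence
# by 5%) and sensitivity-testing the financial implication of that assumption.

best_estimate_rate = 0.00206        # per-life incidence rate before adjustment
exposure_years = 50_000
sum_assured_per_claim = 100_000     # illustrative average claim amount

for adjustment in (1.00, 0.95, 0.90):   # no change / expected change / stronger change
    adjusted_rate = best_estimate_rate * adjustment
    expected_cost = adjusted_rate * exposure_years * sum_assured_per_claim
    print(f"adjustment {adjustment:.2f}: expected claims cost = {expected_cost:,.0f}")
```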

Finally, the choice of the methodology to use in order to set the Best Estimate assumptions will depend on the actuary’s judgment and on his/her expectations of the results. For instance, if the experience within a certain subgroup differs significantly from the benchmark and the credibility factors are low, the actuary must decide whether to use the credibility approach, i.e. to weight the estimates based on their relative credibilities, or to review the benchmark if this is determined to be no longer relevant to the risk in question.

1 Cosmop (Cohorte pour la surveillance de la mortalité par profession). Study « Analyse de la mortalité des causes de décès par secteur d’activité de 1968 à 1999 à partir de l’échantillon démographique permanent ». September 2006.
2 The exposure of lives or policies to the risk being assessed, e.g. death or disability.
3 Actual over expected.
4 La nomenclature des activités françaises (the French classification of business activities).

Story Highlights

  • Best Estimate assumptions are a must-have following the introduction of ‘fair value’ into reporting environments, e.g. Solvency II.
  • The benchmark is the starting point, defining the expectation of a given risk in a given market.
  • Adjustments are made to ensure consistency with the specific portfolio.
  • A decision is then taken on which rating factors can be confidently analyzed, and double-counting effects are removed.
  • After data cleansing comes the experience analysis – how does the portfolio compare to the benchmark?
  • Analyze what’s behind any differences. Credibility theory helps set the cursor between the two.
