Modern developments in insurance mathematics

Abstract: Arguably the most important developments in the insurance industry in the last decade have been centered around two themes: regulation and machine learning. Regulation has affected both actuarial work and research in insurance mathematics through the introduction of Solvency II in 2016 and, more recently, IFRS 17. The use of machine learning methods, in particular neural network models and tree-based methods, has been increasing in research papers in insurance mathematics in recent years, and has also started to influence the insurance industry, e.g. in pricing. This thesis consists of four papers exploring these two themes.

Paper I is focused on how to implement the new accounting regulation IFRS 17 in an economically and theoretically sound way. We provide a mathematical interpretation of this extensive regulation. In particular, we define an algorithm for determining unearned future profits and for systematically converting them into actual profits over time. The algorithm is a crucial ingredient in any practical implementation of the regulation. Furthermore, we suggest suitable methods for valuation of insurance contracts and allocation of this value to groups of contracts, and demonstrate the practicability of the algorithm and methods in a large-scale numerical example.

Paper II is concerned with mortality forecasting, an important aspect of valuing and pricing life insurance contracts. We consider an extension of the Poisson Lee-Carter model in which the mortality trend is modelled by a long short-term memory (LSTM) neural network. Different approaches to calibrating the network are suggested, with the aim of using training data efficiently. In particular, we consider a novel approach to splitting data into training and validation sets based on the construction of synthetic subpopulations. The stability of long-term predictions is improved by considering boosted versions of the model, which allow us to obtain reasonable predictions even when the number of observations is very small.

In Paper III we consider a premium control problem for a mutual non-life insurer, formalised as a random horizon Markov decision process (MDP). The insurer's aim is to obtain a premium rule that generates a low, stable premium and leads to a low probability of default. In realistic settings, which take into account delays in claims payments and feedback effects, classical dynamic programming methods for solving the problem are not feasible. Instead, we explore reinforcement learning algorithms combined with function approximation. We show that a carefully designed reinforcement learning algorithm yields an approximate optimal premium rule that closely approximates the true optimal premium rule in a simplified setting and, in a more realistic setting, outperforms several benchmark rules.

Paper IV delves deeper into theoretical aspects of the reinforcement learning algorithm considered in Paper III. While earlier results establish convergence of linear semi-gradient SARSA for infinite horizon discounted MDPs, no such results exist for random horizon MDPs. In Paper IV we consider a variant of this algorithm in which the parameter vector and policy are updated at the end of each trajectory, after the terminal state has been reached. Using general results for stochastic approximation algorithms, we show that this version of the algorithm converges with probability one in the random horizon case, under conditions on the behaviour policy similar to those used to derive earlier results for infinite horizon discounted MDPs.
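To give a rough, purely illustrative picture of the mechanism Paper I formalises, the sketch below releases an unearned-profit balance (the contractual service margin of IFRS 17) to profit in proportion to coverage units, with interest accretion on the remaining balance. The figures and the proportional release rule are assumptions made for illustration; they are not the algorithm developed in the paper.

    # Illustrative sketch only: releasing an unearned-profit balance over the
    # coverage period in proportion to coverage units, in the spirit of the
    # IFRS 17 contractual service margin. The release rule and figures are
    # assumptions for illustration, not the algorithm of Paper I.

    def release_schedule(csm_initial, coverage_units, interest_rate):
        """Accrete interest on the remaining balance and release it in
        proportion to the coverage units provided in each period."""
        csm = csm_initial
        remaining_units = sum(coverage_units)
        releases = []
        for units in coverage_units:
            csm *= 1.0 + interest_rate            # interest accretion
            release = csm * units / remaining_units
            releases.append(release)              # profit recognised this period
            csm -= release
            remaining_units -= units
        return releases

    if __name__ == "__main__":
        # A group of contracts with 1000 of unearned profit, covered over 5 years.
        print(release_schedule(1000.0, [100, 100, 100, 100, 100], 0.02))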
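For orientation, the standard Poisson Lee-Carter model that Paper II builds on can, in conventional notation (which may differ from the paper's), be written as

    D_{x,t} \sim \mathrm{Poisson}\bigl(E_{x,t}\,\mu_{x,t}\bigr), \qquad \log \mu_{x,t} = a_x + b_x k_t,

where D_{x,t} and E_{x,t} are the observed death counts and exposures for age x in year t, a_x and b_x are age-specific parameters, and k_t is the period (trend) effect. Classically k_t is forecast by a random walk with drift; in the extension considered in Paper II the trend is instead forecast by an LSTM network.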
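To fix ideas about the algorithm discussed in Papers III and IV, the sketch below implements episodic linear semi-gradient SARSA in which the semi-gradient increments are accumulated along a trajectory and the parameter vector is updated only once the terminal state has been reached. The toy environment, the one-hot feature map and the hyperparameters are placeholder assumptions; they are not the premium control model of Paper III.

    import numpy as np

    # Sketch of linear semi-gradient SARSA where the parameter vector is
    # updated only at the end of each trajectory (the variant analysed in
    # Paper IV). Environment, features and hyperparameters are placeholders.

    class ToyRandomHorizonMDP:
        """A small MDP that terminates with probability 0.1 at every step."""
        def __init__(self, n_states=5, n_actions=2, seed=0):
            self.n_states, self.n_actions = n_states, n_actions
            self.rng = np.random.default_rng(seed)

        def reset(self):
            self.state = 0
            return self.state

        def step(self, action):
            reward = -abs(self.state - action)       # made-up reward
            self.state = int(self.rng.integers(self.n_states))
            done = self.rng.random() < 0.1           # random horizon
            return self.state, reward, done

    def features(state, action, n_states, n_actions):
        # One-hot (tabular) features; a real application would use a richer map.
        phi = np.zeros(n_states * n_actions)
        phi[state * n_actions + action] = 1.0
        return phi

    def eps_greedy(theta, state, env, eps, rng):
        if rng.random() < eps:
            return int(rng.integers(env.n_actions))
        q = [theta @ features(state, a, env.n_states, env.n_actions)
             for a in range(env.n_actions)]
        return int(np.argmax(q))

    def run(episodes=200, alpha=0.05, eps=0.1, seed=1):
        env = ToyRandomHorizonMDP(seed=seed)
        rng = np.random.default_rng(seed)
        theta = np.zeros(env.n_states * env.n_actions)
        for _ in range(episodes):
            delta = np.zeros_like(theta)             # accumulate within episode
            state = env.reset()
            action = eps_greedy(theta, state, env, eps, rng)
            done = False
            while not done:
                next_state, reward, done = env.step(action)
                phi = features(state, action, env.n_states, env.n_actions)
                if done:
                    target = reward                  # terminal value is zero
                else:
                    next_action = eps_greedy(theta, next_state, env, eps, rng)
                    target = reward + theta @ features(
                        next_state, next_action, env.n_states, env.n_actions)
                delta += alpha * (target - theta @ phi) * phi
                if not done:
                    state, action = next_state, next_action
            theta += delta                           # update after the trajectory
        return theta

    if __name__ == "__main__":
        print(run())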
