Modelling Large Claims in Property and Home Insurance - Extreme Value Analysis
(2015) FMS820 20151 - Mathematical Statistics
 Abstract
 It is of paramount interest for insurance companies to have an estimate of the probability of being exposed to extremely large claims that could render them directly insolvent or decrease the size of their regulatory capital to the point of nonviability. The difficulty with finding such an estimate is that extreme events are by definition rare and therefore difficult to model. This study approaches the problem by utilizing methods developed in extreme value theory, a branch of statistics dedicated to the study of such extreme events.
The purpose of this study was to construct a model for the property and home insurance claim process for a specific insurance company, Folksam Skadeförsäkring Ab, based in Helsinki, Finland. The aim was to... (More)  It is of paramount interest for insurance companies to have an estimate of the probability of being exposed to extremely large claims that could render them directly insolvent or decrease the size of their regulatory capital to the point of nonviability. The difficulty with finding such an estimate is that extreme events are by definition rare and therefore difficult to model. This study approaches the problem by utilizing methods developed in extreme value theory, a branch of statistics dedicated to the study of such extreme events.
The purpose of this study was to construct a model for the property and home insurance claim process for a specific insurance company, Folksam Skadeförsäkring Ab, based in Helsinki, Finland. The aim was to fit the data to the models proposed by extreme value theory and see whether these would describe the actual observations in a meaningful way. The quantiles of these fitted distributions and the associated confidence intervals would serve as a quantified guideline of the risks the company is exposed to. Furthermore, the distributions could be used to price simple types of reinsurance contracts, used as hedging tools by insurance companies.
Two sets of data were analysed, one containing the daily maxima and the other containing the summed daily claims. These were fitted using the maximum likelihood method to four interlinked but separate models: the Generalized Extreme Value distribution model for the block maxima, and three threshold models, namely the Generalized Pareto distribution, the Poisson-GPD model and the point process approach. Standard statistical tools were deployed to determine the goodness of fit for the different models.
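The two core fitting steps described above, block maxima to a GEV distribution and threshold exceedances to a GPD, can be sketched as follows. This is a minimal illustration using simulated heavy-tailed "claims" in place of the company's confidential data; the lognormal stand-in, block length, and threshold quantile are all assumptions for the example, not values from the thesis.

```python
# Sketch: maximum likelihood fits of the two core EVT models on
# simulated heavy-tailed claim data (illustrative stand-in only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated daily claims over roughly three years.
claims = rng.lognormal(mean=8.0, sigma=1.2, size=3 * 365)

# Block maxima approach: maximum over 30-day blocks, fitted to a
# Generalized Extreme Value (GEV) distribution by MLE. Note that
# scipy's shape parameter c equals minus the usual EVT shape xi.
block_maxima = claims[: 36 * 30].reshape(36, 30).max(axis=1)
shape_gev, loc_gev, scale_gev = stats.genextreme.fit(block_maxima)

# Threshold approach: exceedances over a high threshold u, fitted to a
# Generalized Pareto distribution (GPD); floc=0 pins the GPD location.
u = np.quantile(claims, 0.95)
exceedances = claims[claims > u] - u
shape_gpd, _, scale_gpd = stats.genpareto.fit(exceedances, floc=0)

print(shape_gev, scale_gev, shape_gpd, scale_gpd)
```

In practice the threshold choice would be guided by diagnostics such as mean residual life plots rather than a fixed quantile.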
The first set of data was fairly well modelled by both the block maxima and threshold approaches, in terms of both severity and frequency. In addition to the range of quantiles and return levels, a conditional probability distribution was estimated to model the behaviour of claims given that they exceed a predefined amount. Additionally, a simulation study was performed, which gave an estimate of the distribution of aggregated daily maxima exceeding a certain threshold over a range of years.
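The return levels and conditional exceedance probabilities mentioned above follow directly from a fitted Poisson-GPD model. The sketch below shows the standard formulas; all parameter values (threshold, shape, scale, exceedance rate) are illustrative assumptions, not estimates from the thesis.

```python
# Sketch: return level and conditional exceedance probability from a
# fitted GPD over threshold u. Parameter values are illustrative only.
u = 50_000.0       # threshold (assumed)
xi = 0.3           # GPD shape parameter (assumed)
sigma = 12_000.0   # GPD scale parameter (assumed)
zeta_u = 0.05      # empirical probability P(claim > u) (assumed)
n_per_year = 365   # observations per year

def return_level(years):
    """m-observation return level for the Poisson-GPD model, xi != 0:
    x_m = u + (sigma / xi) * ((m * zeta_u)**xi - 1)."""
    m = years * n_per_year
    return u + (sigma / xi) * ((m * zeta_u) ** xi - 1)

def cond_survival(x):
    """P(X > x | X > u) under the fitted GPD, for x >= u."""
    return (1 + xi * (x - u) / sigma) ** (-1 / xi)

print(return_level(10))        # level exceeded on average once in 10 years
print(cond_survival(100_000))  # chance a claim above u also exceeds 100k
```

Confidence intervals for such return levels, a central concern in the study, would typically come from the delta method or profile likelihood on the fitted parameters.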
The models did not provide a sufficient goodness of fit for the second data set, possibly because of the weak autocorrelation found in the summed daily claims. The wide confidence intervals turned out to be the largest deficiency of the study, stemming from the relatively short period over which the data were collected.
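A simulation study of the kind described in the abstract, estimating the distribution of the aggregate amount by which claims exceed a threshold over a year, can be set up as a compound Poisson-GPD simulation. The sketch below is a generic version of that idea; the exceedance rate and GPD parameters are assumed placeholders, not the thesis's estimates.

```python
# Sketch: simulate the yearly aggregate of threshold exceedances under a
# Poisson-GPD model and read off empirical quantiles of the total.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lam = 18.0                  # mean exceedances per year (assumed)
xi, sigma = 0.3, 12_000.0   # GPD shape and scale (assumed)
n_sim = 10_000              # number of simulated years

totals = np.empty(n_sim)
for i in range(n_sim):
    n = rng.poisson(lam)  # random number of exceedances this year
    # Sizes of the exceedances are i.i.d. GPD (scipy: c = xi).
    totals[i] = stats.genpareto.rvs(
        xi, scale=sigma, size=n, random_state=rng
    ).sum()

# Empirical quantiles of the simulated aggregate-exceedance distribution.
print(np.quantile(totals, [0.5, 0.95, 0.99]))
```

Repeating this over a range of years (scaling the Poisson rate) yields the multi-year aggregate distributions the study reports.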
Please use this url to cite or link to this publication:
http://lup.lub.lu.se/studentpapers/record/5421796
 author
 Paldynski, Henrik
 supervisor

 Nader Tajvidi (LU)
 organization
 course
 FMS820 20151
 year
 2015
 type
 H2 - Master's Degree (Two Years)
 subject
 language
 English
 id
 5421796
 date added to LUP
 2015-05-19 10:47:39
 date last changed
 2015-05-19 10:47:39
@misc{5421796,
  author   = {Paldynski, Henrik},
  title    = {Modelling Large Claims in Property and Home Insurance - Extreme Value Analysis},
  year     = {2015},
  language = {eng},
  note     = {Student Paper},
}