Micro-level Reserving for General Insurance Claims using a Long
Short-Term Memory Network
- URL: http://arxiv.org/abs/2201.13267v1
- Date: Thu, 27 Jan 2022 02:49:42 GMT
- Title: Micro-level Reserving for General Insurance Claims using a Long
Short-Term Memory Network
- Authors: Ihsan Chaoubi, Camille Besse, Hélène Cossette, Marie-Pier Côté
- Abstract summary: We introduce a discrete-time individual reserving framework incorporating granular information in a deep learning approach named Long Short-Term Memory (LSTM) neural network.
At each time period, the network has two tasks: first, classifying whether there is a payment or a recovery, and second, predicting the corresponding non-zero amount, if any.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detailed information about individual claims is completely ignored when
insurance claims data are aggregated and structured in development triangles
for loss reserving. In the hope of extracting predictive power from the
individual claims characteristics, researchers have recently proposed to move
away from these macro-level methods in favor of micro-level loss reserving
approaches. We introduce a discrete-time individual reserving framework
incorporating granular information in a deep learning approach named Long
Short-Term Memory (LSTM) neural network. At each time period, the network has
two tasks: first, classifying whether there is a payment or a recovery, and
second, predicting the corresponding non-zero amount, if any. We illustrate the
estimation procedure on a simulated and a real general insurance dataset. We
compare our approach with the chain-ladder aggregate method using the
predictive outstanding loss estimates and their actual values. Based on a
generalized Pareto model for excess payments over a threshold, we adjust the
LSTM reserve prediction to account for extreme payments.
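For intuition, here is a minimal PyTorch sketch of the two-task architecture described in the abstract; the layer sizes, names, and loss weighting are illustrative assumptions, not the authors' exact specification.

    import torch
    import torch.nn as nn

    class TwoTaskLSTM(nn.Module):
        """Sketch of a per-period two-task reserving network: at each
        development period one head classifies whether a payment or
        recovery occurs, and a second head predicts the non-zero amount."""
        def __init__(self, n_features, hidden_size=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
            self.cls_head = nn.Linear(hidden_size, 1)  # payment/recovery indicator
            self.reg_head = nn.Linear(hidden_size, 1)  # non-zero amount

        def forward(self, x):               # x: (batch, periods, n_features)
            h, _ = self.lstm(x)             # h: (batch, periods, hidden)
            prob = torch.sigmoid(self.cls_head(h)).squeeze(-1)
            amount = self.reg_head(h).squeeze(-1)
            return prob, amount

    # Joint loss: cross-entropy for the indicator plus a regression term
    # masked to periods with actual payments.
    def joint_loss(prob, amount, y_ind, y_amt):
        bce = nn.functional.binary_cross_entropy(prob, y_ind)
        mse = ((amount - y_amt) ** 2 * y_ind).sum() / y_ind.sum().clamp(min=1)
        return bce + mse

For the extreme-payment adjustment, the excesses of payments over a high threshold could, for example, be fitted with scipy.stats.genpareto before correcting the LSTM reserve; the paper's exact calibration is not reproduced here.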
Related papers
- Minimax Data Sanitization with Distortion Constraint and Adversarial Inference [28.511444169443195]
We study a privacy-preserving data-sharing setting where a privatizer transforms private data into a sanitized version observed by an authorized reconstructor and two unauthorized adversaries.
We propose a data-driven training procedure that alternately updates the privatizer, reconstructor, and adversaries.
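A minimal sketch of such an alternating update loop in PyTorch, with a single adversary and stand-in losses for brevity; the paper's distortion constraint and exact objectives are not reproduced here.

    import torch
    import torch.nn as nn

    dim = 8
    privatizer    = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
    reconstructor = nn.Linear(dim, dim)   # authorized: wants low distortion
    adversary     = nn.Linear(dim, 1)     # unauthorized: infers a private bit

    opt_p = torch.optim.Adam(privatizer.parameters(), lr=1e-3)
    opt_r = torch.optim.Adam(reconstructor.parameters(), lr=1e-3)
    opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
    bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

    for step in range(1000):
        x = torch.randn(128, dim)          # stand-in private data
        s = (x[:, :1] > 0).float()         # stand-in private attribute
        z = privatizer(x)

        # 1) adversary step: improve inference of s from sanitized z
        loss_a = bce(adversary(z.detach()), s)
        opt_a.zero_grad(); loss_a.backward(); opt_a.step()

        # 2) reconstructor step: reduce the authorized user's distortion
        loss_r = mse(reconstructor(z.detach()), x)
        opt_r.zero_grad(); loss_r.backward(); opt_r.step()

        # 3) privatizer step: help the reconstructor, hurt the adversary
        loss_p = (mse(reconstructor(privatizer(x)), x)
                  - bce(adversary(privatizer(x)), s))
        opt_p.zero_grad(); loss_p.backward(); opt_p.step()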
arXiv Detail & Related papers (2025-07-23T21:22:35Z)
- NOCTA: Non-Greedy Objective Cost-Tradeoff Acquisition for Longitudinal Data [23.75715594365611]
We propose NOCTA, a Non-Greedy Objective Cost-Tradeoff Acquisition method.
We first introduce a cohesive estimation target for our NOCTA setting, and then develop two complementary estimators.
Experiments on synthetic and real-world medical datasets demonstrate that both NOCTA variants outperform existing baselines.
arXiv Detail & Related papers (2025-07-16T17:00:41Z)
- Distribution-free inference for LightGBM and GLM with Tweedie loss [2.638878351659023]
Conformal predictive inference has arisen as a popular distribution-free approach for quantifying predictive uncertainty.
In this work, we propose new non-conformity measures for GLMs and GBMs with GLM-type loss.
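For context, a minimal split-conformal sketch around a Tweedie GLM, using plain absolute residuals as the non-conformity score rather than the paper's proposed measures; all data here are synthetic.

    import numpy as np
    from sklearn.linear_model import TweedieRegressor

    rng = np.random.default_rng(0)
    X = rng.gamma(2.0, 1.0, size=(2000, 3))
    y = rng.gamma(2.0, 1.0 + X[:, 0])                # synthetic positive target

    fit, cal = slice(0, 1000), slice(1000, 2000)     # split: fit / calibration
    model = TweedieRegressor(power=1.5, alpha=0.1).fit(X[fit], y[fit])

    scores = np.abs(y[cal] - model.predict(X[cal]))  # non-conformity scores
    alpha = 0.1
    q = np.quantile(scores, np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores))

    x_new = X[:5]
    lo, hi = model.predict(x_new) - q, model.predict(x_new) + q  # ~90% coverage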
arXiv Detail & Related papers (2025-07-09T14:58:54Z)
- Conformal Information Pursuit for Interactively Guiding Large Language Models [64.39770942422288]
This paper explores sequential querying strategies that aim to minimize the expected number of queries.
One such strategy is Information Pursuit (IP), a greedy algorithm that at each iteration selects the query that maximizes information gain or equivalently minimizes uncertainty.
We propose Conformal Information Pursuit (C-IP), an alternative approach to sequential information gain based on conformal prediction sets.
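A toy illustration of the greedy IP step in a discrete setting (not the paper's LLM or conformal variant): pick the binary query whose answer is expected to shrink the posterior entropy the most.

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    def next_query(prior, likelihood):
        """prior: (K,) belief over hypotheses; likelihood: (Q, K) gives
        P(answer = 1 | hypothesis) for each of Q candidate queries."""
        gains = []
        for q in range(likelihood.shape[0]):
            p1 = (likelihood[q] * prior).sum()                 # P(answer = 1)
            post1 = likelihood[q] * prior / max(p1, 1e-12)
            post0 = (1 - likelihood[q]) * prior / max(1 - p1, 1e-12)
            exp_h = p1 * entropy(post1) + (1 - p1) * entropy(post0)
            gains.append(entropy(prior) - exp_h)               # info gain
        return int(np.argmax(gains))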
arXiv Detail & Related papers (2025-07-04T03:55:39Z)
- Benign Overfitting in Out-of-Distribution Generalization of Linear Models [19.203753135860016]
We take an initial step towards understanding benign overfitting in the Out-of-Distribution (OOD) regime.
We provide non-asymptotic guarantees proving that benign overfitting occurs in standard ridge regression.
We also present theoretical results for a more general family of target covariance matrix.
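For reference, the ridge estimator these guarantees concern, and its minimum-norm interpolation limit, have the standard closed forms (textbook facts, not results of this paper):

    \hat{\beta}_{\lambda} = (X^{\top} X + \lambda I)^{-1} X^{\top} y,
    \qquad
    \hat{\beta}_{\min\text{-norm}} = \lim_{\lambda \to 0^{+}} \hat{\beta}_{\lambda} = X^{\dagger} y .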
arXiv Detail & Related papers (2024-12-19T02:47:39Z)
- An Efficient Rehearsal Scheme for Catastrophic Forgetting Mitigation during Multi-stage Fine-tuning [55.467047686093025]
A common approach to alleviate such forgetting is to rehearse samples from prior tasks during fine-tuning.
We propose a sampling scheme, mix-cd, that prioritizes rehearsal of "collateral damage" samples.
Our approach is computationally efficient, easy to implement, and outperforms several leading continual learning methods in compute-constrained settings.
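A generic rehearsal-mixing sketch in PyTorch; the uniform subsample below stands in for the paper's mix-cd priority, which this summary does not specify.

    import torch
    from torch.utils.data import DataLoader, TensorDataset, ConcatDataset

    def make_rehearsal_loader(current_ds, prior_ds, rehearsal_frac=0.2,
                              batch_size=32):
        """Mix the current task's TensorDataset with a subsample of a
        prior task's TensorDataset for rehearsal during fine-tuning."""
        n = int(rehearsal_frac * len(current_ds))
        idx = torch.randperm(len(prior_ds))[:n]   # uniform here; mix-cd
        replay = TensorDataset(*prior_ds[idx])    # would rank these samples
        mixed = ConcatDataset([current_ds, replay])
        return DataLoader(mixed, batch_size=batch_size, shuffle=True)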
arXiv Detail & Related papers (2024-02-12T22:32:12Z)
- Temporal Performance Prediction for Deep Convolutional Long Short-Term Memory Networks [0.0]
We present a temporal postprocessing method which estimates the prediction performance of convolutional long short-term memory networks.
To this end, we create temporal cell state-based input metrics per segment and investigate different models for the estimation of the predictive quality.
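As a rough illustration of cell-state-based metrics (the paper's exact per-segment definitions are not reproduced), one could summarize an LSTM's final cell state into a few statistics and feed those to a downstream quality estimator:

    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
    x = torch.randn(4, 10, 16)                # (batch, time, features)
    out, (h_n, c_n) = lstm(x)                 # c_n: (1, batch, hidden)

    metrics = torch.stack([
        c_n.squeeze(0).mean(dim=1),           # mean cell activation
        c_n.squeeze(0).std(dim=1),            # spread of cell activations
        c_n.squeeze(0).abs().max(dim=1).values,
    ], dim=1)                                 # (batch, 3) input features
                                              # for a quality regressor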
arXiv Detail & Related papers (2023-11-13T17:11:35Z)
- Large-scale Fully-Unsupervised Re-Identification [78.47108158030213]
We propose two strategies to learn from large-scale unlabeled data.
The first strategy performs a local neighborhood sampling to reduce the dataset size in each iteration without violating neighborhood relationships.
A second strategy leverages a novel Re-Ranking technique, which has a lower time upper bound complexity and reduces the memory complexity from O(n^2) to O(kn) with k ≪ n.
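The memory claim can be illustrated directly: storing only each sample's k nearest neighbours takes O(kn) memory instead of the O(n^2) full pairwise-distance matrix (an illustration of the complexity argument, not the paper's re-ranking algorithm).

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    feats = np.random.rand(10000, 128).astype(np.float32)
    k = 30
    nn_index = NearestNeighbors(n_neighbors=k).fit(feats)
    dists, idx = nn_index.kneighbors(feats)   # both (n, k): O(kn) storage,
                                              # never the (n, n) matrix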
arXiv Detail & Related papers (2023-07-26T16:19:19Z)
- Differentially Private Statistical Inference through $\beta$-Divergence One Posterior Sampling [2.8544822698499255]
We propose a posterior sampling scheme from a generalised posterior targeting the minimisation of the $\beta$-divergence between the model and the data generating process.
This provides private estimation that is generally applicable without requiring changes to the underlying model.
We show that $\beta$D-Bayes produces more precise inference estimation for the same privacy guarantees.
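For context, one common form of the $\beta$-divergence between densities $g$ and $f$ (the density power divergence, restated from the general literature up to normalisation conventions, not from this paper) is

    D_\beta(g \,\|\, f) = \int \left\{ \frac{1}{\beta}\, g(x)\big[g(x)^{\beta} - f(x)^{\beta}\big]
      - \frac{1}{\beta + 1}\big[g(x)^{\beta + 1} - f(x)^{\beta + 1}\big] \right\} dx .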
arXiv Detail & Related papers (2023-07-11T12:00:15Z)
- ZigZag: Universal Sampling-free Uncertainty Estimation Through Two-Step Inference [54.17205151960878]
We introduce a sampling-free approach that is generic and easy to deploy.
We produce reliable uncertainty estimates on par with state-of-the-art methods at a significantly lower computational cost.
arXiv Detail & Related papers (2022-11-21T13:23:09Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class.
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
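A generic importance-weighted ERM objective of this kind (notation assumed here for illustration) reweights each adaptively collected observation by the inverse of the propensity with which its action was chosen:

    \hat{h} \in \operatorname*{arg\,min}_{h \in \mathcal{H}}
    \frac{1}{T} \sum_{t=1}^{T} \frac{1}{e_t(A_t \mid X_t)}\, \ell\big(h; X_t, A_t, Y_t\big),

where $e_t$ is the probability with which the adaptive data-collection policy chose $A_t$ at time $t$.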
arXiv Detail & Related papers (2021-06-03T09:50:13Z)
- Universal Off-Policy Evaluation [64.02853483874334]
We take the first steps towards a universal off-policy estimator (UnO).
We use UnO for estimating and simultaneously bounding the mean, variance, quantiles/median, inter-quantile range, CVaR, and the entire cumulative distribution of returns.
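The starting point can be illustrated with the standard importance-weighted CDF estimate (a textbook-style restatement, not the paper's full bound construction):

    \hat{F}(y) = \frac{1}{n} \sum_{i=1}^{n} \rho_i \,\mathbf{1}\{G_i \le y\},
    \qquad
    \rho_i = \prod_{t} \frac{\pi\big(A_t^{(i)} \mid S_t^{(i)}\big)}{\beta\big(A_t^{(i)} \mid S_t^{(i)}\big)},

where $G_i$ is the return of the $i$-th behaviour-policy trajectory; the mean, variance, quantiles, and CVaR all then derive from $\hat{F}$.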
arXiv Detail & Related papers (2021-04-26T18:54:31Z)
- Overcoming Catastrophic Forgetting with Gaussian Mixture Replay [79.0660895390689]
We present a rehearsal-based approach for continual learning (CL) based on Gaussian Mixture Models (GMMs).
We mitigate catastrophic forgetting (CF) by generating samples from previous tasks and merging them with current training data.
We evaluate GMR on multiple image datasets, which are divided into class-disjoint sub-tasks.
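A minimal GMM-replay step with scikit-learn; the component count and sample sizes below are arbitrary illustrative choices.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    prev_X = np.random.randn(1000, 20)       # stand-in for a prior task's data
    gmm = GaussianMixture(n_components=10).fit(prev_X)

    replay_X, _ = gmm.sample(500)            # generated "memories"
    cur_X = np.random.randn(800, 20)         # current task's data
    train_X = np.vstack([cur_X, replay_X])   # mixed training set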
arXiv Detail & Related papers (2021-04-19T11:41:34Z)
- Rapid Risk Minimization with Bayesian Models Through Deep Learning Approximation [9.93116974480156]
We introduce a novel combination of Bayesian Models (BMs) and Neural Networks (NNs) for making predictions with a minimum expected risk.
Our approach combines the data efficiency and interpretability of a BM with the speed of a NN.
We achieve risk minimized predictions significantly faster than standard methods with a negligible loss on the testing dataset.
arXiv Detail & Related papers (2021-03-29T15:08:25Z)
- Dimensionality reduction, regularization, and generalization in overparameterized regressions [8.615625517708324]
We show that PCA-OLS, also known as principal component regression, can be avoided with a dimensionality reduction.
We show that dimensionality reduction improves robustness while OLS is arbitrarily susceptible to adversarial attacks.
We find that methods in which the projection depends on the training data can outperform methods where the projections are chosen independently of the training data.
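For reference, PCA-OLS (principal component regression) is straightforward to express with scikit-learn; this illustrates the baseline named above, not the paper's proposed projections.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    X = np.random.randn(200, 50)
    y = X[:, 0] + 0.1 * np.random.randn(200)

    # Project onto the top principal components, then run OLS there.
    pcr = make_pipeline(PCA(n_components=10), LinearRegression()).fit(X, y)
    y_hat = pcr.predict(X)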
arXiv Detail & Related papers (2020-11-23T15:38:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.