Combining Structural and Unstructured Data: A Topic-based Finite Mixture Model for Insurance Claim Prediction
- URL: http://arxiv.org/abs/2410.04684v1
- Date: Mon, 7 Oct 2024 01:37:07 GMT
- Title: Combining Structural and Unstructured Data: A Topic-based Finite Mixture Model for Insurance Claim Prediction
- Authors: Yanxi Hou, Xiaolan Xia, Guangyuan Gao
- Abstract summary: This paper introduces a novel approach by developing a joint mixture model that integrates both claim descriptions and claim amounts.
Our method establishes a probabilistic link between textual descriptions and loss amounts, enhancing the accuracy of claims clustering and prediction.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling insurance claim amounts and classifying claims into different risk levels are critical yet challenging tasks. Traditional predictive models for insurance claims often overlook the valuable information embedded in claim descriptions. This paper introduces a novel approach by developing a joint mixture model that integrates both claim descriptions and claim amounts. Our method establishes a probabilistic link between textual descriptions and loss amounts, enhancing the accuracy of claims clustering and prediction. In our proposed model, the latent topic/component indicator serves as a proxy for both the thematic content of the claim description and the component of loss distributions. Specifically, conditioned on the topic/component indicator, the claim description follows a multinomial distribution, while the claim amount follows a component loss distribution. We propose two methods for model calibration: an EM algorithm for maximum a posteriori estimates, and an MH-within-Gibbs sampler algorithm for the posterior distribution. The empirical study demonstrates that the proposed methods work effectively, providing interpretable claims clustering and prediction.
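The generative structure described in the abstract can be written compactly. The following is a sketch of the implied joint density and the E-step responsibilities; the symbols ($K$ components, mixing weights $\pi_k$, topic-word parameters $\boldsymbol{\beta}_k$, and loss parameters $\boldsymbol{\theta}_k$) are our notation, not the paper's, and the component loss family is left abstract as $f_k$:

$$
p(\mathbf{w}_i, y_i) = \sum_{k=1}^{K} \pi_k \,\mathrm{Mult}(\mathbf{w}_i \mid \boldsymbol{\beta}_k)\, f_k(y_i \mid \boldsymbol{\theta}_k),
\qquad
r_{ik} = \frac{\pi_k \,\mathrm{Mult}(\mathbf{w}_i \mid \boldsymbol{\beta}_k)\, f_k(y_i \mid \boldsymbol{\theta}_k)}{\sum_{j=1}^{K} \pi_j \,\mathrm{Mult}(\mathbf{w}_i \mid \boldsymbol{\beta}_j)\, f_j(y_i \mid \boldsymbol{\theta}_j)},
$$

where $\mathbf{w}_i$ is the word-count vector of claim description $i$ and $y_i$ its claim amount. A single latent indicator $z_i = k$ drives both factors, which is what lets the text inform the loss clustering.

Below is a minimal maximum-likelihood EM sketch of this idea, assuming gamma-distributed component losses and a moment-matched M-step for the gamma parameters. The paper's actual loss family, priors, and MAP penalties are not reproduced here, and `em_topic_mixture` is a hypothetical helper, not the authors' code:

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import gamma

def em_topic_mixture(W, y, K, n_iter=100, smooth=1e-3, seed=0):
    """EM for a joint mixture: rows of W (n x V word counts) are multinomial
    given the component, and losses y (n,) are gamma given the component."""
    rng = np.random.default_rng(seed)
    n, V = W.shape
    pi = np.full(K, 1.0 / K)                  # mixing weights
    beta = rng.dirichlet(np.ones(V), size=K)  # component word distributions
    shape = np.ones(K)                        # gamma shape per component
    scale = np.full(K, y.mean())              # gamma scale per component
    for _ in range(n_iter):
        # E-step: responsibilities combine text and loss log-likelihoods
        # (multinomial coefficient dropped: it is constant across components)
        log_r = (np.log(pi)
                 + W @ np.log(beta).T
                 + gamma.logpdf(y[:, None], shape, scale=scale))
        log_r -= logsumexp(log_r, axis=1, keepdims=True)
        r = np.exp(log_r)
        # M-step: responsibility-weighted parameter updates
        nk = r.sum(axis=0)
        pi = nk / n
        beta = r.T @ W + smooth               # smoothing avoids log(0)
        beta /= beta.sum(axis=1, keepdims=True)
        mean = (r * y[:, None]).sum(axis=0) / nk
        var = (r * (y[:, None] - mean) ** 2).sum(axis=0) / nk
        shape, scale = mean**2 / var, var / mean  # moment matching, not MLE
    return pi, beta, shape, scale, r
```

A fitted model of this form clusters a new claim by its posterior responsibilities and predicts its amount as the responsibility-weighted mixture of component means, e.g. `(r_new * shape * scale).sum()` under the gamma assumption above.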
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- $\mu$GUIDE: a framework for quantitative imaging via generalized uncertainty-driven inference using deep learning [0.0]
$\mu$GUIDE estimates posterior distributions of tissue microstructure parameters from any given biophysical model or MRI signal representation.
The obtained posterior distributions make it possible to highlight degeneracies in the model definition and to quantify the uncertainty and ambiguity of the estimated parameters.
arXiv Detail & Related papers (2023-12-28T13:59:43Z)
- Calibration of Time-Series Forecasting: Detecting and Adapting Context-Driven Distribution Shift [28.73747033245012]
We introduce a universal calibration methodology for the detection and adaptation of context-driven distribution shifts.
A novel CDS detector, termed the "residual-based CDS detector" or "Reconditionor", quantifies the model's vulnerability to CDS.
A high Reconditionor score indicates severe susceptibility, thereby necessitating model adaptation.
arXiv Detail & Related papers (2023-10-23T11:58:01Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Towards Better Certified Segmentation via Diffusion Models [62.21617614504225]
Segmentation models can be vulnerable to adversarial perturbations, which hinders their use in critical decision systems such as healthcare or autonomous driving.
Recently, randomized smoothing has been proposed to certify segmentation predictions by adding Gaussian noise to the input to obtain theoretical guarantees.
In this paper, we address the problem of certifying segmentation prediction using a combination of randomized smoothing and diffusion models.
arXiv Detail & Related papers (2023-06-16T16:30:39Z)
- Bayesian CART models for insurance claims frequency [0.0]
Classification and regression trees (CARTs) and their ensembles have gained popularity in the actuarial literature.
We introduce Bayesian CART models for insurance pricing, with a particular focus on claims frequency modelling.
Some simulations and real insurance data will be discussed to illustrate the applicability of these models.
arXiv Detail & Related papers (2023-03-03T13:48:35Z)
- Bayesian Quantification with Black-Box Estimators [1.599072005190786]
Approaches like adjusted classify and count, black-box shift estimators, and invariant ratio estimators use an auxiliary (and potentially biased) black-box classifier to estimate the class distribution and yield guarantees under weak assumptions.
We demonstrate that all these algorithms are closely related to inference in a particular Bayesian chain model, which approximates the assumed ground-truth generative process.
Then, we discuss an efficient Markov chain Monte Carlo sampling scheme for the introduced model and show a consistency guarantee in the large-data limit.
arXiv Detail & Related papers (2023-02-17T22:10:04Z)
- Evaluating Aleatoric Uncertainty via Conditional Generative Models [15.494774321257939]
We study conditional generative models for aleatoric uncertainty estimation.
We introduce two metrics to measure the discrepancy between two conditional distributions.
We demonstrate numerically how our metrics provide correct measurements of conditional distributional discrepancies.
arXiv Detail & Related papers (2022-06-09T05:39:04Z)
- BRIO: Bringing Order to Abstractive Summarization [107.97378285293507]
We propose a novel training paradigm that assumes a non-deterministic distribution over candidate summaries.
Our method achieves a new state-of-the-art result on the CNN/DailyMail (47.78 ROUGE-1) and XSum (49.07 ROUGE-1) datasets.
arXiv Detail & Related papers (2022-03-31T05:19:38Z)
- Self-Certifying Classification by Linearized Deep Assignment [65.0100925582087]
We propose a novel class of deep predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm.
Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables learning posterior distributions on the hypothesis space.
arXiv Detail & Related papers (2022-01-26T19:59:14Z)
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z)