Telematics Combined Actuarial Neural Networks for Cross-Sectional and
Longitudinal Claim Count Data
- URL: http://arxiv.org/abs/2308.01729v2
- Date: Sun, 3 Dec 2023 16:11:59 GMT
- Title: Telematics Combined Actuarial Neural Networks for Cross-Sectional and
Longitudinal Claim Count Data
- Authors: Francis Duval, Jean-Philippe Boucher, Mathieu Pigeon
- Abstract summary: We present novel cross-sectional and longitudinal claim count models for vehicle insurance built upon the Combined Actuarial Neural Network (CANN) framework proposed by Mario Wüthrich and Michael Merz.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present novel cross-sectional and longitudinal claim count models for
vehicle insurance built upon the Combined Actuarial Neural Network (CANN)
framework proposed by Mario Wüthrich and Michael Merz. The CANN approach
combines a classical actuarial model, such as a generalized linear model, with
a neural network. This blending of models results in a two-component model
comprising a classical regression model and a neural network part. The CANN
model leverages the strengths of both components, providing a solid foundation
and interpretability from the classical model while harnessing the flexibility
and capacity to capture intricate relationships and interactions offered by the
neural network. In our proposed models, we use well-known log-linear claim
count regression models for the classical regression part and a multilayer
perceptron (MLP) for the neural network part. The MLP part is used to process
telematics car driving data given as a vector characterizing the driving
behavior of each insured driver. In addition to the Poisson and negative
binomial distributions for cross-sectional data, we propose a procedure for
training our CANN model with a multivariate negative binomial (MVNB)
specification. By doing so, we introduce a longitudinal model that accounts for
the dependence between contracts from the same insured. Our results reveal that
the CANN models exhibit superior performance compared to log-linear models that
rely on manually engineered telematics features.
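To make the model structure concrete, the sketch below illustrates the CANN idea described above: the classical log-linear (Poisson GLM) predictor and an MLP applied to the telematics vector are added on the canonical log scale, so the GLM acts as an offset/skip connection. All dimensions, coefficient values, and layer sizes are illustrative assumptions, not the paper's actual hyperparameters, and the coefficients would normally come from fitted models rather than random draws.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 5 classical rating factors, 70-dim telematics vector.
n, p_classical, p_tele = 8, 5, 70
X = rng.normal(size=(n, p_classical))     # classical covariates (age, region, ...)
T = rng.normal(size=(n, p_tele))          # telematics driving-behavior vector
exposure = rng.uniform(0.5, 1.0, size=n)  # contract duration in years

# Classical part: log-linear claim count regression predictor.
beta0, beta = -2.0, rng.normal(scale=0.1, size=p_classical)
glm_predictor = beta0 + X @ beta

# Neural part: a one-hidden-layer MLP on the telematics vector.
W1 = rng.normal(scale=0.1, size=(p_tele, 16))
b1 = np.zeros(16)
w2 = rng.normal(scale=0.1, size=16)

def mlp(t):
    h = np.maximum(t @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ w2                     # scalar adjustment on the log scale

# CANN combination: the two predictors are summed on the log scale, with
# log-exposure as the usual count-model offset.
log_mu = np.log(exposure) + glm_predictor + mlp(T)
mu = np.exp(log_mu)  # expected claim count per contract, always positive
```

Under a Poisson specification, `mu` would parameterize the claim count distribution for each contract; the negative binomial and MVNB variants change the distribution (and the training procedure) but keep the same additive log-scale structure.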
Related papers
- Analyzing Populations of Neural Networks via Dynamical Model Embedding [10.455447557943463]
A core challenge in the interpretation of deep neural networks is identifying commonalities between the underlying algorithms implemented by distinct networks trained for the same task.
Motivated by this problem, we introduce DYNAMO, an algorithm that constructs a low-dimensional manifold where each point corresponds to a neural network model, and two points are nearby if the corresponding neural networks enact similar high-level computational processes.
DYNAMO takes as input a collection of pre-trained neural networks and outputs a meta-model that emulates the dynamics of the hidden states as well as the outputs of any model in the collection.
arXiv Detail & Related papers (2023-02-27T19:00:05Z)
- Contextually Enhanced ES-dRNN with Dynamic Attention for Short-Term Load
Forecasting [1.1602089225841632]
The proposed model is composed of two simultaneously trained tracks: the context track and the main track.
The RNN architecture consists of multiple recurrent layers stacked with hierarchical dilations and equipped with recently proposed attentive recurrent cells.
The model produces both point forecasts and predictive intervals.
arXiv Detail & Related papers (2022-12-18T07:42:48Z)
- A Statistical-Modelling Approach to Feedforward Neural Network Model Selection [0.8287206589886881]
Feedforward neural networks (FNNs) can be viewed as non-linear regression models.
A novel model selection method is proposed using the Bayesian information criterion (BIC) for FNNs.
The choice of BIC over out-of-sample performance leads to an increased probability of recovering the true model.
arXiv Detail & Related papers (2022-07-09T11:07:04Z)
- An advanced spatio-temporal convolutional recurrent neural network for
storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for feature extraction of two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z)
- Regularized Sequential Latent Variable Models with Adversarial Neural
Networks [33.74611654607262]
We present different ways of using high-level latent random variables in RNNs to model the variability in sequential data.
We explore possible ways of using adversarial methods to train a variational RNN model.
arXiv Detail & Related papers (2021-08-10T08:05:14Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- A Driving Behavior Recognition Model with Bi-LSTM and Multi-Scale CNN [59.57221522897815]
We propose a neural network model based on trajectories information for driving behavior recognition.
We evaluate the proposed model on the public BLVD dataset, achieving satisfactory performance.
arXiv Detail & Related papers (2021-03-01T06:47:29Z)
- A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
arXiv Detail & Related papers (2020-10-27T17:56:14Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.