Churn prediction in online gambling
- URL: http://arxiv.org/abs/2201.02463v1
- Date: Fri, 7 Jan 2022 14:20:25 GMT
- Title: Churn prediction in online gambling
- Authors: Florian Merchie and Damien Ernst
- Abstract summary: This work contributes to the domain by formalizing the problem of churn prediction in the context of online gambling.
We propose an algorithmic answer to this problem based on recurrent neural networks.
This algorithm is tested with online gambling data that have the form of time series.
- Score: 4.523089386111081
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In business retention, churn prevention has always been a major concern. This
work contributes to this domain by formalizing the problem of churn prediction
in the context of online gambling as a binary classification task. We also
propose an algorithmic answer to this problem based on recurrent neural
networks. This algorithm is tested with online gambling data that have the form
of time series, which can be efficiently processed by recurrent neural
networks. To evaluate the performance of the trained models, standard machine
learning metrics were used, such as accuracy, precision and recall. For this
problem in particular, the experiments conducted show that the choice of a
specific architecture depends on which metric is given the greatest importance:
architectures using nBRC favour precision, those using LSTM give better recall,
while GRU-based architectures achieve higher accuracy and balance the other two
metrics. Moreover, further experiments showed that training the networks on
only the most recent time-series histories decreases the
quality of the results. We also study the performance of models learned at a
specific instant $t$ when evaluated at later times $t^{\prime} > t$. The results
show that the performance of the models learned at time $t$ remains good at the
following instants $t^{\prime} > t$, suggesting that there is no need to refresh
the models at a high rate. However, the performance of the models was subject to
noticeable variance due to one-off events impacting the data.
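To make the task formulation concrete, below is a minimal sketch (not the authors' implementation): a GRU-based recurrent classifier trained on per-player time-series histories and evaluated with the same standard metrics (accuracy, precision, recall). The feature count, sequence length, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of churn prediction as binary classification over time series.
# All names, shapes, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.metrics import accuracy_score, precision_score, recall_score

class ChurnGRU(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # single logit: churn vs. no churn

    def forward(self, x):                      # x: (batch, time, n_features)
        _, h = self.gru(x)                     # h: (1, batch, hidden_size)
        return self.head(h[-1]).squeeze(-1)    # (batch,) logits

# Toy stand-in for per-player activity histories (e.g. daily wagers, deposits).
X = torch.randn(256, 30, 8)              # 256 players, 30 time steps, 8 features
y = torch.randint(0, 2, (256,)).float()  # 1 = churned, 0 = retained

model = ChurnGRU(n_features=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):                   # short illustrative training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Standard metrics used in the paper to compare architectures.
with torch.no_grad():
    preds = (torch.sigmoid(model(X)) > 0.5).int().numpy()
labels = y.int().numpy()
print("accuracy :", accuracy_score(labels, preds))
print("precision:", precision_score(labels, preds, zero_division=0))
print("recall   :", recall_score(labels, preds, zero_division=0))
```

Swapping nn.GRU for nn.LSTM, or for a custom nBRC cell, would mirror the architecture comparison above, and recomputing these metrics on data snapshots taken at later instants $t^{\prime} > t$ would correspond to the staleness study described in the abstract.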
Related papers
- A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
arXiv Detail & Related papers (2024-02-02T01:41:38Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous-time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Model Architecture Adaption for Bayesian Neural Networks [9.978961706999833]
We show a novel network architecture search (NAS) approach that optimizes BNNs for both accuracy and uncertainty.
In our experiments, the searched models show uncertainty estimation and accuracy comparable to the state-of-the-art (deep ensembles).
arXiv Detail & Related papers (2022-02-09T10:58:50Z)
- KENN: Enhancing Deep Neural Networks by Leveraging Knowledge for Time Series Forecasting [6.652753636450873]
We propose a novel knowledge fusion architecture, Knowledge Enhanced Neural Network (KENN), for time series forecasting.
We show that KENN not only reduces the data dependency of the overall framework but also improves performance by producing predictions that are better than those produced by purely knowledge-driven or purely data-driven approaches.
arXiv Detail & Related papers (2022-02-08T14:47:47Z)
- Deep Calibration of Interest Rates Model [0.0]
Despite the growing use of Deep Learning, classic rate models such as CIR and the Gaussian family are still widely used.
In this paper, we propose to calibrate the five parameters of the G2++ model using Neural Networks.
arXiv Detail & Related papers (2021-10-28T14:08:45Z)
- Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that a likelihood-ratio loss with interarrival-time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z)
- Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy [42.15969584135412]
Neural network pruning is a popular technique used to reduce the inference costs of modern networks.
We evaluate whether the use of test accuracy alone in the terminating condition is sufficient to ensure that the resulting model performs well.
We find that pruned networks effectively approximate the unpruned model; however, the prune ratio at which pruned networks achieve commensurate performance varies significantly across tasks.
arXiv Detail & Related papers (2021-03-04T13:22:16Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Adjusting for Autocorrelated Errors in Neural Networks for Time Series Regression and Forecasting [10.659189276058948]
We learn the autocorrelation coefficient jointly with the model parameters in order to adjust for autocorrelated errors.
For time series regression, large-scale experiments indicate that our method outperforms the Prais-Winsten method.
Results across a wide range of real-world datasets show that our method enhances performance in almost all cases.
arXiv Detail & Related papers (2021-01-28T04:25:51Z)
- Temporal Calibrated Regularization for Robust Noisy Label Learning [60.90967240168525]
Deep neural networks (DNNs) exhibit great success on many tasks with the help of large-scale well annotated datasets.
However, labeling large-scale data can be very costly and error-prone, making it difficult to guarantee annotation quality.
We propose a Temporal Calibrated Regularization (TCR) in which we utilize the original labels and the predictions in the previous epoch together.
arXiv Detail & Related papers (2020-07-01T04:48:49Z)