Clustering in Recurrent Neural Networks for Micro-Segmentation using
Spending Personality
- URL: http://arxiv.org/abs/2109.09425v1
- Date: Mon, 20 Sep 2021 11:06:58 GMT
- Title: Clustering in Recurrent Neural Networks for Micro-Segmentation using
Spending Personality
- Authors: Charl Maree, Christian W. Omlin
- Abstract summary: Fine-grained customer segments are notoriously elusive and one method of obtaining them is through feature extraction.
We consider both temporal and non-sequential models, using long short-term memory (LSTM) and feed-forward neural networks, respectively.
We show that classification using these extracted features performs at least as well as bespoke models on two common metrics, namely loan default rate and customer liquidity index.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Customer segmentation has long been a productive field in banking. However,
with new approaches to traditional problems come new opportunities.
Fine-grained customer segments are notoriously elusive and one method of
obtaining them is through feature extraction. It is possible to assign
coefficients of standard personality traits to financial transaction classes
aggregated over time. However, we have found that the clusters formed are not
sufficiently discriminatory for micro-segmentation. In this study, we extract
temporal features with continuous values from the hidden states of neural
networks predicting customers' spending personality from their financial
transactions. We consider both temporal and non-sequential models, using long
short-term memory (LSTM) and feed-forward neural networks, respectively. We
found that recurrent neural networks produce micro-segments where feed-forward
networks produce only coarse segments. Finally, we show that classification
using these extracted features performs at least as well as bespoke models on
two common metrics, namely loan default rate and customer liquidity index.
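The feature-extraction idea described in the abstract can be sketched as follows. This is a minimal illustration with hypothetical data and untrained weights, not the authors' implementation: transaction sequences are run through an LSTM, the final hidden state is taken as a continuous temporal feature vector, and those vectors are clustered to form segments. The input dimensions, hidden size, and number of clusters below are arbitrary choices for the sketch.

```python
# Sketch: extract temporal features from LSTM hidden states, then cluster.
# Weights are random stand-ins; the paper uses a network trained to predict
# spending personality from financial transactions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell forward pass (untrained, for illustration)."""
    def __init__(self, n_in, n_hidden):
        s = 1.0 / np.sqrt(n_hidden)
        self.W = rng.uniform(-s, s, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.n_hidden = n_hidden

    def run(self, seq):
        h = np.zeros(self.n_hidden)
        c = np.zeros(self.n_hidden)
        for x in seq:
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)          # input, forget, cell, output gates
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return h  # final hidden state = extracted temporal feature vector

def kmeans(X, k, iters=50):
    """Plain k-means; any clustering method could stand in here."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# 100 hypothetical customers, 12 time steps of aggregated transaction
# features (e.g. spend per class), 6 input features per step.
sequences = rng.normal(size=(100, 12, 6))
cell = LSTMCell(n_in=6, n_hidden=16)
features = np.stack([cell.run(seq) for seq in sequences])
segments = kmeans(features, k=5)
print(features.shape, segments.shape)
```

In the paper the clusters of hidden-state features serve as micro-segments; with a feed-forward network in place of the LSTM, the analogous features come from a hidden layer's activations rather than a recurrent state.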
Related papers
- Shortcut Features as Top Eigenfunctions of NTK: A Linear Neural Network Case and More [10.601167538666902]
We analyzed the case of linear neural networks to derive some important properties of shortcut learning.
We found that shortcut features correspond to features with larger eigenvalues when the shortcuts stem from the imbalanced number of samples in the clustered distribution.
We also showed that the features with larger eigenvalues still have a large influence on the neural network output even after training, due to data variances in the clusters.
arXiv Detail & Related papers (2026-02-03T03:50:18Z) - Bridging Neural Networks and Dynamic Time Warping for Adaptive Time Series Classification [2.443957114877221]
We develop a versatile model that adapts to cold-start conditions and becomes trainable with labeled data.
As a neural network, it becomes trainable when sufficient labeled data is available, while still retaining DTW's inherent interpretability.
arXiv Detail & Related papers (2025-07-13T23:15:21Z) - Explainable AI for Fraud Detection: An Attention-Based Ensemble of CNNs, GNNs, and A Confidence-Driven Gating Mechanism [5.486205584465161]
This study presents a new stacking-based approach for CCF detection by adding two extra layers to the usual classification process.
In the attention layer, we combine soft outputs from a convolutional neural network (CNN) and a recurrent neural network (RNN) using the dependent ordered weighted averaging (DOWA) operator.
In the confidence-based layer, we select whichever aggregate (DOWA or IOWA) shows lower uncertainty to feed into a meta-learner.
Experiments on three datasets show that our method achieves high accuracy and robust generalization, making it effective for CCF detection.
arXiv Detail & Related papers (2024-10-01T09:56:23Z) - Neural networks for insurance pricing with frequency and severity data: a benchmark study from data preprocessing to technical tariff [2.4578723416255754]
We present a benchmark study on four insurance data sets with frequency and severity targets in the presence of multiple types of input features.
We compare in detail the performance of a generalized linear model on binned input data, a gradient-boosted tree model, a feed-forward neural network (FFNN), and the combined actuarial neural network (CANN).
arXiv Detail & Related papers (2023-10-19T12:00:33Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Bayesian Neural Networks for Macroeconomic Analysis [0.0]
We develop Bayesian neural networks (BNNs) that are well-suited for handling datasets commonly used for macroeconomic analysis in policy institutions.
Our approach avoids extensive specification searches through a novel mixture specification for the activation function.
We show that our BNNs produce precise density forecasts, typically better than those from other machine learning methods.
arXiv Detail & Related papers (2022-11-09T09:10:57Z) - Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z) - Discovering Novel Customer Features with Recurrent Neural Networks for
Personality Based Financial Services [0.0]
The micro-segmentation of customers in the finance sector is a non-trivial task and has been an atypical omission from recent scientific literature.
Where traditional segmentation classifies customers based on coarse features such as demographics, micro-segmentation depicts more nuanced differences between individuals.
Although ubiquitous in many industries, the proliferation of AI in sensitive industries such as finance has become contingent on the imperatives of responsible AI.
arXiv Detail & Related papers (2021-09-24T10:32:36Z) - Bilinear Input Normalization for Neural Networks in Financial
Forecasting [101.89872650510074]
We propose a novel data-driven normalization method for deep neural networks that handle high-frequency financial time-series.
The proposed normalization scheme takes into account the bimodal characteristic of financial time-series.
Our experiments, conducted with state-of-the-arts neural networks and high-frequency data, show significant improvements over other normalization techniques.
arXiv Detail & Related papers (2021-09-01T07:52:03Z) - Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks could meet the constraints.
A key ingredient of building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z) - ReMarNet: Conjoint Relation and Margin Learning for Small-Sample Image
Classification [49.87503122462432]
We introduce a novel neural network termed Relation-and-Margin learning Network (ReMarNet)
Our method assembles two networks of different backbones so as to learn the features that can perform excellently in both of the aforementioned two classification mechanisms.
Experiments on four image datasets demonstrate that our approach is effective in learning discriminative features from a small set of labeled samples.
arXiv Detail & Related papers (2020-06-27T13:50:20Z) - Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long term bonds as test assets, we investigate neural networks.
We find that our networks perform significantly worse when fed substantially less data.
arXiv Detail & Related papers (2020-05-04T17:41:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.