Churn Prediction via Multimodal Fusion Learning: Integrating Customer
Financial Literacy, Voice, and Behavioral Data
- URL: http://arxiv.org/abs/2312.01301v1
- Date: Sun, 3 Dec 2023 06:28:55 GMT
- Title: Churn Prediction via Multimodal Fusion Learning: Integrating Customer
Financial Literacy, Voice, and Behavioral Data
- Authors: David Hason Rudd, Huan Huo, Md Rafiqul Islam, Guandong Xu
- Abstract summary: This paper proposes a multimodal fusion learning model for identifying customer churn risk levels in financial service providers.
Our approach integrates customer sentiments, financial literacy (FL) level, and financial behavioral data.
Our novel approach demonstrates a marked improvement in churn prediction, achieving a test accuracy of 91.2%, a Mean Average Precision (MAP) score of 66, and a Macro-Averaged F1 score of 54.
- Score: 14.948017876322597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In today's competitive landscape, businesses grapple with customer retention.
Churn prediction models, although beneficial, often lack accuracy due to the
reliance on a single data source. The intricate nature of human behavior and
high dimensional customer data further complicate these efforts. To address
these concerns, this paper proposes a multimodal fusion learning model for
identifying customer churn risk levels in financial service providers. Our
multimodal approach integrates customer sentiments, financial literacy (FL)
level, and financial behavioral data, enabling more accurate and bias-free
churn prediction models. The proposed FL model utilizes a SMOGN COREG
supervised model to gauge customer FL levels from their financial data. The
baseline churn model applies an ensemble artificial neural network and
oversampling techniques to predict churn propensity in high-dimensional
financial data. We also incorporate a speech emotion recognition model
employing a pre-trained CNN-VGG16 to recognize customer emotions based on
pitch, energy, and tone. To integrate these diverse features while retaining
unique insights, we introduce late and hybrid fusion techniques that
complementarily boost coordinated multimodal co-learning. Robust metrics,
including mean average precision and macro-averaged F1 score, were used to
evaluate the proposed multimodal fusion model and hence the validity of the
approach. Our novel approach demonstrates a marked improvement in churn
prediction, achieving a test accuracy of 91.2%, a Mean Average Precision (MAP)
score of 66, and a Macro-Averaged F1 score of 54 through the proposed hybrid
fusion learning technique compared with late fusion and baseline models.
Furthermore, the analysis demonstrates a positive correlation between negative
emotions, low FL scores, and high-risk customers.
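The abstract contrasts late fusion (each modality is classified separately and the predictions are combined) with hybrid fusion (modality features are merged into a joint representation before classification). A minimal sketch of that distinction, using numpy with random stand-in weights in place of the paper's trained ensemble ANN and CNN-VGG16 models; all feature names, dimensions, and the three churn-risk classes are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality inputs for one customer: behavioral features,
# a financial-literacy score, and a speech-emotion embedding.
behavior = rng.normal(size=8)
fl_score = np.array([0.3])      # illustrative low FL level
emotion = rng.normal(size=4)    # stand-in for a CNN-VGG16 embedding

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def modality_head(x, n_classes=3, seed=0):
    # Stand-in for a trained classifier: one random linear layer
    # followed by softmax over churn-risk levels (low/medium/high).
    w = np.random.default_rng(seed).normal(size=(x.size, n_classes))
    return softmax(x @ w)

# Late fusion: each modality predicts independently, then the
# class-probability vectors are averaged.
late = np.mean(
    [modality_head(behavior, seed=1),
     modality_head(fl_score, seed=2),
     modality_head(emotion, seed=3)],
    axis=0,
)

# Hybrid fusion: concatenate modality features into one joint
# representation and classify once, so cross-modal interactions
# (e.g. negative emotion combined with low FL) can be learned.
joint = np.concatenate([behavior, fl_score, emotion])
hybrid = modality_head(joint, seed=4)

print("late fusion probs:", late.round(3))
print("hybrid fusion probs:", hybrid.round(3))
```

Both routes yield a probability distribution over risk levels; the paper's finding is that the hybrid route, which lets the model exploit cross-modal interactions, outperforms late fusion and the unimodal baselines.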
Related papers
- IMFL-AIGC: Incentive Mechanism Design for Federated Learning Empowered by Artificial Intelligence Generated Content [15.620004060097155]
Federated learning (FL) has emerged as a promising paradigm that enables clients to collaboratively train a shared global model without uploading their local data.
We propose a data quality-aware incentive mechanism to encourage clients' participation.
Our proposed mechanism exhibits the highest training accuracy and reduces the server's cost by up to 53.34% on real-world datasets.
arXiv Detail & Related papers (2024-06-12T07:47:22Z) - Design of reliable technology valuation model with calibrated machine learning of patent indicators [14.31250748501038]
We propose an analytical framework for reliable technology valuation using calibrated ML models.
We extract quantitative patent indicators that represent various technology characteristics as input data.
arXiv Detail & Related papers (2024-06-08T11:52:37Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks under extensive sources of heterogeneity.
It achieves substantial and consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - FedCliP: Federated Learning with Client Pruning [3.796320380104124]
Federated learning (FL) is a newly emerging distributed learning paradigm.
One fundamental bottleneck in FL is the heavy communication overheads between the distributed clients and the central server.
We propose FedCliP, the first communication efficient FL training framework from a macro perspective.
arXiv Detail & Related papers (2023-01-17T09:15:37Z) - Federated Learning under Heterogeneous and Correlated Client
Availability [10.05687757555923]
This paper presents the first convergence analysis for a FedAvg-like FL algorithm under heterogeneous and correlated client availability.
We propose CA-Fed, a new FL algorithm that tries to balance the conflicting goals of maximizing convergence speed and minimizing model bias.
Our experimental results show that CA-Fed achieves higher time-average accuracy and a lower standard deviation than state-of-the-art AdaFed and F3AST.
arXiv Detail & Related papers (2023-01-11T18:38:48Z) - Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated
Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and biasness of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate superiority of FedVRA over the existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z) - RF-LighGBM: A probabilistic ensemble way to predict customer repurchase
behaviour in community e-commerce [8.750970436444083]
The number of online payment users in China has reached 854 million.
With the emergence of community e-commerce platforms, the trend of integration of e-commerce and social applications is increasingly intense.
This paper uses the data-driven method to study the prediction of community e-commerce customers' repurchase behaviour.
arXiv Detail & Related papers (2021-09-02T05:38:16Z) - Test-time Collective Prediction [73.74982509510961]
Multiple parties in machine learning want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z) - Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.