Causal Analysis of Customer Churn Using Deep Learning
- URL: http://arxiv.org/abs/2304.10604v1
- Date: Thu, 20 Apr 2023 18:56:13 GMT
- Title: Causal Analysis of Customer Churn Using Deep Learning
- Authors: David Hason Rudd, Huan Huo, Guandong Xu
- Abstract summary: Customer churn describes terminating a relationship with a business or reducing customer engagement over a specific period.
This paper proposes a framework using a deep feedforward neural network for classification.
We also propose a causal Bayesian network to predict cause probabilities that lead to customer churn.
- Score: 9.84528076130809
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Customer churn describes terminating a relationship with a business or
reducing customer engagement over a specific period. Two main business
marketing strategies play vital roles in increasing market-share dollar value:
gaining new customers and preserving existing ones. Customer acquisition can
cost five to six times as much as customer retention, so investing in customers
at risk of churn is smart. Causal analysis of the churn model can predict
whether a customer will churn in the foreseeable future and assist enterprises
in identifying the effects and possible causes of churn, and in using that
knowledge to apply tailored incentives. This paper proposes a framework using a
deep feedforward neural network for classification accompanied by a sequential
pattern mining method on high-dimensional sparse data. We also propose a causal
Bayesian network to predict cause probabilities that lead to customer churn.
Evaluation metrics on test data confirm that XGBoost and our deep learning
model outperformed previous techniques. Experimental analysis confirms that
independent causal variables representing the level of super guarantee
contribution, account growth, and customer tenure were identified, with a high
degree of belief, as confounding factors for customer churn. This paper
provides a real-world customer churn analysis from current status inference to
future directions in local superannuation funds.
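As an illustration of the kind of pipeline the abstract describes, the sketch below trains a small deep feedforward classifier on tabular churn features. It is a minimal sketch only: the layer sizes, dropout rate, feature count, and the choice of Keras are assumptions for illustration, not the architecture reported in the paper.

```python
# Minimal sketch of a deep feedforward churn classifier (illustrative only).
# Layer sizes, dropout, and feature count are assumed, not taken from the paper.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 40  # e.g. features produced by sequential pattern mining

model = keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # predicted churn probability
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[keras.metrics.AUC(name="auc")])

# Stand-in data; in practice X_train/y_train come from the customer table.
X_train = np.random.rand(1000, n_features).astype("float32")
y_train = np.random.randint(0, 2, size=1000)
model.fit(X_train, y_train, epochs=5, batch_size=64, validation_split=0.2)
```

The causal part of the framework can be pictured as a discrete Bayesian network over candidate churn drivers. The sketch below uses the pgmpy BayesianNetwork API with a hand-specified graph; the variable names (sg_contribution, account_growth, tenure) and the edge structure are hypothetical placeholders inspired by the confounders named in the abstract, not the network learned in the paper.

```python
# Hypothetical causal Bayesian network over churn drivers (pgmpy sketch).
# Graph structure, variable names, and data are illustrative assumptions.
import numpy as np
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Assumed structure: each driver has a direct edge into churn.
model = BayesianNetwork([
    ("sg_contribution", "churn"),
    ("account_growth", "churn"),
    ("tenure", "churn"),
])

# Toy discretised data standing in for the real superannuation dataset.
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "sg_contribution": rng.integers(0, 2, 5000),
    "account_growth": rng.integers(0, 2, 5000),
    "tenure": rng.integers(0, 3, 5000),
    "churn": rng.integers(0, 2, 5000),
})

# Estimate conditional probability tables, then query churn given evidence.
model.fit(data, estimator=MaximumLikelihoodEstimator)
infer = VariableElimination(model)
print(infer.query(["churn"], evidence={"account_growth": 0}))
```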
Related papers
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
arXiv Detail & Related papers (2024-05-22T12:27:24Z) - Language Models Can Reduce Asymmetry in Information Markets [100.38786498942702]
We introduce an open-source simulated digital marketplace where intelligent agents, powered by language models, buy and sell information on behalf of external participants.
The central mechanism enabling this marketplace is the agents' dual capabilities: they have the capacity to assess the quality of privileged information but also come equipped with the ability to forget.
To perform well, agents must make rational decisions, strategically explore the marketplace through generated sub-queries, and synthesize answers from purchased information.
arXiv Detail & Related papers (2024-03-21T14:48:37Z) - Early Churn Prediction from Large Scale User-Product Interaction Time Series [0.0]
This paper conducts an exhaustive study on predicting user churn using historical data.
We aim to create a model that forecasts customer churn likelihood, helping businesses understand attrition trends and formulate effective retention plans.
arXiv Detail & Related papers (2023-09-25T08:44:32Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - Improved Churn Causal Analysis Through Restrained High-Dimensional Feature Space Effects in Financial Institutions [9.84528076130809]
Customer churn describes terminating a relationship with a business or reducing customer engagement over a specific period.
Customer acquisition cost can be five to six times that of customer retention, hence investing in customers with churn risk is wise.
This study presents a conceptual framework to discover the confounding features that correlate with independent variables and are causally related to those dependent variables that impact churn.
arXiv Detail & Related papers (2023-04-23T00:45:35Z) - Customer Churn Prediction Model using Explainable Machine Learning [0.0]
The key objective of the paper is to develop a customer churn prediction model that can help identify potential customers who are most likely to churn.
We evaluated and analyzed the performance of various tree-based machine learning approaches and algorithms.
To improve model explainability and transparency, the paper proposes a novel approach to calculating Shapley values for possible combinations of features (a generic Shapley-value sketch appears after this list).
arXiv Detail & Related papers (2023-03-02T04:45:57Z) - Estimating defection in subscription-type markets: empirical analysis from the scholarly publishing industry [0.0]
We present the first empirical study on customer churn prediction in the scholarly publishing industry.
The study examines our proposed method for prediction on customer subscription data over a period of 6.5 years.
We show that this approach can be both accurate as well as uniquely useful in the business-to-business context.
arXiv Detail & Related papers (2022-11-18T01:29:51Z) - Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z) - Supporting Financial Inclusion with Graph Machine Learning and Super-App Alternative Data [63.942632088208505]
Super-Apps have changed the way we think about the interactions between users and commerce.
This paper investigates how different interactions between users within a Super-App provide a new source of information to predict borrower behavior.
arXiv Detail & Related papers (2021-02-19T15:13:06Z) - On the Reproducibility of Neural Network Predictions [52.47827424679645]
We study the problem of churn, identify factors that cause it, and propose two simple means of mitigating it.
We first demonstrate that churn is indeed an issue, even for standard image classification tasks.
We propose using minimum-entropy regularizers to increase prediction confidences.
We present empirical results showing the effectiveness of both techniques in reducing churn while improving the accuracy of the underlying model.
arXiv Detail & Related papers (2021-02-05T18:51:01Z) - Graph Neural Networks to Predict Customer Satisfaction Following Interactions with a Corporate Call Center [6.4047628200011815]
This work describes a fully operational system for predicting customer satisfaction following incoming phone calls.
The system takes as an input speech-to-text transcriptions of calls and predicts call satisfaction reported by customers on post-call surveys.
arXiv Detail & Related papers (2021-01-31T10:13:57Z)
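For the explainable-churn-model entry above, the general pattern of attributing a tree-based churn prediction to individual features with Shapley values looks roughly like the sketch below. This is a plain illustration using the shap and xgboost libraries with made-up feature names; it is not the combination-specific Shapley approach proposed in that paper.

```python
# Generic Shapley-value attribution for a tree-based churn model (illustrative).
# Feature names and data are hypothetical; this is not the cited paper's method.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "tenure_months": rng.integers(1, 120, 2000),
    "monthly_spend": rng.uniform(10, 200, 2000),
    "support_tickets": rng.integers(0, 10, 2000),
})
y = rng.integers(0, 2, 2000)  # stand-in churn labels

model = xgb.XGBClassifier(n_estimators=100, max_depth=4)
model.fit(X, y)

# TreeExplainer gives per-feature Shapley contributions for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(shap_values[:3])  # contributions for the first three customers
```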