A New Transformation Approach for Uplift Modeling with Binary Outcome
- URL: http://arxiv.org/abs/2310.05549v1
- Date: Mon, 9 Oct 2023 09:17:52 GMT
- Title: A New Transformation Approach for Uplift Modeling with Binary Outcome
- Authors: Kun Li, Jiang Tian and Xiaojia Xiang
- Abstract summary: Uplift modeling is a machine learning technique that predicts the gain from performing some action with respect to not taking it.
In this paper, we design a novel transformed outcome for the case of the binary target variable and unlock the full value of the samples with zero outcome.
Our new approach has already been applied to precision marketing in a nationwide financial holdings group in China.
- Score: 7.828300476533517
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Uplift modeling has been used effectively in fields such as marketing and
customer retention, to target those customers who are more likely to respond
due to the campaign or treatment. Essentially, it is a machine learning
technique that predicts the gain from performing some action with respect to
not taking it. A popular class of uplift models is the transformation approach
that redefines the target variable with the original treatment indicator. These
transformation approaches only need to train and predict the difference in
outcomes directly. The main drawback of these approaches is that they generally
do not use the information in the treatment indicator beyond constructing the
transformed outcome, and they are often statistically inefficient. In this paper, we
design a novel transformed outcome for the case of the binary target variable
and unlock the full value of the samples with zero outcome. From a practical
perspective, our new approach is flexible and easy to use. Experimental results
on synthetic and real-world datasets clearly show that our new approach
outperforms the traditional one. Our new approach has already been applied to
precision marketing in a nationwide financial holdings group in China.
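The traditional transformation the abstract critiques can be sketched as follows. This is a minimal illustration on hypothetical synthetic data: the dataset, the gradient-boosting regressor, and the propensity estimate are all assumptions, and the paper's own novel transformed outcome is not specified here, only the classic one it improves upon.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 10_000
w = rng.integers(0, 2, n)               # treatment indicator (randomized)
x = rng.normal(size=(n, 3))             # covariates
# synthetic binary outcome whose treatment effect depends on x[:, 0]
p_outcome = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * w * (x[:, 0] > 0))))
y = rng.binomial(1, p_outcome)

p = w.mean()                            # estimated propensity score
# classic transformed outcome: E[z | x] equals the uplift tau(x)
z = y * (w - p) / (p * (1 - p))

# regress the transformed outcome on covariates to predict uplift directly
model = GradientBoostingRegressor().fit(x, z)
uplift_hat = model.predict(x)
```

Note that every sample with y = 0 maps to z = 0 regardless of its treatment assignment, which is precisely the "samples with zero outcome" whose information the abstract says its new transformation unlocks.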
Related papers
- Novel Saliency Analysis for the Forward Forward Algorithm [0.0]
We introduce the Forward Forward algorithm into neural network training.
This method involves executing two forward passes: the first with actual data to promote positive reinforcement, and the second with synthetically generated negative data to enable discriminative learning.
To overcome the limitations inherent in traditional saliency techniques, we developed a bespoke saliency algorithm specifically tailored for the Forward Forward framework.
arXiv Detail & Related papers (2024-09-18T17:21:59Z) - Bias Mitigation in Fine-tuning Pre-trained Models for Enhanced Fairness
and Efficiency [26.86557244460215]
We introduce an efficient and robust fine-tuning framework specifically designed to mitigate biases in new tasks.
Our empirical analysis shows that the parameters in the pre-trained model that affect predictions for different demographic groups are different.
We employ a transfer learning strategy that neutralizes the importance of these influential weights, determined using Fisher information across demographic groups.
arXiv Detail & Related papers (2024-03-01T16:01:28Z) - Learning Objective-Specific Active Learning Strategies with Attentive
Neural Processes [72.75421975804132]
Learning Active Learning (LAL) proposes learning the active learning strategy itself, allowing it to adapt to the given setting.
We propose a novel LAL method for classification that exploits symmetry and independence properties of the active learning problem.
Our approach is based on learning from a myopic oracle, which gives our model the ability to adapt to non-standard objectives.
arXiv Detail & Related papers (2023-09-11T14:16:37Z) - Emergent Agentic Transformer from Chain of Hindsight Experience [96.56164427726203]
We show, for the first time, that a simple transformer-based model performs competitively with both temporal-difference and imitation-learning-based approaches.
arXiv Detail & Related papers (2023-05-26T00:43:02Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - Learning Neural Models for Natural Language Processing in the Face of
Distributional Shift [10.990447273771592]
The dominating NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications.
It builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time.
This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information.
It is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime.
arXiv Detail & Related papers (2021-09-03T14:29:20Z) - Self-supervised Augmentation Consistency for Adapting Semantic
Segmentation [56.91850268635183]
We propose an approach to domain adaptation for semantic segmentation that is both practical and highly accurate.
We employ standard data augmentation techniques (photometric noise, flipping, and scaling) and ensure consistency of the semantic predictions.
We achieve significant improvements of the state-of-the-art segmentation accuracy after adaptation, consistent both across different choices of the backbone architecture and adaptation scenarios.
arXiv Detail & Related papers (2021-04-30T21:32:40Z) - Exploring Complementary Strengths of Invariant and Equivariant
Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z) - Interpretable Multiple Treatment Revenue Uplift Modeling [4.9571232160914365]
Uplift models support a firm's decision-making by predicting the change of a customer's behavior due to a treatment.
The paper extends corresponding approaches by developing uplift models for multiple treatments and continuous outcomes.
arXiv Detail & Related papers (2021-01-09T11:29:00Z) - Adapting Neural Networks for Uplift Models [0.0]
Uplift is estimated using either i) conditional mean regression or ii) transformed outcome regression.
Most existing approaches are adaptations of classification and regression trees for the uplift case.
Here we propose a new method using neural networks.
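The two estimation strategies named above differ in what is modeled. A minimal sketch of option i), conditional mean regression (the "two-model" approach), on hypothetical synthetic data is shown below; the dataset and the logistic-regression base learner are assumptions for illustration, not the paper's neural-network method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
w = rng.integers(0, 2, n)               # treatment indicator
x = rng.normal(size=(n, 2))             # covariates
y = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] + 0.3 * w))))

# fit separate outcome models on treated and control samples,
# then estimate uplift as the difference of predicted probabilities
m_treat = LogisticRegression().fit(x[w == 1], y[w == 1])
m_ctrl = LogisticRegression().fit(x[w == 0], y[w == 0])
uplift = m_treat.predict_proba(x)[:, 1] - m_ctrl.predict_proba(x)[:, 1]
```

Option ii), transformed outcome regression, instead rewrites the target so a single model trained on it predicts the uplift directly.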
arXiv Detail & Related papers (2020-10-30T18:42:56Z) - Towards Accurate Knowledge Transfer via Target-awareness Representation
Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED)
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer while fine-tuning the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.