Adapting Neural Networks for Uplift Models
- URL: http://arxiv.org/abs/2011.00041v1
- Date: Fri, 30 Oct 2020 18:42:56 GMT
- Title: Adapting Neural Networks for Uplift Models
- Authors: Belbahri Mouloud, Gandouet Olivier, Kazma Ghaith
- Abstract summary: Uplift is estimated using either i) conditional mean regression or ii) transformed outcome regression.
Most existing approaches are adaptations of classification and regression trees for the uplift case.
Here we propose a new method using neural networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uplift is a particular case of individual treatment effect modeling. Such
models deal with cause-and-effect inference for a specific factor, such as a
marketing intervention. In practice, these models are built on data from
customers who purchased products or services, with the aim of improving product marketing. Uplift is
estimated using either i) conditional mean regression or ii) transformed
outcome regression. Most existing approaches are adaptations of classification
and regression trees for the uplift case. However, in practice, these
conventional approaches are prone to overfitting. Here we propose a new method
using neural networks. This representation makes it possible to jointly optimize the
difference-in-conditional-means and transformed-outcome losses. As a
consequence, the model not only estimates the uplift, but also ensures
consistency in predicting the outcome. We focus on fully randomized
experiments, which is the setting of our data. We show our proposed method
improves the state-of-the-art on synthetic and real data.
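The transformed-outcome regression mentioned in the abstract admits a compact sketch. The snippet below is a minimal illustration, not the paper's implementation; all names and the simulated uplift of 0.2 are assumptions. Under full randomization with treatment probability p, the transformed outcome z = y(t - p)/(p(1 - p)) satisfies E[z | x] = uplift(x), so its sample mean recovers the average uplift.

```python
import random

# Hedged sketch of the classical transformed-outcome estimator used in
# uplift modeling (the simulated rates below are illustrative assumptions,
# not values from the paper). Under a fully randomized experiment with
# treatment probability p, the transformed outcome
#     z = y * (t - p) / (p * (1 - p))
# satisfies E[z | x] = uplift(x), so regressing z on x estimates uplift.

random.seed(0)
p = 0.5  # treatment assignment probability in the randomized experiment

def simulate(n=50_000):
    """Simulate a randomized experiment with a known constant uplift of 0.2."""
    rows = []
    for _ in range(n):
        t = 1 if random.random() < p else 0  # randomized treatment indicator
        rate = 0.3 + 0.2 * t                 # control rate 0.3, true uplift 0.2
        y = 1 if random.random() < rate else 0
        rows.append((t, y))
    return rows

def transformed_outcome(t, y):
    return y * (t - p) / (p * (1 - p))

rows = simulate()
z_mean = sum(transformed_outcome(t, y) for t, y in rows) / len(rows)
print(f"mean transformed outcome: {z_mean:.3f}")  # should be close to 0.2
```

In the paper's method, a neural network is trained so that the difference of its conditional-mean heads also fits this transformed outcome; the sketch only verifies the unbiasedness property that motivates such a loss.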
Related papers
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence-functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z) - Learning Augmentation Policies from A Model Zoo for Time Series Forecasting [58.66211334969299]
We introduce AutoTSAug, a learnable data augmentation method based on reinforcement learning.
By augmenting the marginal samples with a learnable policy, AutoTSAug substantially improves forecasting performance.
arXiv Detail & Related papers (2024-09-10T07:34:19Z) - Self-Consuming Generative Models with Curated Data Provably Optimize Human Preferences [20.629333587044012]
We study the impact of data curation on iterated retraining of generative models.
We prove that, if the data is curated according to a reward model, the expected reward of the iterative retraining procedure is maximized.
arXiv Detail & Related papers (2024-06-12T21:28:28Z) - Generalized Regression with Conditional GANs [2.4171019220503402]
We propose to learn a prediction function whose outputs, when paired with the corresponding inputs, are indistinguishable from feature-label pairs in the training dataset.
We show that this approach to regression makes fewer assumptions on the distribution of the data we are fitting to and, therefore, has better representation capabilities.
arXiv Detail & Related papers (2024-04-21T01:27:47Z) - ZeroShape: Regression-based Zero-shot Shape Reconstruction [56.652766763775226]
We study the problem of single-image zero-shot 3D shape reconstruction.
Recent works learn zero-shot shape reconstruction through generative modeling of 3D assets.
We show that ZeroShape achieves superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2023-12-21T01:56:34Z) - Uplift Modeling based on Graph Neural Network Combined with Causal Knowledge [9.005051998738134]
We propose a framework based on graph neural networks that combines causal knowledge with an estimate of uplift value.
Our findings demonstrate that this method works effectively for predicting uplift values, with small errors in typical simulated data.
arXiv Detail & Related papers (2023-11-14T07:21:00Z) - A New Transformation Approach for Uplift Modeling with Binary Outcome [7.828300476533517]
Uplift modeling is a machine learning technique that predicts the gain from performing some action with respect to not taking it.
In this paper, we design a novel transformed outcome for the case of the binary target variable and unlock the full value of the samples with zero outcome.
Our new approach has already been applied to precision marketing at a nationwide financial holdings group in China.
arXiv Detail & Related papers (2023-10-09T09:17:52Z) - Mismatched No More: Joint Model-Policy Optimization for Model-Based RL [172.37829823752364]
We propose a single objective for jointly training the model and the policy, such that updates to either component increase a lower bound on expected return.
Our objective is a global lower bound on expected return, and this bound becomes tight under certain assumptions.
The resulting algorithm (MnM) is conceptually similar to a GAN.
arXiv Detail & Related papers (2021-10-06T13:43:27Z) - A Twin Neural Model for Uplift [59.38563723706796]
Uplift is a particular case of conditional treatment effect modeling.
We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk.
We show our proposed method is competitive with the state-of-the-art in simulation settings and on real data from large-scale randomized experiments.
arXiv Detail & Related papers (2021-05-11T16:02:39Z) - Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates [68.09049111171862]
This work focuses on quantifying, reducing and analyzing regression errors in NLP model updates.
We formulate the regression-free model updates into a constrained optimization problem.
We empirically analyze how model ensemble reduces regression.
arXiv Detail & Related papers (2021-05-07T03:33:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.