CausalMTA: Eliminating the User Confounding Bias for Causal Multi-touch
Attribution
- URL: http://arxiv.org/abs/2201.00689v1
- Date: Tue, 21 Dec 2021 01:59:16 GMT
- Title: CausalMTA: Eliminating the User Confounding Bias for Causal Multi-touch
Attribution
- Authors: Di Yao, Chang Gong, Lei Zhang, Sheng Chen, Jingping Bi
- Abstract summary: We propose CausalMTA to eliminate the influence of user preferences.
CausalMTA achieves better prediction performance than the state-of-the-art method.
It also generates meaningful attribution credits across different advertising channels.
- Score: 16.854552780506822
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-touch attribution (MTA), aiming to estimate the contribution of each
advertisement touchpoint in conversion journeys, is essential for budget
allocation and automatic advertising. Existing methods first train a model
to predict the conversion probability of the advertisement journeys with
historical data and calculate the attribution of each touchpoint using
counterfactual predictions. An assumption of these works is that the conversion
prediction model is unbiased, i.e., it can give accurate predictions on any
randomly assigned journey, including both the factual and counterfactual ones.
Nevertheless, this assumption does not always hold as the exposed
advertisements are recommended according to user preferences. This confounding
bias of users would lead to an out-of-distribution (OOD) problem in the
counterfactual prediction and cause concept drift in attribution. In this
paper, we define the causal MTA task and propose CausalMTA to eliminate the
influence of user preferences. It systematically eliminates the confounding bias
from both static and dynamic preferences to learn the conversion prediction
model using historical data. We also provide a theoretical analysis to prove that
CausalMTA can learn an unbiased prediction model with sufficient data.
Extensive experiments on both public datasets and the impression data in an
e-commerce company show that CausalMTA not only achieves better prediction
performance than the state-of-the-art method but also generates meaningful
attribution credits across different advertising channels.
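To ground the attribution procedure described in the abstract, here is a minimal sketch of removal-based counterfactual attribution: a pretrained conversion model scores a journey, and each touchpoint is credited with the drop in predicted conversion probability when it is removed. The `ConversionModel` interface and the normalization convention are illustrative assumptions, not CausalMTA's actual method; the docstring notes why such counterfactual queries are out-of-distribution for a biased model.

```python
from typing import Dict, List, Protocol


class ConversionModel(Protocol):
    """Hypothetical interface: maps an ad journey (a sequence of
    channel touchpoints) to a predicted conversion probability."""

    def predict_proba(self, journey: List[str]) -> float: ...


def removal_attribution(model: ConversionModel,
                        journey: List[str]) -> Dict[int, float]:
    """Credit each touchpoint by the drop in predicted conversion
    probability when it is removed (a counterfactual journey).

    If the model was trained on journeys shaped by user preferences,
    these counterfactual inputs are out-of-distribution, which is
    exactly the confounding problem the abstract describes.
    """
    base = model.predict_proba(journey)
    raw = {i: base - model.predict_proba(journey[:i] + journey[i + 1:])
           for i in range(len(journey))}
    total = sum(raw.values())
    if total <= 0:  # degenerate case: no positive marginal effects
        return {i: 0.0 for i in raw}
    # Normalize so credits sum to the base conversion probability
    # (one common convention; Shapley-style averaging is another).
    return {i: base * v / total for i, v in raw.items()}
```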
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z) - Debiased Recommendation with Noisy Feedback [41.38490962524047]
We study intersectional threats to the unbiased learning of the prediction model from data missing not at random (MNAR) and outcome measurement errors (OME) in the collected data.
First, we design OME-EIB, OME-IPS, and OME-DR estimators, which largely extend the existing estimators to combat OME in real-world recommendation scenarios.
arXiv Detail & Related papers (2024-06-24T23:42:18Z) - Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z) - DCRMTA: Unbiased Causal Representation for Multi-touch Attribution [0.2417342411475111]
Multi-touch attribution (MTA) currently plays a pivotal role in achieving a fair estimation of the contributions of each advertisement towards conversion behavior.
Previous works attempted to eliminate the bias caused by user preferences to satisfy the unbiasedness assumption of the conversion model.
This paper redefines the causal effect of user features on conversions and proposes a novel end-to-end approach, Deep Causal Representation for MTA (DCRMTA).
arXiv Detail & Related papers (2024-01-16T23:16:18Z) - Debiasing Multimodal Models via Causal Information Minimization [65.23982806840182]
We study bias arising from confounders in a causal graph for multimodal data.
Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data.
We use these features as confounder representations and use them via methods motivated by causal theory to remove bias from models.
arXiv Detail & Related papers (2023-11-28T16:46:14Z) - Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Simultaneous Improvement of ML Model Fairness and Performance by
Identifying Bias in Data [1.76179873429447]
We propose a data preprocessing technique that can detect instances exhibiting a specific kind of bias that should be removed from the dataset before training.
In particular, we claim that in problem settings where instances exist with similar features but different labels, caused by variation in protected attributes, an inherent bias gets induced in the dataset.
arXiv Detail & Related papers (2022-10-24T13:04:07Z) - Prisoners of Their Own Devices: How Models Induce Data Bias in
Performative Prediction [4.874780144224057]
A biased model can make decisions that disproportionately harm certain groups in society.
Much work has been devoted to measuring unfairness in static ML environments, but not in dynamic, performative prediction ones.
We propose a taxonomy to characterize bias in the data, and study cases where it is shaped by model behaviour.
arXiv Detail & Related papers (2022-06-27T10:56:04Z) - Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this construction offsets the influence of user/item propensity on learning; a minimal sketch of such a loss follows this list.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
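As a concrete illustration of the CPR item above, here is a minimal sketch of a cross pairwise loss over a score matrix. It assumes that observed scores decompose additively into relevance plus user and item exposure propensities, so the cross difference cancels both propensity terms; the logistic surrogate and the batch indexing are illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np


def cpr_loss(scores: np.ndarray,
             u1: np.ndarray, i1: np.ndarray,
             u2: np.ndarray, i2: np.ndarray) -> float:
    """Cross pairwise ranking loss (sketch).

    (u1, i1) and (u2, i2) index batches of *observed* interactions,
    with the swapped pairs (u1, i2) and (u2, i1) unobserved. If
    score(u, i) = relevance(u, i) + prop(u) + prop(i), the cross
    difference below cancels the user and item propensities,
    leaving only relevance differences to be ranked.
    """
    diff = (scores[u1, i1] + scores[u2, i2]
            - scores[u1, i2] - scores[u2, i1])
    # Logistic (softplus) surrogate pushing the cross difference
    # to be positive, analogous to BPR's pairwise objective.
    return float(np.mean(np.logaddexp(0.0, -diff)))
```

As a quick sanity check on the cancellation: a scores matrix built purely from propensities, scores[u, i] = prop(u) + prop(i) with zero relevance, gives diff == 0 for every cross pair and a constant loss of log 2, i.e., propensities alone carry no ranking signal.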