Fairness Evaluation for Uplift Modeling in the Absence of Ground Truth
- URL: http://arxiv.org/abs/2403.12069v1
- Date: Mon, 12 Feb 2024 06:13:24 GMT
- Title: Fairness Evaluation for Uplift Modeling in the Absence of Ground Truth
- Authors: Serdar Kadioglu, Filip Michalsky
- Abstract summary: We propose a framework that overcomes the missing ground truth problem by generating surrogates.
We show how to apply the approach in a comprehensive study from a real-world marketing campaign.
- Score: 2.946562343070891
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The accelerating adoption of AI-based automated decision-making systems poses a challenge for evaluating the fairness of algorithmic decisions, especially in the absence of ground truth. When designing interventions, uplift modeling is used extensively to identify the candidates most likely to benefit from treatment. However, these models remain particularly difficult to evaluate for fairness because ground truth on the outcome measure is missing: a candidate cannot be in both the treatment and control groups simultaneously. In this article, we propose a framework that overcomes the missing ground truth problem by generating surrogates to serve as proxies for the counterfactual labels of uplift modeling campaigns. We then leverage this surrogate ground truth to conduct a more comprehensive binary fairness evaluation. We show how to apply the approach in a comprehensive study of a real-world marketing campaign for promotional offers and demonstrate how it enhances fairness evaluation.
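To make the idea concrete, here is a minimal sketch of how surrogate counterfactual labels could be constructed and then used for a binary fairness check. The paper's exact procedure is not reproduced here; the T-learner-style construction, the column names, and the demographic-parity check below are illustrative assumptions.
```python
# Minimal sketch of the surrogate-ground-truth idea, not the paper's exact method.
# Assumed (hypothetical) columns: 'treated' (0/1), 'outcome' (0/1), 'group'
# (sensitive attribute), plus feature columns.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def surrogate_fairness_gap(df: pd.DataFrame) -> float:
    features = [c for c in df.columns if c not in ("treated", "outcome", "group")]

    # Fit one response model per arm, then predict the *unobserved* arm for
    # every candidate to obtain surrogate counterfactual labels.
    m_treat = GradientBoostingClassifier().fit(
        df.loc[df.treated == 1, features], df.loc[df.treated == 1, "outcome"])
    m_ctrl = GradientBoostingClassifier().fit(
        df.loc[df.treated == 0, features], df.loc[df.treated == 0, "outcome"])

    y1 = np.where(df.treated == 1, df.outcome,
                  m_treat.predict(df[features]))   # outcome under treatment
    y0 = np.where(df.treated == 0, df.outcome,
                  m_ctrl.predict(df[features]))    # outcome under control
    uplift_label = (y1 > y0).astype(int)           # surrogate "benefits from treatment"

    # With surrogate labels in hand, any binary fairness metric applies; here,
    # the demographic-parity gap of the surrogate labels across groups.
    rates = pd.Series(uplift_label).groupby(df["group"].values).mean()
    return float(rates.max() - rates.min())
```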
Related papers
- FairLENS: Assessing Fairness in Law Enforcement Speech Recognition [37.75768315119143]
We propose a novel and adaptable evaluation method to examine the fairness disparity between different models.
We conducted fairness assessments on 1 open-source and 11 commercially available state-of-the-art ASR models.
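FairLENS's exact metric is not given in this summary; a common proxy for such a disparity is the spread of per-group word error rates, sketched here with the jiwer library (the function and data layout are hypothetical).
```python
# Hypothetical illustration: ASR fairness disparity as the spread of
# per-group word error rates. Not FairLENS's exact metric.
from collections import defaultdict
from jiwer import wer  # pip install jiwer

def wer_disparity(samples):
    """samples: iterable of (group, reference_text, asr_hypothesis)."""
    refs, hyps = defaultdict(list), defaultdict(list)
    for group, ref, hyp in samples:
        refs[group].append(ref)
        hyps[group].append(hyp)
    per_group = {g: wer(refs[g], hyps[g]) for g in refs}  # WER per group
    return max(per_group.values()) - min(per_group.values()), per_group
```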
arXiv Detail & Related papers (2024-05-21T19:23:40Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
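As a rough illustration of optimizing both criteria jointly (not DualFair's actual contrastive objective, which is more involved), one can penalize group-level representation differences and individual-level counterfactual differences alongside the task loss; `model` and `flip_sensitive` below are hypothetical.
```python
# Schematic joint objective: task loss + group-fairness penalty +
# counterfactual-fairness penalty on learned representations.
import torch
import torch.nn.functional as F

def dualfair_style_loss(model, x, y, s, lam_g=1.0, lam_c=1.0):
    logits, z = model(x)                  # predictions and representations
    task = F.cross_entropy(logits, y)

    # Group fairness: match mean representations across sensitive groups.
    group = (z[s == 0].mean(0) - z[s == 1].mean(0)).pow(2).sum()

    # Counterfactual fairness: a sample and its sensitive-attribute-flipped
    # "twin" should map to nearby representations (contrastive-style pull).
    _, z_cf = model(flip_sensitive(x))    # hypothetical counterfactual twin
    counterfactual = (1 - F.cosine_similarity(z, z_cf)).mean()

    return task + lam_g * group + lam_c * counterfactual
```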
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
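Reading the summary literally, FATE could be computed as a normalized fairness improvement divided by the normalized accuracy drop, both relative to a baseline model; the paper's exact definition (including any weighting) may differ, so treat this as illustrative.
```python
# One literal reading of FATE as summarized above; consult the paper for the
# authoritative definition. Assumes lower fairness-criterion values are better.
def fate(fair_base, fair_model, acc_base, acc_model):
    fairness_gain = (fair_base - fair_model) / fair_base  # normalized improvement
    accuracy_drop = (acc_base - acc_model) / acc_base     # normalized cost
    return fairness_gain / accuracy_drop if accuracy_drop > 0 else float("inf")
```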
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
- In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
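One concrete instance of this interplay: ranking candidate CATE estimators against an inverse-propensity-weighted pseudo-outcome, which is itself only a noisy stand-in for the unobservable true effect. The sketch below is illustrative, not the paper's protocol, and all names are hypothetical.
```python
# Model selection for treatment effect estimators via an IPW pseudo-outcome,
# whose expectation equals the true CATE but whose variance can be high.
import numpy as np

def ipw_pseudo_outcome(y, t, propensity):
    # E[y_star | x] = CATE(x); unbiased but noisy.
    return y * (t / propensity - (1 - t) / (1 - propensity))

def select_estimator(estimators, X_val, y_val, t_val, e_val):
    y_star = ipw_pseudo_outcome(y_val, t_val, e_val)
    scores = {name: np.mean((est.predict(X_val) - y_star) ** 2)
              for name, est in estimators.items()}
    return min(scores, key=scores.get), scores
```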
arXiv Detail & Related papers (2023-02-06T16:55:37Z)
- Exploring validation metrics for offline model-based optimisation with diffusion models [50.404829846182764]
In model-based optimisation (MBO), we are interested in using machine learning to design candidates that maximise some measure of reward with respect to a black-box function called the (ground-truth) oracle.
While an approximation to the ground-truth oracle can be trained and used in its place during model validation to measure the mean reward over generated candidates, the evaluation is approximate and vulnerable to adversarial examples.
This is encapsulated under our proposed evaluation framework which is also designed to measure extrapolation.
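A minimal sketch of the validation loop this describes, assuming a trained `approx_oracle` regressor; the summary's caveat applies, since adversarially degenerate candidates can inflate such scores.
```python
# Score generated candidates with an *approximate* oracle standing in for the
# true black-box function; the resulting mean reward is only an estimate.
import numpy as np

def mean_reward(candidates: np.ndarray, approx_oracle) -> float:
    return float(np.mean(approx_oracle.predict(candidates)))
```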
arXiv Detail & Related papers (2022-11-19T16:57:37Z)
- Optimising Equal Opportunity Fairness in Model Training [60.0947291284978]
Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias.
We propose two novel training objectives that directly optimise for the widely used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance over two classification tasks.
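For reference, equal opportunity asks that true-positive rates match across protected groups; the helper below computes the corresponding gap (an evaluation sketch, not the paper's differentiable training objective).
```python
# Equal opportunity gap: the spread of group-conditional true-positive rates.
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)   # positives in group g
        tprs.append(y_pred[mask].mean())      # group-conditional TPR
    return max(tprs) - min(tprs)
```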
arXiv Detail & Related papers (2022-05-05T01:57:58Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
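A generic adversarial-debiasing training step in the spirit of this setup; the module names and exact losses are assumptions, not the paper's architecture.
```python
# Alternating update: the discriminator learns to detect the sensitive
# attribute from features, and the main model is penalised when it succeeds.
import torch
import torch.nn.functional as F

def adversarial_step(encoder, classifier, discriminator, opt_main, opt_disc,
                     x, y, s, lam=1.0):
    # 1) Train the discriminator to predict the sensitive attribute s.
    with torch.no_grad():
        z = encoder(x)
    opt_disc.zero_grad()
    disc_loss = F.cross_entropy(discriminator(z), s)
    disc_loss.backward()
    opt_disc.step()

    # 2) Train encoder + classifier for the task while *fooling* the adversary.
    opt_main.zero_grad()
    z = encoder(x)
    task_loss = F.cross_entropy(classifier(z), y)
    fairness_loss = -F.cross_entropy(discriminator(z), s)  # maximise adversary error
    (task_loss + lam * fairness_loss).backward()
    opt_main.step()
    return task_loss.item(), disc_loss.item()
```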
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Beyond traditional assumptions in fair machine learning [5.029280887073969]
This thesis scrutinizes common assumptions underlying traditional machine learning approaches to fairness in consequential decision making.
We show that group fairness criteria purely based on statistical properties of observed data are fundamentally limited.
We relax the assumption that sensitive data is readily available in practice.
arXiv Detail & Related papers (2021-01-29T09:02:15Z)
- Prune Responsibly [0.913755431537592]
Irrespective of the specific definition of fairness in a machine learning application, pruning the underlying model affects it.
We investigate and document the emergence and exacerbation of undesirable per-class performance imbalances across tasks and architectures, covering almost one million categories in over 100K image classification models that undergo pruning.
We demonstrate the need for transparent reporting, inclusive of bias, fairness, and inclusion metrics, in real-life engineering decision-making around neural network pruning.
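A simple way to surface such imbalances, assuming predictions from the dense and pruned models on the same validation set (all names hypothetical):
```python
# Compare per-class accuracy before and after pruning to surface classes
# that quietly degrade even when aggregate accuracy looks stable.
import numpy as np

def per_class_accuracy(y_true, y_pred, n_classes):
    return np.array([(y_pred[y_true == c] == c).mean() for c in range(n_classes)])

def pruning_imbalance(y_true, pred_dense, pred_pruned, n_classes):
    delta = per_class_accuracy(y_true, pred_pruned, n_classes) \
          - per_class_accuracy(y_true, pred_dense, n_classes)
    return delta.argsort()[:10], delta   # ten hardest-hit classes, full deltas
```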
arXiv Detail & Related papers (2020-09-10T04:43:11Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
- Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning [8.436127109155008]
Individuals, as well as organisations, notice, test, and criticize unfair results to hold model designers and deployers accountable.
We offer a framework that assists these groups in mitigating unfair representations stemming from the training datasets.
Our framework relies on two inter-operating adversaries to improve fairness.
arXiv Detail & Related papers (2020-05-14T10:10:19Z)