Fairness-Aware Online Personalization
- URL: http://arxiv.org/abs/2007.15270v2
- Date: Sun, 6 Sep 2020 10:03:27 GMT
- Title: Fairness-Aware Online Personalization
- Authors: G Roshan Lal and Sahin Cem Geyik and Krishnaram Kenthapadi
- Abstract summary: We present a study of fairness in online personalization settings involving the ranking of individuals.
We first demonstrate that online personalization can cause the model to learn to act in an unfair manner if the user is biased in his/her responses.
We then formulate the problem of learning personalized models under fairness constraints and present a regularization-based approach for mitigating biases in machine learning.
- Score: 16.320648868892526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decision making in crucial applications such as lending, hiring, and college
admissions has witnessed increasing use of algorithmic models and techniques as
a result of a confluence of factors such as ubiquitous connectivity, ability to
collect, aggregate, and process large amounts of fine-grained data using cloud
computing, and ease of access to applying sophisticated machine learning
models. Quite often, such applications are powered by search and recommendation
systems, which in turn make use of personalized ranking algorithms. At the same
time, there is increasing awareness about the ethical and legal challenges
posed by the use of such data-driven systems. Researchers and practitioners
from different disciplines have recently highlighted the potential for such
systems to discriminate against certain population groups, due to biases in the
datasets utilized for learning their underlying recommendation models. We
present a study of fairness in online personalization settings involving the
ranking of individuals. Starting from a fair warm-start machine-learned model,
we first demonstrate that online personalization can cause the model to learn
to act in an unfair manner if the user is biased in his/her responses. For this
purpose, we construct a stylized model for generating training data with
potentially biased features as well as potentially biased labels and quantify
the extent of bias that is learned by the model when the user responds in a
biased manner as in many real-world scenarios. We then formulate the problem of
learning personalized models under fairness constraints and present a
regularization-based approach for mitigating biases in machine learning. We
demonstrate the efficacy of our approach through extensive simulations with
different parameter settings. Code:
https://github.com/groshanlal/Fairness-Aware-Online-Personalization
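The abstract does not spell out the mitigation objective, so the following is only a minimal sketch of one plausible reading: an online logistic scorer whose loss is augmented with a penalty on the gap in mean predicted scores between two groups. The function names, the squared-gap penalty, and the simulated biased feedback are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Minimal sketch (not the paper's implementation): a logistic scorer trained
# with an added fairness regularizer that penalizes the squared gap between
# the mean predicted scores of two groups. All names are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fairness_gap(scores, group):
    # Difference in mean predicted score between group 1 and group 0.
    return scores[group == 1].mean() - scores[group == 0].mean()

def online_update(w, X, y, group, lr=0.1, lam=1.0):
    """One gradient step on average logistic loss + lam * (group score gap)^2."""
    scores = sigmoid(X @ w)
    # Gradient of the average logistic loss.
    grad_loss = X.T @ (scores - y) / len(y)
    # Gradient of the squared-gap penalty (scores depend on w through the sigmoid).
    gap = fairness_gap(scores, group)
    d_scores = scores * (1 - scores)            # sigmoid derivative
    d_gap = (X[group == 1] * d_scores[group == 1, None]).mean(axis=0) \
          - (X[group == 0] * d_scores[group == 0, None]).mean(axis=0)
    grad_pen = 2.0 * lam * gap * d_gap
    return w - lr * (grad_loss + grad_pen)

# Toy usage: simulate a biased user who never gives positive feedback to group 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
group = rng.integers(0, 2, size=200)
y = ((X[:, 0] > 0) & (group == 0)).astype(float)   # biased feedback
w = np.zeros(5)
for _ in range(500):
    w = online_update(w, X, y, group)
print("score gap after training:", fairness_gap(sigmoid(X @ w), group))
```

Increasing the hypothetical weight `lam` trades away some accuracy on the biased labels in exchange for a smaller score gap between groups, which is the qualitative trade-off the abstract describes.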
Related papers
- Understanding trade-offs in classifier bias with quality-diversity optimization: an application to talent management [2.334978724544296]
A major struggle for the development of fair AI models lies in the bias implicit in the data available to train such models.
We propose a method for visualizing the biases inherent in a dataset and understanding the potential trade-offs between fairness and accuracy.
arXiv Detail & Related papers (2024-11-25T22:14:02Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
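The reweighing strategy mentioned in the entry above is not detailed there; below is a minimal sketch of a generic reweighing scheme (sample weights chosen so that group membership and label look statistically independent to the learner), which may differ from that paper's exact procedure.

```python
# Minimal sketch of generic reweighing (assumed, not necessarily the scheme
# used in the paper above): up-weight under-represented (group, label) cells.
import numpy as np

def reweighing_weights(group, label):
    group, label = np.asarray(group), np.asarray(label)
    weights = np.ones(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            if cell.any():
                # Expected frequency under independence / observed frequency.
                expected = (group == g).mean() * (label == y).mean()
                weights[cell] = expected / cell.mean()
    return weights

# Example: positive labels are rarer in group 1, so those samples get up-weighted.
g = np.array([0, 0, 0, 1, 1, 1])
y = np.array([1, 1, 0, 1, 0, 0])
print(reweighing_weights(g, y))
```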
- Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
arXiv Detail & Related papers (2023-01-31T20:09:33Z)
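The entry above describes projecting biased directions out of text embeddings; the sketch below uses a plain orthogonal projection as a simplified stand-in for that paper's calibrated projection matrix, and the example bias direction is hypothetical.

```python
# Minimal, simplified sketch: remove assumed "bias directions" from text
# embeddings with an orthogonal projection (the paper calibrates this matrix).
import numpy as np

def projection_matrix(bias_directions):
    """bias_directions: (k, d) array, one bias direction per row."""
    V = np.asarray(bias_directions, dtype=float).T           # (d, k)
    return np.eye(V.shape[0]) - V @ np.linalg.pinv(V)        # project out span(V)

# Illustrative bias direction, e.g. the difference between embeddings of two
# spurious prompts such as "a photo of a man" vs. "a photo of a woman" (hypothetical).
d = 8
rng = np.random.default_rng(0)
bias_dir = rng.normal(size=(1, d))
P = projection_matrix(bias_dir)

text_embedding = rng.normal(size=d)
debiased = P @ text_embedding
print("component along bias direction:", debiased @ bias_dir.ravel())  # ~0
```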
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
Instead, suppose we are given access to a set of expert models and their predictions, alongside some limited information about the dataset used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
- Improving Fairness and Privacy in Selection Problems [21.293367386282902]
We study the possibility of using a differentially private exponential mechanism as a post-processing step to improve both fairness and privacy of supervised learning models.
We show that the exponential mechanism can improve both privacy and fairness, with a slight decrease in accuracy compared to the model without post-processing.
arXiv Detail & Related papers (2020-12-07T15:55:28Z)
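The entry above applies the exponential mechanism as a differentially private post-processing step; the sketch below is the generic textbook form of that mechanism, not the paper's specific construction.

```python
# Minimal sketch of the exponential mechanism (textbook form): select a
# candidate with probability proportional to exp(eps * score / (2 * sensitivity)).
import numpy as np

def exponential_mechanism(scores, eps, sensitivity=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    logits = eps * scores / (2.0 * sensitivity)
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

# Usage: noisy selection of one candidate from model scores. A smaller eps gives
# more randomness (stronger privacy); a larger eps concentrates on the top score.
model_scores = [0.91, 0.87, 0.55, 0.42]
print(exponential_mechanism(model_scores, eps=2.0))
```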
- Learning from others' mistakes: Avoiding dataset biases without modeling them [111.17078939377313]
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended task.
Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available.
We show a method for training models that learn to ignore these problematic correlations.
arXiv Detail & Related papers (2020-12-02T16:10:54Z)
- A survey of bias in Machine Learning through the prism of Statistical Parity for the Adult Data Set [5.277804553312449]
We show the importance of understanding how bias can be introduced into automatic decisions.
We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting.
We then propose to quantify the presence of bias by using the standard Disparate Impact index on the real and well-known Adult income data set.
arXiv Detail & Related papers (2020-03-31T14:48:36Z)
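For reference, the Disparate Impact index mentioned in the entry above has a standard definition; the sketch below computes it for a toy prediction vector (the numbers are made up, not taken from the Adult income data set).

```python
# Minimal sketch of the standard Disparate Impact index:
# DI = P(positive outcome | unprivileged group) / P(positive outcome | privileged group).
# A value below 0.8 is the usual "four-fifths rule" red flag.
import numpy as np

def disparate_impact(y_pred, protected, unprivileged=0, privileged=1):
    y_pred, protected = np.asarray(y_pred), np.asarray(protected)
    p_unpriv = y_pred[protected == unprivileged].mean()
    p_priv = y_pred[protected == privileged].mean()
    return p_unpriv / p_priv

# Toy example: predicted positive decisions for two groups encoded as 0 / 1.
y_hat = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(disparate_impact(y_hat, group))   # 0.25 / 0.75 = 0.333...
```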
- Fairness-Aware Learning with Prejudice Free Representations [2.398608007786179]
We propose a novel algorithm that can effectively identify and treat latent discriminating features.
The approach helps to collect discrimination-free features that would improve the model performance.
arXiv Detail & Related papers (2020-02-26T10:06:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.