Striking a Balance in Fairness for Dynamic Systems Through Reinforcement
Learning
- URL: http://arxiv.org/abs/2401.06318v1
- Date: Fri, 12 Jan 2024 01:29:26 GMT
- Title: Striking a Balance in Fairness for Dynamic Systems Through Reinforcement
Learning
- Authors: Yaowei Hu, Jacob Lear, Lu Zhang
- Abstract summary: We study fairness in dynamic systems where sequential decisions are made.
We propose an algorithmic framework to integrate various fairness considerations with reinforcement learning.
Three case studies show that our method can strike a balance between traditional fairness notions, long-term fairness, and utility.
- Score: 6.814499629376316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While significant advancements have been made in the field of fair machine
learning, the majority of studies focus on scenarios where the decision model
operates on a static population. In this paper, we study fairness in dynamic
systems where sequential decisions are made. Each decision may shift the
underlying distribution of features or user behavior. We model the dynamic
system through a Markov Decision Process (MDP). By acknowledging that
traditional fairness notions and long-term fairness are distinct requirements
that may not necessarily align with one another, we propose an algorithmic
framework to integrate various fairness considerations with reinforcement
learning using both pre-processing and in-processing approaches. Three case
studies show that our method can strike a balance between traditional fairness
notions, long-term fairness, and utility.
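The in-processing idea described in the abstract can be sketched in miniature. The following toy example (entirely invented, not the paper's algorithm) folds a demographic-parity-style penalty into the reward of a bandit-style tabular learner deciding accept/reject for applicants from two groups; the penalty weight LAMBDA and the repayment probabilities are assumptions for illustration.

```python
import random

# Hypothetical sketch of an in-processing approach: the reward the agent
# optimizes is utility minus LAMBDA times the demographic-parity gap
# (difference in acceptance rates between the two groups so far).

random.seed(0)
LAMBDA = 2.0          # fairness penalty weight (assumed)
ACTIONS = [0, 1]      # 0 = reject, 1 = accept
q = {}                # value table keyed by (group, action)

accepts = {0: 0, 1: 0}
seen = {0: 0, 1: 0}

def parity_gap():
    """Absolute difference in acceptance rates between groups 0 and 1."""
    rates = [accepts[g] / seen[g] if seen[g] else 0.0 for g in (0, 1)]
    return abs(rates[0] - rates[1])

alpha, eps = 0.1, 0.2
for step in range(5000):
    group = random.randint(0, 1)
    # In this invented setup, group 0 applicants repay slightly more often.
    repay = random.random() < (0.7 if group == 0 else 0.6)
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q.get((group, a), 0.0))
    seen[group] += 1
    accepts[group] += action
    utility = (1.0 if repay else -1.0) * action
    reward = utility - LAMBDA * parity_gap()   # fairness-regularized reward
    key = (group, action)
    q[key] = q.get(key, 0.0) + alpha * (reward - q.get(key, 0.0))

print(round(parity_gap(), 3))
```

Raising LAMBDA trades expected utility for a smaller acceptance-rate gap, which is the kind of balance the abstract refers to.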
Related papers
- Compete and Compose: Learning Independent Mechanisms for Modular World Models [57.94106862271727]
We present COMET, a modular world model which leverages reusable, independent mechanisms across different environments.
COMET is trained on multiple environments with varying dynamics via a two-step process: competition and composition.
We show that COMET is able to adapt to new environments with varying numbers of objects with improved sample efficiency compared to more conventional finetuning approaches.
arXiv Detail & Related papers (2024-04-23T15:03:37Z) - Fairness meets Cross-Domain Learning: a new perspective on Models and
Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z) - Fair Off-Policy Learning from Observational Data [30.77874108094485]
We propose a novel framework for fair off-policy learning.
We first formalize different fairness notions for off-policy learning.
We then propose a neural network-based framework to learn optimal policies under different fairness notions.
arXiv Detail & Related papers (2023-03-15T10:47:48Z) - Fair Enough: Standardizing Evaluation and Model Selection for Fairness
Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z) - Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
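A linear program over distributions can be sketched as follows. This is a hypothetical toy instance (all allocation names and utility numbers are invented, and the fairness constraint is a simple stand-in): choose a probability distribution over three candidate allocations that maximizes total utility while keeping the two sides' expected utilities within eps of each other.

```python
import numpy as np
from scipy.optimize import linprog

total_utility = np.array([5.0, 3.0, 4.0])   # total utility of each allocation
side_a = np.array([4.0, 1.0, 2.0])          # side A's utility per allocation
side_b = np.array([1.0, 2.0, 2.0])          # side B's utility per allocation
eps = 0.5                                    # allowed expected-utility gap

# linprog minimizes, so negate the objective to maximize total utility.
res = linprog(
    c=-total_utility,
    A_ub=np.vstack([side_a - side_b, side_b - side_a]),  # |E[uA]-E[uB]| <= eps
    b_ub=np.array([eps, eps]),
    A_eq=np.ones((1, 3)),                                # probabilities sum to 1
    b_eq=np.array([1.0]),
    bounds=[(0, 1)] * 3,
    method="highs",
)
p = res.x
print(p.round(3), round(float(total_utility @ p), 3))
```

Because the decision variable is a distribution over allocations rather than a single allocation, the fairness constraint stays linear and the problem remains an ordinary LP.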
arXiv Detail & Related papers (2023-02-08T00:30:32Z) - Can Ensembling Pre-processing Algorithms Lead to Better Machine Learning
Fairness? [8.679212948810916]
Several fairness pre-processing algorithms are available to alleviate implicit biases during model training.
These algorithms employ different concepts of fairness, often leading to conflicting strategies with consequential trade-offs between fairness and accuracy.
We evaluate three popular fairness pre-processing algorithms and investigate the potential for combining all algorithms into a more robust pre-processing ensemble.
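One common pre-processing ingredient for such an ensemble can be sketched as follows. This is a minimal illustration in the spirit of reweighing (a standard pre-processing technique): each (group, label) cell gets a weight so that group membership and the label look statistically independent in the reweighted data. The dataset below is invented.

```python
from collections import Counter

data = [  # (group, label) pairs -- invented toy dataset
    (0, 1), (0, 1), (0, 1), (0, 0),
    (1, 1), (1, 0), (1, 0), (1, 0),
]
n = len(data)
group_counts = Counter(g for g, _ in data)
label_counts = Counter(y for _, y in data)
cell_counts = Counter(data)

# weight(g, y) = P(g) * P(y) / P(g, y): expected frequency under
# independence divided by the observed frequency of the cell.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n)
            / (cell_counts[(g, y)] / n)
    for (g, y) in cell_counts
}
for cell, w in sorted(weights.items()):
    print(cell, round(w, 3))
```

After reweighting, the over-represented cells (e.g. group 0 with a positive label here) are down-weighted and the under-represented ones up-weighted, which is the kind of implicit-bias correction the pre-processing algorithms in this paper target.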
arXiv Detail & Related papers (2022-12-05T21:54:29Z) - Survey on Fairness Notions and Related Tensions [4.257210316104905]
Automated decision systems are increasingly used to take consequential decisions in problems such as job hiring and loan granting.
However, ostensibly objective machine learning (ML) algorithms are prone to bias, which can result in unfair decisions.
This paper surveys the commonly used fairness notions and discusses the tensions among them with privacy and accuracy.
arXiv Detail & Related papers (2022-09-16T13:36:05Z) - Fair Inference for Discrete Latent Variable Models [12.558187319452657]
Machine learning models, trained on data without due care, often exhibit unfair and discriminatory behavior against certain populations.
We develop a fair variational inference technique for the discrete latent variables, which is accomplished by including a fairness penalty on the variational distribution.
To demonstrate the generality of our approach and its potential for real-world impact, we then develop a special-purpose graphical model for criminal justice risk assessments.
arXiv Detail & Related papers (2022-09-15T04:54:21Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z) - FairALM: Augmented Lagrangian Method for Training Fair Models with
Little Regret [42.66567001275493]
It is now accepted that, because of biases in the datasets presented to models, fairness-oblivious training leads to unfair models.
Here, we study mechanisms that impose fairness concurrently while training the model.
arXiv Detail & Related papers (2020-04-03T03:18:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.