What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning
- URL: http://arxiv.org/abs/2404.10942v2
- Date: Sun, 28 Apr 2024 08:49:45 GMT
- Title: What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning
- Authors: Zhihong Deng, Jing Jiang, Guodong Long, Chengqi Zhang
- Abstract summary: In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
- Score: 52.51430732904994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In sequential decision-making problems involving sensitive attributes like race and gender, reinforcement learning (RL) agents must carefully consider long-term fairness while maximizing returns. Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear. In this paper, we address this gap in the literature by investigating the sources of inequality through a causal lens. We first analyse the causal relationships governing the data generation process and decompose the effect of sensitive attributes on long-term well-being into distinct components. We then introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics, distinguishing it from those induced by decision-making or inherited from the past. This notion requires evaluating the expected changes in the next state and the reward induced by changing the value of the sensitive attribute while holding everything else constant. To quantitatively evaluate this counterfactual concept, we derive identification formulas that allow us to obtain reliable estimations from data. Extensive experiments demonstrate the effectiveness of the proposed techniques in explaining, detecting, and reducing inequality in reinforcement learning. We publicly release code at https://github.com/familyld/InsightFair.
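For readers who want the quantity in symbols, here is a schematic rendering of the counterfactual contrast the abstract describes; the notation (Z for the sensitive attribute, Delta_dyn for the gap) is assumed for illustration rather than taken from the paper:

```latex
% Dynamics-fairness contrast (illustrative notation): the expected change
% in the next state when the sensitive attribute Z is counterfactually
% switched from z to z', holding the current state and action fixed.
\Delta_{\mathrm{dyn}}(s, a) =
    \mathbb{E}\!\left[ S_{t+1} \mid \mathrm{do}(Z = z'),\ S_t = s,\ A_t = a \right]
  - \mathbb{E}\!\left[ S_{t+1} \mid \mathrm{do}(Z = z),\ S_t = s,\ A_t = a \right]
```

Dynamics fairness asks this contrast (and its analogue for the reward) to vanish for all state-action pairs. A minimal plug-in estimator sketch follows, assuming discrete states and actions and that groups are comparable once (s, a) is fixed; this is a naive group-mean contrast under an unconfoundedness assumption, not the paper's identification formulas:

```python
import numpy as np
from collections import defaultdict

def dynamics_fairness_gap(transitions):
    """Naive plug-in estimate of the dynamics-fairness contrast.

    transitions: iterable of (z, s, a, r, s_next) with a binary sensitive
    attribute z and hashable (discrete) states and actions. For every
    (s, a) observed under both groups, we compare mean rewards and mean
    next states across z while holding (s, a) fixed.
    """
    buckets = defaultdict(lambda: {0: [], 1: []})
    for z, s, a, r, s_next in transitions:
        buckets[(s, a)][z].append((r, s_next))

    reward_gaps, state_gaps = [], []
    for groups in buckets.values():
        if groups[0] and groups[1]:  # need samples from both groups
            r0 = np.mean([r for r, _ in groups[0]])
            r1 = np.mean([r for r, _ in groups[1]])
            s0 = np.mean([sn for _, sn in groups[0]], axis=0)
            s1 = np.mean([sn for _, sn in groups[1]], axis=0)
            reward_gaps.append(abs(r1 - r0))
            state_gaps.append(np.abs(np.asarray(s1) - np.asarray(s0)).mean())
    return float(np.mean(reward_gaps)), float(np.mean(state_gaps))
```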
Related papers
- One Fits All: Learning Fair Graph Neural Networks for Various Sensitive Attributes [40.57757706386367]
We propose a graph fairness framework based on invariant learning, namely FairINV.
FairINV incorporates sensitive attribute partition and trains fair GNNs by eliminating spurious correlations between the label and various sensitive attributes.
Experimental results on several real-world datasets demonstrate that FairINV significantly outperforms state-of-the-art fairness approaches.
arXiv Detail & Related papers (2024-06-19T13:30:17Z)
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
Current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z)
- Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with unrelated attributes of downstream tasks.
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z)
- Fairness Explainability using Optimal Transport with Applications in Image Classification [0.46040036610482665]
We propose a comprehensive approach to uncover the causes of discrimination in Machine Learning applications.
We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions.
This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence on the bias.
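For intuition about the barycenter step: in one dimension, repairing all groups toward their Wasserstein barycenter amounts to quantile averaging, since the barycenter's quantile function is the mean of the group quantile functions. A minimal numpy sketch under that 1-D, equal-weights assumption (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def barycenter_repair(scores, groups, n_grid=101):
    """Map each group's 1-D scores onto the Wasserstein barycenter.

    In 1-D the barycenter's quantile function is the (uniformly weighted)
    average of the group quantile functions, so repair sends the point at
    quantile q within its group to the averaged quantile value at q.
    """
    qs = np.linspace(0, 1, n_grid)
    labels = np.unique(groups)
    # Quantile function of each group, evaluated on a common grid.
    group_q = {g: np.quantile(scores[groups == g], qs) for g in labels}
    bary_q = np.mean([group_q[g] for g in labels], axis=0)

    repaired = np.empty_like(scores, dtype=float)
    for g in labels:
        mask = groups == g
        # Rank of each score within its group, mapped into [0, 1].
        ranks = np.argsort(np.argsort(scores[mask])) / max(mask.sum() - 1, 1)
        repaired[mask] = np.interp(ranks, qs, bary_q)
    return repaired
```

After this repair, the groups share (approximately) one score distribution, which is what makes the residual per-feature influence on the bias measurable.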
arXiv Detail & Related papers (2023-08-22T00:10:23Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- FAIRER: Fairness as Decision Rationale Alignment [23.098752318439782]
Deep neural networks (DNNs) have made significant progress, but often suffer from fairness issues.
It is unclear how the trained network makes a fair prediction, which limits future fairness improvements.
We propose gradient-guided parity alignment, which encourages gradient-weighted consistency of neurons across subgroups.
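The "gradient-weighted consistency" here can be read, loosely, as penalizing subgroup gaps in decision-relevant neuron activity. A hedged PyTorch sketch of one such penalty follows; this is an illustrative reading, not FAIRER's exact objective:

```python
import torch

def parity_alignment_penalty(hidden, task_loss, group):
    """Gap between subgroups' gradient-weighted neuron activations.

    hidden: activations of shape (batch, neurons), kept in the autograd
    graph; task_loss: scalar task loss; group: tensor of 0/1 subgroup ids.
    Each neuron is weighted by |d task_loss / d hidden| (its decision
    relevance) before comparing subgroup means.
    """
    grads = torch.autograd.grad(task_loss, hidden, create_graph=True)[0]
    weighted = hidden * grads.abs()              # (batch, neurons)
    gap = weighted[group == 0].mean(0) - weighted[group == 1].mean(0)
    return gap.pow(2).sum()
```

Adding such a penalty to the task loss nudges the network to rely on neurons consistently across subgroups rather than only equalizing outputs.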
arXiv Detail & Related papers (2023-06-27T08:37:57Z)
- Counterfactual Fairness with Partially Known Causal Graph [85.15766086381352]
This paper proposes a general method to achieve the notion of counterfactual fairness when the true causal graph is unknown.
We find that counterfactual fairness can be achieved as if the true causal graph were fully known, when specific background knowledge is provided.
arXiv Detail & Related papers (2022-05-27T13:40:50Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which accounts for biases arising from both a node's own sensitive attributes and those of its neighbors.
We generate counterfactuals corresponding to perturbations on each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
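To make the perturbation idea concrete, here is a hedged sketch of probing a trained GNN encoder with one such counterfactual; the names, the encoder signature, and the 1-hop choice are assumptions for illustration, not the paper's framework:

```python
import torch

def counterfactual_gap(encoder, x, edge_index, sens_dim, node):
    """Flip the sensitive attribute of `node` and its 1-hop neighbors,
    then measure how far the node's embedding moves.

    encoder: any GNN mapping (x, edge_index) -> node embeddings.
    sens_dim: the column of x holding the binary sensitive attribute.
    A small gap suggests the representation ignores these perturbations.
    """
    with torch.no_grad():
        z = encoder(x, edge_index)
        # 1-hop neighborhood of `node`, plus the node itself.
        nbrs = edge_index[1][edge_index[0] == node]
        targets = torch.cat([nbrs, torch.tensor([node])])
        x_cf = x.clone()
        x_cf[targets, sens_dim] = 1.0 - x_cf[targets, sens_dim]
        z_cf = encoder(x_cf, edge_index)
    return (z[node] - z_cf[node]).norm().item()
```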
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Adversarial Learning for Counterfactual Fairness [15.302633901803526]
In recent years, fairness has become an important topic in the machine learning research community.
We propose to rely on an adversarial neural learning approach, which enables more powerful inference than MMD penalties.
Experiments show significant improvements in terms of counterfactual fairness for both discrete and continuous settings.
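A generic sketch of the adversarial idea referred to here (architecture and sizes are placeholders; the paper's exact setup may differ): an adversary tries to recover the sensitive attribute from the learned representation, and the encoder is trained to defeat it, playing the role an MMD-style penalty would otherwise play.

```python
import torch.nn as nn
import torch.nn.functional as F

# Placeholder dimensions; any encoder/adversary pair would do.
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
adversary = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

def adversary_loss(x, sens):
    """BCE of the adversary's guess of the binary sensitive attribute.

    Training alternates: the adversary minimizes this loss, while the
    encoder maximizes it (e.g. via a gradient-reversal layer), so the
    learned representation carries little sensitive information.
    """
    logits = adversary(encoder(x)).squeeze(-1)
    return F.binary_cross_entropy_with_logits(logits, sens.float())
```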
arXiv Detail & Related papers (2020-08-30T09:06:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.