Fairness for Cooperative Multi-Agent Learning with Equivariant Policies
- URL: http://arxiv.org/abs/2106.05727v1
- Date: Thu, 10 Jun 2021 13:17:46 GMT
- Title: Fairness for Cooperative Multi-Agent Learning with Equivariant Policies
- Authors: Niko A. Grupen, Bart Selman, Daniel D. Lee
- Abstract summary: We study fairness through the lens of cooperative multi-agent learning.
We introduce team fairness, a group-based fairness measure for multi-agent learning.
We then incorporate team fairness into policy optimization.
- Score: 24.92668968807012
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study fairness through the lens of cooperative multi-agent learning. Our
work is motivated by empirical evidence that naive maximization of team reward
yields unfair outcomes for individual team members. To address fairness in
multi-agent contexts, we introduce team fairness, a group-based fairness
measure for multi-agent learning. We then incorporate team fairness into policy
optimization -- introducing Fairness through Equivariance (Fair-E), a novel
learning strategy that achieves provably fair reward distributions. We then
introduce Fairness through Equivariance Regularization (Fair-ER) as a
soft-constraint version of Fair-E and show that Fair-ER reaches higher levels
of utility than Fair-E and fairer outcomes than policies with no equivariance.
Finally, we investigate the fairness-utility trade-off in multi-agent settings.
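The abstract describes Fair-ER only at a high level, so the following is a minimal sketch rather than the paper's actual objective: a soft equivariance penalty added to a policy loss, assuming a finite symmetry group (e.g., permutations of agent roles) acting on observations and policy outputs. The `policy`/`group_actions` interface and the weight `lam` are hypothetical.

```python
# Minimal sketch of a soft equivariance (Fair-ER-style) regularizer; the
# abstract does not give the exact objective, so this interface is assumed.
# `group_actions` is a list of (act_on_obs, act_on_out) callables, one pair
# per group element g, implementing the group action on inputs and outputs.
import torch

def equivariance_penalty(policy, obs, group_actions, lam=0.1):
    """Penalize || policy(g . obs) - g . policy(obs) ||^2, averaged over
    the group elements g in `group_actions`."""
    base = policy(obs)
    penalty = torch.zeros(())
    for act_on_obs, act_on_out in group_actions:
        transformed = policy(act_on_obs(obs))   # pi(g . s)
        expected = act_on_out(base)             # g . pi(s)
        penalty = penalty + (transformed - expected).pow(2).mean()
    return lam * penalty / len(group_actions)

# A Fair-ER-style training loss would then look roughly like:
#   loss = policy_gradient_loss + equivariance_penalty(policy, obs, G)
# where a weight like lam would control the fairness-utility trade-off
# the abstract mentions.
```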
Related papers
- Cooperation and Fairness in Multi-Agent Reinforcement Learning [6.164771707307928]
In resource-constrained environments of mobility and transportation systems, efficiency may be achieved at the expense of fairness.
We consider the problem of fair multi-agent navigation for a group of decentralized agents using multi-agent reinforcement learning (MARL).
We find that our model yields a 14% improvement in efficiency and a 5% improvement in fairness over a baseline trained using random assignments.
arXiv Detail & Related papers (2024-10-19T00:10:52Z)
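The navigation paper above reports a 5% fairness gain without defining its measure in the summary; a common way to quantify how evenly a team reward is split, used here purely for illustration, is the coefficient of variation of per-agent returns:

```python
# Illustrative fairness measure (a stand-in, not necessarily the paper's):
# coefficient of variation of per-agent returns; 0 = perfectly even split.
import numpy as np

def fairness_cv(agent_returns):
    r = np.asarray(agent_returns, dtype=float)
    return r.std() / (r.mean() + 1e-8)  # lower is fairer

print(fairness_cv([10.0, 10.0, 10.0]))  # 0.0: team reward split evenly
print(fairness_cv([27.0, 2.0, 1.0]))    # ~1.2: same total, unfair split
```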
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods for ensuring different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness across variables is widespread in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z)
- Equal Improvability: A New Fairness Notion Considering the Long-term Impact [27.72859815965265]
We propose a new fairness notion called Equal Improvability (EI).
EI equalizes the potential acceptance rate of the rejected samples across different groups.
We show that the proposed EI-regularized algorithms find classifiers that are fair in terms of EI.
arXiv Detail & Related papers (2022-10-13T04:59:28Z)
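As a rough reading of the EI notion above: a rejected sample is "improvable" if a bounded feature change can push its score over the decision threshold, and EI asks that the improvable fraction of rejected samples match across groups. The sketch below estimates that gap with projected gradient ascent on 2-D tabular inputs; the effort budget `delta`, the ascent schedule, and the interface are assumptions, not the paper's algorithm.

```python
# Illustrative EI gap, assuming a (n, d) tensor of tabular features and a
# model returning one score per sample. One plausible reading of the
# abstract, not the paper's algorithm.
import torch

def ei_gap(model, x, group, threshold=0.0, delta=0.5, steps=10, lr=0.1):
    with torch.no_grad():
        rejected = model(x).squeeze(-1) < threshold
    rates = []
    for g in group.unique():
        mask = rejected & (group == g)
        if mask.sum() == 0:
            continue  # no rejected samples in this group
        x0 = x[mask]
        xg = x0.clone().requires_grad_(True)
        for _ in range(steps):
            score_sum = model(xg).squeeze(-1).sum()
            (grad,) = torch.autograd.grad(score_sum, xg)
            with torch.no_grad():
                shift = xg + lr * grad - x0          # proposed total change
                norm = shift.norm(dim=1, keepdim=True).clamp(min=1e-8)
                xg = x0 + shift * (delta / norm).clamp(max=1.0)  # project
            xg = xg.requires_grad_(True)
        with torch.no_grad():
            rates.append((model(xg).squeeze(-1) >= threshold).float().mean())
    # Equal Improvability asks this gap to be (close to) zero.
    return max(rates) - min(rates)
```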
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Unified Group Fairness on Federated Learning [22.143427873780404]
Federated learning (FL) has emerged as an important machine learning paradigm where a global model is trained based on private data from distributed clients.
Recent research focuses on achieving fairness among clients, but ignores fairness towards different groups formed by sensitive attribute(s) (e.g., gender and/or race).
We propose a novel FL algorithm, named Group Distributionally Robust Federated Averaging (G-DRFA), which mitigates the distribution shift across groups with theoretical analysis of convergence rate.
arXiv Detail & Related papers (2021-11-09T08:21:38Z)
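G-DRFA's exact federated update rule and convergence analysis are in the paper; the sketch below shows only the generic group-DRO-style reweighting idea its name points to, where groups with higher loss get exponentially more weight in the next round:

```python
# Generic group-distributionally-robust reweighting (illustration only).
import numpy as np

def group_dro_weights(group_losses, q, eta=0.5):
    """Exponentiated-gradient step: upweight the worst-off groups,
    then renormalize, so the next round focuses on them."""
    q = q * np.exp(eta * np.asarray(group_losses, dtype=float))
    return q / q.sum()

q = np.ones(3) / 3                        # start uniform over 3 groups
q = group_dro_weights([0.9, 0.2, 0.4], q)
print(q.round(3))  # [0.403 0.284 0.314]: the high-loss group is upweighted
```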
- Fair Mixup: Fairness via Interpolation [28.508444261249423]
We propose fair mixup, a new data augmentation strategy for imposing the fairness constraint.
We show that fairness can be achieved by regularizing the models on paths of interpolated samples between the groups.
We empirically show that it improves generalization for both accuracy and fairness measures on benchmarks.
arXiv Detail & Related papers (2021-03-11T06:57:26Z)
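A minimal sketch of the interpolation-path regularization described above, assuming a demographic-parity setting with equal-size batches `x0` and `x1` drawn from the two groups; the finite-difference penalty is a stand-in for the path-smoothness term in the paper:

```python
# Sketch of a fair-mixup-style penalty: sample points along the straight
# line between a batch from each group and penalize how fast the mean
# prediction changes along that path (illustrative simplification).
import torch

def fair_mixup_penalty(model, x0, x1, n_t=5):
    ts = torch.linspace(0.0, 1.0, n_t)
    # mean prediction at points along the path between the groups
    means = torch.stack([model((1 - t) * x0 + t * x1).mean() for t in ts])
    # total variation of the mean prediction along the path
    return (means[1:] - means[:-1]).abs().sum()
```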
- SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness [50.916483212900275]
We first formulate a version of individual fairness that enforces invariance on certain sensitive sets.
We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently.
arXiv Detail & Related papers (2020-06-25T04:31:57Z)
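SenSeI's actual regularizer is transport-based; as a simplified stand-in, the sketch below models the sensitive set of each input as shifts along an assumed "sensitive direction" in feature space and penalizes the worst-case output change over that set:

```python
# Simplified sensitive-set invariance penalty (illustration only; not
# SenSeI's transport-based formulation). `sensitive_dir` is an assumed
# direction in feature space along which inputs should not matter.
import torch

def sensitive_invariance_penalty(model, x, sensitive_dir,
                                 scales=(-1.0, 1.0)):
    base = model(x)
    worst = torch.zeros(())
    for s in scales:
        shifted = model(x + s * sensitive_dir)  # point in the sensitive set
        worst = torch.maximum(worst, (shifted - base).pow(2).mean())
    return worst
```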
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.