Multi-Fair Pareto Boosting
- URL: http://arxiv.org/abs/2104.13312v1
- Date: Tue, 27 Apr 2021 16:37:35 GMT
- Title: Multi-Fair Pareto Boosting
- Authors: Arjun Roy, Vasileios Iosifidis, Eirini Ntoutsi
- Abstract summary: We introduce a new fairness notion, Multi-Max Mistreatment (MMM), which measures unfairness while considering both (multi-attribute) protected group and class membership of instances.
We solve the problem using a boosting approach that, in training, incorporates multi-fairness treatment in the distribution update and, post-training, finds multiple Pareto-optimal solutions.
- Score: 7.824964622317634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness-aware machine learning for multiple protected attributes (referred
to as multi-fairness hereafter) is receiving increasing attention, as
traditional single-protected-attribute approaches cannot ensure fairness
w.r.t. other protected attributes. Existing methods, however, still ignore the
fact that datasets in this domain are often imbalanced, leading to unfair
decisions towards the minority class. Thus, solutions are needed that achieve
multi-fairness, accurate overall predictive performance, and balanced
performance across the different classes. To this end, we introduce a new
fairness notion, Multi-Max Mistreatment (MMM), which measures unfairness while
considering both (multi-attribute) protected group and class membership of
instances. To learn an MMM-fair classifier, we propose a multi-objective
problem formulation. We solve the problem using a boosting approach that,
in training, incorporates multi-fairness treatment in the distribution update
and, post-training, finds multiple Pareto-optimal solutions; it then uses
pseudo-weight-based decision making to select optimal solution(s) among
accurate, balanced, and multi-attribute-fair solutions.
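The post-training selection step relies on pseudo-weights, a standard multi-objective technique for picking a point from a computed Pareto front according to a user's preference over the objectives. The sketch below illustrates that generic technique only, not the paper's exact procedure; the function names and the example preference vector are our own.

```python
import numpy as np

def pseudo_weights(F):
    """Pseudo-weight of each Pareto-front point.

    F: (n_points, n_objectives) array of objective values, all minimized.
    Each row is normalized distance from the worst value per objective,
    rescaled so the row sums to 1."""
    F = np.asarray(F, dtype=float)
    f_min, f_max = F.min(axis=0), F.max(axis=0)
    span = np.where(f_max > f_min, f_max - f_min, 1.0)  # guard constant objectives
    norm = (f_max - F) / span  # 1.0 = best on that objective, 0.0 = worst
    return norm / norm.sum(axis=1, keepdims=True)

def select_by_preference(F, preference):
    """Index of the front point whose pseudo-weight vector is closest
    (Euclidean) to the user's preference vector, e.g. relative importances
    of accuracy, class balance, and fairness."""
    W = pseudo_weights(F)
    return int(np.argmin(np.linalg.norm(W - np.asarray(preference), axis=1)))
```

For instance, on a three-point bi-objective front, a preference of `[0.5, 0.5]` picks the middle compromise solution, while `[1, 0]` picks the point that is best on the first objective.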
Related papers
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- Achieving Fairness in Multi-Agent Markov Decision Processes Using Reinforcement Learning [30.605881670761853]
We propose a Reinforcement Learning approach to achieve fairness in finite-horizon episodic MDPs.
We show that such an approach achieves sub-linear regret in terms of the number of episodes.
arXiv Detail & Related papers (2023-06-01T03:43:53Z) - Optimizing fairness tradeoffs in machine learning with multiobjective
meta-models [0.913755431537592]
We present a flexible framework for defining the fair machine learning task as a weighted classification problem with multiple cost functions.
We use multiobjective optimization to define the sample weights used in model training for a given machine learner, and adapt the weights to optimize multiple metrics of fairness and accuracy.
On a set of real-world problems, this approach outperforms current state-of-the-art methods by finding solution sets with preferable error/fairness trade-offs.
arXiv Detail & Related papers (2023-04-21T13:42:49Z) - Stochastic Methods for AUC Optimization subject to AUC-based Fairness
Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
arXiv Detail & Related papers (2022-12-23T22:29:08Z) - Practical Approaches for Fair Learning with Multitype and Multivariate
Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z) - Mitigating Unfairness via Evolutionary Multi-objective Ensemble Learning [0.8563354084119061]
Optimising one or several fairness measures may sacrifice or deteriorate other measures.
A multi-objective evolutionary learning framework is used to simultaneously optimise several metrics.
Our proposed algorithm can provide decision-makers with better tradeoffs among accuracy and multiple fairness metrics.
arXiv Detail & Related papers (2022-10-30T06:34:10Z) - Improving Robust Fairness via Balance Adversarial Training [51.67643171193376]
Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce severe disparity of accuracy and robustness between different classes.
We propose Balance Adversarial Training (BAT) to address the robust fairness problem.
arXiv Detail & Related papers (2022-09-15T14:44:48Z) - Towards A Holistic View of Bias in Machine Learning: Bridging
Algorithmic Fairness and Imbalanced Learning [8.602734307457387]
A key element in achieving algorithmic fairness with respect to protected groups is the simultaneous reduction of class and protected group imbalance in the underlying training data.
We propose a novel oversampling algorithm, Fair Oversampling, that addresses both skewed class distributions and protected features.
arXiv Detail & Related papers (2022-07-13T09:48:52Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - PLM: Partial Label Masking for Imbalanced Multi-label Classification [59.68444804243782]
Neural networks trained on real-world datasets with long-tailed label distributions are biased towards frequent classes and perform poorly on infrequent classes.
We propose a method, Partial Label Masking (PLM), which utilizes the per-class ratio of positive to negative labels during training.
Our method achieves strong performance when compared to existing methods on both multi-label (MultiMNIST and MSCOCO) and single-label (imbalanced CIFAR-10 and CIFAR-100) image classification datasets.
arXiv Detail & Related papers (2021-05-22T18:07:56Z) - Accuracy and Fairness Trade-offs in Machine Learning: A Stochastic
Multi-Objective Approach [0.0]
In the application of machine learning to real-life decision-making systems, the prediction outcomes might discriminate against people with sensitive attributes, leading to unfairness.
The commonly used strategy in fair machine learning is to include fairness as a constraint or a penalization term in the minimization of the prediction loss.
In this paper, we introduce a new approach to handle fairness by formulating a multi-objective optimization problem.
arXiv Detail & Related papers (2020-08-03T18:51:24Z)
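The last entry frames prediction loss and fairness as separate objectives rather than folding fairness into a single penalized loss. As a minimal illustration of what such a pair of objectives can look like for a binary classifier, the sketch below sweeps a decision threshold and records (error, unfairness) points; the function names and the choice of demographic parity as the fairness metric are ours, not that paper's.

```python
def demographic_parity_gap(pred, group):
    """|P(yhat=1 | g=0) - P(yhat=1 | g=1)| for a binary sensitive attribute."""
    def positive_rate(g):
        members = [p for p, gi in zip(pred, group) if gi == g]
        return sum(members) / len(members)
    return abs(positive_rate(0) - positive_rate(1))

def error_fairness_points(scores, y, group, thresholds):
    """Sweep a decision threshold over classifier scores; each threshold
    yields one (error, unfairness) point, i.e. one evaluation of the two
    objectives in a bi-objective accuracy/fairness formulation."""
    points = []
    for t in thresholds:
        pred = [1 if s >= t else 0 for s in scores]
        err = sum(p != yi for p, yi in zip(pred, y)) / len(y)
        points.append((err, demographic_parity_gap(pred, group)))
    return points
```

A multi-objective solver would then search over model parameters (not just a threshold) to trace the full Pareto front of such points.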
This list is automatically generated from the titles and abstracts of the papers in this site.