Mitigating Unfairness via Evolutionary Multi-objective Ensemble Learning
- URL: http://arxiv.org/abs/2210.16754v1
- Date: Sun, 30 Oct 2022 06:34:10 GMT
- Title: Mitigating Unfairness via Evolutionary Multi-objective Ensemble Learning
- Authors: Qingquan Zhang, Jialin Liu, Zeqi Zhang, Junyi Wen, Bifei Mao, Xin Yao
- Abstract summary: Optimising one or several fairness measures may sacrifice or degrade other measures.
A multi-objective evolutionary learning framework is used to simultaneously optimise several metrics.
Our proposed algorithm can provide decision-makers with better tradeoffs among accuracy and multiple fairness metrics.
- Score: 0.8563354084119061
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the literature on mitigating unfairness in machine learning, many fairness
measures have been designed both to evaluate the predictions of learning models and to
guide the training of fair models. It has been shown, theoretically and empirically,
that conflicts and inconsistencies exist among accuracy and multiple fairness measures:
optimising one or several fairness measures may sacrifice or degrade others. Two key
questions therefore arise: how to simultaneously optimise accuracy and multiple fairness
measures, and how to optimise all the considered fairness measures more effectively. In
this paper, we treat mitigating unfairness as a multi-objective learning problem, taking
the conflicts among fairness measures into account. A multi-objective evolutionary
learning framework is used to simultaneously optimise several metrics (including
accuracy and multiple fairness measures) of machine learning models, and ensembles are
then constructed from the learned models to automatically balance the different metrics.
Empirical results on eight well-known datasets demonstrate that, compared with
state-of-the-art approaches for mitigating unfairness, our proposed algorithm provides
decision-makers with better tradeoffs among accuracy and multiple fairness metrics.
Furthermore, the high-quality models generated by the framework can be combined into an
ensemble that achieves a better tradeoff among all the considered fairness metrics than
other ensemble methods. Our code is publicly available at
https://github.com/qingquan63/FairEMOL
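
The abstract describes a two-stage pipeline: evaluate candidate models on accuracy plus several
fairness measures, keep the Pareto-optimal (non-dominated) models, and combine them into an
ensemble. The snippet below is a minimal sketch of that idea, not the authors' FairEMOL
implementation: it stands in for evolutionary variation with a simple hyperparameter sweep over
logistic-regression candidates, uses classification error plus two common group-fairness gaps
(demographic-parity and equal-opportunity differences) as the objectives to minimise, and
averages the Pareto-front models' predicted probabilities. All names (`fairness_objectives`,
`pareto_front`) and the toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def fairness_objectives(y_true, y_pred, sensitive):
    """Objectives to MINIMISE: error rate, demographic-parity gap, equal-opportunity gap,
    computed with respect to a binary sensitive attribute (illustrative choice of measures)."""
    err = np.mean(y_pred != y_true)
    g0, g1 = sensitive == 0, sensitive == 1
    dp_gap = abs(y_pred[g0].mean() - y_pred[g1].mean())

    def tpr(group):
        mask = group & (y_true == 1)
        return y_pred[mask].mean() if np.any(mask) else 0.0

    eo_gap = abs(tpr(g0) - tpr(g1))
    return np.array([err, dp_gap, eo_gap])


def pareto_front(objectives):
    """Indices of non-dominated candidates (all objectives minimised)."""
    keep = []
    for i, oi in enumerate(objectives):
        dominated = any(np.all(oj <= oi) and np.any(oj < oi)
                        for j, oj in enumerate(objectives) if j != i)
        if not dominated:
            keep.append(i)
    return keep


# Toy data: one binary sensitive attribute correlated with the features and the label.
rng = np.random.default_rng(0)
n = 2000
sensitive = rng.integers(0, 2, n)
X = rng.normal(size=(n, 5)) + 0.5 * sensitive[:, None]
y = (X[:, 0] + 0.5 * sensitive + rng.normal(size=n) > 0.5).astype(int)
X_tr, X_va, y_tr, y_va, s_tr, s_va = train_test_split(
    X, y, sensitive, test_size=0.5, random_state=0)

# Candidate population: a hyperparameter sweep stands in for evolutionary variation here.
population = [LogisticRegression(C=c, max_iter=1000).fit(X_tr, y_tr)
              for c in np.logspace(-3, 2, 20)]
objs = np.array([fairness_objectives(y_va, m.predict(X_va), s_va) for m in population])

# Keep the non-dominated models and ensemble them by averaging predicted probabilities.
front = pareto_front(objs)
ensemble_prob = np.mean([population[i].predict_proba(X_va)[:, 1] for i in front], axis=0)
ensemble_pred = (ensemble_prob >= 0.5).astype(int)

print("Pareto-optimal candidates:", front)
print("Ensemble (error, DP gap, EO gap):", fairness_objectives(y_va, ensemble_pred, s_va))
```

In the paper's framework, candidate models would instead be produced by evolutionary variation
and selected with multi-objective criteria over generations; the Pareto filtering and ensemble
averaging above only illustrate how non-dominated models can be combined to balance accuracy
against several fairness measures at once.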
Related papers
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z) - Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Fairly Accurate: Learning Optimal Accuracy vs. Fairness Tradeoffs for
Hate Speech Detection [8.841221697099687]
We introduce a differentiable measure that enables direct optimization of group fairness in model training.
We evaluate our methods on the specific task of hate speech detection.
Empirical results across convolutional, sequential, and transformer-based neural architectures show superior empirical accuracy vs. fairness trade-offs over prior work.
arXiv Detail & Related papers (2022-04-15T22:11:25Z) - Learning Optimal Fair Classification Trees: Trade-offs Between
Interpretability, Fairness, and Accuracy [7.215903549622416]
We propose a mixed integer optimization framework for learning optimal classification trees.
We benchmark our method against state-of-the-art approaches for fair classification on popular datasets.
Our method consistently finds decisions with almost full parity, while other methods rarely do.
arXiv Detail & Related papers (2022-01-24T19:47:10Z) - FADE: FAir Double Ensemble Learning for Observable and Counterfactual
Outcomes [0.0]
Methods for building fair predictors often involve tradeoffs between fairness and accuracy and between different fairness criteria.
We develop a flexible framework for fair ensemble learning that allows users to efficiently explore the fairness-accuracy space.
We show that, surprisingly, multiple unfairness measures can sometimes be minimized simultaneously with little impact on accuracy.
arXiv Detail & Related papers (2021-09-01T03:56:43Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce
Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z) - Fairness Constraints in Semi-supervised Learning [56.48626493765908]
We develop a framework for fair semi-supervised learning, which is formulated as an optimization problem.
We theoretically analyze the source of discrimination in semi-supervised learning via bias, variance and noise decomposition.
Our method is able to achieve fair semi-supervised learning, and reach a better trade-off between accuracy and fairness than fair supervised learning.
arXiv Detail & Related papers (2020-09-14T04:25:59Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.