Fair Few-shot Learning with Auxiliary Sets
- URL: http://arxiv.org/abs/2308.14338v1
- Date: Mon, 28 Aug 2023 06:31:37 GMT
- Title: Fair Few-shot Learning with Auxiliary Sets
- Authors: Song Wang, Jing Ma, Lu Cheng, Jundong Li
- Abstract summary: In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the "fair few-shot learning" problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
- Score: 53.30014767684218
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, there has been a growing interest in developing machine learning
(ML) models that can promote fairness, i.e., eliminating biased predictions
towards certain populations (e.g., individuals from a specific demographic
group). Most existing works learn such models based on well-designed fairness
constraints in optimization. Nevertheless, in many practical ML tasks, only
very few labeled data samples can be collected, which can lead to inferior
fairness performance. This is because existing fairness constraints are
designed to restrict the prediction disparity among different sensitive groups,
but with few samples, it becomes difficult to accurately measure the disparity,
thus rendering fairness optimization ineffective. In this paper, we define the
fairness-aware learning task with limited training samples as the "fair
few-shot learning" problem. To deal with this problem, we devise a novel
framework that accumulates fairness-aware knowledge across different
meta-training tasks and then generalizes the learned knowledge to meta-test
tasks. To compensate for insufficient training samples, we propose an essential
strategy to select and leverage an auxiliary set for each meta-test task. These
auxiliary sets contain several labeled training samples that can enhance the
model performance regarding fairness in meta-test tasks, thereby allowing for
the transfer of learned useful fairness-oriented knowledge to meta-test tasks.
Furthermore, we conduct extensive experiments on three real-world datasets to
validate the superiority of our framework against the state-of-the-art
baselines.
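The failure mode described above is concrete: fairness constraints such as demographic parity compare prediction rates across sensitive groups, and with only a handful of support samples those group-wise rates are unreliable. Below is a minimal PyTorch-style sketch of this general recipe, pooling an auxiliary set with a task's support set before computing a differentiable disparity penalty. All names (`model`, `support_*`, `aux_*`, `lam`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def demographic_parity_gap(logits, sensitive):
    """|P(yhat = 1 | s = 0) - P(yhat = 1 | s = 1)| for a binary task,
    computed from soft predictions so the penalty stays differentiable.
    Assumes both sensitive groups are present in the batch."""
    probs = torch.softmax(logits, dim=-1)[:, 1]
    gap = probs[sensitive == 0].mean() - probs[sensitive == 1].mean()
    return gap.abs()

def episode_loss(model, support_x, support_y, support_s,
                 aux_x, aux_y, aux_s, lam=1.0):
    # Pool the few-shot support set with the auxiliary set so the
    # group-wise disparity estimate rests on more than a few samples.
    x = torch.cat([support_x, aux_x])
    y = torch.cat([support_y, aux_y])
    s = torch.cat([support_s, aux_s])
    logits = model(x)
    return F.cross_entropy(logits, y) + lam * demographic_parity_gap(logits, s)
```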
Related papers
- Fair Class-Incremental Learning using Sample Weighting [27.82760149957115]
We show that naively using all the samples of the current task for training results in unfair catastrophic forgetting for certain sensitive groups, including classes themselves.
We propose a fair class-incremental learning framework that adjusts the training weights of current task samples to change the direction of the average gradient vector.
Experiments show that FSW achieves better accuracy-fairness tradeoff results than state-of-the-art approaches on real datasets.
arXiv Detail & Related papers (2024-10-02T08:32:21Z)
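One way to picture the weighting idea, as a rough sketch only: score each current-task sample by how well its gradient aligns with a direction that reduces the loss of the worst-off group, so the weighted average gradient is steered away from unfair forgetting. FSW's actual optimization differs; every name below is illustrative.

```python
import torch

def fair_sample_weights(per_sample_losses, worst_group_loss, params):
    # Gradient direction that would help the currently worst-off group.
    g_ref = torch.autograd.grad(worst_group_loss, params, retain_graph=True)
    g_ref = torch.cat([g.flatten() for g in g_ref])
    weights = []
    for loss_i in per_sample_losses:
        g_i = torch.autograd.grad(loss_i, params, retain_graph=True)
        g_i = torch.cat([g.flatten() for g in g_i])
        # Cosine alignment with the reference gradient, clipped to [0, 1].
        align = torch.cosine_similarity(g_i, g_ref, dim=0)
        weights.append(torch.clamp(align, min=0.0))
    w = torch.stack(weights)
    return w / (w.sum() + 1e-8)  # normalized training weights
```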
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion: adaptively setting the label smoothing value during training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
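The mechanism is easy to state in code: map each sample's uncertainty to a per-sample label-smoothing value, so uncertain samples get softer targets. A sketch under assumed names; the paper's exact uncertainty-to-smoothing mapping may differ from the linear one used here.

```python
import torch
import torch.nn.functional as F

def ual_loss(logits, targets, uncertainty, max_eps=0.2):
    # Per-sample smoothing in [0, max_eps], scaled by uncertainty in [0, 1].
    eps = (uncertainty.clamp(0, 1) * max_eps).unsqueeze(1)   # (B, 1)
    n_classes = logits.size(1)
    one_hot = F.one_hot(targets, n_classes).float()          # (B, C)
    soft = one_hot * (1 - eps) + eps / n_classes             # smoothed targets
    return -(soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```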
- Mitigating Unfairness via Evolutionary Multi-objective Ensemble Learning [0.8563354084119061]
Optimising one or several fairness measures may sacrifice or degrade other measures.
A multi-objective evolutionary learning framework is used to simultaneously optimise several metrics.
Our proposed algorithm can provide decision-makers with better trade-offs between accuracy and multiple fairness metrics.
arXiv Detail & Related papers (2022-10-30T06:34:10Z)
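The deliverable of such a framework is a set of non-dominated models rather than a single one. A toy sketch of the Pareto-filtering step (the evolutionary operators that generate candidates are omitted, and all names are assumptions):

```python
def pareto_front(candidates):
    """candidates: list of (model, error, unfairness) triples; returns
    the non-dominated set a decision-maker would choose from."""
    front = []
    for m, e, u in candidates:
        dominated = any(e2 <= e and u2 <= u and (e2 < e or u2 < u)
                        for _, e2, u2 in candidates)
        if not dominated:
            front.append((m, e, u))
    return front
```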
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed to balance model performance across different demographic groups using sensitive attributes from a validation set.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
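The two-stage shape can be sketched as: derive a weight for each training point from how it influences a fairness gap measured on the validation set, then retrain on the weighted loss. The gradient-dot-product influence proxy below is a crude stand-in for the paper's influence-function estimator; all names are assumptions.

```python
import torch

def stage1_weights(per_sample_grads, val_fairness_grad, scale=1.0):
    # per_sample_grads: (N, D) flattened gradients of each training point;
    # val_fairness_grad: (D,) gradient of the validation fairness gap.
    # Points whose gradients increase the gap get downweighted.
    influence = per_sample_grads @ val_fairness_grad     # (N,)
    return torch.sigmoid(-scale * influence)             # weights in (0, 1)

def stage2_loss(per_sample_losses, weights):
    # Second stage: ordinary training on the reweighted objective.
    return (weights.detach() * per_sample_losses).mean()
```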
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
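The pre-processing step is standard confidence-thresholded pseudo labeling, which enlarges the pool of samples available per sensitive group; a short sketch with an assumed threshold:

```python
import torch

def pseudo_label(model, unlabeled_x, threshold=0.9):
    # Label unlabeled points with the current model, keeping only
    # confident predictions for the fairness-aware training that follows.
    with torch.no_grad():
        probs = torch.softmax(model(unlabeled_x), dim=1)
        conf, y_hat = probs.max(dim=1)
    keep = conf >= threshold
    return unlabeled_x[keep], y_hat[keep]
```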
- Fairness Constraints in Semi-supervised Learning [56.48626493765908]
We develop a framework for fair semi-supervised learning, which is formulated as an optimization problem.
We theoretically analyze the source of discrimination in semi-supervised learning via bias, variance and noise decomposition.
Our method is able to achieve fair semi-supervised learning, and reach a better trade-off between accuracy and fairness than fair supervised learning.
arXiv Detail & Related papers (2020-09-14T04:25:59Z)
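For reference, the analysis mirrors the classical bias-variance-noise split of expected squared error shown below; the paper's contribution is adapting this kind of split from prediction loss to a discrimination measure, which is not reproduced here.

```latex
\mathbb{E}_{D,y}\!\left[(f_D(x) - y)^2\right]
  = \underbrace{\left(\mathbb{E}_D[f_D(x)] - \bar{y}(x)\right)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\!\left[(f_D(x) - \mathbb{E}_D[f_D(x)])^2\right]}_{\text{variance}}
  + \underbrace{\mathbb{E}_y\!\left[(y - \bar{y}(x))^2\right]}_{\text{noise}},
  \qquad \bar{y}(x) = \mathbb{E}[\, y \mid x \,].
```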
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
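One common way to realize such an objective, sketched here as an assumption rather than the paper's method, is to train a small ensemble where each member fits the task while a repulsion term rewards disagreement between members' predictive distributions:

```python
import torch.nn.functional as F

def diverse_ensemble_loss(models, x, y, beta=0.1):
    log_probs = [F.log_softmax(m(x), dim=1) for m in models]
    task = sum(F.nll_loss(lp, y) for lp in log_probs)
    # Pairwise repulsion: subtracting KL rewards distinct predictions.
    rep = 0.0
    for i in range(len(models)):
        for j in range(i + 1, len(models)):
            rep = rep - F.kl_div(log_probs[i], log_probs[j].exp(),
                                 reduction="batchmean")
    return task + beta * rep
```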
- Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning [8.436127109155008]
Individuals, as well as organisations, notice, test, and criticize unfair results to hold model designers and deployers accountable.
We offer a framework that assists these groups in mitigating unfair representations stemming from the training datasets.
Our framework relies on two inter-operating adversaries to improve fairness.
arXiv Detail & Related papers (2020-05-14T10:10:19Z)
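The debiasing half of such a setup follows the familiar adversarial-fairness pattern: an adversary tries to recover the sensitive attribute from the model's representation, and the main model is trained to defeat it. A sketch of one training step under assumed names (the paper's full framework couples this with a second, attacking adversary, omitted here):

```python
import torch.nn.functional as F

def adversarial_fairness_step(encoder, classifier, adversary,
                              x, y, s, opt_main, opt_adv, lam=1.0):
    z = encoder(x)
    # 1) Adversary learns to predict the sensitive attribute s from z.
    adv_loss = F.cross_entropy(adversary(z.detach()), s)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()
    # 2) Encoder + classifier learn to predict y while making s
    #    unpredictable from the shared representation.
    main_loss = (F.cross_entropy(classifier(z), y)
                 - lam * F.cross_entropy(adversary(z), s))
    opt_main.zero_grad()
    main_loss.backward()
    opt_main.step()
    return main_loss.item(), adv_loss.item()
```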