Fairness-Aware Online Meta-learning
- URL: http://arxiv.org/abs/2108.09435v1
- Date: Sat, 21 Aug 2021 04:36:40 GMT
- Title: Fairness-Aware Online Meta-learning
- Authors: Chen Zhao, Feng Chen, Bhavani Thuraisingham
- Abstract summary: We propose a novel online meta-learning algorithm, namely FFML, under the setting of unfairness prevention.
Our experiments demonstrate the versatility of FFML by applying it to classification on three real-world datasets.
- Score: 9.513605738438047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In contrast to offline learning, two research paradigms have been devised
for online learning: (1) Online Meta-Learning (OML) learns good priors over
model parameters (or learning to learn) in a sequential setting where tasks are
revealed one after another. Although such techniques provide a sub-linear regret
bound, they completely ignore the importance of learning with fairness,
which is a significant hallmark of human intelligence. (2) Online
Fairness-Aware Learning captures many classification problems for
which fairness is a concern, but it aims to attain zero-shot generalization
without any task-specific adaptation, which limits a model's ability to adapt
to newly arriving data. To overcome these issues and bridge the
gap, in this paper we propose, for the first time, a novel online meta-learning
algorithm, namely FFML, under the setting of unfairness prevention.
The key part of FFML is to learn good priors for an online fair classification
model's primal and dual parameters, which are associated with the model's
accuracy and fairness, respectively. The problem is formulated as a
bi-level convex-concave optimization. Theoretical analysis provides sub-linear
upper bounds for loss regret and for the violation of cumulative fairness
constraints. Our experiments demonstrate the versatility of FFML by applying it
to classification on three real-world datasets and show substantial
improvements over the best prior work in the tradeoff between fairness and
classification accuracy.
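To make the primal-dual structure concrete, here is a minimal sketch of one online round for a logistic model under a demographic-parity constraint. It is an illustration under assumptions, not the paper's exact FFML update (which additionally meta-learns priors for both parameter sets across tasks); names such as `dp_gap` and `primal_dual_step` are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dp_gap(theta, X, s):
    """Smooth demographic-parity surrogate: gap in mean predicted
    positive rate between sensitive groups s=1 and s=0.
    Assumes both groups are present in the batch."""
    p = sigmoid(X @ theta)
    return p[s == 1].mean() - p[s == 0].mean()

def primal_dual_step(theta, lam, X, y, s, eps=0.05,
                     lr_primal=0.1, lr_dual=0.1):
    """One online step on newly arrived data: descend the Lagrangian
    in the primal weights theta, ascend in the dual multiplier lam."""
    n = len(y)
    p = sigmoid(X @ theta)
    grad_loss = X.T @ (p - y) / n          # logistic-loss gradient, y in {0,1}
    # Gradient of |dp_gap| via the chain rule on the sigmoid.
    g = dp_gap(theta, X, s)
    w = p * (1 - p)
    grad_gap = (X[s == 1] * w[s == 1][:, None]).mean(axis=0) \
             - (X[s == 0] * w[s == 0][:, None]).mean(axis=0)
    theta = theta - lr_primal * (grad_loss + lam * np.sign(g) * grad_gap)
    lam = max(0.0, lam + lr_dual * (abs(g) - eps))  # ascend on constraint violation
    return theta, lam
```

At each round t one would call `theta, lam = primal_dual_step(theta, lam, X_t, y_t, s_t)`; per the abstract, FFML's contribution is learning good priors for both `theta` and `lam` so that adaptation to each new task is fast.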
Related papers
- Fairness Uncertainty Quantification: How certain are you that the model
is fair? [13.209748908186606]
In modern machine learning, Stochastic Gradient Descent (SGD)-type algorithms are almost always used for training, implying that the learned model, and consequently its fairness properties, are random.
In this work we provide confidence intervals (CIs) for test unfairness when a group-fairness-aware linear binary classifier, specifically one aware of Disparate Impact (DI) and Disparate Mistreatment (DM), is trained using online SGD-type algorithms.
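For reference, the two fairness notions the paper builds intervals around can be point-estimated as follows; these are the standard formulations, not the paper's CI construction, and the function names are illustrative.

```python
import numpy as np

def disparate_impact(y_pred, s):
    """DI point estimate: ratio of positive-prediction rates,
    unprivileged group (s=0) over privileged group (s=1).
    Assumes the privileged group has a nonzero positive rate."""
    return y_pred[s == 0].mean() / y_pred[s == 1].mean()

def disparate_mistreatment(y_pred, y_true, s):
    """DM point estimate: gap in misclassification rates across groups."""
    err = (y_pred != y_true).astype(float)
    return abs(err[s == 0].mean() - err[s == 1].mean())
```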
arXiv Detail & Related papers (2023-04-27T04:07:58Z) - Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
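One common reading of the reweighing strategy is the classic Kamiran-Calders preprocessing, sketched below under the assumption that every (group, label) cell is non-empty; the paper's drift-driven variant may differ.

```python
import numpy as np

def reweighing_weights(y, s):
    """Classic reweighing: weight each example so that group membership s
    and label y look statistically independent, w(s, y) = P(s) * P(y) / P(s, y).
    Assumes every (s, y) combination occurs at least once."""
    w = np.empty(len(y), dtype=float)
    for sv in np.unique(s):
        for yv in np.unique(y):
            mask = (s == sv) & (y == yv)
            w[mask] = (s == sv).mean() * (y == yv).mean() / mask.mean()
    return w
```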
arXiv Detail & Related papers (2023-03-30T17:30:42Z) - Preventing Discriminatory Decision-making in Evolving Data Streams [8.952662914331901]
Bias in machine learning has rightly received significant attention over the last decade.
Most fair machine learning (fair-ML) work to address bias in decision-making systems has focused solely on the offline setting.
Despite the wide prevalence of online systems in the real world, work on identifying and correcting bias in the online setting is severely lacking.
arXiv Detail & Related papers (2023-02-16T01:20:08Z) - Prototype-Anchored Learning for Learning with Imperfect Annotations [83.7763875464011]
It is challenging to learn unbiased classification models from imperfectly annotated datasets.
We propose a prototype-anchored learning (PAL) method, which can be easily incorporated into various learning-based classification schemes.
We verify the effectiveness of PAL on class-imbalanced learning and noise-tolerant learning by extensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-06-23T10:25:37Z) - Adaptive Fairness-Aware Online Meta-Learning for Changing Environments [29.073555722548956]
The fairness-aware online learning framework has arisen as a powerful tool for the continual lifelong learning setting.
Existing methods make heavy use of the i.i.d. assumption for data and hence provide static regret analysis for the framework.
We propose a novel adaptive fairness-aware online meta-learning algorithm, namely FairSAOML, which is able to adapt to changing environments in both bias control and model precision.
arXiv Detail & Related papers (2022-05-20T15:29:38Z) - Towards Accurate Knowledge Transfer via Target-awareness Representation
Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED)
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer during fine-tuning of the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
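As a rough illustration of a disentanglement-based regularizer (an assumption-laden sketch, not TRED's actual formulation), fine-tuning could penalize the distance between the fine-tuned features and target-relevant features distilled from the source model; the `(feats, logits)` model interface and `anchor_features` are hypothetical.

```python
import torch

def tred_style_loss(model, anchor_features, x, y, beta=0.1):
    """Illustrative fine-tuning objective: task loss plus a penalty keeping
    the fine-tuned features close to target-relevant features assumed to be
    precomputed from the source model (`anchor_features`)."""
    feats, logits = model(x)  # hypothetical model returning features and logits
    task_loss = torch.nn.functional.cross_entropy(logits, y)
    reg = torch.nn.functional.mse_loss(feats, anchor_features)
    return task_loss + beta * reg
```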
arXiv Detail & Related papers (2020-10-16T17:45:08Z) - A Primal-Dual Subgradient Approachfor Fair Meta Learning [23.65344558042896]
Few-shot meta-learning is well known for its fast adaptation and accurate generalization to unseen tasks.
We propose a Primal-Dual Fair Meta-learning framework, namely PDFM, which learns to train fair machine learning models using only a few examples.
arXiv Detail & Related papers (2020-09-26T19:47:38Z) - Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce
Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
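A minimal sketch of the pseudo-labeling step follows; the confidence threshold and the scikit-learn-style estimator interface are assumptions, not details from the paper.

```python
import numpy as np

def pseudo_label(clf, X_unlabeled, threshold=0.9):
    """Pre-processing sketch: assign pseudo labels to unlabeled points the
    current classifier is confident about, so they can be folded into
    fairness-aware training. `clf` is any fitted estimator exposing
    predict_proba and classes_ (scikit-learn convention)."""
    proba = clf.predict_proba(X_unlabeled)
    keep = proba.max(axis=1) >= threshold
    labels = clf.classes_[proba[keep].argmax(axis=1)]
    return X_unlabeled[keep], labels
```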
arXiv Detail & Related papers (2020-09-25T05:48:56Z) - Fairness Constraints in Semi-supervised Learning [56.48626493765908]
We develop a framework for fair semi-supervised learning, which is formulated as an optimization problem.
We theoretically analyze the source of discrimination in semi-supervised learning via bias, variance and noise decomposition.
Our method is able to achieve fair semi-supervised learning, and reach a better trade-off between accuracy and fairness than fair supervised learning.
arXiv Detail & Related papers (2020-09-14T04:25:59Z) - Deep F-measure Maximization for End-to-End Speech Understanding [52.36496114728355]
We propose a differentiable approximation to the F-measure and train the network with this objective using standard backpropagation.
We perform experiments on two standard fairness datasets (Adult, and Communities and Crime), as well as on speech-to-intent detection on the ATIS dataset and speech-to-image concept classification on the Speech-COCO dataset.
In all four tasks, the F-measure objective yields improved micro-F1 scores, with absolute improvements of up to 8%, compared to models trained with the cross-entropy loss.
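A common way to make the F-measure differentiable, consistent with the description above though possibly differing from the paper's exact approximation, is to replace hard TP/FP/FN counts with sums of predicted probabilities.

```python
import torch

def soft_f1_loss(logits, targets, eps=1e-8):
    """Differentiable F-measure surrogate for binary classification:
    soft counts of true/false positives and false negatives from predicted
    probabilities, then minimize 1 - soft-F1. targets are 0/1 floats."""
    p = torch.sigmoid(logits)            # predicted P(y=1)
    tp = (p * targets).sum()
    fp = (p * (1 - targets)).sum()
    fn = ((1 - p) * targets).sum()
    f1 = 2 * tp / (2 * tp + fp + fn + eps)
    return 1.0 - f1
```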
arXiv Detail & Related papers (2020-08-08T03:02:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.