Fair Robust Active Learning by Joint Inconsistency
- URL: http://arxiv.org/abs/2209.10729v1
- Date: Thu, 22 Sep 2022 01:56:41 GMT
- Title: Fair Robust Active Learning by Joint Inconsistency
- Authors: Tsung-Han Wu, Shang-Tse Chen, Winston H. Hsu
- Abstract summary: We introduce a novel task, Fair Robust Active Learning (FRAL), integrating conventional FAL and adversarial robustness.
We develop a simple yet effective FRAL strategy by Joint INconsistency (JIN).
Our method exploits the prediction inconsistency between benign and adversarial samples as well as between standard and robust models.
- Score: 22.150782414035422
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Fair Active Learning (FAL) utilizes active learning techniques to achieve
high model performance with limited data and to reach fairness between
sensitive groups (e.g., genders). However, adversarial robustness, which is
vital for various safety-critical machine learning applications, has not yet
been addressed in FAL. Observing this, we introduce a novel
task, Fair Robust Active Learning (FRAL), integrating conventional FAL and
adversarial robustness. FRAL requires ML models to leverage active learning
techniques to jointly achieve equalized performance on benign data and
equalized robustness against adversarial attacks between groups. In this new
task, previous FAL methods generally suffer from a prohibitive computational
burden and ineffectiveness. Therefore, we develop a simple yet
effective FRAL strategy by Joint INconsistency (JIN). To efficiently find
samples that can boost the performance and robustness of disadvantaged groups
for labeling, our method exploits the prediction inconsistency between benign
and adversarial samples as well as between standard and robust models.
Extensive experiments across diverse datasets and sensitive groups demonstrate
that our method not only achieves fairer performance on benign samples but also
obtains fairer robustness under white-box PGD attacks compared with existing
active learning and FAL baselines. We are optimistic that FRAL would pave a new
path for developing safe and robust ML research and applications such as facial
attribute recognition in biometrics systems.
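The selection rule described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration rather than the authors' implementation: `std_model`, `rob_model`, the one-step FGSM perturbation (standing in for the paper's adversarial example generation), the symmetric-KL inconsistency measure, and the equal weighting of the two terms are all assumptions made for clarity.

```python
# Minimal sketch of a Joint INconsistency (JIN)-style acquisition score.
# NOT the authors' code: std_model / rob_model, the one-step FGSM attack
# (a stand-in for the paper's adversarial examples), the symmetric-KL
# inconsistency measure, and the equal weighting are illustrative assumptions.
import torch
import torch.nn.functional as F


def fgsm_perturb(model, x, y, eps=8 / 255):
    """One-step FGSM perturbation used here as a simple adversarial proxy."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()


def symmetric_kl(p, q, eps=1e-12):
    """Per-sample symmetric KL divergence between two softmax distributions."""
    p, q = p.clamp_min(eps), q.clamp_min(eps)
    kl_pq = (p * (p.log() - q.log())).sum(dim=1)
    kl_qp = (q * (q.log() - p.log())).sum(dim=1)
    return 0.5 * (kl_pq + kl_qp)


def joint_inconsistency_scores(std_model, rob_model, x):
    """Score unlabeled samples by (i) benign-vs-adversarial prediction
    inconsistency of the robust model and (ii) standard-vs-robust model
    inconsistency on benign inputs; higher scores = more worth labeling."""
    std_model.eval()
    rob_model.eval()
    with torch.no_grad():
        logits_rob = rob_model(x)
        y_pseudo = logits_rob.argmax(dim=1)          # pseudo-labels for the attack
        p_rob_benign = F.softmax(logits_rob, dim=1)
        p_std_benign = F.softmax(std_model(x), dim=1)
    x_adv = fgsm_perturb(rob_model, x, y_pseudo)
    with torch.no_grad():
        p_rob_adv = F.softmax(rob_model(x_adv), dim=1)
    adv_incon = symmetric_kl(p_rob_benign, p_rob_adv)       # benign vs. adversarial
    model_incon = symmetric_kl(p_std_benign, p_rob_benign)  # standard vs. robust
    return adv_incon + model_incon
```

In a FRAL-style loop, one would presumably compute these scores over the unlabeled pool per sensitive group and spend the labeling budget on the highest-scoring samples of the currently disadvantaged group; the exact fairness-aware budget allocation follows the paper.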
Related papers
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed management framework based on collaborative model training of distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
arXiv Detail & Related papers (2024-03-05T09:18:29Z) - How To Overcome Confirmation Bias in Semi-Supervised Image
Classification By Active Learning [2.1805442504863506]
We present three data challenges common in real-world applications: between-class imbalance, within-class imbalance, and between-class similarity.
We find that random sampling does not mitigate confirmation bias and, in some cases, leads to worse performance than supervised learning.
Our results provide insights into the potential of combining active and semi-supervised learning in the presence of common real-world challenges.
arXiv Detail & Related papers (2023-08-16T08:52:49Z) - Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated
Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - FAL-CUR: Fair Active Learning using Uncertainty and Representativeness
on Fair Clustering [16.808400593594435]
We propose a novel strategy, named Fair Active Learning using fair Clustering, Uncertainty, and Representativeness (FAL-CUR).
FAL-CUR achieves a 15% - 20% improvement in fairness compared to the best state-of-the-art method in terms of equalized odds.
An ablation study highlights the crucial roles of fair clustering in preserving fairness and the acquisition function in stabilizing the accuracy performance.
arXiv Detail & Related papers (2022-09-21T08:28:43Z) - Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) strategies are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z) - DEALIO: Data-Efficient Adversarial Learning for Imitation from
Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z) - Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.