Fairness-aware Regression Robust to Adversarial Attacks
- URL: http://arxiv.org/abs/2211.04449v1
- Date: Fri, 4 Nov 2022 18:09:34 GMT
- Title: Fairness-aware Regression Robust to Adversarial Attacks
- Authors: Yulu Jin and Lifeng Lai
- Abstract summary: We take a first step towards answering the question of how to design fair machine learning algorithms that are robust to adversarial attacks.
For both synthetic data and real-world datasets, numerical results illustrate that the proposed adversarially robust models have better performance on poisoned datasets than other fair machine learning models.
- Score: 46.01773881604595
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we take a first step towards answering the question of how to
design fair machine learning algorithms that are robust to adversarial attacks.
Using a minimax framework, we aim to design an adversarially robust fair
regression model that achieves optimal performance in the presence of an
attacker who is able to add a carefully designed adversarial data point to the
dataset or perform a rank-one attack on the dataset. By solving the proposed
nonsmooth nonconvex-nonconcave minimax problem, the optimal adversary as well
as the robust fairness-aware regression model are obtained. For both synthetic
data and real-world datasets, numerical results illustrate that the proposed
adversarially robust fair models have better performance on poisoned datasets
than other fair machine learning models in both prediction accuracy and
group-based fairness measures.
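The abstract does not state the objective explicitly, so the following is only a schematic rendering of the kind of minimax problem it describes; the loss L, group-based fairness penalty F, trade-off weight \lambda, feasible attack set \mathcal{A}, and budget \epsilon are illustrative notation rather than the paper's own.

```latex
% Schematic minimax objective (illustrative notation, not the paper's):
%   \beta            regression parameters
%   D = (X, y)       clean training data;  (x_a, y_a)  one injected adversarial point
%   \Delta           rank-one perturbation of the design matrix X
%   L                regression loss;  F  group-based fairness penalty;  \lambda \ge 0
% Point-injection attack:
\min_{\beta} \; \max_{(x_a, y_a) \in \mathcal{A}}
    \Big[ L\big(\beta;\, D \cup \{(x_a, y_a)\}\big)
        + \lambda \, F\big(\beta;\, D \cup \{(x_a, y_a)\}\big) \Big]
% Rank-one attack:
\min_{\beta} \; \max_{\operatorname{rank}(\Delta) = 1,\ \|\Delta\| \le \epsilon}
    \Big[ L\big(\beta;\, X + \Delta,\, y\big)
        + \lambda \, F\big(\beta;\, X + \Delta,\, y\big) \Big]
```

As the abstract notes, solving the inner maximization yields the optimal adversary, while the outer minimization yields the robust fairness-aware regression model.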
Related papers
- Provable Optimization for Adversarial Fair Self-supervised Contrastive Learning [49.417414031031264]
This paper studies learning fair encoders in a self-supervised learning setting.
All data are unlabeled and only a small portion of them are annotated with sensitive attributes.
arXiv Detail & Related papers (2024-06-09T08:11:12Z)
- Efficient Data-Free Model Stealing with Label Diversity [22.8804507954023]
Machine Learning as a Service (MLaaS) allows users to query a machine learning model through an API, giving them the benefits of high-performance models trained on valuable data.
This interface fuels the proliferation of machine-learning-based applications, but it also opens an attack surface for model stealing attacks.
Existing model stealing attacks have relaxed their assumptions to the data-free setting while remaining effective.
In this paper, we revisit the model stealing problem from a diversity perspective and demonstrate that keeping the generated data samples diverse across all classes is the critical point.
arXiv Detail & Related papers (2024-03-29T18:52:33Z)
- Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
The current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z)
- Membership Inference Attacks against Language Models via Neighbourhood Comparison [45.086816556309266]
Membership inference attacks (MIAs) aim to predict whether a data sample was present in the training data of a machine learning model.
Recent work has demonstrated that reference-based attacks which compare model scores to those obtained from a reference model trained on similar data can substantially improve the performance of MIAs.
We investigate their performance in more realistic scenarios and find that they are highly fragile with respect to the data distribution used to train reference models.
arXiv Detail & Related papers (2023-05-29T07:06:03Z)
- GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models [60.48306899271866]
We present a new framework, called GREAT Score, for global robustness evaluation of adversarial perturbation using generative models.
We show that GREAT Score correlates strongly with attack-based model rankings on RobustBench while costing significantly less to compute.
GREAT Score can be used for remote auditing of privacy-sensitive black-box models.
arXiv Detail & Related papers (2023-04-19T14:58:27Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Poisoning Attacks on Fair Machine Learning [13.874416271549523]
We present a framework that seeks to generate poisoning samples to attack both model accuracy and algorithmic fairness.
We develop three online attacks: adversarial sampling, adversarial labeling, and adversarial feature modification (a generic sketch of the adversarial-labeling idea appears after this list).
Our framework enables attackers to flexibly adjust the attack's focus between prediction accuracy and fairness, and to accurately quantify the impact of each candidate point on both accuracy loss and fairness violation.
arXiv Detail & Related papers (2021-10-17T21:56:14Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
- On Adversarial Bias and the Robustness of Fair Machine Learning [11.584571002297217]
We show that giving the same importance to groups of different sizes and distributions, to counteract the effect of bias in training data, can be in conflict with robustness.
An adversary who can control sampling or labeling for a fraction of the training data can reduce test accuracy significantly beyond what is achievable against unconstrained models.
We analyze the robustness of fair machine learning through an empirical evaluation of attacks on multiple algorithms and benchmark datasets.
arXiv Detail & Related papers (2020-06-15T18:17:44Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
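The "Poisoning Attacks on Fair Machine Learning" entry above names adversarial labeling as one of its online attacks. A minimal sketch of that general idea, assuming a logistic-regression victim, a demographic-parity gap as the fairness measure, and a greedy one-point search (none of these choices, nor the helper names demographic_parity_gap and adversarial_label_flip, are taken from the cited paper) might look like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def demographic_parity_gap(model, X, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    preds = model.predict(X)
    return abs(preds[group == 0].mean() - preds[group == 1].mean())

def adversarial_label_flip(X_train, y_train, X_val, y_val, group_val, alpha=1.0):
    """Greedy one-point adversarial labeling: flip the single training label whose
    flip most increases a weighted sum of validation error and fairness violation.
    Brute-force illustration only -- retrains the victim model once per candidate."""
    best_idx, best_score = None, -np.inf
    for i in range(len(y_train)):
        y_poisoned = y_train.copy()
        y_poisoned[i] = 1 - y_poisoned[i]                      # flip candidate binary label
        model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
        err = 1.0 - model.score(X_val, y_val)                  # damage to accuracy
        gap = demographic_parity_gap(model, X_val, group_val)  # damage to fairness
        score = err + alpha * gap                              # attacker's combined objective
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

A brute-force loop like this retrains once per candidate; the cited framework instead quantifies each candidate point's impact on accuracy loss and fairness violation directly.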
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.