Causal Adversarial Perturbations for Individual Fairness and Robustness
in Heterogeneous Data Spaces
- URL: http://arxiv.org/abs/2308.08938v1
- Date: Thu, 17 Aug 2023 12:16:48 GMT
- Title: Causal Adversarial Perturbations for Individual Fairness and Robustness
in Heterogeneous Data Spaces
- Authors: Ahmad-Reza Ehyaei, Kiarash Mohammadi, Amir-Hossein Karimi, Samira
Samadi, Golnoosh Farnadi
- Abstract summary: We propose a novel approach that examines the relationship between individual fairness, adversarial robustness, and structural causal models in heterogeneous data spaces.
By introducing a novel causal adversarial perturbation and applying adversarial training, we create a new regularizer that combines individual fairness, causality, and robustness in the classifier.
- Score: 9.945881685938602
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As responsible AI gains importance in machine learning algorithms, properties
such as fairness, adversarial robustness, and causality have received
considerable attention in recent years. However, despite their individual
significance, there remains a critical gap in simultaneously exploring and
integrating these properties. In this paper, we propose a novel approach that
examines the relationship between individual fairness, adversarial robustness,
and structural causal models in heterogeneous data spaces, particularly when
dealing with discrete sensitive attributes. We use causal structural models and
sensitive attributes to create a fair metric and apply it to measure semantic
similarity among individuals. By introducing a novel causal adversarial
perturbation and applying adversarial training, we create a new regularizer
that combines individual fairness, causality, and robustness in the classifier.
Our method is evaluated on both real-world and synthetic datasets,
demonstrating its effectiveness in achieving an accurate classifier that
simultaneously exhibits fairness, adversarial robustness, and causal awareness.
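Concretely, the training objective can be sketched as follows. This is a minimal PyTorch-style illustration, not the authors' released code: the causal fair metric is approximated here by a feature-weighted L2 norm, with `fair_weights` a hypothetical stand-in for the metric induced by the structural causal model, and worst-case perturbations inside the fair-metric ball are found by projected gradient ascent.

```python
import torch
import torch.nn.functional as F

def causal_adversarial_loss(model, x, y, fair_weights, eps=0.5,
                            steps=10, step_size=0.1, lam=1.0):
    """Sketch of an adversarial-training regularizer over a fair-metric ball.

    `fair_weights` stands in for the metric induced by the structural causal
    model: directions the SCM deems semantically irrelevant (e.g. those
    reachable by intervening on sensitive attributes) are cheap to perturb,
    so the classifier is pushed to be invariant along them.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    target = F.softmax(model(x), dim=-1).detach()
    for _ in range(steps):
        gap = F.kl_div(F.log_softmax(model(x + delta), dim=-1),
                       target, reduction="batchmean")
        grad, = torch.autograd.grad(gap, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()
            # Rescale back into the ball ||fair_weights * delta||_2 <= eps.
            norm = (fair_weights * delta).norm(dim=-1, keepdim=True)
            delta *= eps / norm.clamp(min=eps)
    x_adv = (x + delta).detach()
    clean = F.cross_entropy(model(x), y)
    reg = F.kl_div(F.log_softmax(model(x_adv), dim=-1),
                   target, reduction="batchmean")
    return clean + lam * reg
```

A classifier trained with this loss is pushed to give near-identical outputs to any pair of inputs the (approximate) fair metric deems similar, which is how individual fairness and adversarial robustness collapse into a single regularizer.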
Related papers
- Causal Fair Metric: Bridging Causality, Individual Fairness, and Adversarial Robustness [7.246701762489971]
Adversarial perturbation, used to identify vulnerabilities in models, and individual fairness, aiming for equitable treatment of similar individuals, both depend on metrics to generate comparable input data instances.
Previous attempts to define such joint metrics often lack general assumptions about the data or the structural causal model and are unable to reflect counterfactual proximity.
This paper introduces a causal fair metric formulated based on causal structures encompassing sensitive attributes and protected causal perturbation.
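As a rough illustration of such a metric (a sketch under a hypothetical linear additive SCM, not the paper's construction), one can compare individuals in the exogenous space after abduction, discarding the sensitive attribute's contribution so that counterfactual twins land at distance zero:

```python
import numpy as np

# Hypothetical linear additive SCM: x = beta * a + u, with a binary
# sensitive attribute a and exogenous noise u (the semantic content).
beta = np.array([1.5, -0.7, 0.0])

def abduct(x, a):
    """Abduction: recover the exogenous term u from an observation."""
    return x - beta * a

def causal_fair_distance(x1, a1, x2, a2):
    """Distance in exogenous space, insensitive to the sensitive attribute."""
    return np.linalg.norm(abduct(x1, a1) - abduct(x2, a2))

u = np.array([0.2, 1.0, -0.3])
x_factual = beta * 1 + u   # individual with a = 1
x_twin = beta * 0 + u      # its counterfactual twin with a = 0
print(causal_fair_distance(x_factual, 1, x_twin, 0))  # -> 0.0
```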
arXiv Detail & Related papers (2023-10-30T09:53:42Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases the model robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
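One way to observe the dichotomy empirically (a sketch with hypothetical names, using one-step FGSM for brevity) is to measure adversarial accuracy separately for each protected group:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM attack, used here only as a cheap robustness probe."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def robust_accuracy_by_group(model, x, y, group, eps=0.1):
    """Adversarial accuracy per protected group; a persistent gap between
    groups is one concrete symptom of the fairness/robustness tension."""
    pred = model(fgsm(model, x, y, eps)).argmax(dim=-1)
    return {int(g): (pred[group == g] == y[group == g]).float().mean().item()
            for g in torch.unique(group)}
```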
arXiv Detail & Related papers (2022-11-21T19:55:35Z)
- Fairness and robustness in anti-causal prediction [73.693135253335]
Robustness to distribution shift and fairness have independently emerged as two important desiderata required of machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
By taking an anti-causal perspective, in which the label is a cause of the features, we draw explicit connections between a common fairness criterion - separation - and a common notion of robustness.
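For binary classifiers, separation reduces to equal true-positive and false-positive rates across groups (equalized odds); a small NumPy check of the gaps:

```python
import numpy as np

def separation_gaps(y_true, y_pred, a):
    """Separation: the prediction must be independent of the sensitive
    attribute A given the true label Y. For hard binary predictions this
    means equal TPR and FPR across groups; we return the largest gaps."""
    gaps = {}
    for y_val, name in [(1, "tpr_gap"), (0, "fpr_gap")]:
        rates = [y_pred[(y_true == y_val) & (a == g)].mean()
                 for g in np.unique(a)]
        gaps[name] = float(np.max(rates) - np.min(rates))
    return gaps
```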
arXiv Detail & Related papers (2022-09-20T02:41:17Z)
- Enhancing Model Robustness and Fairness with Causality: A Regularization Approach [15.981724441808147]
Recent work has raised concerns on the risk of spurious correlations and unintended biases in machine learning models.
We propose a simple and intuitive regularization approach to integrate causal knowledge during model training.
We build a predictive model that relies more on causal features and less on non-causal features.
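One simple way to realize such a regularizer (a sketch under our own assumptions, not necessarily the paper's exact formulation) is an input-gradient penalty on the features that causal knowledge marks as non-causal:

```python
import torch
import torch.nn.functional as F

def causally_regularized_loss(model, x, y, noncausal_mask, lam=1.0):
    """Task loss plus a penalty on the model's sensitivity to features
    flagged as non-causal (spurious); `noncausal_mask` is a hypothetical
    0/1 vector encoding that causal knowledge."""
    x = x.clone().requires_grad_(True)
    task = F.cross_entropy(model(x), y)
    grads, = torch.autograd.grad(task, x, create_graph=True)
    penalty = (grads * noncausal_mask).pow(2).sum(dim=-1).mean()
    return task + lam * penalty
```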
arXiv Detail & Related papers (2021-10-03T02:49:33Z)
- Adversarial Robustness through the Lens of Causality [105.51753064807014]
The adversarial vulnerability of deep neural networks has attracted significant attention in machine learning.
We propose to incorporate causality into mitigating adversarial vulnerability.
Our method can be seen as the first attempt to leverage causality for mitigating adversarial vulnerability.
arXiv Detail & Related papers (2021-06-11T06:55:02Z)
- Balancing Robustness and Sensitivity using Feature Contrastive Learning [95.86909855412601]
Methods that promote robustness can hurt the model's sensitivity to rare or underrepresented patterns.
We propose Feature Contrastive Learning (FCL) that encourages a model to be more sensitive to the features that have higher contextual utility.
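Loosely (our own simplification, not the paper's exact objective), one can ask the model to be measurably more sensitive to high-utility features than to low-utility ones, with sensitivity read off the input gradients; `hi_util` and `lo_util` are hypothetical 0/1 masks:

```python
import torch
import torch.nn.functional as F

def feature_sensitivity_loss(model, x, y, hi_util, lo_util, margin=0.1):
    """Task loss plus a margin term demanding higher average gradient
    magnitude on high-utility features than on low-utility ones."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grads, = torch.autograd.grad(loss, x, create_graph=True)
    sens = grads.abs()
    s_hi = (sens * hi_util).sum(dim=-1) / hi_util.sum()
    s_lo = (sens * lo_util).sum(dim=-1) / lo_util.sum()
    return loss + F.relu(s_lo - s_hi + margin).mean()
```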
arXiv Detail & Related papers (2021-05-19T20:53:02Z)
- FAIR: Fair Adversarial Instance Re-weighting [0.7829352305480285]
We propose a Fair Adversarial Instance Re-weighting (FAIR) method, which uses adversarial training to learn an instance weighting function that ensures fair predictions.
To the best of our knowledge, this is the first model that merges reweighting and adversarial approaches by means of a weighting function that can provide interpretable information about the fairness of individual instances.
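A schematic of the re-weighting game (our rendering with hypothetical modules, not the paper's exact formulation): a weighting network scores each instance, and an adversary tries to recover the sensitive attribute from the classifier's output on the weighted data.

```python
import torch
import torch.nn.functional as F

def fair_reweighting_step(clf, weight_net, adversary, opt_main, opt_adv,
                          x, y, a, lam=1.0):
    """One update of the game: the adversary minimizes its loss at
    predicting the sensitive attribute a; the classifier and weighting
    function minimize the weighted task loss while defeating the adversary."""
    logits = clf(x)
    w = torch.sigmoid(weight_net(x)).squeeze(-1)   # instance weights
    w = w / w.mean()                               # keep the scale stable
    adv_loss = F.cross_entropy(adversary(torch.softmax(logits, -1)), a,
                               reduction="none")

    # Adversary step (weights treated as constants here).
    opt_adv.zero_grad()
    (w.detach() * adv_loss).mean().backward(retain_graph=True)
    opt_adv.step()

    # Classifier + weighting step: weighted task loss minus the
    # adversary's success.
    task = F.cross_entropy(logits, y, reduction="none")
    opt_main.zero_grad()
    ((w * task).mean() - lam * (w * adv_loss).mean()).backward()
    opt_main.step()
```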
arXiv Detail & Related papers (2020-11-15T10:48:56Z)
- Adversarial Learning for Counterfactual Fairness [15.302633901803526]
In recent years, fairness has become an important topic in the machine learning research community.
We propose to rely on an adversarial neural learning approach, which enables more powerful inference than MMD penalties.
Experiments show significant improvements in terms of counterfactual fairness for both the discrete and the continuous settings.
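A schematic of the adversarial substitute for an MMD penalty (our rendering; `x_cf` is assumed to be a counterfactual input produced by a causal model, and the module names are hypothetical):

```python
import torch
import torch.nn.functional as F

def counterfactual_adversarial_step(predictor, discriminator, opt_p, opt_d,
                                    x, y, x_cf, lam=1.0):
    """The discriminator tries to tell factual predictions from
    counterfactual ones; the predictor is trained to fool it, pushing the
    two prediction distributions together (the role MMD played before)."""
    p_f, p_cf = predictor(x), predictor(x_cf)

    # Discriminator update: factual outputs labeled 1, counterfactual 0.
    d_in = torch.cat([p_f.detach(), p_cf.detach()])
    d_y = torch.cat([torch.ones(len(p_f)), torch.zeros(len(p_cf))])
    opt_d.zero_grad()
    F.binary_cross_entropy_with_logits(
        discriminator(d_in).squeeze(-1), d_y).backward()
    opt_d.step()

    # Predictor update: task loss plus fooling the discriminator
    # (labels flipped so the two distributions become indistinguishable).
    fool = F.binary_cross_entropy_with_logits(
        discriminator(torch.cat([p_f, p_cf])).squeeze(-1),
        torch.cat([torch.zeros(len(p_f)), torch.ones(len(p_cf))]))
    opt_p.zero_grad()
    (F.cross_entropy(p_f, y) + lam * fool).backward()
    opt_p.step()
```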
arXiv Detail & Related papers (2020-08-30T09:06:03Z)
- Precise Tradeoffs in Adversarial Training for Linear Regression [55.764306209771405]
We provide a precise and comprehensive understanding of the role of adversarial training in the context of linear regression with Gaussian features.
We precisely characterize the standard/robust accuracy and the corresponding tradeoff achieved by a contemporary mini-max adversarial training approach.
Our theory for adversarial training algorithms also facilitates the rigorous study of how a variety of factors (size and quality of training data, model overparametrization etc.) affect the tradeoff between these two competing accuracies.
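For l2-bounded perturbations the worst case has a closed form, max over ||d|| <= eps of (y - w.(x + d))^2 = (|y - w.x| + eps * ||w||)^2, which makes the tradeoff easy to reproduce numerically (our own illustration, not the paper's code):

```python
import numpy as np

def adversarial_sq_loss(w, X, y, eps):
    """Mean worst-case squared loss under l2 perturbations of radius eps."""
    return np.mean((np.abs(y - X @ w) + eps * np.linalg.norm(w)) ** 2)

def grad(w, X, y, eps):
    """Gradient of the closed-form adversarial objective."""
    r = X @ w - y                          # residuals
    wn = np.linalg.norm(w) + 1e-12
    m = np.abs(r) + eps * wn               # worst-case residual magnitude
    return 2 * (X.T @ (m * np.sign(r)) / len(y) + eps * m.mean() * w / wn)

# Larger eps buys robust accuracy at the expense of standard accuracy,
# the tradeoff the paper characterizes exactly for Gaussian features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
w = np.zeros(5)
for _ in range(500):
    w -= 0.05 * grad(w, X, y, eps=0.3)
print(adversarial_sq_loss(w, X, y, eps=0.3))
```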
arXiv Detail & Related papers (2020-02-24T19:01:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.