Causal Fair Metric: Bridging Causality, Individual Fairness, and
Adversarial Robustness
- URL: http://arxiv.org/abs/2310.19391v2
- Date: Tue, 6 Feb 2024 10:25:37 GMT
- Title: Causal Fair Metric: Bridging Causality, Individual Fairness, and
Adversarial Robustness
- Authors: Ahmad-Reza Ehyaei, Golnoosh Farnadi, Samira Samadi
- Abstract summary: Adversarial perturbation, used to identify vulnerabilities in models, and individual fairness, aiming for equitable treatment of similar individuals, both depend on metrics to generate comparable input data instances.
Previous attempts to define such joint metrics often lack general assumptions about data or structural causal models and fail to reflect counterfactual proximity.
This paper introduces a causal fair metric formulated based on causal structures encompassing sensitive attributes and protected causal perturbation.
- Score: 7.246701762489971
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the essential need for comprehensive considerations in responsible
AI, factors like robustness, fairness, and causality are often studied in
isolation. Adversarial perturbation, used to identify vulnerabilities in
models, and individual fairness, aiming for equitable treatment of individuals
who are similar despite initial differences, both depend on metrics to generate
comparable input data instances. Previous attempts to define such joint metrics
often lack general assumptions about data or structural causal models and fail
to reflect counterfactual proximity. To address this, our paper
introduces a causal fair metric formulated based on causal structures
encompassing sensitive attributes and protected causal perturbation. To enhance
the practicality of our metric, we propose metric learning as a method for
metric estimation and deployment in real-world problems in the absence of
structural causal models. We also demonstrate the application of our novel
metric in classifiers. Empirical evaluation of real-world and synthetic
datasets illustrates the effectiveness of our proposed metric in achieving an
accurate classifier with fairness, resilience to adversarial perturbations, and
a nuanced understanding of causal relationships.
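The abstract's core idea of a metric under which protected (counterfactual) variations leave distance unchanged can be illustrated with a small sketch. This is our own Mahalanobis-style construction, not the paper's formulation: it simply projects out a protected direction before measuring distance, so that two individuals who differ only along that direction are treated as identical.

```python
import numpy as np

# Hypothetical illustration (construction and names are ours, not the paper's):
# a fair metric that ignores displacement along protected (sensitive) directions,
# so counterfactual twins are assigned distance zero.

def fair_metric(x, y, sensitive_dirs):
    """Distance that projects out the protected subspace before comparing."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    # Orthonormalize the protected directions and remove their components.
    Q, _ = np.linalg.qr(np.atleast_2d(sensitive_dirs).T)
    diff = diff - Q @ (Q.T @ diff)
    return float(np.linalg.norm(diff))

# Two individuals differing only in the protected feature are distance 0.
a = np.array([1.0, 2.0, 0.0])
b = np.array([1.0, 2.0, 5.0])  # differs only in feature 2 (protected)
print(fair_metric(a, b, sensitive_dirs=np.array([0.0, 0.0, 1.0])))  # → 0.0
```

In the paper's setting the protected directions would come from the structural causal model (or be learned via metric learning when no SCM is available); here they are supplied by hand.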
Related papers
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims; it is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- Causal Adversarial Perturbations for Individual Fairness and Robustness
in Heterogeneous Data Spaces [9.945881685938602]
We propose a novel approach that examines the relationship between individual fairness, adversarial robustness, and structural causal models in heterogeneous data spaces.
By introducing a novel causal adversarial perturbation and applying adversarial training, we create a new regularizer that combines individual fairness, causality, and robustness in the classifier.
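The idea of a regularizer built from causal adversarial perturbations can be sketched as follows. This is a hedged, simplified construction of ours, not the paper's method: perturbations of the exogenous noise are pushed through an assumed linear causal mixing, and the classifier is penalized for changing its prediction under them.

```python
import numpy as np

# Hedged sketch (our own construction): penalize prediction changes under
# perturbations of exogenous noise u propagated through an assumed linear
# SCM, x' = x + A @ u, where A encodes the causal mixing.

def predict(w, x):
    """Logistic classifier probability."""
    return 1.0 / (1.0 + np.exp(-x @ w))

def causal_consistency_penalty(w, X, A, eps=0.1, n_dirs=8, seed=0):
    """Average squared prediction change over random causal perturbations."""
    rng = np.random.default_rng(seed)
    pen = 0.0
    for _ in range(n_dirs):
        u = rng.normal(size=A.shape[1])
        u = eps * u / np.linalg.norm(u)  # perturbation of the exogenous noise
        Xp = X + u @ A.T                 # push it through the causal mixing
        pen += np.mean((predict(w, Xp) - predict(w, X)) ** 2)
    return pen / n_dirs
```

Adding such a penalty to the training loss is one way to combine the fairness, causality, and robustness objectives the entry describes; the actual paper uses adversarial training rather than random directions.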
arXiv Detail & Related papers (2023-08-17T12:16:48Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
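At the core of quantile-regression-based counterfactual inference is the pinball loss. The sketch below shows only that loss, with the counterfactual recipe summarized in comments; it is illustrative and not the paper's neural-network procedure.

```python
import numpy as np

# Hedged sketch: the pinball (quantile) loss underlying quantile regression.
# A counterfactual recipe in this spirit: fit conditional quantiles q_tau(x),
# find the tau* whose quantile matches the observed factual outcome, then
# read off q_tau*(x') under the counterfactual input x'.

def pinball_loss(y, q_pred, tau):
    """Averaged pinball loss; minimized when q_pred is the tau-quantile of y."""
    e = np.asarray(y, float) - q_pred
    return float(np.mean(np.maximum(tau * e, (tau - 1.0) * e)))
```

For tau = 0.5 the loss is minimized at the median, which is the standard sanity check for a quantile loss implementation.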
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
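A prediction-sensitivity score of this kind can be sketched numerically. The paper's ACCUMULATED PREDICTION SENSITIVITY is defined for text classifiers; this finite-difference version for numeric features only illustrates the idea of accumulating the model's sensitivity to protected inputs over a dataset.

```python
import numpy as np

# Hedged sketch: average finite-difference sensitivity of the model output
# to protected input features, accumulated over a dataset. Names and the
# numeric setting are illustrative, not the paper's definition.

def prediction_sensitivity(model, X, protected_idx, h=1e-4):
    """Mean |Δf/Δx_j| over the data, averaged over protected indices j."""
    total = 0.0
    for j in protected_idx:
        Xp = X.copy()
        Xp[:, j] += h
        total += np.mean(np.abs(model(Xp) - model(X)) / h)
    return total / len(protected_idx)

# A linear model's sensitivity to feature j is simply |w_j|:
model = lambda X: X @ np.array([1.0, 2.0])
X = np.zeros((5, 2))
print(prediction_sensitivity(model, X, [1]))  # → 2.0
```

A lower score on protected features indicates that perturbing them barely moves the prediction, which is the intuition linking the metric to statistical parity and individual fairness.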
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards
Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z)
- On Causally Disentangled Representations [18.122893077772993]
We present an analysis of disentangled representations through the notion of disentangled causal process.
We show that our metrics capture the desiderata of disentangled causal process.
We perform an empirical study of state-of-the-art disentangled representation learners, using our metrics and dataset to evaluate them from a causal perspective.
arXiv Detail & Related papers (2021-12-10T18:56:27Z)
- Enhancing Model Robustness and Fairness with Causality: A Regularization
Approach [15.981724441808147]
Recent work has raised concerns on the risk of spurious correlations and unintended biases in machine learning models.
We propose a simple and intuitive regularization approach to integrate causal knowledge during model training.
We build a predictive model that relies more on causal features and less on non-causal features.
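A regularizer that shifts a model's reliance toward causal features can be sketched in a few lines. This is our own construction in the spirit of the abstract, not the authors' formulation: an L2 penalty applied only to weights on features flagged as non-causal.

```python
import numpy as np

# Hedged sketch (illustrative, not the paper's method): squared-error loss
# plus an L2 penalty restricted to non-causal features, so minimizing it
# favors weight on causal features.

def causal_ridge_loss(w, X, y, noncausal_mask, lam=1.0):
    """MSE plus a ridge penalty on non-causal weights only."""
    resid = X @ w - y
    return float(np.mean(resid ** 2) + lam * np.sum((w * noncausal_mask) ** 2))
```

With `noncausal_mask` all zeros this reduces to plain least squares; marking a feature as non-causal makes any weight on it costly, which is the regularization idea the entry describes.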
arXiv Detail & Related papers (2021-10-03T02:49:33Z)
- Understanding Factuality in Abstractive Summarization with FRANK: A
Benchmark for Factuality Metrics [17.677637487977208]
Modern summarization models generate highly fluent but often factually unreliable outputs.
Due to the lack of common benchmarks, metrics attempting to measure the factuality of automatically generated summaries cannot be compared.
We devise a typology of factual errors and use it to collect human annotations of generated summaries from state-of-the-art summarization systems.
arXiv Detail & Related papers (2021-04-27T17:28:07Z)
- On Disentangled Representations Learned From Correlated Data [59.41587388303554]
We bridge the gap to real-world scenarios by analyzing the behavior of the most prominent disentanglement approaches on correlated data.
We show that systematically induced correlations in the dataset are being learned and reflected in the latent representations.
We also demonstrate how to resolve these latent correlations, either using weak supervision during training or by post-hoc correcting a pre-trained model with a small number of labels.
arXiv Detail & Related papers (2020-06-14T12:47:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.