Fairness and robustness in anti-causal prediction
- URL: http://arxiv.org/abs/2209.09423v2
- Date: Tue, 12 Sep 2023 14:46:24 GMT
- Title: Fairness and robustness in anti-causal prediction
- Authors: Maggie Makar, Alexander D'Amour
- Abstract summary: Robustness to distribution shift and fairness have independently emerged as two important desiderata required of machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
By taking an anti-causal perspective, we draw explicit connections between a common fairness criterion - separation - and a common notion of robustness - risk invariance.
- Score: 73.693135253335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robustness to distribution shift and fairness have independently emerged as
two important desiderata required of modern machine learning models. While
these two desiderata seem related, the connection between them is often unclear
in practice. Here, we discuss these connections through a causal lens, focusing
on anti-causal prediction tasks, where the input to a classifier (e.g., an
image) is assumed to be generated as a function of the target label and the
protected attribute. By taking this perspective, we draw explicit connections
between a common fairness criterion - separation - and a common notion of
robustness - risk invariance. These connections provide new motivation for
applying the separation criterion in anti-causal settings, and inform old
discussions regarding fairness-performance tradeoffs. In addition, our findings
suggest that robustness-motivated approaches can be used to enforce separation,
and that they often work better in practice than methods designed to directly
enforce separation. Using a medical dataset, we empirically validate our
findings on the task of detecting pneumonia from X-rays, in a setting where
differences in prevalence across sex groups motivate a fairness mitigation.
Our findings highlight the importance of considering causal structure when
choosing and enforcing fairness criteria.
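To make the connection concrete, here is the setup in notation reconstructed from the abstract (the symbols are mine, not quoted from the paper). With Y the target label, A the protected attribute, X the input, and \hat{Y} = f(X) the prediction, the anti-causal assumption is that the input is generated from the label and the attribute:

    X = g(Y, A, \varepsilon), \qquad \varepsilon \perp (Y, A).

Separation, the fairness criterion named above, requires

    \hat{Y} \perp A \mid Y,

while risk invariance requires the expected loss E_P[\ell(f(X), Y)] to be constant across test distributions P that preserve P(X \mid Y, A) but shift the label-attribute dependence P(A \mid Y) (e.g., the prevalence of pneumonia within each sex group).

As a hedged illustration of the claim that robustness-motivated approaches can enforce separation, the following PyTorch sketch adds a distribution-matching (MMD) penalty that pulls group-conditional representations together within each label. It is a simplified construction, not the authors' released code, and the names rbf_mmd2 and penalized_loss are hypothetical.

import torch
import torch.nn.functional as F

def rbf_mmd2(x, z, sigma=1.0):
    # Squared maximum mean discrepancy between samples x and z
    # under an RBF kernel with bandwidth sigma.
    def k(u, v):
        return torch.exp(-torch.cdist(u, v).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(z, z).mean() - 2.0 * k(x, z).mean()

def penalized_loss(logits, feats, y, a, lam=1.0):
    # Standard classification loss on the logits ...
    ce = F.binary_cross_entropy_with_logits(logits, y.float())
    # ... plus an MMD term that, within each label group, matches the
    # representations of the A=0 and A=1 subgroups. Conditioning on Y
    # is what targets separation (Yhat independent of A given Y).
    mmd = logits.new_zeros(())
    for label in (0, 1):
        g0 = feats[(y == label) & (a == 0)]
        g1 = feats[(y == label) & (a == 1)]
        if len(g0) > 1 and len(g1) > 1:
            mmd = mmd + rbf_mmd2(g0, g1)
    return ce + lam * mmd

Here feats would be the classifier's penultimate-layer representation and lam trades off in-distribution accuracy against invariance; the estimator and weighting used in the paper may differ.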
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases model robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
arXiv Detail & Related papers (2022-11-21T19:55:35Z)
- Extending Momentum Contrast with Cross Similarity Consistency Regularization [5.085461418671174]
We present Extended Momentum Contrast, a self-supervised representation learning method built on the momentum encoder introduced in the MoCo family of methods.
Under the cross similarity consistency regularization rule, we argue that the semantic representations of any pair of images (positive or negative) should preserve their cross-similarity.
We report competitive performance on the standard ImageNet-1K linear-head classification benchmark.
arXiv Detail & Related papers (2022-06-07T20:06:56Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Attributing Fair Decisions with Attention Interventions [28.968122909973975]
We design an attention-based model that can be leveraged as an attribution framework.
It can identify features responsible for both performance and fairness of the model through attention interventions and attention weight manipulation.
We then design a post-processing bias mitigation strategy and compare it with a suite of baselines.
arXiv Detail & Related papers (2021-09-08T22:28:44Z)
- Concurrent Discrimination and Alignment for Self-Supervised Feature Learning [52.213140525321165]
Existing self-supervised learning methods rely on pretext tasks that are either (1) discriminative, explicitly specifying which features should be pushed apart, or (2) aligning, precisely indicating which features should be drawn close together.
In this work, we combine the strengths of the discriminative and aligning approaches in a single hybrid method.
Our method specifies the repulsion mechanism through a discriminative predictive task and the attraction mechanism by concurrently maximizing mutual information between paired views.
Our experiments on nine established benchmarks show that the proposed model consistently outperforms the existing state-of-the-art results of self-supervised and transfer learning methods.
arXiv Detail & Related papers (2021-08-19T09:07:41Z)
- Adversarial Robustness through the Lens of Causality [105.51753064807014]
The adversarial vulnerability of deep neural networks has attracted significant attention in machine learning.
We propose to incorporate causality into mitigating adversarial vulnerability.
Our method can be seen as the first attempt to leverage causality for mitigating adversarial vulnerability.
arXiv Detail & Related papers (2021-06-11T06:55:02Z)
- Adversarial Robustness with Non-uniform Perturbations [3.804240190982695]
Prior work mainly focuses on crafting adversarial examples with small, uniform, norm-bounded perturbations across features in order to maintain imperceptibility.
Our approach can be adapted to other domains where non-uniform perturbations more accurately represent realistic adversarial examples.
arXiv Detail & Related papers (2021-02-24T00:54:43Z)
- On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z)