Fair Visual Recognition via Intervention with Proxy Features
- URL: http://arxiv.org/abs/2211.01253v1
- Date: Wed, 2 Nov 2022 16:33:49 GMT
- Title: Fair Visual Recognition via Intervention with Proxy Features
- Authors: Yi Zhang, Jitao Sang, Junyang Wang
- Abstract summary: Existing work minimizes information about social attributes in models for debiasing.
However, the high correlation between the target task and social attributes makes bias mitigation incompatible with target-task accuracy.
We propose *Proxy Debiasing*, which first transfers the target task's learning of bias information from bias features to artificial proxy features, and then employs causal intervention to eliminate the proxy features at inference.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models often learn to make predictions that rely on sensitive
social attributes like gender and race, which poses significant fairness risks,
especially in societal applications such as hiring, banking, and criminal
justice. Existing work tackles this issue by minimizing the information about
social attributes in models. However, the high correlation between the target
task and social attributes makes bias mitigation incompatible with target-task
accuracy. Recalling that model bias arises because learning features related to
bias attributes (i.e., bias features) helps target-task optimization, we
explore the following research question: *Can we leverage proxy features to
replace the role of bias features in target-task optimization for debiasing?*
To this end, we propose *Proxy Debiasing*, which first transfers the target
task's learning of bias information from bias features to artificial proxy
features, and then employs causal intervention to eliminate the proxy features
at inference. The key idea of *Proxy Debiasing* is to design controllable proxy
features that, on one hand, replace bias features in contributing to the target
task during training and, on the other hand, are easily removed by intervention
during inference. This guarantees the elimination of bias features without
affecting the target information, thus addressing the fairness-accuracy paradox
of previous debiasing solutions. We apply *Proxy Debiasing* to several
benchmark datasets and achieve significant improvements over state-of-the-art
debiasing methods in both accuracy and fairness.
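The mechanism can be pictured with a short sketch. The following is a minimal, hypothetical PyTorch illustration of the idea described in the abstract, not the authors' implementation: during training the classifier receives an artificial proxy feature built from the bias label, so the backbone has no incentive to encode bias information itself; at inference the proxy is fixed to a neutral constant (a causal intervention), cutting off the bias pathway. All names (`ProxyDebiasModel`, `proxy_dim`, etc.) are illustrative.

```python
import torch
import torch.nn as nn

class ProxyDebiasModel(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int,
                 num_bias_groups: int, proxy_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone  # extracts target-task features from images
        # Controllable proxy features: one learnable vector per bias group
        # (e.g., per gender), injected only during training.
        self.proxy = nn.Embedding(num_bias_groups, proxy_dim)
        self.classifier = nn.Linear(feat_dim + proxy_dim, num_classes)

    def forward(self, x, bias_label=None):
        feat = self.backbone(x)
        if bias_label is not None:
            # Training: bias information reaches the classifier via the proxy,
            # so optimizing the target task need not push it into `feat`.
            p = self.proxy(bias_label)
        else:
            # Inference: causal intervention -- replace the proxy with a fixed
            # neutral value (zeros here), i.e. do(proxy = constant).
            p = torch.zeros(feat.size(0), self.proxy.embedding_dim,
                            device=feat.device)
        return self.classifier(torch.cat([feat, p], dim=1))
```

Any auxiliary losses the paper may use to encourage bias information to flow through the proxy rather than the backbone are omitted from this sketch.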
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines in debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Benign Shortcut for Debiasing: Fair Visual Recognition via Intervention with Shortcut Features [47.01860331227165]
We propose *Shortcut Debiasing*, which first transfers the target task's learning of bias attributes from bias features to shortcut features.
We achieve significant improvements over the state-of-the-art debiasing methods in both accuracy and fairness.
arXiv Detail & Related papers (2023-08-13T00:40:22Z)
- Model Debiasing via Gradient-based Explanation on Representation [14.673988027271388]
We propose a novel fairness framework that performs debiasing with regard to sensitive attributes and proxy attributes.
Our framework achieves better fairness-accuracy trade-off on unstructured and structured datasets than previous state-of-the-art approaches.
arXiv Detail & Related papers (2023-05-20T11:57:57Z)
- Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples (a reweighting sketch in this spirit appears after this list).
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
arXiv Detail & Related papers (2023-02-22T14:50:24Z)
- Controlling Bias Exposure for Fair Interpretable Predictions [11.364105288235308]
We argue that a favorable debiasing method should use sensitive information 'fairly' rather than blindly eliminating it.
Our model achieves a desirable trade-off between debiasing and task performance along with producing debiased rationales as evidence.
arXiv Detail & Related papers (2022-10-14T01:49:01Z)
- Self-supervised debiasing using low rank regularization [59.84695042540525]
Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
arXiv Detail & Related papers (2022-10-11T08:26:19Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator (a minimal sketch of such an estimator appears after this list).
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2021-08-11T21:17:02Z)
- Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing [15.689539491203373]
Machine learning fairness concerns biases towards certain protected or sensitive groups of people when addressing target tasks.
This paper studies the debiasing problem in the context of image classification tasks.
arXiv Detail & Related papers (2020-07-27T15:17:52Z)
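Two of the techniques listed above are concrete enough to sketch. First, for the gradient-alignment (GA) idea in the Identify-Emphasize paper, here is a hedged reweighting sketch under the assumption that GA balances the loss (and hence gradient) contributions of bias-aligned and bias-conflicting samples; the paper's exact weighting scheme may differ, and the names `per_sample_loss` and `is_conflicting` are illustrative.

```python
import torch

def group_balanced_loss(per_sample_loss: torch.Tensor,
                        is_conflicting: torch.Tensor) -> torch.Tensor:
    """Average the losses of bias-conflicting and bias-aligned samples
    separately, then average the two group means, so each group contributes
    equally to the gradient regardless of how many samples it has."""
    groups = []
    for mask in (is_conflicting, ~is_conflicting):
        if mask.any():  # skip a group absent from this batch
            groups.append(per_sample_loss[mask].mean())
    return torch.stack(groups).mean()
```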
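Second, for the cross-sample mutual information estimator used by CSAD, a minimal MINE-style (Donsker-Varadhan) neural estimator looks roughly as follows; the paper's estimator may differ in architecture and in how negative pairs are formed, and all names here are illustrative.

```python
import math
import torch
import torch.nn as nn

class MIEstimator(nn.Module):
    """Neural lower bound on I(a; b) in the Donsker-Varadhan / MINE style."""
    def __init__(self, dim_a: int, dim_b: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_a + dim_b, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Joint term: score correctly paired samples.
        joint = self.net(torch.cat([a, b], dim=1)).mean()
        # Marginal term: shuffle b across the batch ("cross-sample" pairing)
        # to approximate samples from the product of marginals.
        b_shuffled = b[torch.randperm(b.size(0))]
        scores = self.net(torch.cat([a, b_shuffled], dim=1)).squeeze(1)
        marginal = torch.logsumexp(scores, dim=0) - math.log(b.size(0))
        return joint - marginal  # maximize to tighten the bound
```

In an adversarial debiasing loop, the estimator would be trained to maximize this bound while the representation is trained to minimize it.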