Mining bias-target Alignment from Voronoi Cells
- URL: http://arxiv.org/abs/2305.03691v1
- Date: Fri, 5 May 2023 17:09:01 GMT
- Title: Mining bias-target Alignment from Voronoi Cells
- Authors: Rémi Nahon, Van-Tam Nguyen, and Enzo Tartaglione
- Abstract summary: We propose a bias-agnostic approach to mitigate the impact of bias in deep neural networks.
Unlike traditional debiasing approaches, we rely on a metric to quantify ``bias alignment/misalignment'' on target classes.
Our results indicate that the proposed method achieves comparable performance to state-of-the-art supervised approaches.
- Score: 2.66418345185993
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite significant research efforts, deep neural networks are still
vulnerable to biases: this raises concerns about their fairness and limits
their generalization. In this paper, we propose a bias-agnostic approach to
mitigate the impact of bias in deep neural networks. Unlike traditional
debiasing approaches, we rely on a metric to quantify ``bias
alignment/misalignment'' on target classes, and use this information to
discourage the propagation of bias-target alignment information through the
network. We conduct experiments on several commonly used datasets for debiasing
and compare our method to supervised and bias-specific approaches. Our results
indicate that the proposed method achieves comparable performance to
state-of-the-art supervised approaches, although it is bias-agnostic, even in
the presence of multiple biases in the same sample.
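To make the idea concrete, here is a minimal sketch of how such a bias-target alignment score could be computed, under our own assumptions rather than the authors' released code: latent features are assigned to Voronoi cells around a set of centroids standing in for the bias (for instance per-bias-class means or k-means centroids), and alignment is scored as the normalized mutual information between cell membership and target labels; the paper's exact metric may differ.

```python
import numpy as np

def voronoi_cells(features, centroids):
    """Assign each feature vector to the Voronoi cell of its nearest centroid."""
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def bias_target_alignment(cell_ids, targets, n_cells, n_classes):
    """Normalized mutual information between Voronoi-cell membership and targets.
    A value near 1 means the cells (a stand-in for the bias) predict the target well."""
    joint = np.zeros((n_cells, n_classes))
    for c, t in zip(cell_ids, targets):
        joint[c, t] += 1.0
    joint /= joint.sum()
    p_cell = joint.sum(axis=1, keepdims=True)   # P(cell)
    p_tgt = joint.sum(axis=0, keepdims=True)    # P(target)
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / (p_cell @ p_tgt)[nz]))
    h_cell = -np.sum(p_cell[p_cell > 0] * np.log(p_cell[p_cell > 0]))
    h_tgt = -np.sum(p_tgt[p_tgt > 0] * np.log(p_tgt[p_tgt > 0]))
    return mi / max(np.sqrt(h_cell * h_tgt), 1e-12)

# Toy usage: random stand-ins for penultimate-layer features and bias centroids.
rng = np.random.default_rng(0)
feats, cents = rng.normal(size=(512, 64)), rng.normal(size=(16, 64))
tgts = rng.integers(0, 5, size=512)
print(bias_target_alignment(voronoi_cells(feats, cents), tgts, 16, 5))
```

A high score would flag representations where bias structure and target classes are aligned, which is the kind of signal the abstract says the method discourages from propagating through the network.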
Related papers
- Looking at Model Debiasing through the Lens of Anomaly Detection [11.113718994341733]
Deep neural networks are sensitive to bias in the data.
We propose a new bias identification method based on anomaly detection.
We reach state-of-the-art performance on synthetic and real benchmark datasets.
arXiv Detail & Related papers (2024-07-24T17:30:21Z)
- Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
Our approach, CIE, not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z)
- Self-supervised debiasing using low rank regularization [59.84695042540525]
Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
arXiv Detail & Related papers (2022-10-11T08:26:19Z)
- Training Debiased Subnetworks with Contrastive Weight Pruning [45.27261440157806]
We present a theoretical insight that highlights potential limitations of existing algorithms in exploring unbiased subnetworks.
We then elucidate the importance of bias-conflicting samples on structure learning.
Motivated by these observations, we propose a Debiased Contrastive Weight Pruning (DCWP) algorithm, which probes unbiased subnetworks without expensive group annotations.
arXiv Detail & Related papers (2022-10-11T08:25:47Z)
- Unsupervised Learning of Unbiased Visual Representations [10.871587311621974]
Deep neural networks are known for their inability to learn robust representations when biases exist in the dataset.
We propose a fully unsupervised debiasing framework, consisting of three steps.
We employ state-of-the-art supervised debiasing techniques to obtain an unbiased model.
arXiv Detail & Related papers (2022-04-26T10:51:50Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against the algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- Learning Debiased Models with Dynamic Gradient Alignment and Bias-conflicting Sample Mining [39.00256193731365]
Deep neural networks notoriously suffer from dataset biases which are detrimental to model robustness, generalization and fairness.
We propose a two-stage debiasing scheme to combat against the intractable unknown biases.
arXiv Detail & Related papers (2021-11-25T14:50:10Z)
- Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2021-08-11T21:17:02Z)
- Towards Debiasing NLU Models from Unknown Biases [70.31427277842239]
NLU models often exploit biases to achieve high dataset-specific performance without properly learning the intended task.
We present a self-debiasing framework that prevents models from mainly utilizing biases without knowing them in advance.
arXiv Detail & Related papers (2020-09-25T15:49:39Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
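The "failure-based debiasing scheme" in the last entry trains two networks side by side. The PyTorch fragment below is only a generic illustration of that family of methods, not the paper's code: a bias-amplified network is trained with a generalized cross-entropy so it latches onto shortcuts, while a second network is trained with per-sample weights that grow where the first one fails; the function names, the weighting formula, and q=0.7 are our assumptions.

```python
import torch
import torch.nn.functional as F

def generalized_ce(logits, targets, q=0.7):
    """Generalized cross-entropy: emphasizes already-easy samples, so the
    network trained with it tends to amplify shortcut (bias) features."""
    p = F.softmax(logits, dim=1).gather(1, targets[:, None]).squeeze(1)
    return ((1.0 - p.clamp_min(1e-8) ** q) / q).mean()

def paired_debiasing_step(biased_net, debiased_net, opt_b, opt_d, x, y):
    """One joint update: the biased net chases shortcuts; its failures
    up-weight bias-conflicting samples in the debiased net's loss."""
    logits_b, logits_d = biased_net(x), debiased_net(x)
    ce_b = F.cross_entropy(logits_b, y, reduction="none")
    ce_d = F.cross_entropy(logits_d, y, reduction="none")
    # Relative difficulty: close to 1 where the biased net fails badly.
    w = (ce_b / (ce_b + ce_d + 1e-8)).detach()
    loss_b = generalized_ce(logits_b, y)
    loss_d = (w * ce_d).mean()
    opt_b.zero_grad()
    loss_b.backward()
    opt_b.step()
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    return loss_b.item(), loss_d.item()
```

Both networks see the same batches; only the loss weighting differs, which is what lets the second network focus on samples where the shortcut breaks down.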
This list is automatically generated from the titles and abstracts of the papers in this site.