Self-supervised debiasing using low rank regularization
- URL: http://arxiv.org/abs/2210.05248v2
- Date: Mon, 9 Oct 2023 02:55:15 GMT
- Title: Self-supervised debiasing using low rank regularization
- Authors: Geon Yeong Park, Chanyong Jung, Sangmin Lee, Jong Chul Ye, Sang Wan Lee
- Abstract summary: Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
- Score: 59.84695042540525
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spurious correlations can cause strong biases in deep neural networks,
impairing generalization ability. While most existing debiasing methods require
full supervision on either spurious attributes or target labels, training a
debiased model from a limited amount of both annotations is still an open
question. To address this issue, we investigate an interesting phenomenon using
the spectral analysis of latent representations: spuriously correlated
attributes make neural networks inductively biased towards encoding lower
effective rank representations. We also show that a rank regularization can
amplify this bias in a way that encourages highly correlated features.
Leveraging these findings, we propose a self-supervised debiasing framework
potentially compatible with unlabeled samples. Specifically, we first pretrain
a biased encoder in a self-supervised manner with the rank regularization,
which serves as a semantic bottleneck that forces the encoder to learn the
spuriously correlated attributes. This biased encoder is then used to discover
and upweight bias-conflicting samples in a downstream task, acting as a form
of boosting that effectively debiases the main model. Remarkably, the proposed debiasing
framework significantly improves the generalization performance of
self-supervised learning baselines and, in some cases, even outperforms
state-of-the-art supervised debiasing approaches.
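The abstract gives no implementation details, but the two stages it describes, rank-regularized self-supervised pretraining of a biased encoder followed by upweighting of bias-conflicting samples, can be illustrated with a minimal PyTorch sketch. The sketch below rests on assumptions: effective rank is taken as the exponential of the entropy of the normalized singular values, the regularizer is simply added to a generic self-supervised loss, and the names `rank_reg_weight` and `upweight_factor` as well as the misclassification rule for flagging bias-conflicting samples are illustrative choices, not the authors' method.

```python
# Hypothetical sketch of the two-stage idea described in the abstract.
# The exact regularizer and the conflict-detection rule are assumptions,
# not the authors' implementation.
import torch
import torch.nn.functional as F


def effective_rank(features: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Effective rank = exp(entropy of the normalized singular values).

    `features` is a (batch, dim) matrix of latent representations.
    """
    feats = features - features.mean(dim=0, keepdim=True)  # center the batch
    sv = torch.linalg.svdvals(feats)                        # singular values
    p = sv / (sv.sum() + eps)                               # normalize to a distribution
    entropy = -(p * torch.log(p + eps)).sum()
    return torch.exp(entropy)


def biased_pretraining_loss(ssl_loss: torch.Tensor,
                            features: torch.Tensor,
                            rank_reg_weight: float = 0.1) -> torch.Tensor:
    """Self-supervised loss plus a penalty that pushes the encoder toward
    low effective-rank (bias-aligned) representations."""
    return ssl_loss + rank_reg_weight * effective_rank(features)


def conflict_weights(biased_logits: torch.Tensor,
                     labels: torch.Tensor,
                     upweight_factor: float = 10.0) -> torch.Tensor:
    """Upweight samples the biased model misclassifies (bias-conflicting)."""
    preds = biased_logits.argmax(dim=1)
    weights = torch.ones_like(labels, dtype=torch.float)
    weights[preds != labels] = upweight_factor
    return weights


def debiased_step(main_logits, biased_logits, labels):
    """One weighted cross-entropy step for the main (debiased) model."""
    w = conflict_weights(biased_logits, labels)
    per_sample = F.cross_entropy(main_logits, labels, reduction="none")
    return (w * per_sample).mean()
```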
Related papers
- Debiasify: Self-Distillation for Unsupervised Bias Mitigation [19.813054813868476]
Simplicity bias poses a significant challenge in neural networks, often leading models to favor simpler solutions and inadvertently learn decision rules influenced by spurious correlations.
We introduce Debiasify, a novel self-distillation approach that requires no prior knowledge about the nature of biases.
Our method leverages a new distillation loss to transfer knowledge within the network, from deeper layers containing complex, highly-predictive features to shallower layers with simpler, attribute-conditioned features in an unsupervised manner.
arXiv Detail & Related papers (2024-11-01T16:25:05Z)
- Looking at Model Debiasing through the Lens of Anomaly Detection [11.113718994341733]
Deep neural networks are sensitive to bias in the data.
We propose a new bias identification method based on anomaly detection.
We reach state-of-the-art performance on synthetic and real benchmark datasets.
arXiv Detail & Related papers (2024-07-24T17:30:21Z)
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model can effectively enhance generalization under various types of distribution shifts and yields up to 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
- Kernel-Whitening: Overcome Dataset Bias with Isotropic Sentence Embedding [51.48582649050054]
We propose a representation normalization method which aims at disentangling the correlations between features of encoded sentences.
We also propose Kernel-Whitening, a Nystrom kernel approximation method to achieve more thorough debiasing on nonlinear spurious correlations.
Experiments show that Kernel-Whitening significantly improves the performance of BERT on out-of-distribution datasets while maintaining in-distribution accuracy.
arXiv Detail & Related papers (2022-10-14T05:56:38Z)
- Training Debiased Subnetworks with Contrastive Weight Pruning [45.27261440157806]
We present theoretical insight that highlights potential limitations of existing algorithms in exploring unbiased subnetworks.
We then elucidate the importance of bias-conflicting samples on structure learning.
Motivated by these observations, we propose a Debiased Contrastive Weight Pruning (DCWP) algorithm, which probes unbiased subnetworks without expensive group annotations.
arXiv Detail & Related papers (2022-10-11T08:25:47Z)
- Unsupervised Learning of Unbiased Visual Representations [10.871587311621974]
Deep neural networks are known for their inability to learn robust representations when biases exist in the dataset.
We propose a fully unsupervised debiasing framework, consisting of three steps.
We employ state-of-the-art supervised debiasing techniques to obtain an unbiased model.
arXiv Detail & Related papers (2022-04-26T10:51:50Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
- Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective debiasing technique in an unsupervised manner.
We perform clustering on the feature embedding space and identify pseudo-attributes by taking advantage of the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representation.
arXiv Detail & Related papers (2021-08-06T05:20:46Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
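The failure-based scheme in the last entry above trains a biased and a debiased network simultaneously and upweights samples the biased network finds hard. A common instantiation, following the published Learning-from-Failure recipe rather than any code from this listing, uses generalized cross-entropy (GCE) for the biased network and relative-difficulty weights for the debiased one. The sketch below is an assumption-laden illustration; the value q = 0.7 is a conventional choice, not taken from this page.

```python
# Illustrative sketch (not the authors' code) of a failure-based debiasing
# step: the biased network is trained with generalized cross-entropy, which
# emphasizes easy (bias-aligned) samples, and the debiased network is trained
# with per-sample weights that grow when the biased network struggles.
import torch
import torch.nn.functional as F


def gce_loss(logits: torch.Tensor, labels: torch.Tensor, q: float = 0.7):
    """Generalized cross-entropy: (1 - p_y^q) / q, averaged over the batch."""
    probs = F.softmax(logits, dim=1)
    p_true = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_true.pow(q)) / q).mean()


def relative_difficulty_weights(biased_logits, debiased_logits, labels,
                                eps: float = 1e-8):
    """w(x) = CE_biased / (CE_biased + CE_debiased), computed per sample."""
    ce_b = F.cross_entropy(biased_logits, labels, reduction="none")
    ce_d = F.cross_entropy(debiased_logits, labels, reduction="none")
    return ce_b / (ce_b + ce_d + eps)


def joint_step(biased_logits, debiased_logits, labels):
    """Losses for one simultaneous update of the two networks."""
    loss_biased = gce_loss(biased_logits, labels)
    w = relative_difficulty_weights(biased_logits, debiased_logits, labels)
    ce = F.cross_entropy(debiased_logits, labels, reduction="none")
    loss_debiased = (w.detach() * ce).mean()  # weights are not backpropagated
    return loss_biased, loss_debiased
```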
This list is automatically generated from the titles and abstracts of the papers on this site.