CLAD: A Contrastive Learning based Approach for Background Debiasing
- URL: http://arxiv.org/abs/2210.02748v1
- Date: Thu, 6 Oct 2022 08:33:23 GMT
- Authors: Ke Wang, Harshitha Machiraju, Oh-Hyeon Choung, Michael Herzog, Pascal
Frossard
- Abstract summary: We introduce a contrastive learning-based approach to mitigate the background bias in CNNs.
We achieve state-of-the-art results on the Background Challenge dataset, outperforming the previous benchmark by a margin of 4.1%.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional neural networks (CNNs) have achieved superhuman performance in
multiple vision tasks, especially image classification. However, unlike humans,
CNNs leverage spurious features, such as background information to make
decisions. This tendency leads to problems such as reduced robustness and
weak generalization. Through our work, we introduce a contrastive
learning-based approach (CLAD) to mitigate the background bias in CNNs. CLAD
encourages semantic focus on object foregrounds and penalizes learning features
from irrelevant backgrounds. Our method also introduces an efficient way of
sampling negative samples. We achieve state-of-the-art results on the
Background Challenge dataset, outperforming the previous benchmark by a
margin of 4.1%. Our paper shows how CLAD serves as a proof of concept for
debiasing of spurious features, such as background and texture (in
supplementary material).
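The core idea of such a contrastive objective can be illustrated with a minimal InfoNCE-style loss: pull an anchor embedding toward a positive (e.g. the same foreground on a different background) and away from negatives (e.g. background-only or background-swapped crops). The function below is a generic sketch under these assumptions; the negative-construction scheme and the temperature value are illustrative, not CLAD's exact formulation.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss on L2-normalized feature vectors.

    anchor, positive: 1-D feature vectors; negatives: 2-D array, one row per
    negative sample. Returns the cross-entropy of picking the positive among
    all candidates, scored by cosine similarity / temperature."""
    def l2norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    a = l2norm(np.asarray(anchor, dtype=float))
    p = l2norm(np.asarray(positive, dtype=float))
    n = l2norm(np.asarray(negatives, dtype=float))

    # positive similarity first, then all negative similarities
    logits = np.concatenate(([a @ p], n @ a)) / temperature
    logits -= logits.max()  # numerical stability before softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # target class is the positive (index 0)
```

The loss is near zero when the anchor aligns with its positive and is far from all negatives, and grows as background-driven negatives become more similar to the anchor.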
Related papers
- Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud
Semantic Segmentation via Decoupling Optimization [64.36097398869774]
Semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.
The existing SSL-based methods suffer from severe training bias due to class imbalance and long-tail distributions of the point cloud data.
We introduce a new decoupling optimization framework, which disentangles feature representation learning and the classifier in an alternating optimization manner to shift the biased decision boundary effectively.
arXiv Detail & Related papers (2024-01-13T04:16:40Z) - On Feature Learning in the Presence of Spurious Correlations [45.86963293019703]
We show that the quality of learned feature representations is greatly affected by design decisions beyond the choice of method.
We significantly improve upon the best results reported in the literature on the popular Waterbirds, Celeb hair color prediction and WILDS-FMOW problems.
arXiv Detail & Related papers (2022-10-20T16:10:28Z) - Siamese Prototypical Contrastive Learning [24.794022951873156]
Contrastive Self-supervised Learning (CSL) is a practical solution that learns meaningful visual representations from massive data in an unsupervised manner.
In this paper, we tackle this problem by introducing a simple but effective contrastive learning framework.
The key insight is to employ siamese-style metric loss to match intra-prototype features, while increasing the distance between inter-prototype features.
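That insight can be sketched as a two-term metric objective: an intra-prototype term that pulls each feature toward its assigned prototype, and an inter-prototype hinge term that pushes distinct prototypes at least a margin apart. The function below is a hypothetical illustration of this structure, not the paper's actual loss; the margin value and the squared-hinge form are assumptions.

```python
import numpy as np

def siamese_prototype_loss(features, labels, prototypes, margin=1.0):
    """Sketch of a siamese-style prototypical metric loss.

    features: (N, D) array; labels: length-N index array into prototypes;
    prototypes: (K, D) array of class prototypes."""
    features = np.asarray(features, dtype=float)
    prototypes = np.asarray(prototypes, dtype=float)

    # intra-prototype term: mean squared distance to the assigned prototype
    intra = np.mean(np.sum((features - prototypes[labels]) ** 2, axis=1))

    # inter-prototype term: squared hinge on prototype pairs closer than margin
    inter = 0.0
    for i in range(len(prototypes)):
        for j in range(i + 1, len(prototypes)):
            d = np.linalg.norm(prototypes[i] - prototypes[j])
            inter += max(0.0, margin - d) ** 2
    return intra + inter
```

Features clustered tightly around well-separated prototypes give a low loss; features drifting toward another class's prototype drive it up.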
arXiv Detail & Related papers (2022-08-18T13:25:30Z) - CoDo: Contrastive Learning with Downstream Background Invariance for
Detection [10.608660802917214]
We propose a novel object-level self-supervised learning method, called Contrastive learning with Downstream background invariance (CoDo)
The pretext task is converted to focus on instance location modeling for various backgrounds, especially for downstream datasets.
Experiments on MSCOCO demonstrate that the proposed CoDo with common backbones, ResNet50-FPN, yields strong transfer learning results for object detection.
arXiv Detail & Related papers (2022-05-10T01:26:15Z) - Improving deep neural network generalization and robustness to
background bias via layer-wise relevance propagation optimization [0.0]
Features in images' backgrounds can spuriously correlate with the images' classes, representing background bias.
This bias yields deep neural networks (DNNs) that perform well on standard evaluation datasets but generalize poorly to real-world data.
We show that the optimization of LRP heatmaps can minimize the background bias influence on deep classifiers.
arXiv Detail & Related papers (2022-02-01T05:58:01Z) - Image Restoration by Deep Projected GSURE [115.57142046076164]
Ill-posed inverse problems appear in many image processing applications, such as deblurring and super-resolution.
We propose a new image restoration framework that is based on minimizing a loss function that includes a "projected version" of the Generalized Stein Unbiased Risk Estimator (GSURE) and parameterization of the latent image by a CNN.
arXiv Detail & Related papers (2021-02-04T08:52:46Z) - Face Anti-Spoofing Via Disentangled Representation Learning [90.90512800361742]
Face anti-spoofing is crucial to security of face recognition systems.
We propose a novel perspective of face anti-spoofing that disentangles the liveness features and content features from images.
arXiv Detail & Related papers (2020-08-19T03:54:23Z) - Shallow Feature Based Dense Attention Network for Crowd Counting [103.67446852449551]
We propose a Shallow feature based Dense Attention Network (SDANet) for crowd counting from still images.
Our method outperforms other existing methods by a large margin, as is evident from a remarkable 11.9% Mean Absolute Error (MAE) drop of our SDANet.
arXiv Detail & Related papers (2020-06-17T13:34:42Z) - Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.