Pseudo Bias-Balanced Learning for Debiased Chest X-ray Classification
- URL: http://arxiv.org/abs/2203.09860v1
- Date: Fri, 18 Mar 2022 11:02:18 GMT
- Title: Pseudo Bias-Balanced Learning for Debiased Chest X-ray Classification
- Authors: Luyang Luo, Dunyuan Xu, Hao Chen, Tien-Tsin Wong, and Pheng-Ann Heng
- Abstract summary: We study the problem of developing debiased chest X-ray diagnosis models without knowing the exact bias labels.
We propose a novel algorithm, pseudo bias-balanced learning, which first captures and predicts per-sample bias labels.
Our proposed method achieved consistent improvements over other state-of-the-art approaches.
- Score: 57.53567756716656
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep learning models have frequently been reported to learn from
shortcuts such as dataset biases. As deep learning plays an increasingly
important role in the modern healthcare system, there is a pressing need to
combat shortcut learning in medical data and to develop unbiased, trustworthy
models. In this paper, we study the problem of developing debiased chest X-ray
diagnosis models from biased training data without knowing the bias labels
exactly. We start from two observations: the imbalance of the bias
distribution is one of the key causes of shortcut learning, and dataset biases
are preferred by the model when they are easier to learn than the intended
features. Based on these observations, we propose a novel algorithm, pseudo
bias-balanced learning, which first captures and predicts per-sample bias
labels via a generalized cross-entropy loss and then trains a debiased model
using the pseudo bias labels and a bias-balanced softmax function. To the best
of our knowledge, we are the first to tackle dataset biases in medical images
without explicit labeling of the bias attributes. We constructed several chest
X-ray datasets covering various dataset-bias situations and demonstrated
through extensive experiments that our proposed method achieves consistent
improvements over other state-of-the-art approaches.
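To make the two-stage recipe concrete, the following is a minimal PyTorch sketch of its two ingredients: the generalized cross-entropy (GCE) loss used to train the bias-capturing model, and a bias-balanced cross entropy that shifts logits by a class prior estimated per pseudo bias group. The hyperparameter `q`, the prior-estimation convention, and all names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def gce_loss(logits, targets, q=0.7):
    # Generalized cross entropy: emphasizes samples the model already fits
    # well, so a network trained with it gravitates toward the easy-to-learn
    # (bias-aligned) patterns and can serve as the stage-one bias captor.
    p_y = F.softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.pow(q)) / q).mean()

def bias_balanced_ce(logits, targets, pseudo_bias, log_prior):
    # log_prior: [num_bias_groups, num_classes], row b holding
    # log P(class | bias group b), counted from the pseudo bias labels.
    # Shifting the logits by this prior removes the reward for simply
    # reproducing the skewed class/bias co-occurrence in the training data.
    return F.cross_entropy(logits + log_prior[pseudo_bias], targets)
```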
Related papers
- CosFairNet: A Parameter-Space based Approach for Bias Free Learning [1.9116784879310025]
Deep neural networks trained on biased data often inadvertently learn unintended inference rules.
We introduce a novel approach to address bias directly in the model's parameter space, preventing its propagation across layers.
We show enhanced classification accuracy and debiasing effectiveness across various synthetic and real-world datasets.
arXiv Detail & Related papers (2024-10-19T13:06:40Z)
- Medical Image Debiasing by Learning Adaptive Agreement from a Biased Council [8.530912655468645]
Deep learning models are prone to learning shortcuts induced by dataset bias.
Despite its significance, research addressing dataset bias in the medical image classification domain remains scarce.
This paper proposes learning Adaptive Agreement from a Biased Council (Ada-ABC), a debiasing framework that does not rely on explicit bias labels.
arXiv Detail & Related papers (2024-01-22T06:29:52Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Targeted Data Augmentation for bias mitigation [0.0]
We introduce a novel and efficient approach to addressing biases, called Targeted Data Augmentation (TDA).
Rather than laboriously removing biases, our method inserts them instead, resulting in improved performance.
To identify biases, we annotated two diverse datasets: a dataset of clinical skin lesions and a dataset of male and female faces.
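As a rough illustration of "inserting biases", the sketch below randomly paints a hypothetical bias artifact (a black frame, of the kind found around some dermoscopy images) onto any training image, so the artifact no longer correlates with the label. The artifact choice, probability, and frame width are assumptions for illustration, not details from the paper.

```python
import random
import torch

def targeted_bias_augment(image: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    # With probability p, paint a black frame onto a [C, H, W] image so the
    # frame artifact appears independently of the class label.
    if random.random() < p:
        image = image.clone()
        w = 8  # frame width in pixels (illustrative choice)
        image[..., :w, :] = 0   # top rows
        image[..., -w:, :] = 0  # bottom rows
        image[..., :, :w] = 0   # left columns
        image[..., :, -w:] = 0  # right columns
    return image
```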
arXiv Detail & Related papers (2023-08-22T12:25:49Z)
- Unsupervised Learning of Unbiased Visual Representations [10.871587311621974]
Deep neural networks are known for their inability to learn robust representations when biases exist in the dataset.
We propose a fully unsupervised debiasing framework, consisting of three steps.
We employ state-of-the-art supervised debiasing techniques to obtain an unbiased model.
arXiv Detail & Related papers (2022-04-26T10:51:50Z)
- Intrinsic Bias Identification on Medical Image Datasets [9.054785751150547]
We first define the data intrinsic bias attribute, and then propose a novel bias identification framework for medical image datasets.
The framework contains two major components, KlotskiNet and Bias Discriminant Direction Analysis (bdda), where KlotskiNet builds a mapping under which backgrounds distinguish positive from negative samples.
Experimental results on three datasets show the effectiveness of the bias attributes discovered by the framework.
arXiv Detail & Related papers (2022-03-24T06:28:07Z)
- Evading the Simplicity Bias: Training a Diverse Set of Models Discovers Solutions with Superior OOD Generalization [93.8373619657239]
Neural networks trained with SGD were recently shown to rely preferentially on linearly-predictive features.
This simplicity bias can explain their lack of robustness out of distribution (OOD).
We demonstrate that the simplicity bias can be mitigated and OOD generalization improved.
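One way such a diverse set of models can be instantiated, assuming the diversity is enforced on input gradients (a reading of the paper's idea, not a quotation of its exact objective), is to penalize aligned per-sample input gradients across model pairs:

```python
import torch
import torch.nn.functional as F

def input_gradient(model, x, y):
    # Gradient of the per-sample loss w.r.t. the input, kept differentiable
    # (create_graph=True) so the penalty below can be trained through.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x, create_graph=True)
    return grad.flatten(1)

def diversity_penalty(models, x, y):
    # Push pairs of models toward different input features by penalizing
    # the squared cosine similarity of their input gradients.
    grads = [input_gradient(m, x, y) for m in models]
    penalty = x.new_zeros(())
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            sim = F.cosine_similarity(grads[i], grads[j], dim=1)
            penalty = penalty + sim.pow(2).mean()
    return penalty
```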
arXiv Detail & Related papers (2021-05-12T12:12:24Z)
- Learning from others' mistakes: Avoiding dataset biases without modeling them [111.17078939377313]
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended task.
Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available.
We show a method for training models that learn to ignore these problematic correlations.
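A common way to train a model that ignores such correlations without labeling them, and the family this paper belongs to, is a product of experts with a frozen weak (bias-prone) learner. The sketch below is a generic PoE loss under that assumption; how the weak model is obtained is left out.

```python
import torch
import torch.nn.functional as F

def poe_loss(main_logits, weak_logits, targets):
    # Product of experts: the combined prediction multiplies the two
    # models' probabilities, so the main model gets no credit for patterns
    # the weak model already explains. The weak model is detached, so
    # gradients flow only through the main model.
    combined = (F.log_softmax(main_logits, dim=1)
                + F.log_softmax(weak_logits.detach(), dim=1))
    # cross_entropy renormalizes the product via its internal log_softmax.
    return F.cross_entropy(combined, targets)
```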
arXiv Detail & Related papers (2020-12-02T16:10:54Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
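A minimal sketch of the failure-based scheme, assuming the standard Learning-from-Failure formulation in which each sample's weight is the biased model's loss relative to the total; the biased network itself is trained in parallel with a generalized cross-entropy loss (as in the abstract's stage one), which is omitted here.

```python
import torch
import torch.nn.functional as F

def lff_weights(biased_logits, debiased_logits, targets):
    # Relative difficulty score: samples the biased model fits poorly
    # (likely bias-conflicting) receive large weights.
    ce_b = F.cross_entropy(biased_logits, targets, reduction="none")
    ce_d = F.cross_entropy(debiased_logits, targets, reduction="none")
    return ce_b / (ce_b + ce_d + 1e-8)

def lff_debiased_loss(biased_logits, debiased_logits, targets):
    # Weights are computed on detached logits so they act as constants.
    w = lff_weights(biased_logits.detach(), debiased_logits.detach(), targets)
    ce = F.cross_entropy(debiased_logits, targets, reduction="none")
    return (w * ce).mean()
```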
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
- Towards Robustifying NLI Models Against Lexical Dataset Biases [94.79704960296108]
This paper explores both data-level and model-level debiasing methods to robustify models against lexical dataset biases.
First, we debias the dataset through data augmentation and enhancement, but show that the model bias cannot be fully removed via this method.
The second approach employs a bag-of-words sub-model to capture the features that are likely to exploit the bias and prevents the original model from learning these biased features.
arXiv Detail & Related papers (2020-05-10T17:56:10Z)
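To illustrate the second approach, below is a hypothetical bag-of-words sub-model: an embedding-bag classifier that only sees unordered token counts, so it can capture lexical shortcuts (e.g. negation words predicting "contradiction") but little else. Its output would be combined with the main model's during training, for example via the product-of-experts loss sketched earlier. The class name and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BagOfWordsExpert(nn.Module):
    # A deliberately weak expert over unordered tokens; because it cannot
    # model word order or composition, whatever it predicts well is likely
    # a lexical shortcut the main model should be steered away from.
    def __init__(self, vocab_size: int, num_classes: int, dim: int = 64):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim, mode="mean")
        self.head = nn.Linear(dim, num_classes)

    def forward(self, token_ids: torch.Tensor, offsets: torch.Tensor):
        # token_ids: 1-D concatenation of all token ids in the batch;
        # offsets: start index of each example within token_ids.
        return self.head(self.emb(token_ids, offsets))
```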
This list is automatically generated from the titles and abstracts of the papers in this site.