Medical Image Debiasing by Learning Adaptive Agreement from a Biased Council
- URL: http://arxiv.org/abs/2401.11713v1
- Date: Mon, 22 Jan 2024 06:29:52 GMT
- Title: Medical Image Debiasing by Learning Adaptive Agreement from a Biased Council
- Authors: Luyang Luo, Xin Huang, Minghao Wang, Zhuoyue Wan, Hao Chen
- Abstract summary: Deep learning models are prone to learning shortcuts arising from dataset bias.
Despite its significance, there is a dearth of research in the medical image classification domain to address dataset bias.
This paper proposes learning Adaptive Agreement from a Biased Council (Ada-ABC), a debiasing framework that does not rely on explicit bias labels.
- Score: 8.530912655468645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models are prone to learning shortcuts arising from dataset bias,
resulting in inaccurate, unreliable, and unfair models, which impedes their adoption
in real-world clinical applications. Despite its significance, there is a
dearth of research in the medical image classification domain to address
dataset bias. Furthermore, bias labels are often unavailable, as identifying
biases can be laborious and depends on post-hoc interpretation. This paper
proposes learning Adaptive Agreement from a Biased Council (Ada-ABC), a
debiasing framework that does not rely on explicit bias labels to tackle
dataset bias in medical images. Ada-ABC develops a biased council consisting of
multiple classifiers optimized with generalized cross entropy loss to learn the
dataset bias. A debiasing model is then simultaneously trained under the
guidance of the biased council. Specifically, the debiasing model is required
to learn adaptive agreement with the biased council: it agrees with the
council on samples the council predicts correctly and disagrees on samples
the council predicts wrongly. In this way, the debiasing model could learn the target
attribute on the samples without spurious correlations while also avoiding
ignoring the rich information in samples with spurious correlations. We
theoretically demonstrated that the debiasing model could learn the target
features when the biased model successfully captures the dataset bias. Moreover, to
the best of our knowledge, we constructed the first medical debiasing benchmark,
built from four datasets covering seven different bias scenarios. Our extensive
experiments showed that Ada-ABC outperforms competitive approaches, verifying its
effectiveness in mitigating dataset bias for medical image classification. The code
and organized benchmark datasets will be made publicly available.
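The two-part objective sketched in the abstract, a council trained with generalized cross entropy (GCE) to absorb the bias, and a debiasing model that agrees with the council where it is correct and counters it where it is wrong, can be illustrated as follows. This is a minimal NumPy sketch, not the authors' released code: the per-sample weighting scheme, the `alpha` hyperparameter, and the single-classifier council are assumptions made for illustration.

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    """Generalized cross entropy: (1 - p_y^q) / q.

    Compared with standard cross entropy, GCE down-weights hard
    (bias-conflicting) samples, so a model trained with it tends
    to latch onto the dataset bias (the shortcut)."""
    p_y = probs[np.arange(len(labels)), labels]
    return float(np.mean((1.0 - p_y ** q) / q))

def adaptive_agreement_loss(debias_probs, council_probs, labels, alpha=2.0):
    """Hypothetical adaptive-agreement objective: cross entropy on the
    true labels, with samples the council misclassifies (likely
    bias-conflicting) up-weighted by alpha. The debiasing model thus
    agrees with the council on bias-aligned samples and pushes against
    it on bias-conflicting ones, without discarding either group."""
    p_y = np.clip(debias_probs[np.arange(len(labels)), labels], 1e-12, 1.0)
    council_correct = council_probs.argmax(axis=1) == labels
    weights = np.where(council_correct, 1.0, alpha)
    return float(np.mean(weights * -np.log(p_y)))

if __name__ == "__main__":
    # Toy batch: two classes, four samples; the council is wrong on the last two.
    labels = np.array([0, 1, 0, 1])
    council = np.array([[0.9, 0.1], [0.2, 0.8], [0.3, 0.7], [0.6, 0.4]])
    debias = np.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4], [0.4, 0.6]])
    print("council GCE loss:", gce_loss(council, labels))
    print("adaptive agreement loss:", adaptive_agreement_loss(debias, council, labels))
```

In the paper the council is an ensemble of several GCE-trained classifiers trained jointly with the debiasing model; the sketch collapses it to one classifier's probabilities to keep the weighting logic visible.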
Related papers
- Marginal Debiased Network for Fair Visual Recognition [59.05212866862219]
We propose a novel marginal debiased network (MDN) to learn debiased representations.
Our MDN can achieve a remarkable performance on under-represented samples.
arXiv Detail & Related papers (2024-01-04T08:57:09Z)
- Improving Bias Mitigation through Bias Experts in Natural Language Understanding [10.363406065066538]
We propose a new debiasing framework that introduces binary classifiers between the auxiliary model and the main model.
Our proposed strategy improves the bias identification ability of the auxiliary model.
arXiv Detail & Related papers (2023-12-06T16:15:00Z)
- Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework, applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
CIE not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z)
- Echoes: Unsupervised Debiasing via Pseudo-bias Labeling in an Echo Chamber [17.034228910493056]
This paper presents experimental analyses revealing that the existing biased models overfit to bias-conflicting samples in the training data.
We propose a straightforward and effective method called Echoes, which trains a biased model and a target model with a different strategy.
Our approach achieves superior debiasing results compared to the existing baselines on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-06T13:13:18Z)
- Improving Evaluation of Debiasing in Image Classification [29.711865666774017]
Our study indicates several issues that need to be addressed when evaluating debiasing in image classification.
Based on these issues, this paper proposes an evaluation metric, the Align-Conflict (AC) score, as the tuning criterion.
We believe our findings and lessons will inspire future researchers in debiasing to further push state-of-the-art performance with fair comparisons.
arXiv Detail & Related papers (2022-06-08T05:24:13Z)
- Intrinsic Bias Identification on Medical Image Datasets [9.054785751150547]
We first define the data intrinsic bias attribute, and then propose a novel bias identification framework for medical image datasets.
The framework contains two major components, KlotskiNet and Bias Discriminant Direction Analysis (bdda), where KlotskiNet builds the mapping that enables backgrounds to distinguish positive from negative samples.
Experimental results on three datasets show the effectiveness of the bias attributes discovered by the framework.
arXiv Detail & Related papers (2022-03-24T06:28:07Z)
- Pseudo Bias-Balanced Learning for Debiased Chest X-ray Classification [57.53567756716656]
We study the problem of developing debiased chest X-ray diagnosis models without knowing exactly the bias labels.
We propose a novel algorithm, pseudo bias-balanced learning, which first captures and predicts per-sample bias labels.
Our proposed method achieved consistent improvements over other state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-18T11:02:18Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
- Towards Robustifying NLI Models Against Lexical Dataset Biases [94.79704960296108]
This paper explores both data-level and model-level debiasing methods to robustify models against lexical dataset biases.
First, we debias the dataset through data augmentation and enhancement, but show that the model bias cannot be fully removed via this method.
The second approach employs a bag-of-words sub-model to capture the features that are likely to exploit the bias and prevents the original model from learning these biased features.
arXiv Detail & Related papers (2020-05-10T17:56:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.