Mixing-AdaSIN: Constructing a de-biased dataset using Adaptive
Structural Instance Normalization and texture Mixing
- URL: http://arxiv.org/abs/2103.14255v1
- Date: Fri, 26 Mar 2021 04:40:14 GMT
- Title: Mixing-AdaSIN: Constructing a de-biased dataset using Adaptive
Structural Instance Normalization and texture Mixing
- Authors: Myeongkyun Kang, Philip Chikontwe, Miguel Luna, Kyung Soo Hong, June
Hong Ahn, Sang Hyun Park
- Abstract summary: We propose Mixing-AdaSIN; a bias mitigation method that uses a generative model to generate de-biased images.
To demonstrate the efficacy of our method, we construct a biased COVID-19 vs. bacterial pneumonia dataset.
- Score: 6.976822832216875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Following the pandemic outbreak, several works have proposed to diagnose
COVID-19 with deep learning in computed tomography (CT); reporting performance
on-par with experts. However, models trained/tested on the same in-distribution
data may rely on the inherent data biases for successful prediction, failing to
generalize on out-of-distribution samples or CT with different scanning
protocols. Early attempts have partly addressed bias-mitigation and
generalization through augmentation or re-sampling, but are still limited by
collection costs and the difficulty of quantifying bias in medical images. In
this work, we propose Mixing-AdaSIN; a bias mitigation method that uses a
generative model to generate de-biased images by mixing texture information
between different labeled CT scans with semantically similar features. Here, we
use Adaptive Structural Instance Normalization (AdaSIN) to enhance de-biasing
generation quality and guarantee structural consistency. Subsequently, a
classifier trained with the generated images learns to correctly predict the
label without bias and generalizes better. To demonstrate the efficacy of our
method, we construct a biased COVID-19 vs. bacterial pneumonia dataset based on
CT protocols and compare with existing state-of-the-art de-biasing methods. Our
experiments show that classifiers trained with de-biased generated images
report improved in-distribution performance and generalization on an external
COVID-19 dataset.
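A minimal sketch of the texture-mixing idea, assuming AdaSIN behaves like a standard AdaIN-style re-normalization that keeps the spatial structure of one feature map while adopting the channel-wise statistics (a proxy for texture) of another; the encoder/decoder names and the exact AdaSIN formulation are illustrative assumptions, not the paper's released implementation.

```python
import torch


def channel_stats(feat: torch.Tensor, eps: float = 1e-5):
    """Per-sample, per-channel mean/std over spatial dims of an (N, C, H, W) tensor."""
    mean = feat.mean(dim=(2, 3), keepdim=True)
    std = feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return mean, std


def adain_texture_mix(structure_feat: torch.Tensor,
                      texture_feat: torch.Tensor) -> torch.Tensor:
    """AdaIN-style re-normalization: preserve the spatial layout of
    structure_feat while injecting the channel statistics of texture_feat."""
    s_mean, s_std = channel_stats(structure_feat)
    t_mean, t_std = channel_stats(texture_feat)
    return (structure_feat - s_mean) / s_std * t_std + t_mean


# Hypothetical use inside a texture-mixing generator:
#   feats_a, feats_b = encoder(ct_a), encoder(ct_b)   # two CT slices with similar structure
#   mixed = decoder(adain_texture_mix(feats_a, feats_b))
```

In this reading, the de-biased training image keeps the anatomy (structure) of one scan while carrying the texture statistics of a scan acquired under a different protocol, so the classifier cannot rely on protocol-specific texture cues.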
Related papers
- Debiasing Classifiers by Amplifying Bias with Latent Diffusion and Large Language Models [9.801159950963306]
We introduce DiffuBias, a novel pipeline for text-to-image generation that enhances classifier robustness by generating bias-conflict samples.
DiffuBias is the first approach leveraging a stable diffusion model to generate bias-conflict samples in debiasing tasks.
Our comprehensive experimental evaluations demonstrate that DiffuBias achieves state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2024-11-25T04:11:16Z) - Fair Text to Medical Image Diffusion Model with Subgroup Distribution Aligned Tuning [12.064840522920251]
The text to medical image (T2MedI) with latent diffusion model has great potential to alleviate the scarcity of medical imaging data.
However, as with text-to-natural-image models, we show that the T2MedI model can also be biased toward some subgroups and overlook minority ones in the training set.
In this work, we first build a T2MedI model based on the pre-trained Imagen model, which has the fixed contrastive language-image pre-training (CLIP) text encoder.
Its decoder has been fine-tuned on medical images from the Radiology Objects in COntext (ROCO) dataset.
arXiv Detail & Related papers (2024-06-21T03:23:37Z) - GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection [60.78684630040313]
Diffusion models tend to reconstruct normal counterparts of test images with certain noises added.
From the global perspective, the difficulty of reconstructing images with different anomalies is uneven.
We propose a global and local adaptive diffusion model (abbreviated to GLAD) for unsupervised anomaly detection.
arXiv Detail & Related papers (2024-06-11T17:27:23Z) - A Two-Stage Generative Model with CycleGAN and Joint Diffusion for
MRI-based Brain Tumor Detection [41.454028276986946]
We propose a novel framework Two-Stage Generative Model (TSGM) to improve brain tumor detection and segmentation.
CycleGAN is trained on unpaired data to generate abnormal images from healthy images as data prior.
VE-JP is implemented to reconstruct healthy images using synthetic paired abnormal images as a guide.
arXiv Detail & Related papers (2023-11-06T12:58:26Z) - Feature-Level Debiased Natural Language Understanding [86.8751772146264]
Existing natural language understanding (NLU) models often rely on dataset biases to achieve high performance on specific datasets.
We propose debiasing contrastive learning (DCT) to mitigate biased latent features and account for the dynamic nature of bias.
DCT outperforms state-of-the-art baselines on out-of-distribution datasets while maintaining in-distribution performance.
arXiv Detail & Related papers (2022-12-11T06:16:14Z) - Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z) - DeepTechnome: Mitigating Unknown Bias in Deep Learning Based Assessment
of CT Images [44.62475518267084]
We debias deep learning models against unknown biases during training.
We use control regions as surrogates that carry information regarding the bias.
When applied to data exhibiting a strong bias, the proposed method near-perfectly recovers the classification performance observed when training with corresponding unbiased data.
arXiv Detail & Related papers (2022-05-26T12:18:48Z) - General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z) - Evaluating and Mitigating Bias in Image Classifiers: A Causal
Perspective Using Counterfactuals [27.539001365348906]
We present a method for generating counterfactuals by incorporating a structural causal model (SCM) in an improved variant of Adversarially Learned Inference (ALI).
We show how to explain a pre-trained machine learning classifier, evaluate its bias, and mitigate the bias using a counterfactual regularizer.
arXiv Detail & Related papers (2020-09-17T13:19:31Z) - Improved Slice-wise Tumour Detection in Brain MRIs by Computing
Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We propose a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z) - Semi-supervised Medical Image Classification with Relation-driven
Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
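To make the prediction-consistency idea in the last entry concrete, here is a minimal sketch assuming a simple additive-noise perturbation and a mean-squared consistency term; the paper's relation-driven framework adds a sample-relation consistency component that is omitted here.

```python
import torch
import torch.nn.functional as F


def consistency_loss(model, images: torch.Tensor, noise_std: float = 0.1) -> torch.Tensor:
    """Prediction-consistency term for unlabeled images: the model should give
    similar class probabilities for an input and a noise-perturbed copy."""
    with torch.no_grad():
        target = F.softmax(model(images), dim=1)               # clean prediction (no gradient)
    perturbed = images + noise_std * torch.randn_like(images)  # simple Gaussian perturbation
    pred = F.softmax(model(perturbed), dim=1)
    return F.mse_loss(pred, target)


# A semi-supervised objective would combine cross-entropy on labeled batches with
# this term on unlabeled batches, typically weighted by a ramp-up coefficient.
```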