Red-GAN: Attacking class imbalance via conditioned generation. Yet
another perspective on medical image synthesis for skin lesion dermoscopy and
brain tumor MRI
- URL: http://arxiv.org/abs/2004.10734v4
- Date: Sun, 28 Mar 2021 00:15:19 GMT
- Title: Red-GAN: Attacking class imbalance via conditioned generation. Yet
another perspective on medical image synthesis for skin lesion dermoscopy and
brain tumor MRI
- Authors: Ahmad B Qasim, Ivan Ezhov, Suprosanna Shit, Oliver Schoppe, Johannes C
Paetzold, Anjany Sekuboyina, Florian Kofler, Jana Lipkova, Hongwei Li, Bjoern
Menze
- Abstract summary: We propose a data augmentation protocol based on generative adversarial networks.
We validate the approach on two medical datasets: BraTS, ISIC.
- Score: 5.075029145724692
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exploiting learning algorithms under scarce data regimes is a limitation and
a reality of the medical imaging field. To mitigate the problem, we propose a
data augmentation protocol based on generative adversarial networks. We
condition the networks at the pixel level (segmentation mask) and at the global
level (acquisition environment or lesion type). Such conditioning provides
immediate access to image-label pairs while controlling the global,
class-specific appearance of the synthesized images. To stimulate synthesis of
the features relevant for the segmentation task, an additional passive player
in the form of a segmentor is introduced into the adversarial game. We validate
the approach on two medical datasets: BraTS and ISIC. By controlling the class
distribution through injection of synthetic images into the training set, we
achieve control over the accuracy levels of the datasets' classes.
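The class-rebalancing idea in the abstract, injecting conditionally generated images until the training set reaches a desired class distribution, can be sketched in plain Python. This is an illustrative sketch only: the function name, the uniform target distribution, and the example label counts are assumptions, not the paper's implementation.

```python
from collections import Counter

def synthetic_injection_counts(labels):
    """Per-class number of synthetic images to generate so that every
    class is brought up to the size of the largest class (uniform target)."""
    counts = Counter(labels)
    peak = max(counts.values())
    return {cls: peak - n for cls, n in counts.items()}

# Hypothetical ISIC-style label distribution:
labels = ["melanoma"] * 200 + ["nevus"] * 1800 + ["keratosis"] * 500
needed = synthetic_injection_counts(labels)
print(needed)  # {'melanoma': 1600, 'nevus': 0, 'keratosis': 1300}
```

A conditional generator of the kind described above (conditioned on a segmentation mask and a lesion-type label) would then be sampled `needed[cls]` times per class before the segmentor is retrained on the augmented set.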
Related papers
- MRGen: Segmentation Data Engine for Underrepresented MRI Modalities [59.61465292965639]
Training medical image segmentation models for rare yet clinically important imaging modalities is challenging due to the scarcity of annotated data.
This paper investigates leveraging generative models to synthesize data for training segmentation models for underrepresented modalities.
We present MRGen, a data engine for controllable medical image synthesis conditioned on text prompts and segmentation masks.
arXiv Detail & Related papers (2024-12-04T16:34:22Z)
- COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images [3.5418498524791766]
This research develops a novel counterfactual inpainting approach (COIN).
COIN flips the predicted classification label from abnormal to normal using a generative model.
The effectiveness of the method is demonstrated by segmenting synthetic targets and actual kidney tumors from CT images acquired from Tartu University Hospital in Estonia.
arXiv Detail & Related papers (2024-04-19T12:09:49Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Generative Adversarial Networks based Skin Lesion Segmentation [7.9234173309439715]
We propose a novel adversarial learning-based framework called Efficient-GAN (EGAN) that uses an unsupervised generative network to generate accurate lesion masks.
It outperforms the current state-of-the-art skin lesion segmentation approaches with a Dice coefficient, Jaccard similarity, and accuracy of 90.1%, 83.6%, and 94.5%, respectively.
We also design a lightweight segmentation framework (MGAN) that achieves performance comparable to EGAN with an order of magnitude fewer training parameters.
arXiv Detail & Related papers (2023-05-29T15:51:31Z)
- Multi-Level Global Context Cross Consistency Model for Semi-Supervised Ultrasound Image Segmentation with Diffusion Model [0.0]
We propose a framework that uses images generated by a Latent Diffusion Model (LDM) as unlabeled images for semi-supervised learning.
Our approach enables the effective transfer of probability distribution knowledge to the segmentation network, resulting in improved segmentation accuracy.
arXiv Detail & Related papers (2023-05-16T14:08:24Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training [73.60883490436956]
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ the self-ensemble model with a noisy label filter to efficiently select the clean and noisy samples.
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
arXiv Detail & Related papers (2022-05-10T07:50:08Z)
- Cross-level Contrastive Learning and Consistency Constraint for Semi-supervised Medical Image Segmentation [46.678279106837294]
We propose a cross-level contrastive learning scheme to enhance representation capacity for local features in semi-supervised medical image segmentation.
With the help of the cross-level contrastive learning and consistency constraint, the unlabelled data can be effectively explored to improve segmentation performance.
arXiv Detail & Related papers (2022-02-08T15:12:11Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy to guide the cross-attention process and is able to overcome the imbalance between classes and the dominance of easy samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
- Melanoma Detection using Adversarial Training and Deep Transfer Learning [6.22964000148682]
We propose a two-stage framework for automatic classification of skin lesion images.
In the first stage, we leverage the inter-class variation of the data distribution for the task of conditional image synthesis.
In the second stage, we train a deep convolutional neural network for skin lesion classification.
arXiv Detail & Related papers (2020-04-14T22:46:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.