Generative Adversarial U-Net for Domain-free Medical Image Augmentation
- URL: http://arxiv.org/abs/2101.04793v1
- Date: Tue, 12 Jan 2021 23:02:26 GMT
- Title: Generative Adversarial U-Net for Domain-free Medical Image Augmentation
- Authors: Xiaocong Chen and Yun Li and Lina Yao and Ehsan Adeli and Yu Zhang
- Abstract summary: The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
- Score: 49.72048151146307
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing. Without a sufficient number of training samples, deep learning based models are very likely to suffer from the over-fitting problem. The common solution is image manipulation such as rotation, cropping, or resizing. These methods help relieve over-fitting by introducing more training samples, but they do not truly add new images with additional information, and they may cause data leakage, since the test set can contain samples similar to those in the training set. To address this challenge, we propose to generate diverse images with a generative adversarial network. In this paper, we develop a novel generative method named generative adversarial U-Net, which combines a generative adversarial network with a U-Net. Unlike existing approaches, our newly designed model is domain-free and generalizable to various medical images. Extensive experiments are conducted on eight diverse datasets, including computed tomography (CT) scans, pathology, and X-ray images. The visualization and quantitative results demonstrate the efficacy and strong generalization of the proposed method in generating a wide array of high-quality medical images.
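For reference, a minimal sketch of the classical manipulation baseline the abstract mentions (rotation, cropping, resizing), using torchvision; the specific transforms and parameters are illustrative assumptions, not taken from the paper:

```python
# Classical augmentation baseline: rotation, cropping, resizing.
# The transform choices and parameters are illustrative assumptions.
from torchvision import transforms

classical_aug = transforms.Compose([
    transforms.RandomRotation(degrees=15),    # small random rotation
    transforms.RandomResizedCrop(size=256,    # random crop, resized back
                                 scale=(0.8, 1.0)),
    transforms.ToTensor(),
])
```

And a minimal PyTorch sketch of the paper's overall idea, a GAN whose generator is a U-Net. The two-level depth, channel widths, noise-image generator input, and the non-saturating BCE objective are all assumptions for illustration; the paper's exact architecture and loss are not reproduced here:

```python
# Minimal GAN-with-U-Net-generator sketch (PyTorch). Channel widths, depth,
# the noise-image input, and the BCE objective are illustrative assumptions;
# the paper's exact design is not reproduced here.
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Tiny two-level U-Net: encode, bottleneck, decode with one skip connection.
    Assumes image height/width divisible by 4."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.mid = nn.Sequential(
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(base * 2, in_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        e = self.enc(x)                         # downsample
        m = self.mid(e)                         # bottleneck, back to enc resolution
        return self.dec(torch.cat([e, m], 1))   # skip connection, then upsample

class Discriminator(nn.Module):
    """Plain convolutional real/fake critic with a single logit output."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(base * 2, 1))

    def forward(self, x):
        return self.net(x)

def gan_step(G, D, opt_g, opt_d, real):
    """One adversarial update; a noise image stands in for the generator input."""
    bce = nn.BCEWithLogitsLoss()
    z = torch.randn_like(real)                  # assumed generator input
    # Discriminator: separate real from generated samples.
    d_real, d_fake = D(real), D(G(z).detach())
    d_loss = (bce(d_real, torch.ones_like(d_real)) +
              bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: make generated samples look real to D.
    g_out = D(G(z))
    g_loss = bce(g_out, torch.ones_like(g_out))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Under the same assumptions, usage would be `G = UNetGenerator()`, `D = Discriminator()`, one Adam optimizer for each, and one `gan_step(G, D, opt_g, opt_d, real)` call per batch of real images.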
Related papers
- SurgicaL-CD: Generating Surgical Images via Unpaired Image Translation with Latent Consistency Diffusion Models [1.6189876649941652]
We introduce SurgicaL-CD, a consistency-distilled diffusion method to generate realistic surgical images.
Our results demonstrate that our method outperforms GANs and diffusion-based approaches.
arXiv Detail & Related papers (2024-08-19T09:19:25Z)
- Interactive Image Selection and Training for Brain Tumor Segmentation Network [42.62139206176152]
We employ an interactive method for image selection and training based on Feature Learning from Image Markers (FLIM).
The results demonstrate that our methodology can select a small set of images for training the encoder of a U-shaped network, matching the performance of manual selection and even surpassing the same network trained with backpropagation on all training images.
arXiv Detail & Related papers (2024-06-05T13:03:06Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations (a masking-and-perturbation sketch appears after this list).
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Understanding the Tricks of Deep Learning in Medical Image Segmentation: Challenges and Future Directions [66.40971096248946]
In this paper, we collect a series of MedISeg tricks for different model implementation phases.
We experimentally explore the effectiveness of these tricks on consistent baselines.
We also open-source a strong MedISeg repository, in which each component is plug-and-play.
arXiv Detail & Related papers (2022-09-21T12:30:05Z)
- Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training [73.60883490436956]
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ a self-ensemble model with a noisy-label filter to efficiently separate clean from noisy samples (a small-loss filtering sketch appears after this list).
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
arXiv Detail & Related papers (2022-05-10T07:50:08Z)
- Pathology-Aware Generative Adversarial Networks for Medical Image Augmentation [0.22843885788439805]
Generative Adversarial Networks (GANs) can generate realistic but novel samples, and thus effectively cover the real image distribution.
This thesis presents four GAN projects that demonstrate the clinical relevance of such novel applications, developed in collaboration with physicians.
arXiv Detail & Related papers (2021-06-03T15:08:14Z)
- Medical Image Harmonization Using Deep Learning Based Canonical Mapping: Toward Robust and Generalizable Learning in Imaging [4.396671464565882]
We propose a new paradigm in which data from a diverse range of acquisition conditions are "harmonized" to a common reference domain.
We test this approach on two example problems, namely MRI-based brain age prediction and classification of schizophrenia.
arXiv Detail & Related papers (2020-10-11T22:01:37Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond the cross-entropy loss to support the cross-attention process and to overcome both the imbalance between classes and the dominance of easy samples within each class (an imbalance-aware loss sketch appears after this list).
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
- RADIOGAN: Deep Convolutional Conditional Generative Adversarial Network To Generate PET Images [3.947298454012977]
We propose a deep convolutional conditional generative adversarial network to generate maximum intensity projection (MIP) positron emission tomography (PET) images.
The advantage of the proposed method is that a single model can generate different classes of lesions while being trained on only a small sample size per class.
In addition, we show that a walk through the latent space can be used as a tool to evaluate the generated images (see the latent-walk sketch after this list).
arXiv Detail & Related papers (2020-03-19T10:14:40Z)
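The sketches below illustrate, under stated assumptions, some of the mechanisms mentioned in the summaries above; none of them reproduces the respective paper's actual method.

For the Disruptive Autoencoders entry, a 2D PyTorch sketch of the reconstruct-from-disruption idea; the patch size, masking ratio, Gaussian noise as the low-level perturbation, and the MSE objective are all assumptions (the paper targets 3D radiology images):

```python
# Masking-plus-perturbation pre-training sketch. Patch size, masking ratio,
# noise type, and the MSE objective are assumptions, not the paper's design.
import torch
import torch.nn.functional as F

def disrupt(x, patch=8, mask_ratio=0.4, noise_std=0.1):
    """Mask random local patches, then add low-level Gaussian noise.
    Assumes height/width divisible by `patch`."""
    B, C, H, W = x.shape
    keep = (torch.rand(B, 1, H // patch, W // patch,
                       device=x.device) > mask_ratio).float()
    keep = F.interpolate(keep, size=(H, W), mode="nearest")  # patch-wise mask
    return x * keep + noise_std * torch.randn_like(x)        # mask, then perturb

def pretrain_step(autoencoder, opt, x):
    """Reconstruct the clean image from its disrupted version."""
    loss = F.mse_loss(autoencoder(disrupt(x)), x)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

For the co-training entry, one common way to realize a noisy-label filter is the small-loss heuristic, which treats low-loss samples as probably clean; this is a stand-in, not the paper's self-ensemble criterion:

```python
# Small-loss noisy-label filter sketch. Using per-sample loss as a proxy for
# label cleanliness is a common heuristic, not the paper's exact filter.
import torch
import torch.nn.functional as F

def split_clean_noisy(logits, labels, keep_ratio=0.7):
    """Return (clean_idx, noisy_idx): the lowest-loss fraction is kept as clean."""
    losses = F.cross_entropy(logits, labels, reduction="none")  # per-sample loss
    order = torch.argsort(losses)                               # ascending loss
    k = max(1, int(keep_ratio * losses.numel()))
    return order[:k], order[k:]
```

For the Cross-Attention Networks entry, the summary does not spell out the loss; a focal-style binary loss is shown purely as a stand-in, since it both down-weights easy samples and can re-weight rare classes:

```python
# Focal-style loss sketch for multi-label classification. This is a stand-in
# for the paper's unspecified loss, not a reproduction of it.
import torch
import torch.nn.functional as F

def focal_bce(logits, targets, gamma=2.0, pos_weight=None):
    """Binary focal loss; `pos_weight` is an optional per-class tensor."""
    p = torch.sigmoid(logits)
    pt = torch.where(targets > 0.5, p, 1 - p)   # probability of the true label
    w = (1 - pt) ** gamma                        # down-weight easy samples
    if pos_weight is not None:                   # optional class re-weighting
        w = w * torch.where(targets > 0.5, pos_weight, torch.ones_like(p))
    return (w * F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")).mean()
```

For the RADIOGAN entry, a latent-space walk can be realized as linear interpolation between two latent codes; the interpolation scheme is an assumption, and conditioning on the lesion class is omitted for brevity:

```python
# Latent-space walk sketch: decode evenly spaced points on the segment
# between two latent codes. Conditioning inputs are omitted; this is not
# RADIOGAN's exact evaluation procedure.
import torch

@torch.no_grad()
def latent_walk(G, z_start, z_end, steps=8):
    """Generate images along a straight line between two latent codes."""
    alphas = torch.linspace(0.0, 1.0, steps)
    return torch.stack([G((1 - a) * z_start + a * z_end) for a in alphas])
```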