End-to-end autoencoding architecture for the simultaneous generation of
medical images and corresponding segmentation masks
- URL: http://arxiv.org/abs/2311.10472v1
- Date: Fri, 17 Nov 2023 11:56:53 GMT
- Title: End-to-end autoencoding architecture for the simultaneous generation of
medical images and corresponding segmentation masks
- Authors: Aghiles Kebaili, Jérôme Lapuyade-Lahorgue, Pierre Vera, and Su Ruan
- Abstract summary: We present an end-to-end architecture based on the Hamiltonian Variational Autoencoder (HVAE).
This approach yields an improved posterior distribution approximation compared to traditional Variational Autoencoders (VAEs).
Our method outperforms generative adversarial architectures under data-scarce conditions, showcasing enhancements in image quality and precise tumor mask synthesis.
- Score: 3.1133049660590615
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the increasing use of deep learning in medical image segmentation,
acquiring sufficient training data remains a challenge in the medical field. In
response, data augmentation techniques have been proposed; however, the
generation of diverse and realistic medical images and their corresponding
masks remains a difficult task, especially when working with insufficient
training sets. To address these limitations, we present an end-to-end
architecture based on the Hamiltonian Variational Autoencoder (HVAE). This
approach yields an improved posterior distribution approximation compared to
traditional Variational Autoencoders (VAE), resulting in higher image
generation quality. Our method outperforms generative adversarial architectures
under data-scarce conditions, showcasing enhancements in image quality and
precise tumor mask synthesis. We conduct experiments on two publicly available
datasets, MICCAI's Brain Tumor Segmentation Challenge (BRATS) and the Head and
Neck Tumor Segmentation Challenge (HECKTOR), demonstrating the effectiveness of
our method on different medical imaging modalities.
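The joint generation idea above, one latent code decoded into both an image and its segmentation mask, can be sketched with a toy linear VAE in numpy. This is an illustration of the shared-latent principle only, not the paper's HVAE (which additionally refines the posterior); all dimensions and weights below are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 8x8 "image" and a binary "mask" of the same size.
D, Z = 64, 8

# Randomly initialised linear encoder/decoders (illustration only, no training).
W_mu, W_lv = rng.normal(0, 0.1, (Z, D)), rng.normal(0, 0.1, (Z, D))
W_img, W_msk = rng.normal(0, 0.1, (D, Z)), rng.normal(0, 0.1, (D, Z))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    # Encode to a Gaussian posterior q(z|x) and reparameterise; then decode the
    # image and its mask from the *same* latent, which keeps the pair consistent.
    mu, logvar = W_mu @ x, W_lv @ x
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=Z)
    x_hat = sigmoid(W_img @ z)   # reconstructed image
    m_hat = sigmoid(W_msk @ z)   # reconstructed segmentation mask
    # ELBO-style loss: two reconstruction terms plus the KL regulariser.
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    rec = np.mean((x_hat - x)**2) + np.mean((m_hat - (x > 0.5))**2)
    return x_hat, m_hat, rec + kl

x = rng.random(D)
x_hat, m_hat, loss = forward(x)
```

Because both decoders read the same latent sample, a single draw of z yields a matched image/mask pair, which is the property the end-to-end architecture exploits for data augmentation.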
Related papers
- Discriminative Hamiltonian Variational Autoencoder for Accurate Tumor Segmentation in Data-Scarce Regimes [2.8498944632323755]
We propose an end-to-end hybrid architecture for medical image segmentation.
We use Hamiltonian Variational Autoencoders (HVAE) and a discriminative regularization to improve the quality of generated images.
Our architecture operates on a slice-by-slice basis to segment 3D volumes, capitalizing on the richly augmented dataset.
arXiv Detail & Related papers (2024-06-17T15:42:08Z)
- 3D MRI Synthesis with Slice-Based Latent Diffusion Models: Improving Tumor Segmentation Tasks in Data-Scarce Regimes [2.8498944632323755]
We propose a novel slice-based latent diffusion architecture to address the complexities of volumetric data generation.
This approach extends the joint distribution modeling of medical images and their associated masks, allowing simultaneous generation of both under data-scarce regimes.
Our architecture can be conditioned by tumor characteristics, including size, shape, and relative position, thereby providing a diverse range of tumor variations.
arXiv Detail & Related papers (2024-06-08T09:53:45Z)
- MGI: Multimodal Contrastive pre-training of Genomic and Medical Imaging [16.325123491357203]
We propose a multimodal pre-training framework that jointly incorporates genomics and medical images for downstream tasks.
We align medical images and genes using a self-supervised contrastive learning approach that combines Mamba as a genetic encoder with a Vision Transformer (ViT) as a medical image encoder.
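The alignment step described above can be illustrated with a symmetric InfoNCE-style contrastive loss. The embeddings below are random stand-ins, not actual Mamba or ViT features, and the temperature is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two encoders' outputs: paired image/genomic embeddings.
n, d = 4, 16
img_emb = rng.normal(size=(n, d))
gen_emb = img_emb + 0.1 * rng.normal(size=(n, d))  # matched genomic embeddings

def l2norm(e):
    return e / np.linalg.norm(e, axis=1, keepdims=True)

def contrastive_loss(a, b, tau=0.1):
    # InfoNCE: matching image/genomic pairs sit on the diagonal of the
    # cosine-similarity matrix; off-diagonal entries act as negatives.
    logits = (l2norm(a) @ l2norm(b).T) / tau
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))

loss = contrastive_loss(img_emb, gen_emb)
```

Minimising this loss pulls each image embedding toward its paired genomic embedding and pushes it away from the other samples in the batch, which is the essence of cross-modal contrastive pre-training.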
arXiv Detail & Related papers (2024-06-02T06:20:45Z)
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z)
- EMIT-Diff: Enhancing Medical Image Segmentation via Text-Guided Diffusion Model [4.057796755073023]
We develop controllable diffusion models for medical image synthesis, called EMIT-Diff.
We leverage recent diffusion probabilistic models to generate realistic and diverse synthetic medical image data.
In our approach, we ensure that the synthesized samples adhere to medically relevant constraints.
arXiv Detail & Related papers (2023-10-19T16:18:02Z)
- Genetic InfoMax: Exploring Mutual Information Maximization in High-Dimensional Imaging Genetics Studies [50.11449968854487]
Genome-wide association studies (GWAS) are used to identify relationships between genetic variations and specific traits.
Representation learning for imaging genetics is largely under-explored due to the unique challenges posed by GWAS.
We introduce a trans-modal learning framework Genetic InfoMax (GIM) to address the specific challenges of GWAS.
arXiv Detail & Related papers (2023-09-26T03:59:21Z)
- Data Augmentation-Based Unsupervised Domain Adaptation In Medical Imaging [0.709016563801433]
We propose an unsupervised method for robust domain adaptation in brain MRI segmentation by leveraging MRI-specific augmentation techniques.
The results show that our proposed approach achieves high accuracy, exhibits broad applicability, and showcases remarkable robustness against domain shift in various tasks.
arXiv Detail & Related papers (2023-08-08T17:00:11Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
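The corruption scheme described above, local masking combined with low-level perturbations, can be sketched as follows. The 2D array, patch size, and noise level are illustrative choices standing in for a 3D radiology volume, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D "scan" standing in for a 3D radiology volume (illustration only).
img = rng.random((16, 16))

def disrupt(x, patch=4, noise_std=0.1):
    # Combine a low-level perturbation (additive Gaussian noise) with local
    # masking (zeroing out one random patch). The pre-training objective is
    # then to reconstruct the original image from this corrupted input.
    y = x + noise_std * rng.normal(size=x.shape)
    r, c = rng.integers(0, x.shape[0] - patch, 2)
    y[r:r + patch, c:c + patch] = 0.0
    return y

corrupted = disrupt(img)
# The autoencoder's reconstruction target is simply `img` itself.
recon_error = np.mean((corrupted - img) ** 2)
```

Training a network to undo both kinds of damage forces it to model low-level intensity statistics as well as local anatomy, which is what makes the representation useful for downstream tasks.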
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.