Anatomically constrained CT image translation for heterogeneous blood
vessel segmentation
- URL: http://arxiv.org/abs/2210.01713v1
- Date: Tue, 4 Oct 2022 16:14:49 GMT
- Title: Anatomically constrained CT image translation for heterogeneous blood
vessel segmentation
- Authors: Giammarco La Barbera, Haithem Boussaid, Francesco Maso, Sabine
Sarnacki, Laurence Rouet, Pietro Gori, Isabelle Bloch
- Abstract summary: Anatomical structures in contrast-enhanced CT (ceCT) images can be challenging to segment due to variability in contrast medium diffusion.
To limit the radiation dose, generative models could be used to synthesize one modality, instead of acquiring it.
CycleGAN has attracted particular attention because it alleviates the need for paired data.
We present an extension of CycleGAN to generate high fidelity images, with good structural consistency.
- Score: 3.88838725116957
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anatomical structures such as blood vessels in contrast-enhanced CT (ceCT)
images can be challenging to segment due to the variability in contrast medium
diffusion. The combined use of ceCT and contrast-free (CT) images can
improve the segmentation performances, but at the cost of a double radiation
exposure. To limit the radiation dose, generative models could be used to
synthesize one modality, instead of acquiring it. The CycleGAN approach has
recently attracted particular attention because it alleviates the need for
paired data that are difficult to obtain. Despite the strong performance
demonstrated in the literature, limitations remain when dealing with 3D
volumes generated slice by slice from unpaired datasets with different fields
of view. We present an extension of CycleGAN to generate high fidelity images,
with good structural consistency, in this context. We leverage anatomical
constraints and automatic region of interest selection by adapting the
Self-Supervised Body Regressor. These constraints enforce anatomical
consistency and allow feeding anatomically-paired input images to the
algorithm. Results show qualitative and quantitative improvements, compared to
state-of-the-art methods, on the translation task between ceCT and CT images
(and vice versa).
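The abstract above pairs the usual CycleGAN cycle-consistency loss with an anatomical constraint that keeps structures in place across the translation. The following is a minimal NumPy sketch of those two loss terms, not the authors' implementation: the generators and the segmenter here are toy stand-ins (a global contrast shift and an intensity threshold) for the learned networks.

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    # L1 cycle loss: x translated A -> B and back B -> A should reconstruct x
    return float(np.mean(np.abs(g_ba(g_ab(x)) - x)))

def anatomical_consistency_loss(x, g_ab, segment):
    # Anatomical masks of the input and of its translation should agree,
    # discouraging the generator from moving or deleting structures
    return float(np.mean(np.abs(segment(x) - segment(g_ab(x)))))

# Toy stand-ins for the learned generators: a global contrast shift
# (the real G_AB / G_BA would be convolutional networks)
g_ab = lambda img: img + 0.2
g_ba = lambda img: img - 0.2

# Toy stand-in for an anatomical segmenter: intensity thresholding
segment = lambda img: (img > 0.5).astype(float)

x = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # fake CT slice in [0, 1]
l_cyc = cycle_consistency_loss(x, g_ab, g_ba)           # ~0: shift is invertible
l_anat = anatomical_consistency_loss(x, g_ab, segment)  # > 0: mask has shifted
```

In the toy example the cycle loss is near zero because the contrast shift is exactly invertible, while the anatomical loss is positive because the shift moves intensities across the segmentation threshold, illustrating the kind of structural drift the anatomical term penalizes during training.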
Related papers
- Neurovascular Segmentation in sOCT with Deep Learning and Synthetic Training Data [4.5276169699857505]
This study demonstrates a synthesis engine for neurovascular segmentation in serial-section optical coherence tomography images.
Our approach comprises two phases: label synthesis and label-to-image transformation.
We demonstrate the efficacy of the former by comparing it to several more realistic sets of training labels, and the latter by an ablation study of synthetic noise and artifact models.
arXiv Detail & Related papers (2024-07-01T16:09:07Z)
- Similarity-aware Syncretic Latent Diffusion Model for Medical Image Translation with Representation Learning [15.234393268111845]
Non-contrast CT (NCCT) imaging may reduce image contrast and anatomical visibility, potentially increasing diagnostic uncertainty.
We propose S$^2$LDM, a novel syncretic generative model based on the latent diffusion model for medical image translation.
S$^2$LDM enhances the similarity between distinct modal images via syncretic encoding and diffusing, promoting amalgamated information in the latent space and generating medical images with more detail in contrast-enhanced regions.
arXiv Detail & Related papers (2024-06-20T03:54:41Z)
- Enhanced Sharp-GAN For Histopathology Image Synthesis [63.845552349914186]
Histopathology image synthesis aims to address the data shortage issue in training deep learning approaches for accurate cancer detection.
We propose a novel approach that enhances the quality of synthetic images by using nuclei topology and contour regularization.
The proposed approach outperforms Sharp-GAN in all four image quality metrics on two datasets.
arXiv Detail & Related papers (2023-01-24T17:54:01Z)
- Self-Attention Generative Adversarial Network for Iterative Reconstruction of CT Images [0.9208007322096533]
The aim of this study is to train a single neural network to reconstruct high-quality CT images from noisy or incomplete data.
The network includes a self-attention block to model long-range dependencies in the data.
Our approach is shown to have comparable overall performance to CIRCLE GAN, while outperforming the other two approaches.
arXiv Detail & Related papers (2021-12-23T19:20:38Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper synthesizes intermediate medical slices to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- DuDoTrans: Dual-Domain Transformer Provides More Attention for Sinogram Restoration in Sparse-View CT Reconstruction [13.358197688568463]
Ionizing radiation in the imaging process induces irreversible injury.
Iterative models have been proposed to alleviate the artifacts that appear in sparse-view CT images, but their computational cost is too high.
We propose the Dual-Domain Transformer (DuDoTrans) to reconstruct CT images from both the enhanced and raw sinograms.
arXiv Detail & Related papers (2021-11-21T10:41:07Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Symmetry-Enhanced Attention Network for Acute Ischemic Infarct Segmentation with Non-Contrast CT Images [50.55978219682419]
We propose a symmetry enhanced attention network (SEAN) for acute ischemic infarct segmentation.
Our proposed network automatically transforms an input CT image into the standard space where the brain tissue is bilaterally symmetric.
The proposed SEAN outperforms some symmetry-based state-of-the-art methods in terms of both dice coefficient and infarct localization.
arXiv Detail & Related papers (2021-10-11T07:13:26Z)
- Bone Segmentation in Contrast Enhanced Whole-Body Computed Tomography [2.752817022620644]
This paper outlines a U-net architecture with novel preprocessing techniques to segment bone-bone marrow regions from low dose contrast enhanced whole-body CT scans.
We have demonstrated that appropriate preprocessing is important for differentiating between bone and contrast dye, and that excellent results can be achieved with limited data.
arXiv Detail & Related papers (2020-08-12T10:48:38Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.