Shape-consistent Generative Adversarial Networks for multi-modal Medical segmentation maps
- URL: http://arxiv.org/abs/2201.09693v1
- Date: Mon, 24 Jan 2022 13:57:31 GMT
- Title: Shape-consistent Generative Adversarial Networks for multi-modal Medical segmentation maps
- Authors: Leo Segre, Or Hirschorn, Dvir Ginzburg, Dan Raviv
- Abstract summary: We present a segmentation network using synthesised cardiac volumes for extremely limited datasets.
Our solution is based on a 3D cross-modality generative adversarial network to share information between modalities.
We show that improved segmentation can be achieved on small datasets when using spatial augmentations.
- Score: 10.781866671930857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image translation across domains for unpaired datasets has recently
gained interest and seen great improvement. In medical imaging, there are
multiple imaging modalities with very different characteristics. Our goal is to
use cross-modality adaptation between CT and MRI whole cardiac scans for
semantic segmentation. We present a segmentation network that uses synthesised
cardiac volumes for extremely limited datasets. Our solution is based on a 3D
cross-modality generative adversarial network that shares information between
modalities and generates synthesised data from unpaired datasets. Our network
utilizes semantic segmentation to improve generator shape consistency, thus
creating more realistic synthesised volumes to be used when re-training the
segmentation network. We show that improved segmentation can be achieved on
small datasets when using spatial augmentations to improve the generative
adversarial network. These augmentations improve the generator's capabilities,
thus enhancing the performance of the segmentation network. Using only 16 CT
and 16 MRI cardiovascular volumes, the suggested architecture achieves improved
results over other segmentation methods.
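The abstract describes a 3D cross-modality GAN whose generator is regularized by a segmentation-based shape-consistency term. The PyTorch sketch below is a hypothetical illustration of such a term under stated assumptions, not the authors' implementation; the generator_ct2mr, segmentor, and soft_dice_loss names are placeholders introduced here.

```python
# Illustrative shape-consistency term for a cross-modality GAN
# (an assumption of how such a loss could look, not the paper's code).
import torch

def soft_dice_loss(pred_logits, target_onehot, eps=1e-6):
    """Soft Dice loss over a one-hot label volume of shape (B, C, D, H, W)."""
    probs = torch.softmax(pred_logits, dim=1)
    dims = (0, 2, 3, 4)                       # sum over batch and spatial axes
    intersection = (probs * target_onehot).sum(dims)
    union = probs.sum(dims) + target_onehot.sum(dims)
    return 1.0 - ((2.0 * intersection + eps) / (union + eps)).mean()

def shape_consistency_loss(generator_ct2mr, segmentor, ct_volume, ct_labels_onehot):
    """Translate a CT volume to the MR domain and require the segmentor to
    recover the original CT anatomy, so that translation preserves organ shape."""
    fake_mr = generator_ct2mr(ct_volume)      # (B, 1, D, H, W) synthesised MR
    seg_logits = segmentor(fake_mr)           # (B, C, D, H, W) class logits
    return soft_dice_loss(seg_logits, ct_labels_onehot)
```

In a full training loop this term would sit alongside the usual adversarial and cycle-consistency losses, and the resulting synthesised volumes would then be mixed into the data used to re-train the segmentation network.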
Related papers
- Contextual Embedding Learning to Enhance 2D Networks for Volumetric Image Segmentation [5.995633685952995]
2D convolutional neural networks (CNNs) can hardly exploit the spatial correlation of volumetric data.
We propose a contextual embedding learning approach to facilitate 2D CNNs capturing spatial information properly.
Our approach leverages the learned embedding and slice-wise neighbor matching as a soft cue to guide the network.
arXiv Detail & Related papers (2024-04-02T08:17:39Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Leveraging SO(3)-steerable convolutions for pose-robust semantic segmentation in 3D medical data [2.207533492015563]
We present a new family of segmentation networks that use equivariant voxel convolutions based on spherical harmonics.
These networks are robust to data poses not seen during training and do not require rotation-based data augmentation.
We demonstrate improved segmentation performance in MRI brain tumor and healthy brain structure segmentation tasks.
arXiv Detail & Related papers (2023-03-01T09:27:08Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- Learning from partially labeled data for multi-organ and tumor segmentation [102.55303521877933]
We propose a Transformer based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Multi-organ Segmentation Network with Adversarial Performance Validator [10.775440368500416]
This paper introduces an adversarial performance validation network into a 2D-to-3D segmentation framework.
The proposed network converts the 2D-coarse result to 3D high-quality segmentation masks in a coarse-to-fine manner, allowing joint optimization to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate the proposed network achieves state-of-the-art accuracy on small organ segmentation and outperforms the previous best.
arXiv Detail & Related papers (2022-04-16T18:00:29Z)
- Enhancing MR Image Segmentation with Realistic Adversarial Data Augmentation [17.539828821476224]
We propose an adversarial data augmentation approach to improve the efficiency of utilizing training data.
We present a generic task-driven learning framework that jointly optimizes a data augmentation model and a segmentation network during training.
The proposed adversarial data augmentation does not rely on generative networks and can be used as a plug-in module in general segmentation networks.
arXiv Detail & Related papers (2021-08-07T11:32:37Z)
- Realistic Adversarial Data Augmentation for MR Image Segmentation [17.951034264146138]
We propose an adversarial data augmentation method for training neural networks for medical image segmentation.
Our model generates plausible and realistic signal corruptions that model the intensity inhomogeneities caused by a common type of artefact in MR imaging: the bias field (see the sketch after this list).
We show that such an approach can improve the generalization ability and robustness of models, as well as provide significant improvements in low-data scenarios.
arXiv Detail & Related papers (2020-06-23T20:43:18Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)
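For the bias-field corruption mentioned in the Realistic Adversarial Data Augmentation entry above, the sketch below shows one common way such a smooth multiplicative intensity field can be simulated, here from random low-order polynomial coefficients. This is an assumption made for illustration, not that paper's implementation, which additionally selects the corruption parameters adversarially against the segmentation model.

```python
# Illustrative bias-field corruption for MR volumes (an assumption of how such
# an augmentation could be simulated, not the referenced paper's code).
import numpy as np

def random_bias_field(shape, order=3, strength=0.3, rng=None):
    """Build a smooth multiplicative field as a random low-order polynomial of
    the normalized voxel coordinates, roughly within [1 - strength, 1 + strength]."""
    rng = np.random.default_rng() if rng is None else rng
    coords = np.meshgrid(*[np.linspace(-1.0, 1.0, s) for s in shape], indexing="ij")
    field = np.zeros(shape)
    for powers in np.ndindex(*(order + 1,) * len(shape)):
        if 0 < sum(powers) <= order:
            term = np.ones(shape)
            for axis, p in enumerate(powers):
                term = term * coords[axis] ** p
            field += rng.uniform(-1.0, 1.0) * term
    field = field / (np.abs(field).max() + 1e-8)   # normalize to [-1, 1]
    return 1.0 + strength * field

def apply_bias_field(volume, order=3, strength=0.3, rng=None):
    """Multiply an MR volume by a random smooth bias field."""
    return volume * random_bias_field(volume.shape, order, strength, rng)

# Example: corrupt a dummy 3D volume.
vol = np.random.rand(32, 64, 64).astype(np.float32)
augmented = apply_bias_field(vol, order=3, strength=0.3)
```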