Contrast Adaptive Tissue Classification by Alternating Segmentation and
Synthesis
- URL: http://arxiv.org/abs/2103.02767v1
- Date: Thu, 4 Mar 2021 00:25:24 GMT
- Title: Contrast Adaptive Tissue Classification by Alternating Segmentation and
Synthesis
- Authors: Dzung L. Pham, Yi-Yu Chou, Blake E. Dewey, Daniel S. Reich, John A.
Butman, and Snehashis Roy
- Abstract summary: We describe an approach using alternating segmentation and synthesis steps that adapts the contrast properties of the training data to the input image.
A notable advantage of this approach is that only a single example of the acquisition protocol is required to adapt to its contrast properties.
- Score: 0.21111026813272174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning approaches to the segmentation of magnetic resonance images
have shown significant promise in automating the quantitative analysis of brain
images. However, a continuing challenge has been their sensitivity to the
variability of acquisition protocols. Attempting to segment images that have
different contrast properties from those within the training data generally
leads to significantly reduced performance. Furthermore, heterogeneous data
sets cannot be easily evaluated because the quantitative variation due to
acquisition differences often dwarfs the variation due to the biological
differences that one seeks to measure. In this work, we describe an approach
using alternating segmentation and synthesis steps that adapts the contrast
properties of the training data to the input image. This allows input images
that do not resemble the training data to be more consistently segmented. A
notable advantage of this approach is that only a single example of the
acquisition protocol is required to adapt to its contrast properties. We
demonstrate the efficacy of our approach using brain images from a set of
human subjects scanned with two different T1-weighted volumetric protocols.
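To make the alternating procedure concrete, here is a minimal PyTorch sketch of how such a loop could be organized. It assumes a segmentation network pre-trained on the original contrast, a label-to-image synthesis network, and a set of labeled atlas volumes; every name (`seg_net`, `synth_net`, `train_model`, `adapt_and_segment`) and all training details are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of an alternating segmentation / synthesis loop.
# All names and training details here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def to_channels(labels: torch.Tensor, n_classes: int) -> torch.Tensor:
    """One-hot encode an integer label volume (B, D, H, W) to (B, C, D, H, W)."""
    return F.one_hot(labels.long(), n_classes).movedim(-1, 1).float()


def train_model(model, inputs, targets, loss_fn, epochs=20, lr=1e-3):
    """Generic training loop reused for the synthesis and segmentation steps."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in zip(inputs, targets):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model


def adapt_and_segment(input_image, atlas_labels, seg_net, synth_net,
                      n_classes, n_iters=3):
    """Alternate between segmenting a new-contrast image and synthesizing the
    atlas training data in that contrast, refining the segmenter each round."""
    with torch.no_grad():
        # Initial segmentation from the network trained on the original contrast.
        logits = seg_net(input_image)

    for _ in range(n_iters):
        # Synthesis step: fit a label-to-image model to the single
        # new-contrast example to capture its contrast properties.
        soft_labels = F.softmax(logits, dim=1)
        synth_net = train_model(synth_net, [soft_labels], [input_image],
                                loss_fn=nn.L1Loss())

        with torch.no_grad():
            # Re-render the atlas label maps in the input image's contrast.
            synthetic_images = [synth_net(to_channels(lab, n_classes))
                                for lab in atlas_labels]

        # Segmentation step: refine the segmenter on the contrast-adapted data.
        seg_net = train_model(seg_net, synthetic_images,
                              [lab.long() for lab in atlas_labels],
                              loss_fn=nn.CrossEntropyLoss())

        with torch.no_grad():
            # Re-segment the input image with the refined network.
            logits = seg_net(input_image)

    return logits.argmax(dim=1)
```

Note that the synthesis step is fit only to the single new-contrast input image, which is where the claim that one example of the acquisition protocol suffices for adaptation comes from.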
Related papers
- Self-training with dual uncertainty for semi-supervised medical image
segmentation [9.538419502275975]
Traditional self-training methods can partially solve the problem of insufficient labeled data by generating pseudo labels for iterative training.
Building on the self-training framework, we add sample-level and pixel-level uncertainty to stabilize the training process.
Our proposed method achieves better segmentation performance on both datasets under the same settings.
arXiv Detail & Related papers (2023-04-10T07:57:24Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- GraVIS: Grouping Augmented Views from Independent Sources for Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- Contrastive Image Synthesis and Self-supervised Feature Adaptation for Cross-Modality Biomedical Image Segmentation [8.772764547425291]
CISFA builds on image domain translation and unsupervised feature adaptation for cross-modality biomedical image segmentation.
We use a one-sided generative model and add a weighted patch-wise contrastive loss between sampled patches of the input image and the corresponding synthetic image.
We evaluate our methods on segmentation tasks containing CT and MRI images for abdominal cavities and whole hearts.
arXiv Detail & Related papers (2022-07-27T01:49:26Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and use the enhanced results to address the low-contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Positional Contrastive Learning for Volumetric Medical Image Segmentation [13.086140606803408]
We propose a novel positional contrastive learning framework to generate contrastive data pairs.
The proposed PCL method can substantially improve the segmentation performance compared to existing methods in both semi-supervised setting and transfer learning setting.
arXiv Detail & Related papers (2021-06-16T22:15:28Z)
- CT Image Synthesis Using Weakly Supervised Segmentation and Geometric Inter-Label Relations For COVID Image Analysis [4.898744396854313]
We propose improvements over previous GAN-based medical image synthesis methods by learning the relationship between different anatomical labels.
We use the synthetic images from our method to train networks for segmenting COVID-19 infected areas from lung CT images.
arXiv Detail & Related papers (2021-06-15T07:21:24Z)
- Bone Segmentation in Contrast Enhanced Whole-Body Computed Tomography [2.752817022620644]
This paper outlines a U-net architecture with novel preprocessing techniques to segment bone-bone marrow regions from low dose contrast enhanced whole-body CT scans.
We have demonstrated that appropriate preprocessing is important for differentiating between bone and contrast dye, and that excellent results can be achieved with limited data.
arXiv Detail & Related papers (2020-08-12T10:48:38Z)
- Adversarial Semantic Data Augmentation for Human Pose Estimation [96.75411357541438]
We propose Semantic Data Augmentation (SDA), a method that augments images by pasting segmented body parts with various semantic granularity.
We also propose Adversarial Semantic Data Augmentation (ASDA), which exploits a generative network to dynamically predict tailored pasting configurations.
State-of-the-art results are achieved on challenging benchmarks.
arXiv Detail & Related papers (2020-08-03T07:56:04Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.