Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain
Adaptation
- URL: http://arxiv.org/abs/2103.03781v1
- Date: Fri, 5 Mar 2021 16:22:31 GMT
- Title: Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain
Adaptation
- Authors: Devavrat Tomar, Manana Lortkipanidze, Guillaume Vray, Behzad
Bozorgtabar, Jean-Philippe Thiran
- Abstract summary: The proposed solution is based on cross-modality synthesis of medical images to reduce the costly annotation burden on radiologists.
We present a novel approach for image-to-image translation in medical images, capable of supervised or unsupervised (unpaired image data) setups.
- Score: 9.659642285903418
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite the successes of deep neural networks on many challenging vision
tasks, they often fail to generalize to new test domains that are not
distributed identically to the training data. Domain adaptation becomes even
more challenging for cross-modality medical data with a notable domain shift,
given that specific annotated imaging modalities may be neither accessible nor
complete. Our proposed solution is based on the cross-modality synthesis of
medical images to reduce the costly annotation burden on radiologists and
bridge the domain gap in radiological images. We present a novel approach for
image-to-image translation in medical images, capable of supervised or
unsupervised (unpaired image data) setups. Built upon adversarial training, we
propose a learnable self-attentive spatial normalization of the deep
convolutional generator network's intermediate activations. Unlike previous
attention-based image-to-image translation approaches, which are either
domain-specific or require distortion of the source domain's structures, we
unearth the importance of the auxiliary semantic information to handle the
geometric changes and preserve anatomical structures during image translation.
We achieve superior results for cross-modality segmentation between unpaired
MRI and CT data for multi-modality whole heart and multi-modal brain tumor MRI
(T1/T2) datasets compared to the state-of-the-art methods. We also observe
encouraging results in cross-modality conversion for paired MRI and CT images
on a brain dataset. Furthermore, a detailed analysis of the cross-modality
image translation and thorough ablation studies confirm our proposed method's
efficacy.
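To make the core idea more concrete, the sketch below shows a minimal, hedged PyTorch-style block for spatially adaptive normalization whose modulation parameters are predicted from an auxiliary semantic map and re-weighted by a simple spatial self-attention mask. This is an illustration under assumptions, not the authors' released implementation: the module name, channel sizes, and the particular attention formulation are all hypothetical.

```python
# Illustrative sketch of a self-attentive spatially adaptive normalization
# block (SPADE-style), NOT the paper's official implementation. Module names,
# channel sizes, and the attention formulation are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentiveSPADE(nn.Module):
    def __init__(self, feat_channels: int, semantic_channels: int, hidden: int = 128):
        super().__init__()
        # Parameter-free normalization of the generator's intermediate activations.
        self.norm = nn.InstanceNorm2d(feat_channels, affine=False)
        # Shared encoder for the auxiliary semantic map (e.g. a segmentation mask).
        self.shared = nn.Sequential(
            nn.Conv2d(semantic_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Spatially varying scale and shift predicted from the semantic map.
        self.gamma = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)
        # Simple self-attention mask that re-weights the modulation per location.
        self.attn = nn.Conv2d(hidden, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, semantic_map: torch.Tensor) -> torch.Tensor:
        # Resize the semantic map to the activation's spatial resolution.
        semantic_map = F.interpolate(semantic_map, size=x.shape[2:], mode="nearest")
        h = self.shared(semantic_map)
        attn = torch.sigmoid(self.attn(h))           # (B, 1, H, W) spatial weights
        gamma = self.gamma(h) * attn                 # attended scale
        beta = self.beta(h) * attn                   # attended shift
        return self.norm(x) * (1.0 + gamma) + beta   # modulated activations


# Usage sketch: modulate a 256-channel generator activation with a 4-class mask.
if __name__ == "__main__":
    block = SelfAttentiveSPADE(feat_channels=256, semantic_channels=4)
    feats = torch.randn(2, 256, 32, 32)
    mask = torch.randn(2, 4, 128, 128)
    out = block(feats, mask)
    print(out.shape)  # torch.Size([2, 256, 32, 32])
```

The intent of such a block is that the normalization is steered by anatomy-aware semantic information rather than by the image intensities alone, which is how the abstract motivates preserving anatomical structures during translation.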
Related papers
- ContourDiff: Unpaired Image Translation with Contour-Guided Diffusion Models [14.487188068402178]
Accurately translating medical images across different modalities has numerous downstream clinical and machine learning applications.
We propose ContourDiff, a novel framework that leverages domain-invariant anatomical contour representations of images.
We evaluate our method by training a segmentation model on images translated from CT to MRI with their original CT masks and testing its performance on real MRIs.
arXiv Detail & Related papers (2024-03-16T03:33:52Z)
- A2V: A Semi-Supervised Domain Adaptation Framework for Brain Vessel Segmentation via Two-Phase Training Angiography-to-Venography Translation [4.452428104996953]
We present a semi-supervised domain adaptation framework for brain vessel segmentation from different image modalities.
By relying on annotated angiographies and a limited number of annotated venographies, our framework accomplishes image-to-image translation and semantic segmentation.
arXiv Detail & Related papers (2023-09-12T09:12:37Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Unsupervised Image Registration Towards Enhancing Performance and Explainability in Cardiac And Brain Image Analysis [3.5718941645696485]
Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging.
We present an unsupervised deep learning registration methodology which can accurately model affine and non-rigid transformations.
Our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations.
arXiv Detail & Related papers (2022-03-07T12:54:33Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Segmentation-Renormalized Deep Feature Modulation for Unpaired Image Harmonization [0.43012765978447565]
Cycle-consistent Generative Adversarial Networks have been used to harmonize image sets between a source and target domain.
These methods are prone to instability, contrast inversion, intractable manipulation of pathology, and steganographic mappings which limit their reliable adoption in real-world medical imaging.
We propose a segmentation-renormalized image translation framework to reduce inter-scanner heterogeneity while preserving anatomical layout.
arXiv Detail & Related papers (2021-02-11T23:53:51Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Patch-based field-of-view matching in multi-modal images for electroporation-based ablations [0.6285581681015912]
Multi-modal imaging sensors are currently involved at different steps of an interventional therapeutic work-flow.
Merging this information relies on a correct spatial alignment of the observed anatomy between the acquired images.
We show that a regional registration approach using voxel patches provides a good structural compromise between the voxel-wise and "global shifts" approaches.
arXiv Detail & Related papers (2020-11-09T11:27:45Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)