Multi-modal unsupervised brain image registration using edge maps
- URL: http://arxiv.org/abs/2202.04647v1
- Date: Wed, 9 Feb 2022 15:50:14 GMT
- Title: Multi-modal unsupervised brain image registration using edge maps
- Authors: Vasiliki Sideri-Lampretsa, Georgios Kaissis, Daniel Rueckert
- Abstract summary: We propose a simple yet effective unsupervised deep learning-based multi-modal image registration approach.
The intuition behind this is that image locations with a strong gradient are assumed to denote a transition of tissues.
We evaluate our approach in the context of registering multi-modal (T1w to T2w) magnetic resonance (MR) brain images of different subjects using three different loss functions.
- Score: 7.49320945341034
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffeomorphic deformable multi-modal image registration is a challenging task
which aims to bring images acquired by different modalities to the same
coordinate space and at the same time to preserve the topology and the
invertibility of the transformation. Recent research has focused on leveraging
deep learning approaches for this task as these have been shown to achieve
competitive registration accuracy while being computationally more efficient
than traditional iterative registration methods. In this work, we propose a
simple yet effective unsupervised deep learning-based multi-modal image
registration approach that benefits from auxiliary information coming from the
gradient magnitude of the image, i.e. the image edges, during the training. The
intuition behind this is that image locations with a strong gradient are
assumed to denote a transition of tissues, which are locations of high
information value able to act as a geometry constraint. The task is similar to
using segmentation maps to drive the training, but the edge maps are easier and
faster to acquire and do not require annotations. We evaluate our approach in
the context of registering multi-modal (T1w to T2w) magnetic resonance (MR)
brain images of different subjects using three different loss functions that
are said to assist multi-modal registration, showing that in all cases the
auxiliary information leads to better results without compromising the runtime.
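The auxiliary signal described above is simply the gradient magnitude of the image. A minimal sketch of how such an edge map could be computed (not the authors' code; the Sobel operator via SciPy is one common choice, and the normalisation is an assumption for illustration):

```python
import numpy as np
from scipy import ndimage

def edge_map(image: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge map, normalised to [0, 1].

    Strong responses mark tissue transitions, which can serve as a
    geometry constraint during registration training.
    """
    img = image.astype(np.float64)
    gx = ndimage.sobel(img, axis=0)   # derivative along rows
    gy = ndimage.sobel(img, axis=1)   # derivative along columns
    mag = np.hypot(gx, gy)            # gradient magnitude
    return mag / mag.max() if mag.max() > 0 else mag

# Toy example: a step edge between two "tissues".
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = edge_map(img)   # strong response only at the tissue boundary
```

During training, edge maps of the fixed and warped moving images could then be compared with an extra similarity term, analogously to how segmentation maps are used, but with no annotation cost.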
Related papers
- Large Language Models for Multimodal Deformable Image Registration [50.91473745610945]
We propose a novel coarse-to-fine MDIR framework, LLM-Morph, for aligning the deep features from different modal medical images.
Specifically, we first utilize a CNN encoder to extract deep visual features from cross-modal image pairs, then use a first adapter to adjust these tokens and LoRA to fine-tune the weights of the pre-trained LLMs.
Third, to align the tokens, we utilize four other adapters to transform the LLM-encoded tokens into multi-scale visual features, generating multi-scale deformation fields and facilitating the coarse-to-fine MDIR task.
arXiv Detail & Related papers (2024-08-20T09:58:30Z)
- MAD: Modality Agnostic Distance Measure for Image Registration [14.558286801723293]
Multi-modal image registration is a crucial pre-processing step in many medical applications.
We present Modality Agnostic Distance (MAD), a measure that uses random convolutions to learn the inherent geometry of the images.
We demonstrate that not only can MAD affinely register multi-modal images successfully, but it also has a larger capture range than traditional measures.
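One way to read "random convolutions as a distance": filter both images with the same bank of random kernels and compare the responses. This is only a toy NumPy sketch under that assumption — the actual MAD measure is learned and more elaborate, and all names here are illustrative:

```python
import numpy as np
from scipy import ndimage

def mad_distance(a: np.ndarray, b: np.ndarray,
                 n_kernels: int = 16, ksize: int = 3, seed: int = 0) -> float:
    """Toy modality-agnostic distance: mean L1 gap between the two
    images after filtering each with the same random convolution bank."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_kernels):
        k = rng.standard_normal((ksize, ksize))      # shared random kernel
        fa = ndimage.convolve(a.astype(float), k, mode="nearest")
        fb = ndimage.convolve(b.astype(float), k, mode="nearest")
        total += np.abs(fa - fb).mean()
    return total / n_kernels

# Identical images have zero distance; structurally different ones do not.
a = np.zeros((6, 6)); a[:, 3:] = 1.0   # vertical boundary
b = np.zeros((6, 6)); b[3:, :] = 1.0   # horizontal boundary
```

The intuition is that random filters respond to geometry (edges, corners) rather than to modality-specific intensity values.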
arXiv Detail & Related papers (2023-09-06T09:59:58Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstruct the informative patches according to the gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- Unsupervised Multi-Modal Medical Image Registration via Discriminator-Free Image-to-Image Translation [4.43142018105102]
We propose a novel translation-based unsupervised deformable image registration approach to convert the multi-modal registration problem to a mono-modal one.
Our approach incorporates a discriminator-free translation network to facilitate the training of the registration network and a patchwise contrastive loss to encourage the translation network to preserve object shapes.
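A patchwise contrastive loss of this kind is typically an InfoNCE objective: each translated patch should match the source patch at the same location and differ from patches elsewhere. A generic sketch (not this paper's exact formulation; feature shapes and the temperature are assumptions):

```python
import numpy as np

def patch_nce_loss(feat_src: np.ndarray, feat_tgt: np.ndarray,
                   tau: float = 0.07) -> float:
    """InfoNCE over corresponding patch features.

    feat_src, feat_tgt: (num_patches, dim) with L2-normalised rows.
    The positive for row i of feat_tgt is row i of feat_src; all other
    rows act as negatives.
    """
    logits = feat_tgt @ feat_src.T / tau                       # (N, N) similarities
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))                  # CE with diagonal targets

# Perfectly matched features vs. every patch matched to the wrong one.
ident = np.eye(4)
rolled = np.roll(ident, 1, axis=0)
```

Minimising this encourages the translation network to keep each patch's content, i.e. the object shapes, in place.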
arXiv Detail & Related papers (2022-04-28T17:18:21Z) - StEP: Style-based Encoder Pre-training for Multi-modal Image Synthesis [68.3787368024951]
We propose a novel approach for multi-modal Image-to-image (I2I) translation.
We learn a latent embedding, jointly with the generator, that models the variability of the output domain.
Specifically, we pre-train a generic style encoder using a novel proxy task to learn an embedding of images, from arbitrary domains, into a low-dimensional style latent space.
arXiv Detail & Related papers (2021-04-14T19:58:24Z) - Recurrent Multi-view Alignment Network for Unsupervised Surface
Registration [79.72086524370819]
Learning non-rigid registration in an end-to-end manner is challenging due to the inherent high degrees of freedom and the lack of labeled training data.
We propose to represent the non-rigid transformation with a point-wise combination of several rigid transformations.
We also introduce a differentiable loss function that measures the 3D shape similarity on the projected multi-view 2D depth images.
arXiv Detail & Related papers (2020-11-24T14:22:42Z) - Unsupervised Multimodal Image Registration with Adaptative Gradient
Guidance [23.461130560414805]
Unsupervised learning-based methods have demonstrated promising accuracy and efficiency in deformable image registration.
The estimated deformation fields of the existing methods fully rely on the to-be-registered image pair.
We propose a novel multimodal registration framework, which leverages the deformation fields estimated from both the image pair and their gradient maps.
arXiv Detail & Related papers (2020-11-12T05:47:20Z) - Image-to-image Mapping with Many Domains by Sparse Attribute Transfer [71.28847881318013]
Unsupervised image-to-image translation consists of learning a pair of mappings between two domains without known pairwise correspondences between points.
Current convention is to approach this task with cycle-consistent GANs.
We propose an alternate approach that directly restricts the generator to performing a simple sparse transformation in a latent layer.
arXiv Detail & Related papers (2020-06-23T19:52:23Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z) - Unsupervised Multi-Modal Image Registration via Geometry Preserving
Image-to-Image Translation [43.060971647266236]
We train an image-to-image translation network on the two input modalities.
This learned translation allows training the registration network using simple and reliable mono-modality metrics.
Compared to state-of-the-art multi-modal methods our presented method is unsupervised, requiring no pairs of aligned modalities for training, and can be adapted to any pair of modalities.
arXiv Detail & Related papers (2020-03-18T07:21:09Z) - Deform-GAN:An Unsupervised Learning Model for Deformable Registration [4.030402376540977]
In this paper, a non-rigid registration method is proposed for 3D medical images leveraging unsupervised learning.
The proposed gradient loss is robust across sequences and modalities for large deformation.
Neither ground-truth nor manual labeling is required during training.
arXiv Detail & Related papers (2020-02-26T12:20:46Z)
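A gradient loss that is robust across modalities is often built from normalised gradient fields: unit gradient directions agree at aligned tissue boundaries regardless of each modality's intensity scale. A sketch under that assumption (not Deform-GAN's exact loss; `eps` and the 2D form are illustrative):

```python
import numpy as np

def gradient_loss(fixed: np.ndarray, warped: np.ndarray,
                  eps: float = 1e-5) -> float:
    """NGF-style similarity: penalise misalignment of normalised image
    gradients; near zero where the two images' edges point the same way."""
    def unit_gradients(img):
        gy, gx = np.gradient(img.astype(float))        # d/daxis0, d/daxis1
        norm = np.sqrt(gx**2 + gy**2 + eps**2)         # eps avoids 0-division
        return gx / norm, gy / norm
    fx, fy = unit_gradients(fixed)
    wx, wy = unit_gradients(warped)
    # 1 - squared inner product of unit gradients: 0 when edges align,
    # insensitive to gradient sign flips across modalities.
    return float(np.mean(1.0 - (fx * wx + fy * wy) ** 2))

# A ramp aligned with itself vs. the same ramp rotated by transposition.
ramp = np.outer(np.arange(8.0), np.ones(8))
```

Because only gradient directions enter the loss, neither ground truth nor manual labels are needed during training, matching the unsupervised setting above.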
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.